The 8-Week GTM Infrastructure Build: What Actually Happens

By Mathew Joseph

Every engagement I run follows the same five-phase framework: Diagnose, Architect, Build, Activate, Transfer. Eight weeks from kickoff to handoff.

People ask what that actually looks like in practice. Fair question. The framework sounds clean on a slide, but the reality of building a revenue system for a live business is messier, more detailed, and more interesting than any summary captures.

Here is the week-by-week breakdown.

Weeks 1-2: Diagnose

This is where most of the discovery happens. I need to understand three things: what exists, what is broken, and what is possible.

What the client sees: A structured intake process. I send a pre-work document covering current tools, team structure, data volumes, and key metrics. Then a series of working sessions, typically 3-4 hours total across the two weeks, where I interview stakeholders from sales, marketing, ops, and leadership.

What is happening under the hood: I am mapping the entire GTM data architecture. Every tool, every integration, every data flow. I instrument key workflows to measure where time goes. I pull CRM data to analyze conversion rates by stage, lead source performance, and pipeline velocity. I audit the enrichment pipeline, the routing logic, the automation sequences.
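The funnel analysis from that CRM pull can be sketched in a few lines. This is a minimal illustration of the idea, not my actual tooling; the stage names and record fields are assumptions, not from any particular CRM.

```python
from collections import Counter

# Illustrative funnel stages, earliest to latest.
STAGES = ["lead", "mql", "sql", "opportunity", "closed_won"]

def stage_conversion(records):
    """Compute stage-to-stage conversion rates from CRM records.

    Each record carries the furthest stage it reached; a record at
    stage i is counted as having passed through all earlier stages.
    """
    reached = Counter()
    for r in records:
        idx = STAGES.index(r["stage"])
        for s in STAGES[: idx + 1]:
            reached[s] += 1
    rates = {}
    for prev, nxt in zip(STAGES, STAGES[1:]):
        rates[f"{prev}->{nxt}"] = reached[nxt] / reached[prev] if reached[prev] else 0.0
    return rates
```

Run against a stage-by-stage export, this is enough to show where the funnel leaks before anyone opens a dashboard.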

The output is a diagnostic report. Not a slide deck with vague recommendations. A specific, quantified assessment: here is where your system is losing revenue, here is how much, and here is the priority order for fixing it.

Common findings at this stage: Duplicate enrichment spend across 2-3 vendors. Manual lead routing that adds 4-6 hours of latency. Attribution gaps covering 40-60% of pipeline. CRM fields that haven’t been updated in 18 months but are still required. Automation sequences with 2% reply rates that nobody has paused.

Week 3: Architect

This is the design week. Based on the diagnostic, I design the target-state architecture.

What the client sees: A technical architecture document. Data model diagrams. Workflow specifications. Integration maps. A clear before-and-after showing what changes and why. A working session where I walk through the design and collect feedback.

What is happening under the hood: I am making hundreds of small decisions that determine whether the system will scale or break. Which fields become the canonical source of truth. How enrichment data gets merged when two sources disagree. What the scoring model weights and why. How routing logic handles edge cases. What happens when a lead re-enters the system after going dark for 6 months.

These decisions are boring individually but critical collectively. A revenue system is only as good as its data model, and the data model is only as good as the edge cases it handles.
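Take the merge question above: when two enrichment sources disagree on a field, something has to decide which value wins. A common pattern, and the one sketched here, is source precedence. The vendor names and rankings below are hypothetical placeholders.

```python
# Higher rank wins. These sources and ranks are illustrative only.
PRECEDENCE = {"crm_manual": 3, "vendor_a": 2, "vendor_b": 1}

def merge_enrichment(field_values):
    """Resolve one field from multiple sources.

    field_values: list of (source, value) pairs.
    Returns the value from the highest-precedence source that
    actually supplied one, or None if no source did.
    """
    ranked = [
        (PRECEDENCE.get(src, 0), val)
        for src, val in field_values
        if val not in (None, "")
    ]
    if not ranked:
        return None
    return max(ranked, key=lambda t: t[0])[1]
```

The design choice worth noting: a manually verified CRM value outranks any vendor, and an empty value from a high-precedence source never clobbers a real value from a lower one.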

Key deliverables: Unified data model specification. Automation workflow diagrams. Integration architecture map. Scoring and routing logic documentation. Migration plan for existing data.

Weeks 4-6: Build

Three weeks of heads-down construction. This is where the architecture becomes a working system.

What the client sees: Weekly demos showing progress. First demo covers the data model and enrichment pipeline. Second demo covers automation workflows and routing. Third demo covers reporting, attribution, and the full end-to-end flow. Each demo includes a working environment the client can log into and explore.

What is happening under the hood: Week 4 is foundation work. Standing up the unified data model, building the enrichment pipeline, configuring the CRM architecture. This is the unglamorous but essential layer that everything else depends on.

Week 5 is automation. Lead scoring, routing logic, nurture sequences, deal progression workflows, lifecycle triggers. Each one gets built, tested with sample data, and validated against the design spec.
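The shape of the scoring-plus-routing piece is simple even when the production version is not. A minimal sketch, with weights, thresholds, and destination names that are assumptions for illustration rather than recommended values:

```python
# Illustrative attribute weights; a real model is tuned to the client's data.
WEIGHTS = {"title_match": 30, "company_size_fit": 25, "intent_signal": 25, "engagement": 20}

def score_lead(lead):
    """Sum the weights of every attribute the lead satisfies."""
    return sum(w for attr, w in WEIGHTS.items() if lead.get(attr))

def route(lead):
    """Map a score to a destination queue. Thresholds are illustrative."""
    s = score_lead(lead)
    if s >= 70:
        return "ae_direct"   # high fit: straight to an account executive
    if s >= 40:
        return "sdr_queue"   # medium fit: SDR qualification first
    return "nurture"         # low fit: automated nurture sequence
```

Testing each rule against sample data, as described above, means feeding leads like these through and checking they land in the queue the design spec says they should.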

Week 6 is the integration and reporting layer. Connecting all the pieces, building the dashboards, setting up attribution tracking, and running end-to-end tests. I also build monitoring: alerts for when enrichment fails, when routing stalls, when data quality drops below threshold.
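The data-quality monitor is conceptually a threshold check over recent batches. A sketch of the idea, with the field list and 90% threshold as illustrative assumptions:

```python
def check_quality(batch, threshold=0.90, required=("email", "company", "company_size")):
    """Return alert strings for any required field whose completeness
    across the batch falls below the threshold."""
    alerts = []
    for field in required:
        filled = sum(1 for r in batch if r.get(field))
        rate = filled / len(batch) if batch else 0.0
        if rate < threshold:
            alerts.append(f"{field} completeness {rate:.0%} below {threshold:.0%}")
    return alerts
```

In practice this runs on a schedule and pushes its output to whatever alerting channel the team already uses, so a silent enrichment failure surfaces in hours, not weeks.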

What goes wrong at this stage: Almost always, something in the existing data is worse than the audit revealed. A field that was supposed to contain company size actually has free-text entries like “medium” and “50ish.” A critical integration turns out to have API rate limits that require a queuing system. An automation that worked in testing hits an edge case with 3% of the real data. These are normal. The architecture phase accounts for them. The build phase resolves them.
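The company-size mess above is typical, and the fix is usually a best-effort normalizer. A minimal sketch; the text-to-headcount mapping is an illustrative assumption, not a standard:

```python
import re

# Rough headcount estimates for common free-text entries (illustrative).
TEXT_SIZES = {"small": 25, "medium": 150, "large": 1000}

def parse_company_size(raw):
    """Best-effort parse of a free-text company-size field.

    Handles labels like "medium" and numeric noise like "50ish" or "~200".
    Returns an estimated headcount, or None when nothing is recoverable.
    """
    if raw is None:
        return None
    text = raw.strip().lower()
    if text in TEXT_SIZES:
        return TEXT_SIZES[text]
    m = re.search(r"\d+", text)
    return int(m.group()) if m else None
```

Unparseable values return None rather than a guess, so they can be flagged for manual cleanup instead of quietly polluting the scoring model.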

Week 7: Activate

The system goes live. But not all at once.

What the client sees: A phased rollout plan. Day 1: enrichment pipeline goes live on new leads only. Days 2-3: routing logic activates for the SDR team. Days 4-5: automation sequences begin running. End of week: full system operational with real data flowing through every component.

What is happening under the hood: I am monitoring everything. Enrichment hit rates, routing accuracy, automation trigger rates, data quality scores, CRM sync latency. Each component gets activated individually so that if something behaves unexpectedly, I can isolate and fix it without disrupting the rest of the system.

I also run parallel processing during this week. New leads go through both the old system and the new system. This lets the team compare outputs and build confidence that the new system is working correctly before the old one gets turned off.
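The parallel run reduces to a disagreement report: every new lead goes through both routing implementations, and only the mismatches need human review. A sketch of that comparison, assuming both systems expose their routing decision as a function:

```python
def parallel_compare(leads, old_route, new_route):
    """Route each lead through both systems; return the disagreements.

    old_route / new_route: callables taking a lead and returning a
    routing destination. Agreements need no review, so only mismatches
    are collected, with both outputs attached for inspection.
    """
    mismatches = []
    for lead in leads:
        old, new = old_route(lead), new_route(lead)
        if old != new:
            mismatches.append({"lead": lead, "old": old, "new": new})
    return mismatches
```

A shrinking mismatch list over the week is what builds the team's confidence to turn the old system off.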

The critical metric at this stage: Time to first value. How quickly does a new lead go from entering the system to being fully enriched, scored, and routed to the right rep? In a well-built system, this should be under 60 seconds. In the old system, it was typically 4-24 hours.
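Measuring time to first value is just timestamp arithmetic over the lead's event log. A minimal sketch; the event names are hypothetical, not from any specific tool:

```python
from datetime import datetime

def time_to_first_value(events):
    """Seconds from lead creation to being fully routed.

    events: dict mapping event name to an ISO-8601 timestamp, e.g.
    {"created": ..., "routed": ...}. Returns None if the lead has
    not yet been routed.
    """
    try:
        created = datetime.fromisoformat(events["created"])
        routed = datetime.fromisoformat(events["routed"])
    except KeyError:
        return None
    return (routed - created).total_seconds()
```

Tracked across every new lead, the distribution of this number is the clearest single signal that activation worked.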

Week 8: Transfer

This is what separates infrastructure from dependency. The system needs to run without me.

What the client sees: A comprehensive handoff. Documentation covering every component, every workflow, every decision that was made and why. Training sessions for the ops team, the sales team, and leadership. A runbook for common scenarios: how to add a new enrichment field, how to modify the scoring model, how to troubleshoot a stalled automation.

What is happening under the hood: I am building the team’s operational muscle memory. Not through documentation alone, but through guided practice. The ops lead modifies a workflow while I watch. The sales manager adjusts a routing rule. The marketing ops person adds a new campaign to the attribution model. Each one does the work themselves, with me there to answer questions.

I also set up a monitoring dashboard that the ops team can use going forward. Pipeline velocity, enrichment accuracy, routing latency, automation health. If something degrades, they can see it before it becomes a problem.

What gets handed over: The system itself. All credentials and access. Complete documentation. A recorded walkthrough of every major component. A 30-day support window for questions that come up after I leave.

Why Eight Weeks

People sometimes ask if it can be done faster. Technically, yes. Practically, no.

The build itself could compress into 4-5 weeks. But Diagnose needs time because stakeholder schedules are real. Activate needs time because rolling out to a live revenue team requires careful sequencing. Transfer needs time because learning to operate a new system is not a one-day exercise.

Eight weeks is the minimum viable timeline for a system that works and sticks. I have seen companies try to do this in 3-4 weeks. The build gets done, but the activation is rushed, the transfer is incomplete, and the team ends up dependent on the builder. That defeats the purpose.

The Outcome

By week 8, the client has a fully operational revenue system. Enrichment pipelines running. Leads scoring and routing automatically. Attribution tracking every touchpoint. Automation handling the repetitive work. Reporting showing what is actually happening in the funnel.

And they own all of it. No ongoing dependency. No monthly retainer. No vendor lock-in.

That is the point. Build the system. Prove it works. Hand over the keys. Move on.