How I’d Build a Sales Forecasting System If I Were Starting Today (With 3 Years of Hard Lessons)

Three years ago, I thought sales forecasting was just about picking a number and hoping we hit it. It lived in a spreadsheet, got presented in leadership meetings, and was mostly based on rep intuition. No one questioned it – until we missed the target.

I’ve since built forecasts across high-velocity sales teams and slow-moving enterprise orgs. Some were dead-on. Others were wildly off. The difference? Process, not just precision. If I were starting again, here’s exactly how I’d build a sales forecasting system from scratch–and what I’d never do again.

Why Most Forecasts Are Just Dressed-Up Guesswork

Let’s be honest: most early forecasts are a cocktail of gut feel, spreadsheet gymnastics, and wishful thinking. You ask reps what’s closing, they give you a number, and you roll it up.

That works until:

  • A rep ghosts their biggest deal.

  • A close date gets pushed twice with no explanation.

  • You hit 90% of your pipeline coverage but close 60% of your number.

I learned this the hard way. We once forecasted a strong Q2 with full confidence, only to miss by 43%. The data wasn’t wrong–the pipeline just wasn’t real. No one had inspected the deals. We were tracking forecasts, but not enforcing forecast discipline.

The 3 Forecast Layers That Now Power Every Model I Build

Today, every forecast I build is pressure-tested from three angles:

1. Historical Baseline Forecasting

Start with reality:

  • Look at prior quarters’ pipeline conversion by stage.

  • Model how much pipeline you actually converted, and how fast.

  • Build a momentum-based forecast based on trailing indicators.

Tip: Break this down by sales stage (e.g., Discovery, Proposal, Negotiation) to identify where deals typically stall. If you know 40% of deals die in "Proposal," your forecast should reflect that risk.
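To make that concrete, here's a minimal sketch of a stage-weighted baseline. The stage names and conversion rates are illustrative assumptions, not real benchmarks – pull your own from prior quarters:

```python
# Sketch: weight open pipeline by historical stage-to-close conversion rates.
# Stages and rates below are illustrative, not real benchmarks.
STAGE_CONVERSION = {
    "Discovery": 0.15,    # share of deals entering this stage that eventually close
    "Proposal": 0.40,
    "Negotiation": 0.70,
}

def historical_baseline(deals):
    """deals: list of (stage, amount) tuples for currently open pipeline."""
    return sum(amount * STAGE_CONVERSION.get(stage, 0.0) for stage, amount in deals)

pipeline = [("Discovery", 100_000), ("Proposal", 50_000), ("Negotiation", 30_000)]
print(historical_baseline(pipeline))  # ≈ 56,000 of baseline forecast from 180k of pipeline
```

The point of the sketch: 180k of raw pipeline is worth roughly 56k once you weight it by where deals actually die.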

2. Pipeline Coverage Ratios

This is the macro view:

  • Coverage = Pipeline / Target

Let’s break that down:

  • Pipeline: The total value of all qualified deals currently active in your sales funnel – deals that have a realistic chance of closing.

  • Target: The revenue goal you've set for the quarter or year.

If you have £1.2M in pipeline and your revenue target is £400k, then your coverage ratio is 3x.

Why it matters: This ratio tells you if you mathematically have enough in play to hit your goal. It’s not a guarantee, but it’s your starting point. You want enough pipeline volume to account for deals that will inevitably slip, stall, or fall apart.

  • For new business: I aim for 3.5–4x coverage.

  • For expansion or renewals: 1.5–2x often works, since those deals usually close faster and with higher confidence.

Note: These ratios aren’t one-size-fits-all. They should be adjusted based on your average deal size, sales cycle length, and close rates by segment (e.g., SMB vs. Enterprise).

But here’s the key: different teams need different ratios. One mistake I made early on was applying the same 3x rule across enterprise and SMB. Enterprise needed more time and more coverage. SMB? Less.
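In code, the coverage check from the £1.2M / £400k example is just a ratio compared against a per-segment threshold. A quick sketch (the thresholds are the rules of thumb above – tune them to your own close rates):

```python
# Sketch: pipeline coverage by segment. Thresholds are rules of thumb,
# to be adjusted for deal size, cycle length, and segment close rates.
COVERAGE_TARGETS = {"new_business": 3.5, "expansion": 1.5}

def coverage_ratio(pipeline_value, revenue_target):
    return pipeline_value / revenue_target

ratio = coverage_ratio(1_200_000, 400_000)
print(ratio)  # 3.0 – the £1.2M / £400k example above
print(ratio >= COVERAGE_TARGETS["new_business"])  # False: short of the 3.5x new-business bar
```

Notice that the same 3.0x pipeline passes the expansion bar but fails the new-business one – which is exactly the enterprise-vs-SMB mistake described above.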

3. Rep-Level Forecasting (The Human Layer)

Every good forecast still has a human element. But it needs structure:

  • Reps input commit numbers.

  • I review every deal over a certain threshold with managers.

  • If a deal hasn’t had activity in 14+ days, I flag or downgrade it.

  • Deals with single-threaded contacts or no updated notes? Red flag.

This is where deal inspection becomes crucial. Forecasting without inspection is just optimism.
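The 14-day activity rule is easy to automate once deals carry a last-activity date. A minimal sketch – the field names and deal records are illustrative, not a real CRM schema:

```python
from datetime import date, timedelta

# Sketch: flag deals with no activity in 14+ days, per the inspection rule above.
# Field names (name, last_activity) are illustrative, not a real CRM schema.
STALE_AFTER = timedelta(days=14)

def flag_stale(deals, today):
    """Return deals whose last recorded activity is 14+ days old."""
    return [d for d in deals if today - d["last_activity"] >= STALE_AFTER]

deals = [
    {"name": "Acme renewal", "last_activity": date(2024, 5, 1)},
    {"name": "Globex new biz", "last_activity": date(2024, 5, 20)},
]
stale = flag_stale(deals, today=date(2024, 5, 21))
print([d["name"] for d in stale])  # ['Acme renewal']
```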

Defining Forecast Categories (and Their Practical Use)

When sales teams build forecasts, they often categorize deals based on how likely they are to close. These categories are standard in most CRMs and forecasting tools like Salesforce, Clari, and HubSpot.

Forecast categories help standardize rep inputs and give leadership clearer insight:

  • Pipeline (10–40% probability): These are early-stage deals. They may have shown some initial interest, but there’s a long way to go. These deals are used mainly for coverage planning – to understand raw volume, not closing probability.

  • Best Case (50–70% probability): These deals are further along. A proposal might be sent, or strong signals from the buyer exist, but something critical is still pending – like budget confirmation, legal review, or stakeholder alignment.

  • Commit (90–100% probability): These are the most mature and promising deals. The rep and their manager are confident it will close within the forecast period. Typically includes:

    • A clear mutual action plan

    • Multi-threaded buying conversations

    • Budget and timing confirmed

I often weight these categories accordingly in the roll-up to create a more realistic view. For example:

  • Pipeline: weighted at 25%

  • Best Case: weighted at 60%

  • Commit: weighted at 95%

This layered approach helps balance optimism with realism and leads to more accurate forecasts, especially when you’re still building your forecasting muscle.
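The weighted roll-up is a one-liner once each deal carries a forecast category. A sketch using the weights above (deal amounts are illustrative):

```python
# Sketch: category-weighted forecast roll-up, using the weights above.
# Deal amounts are illustrative.
WEIGHTS = {"Pipeline": 0.25, "Best Case": 0.60, "Commit": 0.95}

def weighted_rollup(deals):
    """deals: list of (forecast_category, amount). Returns weighted forecast."""
    return sum(WEIGHTS[category] * amount for category, amount in deals)

deals = [("Pipeline", 200_000), ("Best Case", 100_000), ("Commit", 50_000)]
print(weighted_rollup(deals))  # ≈ 157,500 from 350k of raw pipeline
```

An unweighted roll-up would claim 350k; the weighted view says about 157.5k is realistic – that gap is the optimism the weighting is there to strip out.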

Where the Data Lives (and Where It Shouldn’t)

I’ll be blunt: Salesforce isn’t your forecast. It’s your data source. Your forecast lives in the process you build around it.

I start with Google Sheets or Airtable for flexibility, then tie back to CRM fields:

  • Forecast Category (Pipeline, Best Case, Commit)

  • Confidence Score (Rep + Manager)

  • Expected Close Date

  • Last Activity Date

  • Deal Risk Label (Low / Medium / High)

What changed my accuracy most? Forcing reps to label each deal with a risk score and confidence level. Subjective, yes. But surprisingly revealing.

As teams scale, I recommend exploring tools like Clari or Aviso to add automation, ML-driven forecasts, and predictive risk scoring. For seed to Series A? Sheets still do the job.

How I Stress-Test a Forecast Mid-Quarter

Your forecast isn’t static. It should flex as your quarter evolves.

Here’s my quick stress test:

  • Review Slipped Deals: Which ones were pushed from last quarter?

  • Run Pipeline Velocity: How long from stage to stage?

  • Top 5 Deals by Value: Are they still active? Last touched?

  • Scenario Planning: If top 2 reps miss their number, what happens?

Bonus red flag: If 70%+ of your forecast is expected to close in the final 2 weeks of the quarter, you’re likely looking at a "hockey stick." That’s not a forecast–that’s a bet.
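The hockey-stick check is also easy to script once deals have expected close dates. A sketch with illustrative dates and amounts:

```python
from datetime import date, timedelta

# Sketch: what share of forecast value is expected to close in the final
# two weeks of the quarter? 70%+ is the hockey-stick warning from the text.
# Dates and amounts are illustrative.
def late_close_share(deals, quarter_end, window_days=14):
    """deals: list of (expected_close_date, amount) tuples."""
    total = sum(amount for _, amount in deals)
    window = timedelta(days=window_days)
    late = sum(amount for close, amount in deals if quarter_end - close <= window)
    return late / total

deals = [
    (date(2024, 6, 25), 300_000),  # final week
    (date(2024, 6, 28), 200_000),  # final week
    (date(2024, 5, 10), 100_000),  # mid-quarter
]
share = late_close_share(deals, quarter_end=date(2024, 6, 30))
print(f"{share:.0%} of forecast closes in the final 2 weeks")  # 83% – hockey stick
```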

Measuring and Improving Forecast Accuracy Over Time

Forecasting isn’t just about projecting; it’s about learning. After each quarter, I measure:

  • Forecast Attainment % = Actual Revenue / Forecasted Revenue

  • Forecast Variance = (Forecast - Actual) / Forecast
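Both metrics are trivial to compute; the value is in tracking them every quarter. A quick sketch with illustrative numbers:

```python
# Sketch: the two post-quarter metrics defined above. Numbers are illustrative.
def forecast_attainment(actual, forecast):
    """Actual Revenue / Forecasted Revenue."""
    return actual / forecast

def forecast_variance(forecast, actual):
    """(Forecast - Actual) / Forecast. Positive = you over-forecasted."""
    return (forecast - actual) / forecast

print(forecast_attainment(380_000, 400_000))  # 0.95 -> 95% attainment
print(forecast_variance(400_000, 380_000))    # 0.05 -> forecast ran 5% high
```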

I run a post-mortem after each quarter:

  • Which deals slipped?

  • What did we overestimate?

  • Were there patterns in who forecasted most accurately?

Do this for three quarters, and your forecast will be more accurate by design.

Aligning with Marketing and SDR Pipeline

One silent killer of forecasts: misaligned pipeline definitions.

  • What marketing calls a "qualified lead" may not meet sales criteria.

  • What SDRs push into pipeline may lack buying signals.

I now insist on one shared definition of what qualifies as viable pipeline. I track how much pipeline comes from:

  • Marketing-sourced (Inbound)

  • SDR-sourced (Outbound)

  • AE-sourced (Self-gen)

This breakdown helps me adjust pipeline coverage expectations and hold each channel accountable for forecast quality.

My Forecasting Playbook, Summed Up

  • Align around a single source of truth

  • Use historical, coverage, and rep data together

  • Automate what you can, inspect what matters

  • Don’t fear subjectivity, just structure it

  • Teach the forecast, don’t just build it

  • Track forecast accuracy and hold a post-mortem

  • Align sales, SDRs, and marketing on what counts as real pipeline

Where to Start: My Step-by-Step Forecasting Blueprint

This framework draws on methodologies from Clari, Gartner, and Sales Hacker, along with hard lessons from real-world RevOps.

If you're reading this and thinking, "This all makes sense, but where do I actually begin?" – here's exactly how I would approach building a forecasting system from scratch. Whether you're a RevOps lead at a Series A startup or a founder trying to bring some sanity to your pipeline, follow this sequence:

Step 1: Clarify Definitions and Align the Team

  • Forecast: A prediction of future sales revenue based on current data and assumptions.

  • Pipeline: All active deals currently tracked in your CRM.

  • Qualified Pipeline: Deals that meet a minimum set of criteria to be considered viable – typically includes budget, authority, need, and timeline (BANT).

  • Align on what "qualified" means across Sales, SDRs, and Marketing. This is essential. According to Gartner, misaligned definitions of pipeline stages and qualification criteria are one of the top contributors to inaccurate forecasts in B2B sales teams.

Step 2: Audit and Clean Your CRM Data

  • Check for missing close dates, owners, or stage assignments.

  • Remove obviously dead or stalled deals.

  • Ensure all deals have next steps, notes, and recent activity.

  • Tag or flag deals with no movement in 14+ days.

If your CRM is a mess, your forecast will be fiction. Start here.

Step 3: Define Your Forecast Categories

Use simple, standardized buckets:

  • Pipeline = Early-stage deals (low confidence, ~10–30% weighted)

  • Best Case = Deals with positive signals, but not guaranteed (50–70%)

  • Commit = Deals the rep and manager believe will close (90–100%)

Tie each to specific sales stages and expected activities. These forecast categories – Pipeline, Best Case, Commit – are the same classifications used by tools like Clari and Salesforce to help teams apply probability-based weightings to each deal based on confidence level and deal maturity.

Step 4: Establish Your Sales Stages and Conversion Rates

  • Define stages like: Discovery → Qualification → Proposal → Negotiation → Closed Won/Lost

  • Pull historical data: what % of deals move from one stage to the next?

  • Example: if 100 deals reach Proposal and 20 close, that stage converts at 20%.

This helps you build a weighted forecast based on real behavior – not hope. Sales Hacker recommends stage-by-stage conversion analysis as the foundation for accurate forecasting models, especially in teams with long or variable sales cycles.
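Assuming you can pull entry counts per stage from your CRM, the conversion math is simple division. A sketch – the counts are made up, sized so that 100 deals reach Proposal and 20 close, matching the example above:

```python
# Sketch: stage-to-stage conversion from historical entry counts.
# Counts are illustrative: how many deals entered each stage last year.
entered = {
    "Discovery": 400,
    "Qualification": 250,
    "Proposal": 100,
    "Negotiation": 40,
    "Closed Won": 20,
}
order = ["Discovery", "Qualification", "Proposal", "Negotiation", "Closed Won"]

for prev, nxt in zip(order, order[1:]):
    rate = entered[nxt] / entered[prev]
    print(f"{prev} -> {nxt}: {rate:.0%}")
# Overall, Proposal -> Closed Won is 20/100 = 20%, matching the example above.
```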

Step 5: Calculate Pipeline Coverage

  • Formula: Pipeline Value / Revenue Target

  • Target: 3–4x coverage for new business, 1.5–2x for expansion/renewals

Coverage tells you if your current pipeline is even big enough to hit your number.

Step 6: Layer Your Forecasting Models

Use 3 complementary lenses:

  1. Historical: Based on past stage conversions and deal velocity

  2. Pipeline Coverage: Are you above or below your target?

  3. Rep-Level Commitments: What reps and managers are signaling

Each model gives you a range – you’ll learn to triangulate between them. This triangulation approach mirrors Clari’s own forecasting methodology, which combines historical AI-driven trendlines with real-time rep inputs and pipeline coverage ratios to generate a dynamic forecast.

Step 7: Build a Live Forecast Sheet or Dashboard

  • Start in Google Sheets if you’re early-stage

  • Include: deal name, stage, amount, close date, owner, confidence score, risk level

  • Add formulas to weight forecasts based on category or stage

Tools like Clari or Aviso can come later, once your process is mature.

Step 8: Set a Forecasting Cadence

  • Weekly forecast calls with managers and reps

  • Monthly reviews with leadership

  • Real-time updates in the dashboard as deals progress

Consistency is what makes your forecast trusted.

Step 9: Track Accuracy and Run Post-Mortems

  • Forecast Attainment % = Actual Revenue / Forecasted Revenue

  • Forecast Variance = (Forecast - Actual) / Forecast

  • Review misses: Which deals slipped? Why?

Aviso and InsightSquared both emphasize the value of quarterly forecast post-mortems to identify patterns and refine future forecasting assumptions based on historical accuracy.

This is where learning happens. You get better every quarter by reviewing what worked and what didn’t.

Final Thought

I’m three years in. Still learning. But one thing I know for sure: a forecast doesn’t need to be perfect. It needs to be trusted, repeatable, and constantly stress-tested. Clean data is boring–until you miss target. Build accordingly.
