The 5-Signal Lead Scoring System Behind Our 94% Follow-Up Rate

Learn the 5-signal lead scoring framework that helps teams prioritise high-intent leads, improve follow-up rates, and close more deals.

Date: 24 Mar 2026
Category: Lio

By Ashley Carter

Most Leads Don't Die Because They Were Bad. They Die Because Nobody Got to Them in Time.

There is a stat that should keep every sales leader awake at night. 44% of sales reps never follow up with a lead at all. Not late. Not poorly. Never.

And of the ones who do follow up, most quit after a single attempt, even though 80% of sales require five or more touchpoints before a deal closes. The leads were real. The intent was there. The system just did not move fast enough or know who to prioritise.

This is not a motivation problem. It is a scoring problem.

When every lead looks the same in your CRM, your team treats every lead the same. The high-intent prospect who visited your pricing page three times this week gets the same follow-up cadence as the person who stumbled onto your blog from a Reddit thread and bounced after nine seconds. One of them is ready to buy. The other is not. But your system cannot tell the difference, so neither can your salespeople.

Here is the exact 5-signal framework that drives a 94% follow-up rate, and why most scoring models fail long before they get anywhere close.

Why Most Lead Scoring Models Collapse

Before getting into what works, it is worth understanding why most lead scoring efforts do not.

The typical approach goes something like this. A marketing team sits down, assigns arbitrary point values to a handful of actions (downloaded an ebook: 10 points, opened an email: 5 points, visited the website: 3 points), sets a threshold, and hands "qualified" leads over to sales. Sales ignores half of them because the leads feel cold. Marketing blames sales for not following up. Sales blames marketing for sending junk. The scoring model sits in a spreadsheet somewhere gathering dust.

This happens constantly. Only 27% of leads passed from marketing to sales are actually qualified. That means nearly three out of four leads that marketing calls "ready" are anything but.

Three issues kill most scoring models before they ever deliver results.

  • They score activity, not intent. Someone who reads twenty blog posts might be a content enthusiast, not a buyer. Someone who visits the pricing page once and leaves might be more ready to purchase than the person who engaged with every email in your nurture sequence. Volume of activity is not the same as quality of signal.

  • They are static. Set up once and never recalibrated. But markets shift, buyer behaviour changes, and what indicated intent six months ago might mean nothing today. A scoring model that is not regularly tested against actual conversion data is just a guess dressed up as a system.

  • They exist in isolation. The score lives in the CRM, but nothing happens automatically when a lead crosses the threshold. Someone still has to notice. Someone still has to act. And that is where the gap between a scored lead and an actual follow-up becomes a chasm.

A scoring system is only as good as the action it triggers. If a lead can score 95 out of 100 and still sit in a queue for two days waiting for a human to notice, the scoring was pointless.

The 5-Signal Lead Scoring Framework

This framework is built around a simple principle. Do not score what leads do. Score what their behaviour means.

Every signal in the system answers a specific question about the lead's readiness to buy. Together, the five signals create a composite picture that is accurate enough to automate action on, not just report on.

Signal 1: Source Fit

The question this answers: How likely is this channel to produce a buyer?

Not all lead sources are created equal. A referral from an existing customer carries fundamentally different weight than a click from a broad-match Google ad. A demo request from your website signals something very different from a webinar registration that was driven by a free coffee voucher.

Source fit scoring works by analysing historical conversion data across every channel and assigning weight accordingly. If leads from LinkedIn ads convert at 12% and leads from organic blog traffic convert at 3%, the LinkedIn lead starts with a higher baseline score before they have done anything else.
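As a sketch, that channel weighting can be expressed as scaling each channel's historical conversion rate into a baseline score. The channel names, rates, and point cap below are illustrative assumptions, not the actual model:

```python
# Source-fit baseline: scale each channel's historical lead-to-deal
# conversion rate to a score out of a chosen maximum contribution.
HISTORICAL_CONVERSION = {   # channel -> historical conversion rate (assumed)
    "referral": 0.22,
    "linkedin_ads": 0.12,
    "organic_blog": 0.03,
    "broad_match_search": 0.02,
}

MAX_SOURCE_POINTS = 25  # how much source fit can contribute to the composite

def source_fit_score(channel: str) -> float:
    """Baseline score proportional to the channel's historical conversion rate."""
    best = max(HISTORICAL_CONVERSION.values())
    rate = HISTORICAL_CONVERSION.get(channel, 0.0)  # unknown channels start at zero
    return round(MAX_SOURCE_POINTS * rate / best, 1)

print(source_fit_score("linkedin_ads"))   # higher baseline than organic_blog
print(source_fit_score("organic_blog"))
```

An unknown channel starting at zero is a deliberate choice here: a lead with no conversion history earns its score through the other four signals instead.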

This is not about penalising channels. It is about giving your team an honest starting point. A lead from a high-converting source gets faster attention. A lead from a lower-converting source enters a nurture track where they have time to demonstrate further intent before a human gets involved.

Most scoring systems skip this entirely. They treat every lead as equal at the point of entry and only start scoring once the lead begins engaging. That is already too late. The source tells you something meaningful about probability, and ignoring it means your model is blind from the first interaction.

Signal 2: Firmographic Match

The question this answers: Does this lead look like our best customers?

This is the "fit" layer. It evaluates whether the lead matches the profile of businesses that have historically converted and stayed. The factors that matter most:

  • Company size

  • Industry

  • Role of the contact

  • Geography

  • Estimated budget capacity

A head of operations at a 30-person SaaS company is a different prospect than an intern at a 5,000-person enterprise, even if they took the exact same actions on your website.

The key here is to build the fit profile from actual closed-won data, not assumptions. Most teams define their ideal customer profile once, during a strategy offsite, and never revisit it. The businesses that score leads accurately are the ones that regularly look at who actually bought, who churned early, and who became a long-term, high-value customer, then update the fit criteria accordingly.
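A fit check against a profile derived from closed-won data might look like the sketch below. Every field name, range, and weight is an illustrative assumption:

```python
# Firmographic fit: compare a lead's attributes against an ideal customer
# profile (ICP) rebuilt periodically from closed-won data.
ICP = {
    "company_size": (20, 200),          # employee range that historically converts
    "industries": {"saas", "fintech"},
    "buyer_roles": {"head of operations", "coo", "founder"},
    "regions": {"uk", "eu", "us"},
}

WEIGHTS = {"company_size": 10, "industry": 8, "role": 8, "region": 4}

def fit_score(lead: dict) -> int:
    """Sum the weights of every ICP criterion the lead matches."""
    score = 0
    lo, hi = ICP["company_size"]
    if lo <= lead.get("company_size", 0) <= hi:
        score += WEIGHTS["company_size"]
    if lead.get("industry") in ICP["industries"]:
        score += WEIGHTS["industry"]
    if lead.get("role", "").lower() in ICP["buyer_roles"]:
        score += WEIGHTS["role"]
    if lead.get("region", "").lower() in ICP["regions"]:
        score += WEIGHTS["region"]
    return score

lead = {"company_size": 30, "industry": "saas",
        "role": "Head of Operations", "region": "UK"}
print(fit_score(lead))  # full firmographic match
```

The same function handles the negative-filter case: a highly engaged lead from the wrong industry and size band simply scores zero on fit.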

Fit scoring also acts as a negative filter. A lead might be highly engaged but fundamentally wrong for your product. Without a fit signal, that lead consumes sales time and produces nothing. With it, the lead gets routed to a nurture track or deprioritised before a rep ever picks up the phone.

Signal 3: Engagement Depth

The question this answers: Is this lead actively exploring, or passively browsing?

This is where most scoring systems begin and end. Page views. Email opens. Content downloads. Form submissions. The problem is not that engagement data is useless. It is that most models treat all engagement equally.

  • Visiting a blog post is not the same as visiting a pricing page

  • Opening an email is not the same as clicking through to a product demo

  • Downloading a top-of-funnel guide is not the same as requesting a proposal template

Engagement depth scoring weights actions by their proximity to a buying decision.

  • Top-of-funnel engagement (blog reads, social follows, newsletter signups) adds modest points

  • Mid-funnel engagement (case study views, comparison page visits, webinar attendance) adds more

  • Bottom-of-funnel engagement (pricing page visits, demo requests, feature-specific deep dives) adds significantly more

The model should also account for breadth. A lead who visits five different product pages in a single session is exhibiting research behaviour. That pattern matters more than the sum of the individual page views.
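The depth-plus-breadth weighting above can be sketched as follows. Action names, point values, and the breadth threshold are assumptions for illustration:

```python
# Engagement depth: weight each action by funnel stage, then add a breadth
# bonus when a session touches many distinct product pages.
ACTION_POINTS = {
    # top of funnel: modest points
    "blog_read": 1, "newsletter_signup": 2,
    # mid funnel: more
    "case_study_view": 5, "comparison_page": 6, "webinar_attended": 6,
    # bottom of funnel: significantly more
    "pricing_page": 12, "feature_deep_dive": 10,
}

def engagement_score(actions: list[str], distinct_product_pages: int) -> int:
    """Sum stage-weighted action points, plus a bonus for research behaviour."""
    base = sum(ACTION_POINTS.get(a, 0) for a in actions)
    # five or more product pages in one session signals active research
    breadth_bonus = 5 if distinct_product_pages >= 5 else 0
    return base + breadth_bonus

print(engagement_score(["blog_read", "pricing_page", "pricing_page"], 5))
```

Note that a single pricing-page visit outweighs a dozen blog reads under this weighting, which is exactly the asymmetry the bullets above describe.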

Signal 4: Timing and Velocity

The question this answers: Is this lead heating up or cooling down?

A lead who visited your pricing page six months ago is a very different proposition from one who visited it yesterday. A lead who has engaged three times in the past week is behaving differently from one who engages once a month.

Timing and velocity scoring captures three dimensions of a lead's behaviour:

  • Recency: How recently did they last engage?

  • Frequency: How often are they engaging?

  • Acceleration: Is the pace of engagement increasing?

It is the difference between a warm lead and a hot one.

This signal is also where score decay matters. If a lead was highly engaged eight weeks ago but has gone silent, their score should reflect that. Most static models do not account for this. They accumulate points indefinitely, which means a lead who was active in January and dormant since can still show a high score in June. That is a false signal, and it wastes your team's time.

Velocity is the most overlooked component. A lead whose engagement is accelerating (one visit last week, three visits this week, pricing page today) is showing a pattern that correlates strongly with imminent purchase decisions. A good scoring model flags that acceleration in real time, not in a weekly report.
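Score decay and acceleration can be sketched with two small functions. The half-life and multiplier values are assumed parameters, not the actual model's:

```python
# Timing and velocity: exponential decay for silence, a multiplier for
# accelerating engagement.
HALF_LIFE_DAYS = 14  # assumed: score contribution halves every two weeks of silence

def decayed(points: float, days_since_last_touch: float) -> float:
    """Exponential decay so a dormant lead's score falls toward zero."""
    return points * 0.5 ** (days_since_last_touch / HALF_LIFE_DAYS)

def velocity_multiplier(touches_last_week: int, touches_this_week: int) -> float:
    """Boost accelerating leads; never penalise below the base score."""
    if touches_this_week > touches_last_week:
        return 1.0 + 0.25 * (touches_this_week - touches_last_week)
    return 1.0

print(round(decayed(40, 28), 1))   # 40 points, silent for four weeks
print(velocity_multiplier(1, 3))   # one visit last week, three this week
```

The decay is what prevents the January-active, June-dormant lead from still showing a high score: four weeks of silence quarters the contribution.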

Research backs this up. Leads contacted within five minutes of showing intent convert at nine times the rate of those contacted after even a short delay. Velocity scoring is what makes that five-minute window actionable instead of theoretical.

Signal 5: Explicit Intent

The question this answers: Has this lead told us, directly or indirectly, that they are ready to buy?

This is the signal that overrides everything else. When a lead requests a demo, asks for pricing, starts a free plan signup, or replies to an outreach email with "Can we talk this week?", the other four signals become context. Intent becomes the headline.

Explicit intent also includes what some teams call "hand-raiser" behaviours.

  • Returning to the pricing page multiple times in 48 hours

  • Comparing your product against a competitor on your website

  • Forwarding your content to a colleague (detectable through email tracking)

  • Inviting a second team member to explore your platform

These are not passive actions. They are buying signals.

Product-qualified leads, those who have actually started using a product and hit meaningful usage milestones, convert at two to three times the rate of traditionally scored leads. If your model does not account for in-product behaviour as a signal, you are missing the strongest predictor you have.

Explicit intent scoring should carry the highest weight in the model and should trigger immediate action. Not a score update for someone to review later. Immediate routing to the right person, with the right context, ready to act.

How the Five Signals Work Together

No single signal tells the full story. A lead with perfect firmographic fit but zero engagement is not ready. A lead with deep engagement but poor fit will not convert. A lead with strong intent but from a historically low-converting source needs a different approach than one with strong intent from a proven channel.

The power of the framework is in the combination.

A lead from a high-converting source (Signal 1) that matches your ideal customer profile (Signal 2), has visited your pricing page twice this week (Signal 3), whose engagement is accelerating (Signal 4), and who just requested a demo (Signal 5) is not a "qualified lead." That is a sale waiting to happen. And the system should treat it accordingly, routing it to your best available rep within minutes, not hours.

Conversely, a lead that scores well on fit but shows no engagement depth, no velocity, and no intent signals belongs in a nurture sequence, not a sales queue. Sending that lead to a rep is how you burn trust between marketing and sales and waste everyone's time.

The framework also makes it possible to diagnose exactly where leads stall.

  • High-fit, high-engagement leads that never convert? The problem is likely in Signal 5. They are interested but not ready. That is a nurture problem, not a sales problem.

  • Leads with strong intent that keep churning post-sale? The problem is in Signal 2. They were not the right fit to begin with.

  • Lots of activity but low conversion across the board? The problem is in Signal 1. The channels driving volume are not driving quality.

Scoring is not just a prioritisation tool. It is a diagnostic tool. And that distinction separates teams that react to lead data from teams that actually learn from it.

What a 94% Follow-Up Rate Actually Requires

A follow-up rate that high is not about discipline. It is about architecture.

When lead scoring is done manually, or when the score exists in a system that requires a human to check it, act on it, and route it, the follow-up rate will always be limited by human attention. Reps get busy. Notifications get buried. High-value leads sit in a queue next to low-value ones and nobody knows the difference until it is too late.

A 94% follow-up rate requires three things the traditional stack cannot deliver.

  1. The scoring has to be real-time. Not batched overnight. Not updated weekly. Real-time. When a lead crosses a threshold, the system needs to know within seconds, not hours.

  2. The scoring has to trigger action automatically. A score is just a number until something happens because of it. The lead needs to be routed to the right person, a task needs to be created, and a follow-up needs to be assigned, all without a human initiating any of it.

  3. The system that scores and the system that acts need to be the same system. When your CRM scores the lead and your task tool manages the follow-up and your email platform handles the outreach, every handoff between tools is a point where speed is lost and leads fall through.
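The core of all three requirements is that the handler which rescores a lead is the same code path that creates the follow-up task. A minimal sketch, with entirely hypothetical names and thresholds:

```python
# Real-time score-and-act: the same in-process event handler that updates the
# score also creates the follow-up task, so no human handoff sits between
# score and action.
from dataclasses import dataclass, field

@dataclass
class Lead:
    name: str
    score: float = 0.0
    tasks: list = field(default_factory=list)

THRESHOLD = 70

def on_event(lead: Lead, points: float) -> None:
    """Rescore on every event; act the moment the threshold is first crossed."""
    was_hot = lead.score >= THRESHOLD
    lead.score += points
    if lead.score >= THRESHOLD and not was_hot:
        # scoring system and acting system are the same system
        lead.tasks.append(f"Follow up with {lead.name} (score {lead.score:.0f})")

lead = Lead("Acme Ops Lead")
on_event(lead, 45)   # engagement accumulates
on_event(lead, 30)   # pricing-page event crosses the threshold: task created
print(lead.tasks)
```

The `was_hot` check matters: the task fires exactly once, at the moment of crossing, rather than on every subsequent event.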

That is the architecture gap. Not a scoring gap. Not a people gap. A systems gap.

Where WorksBuddy Fits In

WorksBuddy was built around the principle that scoring a lead and acting on that score should not be two separate events happening in two separate tools.

LIO, the lead management agent, captures, enriches, and scores every lead the moment it arrives. It evaluates source, fit, engagement, velocity, and intent automatically. When a lead crosses the threshold, LIO does not send a notification and wait. It acts.

  • Routes the lead to the best available person based on score and capacity

  • Triggers a follow-up task in TARO, the task management agent, with the context already attached

  • Gives the rep everything they need: who the lead is, what they did, why they scored highly, and the recommended next step

If the lead is not yet ready, EVOX, the email marketing agent, enrols them in a nurture sequence tailored to where they are in the buying journey. When their behaviour changes, when velocity picks up or an intent signal fires, LIO rescores them and the process starts again.

No manual handoff between systems. No delay between score and action. No lead sitting in a queue because nobody checked the dashboard.

That is how you get to 94%. Not by asking your team to be faster. By building a system where speed is the default.

Stop Losing Leads to Slow Systems

The 5-signal framework is not complicated. Source, fit, engagement, velocity, intent. Any team can adopt it. But the gap between having a scoring model and actually acting on it in real time is where most businesses leave money on the table.

WorksBuddy closes that gap with a free plan that gives you LIO, TARO, and real-time lead scoring from day one, no credit card, no per-seat surprises. Paid plans open up the full power of all eight agents for teams ready to scale.

Your leads are already telling you who is ready to buy. The only question is whether your system is listening.