
Why Your Health Score Is Missing the Renewals That Actually Churn

Single health scores blend away the signals that predict churn. Here is how CSMs and renewal managers can spot at-risk accounts 60 to 90 days before renewal, using composite risk across product, support, and engagement data.

Neelam Chakrabarty
April 22, 2026
7 min read

If you run customer success or renewals, you have probably lived through this week. A top account is green in your CSP. Usage looks steady. The last QBR went fine. Then the renewal lands at 78 percent of where you forecast it, and everyone on the call is trying to explain what happened.

John Huber, who has led renewals for large enterprise software teams, described exactly this pattern on the Account Management Secrets podcast. A renewal that scored 94 percent safe closed at 78 percent, leaving a seven-figure gap in the quarter. His conclusion was not that the forecast was bad. It was that the signals used to build it were too simple. (Amplify AM)

That is the quiet problem inside most customer success organizations right now. The health score is not wrong. It is just too smooth. It takes a dozen real signals, averages them together, and hands you a number that feels like clarity but hides the compound risk underneath.

This blog is about how to fix that, what the data actually says works, and how CSMs and renewal managers can build a sharper system for spotting at-risk accounts before the forecast call.

The single health score problem

Health scores exist because customer success needs a way to triage. With a book of 80 or 200 accounts, no one can read every usage chart, every ticket, and every email. The score compresses everything into one number so the CSM knows where to spend Monday morning.

The issue is what the compression hides.

Gainsight, which defined the category, puts typical health score accuracy at around 85 percent for churn prediction. Their own research shows that SaaS teams using composite scores with four or more dimensions see 34 percent better prediction accuracy than teams using single-dimension models. (Gainsight)

Read that again. A quarter to a third of your churn prediction quality is sitting on the floor, underneath the average. It is the difference between knowing an account is yellow and knowing that its usage dropped 22 percent while support tickets from power users doubled and the executive sponsor changed LinkedIn jobs in March.

One number cannot carry that much meaning. Four signals, watched together, can.

What actually predicts renewal risk

The research on churn signals has become far more specific in the last two years. Four categories consistently show up across studies from Totango, ChurnZero, and Sturdy:

Product usage trends, like declining logins, shrinking session depth, and abandoned features. A 20 percent drop in logins over 90 days is one of the most cited thresholds in the literature.

Support and sentiment patterns, including a sudden rise in ticket volume, longer resolution times, or shifts in email tone from key users.

Relationship signals, which is where the data gets brutal. When an executive champion changes roles or leaves, research from Sturdy puts the probability of churn in that account at 51 percent within 12 months. Other studies put the number at 65 percent within six months when it is a senior leader. (ChurnZero, Momentum Nexus)

Business health context, including contract value, prior expansion history, and time since the last meaningful CSM touch. Accounts going 45 or more days without a CSM conversation show up repeatedly as a quiet churn signal.
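The four categories above reduce to a handful of threshold checks. The sketch below shows one way to encode them; the field names, the `AccountSignals` class, and the exact cutoffs (20 percent login drop, 30 percent ticket rise, 45 days without a touch) are illustrative, taken from the ranges cited in the text rather than from any fixed schema.

```python
from dataclasses import dataclass

# Hypothetical account snapshot; field names are assumptions for illustration.
@dataclass
class AccountSignals:
    login_change_90d: float    # e.g. -0.22 means logins down 22% over 90 days
    ticket_change_90d: float   # e.g. 0.35 means tickets up 35% over 90 days
    champion_changed: bool     # executive sponsor left or changed roles
    days_since_csm_touch: int  # days since the last meaningful CSM conversation

def risk_flags(a: AccountSignals) -> dict:
    """One boolean per signal category, using the thresholds cited above."""
    return {
        "usage": a.login_change_90d <= -0.20,
        "support": a.ticket_change_90d >= 0.30,
        "relationship": a.champion_changed,
        "engagement": a.days_since_csm_touch >= 45,
    }

flags = risk_flags(AccountSignals(-0.22, 0.35, False, 50))
print(sum(flags.values()))  # 3 of 4 categories firing
```

The point of keeping the flags separate rather than averaging them is that two or three categories firing together is the compound pattern a single blended score smooths over.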

None of these are exotic. Most CSMs know they matter. The problem is that the four categories live in four different systems, and by the time a human pulls them into one view, the renewal is 30 days out and the window to act has closed.

Why CSMs are stuck in spreadsheet work

Ask any CSM how they build a risk list for their upcoming renewals and you will hear a version of the same story. Export from the CRM. Pull a product usage report. Ask a data analyst for support ticket volume by account. Paste it all into a Google Sheet. Spend an afternoon cross-referencing.

This is the same underlying tension that keeps showing up across customer success research. CSMs are expected to be data-driven, but the plumbing between their systems does not exist, so they either become part-time data analysts or they rely on the health score and hope.

The cost of that is measurable. Harvard Business Review, citing research by Bain and Earl Sasser, found that acquiring a new customer is five to 25 times more expensive than keeping one, and a five percent increase in retention can lift profits by 25 to 95 percent. (HBR) When a CSM spends two days pulling data instead of having two conversations, that math is working against the business.

The 60 to 90 day window that matters most

There is a specific window where at-risk analysis pays off the most, and most teams miss it. It is the 60 to 90 day zone before renewal.

Earlier than that, you do not yet have a renewal decision forming, so intervention has a diffuse effect. Later than that, the buyer is already talking to procurement or a competitor, and discounts become the only lever. In between is where a CSM can still change the outcome.

ChurnZero has documented that teams who run disciplined monthly risk reviews starting at the 90-day mark cut renewal forecast variance by several percentage points within two quarters. (ChurnZero) The work is not harder. It is earlier.

The practical version of this for a CSM looks like a weekly or biweekly dashboard that filters to only accounts renewing in the next 60 to 90 days, then flags the ones showing compound risk across usage, support, and engagement signals. Not a score. A pattern.
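That weekly view can be sketched in a few lines. This is a minimal example with stdlib Python and a toy account list; the account dictionary fields, company names, and the rule of "two or more signals firing" are all assumptions for illustration, not a prescribed configuration.

```python
from datetime import date, timedelta

# Toy book of business; fields and values are hypothetical.
accounts = [
    {"name": "Acme", "renewal": date.today() + timedelta(days=75),
     "login_change": -0.25, "ticket_change": 0.40, "days_since_touch": 50},
    {"name": "Globex", "renewal": date.today() + timedelta(days=70),
     "login_change": 0.05, "ticket_change": 0.10, "days_since_touch": 10},
]

def in_window(a, low=60, high=90):
    """Keep only accounts renewing inside the 60-90 day window."""
    days_out = (a["renewal"] - date.today()).days
    return low <= days_out <= high

def compound_risk(a):
    """Count how many signal categories are firing for this account."""
    checks = [a["login_change"] <= -0.20,
              a["ticket_change"] >= 0.30,
              a["days_since_touch"] >= 45]
    return sum(checks)

# Flag accounts in the window with risk across multiple dimensions.
watchlist = [a["name"] for a in accounts
             if in_window(a) and compound_risk(a) >= 2]
print(watchlist)  # ['Acme']
```

Run weekly, this produces a short pattern-based watchlist rather than a score: an account makes the list only when it is both inside the renewal window and declining on multiple fronts at once.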

A sharper framework for spotting at-risk accounts

Based on what the research supports and what working CSM teams are doing, a cleaner approach has five steps.

First, pull your renewal dates and contract values from the CRM. This is the spine. Without it, any analysis gives equal weight to a 5,000 dollar account and a 500,000 dollar one.

Second, layer in product usage from the last 90 days. You are looking for directional change, not absolute numbers. A 20 percent drop in logins matters even if the absolute number is still above your benchmark.

Third, add support ticket trends. A 30 percent or higher jump in tickets over 90 days, especially on accounts where tickets come from power users, is a stronger signal than total ticket count.

Fourth, check engagement. Days since the last CSM touch, days since the executive sponsor was in a meeting, any known change in the buying committee. This is where most of the silent churn hides.

Fifth, rank by composite risk, weighted by contract value. You want the accounts that are declining across multiple dimensions at the top, and you want the dollars at stake to decide who gets the CSM's Tuesday morning.
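The five steps above can be collapsed into one ranking function. The sketch below uses a simple product of signal count and contract value; the weighting scheme, field names, and sample accounts are assumptions for illustration, and a real model would tune the weights per signal.

```python
# Sketch of step five: rank by composite risk, weighted by contract value.
def composite_score(acct):
    # Count the signal categories firing (bools sum as 0/1 in Python).
    signals = (
        (acct["login_change"] <= -0.20) +
        (acct["ticket_change"] >= 0.30) +
        acct["champion_changed"] +
        (acct["days_since_touch"] >= 45)
    )
    # Dollars at stake decide who gets the CSM's Tuesday morning.
    return signals * acct["contract_value"]

# Toy book: "A" is a large account with one signal firing,
# "B" is a small account with all four firing.
book = [
    {"name": "A", "contract_value": 500_000, "login_change": -0.22,
     "ticket_change": 0.10, "champion_changed": False, "days_since_touch": 30},
    {"name": "B", "contract_value": 5_000, "login_change": -0.30,
     "ticket_change": 0.50, "champion_changed": True, "days_since_touch": 60},
]

ranked = sorted(book, key=composite_score, reverse=True)
print([a["name"] for a in ranked])  # ['A', 'B']
```

Note the design choice the weighting forces: the half-million dollar account with a single firing signal outranks the small account that is on fire everywhere, which is exactly the triage the fifth step describes.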

This is the framework the Querri playbook on identifying at-risk accounts before renewal is built around. The goal is not another scoring model. It is a repeatable view that surfaces compound risk instead of averaging it away.

Where Querri fits in the workflow

The reason most CSM teams do not run this framework is not that they do not believe in it. It is that they cannot plumb the systems together without a data engineering ticket.

Querri is built to remove that step. You connect the CRM, the product analytics tool, the support system, and any survey data you have. Then you ask, in plain English, for the view you want. Something like: show me all accounts renewing in the next 90 days where logins dropped more than 20 percent and tickets rose more than 30 percent, ranked by contract value.

The output is a ranked list, not a black box score. The CSM can see every signal that contributed, export a CSV for the team standup, or schedule the same query to run every Monday morning.

That shift matters because it moves the CSM from data assembler to decision maker. The two days a week that used to go into pulling and pasting now go back into the customer conversations that actually save renewals.

What to do this week

If you lead a CS or renewals team, three small moves are worth making before your next forecast call.

First, pick one upcoming renewal cohort, the 60 to 90 day window, and build a single ranked list of at-risk accounts using four signal categories, not one score. Compare the list to what your health score is telling you. The gap is where your blind spots live.

Second, look at your last four surprise churns. For each one, write down what signals were visible 60 days out and which system they lived in. You will almost always find the pattern was there. It just was not assembled.

Third, have an honest conversation with your CSMs about how much of their week goes into data plumbing. If the answer is more than half a day, you have a workflow problem, not a talent problem.

The renewals that catch customer success teams off guard are almost never truly surprising in hindsight. The signals were there. They were just spread across four systems and blended into one comfortable number. Breaking the number apart is how you get the window back.

Further reading

For the working version of the framework described above, including the exact prompts and signal thresholds, see the Querri playbook on identifying at-risk accounts before renewal.

Tags

#At-Risk Accounts #Renewal Management #Customer Health Score #Churn Prevention #Customer Success #Renewal Forecasting #CSM Playbook #Querri
Neelam Chakrabarty is Chief Growth Officer at Querri, with over 20 years of experience building and scaling products, teams, and growth strategies across B2B and B2C companies.
