You're Measuring Churn Wrong: How to Actually Calculate SaaS Churn Rate
Your churn number is a lie. Not because the math is wrong, but because a single blended percentage tells you almost nothing about what is actually happening in your business. After parsing cancellation patterns across $14M in lost B2B SaaS revenue, one thing is clear: most founders track one number, react to that number, and make the wrong decision because that number hides the real story.
Here is how to measure churn in a way that actually helps you fix it.
Is 3% monthly churn good or bad for B2B SaaS?
3% monthly churn is average. It compounds to 31% annual churn. You lose nearly a third of your customer base every year. Whether that is acceptable depends entirely on whether you know WHY those customers left.
The math is straightforward: Annual churn = 1 - (1 - 0.03)^12 = 30.6%. At 3%, if you start the year with 1,000 customers, you end with roughly 694, assuming zero new signups. That is 306 customers gone.
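If you want to sanity-check your own number, the compounding math is a few lines of Python (assuming a constant monthly rate and no reactivations):

```python
def annualized_churn(monthly_churn: float) -> float:
    """Compound a constant monthly churn rate over 12 months."""
    return 1 - (1 - monthly_churn) ** 12

monthly = 0.03
starting_customers = 1_000
remaining = starting_customers * (1 - monthly) ** 12

print(f"Annual churn: {annualized_churn(monthly):.1%}")    # 30.6%
print(f"Customers left after 12 months: {remaining:.0f}")  # ~694
```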
The real question is not "is 3% good?" It is "what does 3% consist of?"
Here is where it gets interesting. In cancellation data from 400+ PLG startups, that "3% monthly" almost always breaks down into something like:
- 1.2% voluntary churn (the customer decided to leave)
- 0.9% involuntary churn (payment failed, card expired)
- 0.6% trial-included churn (trial users counted as customers)
- 0.3% already-gone churn (customers who stopped using months ago, finally cleaned up)
Each of those requires a completely different intervention. Lumping them into one number means you are solving the wrong problem.
For context: sub-$1M ARR companies typically see 5-8% monthly. $1-10M ARR averages 3-5%. Above $10M, strong companies push under 2%. Enterprise with annual contracts sits below 1%. Your stage matters more than the absolute number.
What is the right way to measure SaaS churn?
Measure churn across four dimensions: logo vs revenue, voluntary vs involuntary, cohort vs aggregate, and gross vs net. A single blended number tells you almost nothing.
Dimension 1: Logo churn vs revenue churn
Logo churn counts heads. Revenue churn counts dollars.
You can lose 50 customers and $2,500 in MRR (all $50/mo accounts) while your remaining base expands by $8,000. Logo churn looks bad. The revenue picture is healthy. Or you can lose 3 enterprise accounts and $45,000 in MRR. Logo churn looks fine. Revenue churn is a crisis.
Track both. Weight revenue churn more heavily for financial planning. Use logo churn to spot product-market fit issues.
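A minimal sketch of both calculations, using the $50/mo scenario above (the 1,000-customer base and $100,000 starting MRR are made-up totals for illustration):

```python
def logo_churn(customers_lost: int, customers_at_start: int) -> float:
    """Share of customers lost, regardless of what they paid."""
    return customers_lost / customers_at_start

def revenue_churn(mrr_lost: float, mrr_at_start: float) -> float:
    """Share of MRR lost, regardless of how many accounts it came from."""
    return mrr_lost / mrr_at_start

# 50 cancellations, all $50/mo accounts, out of 1,000 customers and $100k MRR.
print(f"Logo churn:    {logo_churn(50, 1_000):.1%}")          # 5.0%
print(f"Revenue churn: {revenue_churn(2_500, 100_000):.1%}")  # 2.5%
```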
Dimension 2: Voluntary vs involuntary churn
Voluntary: customer clicked cancel. They made a choice. Involuntary: payment failed. The customer may not even know they are gone.
Data from 18,000 failed payment recoveries shows involuntary churn accounts for 20-40% of total churn at most B2B SaaS companies. That is a payment infrastructure problem, not a product problem. Mixing it with voluntary churn makes your product look worse than it is and sends your team chasing the wrong fixes.
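The fix is to tag every churn event with a reason at the source and report the two buckets separately. A sketch, assuming you can export churn events with a reason field from your billing system (the record shape and reason strings here are hypothetical):

```python
from collections import defaultdict

# Hypothetical export: one record per churned subscription.
churn_events = [
    {"customer": "cus_001", "mrr": 99,  "reason": "cancellation_requested"},
    {"customer": "cus_002", "mrr": 49,  "reason": "payment_failed"},
    {"customer": "cus_003", "mrr": 199, "reason": "cancellation_requested"},
    {"customer": "cus_004", "mrr": 49,  "reason": "card_expired"},
]

INVOLUNTARY_REASONS = {"payment_failed", "card_expired"}

mrr_lost = defaultdict(float)
for event in churn_events:
    bucket = "involuntary" if event["reason"] in INVOLUNTARY_REASONS else "voluntary"
    mrr_lost[bucket] += event["mrr"]

for bucket, mrr in sorted(mrr_lost.items()):
    print(f"{bucket}: ${mrr:,.0f} MRR lost")
```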
Dimension 3: Cohort vs aggregate
Aggregate churn: "We lost 4% of customers this month." Cohort churn: "Customers from the January 2025 cohort churned at 8% in month 3, while the April 2025 cohort churned at 3% in month 3."
Aggregate hides trends. Cohort analysis reveals them. If your March cohort churns less than your January cohort, something you changed is working. If your most recent cohorts churn faster, something broke. You cannot see either pattern in an aggregate number.
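A minimal cohort grouping, assuming you can export each customer's signup month and churn month (the data below is made up):

```python
from collections import defaultdict

# None in "churned" means the customer is still active.
customers = [
    {"id": "a", "signup": "2025-01", "churned": "2025-03"},
    {"id": "b", "signup": "2025-01", "churned": None},
    {"id": "c", "signup": "2025-04", "churned": "2025-06"},
    {"id": "d", "signup": "2025-04", "churned": None},
]

def month_index(ym: str) -> int:
    year, month = map(int, ym.split("-"))
    return year * 12 + month

cohort_size = defaultdict(int)
cohort_churn = defaultdict(lambda: defaultdict(int))  # cohort -> months since signup -> churned count

for c in customers:
    cohort_size[c["signup"]] += 1
    if c["churned"]:
        age = month_index(c["churned"]) - month_index(c["signup"])
        cohort_churn[c["signup"]][age] += 1

for cohort in sorted(cohort_size):
    for age, lost in sorted(cohort_churn[cohort].items()):
        print(f"{cohort} cohort, month {age}: {lost / cohort_size[cohort]:.0%} churned")
```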
Dimension 4: Gross vs net
Gross churn: total MRR lost from cancellations and downgrades. Net churn: gross churn minus expansion revenue (upgrades, seat additions).
A company with 5% gross revenue churn and 7% expansion has -2% net revenue churn. Their existing customer base is growing without new sales. This is best-in-class. A company reporting "5% churn" without specifying gross or net is giving you a number that could mean wildly different things.
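The gross-vs-net arithmetic, using the 5% gross / 7% expansion example above (the $100k base and the cancel/downgrade split are made up):

```python
def net_revenue_churn(churned_mrr, downgrade_mrr, expansion_mrr, starting_mrr):
    """Net revenue churn; a negative result means the existing base is growing."""
    gross = (churned_mrr + downgrade_mrr) / starting_mrr
    expansion = expansion_mrr / starting_mrr
    return gross - expansion

# $4k cancellations + $1k downgrades (5% gross), $7k expansion, on $100k MRR.
print(f"Net revenue churn: {net_revenue_churn(4_000, 1_000, 7_000, 100_000):+.1%}")  # -2.0%
```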
Why does blended churn hide the real story?
Blended churn mixes voluntary cancellations with failed payments, enterprise accounts with self-serve signups, and first-month users with two-year customers. A single rate could be hiding three different problems or no problems at all.
Here is a real scenario. A SaaS company reports 4% monthly churn to their board. Looks normal. But segmented:
- Annual contract customers: 0.5% monthly churn
- Monthly self-serve customers: 9% monthly churn
- Involuntary (failed payments): 1.8% of total
- First-30-day customers: 14% churn
- Customers past 90 days: 1.2% churn
That "4% blended" is actually five different problems:
- Self-serve onboarding is broken (9% vs 0.5% on annual)
- Payment recovery needs work (1.8% involuntary)
- First-month activation is failing (14% in first 30 days)
- Long-term retention is solid (1.2% after 90 days)
- Annual contracts are doing their job (0.5%)
The founder who sees "4%" might cut prices or add features. The founder who sees the segmented data fixes onboarding and dunning. One of them actually reduces churn. The other wastes a quarter.
Blended churn is a vanity metric. Segmented churn is an operating metric.
What is the difference between cancellation and churn in SaaS?
Cancellation is when a customer requests to end their subscription. Churn is when they are removed from your active customer count. These events can be days or weeks apart, and the gap matters more than most teams realize.
A customer clicks "Cancel" on March 5. Their billing cycle ends March 28. They remain an active, paying customer for 23 more days. During that window, they might:
- Change their mind and reactivate
- Continue using the product (and realize they need it)
- Get contacted by your team with a save offer
- Let the subscription lapse and become a churned customer on March 28
If you count cancellation requests as churn on the day they happen, you overcount. Some of those customers come back. If you only count churn at the end of the billing period, your data is more accurate but delayed.
The practical difference matters for two reasons:
Reporting accuracy. If 100 customers cancel in March but 15 reactivate before their billing period ends, your actual churn is 85, not 100. The reactivation window is real revenue. Track both: cancel requests (leading indicator) and actual churn (lagging indicator).
Intervention timing. The gap between cancellation and churn is your save window. In analysis of cancellation-to-churn patterns, customers contacted within 48 hours of their cancel request reactivate at 2-3x the rate of customers contacted in the final days. The earlier you understand why they canceled, the more likely you are to address it.
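A small sketch of tracking both dates so the save window stays visible (the dates and record shape are illustrative):

```python
from datetime import date

# Hypothetical cancellation records: when the cancel was requested vs.
# when the billing period (and therefore access) actually ends.
cancellations = [
    {"customer": "cus_101", "requested": date(2025, 3, 5),
     "period_end": date(2025, 3, 28), "reactivated": False},
    {"customer": "cus_102", "requested": date(2025, 3, 9),
     "period_end": date(2025, 3, 31), "reactivated": True},
]

today = date(2025, 3, 20)

cancel_requests = len(cancellations)                                  # leading indicator
churned = sum(1 for c in cancellations
              if c["period_end"] <= today and not c["reactivated"])   # lagging indicator
in_save_window = [c for c in cancellations
                  if not c["reactivated"] and c["period_end"] > today]

print(f"Cancel requests this month: {cancel_requests}")
print(f"Actually churned so far:    {churned}")
print(f"Still in the save window:   {len(in_save_window)}")
```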
What is an acceptable churn rate for B2B SaaS?
Under 5% monthly is acceptable for self-serve SMB SaaS. Under 3% is acceptable for mid-market. Under 1% is the standard for enterprise with annual contracts. But "acceptable" is the wrong framing. The real question is whether your rate is improving month over month.
Here are the benchmarks by segment:
| Segment | Acceptable Monthly | Strong Monthly | Best-in-Class |
|---|---|---|---|
| SMB self-serve | Under 5% | Under 3% | Under 2% |
| Mid-market | Under 3% | Under 2% | Under 1% |
| Enterprise (annual) | Under 1% | Under 0.5% | Net negative revenue churn |
These benchmarks are logo churn, not revenue churn (the enterprise best-in-class bar is the one revenue-based exception). Revenue churn should be lower if your expansion motion is working.
The trap is treating benchmarks as targets. "We are at 4% and the benchmark is 5%, so we are fine" is the wrong conclusion. 4% monthly still means losing 39% of customers annually. That is 39% of the customer base that left without you understanding why.
Across conversations with PLG companies doing $2-15M ARR, the pattern is consistent: the companies that actually reduce churn are not the ones with the best benchmarks. They are the ones that know, for each churned customer, exactly what went wrong. A company at 5% churn that understands every cancellation reason will outperform a company at 3% churn that is guessing.
The number tells you the size of the problem. The reasons tell you what to do about it.
Should trial users count in your churn rate?
No. Trial users who never converted are not churned customers. They are unconverted leads. Including them inflates your churn number and obscures your real retention problem.
This is one of the most common measurement mistakes, especially at PLG companies with free trials or freemium tiers. If 1,000 people start a trial in January and 100 convert to paid, those 900 non-conversions are a conversion problem, not a churn problem. Mixing them into your churn calculation makes your churn rate look catastrophic and your retention look worse than it is.
The correct approach:
- Track trial-to-paid conversion separately. This is your activation metric. It tells you whether your product delivers value quickly enough to justify payment.
- Start the churn clock when the first payment clears. A customer who pays you $99 and cancels after 30 days is a churned customer. A trial user who never paid is not.
- If you have a freemium tier, exclude free users from churn calculations entirely. Free users are a separate funnel. Track free-to-paid conversion as its own metric.
The exception: if a customer paid, received a refund within a trial-like window, and was removed, that is a gray area. Most companies exclude refunded-within-7-days customers from churn and track them as "failed activations." This is a reasonable approach as long as you are consistent.
Your board deck should show two numbers: trial-to-paid conversion rate AND paying customer churn rate. Never blend them.
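Computing both numbers from the same signup export is straightforward once the churn clock starts at the first payment (the records below are made up):

```python
# Hypothetical signup export: first successful payment date (None = never converted).
signups = [
    {"id": 1, "first_payment": "2025-01-14", "churned": False},
    {"id": 2, "first_payment": None,         "churned": False},  # trial, never converted
    {"id": 3, "first_payment": "2025-01-20", "churned": True},
    {"id": 4, "first_payment": None,         "churned": False},
]

paying = [s for s in signups if s["first_payment"]]  # the churn clock starts here
trial_to_paid = len(paying) / len(signups)
paying_churn = sum(s["churned"] for s in paying) / len(paying)

print(f"Trial-to-paid conversion: {trial_to_paid:.0%}")  # 50%
print(f"Paying customer churn:    {paying_churn:.0%}")   # 50%
```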
Can CRM tools accurately measure churn?
CRMs track account status, not churn. Salesforce knows an opportunity is "Closed-Lost." It does not know whether the customer left voluntarily, had a payment fail, or simply did not renew an expired contract.
Here is what CRMs get wrong:
They lack billing granularity. Your CRM shows "Active" or "Churned." It does not show "payment failed on attempt 3 of 4, card will retry in 48 hours, customer likely does not know there is an issue." That distinction is the difference between involuntary churn you can recover and voluntary churn you need to understand.
They blend time periods. CRM reporting typically shows churn by close date, not by cohort. You cannot easily see whether your Q1 signups churn faster than your Q3 signups without building custom reports. And even then, most CRM churn reports are aggregate, not cohort-based.
They miss the "why." A CRM can tag a churn reason from a dropdown. In practice, those dropdowns get filled in weeks after the fact by a rep guessing from memory. The data quality is low. Analysis of CRM churn-reason fields shows they match the actual reason (verified through direct conversation) less than half the time. "Price" in the CRM often turns out to be "onboarding failure" when you actually talk to the customer.
They do not distinguish between cancellation and churn. Most CRMs track the moment an account is marked closed. They do not track the cancel request date, the end-of-billing-cycle date, or the reactivation window.
The right approach: pull churn data from your billing system (Stripe, Chargebee, Recurly), not your CRM. Billing data tells you exactly when a customer was last charged, whether payments failed, and when access was revoked. Layer your CRM data on top for account context, but never use CRM as the source of truth for churn measurement.
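As a starting point, here is a minimal sketch of pulling canceled subscriptions straight from Stripe with the official Python library. Treat the cancellation_details fields as an assumption to verify against your API version and account:

```python
import stripe

stripe.api_key = "sk_live_..."  # your secret key

voluntary = involuntary = 0

# Page through every canceled subscription in the billing system.
for sub in stripe.Subscription.list(status="canceled", limit=100).auto_paging_iter():
    # On recent API versions, cancellation_details.reason distinguishes
    # "payment_failed" (involuntary) from "cancellation_requested" (voluntary).
    details = sub.get("cancellation_details") or {}
    if details.get("reason") == "payment_failed":
        involuntary += 1
    else:
        voluntary += 1

print(f"Voluntary churn events:   {voluntary}")
print(f"Involuntary churn events: {involuntary}")
```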
For the "why" behind each cancellation, you need actual conversations with departing customers. Not a CRM dropdown. Not a one-question survey. A structured conversation that captures the real reason, the competitor they are considering, and the thing you could have done differently.
See how your churn actually breaks down
Connect your Stripe account and get an instant audit: blended vs segmented churn, voluntary vs involuntary split, revenue at risk, and saveable customer estimates.
Stop guessing. Start measuring.
Get Your Free Stripe Churn Audit
No credit card required. 10 free AI conversations included.