This is a guest post by Alex Birkett. Alex works on user acquisition growth at HubSpot. He’s based in Austin, Texas, but travels roughly half of the year. When he’s not working on growth and optimization, he enjoys martial arts, yoga, snowboarding, and occasionally writing on his personal blog.

Retention is the core of growth. If you can’t keep users around, any acquisition efforts are basically just filling up a proverbial bucket only to have it leak out through holes in the bottom.

[Image: the leaky bucket theory]

You’ve heard that before, I’m sure, and of course you want to keep more customers around. But how do you actually pinpoint customer pain points and predict (and prevent) churn?

Obviously, you have to have a great product. “Great,” however, is subjective, and if you can detect the key points where most users are likely to churn, you can set meaningful priorities around them.

Furthermore, if you can detect, on an individual level, when users are about to churn, you can intervene and remedy the situation, particularly if you have real-time messaging solutions like live chat or in-app notifications.

This article is all about detecting those customer churn red flags. As usual, data is key. In the early stages of your business, you can find these red flags through one-to-one feedback. Past that point, however, you’ll need a process by which you can collect data and analyze it (preferably programmatically) to detect churn indicators.

A useful way to break this up is by qualitative and quantitative data:

  • Quantitative data can be instrumental in predicting churn points before they happen as well as finding which parts of a product or experience may be un-optimized.
  • Qualitative data can be useful for determining what exactly is going wrong in your user experience and possible solutions. Sometimes, the simple act of collecting qualitative feedback (in a considerate way) can help you avoid churn in the first place.

Quantitative data: Detecting Customer Churn Red Flags in Behavioral Data

At this point, most businesses know the value of data-driven decision making, even if they’re not fully advanced on their own implementation and use of data.

In fact, it’s now more common (at least in my experience) for companies to have a swath of data but be so overwhelmed by the sheer volume of it that they don’t know where to start.

When it comes to churn, here’s the roadmap or checklist I like to go through to get the proper data:

  • Map out your product analytics strategy
    • Determine core touchpoints in the customer journey and product experience
    • Set up events to track those key points and name them intelligently
  • Approach this data using business questions to figure out what issues you may be having with your growth model.
  • Do qualitative research to triage problematic areas in your quantitative model.
  • Run experiments to see how you can move the needle
  • Learn, iterate, repeat

This is the common pathway in conversion optimization: audit/track, research, experiment, learn, repeat.

Later, we’ll cover algorithmic and predictive solutions, but first, we’ll walk through each of these steps.

Map out your product analytics strategy

Before you run off and build a bunch of reports and cohort analyses, you’ll want to define your product analytics strategy in the first place. What constitutes an active user? What constitutes a churned user?

When we’re talking about revenue, these things are fairly certain, but what about in the case of users and usage? It becomes a bit muddier.

In any case, your product usually has a key action or actions that constitute activation. Then there’s usually a grace period where you still count a user as “active” despite no additional or continued actions. Finally, there’s a time cutoff where you determine a customer has churned.

Joao Correia outlines this well in his excellent piece on measuring churn rate:

[Image: user activity timeline]

As he defines it, an active user is a customer who has executed at least one key task in the previous 30 days. A key task is an action that is relevant and meaningful to the product’s main purpose. For example, if you operate a meeting scheduling app, a key action would probably be someone booking a meeting.

A churned user, then, is a customer who hasn’t performed a key task in the last 30 days.
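If it helps to see that 30-day rule as code, here’s a minimal sketch in Python. The user IDs and dates are made up; in practice, `last_key_task` would come from your event tracking rather than being hard-coded:

```python
from datetime import date

# Hypothetical last key-task date per user (e.g. last meeting booked).
last_key_task = {
    "user_a": date(2019, 4, 25),
    "user_b": date(2019, 2, 10),
    "user_c": date(2019, 4, 2),
}

def churn_status(last_task_date, today, window_days=30):
    """Active if the user performed a key task within the last `window_days` days."""
    return "active" if (today - last_task_date).days <= window_days else "churned"

today = date(2019, 4, 30)
statuses = {user: churn_status(d, today) for user, d in last_key_task.items()}
# user_a and user_c are active; user_b has gone quiet and counts as churned
```

The `window_days` cutoff is the knob you’d tune to your own product’s usage cadence.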

Joao made a great visualization tracking the journey of a user who activates on January 16 and later churns:

[Image: example of product user activity]

Once you’ve defined your key task, you’ll need to invest in an analytics platform (whether you build it or buy it), and a team that can implement and manage the event tracking over time.

Of course, there are many analytics tools out there nowadays that make churn and retention tracking quite easy. Here are a few out-of-the-box solutions, all of which include churn-specific reports and dashboards:

  • Amplitude
  • Woopra
  • Mixpanel
  • Heap

Additionally, you can use Google Analytics or Adobe Analytics to set up your event tracking, or even an open source solution like Snowplow.

In any case, being able to implement and orchestrate the proper data, as well as blending it with other customer data sources to find correlative churn indicators, is a crucial step here. Choosing an analytics tool is a tough business question, and no single solution is best for every company.

Approach your data with business questions

When you have a proper analytics setup, you’ll want to approach your product analytics with business questions. In this case, we’re mostly looking to identify segments or cohorts that have greater than average customer churn (and why that may be).

Of course, we can start out in aggregate and map our churn rate over time to a benchmark. Joao has a helpful Tableau visualization in his churn article showing this:

[Image: churn rate over time, visualized in Tableau]

In this case, we see that the churn rate is improving over time, which is obviously a great thing. This isn’t always the case, and it’s not always so simple, as sometimes churn goes up for some user cohorts and down for others. We want to understand those discrepancies.
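As a quick sketch, the aggregate churn-rate math looks like this (the monthly counts here are made up):

```python
# Hypothetical counts: customers at the start of each month, and how many
# of them churned during that month.
months = {
    "Jan": {"start": 1000, "churned": 80},
    "Feb": {"start": 1050, "churned": 63},
    "Mar": {"start": 1120, "churned": 45},
}

# Monthly churn rate = churned customers / customers at the start of the month.
churn_rate = {m: c["churned"] / c["start"] for m, c in months.items()}
# In this toy example, churn improves from 8% in Jan to about 4% in Mar
```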

This is where you’ll want to get really good at cohort analysis and segmentation.

Cohort analysis – not only useful for churn, by the way – is simply a segmentation method that tracks a group of users with similar characteristics over time. The simplest way to do this is by tracking a cohort based on their sign-up date. By doing this, you can parse out the effects of different product changes based on the behavioral changes of cohorts over time.
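The mechanics of a sign-up-date cohort table can be sketched with nothing fancier than dictionaries. The activity log below is made up for illustration:

```python
from collections import defaultdict

# Made-up activity log: (user, signup_month, month_active)
activity = [
    ("u1", "2019-01", "2019-01"), ("u1", "2019-01", "2019-02"),
    ("u2", "2019-01", "2019-01"),
    ("u3", "2019-02", "2019-02"), ("u3", "2019-02", "2019-03"),
    ("u4", "2019-02", "2019-02"),
]

# cohort (sign-up month) -> calendar month -> set of users active that month
cohorts = defaultdict(lambda: defaultdict(set))
for user, signup_month, month_active in activity:
    cohorts[signup_month][month_active].add(user)

# Retention: share of each cohort still active in each subsequent month.
retention = {
    cohort: {m: len(users) / len(by_month[cohort])
             for m, users in sorted(by_month.items())}
    for cohort, by_month in cohorts.items()
}
# e.g. half of the January cohort was still active in February
```

A real pipeline would pull this from your analytics tool, but the shape of the output (cohort by month, percentage retained) is exactly what the retention grids below show.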

Here’s a great visualization of what a cohort analysis looks like from Intercom:

[Image: retention cohorts visualization from Intercom]

For example, if you launched a redesign of your product dashboard and subsequently see a large spike in churn in the following cohorts, you know you may have a problem with that particular redesign. Then it would be worth running controlled experiments to isolate the effects of that given change.

Perhaps you find that there’s a steep drop-off early in the curve. In that case, you may want to look at your early product experiences, your aha-moment, and your onboarding. Anything to get users to stick past the inflection point.

When you visualize cohort analyses, you’ll see three types of curves:

  • Flattening
  • Declining
  • Smiling

A flattening curve means that, after an initial drop-off, your retention stabilizes: for at least a percentage of your sample, the product is valuable, and they return to use it consistently over time. The higher the point at which the curve flattens, the better your long-term retention and the healthier your product looks for that cohort.

A declining curve is no bueno. Retention keeps falling, eventually approaching zero or near-zero users. Obviously this is not sustainable for a subscription business, as you cannot simply keep refilling the top of the funnel with more users indefinitely.

Finally, a smiling curve is a sign you have a remarkable product. It means that the churn rate flattens, but then network effects and product developments spur churned users to come back after an inactive time period.

[Image: flattening, declining, and smiling retention curves]
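As a toy illustration (not a robust classifier), you could label a cohort’s curve by how its tail behaves. The 0.02 threshold here is arbitrary, and a real analysis would look at more than the last two points:

```python
def curve_shape(retention, eps=0.02):
    """Crudely label a retention curve by the behavior of its tail."""
    tail_delta = retention[-1] - retention[-2]
    if tail_delta > eps:
        return "smiling"      # retention ticks back up: churned users returning
    if abs(tail_delta) <= eps:
        return "flattening"   # the tail has stabilized
    return "declining"        # still losing users

curve_shape([1.0, 0.6, 0.41, 0.40])  # 'flattening'
curve_shape([1.0, 0.6, 0.40, 0.25])  # 'declining'
curve_shape([1.0, 0.6, 0.40, 0.48])  # 'smiling'
```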

Of course, you can begin to pinpoint particular segments or channels that have churn issues when you apply segmentation to these visuals. This is something most analytics tools, even Google Analytics, let you do. You can also build custom dashboards with SQL and Tableau, though that takes a bit more effort.

As an example of segmentation applied to cohort analysis, check out this visualization created in Google Analytics’ cohort analysis feature. It compares tablet and desktop retention to mobile retention:

[Image: Google Analytics cohort analysis comparing tablet/desktop and mobile retention]

Ryan Farley, co-founder of LawnStarter, told me that he uses retention by acquisition sources as a way to plan out their campaigns:

“One of our most powerful cohorts is our retention by acquisition channel. For example, people from direct mail tend to stay far longer, so we can afford to have a higher CAC than say Adwords, which has a much lower retention rate.”

He covers exactly how to loop together this pre-conversion and post-conversion data in this post.

[Image: retention by acquisition channel]

That’s another reason you’d want a good way to tie together different data sources for full customer journey analytics.
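Here’s a crude sketch of the kind of blend Ryan describes, with made-up numbers: approximate LTV per channel as months retained times monthly revenue, and the longer-retained channel supports a higher CAC ceiling.

```python
from collections import defaultdict

# Hypothetical joined records: (acquisition_channel, months_retained, monthly_revenue)
customers = [
    ("direct_mail", 18, 60), ("direct_mail", 24, 60), ("direct_mail", 12, 60),
    ("adwords", 4, 60), ("adwords", 6, 60), ("adwords", 5, 60),
]

ltvs = defaultdict(list)
for channel, months, revenue in customers:
    ltvs[channel].append(months * revenue)  # crude LTV: retention x revenue

avg_ltv = {ch: sum(v) / len(v) for ch, v in ltvs.items()}
# Direct mail customers stick around longer, so that channel justifies a higher CAC
```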

Triage problem areas with qualitative research

Cohort analysis and segmentation will tell you “what” may be the problem (a feature launch, a redesign, a component of your product) and “who” may be at risk for churn (a specific segment, those who come in via a particular marketing channel or use a particular browser or device), but qualitative research is really where you start to carve out why specifically people are churning.

Qualitative research is how I like to build insights to tap into when it comes to running experiments. It gives me ideas as to how we can fix the churn problem via product changes or at least predictively avoid churn through intervention (live chat, discounting, etc.).

You can guess all day why your experience isn’t working for users. But at the end of the day, I find that we’re usually too close to the problem. If we knew what wasn’t working, we’d probably just fix it easily. That’s where qualitative research comes in handy.

Some of my favorite qualitative research tools include:

  • In-app feedback polls
  • Customer surveys
  • Post-cancellation surveys
  • Session replays
  • User testing

I’ll briefly explain each one and give you some recommended tools.

In-app feedback polls

In-app or on-site feedback is one of my favorite passive data collection tools. It’s a simple concept: let people give you feedback in real time, whenever they choose.

Usabilla offers a great product feedback solution. They have a subtle widget on the side of the screen you can click that pulls out into an overlay, giving you the option between specific or general feedback:

[Image: Usabilla feedback widget]

If you click “specific feedback” you can point and click at the element on the page or in the app you want to talk about:

[Image: Usabilla specific feedback selector]

Another great tool for collecting product feedback is Qualaroo. Generally, I like Usabilla for passive feedback collection which you can later tie into specific user cohorts, and I like Qualaroo when you want to find the answer to a specific business question. You can also trigger it to only show to a specific user segment and on specific pages:

[Image: Qualaroo survey targeting options]

Customer surveys

Never doubt the power of a well-designed customer survey to uncover potential product issues and experimentation ideas.

However, be wary of a poorly designed survey, and of taking the responses at face value and implementing changes without testing. The subset of users who willingly take a survey may not be representative of your user base, and even if it is, their answers may not translate directly into actionable solutions.

Still, I like to keep tabs on user satisfaction so I can tie it back to behavioral data like churn rates. That way, we can predict churn based on things like NPS or CSAT scores.

[Image: NPS survey]
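One hedged sketch of that tie-back: bucket survey respondents into the standard NPS groups and compute each group’s churn rate. The responses below are invented, and a real version would join survey data to billing data:

```python
from collections import defaultdict

# Made-up (nps_score, churned) pairs from joined survey + billing records.
responses = [
    (9, False), (10, False), (8, False), (7, True), (3, True),
    (5, True), (6, False), (10, False), (2, True), (9, False),
]

def nps_bucket(score):
    """Standard NPS grouping: 9-10 promoter, 7-8 passive, 0-6 detractor."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

tallies = defaultdict(lambda: [0, 0])  # bucket -> [churned_count, total]
for score, churned in responses:
    bucket = nps_bucket(score)
    tallies[bucket][0] += churned
    tallies[bucket][1] += 1

churn_by_bucket = {b: churned / total for b, (churned, total) in tallies.items()}
# Detractors churn at a much higher rate than promoters in this toy data
```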

Post-cancellation surveys

Here’s the obvious qualitative feedback source: ask churned users, right after they cancelled, why they cancelled or stopped using your product.

This can be a great way to triangulate product or customer experience issues that are also backed up via behavioral data.

By the way, I like doing this in an open-ended way and then coding the answers categorically after the fact. Don’t want to lead anyone to an answer they wouldn’t have offered on their own:

[Image: open-ended cancellation survey question]
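Coding open-ended answers can start as simple keyword matching before graduating to anything fancier. The categories and keywords below are invented for illustration:

```python
# Toy keyword rules for coding open-ended cancellation answers into categories.
CATEGORIES = {
    "price": ["expensive", "price", "cost"],
    "missing_feature": ["feature", "integration", "missing"],
    "usability": ["confusing", "hard to use", "complicated"],
}

def code_answer(answer):
    """Return every category whose keywords appear in the answer, else 'other'."""
    answer = answer.lower()
    matched = [cat for cat, keywords in CATEGORIES.items()
               if any(kw in answer for kw in keywords)]
    return matched or ["other"]

code_answer("Too expensive for what we got")       # ['price']
code_answer("Missing the Salesforce integration")  # ['missing_feature']
```

In practice you’d refine the category list as you read through real answers, since the whole point of open-ended questions is surfacing reasons you didn’t anticipate.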

Session replays

If you use a tool like Hotjar or ClickTale, you can get some really nice passive behavioral data on users. First off, you can watch their anonymized session replays. This is just a video of them going through your website or product. Really insightful when you’re looking at a specific page that seems to be problematic.

Then you can also get heatmaps that show where users are looking and clicking on your landing pages or app.

Really great to get a clearer picture of where users’ attention is placed on a given page or workflow.

User testing

Finally, when you want to learn something specific or flag user experience or usability problems, user testing is the gold standard. Get five to seven users, give them a specific and a general task, and watch them interact with your site or app.

This is an amazing way to uncover usability bottlenecks you may never have even considered.

You don’t need a tool for this, but if you want to do remote user testing and get users recruited for you, UserTesting and TryMyUI are both good options.

Run experiments

At this point, you should have a good idea of which cohorts are churning at higher than average rates and your qualitative investigation should give you a full spreadsheet loaded with potential issues and experiment ideas.

Now it’s time to launch some product experiments.

A/B testing is the gold standard when it comes to validating ideas and improving metrics. It would be a long article if I dove into the specifics on A/B testing, so I’ll link out to a huge guide I wrote on the topic here.

[Image: what is A/B testing]
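For a flavor of the math behind a churn A/B test, here’s a standard two-proportion z-test sketch with made-up counts (a real analysis should also plan sample sizes up front and watch out for multiple testing):

```python
import math

def two_proportion_z(churned_a, n_a, churned_b, n_b):
    """z-statistic for the difference between two churn rates (pooled variance)."""
    p_a, p_b = churned_a / n_a, churned_b / n_b
    p_pool = (churned_a + churned_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical experiment: control onboarding (A) vs. redesigned onboarding (B).
z = two_proportion_z(churned_a=120, n_a=1000, churned_b=85, n_b=1000)
significant = abs(z) > 1.96  # roughly 95% confidence, two-sided
```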

Learn & Repeat

Growth is a process, not a one-time fix. It’s not likely you’ll solve your churn problems in one go-round, and even if you could, you’ll have more churn problems in the future. Same goes for any growth problem you’re trying to solve. For instance, in conversion optimization, today’s traffic sources and devices are likely to convert differently than tomorrow’s, and the competition and buyer’s environment is always changing.

As such, growth is a process, one where the winners are the ones who learn the most and iterate the fastest. Keep at it and dedicate resources to it, and you’ll come out on top.


Predictive churn using machine learning

What we’ve covered so far is sufficient for 99% of companies, but if you’re technically curious and interested in machine learning, I’ll briefly cover a few options available to you.

The worst thing about having an abundance of data is relying on your ability to parse through all the various dimensions and metrics and find meaningful connections. The best thing about machine learning is it was designed to do just that.

There are two options here:

  • You can build the predictive model yourself with R or Python
  • You can use a service like DataRobot to speed up the process

As for the first option, expect to spend a lot of time cleaning up your data and, if you’re not technically proficient, learning a lot about R and Python. I’d also recommend taking Andrew Ng’s Machine Learning course to brush up on the basics of classification algorithms to understand what’s going on inside the algorithm.

R is my weapon of choice, so if you’re like me, this guide on predictive customer churn in R will help you.

[Image: churn model output in R]

If you’re into Python, fear not: there’s a step-by-step walkthrough with code included here.
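To give a flavor of the DIY route (this is not the linked walkthrough’s code, and the feature names are invented), here’s a tiny logistic-regression churn model in pure Python:

```python
import math

# Made-up training rows: (monthly_logins, support_tickets) -> churned (1) or not (0).
# In practice these features would come from your own analytics export.
X = [(20, 0), (18, 1), (2, 5), (1, 4), (15, 2), (3, 6), (22, 1), (2, 7)]
y = [0, 0, 1, 1, 0, 1, 0, 1]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow in exp()
    return 1 / (1 + math.exp(-z))

# Fit logistic regression with plain stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.05
for _ in range(3000):
    for (x1, x2), target in zip(X, y):
        err = sigmoid(w[0] * x1 + w[1] * x2 + b) - target
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def churn_probability(logins, tickets):
    return sigmoid(w[0] * logins + w[1] * tickets + b)

# Low-engagement, high-ticket users score as high churn risk; heavy users as low
```

Real projects would use a library like scikit-learn and far more features, but the shape of the problem (labeled historical churn, correlated behavioral inputs, a predicted probability out) is exactly this.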

If you’re using a service like DataRobot, you just need to upload a tabular CSV file that contains the variable you want to predict as an output (in our case, churn or retention) and whatever variables you think may be correlated.

[Image: predicting churn in DataRobot]

Ryan Farley talks through his use case for predicting churn with DataRobot here:

“In my case, the predicted output was whether the person had canceled during the last month, and my inputs were all sorts of things: lot size, price, average rating, NPS score, location, lead source, whether they joined via our app or online, and sorts of activity metrics.

Data Robot then builds a bunch of machine learning models for you, and recommends the best to use based on a variety of metrics.  It also does a univariate analysis on your dataset, and shows which variables play the biggest role in the outputs. I won’t get too into the details here, but it’s a pretty cool tool.”


Predicting customer churn and finding accurate leading indicators is by no means easy, but it is important.

Once you can do that, you can begin to diagnose product problems and solve them through UX design and experimentation; optimize your acquisition by bringing in more qualified users (and spending more based on their retention and LTV); and invest in proactive customer support to intervene during critical moments in a customer’s lifecycle.

It’s powerful stuff, and you have a lot of options to get started. You can begin with some simple product analytics and event tracking to build out cohort reports and put up some feedback polls. When you start to get more serious, you can look into machine learning and predictive models. Whatever you do, don’t ignore churn. Retention is the core of growth, remember.
