Customer Support Metrics pt 1
Which metrics actually matter when evaluating support team performance?
When asking the question “which ONE north star metric can I use to determine if the support program is performing well?”, it feels like I’m always chasing my tail. There are enough important metrics that answering “yea, but which one goes all the way at the top by itself?” is genuinely difficult. The only real answer is “it depends on the business conversation we’re having.” A blend of metrics needs to be brought to the table when discussing support program performance, weighted together to evaluate success and priorities. Having only one metric is convenient and desirable, but it comes at the expense of important context and can lead to incorrect prioritization. This subject has to come in multiple parts because of the number of metrics swirling around the customer team world. I’ll walk through the main metrics I use in my dashboard to drive different conversations:
Customer Satisfaction Scores (CSAT)
This is what customers think of the quality of support. The worse the score gets, the more relevant this metric becomes to the conversation. Typically, when the score gets within target range, the conversations naturally move to other metrics. If I have CSAT scores below 80%, clearly there’s an issue somewhere that needs to be addressed. But even then, I’m looking towards other metrics to tell me what the “what” is (contact rate trends, first reply time, first contact resolution, churn). If my CSAT scores are above 90%, it’s worth celebrating for a hot minute. But beyond that, where is the action? There are different ways to get this score, but here’s how I do it:
Customer receives a post-interaction survey and chooses a rating on a scale of 1-5.
I multiply this by 20 to put it on a scale of 1-100% which is easier for me to describe to leadership.
example: it’s easier to say we have 85% CSAT than it is 4.25 stars out of 5
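To make the conversion concrete, here’s a minimal sketch in Python - the ratings list and function name are just illustrative:

```python
# Minimal sketch: converting 1-5 survey ratings into a 0-100% CSAT score.
# Assumes `ratings` is a list of post-interaction survey responses (1-5).

def csat_percent(ratings: list[int]) -> float:
    """Average the 1-5 ratings and multiply by 20 to land on a 0-100% scale."""
    if not ratings:
        raise ValueError("no survey responses yet")
    return sum(ratings) / len(ratings) * 20

ratings = [5, 4, 4, 5, 3, 5, 4]               # hypothetical survey responses
print(f"CSAT: {csat_percent(ratings):.0f}%")  # prints "CSAT: 86%"
```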
It’s worth mentioning that these scores are driven by a small subset of customers (customers who have a support experience + are also willing to fill out the survey). So rating the support program on this metric alone puts a disproportionate amount of weight on a sample that might not reflect the true performance of the program. It’s also vulnerable to incidents outside the control of the support team. e.g. at HMBradley, when we switched partner banks, customers shredded our CSAT scores because of the inconvenience of having to manually move their accounts over to the new bank - the support team provided excellent service, but you wouldn’t have known it just from the metrics.
It definitely has its place on the dashboard; I use it and always have. But I wouldn’t say it’s strong enough to be the north star.
imo the future state of this metric is sentiment analysis - the quality of service can be observed across the entire group of customers in the support experience, including those who would never fill out a survey. And real-time sentiment analysis can be leveraged to prevent irritated customers from exiting a support experience without an intervention attempt from the team. i.e. any customers who leave without an intervention have already formed their opinion. So if CSAT is measuring someone’s experience from last week or even yesterday, how much can be done about it besides saying “we’re really really sorry”? Versus a real-time sentiment analysis that shows where I can jump in and fix a problem as it’s happening. That’s also a more thrilling challenge to take on.
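To make the real-time idea concrete, here’s a toy sketch of what an intervention trigger could look like. Everything here is hypothetical - score_sentiment stands in for whatever actual sentiment model would be plugged in, and the threshold is made up:

```python
# Toy sketch of a real-time intervention trigger. `score_sentiment` is a
# stand-in for a real sentiment model; the threshold is a made-up example.
SENTIMENT_FLOOR = -0.4  # hypothetical: below this, flag for intervention

def score_sentiment(message: str) -> float:
    """Placeholder: a real model would return a score in [-1.0, 1.0]."""
    negative_words = {"frustrated", "angry", "cancel", "worst", "useless"}
    hits = sum(word in message.lower() for word in negative_words)
    return -min(hits * 0.3, 1.0)

def check_for_intervention(conversation: list[str]) -> bool:
    """Flag the conversation if the most recent messages trend negative."""
    if not conversation:
        return False
    recent = conversation[-3:]  # look at the last few customer messages
    avg = sum(score_sentiment(m) for m in recent) / len(recent)
    return avg < SENTIMENT_FLOOR

messages = ["I'm so frustrated, my transfer failed again",
            "This is the worst, I want to cancel"]
if check_for_intervention(messages):
    print("Escalate: loop in a supervisor before the customer exits")
```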
Contact Rate
How many contacts we’re actually getting relative to the number of customers/orders. The program is managed around the current state of this metric, and modifications to the infrastructure are planned around its forecast. This includes staffing, training, QA monitoring, observing for true changes in trends, etc.
The ratio is straightforward:
[number of contacts] / [number of customers*] = Contact Rate
*The denominator changes based on different criteria; at Soylent, it was [number of orders] and at HMBradley it was [number of active customers].
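A quick sketch of the calculation (the function name is illustrative, using the Soylent figures from the next paragraph):

```python
# Minimal sketch of the contact rate ratio. The denominator swaps depending
# on the business: orders (Soylent) vs. active customers (HMBradley).

def contact_rate(contacts: int, denominator: int) -> float:
    """[number of contacts] / [number of customers or orders]."""
    return contacts / denominator

monthly_orders = 10_000  # illustrative numbers from the scaling example below
contacts = 1_500
print(f"{contact_rate(contacts, monthly_orders):.0%}")  # prints "15%"
```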
The ideal contact rate depends on the scale of the operation. At Soylent, the contact rate was 15%, which was fine when monthly orders were around 10,000 - that generates 1,500 contacts per month. But at 500,000 monthly orders, the same rate generates 75,000 contacts. To avoid scaling up the support team by that order of magnitude, I focused energy on driving down the contact rate ahead of growth when possible.
Contact rate is something I double-click on. There’s the rate for [all tickets], and then I zoom in on [driver 1], [driver 2], and [driver 3]. If any of these top contact rate drivers have a match in Cost Per Ticket, that’s a natural way to drive the project prioritization process.
Cost Per Ticket
Shows where the biggest cost burden is in providing the support experience. My (relatively simplified) formula:
[variable support costs**] / [tickets] = Cost per Ticket
**[variable support costs] = total salary cost of the agents, supervisors, QA & training staff required to service the total volume.
Overhead support costs aren’t included in this equation because they’re a fixed base that would create an artificial floor for this metric. Instead, they’re included in the Cost to Service Customer metric to show the total cost impact of the support program.
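Here’s a minimal sketch of the calculation with made-up figures, keeping overhead out of the numerator as described:

```python
# Minimal sketch: cost per ticket with variable costs only. Overhead is
# deliberately excluded here (it lives in Cost to Service Customer instead).

variable_support_costs = {      # hypothetical monthly salary costs
    "agents": 42_000,
    "supervisors": 9_000,
    "qa_and_training": 6_000,
}
tickets = 15_000                # total monthly ticket volume (made up)

cost_per_ticket = sum(variable_support_costs.values()) / tickets
print(f"Cost per ticket: ${cost_per_ticket:.2f}")  # prints "$3.80"
```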
Keeping Cost Per Ticket limited to the variable support costs shows the true cost-saving potential of eliminating (or automating, etc.) various sources of contacts. So I double-click on [driver 1], [driver 2], and [driver 3] to help prioritize the next projects on the roadmap. If there’s overlap with the Contact Rate drivers, there’s a stronger case. And it gets even stronger when there’s an additional overlap with churn.
example:
driver 1 has a cost per ticket of $4.70 and a contact rate of 4.0%
driver 2 has a cost per ticket of $3.21 and a contact rate of 2.1%
driver 3 has a cost per ticket of $5.90 and a contact rate of 0.8%
without knowing anything about the brand risk or the cost to implement the solution for each driver, the numbers pretty clearly point to the top priority (or at least where I should start digging)
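One way to make that “pretty clearly” explicit: multiply each driver’s cost per ticket by its contact rate to get a per-customer cost burden, then rank. A sketch using the numbers above:

```python
# Ranking drivers by cost burden per customer: cost per ticket * contact rate.
drivers = {
    "driver 1": {"cost_per_ticket": 4.70, "contact_rate": 0.040},
    "driver 2": {"cost_per_ticket": 3.21, "contact_rate": 0.021},
    "driver 3": {"cost_per_ticket": 5.90, "contact_rate": 0.008},
}

def burden(d: dict) -> float:
    return d["cost_per_ticket"] * d["contact_rate"]

for name, d in sorted(drivers.items(), key=lambda kv: burden(kv[1]), reverse=True):
    print(f"{name}: ${burden(d):.3f} per customer")
# driver 1: $0.188, driver 2: $0.067, driver 3: $0.047 -> start with driver 1
```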
bonus: there are also hidden costs when other teams in the org need to be deployed to research & resolve certain contact types. That time can be attributed in different versions of this equation to show which cross-functional projects should be implemented.
Cost to Service Customer
When evaluating the total cost impact of the support program, I aggregate all support-related costs and divide by the total customer base.
([overhead support costs***] + [variable support costs]) / [total number of active customers] = Cost to Service Customer
***[overhead support costs] = software setup fees, leadership/internal salaries, BPO onboarding fees - aka things that would still be in place if there were zero tickets.
I use this metric to show the total cost impact of the program and where any big pieces might need to be moved. e.g. if Cost to Service Customer rises 1:1 with growth, the overhead isn’t being amortized across the larger customer base, so there should be plenty of opportunity to optimize.
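Putting it together as a sketch (all figures are made up):

```python
# Minimal sketch: Cost to Service Customer = (overhead + variable) / customers.
overhead_support_costs = 25_000  # software fees, leadership salaries, BPO onboarding
variable_support_costs = 57_000  # same numerator used in Cost per Ticket above
active_customers = 100_000       # hypothetical

cost_to_service = (overhead_support_costs + variable_support_costs) / active_customers
print(f"Cost to service customer: ${cost_to_service:.2f}")  # prints "$0.82"
```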
Customer Support Churn
This metric is relevant in conversations where friction in the support experience is causing the most disengagement. I take the baseline customer churn metric for the total population and then compare against churn for the top Contact Rate drivers as well as the Cost Per Ticket drivers. If I can attribute higher churn to a specific Contact Type that is impacting Contact Rate & Cost Per Ticket, that guides the overall roadmap priority I suggest.
Churn data is loaded with valuable insights. Any percentage of churn moved in the right direction is worth more than anything else on this list imo. It also answers the age-old question I hear from CX colleagues: “how do we get a seat at the table?” The answer seems pretty clear - by focusing initiatives on the larger business metrics that everyone else at the table cares about, churn being one of them.
[customers who churned + had 1 or more support tickets] = Customer Support Churn
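A sketch of the comparison, assuming per-customer records that join churn status with ticket counts (the data shape and the rate comparison are my own illustration):

```python
# Minimal sketch: compare baseline churn vs. churn among customers who had
# at least one support ticket. Assumes hypothetical per-customer records:
customers = [
    {"id": 1, "churned": True,  "tickets": 2},
    {"id": 2, "churned": False, "tickets": 0},
    {"id": 3, "churned": True,  "tickets": 0},
    {"id": 4, "churned": False, "tickets": 1},
    {"id": 5, "churned": True,  "tickets": 3},
]

def churn_rate(pop: list[dict]) -> float:
    return sum(c["churned"] for c in pop) / len(pop)

baseline = churn_rate(customers)
with_tickets = [c for c in customers if c["tickets"] >= 1]
support_churn = churn_rate(with_tickets)
print(f"baseline churn: {baseline:.0%}, support-contact churn: {support_churn:.0%}")
# baseline 60%, support-contact 67% -> dig into which contact types drive the gap
```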
What else?
There are still more metrics to evaluate that I’ll dive into in part 2. e.g. Product engagement, first contact resolution, first reply time, # of agent touches, tickets per agent, etc.
The primary point here is that there are simply too many metrics to only show one at the top and say “this is how we measure support performance” or “this one metric is how we drive all of our initiatives.” Each metric comes with a long conversation, and the ultimate goal is to provide excellent service that is efficient and informs the changes that need to happen in order to deliver results to the top & bottom lines (LTV) - gotta use a combination!
It still leaves the question of “which support metric is at the very top” unaddressed, but I think the answer lies somewhere in the journey.