When Tolstoy opened Anna Karenina with the words, “Happy families are all alike; every unhappy family is unhappy in its own way,” he wasn’t thinking about companies struggling with low customer satisfaction scores. But if he were, he would have been on to something.
Every company struggling with bad CSAT is fighting a unique fight. No two companies have identical pain points, and the causes of those pains differ widely...but everyone has them.
The good news is, there is a consistent way teams can approach finding solutions to these problems, particularly when it comes to increasing customer satisfaction (CSAT) scores.
In this blog post, we run through a full framework you can use at your company to increase CSAT. We'll use an example of a real company we worked with (we'll call them Alexei Inc. for anonymity 😉).
The Problem At Alexei Inc
Alexei Inc grew rapidly and had to scale its customer service team quickly to meet increasing ticket volumes. The team was struggling to keep up with the influx of customer outreach, so adding agents became the top priority in an attempt to keep up with tickets and keep CSAT scores high.
Adding agents let them meet the increased demand, but the support team quickly ran into a bigger problem. They used a solution like Zendesk CSAT to listen to their customers, and saw their CSAT score drop dramatically.
They needed to get a handle on the quality of their customer service and, if possible, improve customer satisfaction with the entire brand and customer experience more broadly.
Identify The Source Of Bad Customer Satisfaction Scores
Without understanding why customers are giving you poor CSAT scores, it's near-impossible to make impactful changes to your support team. So, step 1 is to identify the source of bad CSAT scores. The best way to get to the bottom of it: start a customer service QA program.
Customer service quality assurance programs give you a holistic view of customer feedback and help you identify patterns across customer interactions. A dip in CSAT is a common impetus to start a customer service QA program (among many other reasons, like gaining a better understanding of agent performance).
The main element of a QA program is setting up a grading cadence for your support tickets. Usually, tickets are graded against a scorecard that ensures interactions follow company policies & procedures.
Grading all tickets with bad CSAT scores (or a sample of them, depending on your volume) is a great place to start. With the right customer service quality assurance software, you can then identify the common reasons for bad CSAT.
Reviewing customer satisfaction comments – potentially pages of qualitative, textual information – is most effective when the data is structured in the right kind of QA tool. If these comments and takeaways are confined to spreadsheets, trends are much harder to catch and very difficult to correlate with other service data, like the volume of inbound requests.
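If your helpdesk lets you export survey results, even a small script can help you build the grading queue. Here's a minimal sketch, assuming a hypothetical CSV export with ticket_id, csat_score (on a 1–5 scale), and comment columns; adjust the column names, threshold, and sample size to match your own tooling and volume:

```python
import pandas as pd

# Minimal sketch: pull low-rated tickets out of a survey export so they can be graded.
# Assumes a hypothetical CSV with ticket_id, csat_score (1-5), and comment columns --
# rename these to match whatever your helpdesk actually exports.
surveys = pd.read_csv("csat_export.csv")

# Treat 1s and 2s as "bad" CSAT; change the threshold to fit your own scale.
bad_csat = surveys[surveys["csat_score"] <= 2]

# Grade everything if volume allows; otherwise take a random sample for the QA queue.
sample_size = min(len(bad_csat), 200)
qa_queue = bad_csat.sample(n=sample_size, random_state=42)

qa_queue.to_csv("qa_grading_queue.csv", index=False)
print(f"{len(bad_csat)} low-rated tickets, {len(qa_queue)} queued for grading")
```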
Here’s how to think about setting up a QA program:
1. Initial qualitative review: You might be able to think of a handful of things driving customer dissatisfaction (DSAT) off the top of your head. Start by reading through CSAT reviews to gut check if your intuition is correct. You might be surprised to see some issues coming up that hadn’t occurred to you before. Based on what you see, create a checkbox for each of these issue groups.
2. One round of QA to hone your categories/checkboxes and QA scorecard questions: Going through a round of QA with these broad groups will systematically show you how often issues come up relative to each other. You might have expected every issue to have equal impact, when the data actually shows that one issue accounts for over 50% of DSAT (a simple tally, like the sketch below this list, makes that easy to spot).
3. Break issues into component parts: Next, break these larger issues into more granular pieces. If agent tone was the biggest problem, break it down further to see which components need the most coaching (sounding rude, robotic, or disinterested). The data may also show that some of the issues you predicted aren't really affecting DSAT; those categories can be removed so you can dive deeper into the most pressing ones.
Starting with broad categories allows you to hone the quality management scorecard based on what the data tells you, thereby decreasing the risk of your predictions influencing your results.
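To see how often issues are happening relative to each other, you can tally the checkbox results from that first round of QA. Here's a minimal sketch, assuming a hypothetical export of graded DSAT tickets where each checkbox is a true/false column; the column names below are placeholders for whatever issue groups you defined:

```python
import pandas as pd

# Minimal sketch: tally how often each checkbox was flagged across graded DSAT tickets.
# Assumes a hypothetical export where each checkbox is a boolean (or 0/1) column --
# swap in the issue groups you actually defined in your scorecard.
graded = pd.read_csv("graded_dsat_tickets.csv")

checkbox_columns = ["agent_tone", "high_customer_effort", "slow_resolution", "billing_policy"]

# Share of graded DSAT tickets on which each issue was flagged (tickets can have several).
issue_share = (graded[checkbox_columns].mean() * 100).sort_values(ascending=False)
for issue, share in issue_share.items():
    print(f"{issue}: flagged on {share:.1f}% of graded DSAT tickets")
```

A tally like this is what surfaces the "one issue accounts for over 50% of DSAT" pattern described above.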
Below is an example of a QA scorecard meant to tease out patterns in an actionable way. The scorecard first prompts the grader to identify if the issue was support-related or not, then uses a checkbox feature to tally reasons for bad CSAT, and gathers this granular insight across all graded tickets.
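To make that structure concrete, here's a rough sketch of how a scorecard like this could be laid out as data. The question wording and checkbox options are purely illustrative placeholders, not a prescribed template:

```python
# Illustrative sketch of a scorecard structured to tease out DSAT patterns.
# Question wording and checkbox labels are placeholders -- replace them with the
# issue groups your own qualitative review surfaced.
scorecard = {
    "name": "Bad CSAT deep-dive",
    "questions": [
        {
            "prompt": "Was the dissatisfaction support-related?",
            "type": "yes_no",
        },
        {
            "prompt": "If support-related, which issues contributed?",
            "type": "checkbox",  # graders can select multiple reasons per ticket
            "options": [
                "Agent tone (rude / robotic / disinterested)",
                "High customer effort",
                "Slow resolution",
                "Incorrect or incomplete answer",
            ],
        },
        {
            "prompt": "If not support-related, what was the driver?",
            "type": "checkbox",
            "options": ["Product bug", "Billing policy", "Shipping delay"],
        },
    ],
}
```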
Once you fully understand the support-related (and non-support-related) issues impacting customer satisfaction, you can work to make improvements on your team. Analyzing these patterns often leads to two things: increased training and coaching for agents, and company-wide policy changes (like updating billing policies or offering new support channels).
In our example, Alexei Inc identified high customer effort as one of the largest drivers of DSAT.
Use Quality Assurance And Customer Satisfaction Data To Inform Coaching Strategy
Once you’ve incorporated the reasons for bad CSAT into your scorecard, you can dig deeper into the problem, and start coaching explicitly for that behavior.
Alexei Inc added a question to their scorecard to evaluate whether the agent could’ve done anything to lower customer effort. After honing their checkboxes and breaking “customer effort” into its component parts, they identified that agents were asking for information that was already available in their CRM, and that this back-and-forth was annoying customers.
In this case, they had over-rotated: they were trying to respond as quickly as possible, even at the expense of making the customer provide information the company already knew. This small issue was really hurting CSAT scores.
They then used coaching sessions to help agents understand how to improve, and encouraged them to research historical conversations and context, even when they felt pressed for time.
They also uncovered some things outside of the agents’ control that were contributing to bad CSAT.
Use Quality Assurance And Customer Satisfaction Data To Inform Company-Wide Changes
The QA lead uncovered that customers were frustrated because they were having trouble connecting to an agent in the first place. Even if their interaction with the agent was overwhelmingly positive, their experience with the brand as a whole was not.
Again, they looked to their CSAT QA data and narrowed down the categories of customer pain until they landed on the most potent issue: customers on mobile devices were having a hard time figuring out how to reach support.
Fixing this issue required a broader, company-wide change. They worked with UX designers and product teams to change their mobile app to make it easier to reach support.
This change, along with their new coaching, helped Alexei Inc raise their CSAT by over 10%, helping them meet their goal of maintaining high quality while rapidly scaling.
Wrapping Up
This framework will help any team identify the issues causing bad customer satisfaction scores (both support-related and not), and enable brands to take steps toward improvement. If your team commits to coaching and implements the changes you identify, you should expect to see your CSAT go up.
While every company has different problems, this framework, along with the right QA tool, should help you root out your specific problems and solve them. This structured process for analyzing CSAT results will also give you what you need to bring key stakeholders in your company on board and implement the changes the data demands.
Curious to learn more about MaestroQA? Reach out to our team or request a demo today.