Companies in the early phases of building a customer service quality assurance (QA) process often begin by creating a QA scorecard that focuses on agents’ technical skills. They want to make sure all of their interactions are PII compliant and in line with company policy.
But imagine that a customer calls into support because a product they’ve recently purchased just broke. If the quality review scorecard focuses on technicalities and compliance, the agent might be graded on whether or not they tagged that interaction as “broken product,” but not at all on how that interaction made the customer feel, or how it impacted Customer Satisfaction (both of which are important, I think we agree).
We looked at 150k customer interactions to quantify the relationship between agent actions and Customer Satisfaction (CSAT).
We found that customers only care about the technical components of their support interaction that help them get a resolution. The rest of the technical QA metrics can be important from a managerial perspective, but they have little impact on CSAT.
The chart below highlights these two types of technical actions: actions the customer can see, and actions the customer is oblivious to.
Actions that customers are oblivious to are only weakly correlated with CSAT scores, whereas the actions that help them get a resolution are correlated with CSAT scores. That’s why it’s important to conceptually split technical actions into two groups as you build out your QA program: CSAT-focused and compliance-focused 👇
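If you want to sanity-check this split against your own data, here’s a minimal sketch of the kind of correlation analysis described above. The column names and the two action groupings are hypothetical placeholders – swap in whatever action tags your QA tool actually records alongside the post-interaction survey score.

```python
import pandas as pd

# Hypothetical action tags -- substitute the flags your QA tool actually exports.
# Each flag is assumed to be 0/1 per interaction; `csat` is the survey score.
CSAT_FOCUSED = ["resolved_on_first_contact", "explained_next_steps", "offered_replacement"]
COMPLIANCE_FOCUSED = ["tagged_contact_reason", "verified_identity", "used_approved_macro"]


def action_csat_correlations(df: pd.DataFrame) -> pd.Series:
    """Pearson correlation between each binary action flag and the CSAT score."""
    actions = CSAT_FOCUSED + COMPLIANCE_FOCUSED
    return df[actions].corrwith(df["csat"]).sort_values(ascending=False)


if __name__ == "__main__":
    interactions = pd.read_csv("interactions.csv")  # one row per reviewed interaction
    print(action_csat_correlations(interactions))
    # If the pattern above holds, customer-visible, resolution-oriented actions
    # should land near the top and compliance-only actions should cluster near zero.
```

Even a rough pass like this makes it obvious which scorecard items are doing CSAT work and which are there purely for compliance.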
When it comes down to it, managers and support teams need to monitor both types of actions. The key is to strike a balance between the two – keeping your support team running smoothly while still making sure your customers are happy.