Completing your first QA scorecard is a major milestone - but once it’s finished, there’s more work to be done.
As you’ve probably realized by now, the best-performing QA programs are made up of more than just a standalone QA scorecard. Top QA programs also include a formalized grading cadence: deciding who should be grading tickets, agreeing on a fixed grading volume and frequency, determining how to grade (i.e., which scorecard to use), and setting up regular coaching conversations with your agents.
With these supporting factors in place, your team can go from simply having a QA program, to one that provides structured coaching for agents and trusted insights for managers.
The best part is that a grading cadence is really easy to establish. We’ll walk you through exactly how to set one up so that your agents receive timely, structured, and data-driven coaching in no time.
Deciding whether you should have dedicated QA specialists grade your tickets
After you build your quality assurance scorecard, you need to decide who’s going to use it to grade. For most teams, this decision boils down to three factors: size, human resources, and team maturity.
For most teams, it makes sense to have dedicated QA specialists to run the QA program and be responsible for grading. This allows managers to focus on the myriad other tasks (like hiring, coaching, and onboarding) that make up their average day. Dedicated QA specialists can give their undivided attention to grading agents, interpret the data produced, and provide trusted CX insights that can help improve CSAT, reduce churn, and maintain a positive brand reputation.
And - since they aren’t directly in the queue - they have a fresh, third-party perspective on the issues that pop up in grading (and aren’t afraid to point them out!).
We asked our CX network about their grader:agent ratios - we found that most teams have a ratio of roughly 1:20. We’ve included the exact breakdown below:
- 22% of teams have a grader:agent ratio of 1:10,
- 51% follow a ratio of 1:20, and
- 27% have a ratio of 1:40 or more.
In general, larger teams tend to have more agents per grader. As a team matures and builds a base of senior agents who require less grading, graders can shift their energy to newer agents.
But not all teams are large enough to need a QA specialist, and many don’t have one. The alternative is to have managers, team leads, or senior agents double up to do QA. It’s a great way to train senior agents in a new skillset and provide an opportunity for career progression. This is where team maturity comes into play - you need some established veterans on the team to help grade.
The downside to having non-QA specialists step in to grade? Team leads, managers, and senior agents are busy. Between onboarding, hiring, coaching, reporting, and answering tickets, QA can easily fall off their radar. When that happens, agents are the ones who ultimately lose out on performance feedback and career growth.
If you do choose to have team leads grade tickets and run the QA program, be sure to pick a grading volume that they can commit to, and that’s integrated into their goals or compensation. We’ll be covering that in the next section.
Deciding on your QA grading volume and frequency
While there are no hard-and-fast rules around how frequently you should grade, 60% of MaestroQA customers surveyed grade between 1% and 5% of all interactions. The other 40% grade upwards of 5% of all interactions, but these tend to be smaller companies with a dedicated QA specialist on the team. Some companies, like Handy, choose to grade only DSAT tickets - tickets where the customer indicated unhappiness with the way their issue was handled.
Three main factors determine how frequently you can grade: your team’s headcount, the average difficulty of a ticket, and the team’s ratio of junior to senior agents.
CX Team Resources
Do you have dedicated QA specialists, or are your senior agents and team leads pulling double duty to grade? As we discussed in the previous section, there are pros and cons to both. If you’re relying on team leads to handle QA, you might want to lower your grading volume, since they typically have a dozen other tasks to handle on top of QA.
With dedicated QA specialists, it’s easy to work out how much you can realistically QA.
For example: imagine a team of 10 agents with 1 grader. The team handles 5000 tickets per week, and you’d like to grade 5% of tickets.
5% of 5,000 tickets = 250, meaning you’d have to grade 250 tickets per week (or 50 per day, assuming a five-day week) to hit the goal. This approach scales with your team - as ticket volume grows, you can add more graders to keep hitting that 5% target rate.
Take an iterative approach to grading no matter how you do it. In this example, we’d encourage the team to try grading 50 tickets a day for one week and see how it goes. If the grader can handle more, tack more onto the goal; if 50 proves to be too much, consider setting the bar a bit lower.
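If you want to sanity-check the arithmetic for your own team, here’s a minimal Python sketch of the same calculation. The function name and the five-day-week default are our own illustrative assumptions, not part of any particular QA tool.

```python
# Minimal sketch of the grading-volume math from the example above.
# All names and defaults (including the 5-day work week) are illustrative.

def grading_targets(tickets_per_week: int, target_rate: float,
                    graders: int, workdays: int = 5) -> dict:
    """How many tickets each grader must review to hit the target rate."""
    weekly_total = round(tickets_per_week * target_rate)
    per_grader_week = weekly_total / graders
    per_grader_day = per_grader_week / workdays
    return {
        "weekly_total": weekly_total,
        "per_grader_per_week": per_grader_week,
        "per_grader_per_day": per_grader_day,
    }

# The example team: 5,000 tickets/week, a 5% target, and 1 grader.
print(grading_targets(5000, 0.05, 1))
# -> {'weekly_total': 250, 'per_grader_per_week': 250.0, 'per_grader_per_day': 50.0}
```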
Average CX ticket difficulty
If the majority of tickets in the queue are easy to handle and your agents knock them out of the park on a consistent basis, your QA program might not be giving you the insights you truly need. Put differently: if you’re spending time grading “easy” tickets, are you actually catching the more “difficult”, error-prone tickets that require deeper agent knowledge?
If you randomly sample tickets for grading, it’s easy to see how the harder tickets (and the ones you should really be focusing on!) get lost in an avalanche of easy tickets.
Random ticket sampling isn’t the only option, though. Handy and WP Engine have come up with two innovative ways to surface and tackle tough tickets.
The team at Handy now grades only DSAT tickets - tickets that either have a negative CSAT score from the customer survey, or those that are flagged by agents because they struggled with the interaction and want a review. This allows the tickets with the most opportunity for learning to float to the top - leaving Handy with the best possible insights to improve their training and CX programs.
WP Engine has a hybrid program - every DSAT ticket is routed to grading, while a separate QA instance randomly samples tickets from the rest of the queue. This gives them the same benefits as Handy’s program, while still keeping tabs on the overall performance of the CX program.
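To make the hybrid idea concrete, here’s a rough Python sketch of how a grading queue in the spirit of WP Engine’s program might be assembled. The ticket fields (csat, agent_flagged) and the 5% sample rate are hypothetical placeholders, not details of either company’s actual setup.

```python
import random

# Hypothetical hybrid selection: grade every DSAT ticket, plus a random
# sample of the remainder. Ticket fields here are illustrative only.

def build_grading_queue(tickets: list[dict], sample_rate: float = 0.05,
                        seed: int | None = None) -> list[dict]:
    rng = random.Random(seed)
    dsat, rest = [], []
    for ticket in tickets:
        if ticket.get("csat") == "negative" or ticket.get("agent_flagged"):
            dsat.append(ticket)   # always graded
        else:
            rest.append(ticket)   # eligible for random sampling
    sampled = rng.sample(rest, k=round(len(rest) * sample_rate))
    return dsat + sampled

tickets = [
    {"id": 1, "csat": "negative", "agent_flagged": False},
    {"id": 2, "csat": "positive", "agent_flagged": True},
] + [{"id": i, "csat": "positive", "agent_flagged": False} for i in range(3, 103)]

queue = build_grading_queue(tickets, seed=42)
print(len(queue))  # 2 DSAT/flagged tickets + 5 randomly sampled (5% of 100)
```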
Agent Seniority
As agents gain experience in their role, their customer interactions get smoother. Most CX teams have QA data that backs this up: QA scores generally rise with tenure.
Set a threshold score for your team (say, 85/100), and consider lowering the grading volume and frequency for the more senior agents on your team who have consistently maintained a QA score above 85. This will allow you to focus more time and effort on newcomers, while building trust with senior agents.
You could even take it one step further and involve your senior agents in setting the threshold score. This gives them ownership over part of their coaching program and provides new opportunities for growth and learning.
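As a purely illustrative sketch, the threshold logic might look like this in code. The 85-point cutoff, the ten-grade window, and the weekly review counts are all assumptions for your team to tune (ideally with your senior agents’ input, as suggested above).

```python
# Illustrative sketch: lighten the grading cadence for agents who have
# consistently stayed above an agreed threshold. All numbers are examples.

THRESHOLD = 85        # agreed-upon quality bar (out of 100)
WINDOW = 10           # recent grades an agent must stay above the bar

def weekly_reviews(recent_scores: list[float],
                   default: int = 5, reduced: int = 2) -> int:
    """Return how many of this agent's tickets to grade next week."""
    window = recent_scores[-WINDOW:]
    if len(window) == WINDOW and min(window) >= THRESHOLD:
        return reduced   # consistent performer: lighter cadence
    return default       # newer or less consistent agent: full cadence

print(weekly_reviews([88, 90, 86, 92, 87, 89, 91, 85, 93, 90]))  # -> 2
print(weekly_reviews([70, 88, 90]))                               # -> 5
```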
Relaying QA results to the CX team
There are three main ways to relay your hard-earned QA scores to your team: coaching sessions, email notifications, and team meetings.
Some QA platforms allow agents to receive their scores by email as soon as the grader submits the grade. This allows for real-time feedback that the agent can apply in the queue right away, while keeping them engaged with the QA program.
Agents shouldn’t be left to receive and interpret their QA scores on their own, however. This method should be paired with regularly scheduled coaching sessions.
In these sessions, managers can help analyze an agent’s long-term results and provide qualitative feedback to help them improve. More importantly - the numbers don’t care about an agent’s feelings, but a manager does. Coaches can reframe QA results and put them in context: a below-average score could be due to a new product launch or a one-off event, not a long-term trend of poor performance.
Finally, use team meetings to dig into team-level trends in the QA data and nip emerging problems in the bud. Mailchimp uses a team meeting combined with a QA newsletter for this purpose - and they always see a positive spike in QA scores for the section of the scorecard being highlighted.
The Benefits of Setting Up a Grading Cadence
Scorecards are the central pillar of every QA program - but they can’t hold the roof up alone. After finishing your first scorecard, invest time into setting up the processes that will enable consistent grading and structured coaching. The strategies we’ve outlined here will allow you to benchmark performance, make data-driven decisions, and ultimately improve the experience you’re offering customers.