What is the foundation of a robust quality assurance program? It starts with well-defined CX standards: the characteristics and processes your customer-facing teams should adhere to and excel at in order to deliver a fantastic customer experience.
Two Key Workflows to Increase Your QA Grading Team’s Alignment
Quality assurance (QA) is crucial to ensuring that your agents maintain the standards set out by the organization, and it gives them opportunities to improve their performance. The CX scorecard is the cornerstone of any successful QA program, but one often-overlooked aspect of QA is whether your QA team is aligned in how they interpret the standards in the scorecard. If your rubric is perfect, but your graders interpret it differently and use different scoring strategies, how much can you trust the results of your quality program? What insights can you draw if your graders are not applying the same standards?
This article provides a framework for two proven workflows that help measure and improve alignment among your grading team!
There are two widely accepted workflows for measuring and improving alignment:
- Call Calibrations
- Grade the Grader (in the MaestroQA platform, we call this GraderQA)
Both of these workflows ensure your QA team or team leads are aligned in how they interpret the CX standards in the scorecard. As we dig deeper, however, you’ll see that they have stark differences and are intended to drive different results. Let’s define each process before deciding which workflow suits your team:
What is Call Calibration?
Call calibrations have your QA team grade agent performance on the same ticket: each team member grades (or calibrates) the interaction individually, using the same scorecard, before the group collaborates on the results.
Performing Call Calibration Sessions for Quality Assurance
After each member of the QA team has submitted their scores, the team holds a call calibration session to discuss why they landed on the scores they did. Your team will probably disagree on what the “right” grade was - and that’s okay! The purpose is to have a constructive conversation to collectively decide how to grade.
Once the call calibration process is complete, Maestro generates an “alignment score” showing how closely each QA team member’s grades match the final calibration score. The higher the alignment score, the more closely aligned that grader is with the agreed-upon CX standards.
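To make the metric concrete, here’s a minimal sketch of how an alignment score might be computed, assuming it’s simple percent agreement between a grader’s answers and the final calibration across scorecard questions. The graders, questions, and scores below are all hypothetical, and MaestroQA’s actual formula may weight questions differently:

```python
# Illustrative sketch only: assumes the alignment score is simple percent
# agreement with the final calibration. MaestroQA's actual formula may differ.

# Hypothetical final calibration: question -> points awarded
FINAL_CALIBRATION = {"greeting": 2, "accuracy": 0, "tone": 1, "resolution": 2}

# Hypothetical individual grades submitted before the calibration session
grader_answers = {
    "alice": {"greeting": 2, "accuracy": 0, "tone": 1, "resolution": 1},
    "bob":   {"greeting": 2, "accuracy": 1, "tone": 0, "resolution": 2},
}

def alignment_score(answers: dict, benchmark: dict) -> float:
    """Fraction of scorecard questions where the grader matched the benchmark."""
    matches = sum(answers[q] == benchmark[q] for q in benchmark)
    return matches / len(benchmark)

for grader, answers in grader_answers.items():
    print(f"{grader}: {alignment_score(answers, FINAL_CALIBRATION):.0%}")
# alice: 75%
# bob: 50%
```

However it’s calculated, tracking this number over time tells you whether your calibration sessions are actually closing the gap between graders.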
Tips
- Designate one person to be the final calibrator. There is a lot of back and forth during the call calibration process, so it’s important to have a single decision-maker who submits the Final Calibration. This is your source of truth to measure alignment against!
- As you become more aligned, select harder, more complex, or more ambiguous tickets so you can continue to drive value from these conversations and raise the bar.
- Consider recording the sessions to share within your CX organization to build a culture of transparency around how you think about and talk about quality.
Other benefits of Call Calibration sessions
In addition to improving team alignment, call calibration sessions can help refine training processes and surface issues with both soft skills and technical behaviors in customer interactions. Ensuring agents meet evaluation standards leads to a more consistent customer experience.
How often should my team do Call Calibrations?
If your quality program is new, we see teams go through the call calibration process as frequently as once a week! Once your call calibration process runs smoothly, and alignment is high, consider moving to once a month and focusing on more complex tickets. When you launch a new rubric, it’s a good idea to increase frequency again until all graders are comfortable and aligned with the new criteria.
MaestroQA Grade the Grader (aka GraderQA)
The key difference with this workflow is that it’s designed to provide feedback to an individual grader on their grading performance, rather than engaging the whole team in a discussion about the standards. As with calibrations, one member of the team is designated as the “source of truth” (in Maestro, we call this the Benchmark Grader). This should be your most experienced grader or perhaps your QA manager: whoever is the expert on your grading standards! The Benchmark Grader grades a ticket that another grader has already completed in AgentQA.
The goal is to give the original grader feedback on how they answered the questions and how they shared feedback with the agent. The Benchmark Grader’s work is then shared back with the original grader, so they can review the feedback and learn to grade more consistently with the CX standards your organization has set out.
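As a rough illustration of the comparison at the heart of this workflow, here’s a sketch that diffs the original grader’s answers against the Benchmark Grader’s, question by question. Every name and score here is hypothetical, and this is not MaestroQA’s implementation:

```python
# Illustrative sketch only: hypothetical scores; not MaestroQA's implementation.

# The original grader's answers on a ticket, and the Benchmark Grader's re-grade
original = {"greeting": 2, "accuracy": 1, "tone": 1, "resolution": 2}
benchmark = {"greeting": 2, "accuracy": 0, "tone": 1, "resolution": 1}

# Questions where the grader diverged from the benchmark become the
# coaching points shared back with the original grader.
divergences = {
    q: {"grader": original[q], "benchmark": benchmark[q]}
    for q in benchmark
    if original[q] != benchmark[q]
}

for question, scores in divergences.items():
    print(f"{question}: grader gave {scores['grader']}, "
          f"benchmark gave {scores['benchmark']}")
# accuracy: grader gave 1, benchmark gave 0
# resolution: grader gave 2, benchmark gave 1
```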
Tips
- Graders want feedback as much as your agents do; it helps them perform at a higher level and stay within your team's standards. If your team has the bandwidth, providing feedback on a few graded tickets twice a month should give graders confidence that they are grading consistently with the team standard.
- Focus on tickets where the grader left qualitative feedback for the agent (comments or additional checkbox selections). As the Benchmark Grader, you’ll have ample opportunity to share feedback with the grader and see how they deliver feedback to your agents.
How often should my team run GraderQA?
Similar to call calibrations, we typically see teams run GraderQA at least a few times a month to share regular feedback with their graders. Depending on internal bandwidth, it’s a good idea to deliver consistent feedback to your graders alongside the feedback shared with agents, so everyone can continue performing at the expected organizational standards.
Which workflow is right for me?
Ultimately, it depends on your ideal end state! Let’s walk through a few scenarios to help demonstrate the use cases of each:
- Your agents benefit from QA feedback, and you want to provide that same level of feedback to your graders to improve their grading skills.
Our Suggestion: MaestroQA GraderQA! This gives you the tools to give qualitative feedback to individual graders.
- You have a few graders who you think are underperforming. Maybe the scores they give are suspiciously high, or they take a long time to grade a single ticket and aren’t completing their weekly assignments. Any of these can be an indicator that a grader needs more help.
Our Suggestion: GraderQA! This gives you the tools to grade these graders’ tickets, uncover which criteria they are not grading correctly, and coach the grader toward success.
- You just launched a new rubric. The scoring strategy changed, and a few questions were added that focus on customer satisfaction.
Our Suggestion: Definitely Call Calibrations! They will allow you to see what questions graders have, which areas of the rubric are unclear, and perhaps even make changes before rolling the rubric out more broadly.
- You just hired a class of new graders. They are getting ramped on your rubric and processes.
Our Suggestion: Both Call Calibrations and GraderQA! Call Calibrations will be great to teach the new graders how you think about the rubric and work through customer problems. GraderQA will give you a specific alignment score, so you know how quickly your graders are ramping and what parts of the rubric they need coaching on.
Key Takeaway
GraderQA and Call Calibration workflows are valuable tools in your QA toolkit to unlock additional insights and better align your grading team. Whether you pursue just one of these workflows or implement both, the conversations and learnings they generate are invaluable, and they should help create a more robust experience for your customers and agents!
Interested in learning more about MaestroQA GraderQA?
Request a demo.