Quality Assurance

21 Key Customer Experience Definitions for QA Professionals

June 29, 2020

Customer Experience is a highly complex and ever-evolving industry, and nowhere is that more apparent than in the vast number of terms and jargon that make up the CX alphabet soup - QA, CSAT, NPS, and LMS - just to name a few.

In a growing industry like ours, it gets even more confusing when people with different definitions come together to talk about CX - what do you really mean when you say you do calibrations once a month?

To get everyone drinking from the same bowl of soup (we’re really stretching the analogy here), we’ve put together a glossary of the key terms used in the worlds of CX and Quality Assurance. It’s a pretty long list, so we loosely organized the terms into four different buckets: key QA processes, industry-specific terms, metrics, and other concepts you need to know.

Basic QA Processes

Quality Assurance 

Also known as: Quality Control, Grading

Quality Assurance (QA) is the process by which CX teams ensure that their interactions with customers meet their organization’s standards. While these standards differ widely from company to company, they typically have the 4C’s in mind when establishing a QA program: communication skills, customer connection, compliance and security, as well as correct and complete content.

The main process in QA is grading. Grading happens when a QA specialist or team leader randomly selects tickets from an agent’s queue and evaluates if they meet (or don’t meet) the company’s quality standards (aka the 4C’s). The terms “grading” and “QA” are often used interchangeably.

Onboarding

Also known as: new hire training, induction

Onboarding is the training process that teaches newly-hired agents everything they need to know about interacting with customers. This usually includes product training and knowledge, basic troubleshooting, as well as CX tone-of-voice training. The length of onboarding programs may vary from company to company, but they typically run from 2-8 weeks.

QA features prominently in the onboarding process, and frequent QA early on can help new employees understand what makes (or breaks) a response. New employees are usually introduced to a sandbox environment where they can apply what they’ve learned without fear of real-world ramifications if something goes wrong. Senior agents (or onboarding specialists) grade these test tickets, deliver feedback, reinforce learnings, and identify areas for improvement. An alternative approach: instead of a sandbox environment, some teams have agents answer actual tickets early on, but QA them heavily.

Training

Also known as: uptraining, upleveling

Related: Learning Management Systems (LMS), Learning and Development (L&D), Knowledge Base (KB)



Training is the process in which an agent is assigned learning material or coaching based on areas of improvement that have been identified through QA. It is also the last part in our “Classic Loop” that describes how QA impacts Customer Service training.

Grading allows teams to identify areas that require improvement, and assign targeted training materials. These materials are usually hosted on a Learning Management System (LMS) that tracks an agent’s progress and serves as a knowledge base for the team. 

To close the loop, agents are then graded again in a subsequent round of QA, and their improvement over time can be tracked in a quantitative manner.

At some companies, a full Learning and Development (L&D) team exists to keep the knowledge base up-to-date, create new training when necessary, and run onboardings for new hires. These teams thrive on QA data! It helps them to evaluate the efficacy of their programs and identify gaps in their training or knowledge base that require further improvement.

Coaching and Appeals

Coaching is the process in which an agent receives feedback from a grader, manager, or peer. These coaching sessions are typically conducted on a 1:1 basis, and feedback is given based on the agent’s QA data.

Modern QA solutions allow managers to spend more time on 1:1 coaching sessions with agents instead of grading. Rather than pinpointing individual errors in tickets, most systems allow managers to coach using a much larger dataset.

Over time, coaching has evolved into a more inclusive and democratic process. One such improvement is the introduction of appeals to the coaching process. An appeal is the process by which an agent challenges the grade given to them, usually based on extenuating circumstances that were not taken into consideration.

During the appeals process, agents share their side of the story and a grader reevaluates the given grade. This helps build trust amongst agents and managers, and helps with the agent experience (see below!).

Calibrations

The nature of customer service means that some parts of the grading process are subjective. Graders A and B might give scores that differ by just 1 point for an agent’s tone in a chat interaction. On a scorecard that grades on a 5-point scale, that single point can mean up to a 20% difference in the agent’s QA score 🤯

Calibrations aim to reduce that subjectivity by having graders grade the same ticket separately, then come together to discuss the score the ticket should have received. Some QA tools automatically assign tickets for calibration to ensure that graders stay in line with agreed-upon grading standards.
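
To make the 20% swing above concrete, here’s a minimal sketch of how a team might flag tickets for a calibration session - the data shape and threshold are assumptions for illustration, not any particular QA tool’s API:

```python
# Illustrative only: flag tickets where grader scores diverge enough to discuss.
# A 1-point gap on a 5-point scale already equals a 20% swing in the QA score.

CALIBRATION_THRESHOLD = 1  # maximum acceptable spread, in points

def needs_discussion(scores_by_grader):
    """Return True when the gap between the highest and lowest grade exceeds the threshold."""
    scores = list(scores_by_grader.values())
    return max(scores) - min(scores) > CALIBRATION_THRESHOLD

ticket_scores = {
    "ticket_101": {"grader_a": 4, "grader_b": 3},  # 1-point gap: within tolerance
    "ticket_102": {"grader_a": 5, "grader_b": 2},  # 3-point gap: bring to calibration
}

for ticket_id, scores in ticket_scores.items():
    if needs_discussion(scores):
        print(f"{ticket_id}: discuss at the next calibration session {scores}")
```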

If you want to learn more about calibration, this panel with Stitch Fix dives deep into how they implemented a best-in-class calibration program.


Other QA jargon:

Scorecard/Rubric:

Scorecards (or rubrics) are the backbone of every QA program - they provide a tangible, consistent way to grade the quality of an interaction.

Scorecards used to be built by the CX team on a spreadsheet (just imagine an Excel file tracking hundreds of agents over thousands of interactions, and the associated anxiety 🥺). But these days, scorecards often reside in QA platforms that are fully customizable, can automatically (and randomly!) pick tickets for grading, and report on long-term QA data on both the agent and team level. Graders have reported a 10x increase in tickets graded when moving from spreadsheets to QA software. 

If you’re looking to create your first scorecard, this guide will help you get started. If you’re a seasoned QA pro hoping to level up your scorecards, check out our guide to call center quality monitoring scorecards, which covers the topic in more depth.

Touchpoints/Interactions:

Touchpoints and interactions refer to every point of contact that a company has with a customer. In CX, that would refer to every customer support action logged with a customer in the CRM, regardless of channel (phone, chat, email, or social media). 

Voice of Customer (VoC):

VoC refers to the process of capturing customers’ expectations, preferences, and aversions. If you’re curious about why people are writing in, and what is causing them to have negative experiences with your company, VoC programs can help.

CX teams are uniquely positioned to capture these insights, and QA programs can ensure this data is captured consistently. Product and backend teams use VoC data to plan their product roadmaps and engineering sprints to ensure that the product meets the evolving needs of the customer.

PII/PHI:

Personally Identifiable Information (PII) - or Protected Health Information (PHI) in the healthcare space - refers to any data or personal information that can be used to identify a specific individual. This ranges from addresses and birthdays to Social Security numbers.

Most QA scorecards are built with PII compliance in mind, because the legal and reputational ramifications of not protecting PII can be extremely damaging to the company.

Automation/assignments:

Modern QA software has the added benefit of automatically assigning tickets to graders, ensuring the right ticket is graded at the right time.

For example, if your team wants to grade all DSAT tickets (we talk about DSAT in the next section!), and a random sample of 5 normal tickets per agent, automations assign those tickets to graders seamlessly.
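
Here’s a rough sketch of that kind of rule, assuming tickets are simple records with an agent and a CSAT outcome - the field names and sample size are hypothetical, not a vendor schema:

```python
import random

def select_for_grading(tickets, sample_size=5):
    """Queue every DSAT ticket, plus a random sample of other tickets per agent."""
    dsat = [t for t in tickets if t["csat"] == "negative"]

    # Group the remaining tickets by agent so each agent gets a fair sample.
    by_agent = {}
    for t in tickets:
        if t["csat"] != "negative":
            by_agent.setdefault(t["agent"], []).append(t)

    sampled = []
    for agent_tickets in by_agent.values():
        sampled.extend(random.sample(agent_tickets, min(sample_size, len(agent_tickets))))

    return dsat + sampled

# Hypothetical usage: feed the selection into graders' queues.
tickets = [
    {"id": 1, "agent": "sam", "csat": "positive"},
    {"id": 2, "agent": "sam", "csat": "negative"},
    {"id": 3, "agent": "alex", "csat": "none"},
]
print(select_for_grading(tickets, sample_size=1))
```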

These automations have become more powerful over time, allowing CX management to specify trends and patterns that might be problematic in the future (and nip them in the bud!). For growing teams, automations also ensure that each agent is graded at the frequency that their experience and performance requires - you’d grade a two-year agent who consistently receives 90+ QA scores a lot less frequently than a new hire.

Grade-the-grader:

Also known as: grader QA

Quis custodiet ipsos custodes? (who will guard the guards themselves?) is probably the only Latin phrase I know, and is the basis of grade-the-grader, or grader QA. 

In this process, graders are scrutinized to ensure that they meet the agreed-upon standard of grading that was established during calibration. Grader QA can also help managers report on the efficiency and accuracy of their graders based on the number of tickets graded and appeals they receive. 

All very meta, I know.

Metrics

Here’s where we open up the real can of … Campbell’s Alphabet Soup. Metrics are the direct output of many CX programs, so defining them is essential to ensuring we’re comparing apples to apples.

Average Handle Time (AHT):
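
AHT is usually calculated along these lines - the exact components (talk or reply time, hold time, follow-up work) vary by team and channel, so treat this as the common formulation rather than a universal standard:

$$\text{AHT} = \frac{\text{total talk time} + \text{total hold time} + \text{total follow-up work}}{\text{number of tickets handled}}$$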

If you skimmed past the fancy equation, Average Handle Time is simply the average amount of time an agent spends handling a ticket, from the moment they pick it up to the moment they finish the follow-up tasks required to complete the customer interaction.

AHT shouldn’t be taken as a success metric - you don’t want agents to rush to close tickets in order to keep their AHT low (since some customers and issues need more time than others). 

Rather, AHT can be used for assessing the efficiency of the CX operation as a whole - which lets CX management establish performance benchmarks for new agents and inform decisions around team staffing levels.

If you wanted a benchmark to compare your team against, Call Center Helper Magazine published a study finding that the industry standard for AHT is just over 6 minutes, but keep in mind that they also found a wide variance between industries, so your mileage may vary.

First Call Resolution rate (FCR):

FCR is the percentage of contacts that are resolved on the first interaction with the customer. FCR rates give CX leaders a good indication of customer satisfaction (CSAT), because no one likes having to reach out again with the same unresolved issue!
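
As a formula (assuming your ticketing system can tell you which tickets were resolved without a reopen or follow-up contact):

$$\text{FCR} = \frac{\text{tickets resolved on the first interaction}}{\text{total tickets resolved}} \times 100\%$$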

Ticket Volume:

The number of tickets entering the queue, or the number of customer interactions initiated, over a given period of time. Teams usually look at ticket volume on a weekly or monthly basis.

Tickets per hour:

This metric gives an indication of how efficiently an agent deals with tickets in the queue - as in, how many tickets they can resolve each hour. A falling tickets-per-hour metric should not be cause for alarm, though. QA data and notes taken by graders usually show the bigger picture - in a lot of cases, the agent was taking the time to properly walk the customer through the steps to resolution.

Tickets per #X active users:

Tickets per #X active users gives teams an idea of how they are doing with regard to customer education. For a company that’s rapidly growing, ticket volumes are naturally going to go up. But if this metric is trending downward, it likely means the team is doing well with customer education and preventing issues from becoming tickets.
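
As a quick illustration with made-up numbers, a month with 1,500 tickets and 600,000 active users works out to:

$$\frac{1{,}500 \text{ tickets}}{600{,}000 \text{ active users}} \times 1{,}000 = 2.5 \text{ tickets per 1,000 active users}$$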

This is a great metric to watch for teams experimenting with self-serve CX (where customers are shown a menu of support articles before a chat with an agent can be requested), or teams implementing a product marketing campaign.

The GetUpside team tracks Tickets per 1000 Active Users to understand how their CX team is doing in terms of FCR and customer education. Read their case study here.

CSAT/DSAT/UNSAT/BadSat

Customer satisfaction scores (CSAT) and their siblings DSAT and UNSAT (dissatisfied or unsatisfied rates) are a mixed bag when it comes to QA.

Why? CSAT scores are usually measured through a customer survey at the end of an interaction, often on a binary scale (thumbs up/down) or a 5-point scale. As with most optional surveys, the data tends to show a little bias. Just think - are you more likely to answer a survey if the interaction:

  1. Exceeded expectations
  2. Went as expected
  3. Was worse than expected

Chances are, interactions that go as expected (scenario 2) don’t usually result in surveys submitted, meaning CSAT doesn’t usually show the whole picture of an agent’s performance.
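
For reference, CSAT is typically calculated only from the surveys that actually come back - which is exactly why the response bias above matters:

$$\text{CSAT} = \frac{\text{satisfied responses (e.g. thumbs up, or 4 or 5 on a 5-point scale)}}{\text{total survey responses}} \times 100\%$$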

Here’s another example: a customer wants a refund for a product they’ve bought, but they don’t meet the criteria for a refund/return. The agent follows policy to a tee, yet the customer is still disappointed they didn’t receive their refund. QA and CSAT metrics will disagree here - the agent would probably score well on QA for following policy, but receive a poor customer satisfaction score.

Despite these potential shortcomings as an individual agent performance metric, CSAT scores are still important in customer-centric organizations. Without a doubt, CSAT is a barometer of the success of a company’s customer experience program and a key indicator of whether things need improvement - but teams shouldn’t rely on it to tell the entire story behind an agent’s performance.

QA score

QA scores are the direct output of a QA program. These scores are usually given as a percentage, or out of 100 possible points.
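
As a rough sketch of how such a score might be rolled up - the section names and equal weights here are entirely hypothetical, since every scorecard is different:

```python
# Hypothetical scorecard roll-up: sections, weights, and points are illustrative only.
SECTION_WEIGHTS = {
    "communication_skills": 0.25,
    "customer_connection": 0.25,
    "compliance_and_security": 0.25,
    "correct_and_complete_content": 0.25,
}

def qa_score(points_earned, points_possible):
    """Combine per-section results into a single percentage score."""
    total = 0.0
    for section, weight in SECTION_WEIGHTS.items():
        total += weight * (points_earned[section] / points_possible[section])
    return round(total * 100, 1)

earned = {"communication_skills": 4, "customer_connection": 5,
          "compliance_and_security": 5, "correct_and_complete_content": 3}
possible = {section: 5 for section in SECTION_WEIGHTS}

print(qa_score(earned, possible))  # -> 85.0
```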

As we said earlier, scorecards are completely customizable, which means it’s difficult to compare QA scores across different companies.

However, QA scores are a powerful snapshot of how an agent is performing relative to what their company has defined their QA standards to be. QA scores chart an agent’s progress and growth over time, and the individual components that make up the score can be used to assign targeted training where needed.

Trends in Customer Experience Quality Assurance

Agent Experience

Agent Experience refers to the holistic view of how empowered, efficient, and effective your agents are. Simply put, happy agents = happy customers!

This trend was best described by Bonni Poch, CX Training Manager at Staples, as “moving from catching to coaching” in our webinar, Why Fortune 500 Companies are Replacing Legacy CX Systems with Zendesk and MaestroQA.

CX teams have caught on to the fact that a positive agent experience generally leads to better customer interactions. QA has evolved as a result, focusing more on empowering agents and giving them the leeway to make judgment calls on how to best help customers, rather than ensuring they follow a script.

Omnichannel CX

The shift to omnichannel CX refers to the practice of offering multiple support channels through which a customer can reach the CX team, all managed within the same dashboard. Most CX solution providers, like Zendesk, Talkdesk, and Kustomer, enable you to meet your customers where they are, be it on phone, chat, social media, or email.

This trend also allows more CX self-service than previously possible, thanks to the advent of CX chatbots and more user-friendly support pages. The rise of Omnichannel CX has also led to the increase in importance of ...

Integrations

As support channels multiply, teams are building increasingly complex CX tech stacks to support their agents. As a result, the quantity and quality of available CX software integrations are an important consideration when selecting a QA tool.

Compliance

Not a new trend, but all the more crucial as more privacy laws spring up around the world (think CCPA, GDPR, HIPAA, and FERPA). With these developments, CX teams are held to increasingly higher standards, and compliance is a catchall term that describes a company’s efforts to maintain those standards, whether legal or policy-based.


We hope this article helped - tweet at us @MaestroQA if there are other terms you think we should include!


For a better understanding of the state of QA, including what the latest trends and metrics to watch are, look no further than our annual conference, The Art of Conversation. All panels with our guest speakers from leading CX teams like Zendesk, Mailchimp and Peloton can be requested here.
