Increase in CSAT: 30%
Decrease in agent ramp time: 120%
Increase in monthly coaching sessions: 300%
FabFitFun believes that women should have easy access to products that enable them to live good and empowered lives (and that it shouldn’t cost a fortune). They help customers achieve this by sending a quarterly subscription box filled with the latest full-sized beauty, health and wellness, fashion, and home products.
The company also believes that if a customer needs help with something, they should have an incredible customer support experience. To FabFitFun, the “Customer is Queen.” And this has led to some innovative and creative solutions in managing their large (and growing) support team.
We recently sat down with Caitlin Logan, Senior Director of Customer Experience at FabFitFun, to talk about the metrics she uses to manage her 200-person team of customer support agents. The following is a Q&A from that conversation:
What is FabFitFun?
FabFitFun is a quarterly subscription box. It's designed to make members feel like they are really treating themselves, even though it’s a recurring event. Members get a box of full-sized beauty, fashion, fitness, wellness, and home tech products specially curated for each season.
Our membership has really started to evolve into a well-rounded lifestyle brand. So as part of the membership, customers also get access to FabFitFun TV, where they can get daily workout videos, among other things. We also have an online community forum where members can interact and talk about anything that might interest them. Additionally, we have heavily discounted flash sales that happen eight times a year. So there's a lot of different things wrapped up in this membership outside of the physical box itself.
How does your company think about customer experience, and what sort of requests are you fielding?
The tagline "Customer is Queen" is really special to me because not only is it something that I've chosen a career in, but it's actually a value of FabFitFun. It's something that even our C-Level executives are making decisions around – treating the customer like she is our queen, and constantly asking how we can make her feel extra happy all the time. So that's something that's just at the forefront of everything that I do every day, and that the company focuses on every day.
Some common inquiries the customer support team receives are around how the membership works, when boxes will be delivered, how to track packages, and shipping timelines. We also field a lot of cancellation requests, so when someone is trying to discontinue their membership, we educate them on the benefits of being a member. And of course, if there's anything wrong with their shipment or their member experience, we're there to make things right for them too.
Can you give us a little bit of background around your time at FabFitFun and how the team has changed since you got there?
I joined FabFitFun in the spring of 2016. When I first joined, the team was really, really small, and the company was significantly smaller as well. We weren't very well known. We had about five agents in the US at our LA office, and we had 15 agents that were contracted through a company called upwork.com.
There was very little reporting at the time and little structure. But the team still did a great job assisting our members. Now fast forward two and a half years, and our team has opened up an office in the Philippines with an outsourced provider. We now have over 200 people on our customer service team. So it's grown into a much bigger operation. And we've been scaling very consistently since I joined. The team has grown 700% since I first started at FabFitFun. It’s crazy!
As I'm looking out at the team right now, there's just a sea of people with their headphones on, helping our members out. It's so crazy to think that two and a half years ago there was just one desk.
Can you talk about the challenges associated with a growing, remote team?
Both growth and a remote workforce definitely create challenges for us in maintaining a consistent customer experience. We've been consistently adding 10 agents per month, and per season that rounds up to 20+. So the challenge is making sure every single interaction that comes in from a customer is positive, gives the correct information, and educates the member in line with our brand.
So making sure that everyone is trained and has the tools, energy, and motivation to give our members the experience that we want them to have is a challenge with a growing team. People being remote is definitely a huge one too – I travel out here a lot, but I don't live here, so managing the team from afar is another big challenge for us.
Can you talk a little bit about any specific customer base challenges that you have?
FabFitFun members are very passionate and they're very eager. We build a lot of hype around our seasonal boxes so that customers get excited. But then, if something doesn't quite go their way, or they're confused, or there's something going on in their experience that isn't quite right, they contact us and they are very passionate about it. So we have very eager and passionate customers (which is a positive in many ways).
But then we have an online community forum with a discussion board. You have to be a member to join it, but that's still hundreds of thousands of members who have access to this one forum, and anyone can post anything about their experience that they want. So if a customer reaches out because something's damaged, and one agent offers that member a credit for the damaged item while another agent offers a different customer a replacement product, we're not being consistent. Those customers might go to the community forum and tell each other about their experience, and one might feel they got a lesser experience.
Then that member might ask support why they didn't get the credit. So it's really, really important that our customer service team has consistent responses across all channels. Every agent needs to be trained and knowledgeable in handling every situation the same way. Otherwise it's going to backfire, and people are going to know about it.
But you have come up with some really interesting ways of handling these challenges. Can you tell us about that?
First and foremost, I would say we are extra hands-on with the team overseas. Without really reviewing and inspecting the work against what we want them to do, and then giving them feedback, they aren't going to know what we want the experience to be.
So it's taken a lot of time, and I’ve probably taken ten trips out here (I've really lost count), but it's just so important that I am here and that I work directly with my leaders out here. My team leads, my QAs, and everyone else are really helping to make our support consistent.
So just being super involved, both in this way and through reporting, is how I see what's going on out here and better manage the remote team. Having regular reports sent over to us, and training the leads here in person to review and manage those reports so that they can make educated decisions and give feedback to the agents, are the main ways we try to solve the challenge of scale and dispersal.
What are some important metrics that you use to measure the team that's helped drive success?
So productivity is a huge one. We want to know how many tickets the agents are handling per channel. We have live chat, voice, and email. So we want to know the number of tickets solved, SLA or response times per channel, and how long our customers are having to wait before they get a resolution from us.
Other big ones are attendance and how much time agents are spending away from their seats. We also have something called "save rate": anytime someone calls to cancel, we attempt to educate them about the membership to maybe save them from canceling. So we track the save rate percentage too, just to see how the team is helping with retention (which we track through our QA process).
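For clarity, save rate here is simply the share of cancellation requests that end without a cancellation. Below is a minimal sketch of that calculation; the function and argument names are hypothetical, not FabFitFun's actual tooling:

```python
def save_rate(saved_members: int, cancellation_requests: int) -> float:
    """Percentage of cancellation requests where the member was retained.

    A hypothetical helper illustrating the metric described above.
    """
    if cancellation_requests == 0:
        return 0.0
    return 100 * saved_members / cancellation_requests


print(save_rate(saved_members=40, cancellation_requests=100))  # 40.0
```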
Aside from productivity and response times, we want to measure things that indicate that we’re helping our members and creating member satisfaction. So we really measure the CSAT rating and the QA score – the satisfaction rating to see how happy customers are, and the QA score to measure the quality of individual responses.
What does quality mean to your team?
Quality for us is twofold: are we delivering the right experience to the customer, and does the response really speak to the FabFitFun tone. So on the technical side, are we using our tools correctly and giving them the correct information, and on the soft skills side, are we going above and beyond to make their experience better? We don't just want to respond to answer the question, we want to do even more and we want to put a smile on the customer's face. All of that is just so important and that's what quality means to us.
Can you also tell us how long you have been QAing, how you started QAing and how you got to where you are today?
So when I joined two and a half years ago, there really was no QA program. The program kind of consisted of, "someone's really mad about our response, how do we react to that?" As we started growing, I developed a program using Excel spreadsheets: we would pull random tickets, read through them, and score them on grammar, punctuation, whether the information was correct, and whether the tools were handled properly.
So there was a manual score sheet, and we would give agents QA scores, but that took so much more time and it wasn't automated. Then we'd have to sit down and pull up all the tickets again to be able to give the feedback to the agent.
Fast forward, I hired my first QA person over here in the Philippines and worked really closely with her to develop scorecards and automate the whole process with MaestroQA.
MaestroQA makes quality assurance something that isn't so manual and we can efficiently use it to give feedback to our team.
I think a lot of people can relate to the spreadsheet pains that you guys felt for a while.
Oh yeah. The spreadsheets were really, really painful, but it still was really important to be able to quantify quality somehow. But it's just so much nicer now.
Can you tell us how you decided which people to promote into QA positions, as well as how your team is structured generally with QA leads, team leads, and agents?
All of my promotions go through a formal interview process. So once we decide that it's time to add another QA person, we open up the position and allow agents to apply for it so that they can experience growth on the team. Then we go through the interview process and look at what their performance has been like as an agent. We look at productivity scores, quality scores, and CSAT scores, because we can't promote anyone with low satisfaction or low quality scores when they'll be grading quality in the future. So we really need our very top agents to be filling those positions.
One awesome thing is that my very first QA is actually now my QA lead, and she leads a team of 10 QAs below her. I think it's really important that we have enough QAs on the team that they have time to consistently and thoroughly read through the tickets and grade them and give the team a thorough QA score. If I didn't have one QA for every 20 agents, I don't think we'd be looking at enough tickets to give us an actual overview of what is happening in terms of quality.
Can you go into the workflow associated with your QA lead, and how what they're QAing ultimately feeds into training for agents?
So each QA is grading five tickets per week (per agent) and they are designated to certain teams – one QA handles two teams, and each team has a team lead. It's kind of like a little tripod – the QA works directly with those team leads, and they are constantly meeting about markdowns, feedback, what trends they're seeing with agents. And the QA takes a lot of pride and is incentivized based on the CSAT ratings of the 20 agents that they're in charge of QAing. (The idea here is that the better job they do QAing, the better CSAT their team will be creating).
So both team leads and QAs are incentivized based on their team's performance. But the QAs in particular are incentivized by how satisfied the customers of the teams that they're grading are. And I think it directly correlates to the feedback that they're giving on the tickets that they're grading every day. Then the team leads are coaching the agents based off of that feedback. So it's really important that the team leads are aligned with what the QA scores are and what the feedback is because you know, the QAs really are the experts on what's going on in that specific interaction. And the team lead is the one that's managing their team and coaching agents based off of what the QA gives them.
So to summarize, the QA gives feedback to the team lead, then the team lead gives feedback and coaching to the agent, and both are responsible for the quality of the entire team, and creating better customer satisfaction.
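Putting those ratios together gives a rough sense of each QA's weekly workload. This is just back-of-envelope math based on the numbers mentioned above; the variable names are illustrative:

```python
# Back-of-envelope QA coverage math using the ratios described above.
agents_per_team = 10            # each team lead manages about 10 agents
teams_per_qa = 2                # one QA covers two teams
tickets_per_agent_per_week = 5  # each QA grades five tickets per agent per week
total_agents = 200              # current size of the support team

agents_per_qa = agents_per_team * teams_per_qa                        # 20 agents
tickets_per_qa_per_week = agents_per_qa * tickets_per_agent_per_week  # 100 tickets
qas_needed = total_agents // agents_per_qa                            # 10 QAs

print(f"Each QA covers {agents_per_qa} agents and grades "
      f"{tickets_per_qa_per_week} tickets per week; "
      f"a {total_agents}-agent team needs about {qas_needed} QAs.")
```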
Can you talk a little bit about your calibration process and how you make sure the in-house LA team is on the same page with the team in the Philippines in terms of what quality means?
The vast majority of our QA team is in the Philippines, and all of them started as agents and were then promoted to QAs. So they've been at the forefront of customer support, they went through training processes, they've talked to members, they've showcased that they have the highest quality and satisfaction scores and therefore they've been promoted. And so in terms of being quality experts, it's definitely something that they've proven.
I also have a head of QA on my team in LA who meets with our head QA lead in the Philippines once a week (remotely). They also meet with the QA team as a whole.
There's also a process where the team leads can file a rebuttal against a QA score they disagree with. We have calibration meetings on a regular basis where a team lead can raise that rebuttal, everyone discusses it together, and we come up with an agreed-upon outcome so that it's handled consistently going forward. And our team in LA, especially our head of QA, is involved in all of these meetings, giving her overarching opinion on how we should handle these cases.
So to summarize, the teams are very aligned because everyone meets on a regular basis, and they have the rebuttal process which creates continual discussion on what quality means to our entire team.
Can you talk about how you guys use QA scores and CSAT scores, and what that means for managing your team?
Incentives or bonuses are a really important piece of our model here. When the agents are doing well and they're making our customers happy, we want to reward them for that with more than just recognition. Part of our incentive structure is using their CSAT score and their QA score.
Once they hit a certain attendance and productivity bar, and both their CSAT and QA scores are above 90, they get a certain bonus. If both scores are above 92, the bonus is better, and if both are above 94, it’s even better.
If CSAT and QA are both below 90, they're not going to get any bonus at all. And if one score is above 90 but the other is below, they still won't get a bonus. They need to be in really good standing for both quality and CSAT, because that's why we're here.
That's the forefront of what we do – making the customer happy – and if they're not quite meeting that standard, then that bonus won't be awarded to them.
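As a rough sketch of the tier logic described above: the 90/92/94 thresholds come from the interview, while the function name, tier labels, and the attendance/productivity flag are simplified assumptions:

```python
def bonus_tier(csat: float, qa: float, hit_attendance_and_productivity_bar: bool) -> str:
    """Illustrative sketch of the tiered bonus rules described above.

    The 90/92/94 thresholds come from the interview; everything else here
    (names, labels, the attendance/productivity flag) is an assumption.
    """
    if not hit_attendance_and_productivity_bar:
        return "no bonus"
    if csat > 94 and qa > 94:
        return "top bonus tier"
    if csat > 92 and qa > 92:
        return "higher bonus tier"
    if csat > 90 and qa > 90:
        return "base bonus tier"
    # If either score is at or below 90, no bonus is paid.
    return "no bonus"


print(bonus_tier(csat=93, qa=91, hit_attendance_and_productivity_bar=True))  # base bonus tier
```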
What does the coaching look like after the team receives metrics and the data from each platform? How are they formalizing that coaching process?
Team leads each manage 10 agents, and they spend a certain amount of time every week with every single agent on their team. They spend a little more time with agents that are less tenured and a little bit less time with agents that've been with us longer. But they do spend quite a bit of one on one time reviewing Stella Score (CSAT) and QA score and the other metrics. They will probably be listening to a few calls as well if it's a phone agent for example, and they'll see the QA feedback from that particular call and go over it with the agent. So it's a thorough conversation, and feedback loop from what's happened in the previous week.
And CSAT and QA are predominantly what they're talking about in these meetings, unless there's something going on with morale or a personnel issue, which we hope doesn't happen. Long story short, we use these meetings to make sure that the agents are supported and have the tools to get to their incentives and keep pushing upwards. We have multiple tiers of incentives, so if someone is at a 91, we’d talk about how we get them to the next tier of success. The team lead is there to work through some of their scores and see how they can get even better than they are.
And does this process differ at all when you have seasonal or new hires on board and does the frequency or anything change when they come on?
New hires are definitely monitored a lot more closely than the ones that are masters. I mean, we have some people who are consistently getting 100 percent QA scores every single week, so the team leads don't need to spend so much time with them because we trust that they know and are providing the correct information. But with newbies, instead of spending 15 minutes with them, it might be 30, and we might even loop in a training refresher or a 1:1 with a QA. In those instances where they need extra help, they get it.
Do all these processes and the metrics that you're looking at (and remembering that the customer is Queen) help you with customer retention in any way?
Absolutely. I think that as long as we are giving the highest-quality response and really making interactions the best they can be, customers leave with a smile on their face and they leave thinking, wow, that was so easy. If a member ends up cancelling in the short term, we think of that as OK, because we have a high reactivation rate (in part because we make it so easy for customers to interact with us that they actually want to come back, because their experience was so pleasant). Ease of experience driving long-term customer retention really is the goal for the team.
Could you tell us a little bit more about how you calibrate those CSAT and QA scores together?
We really never see an agent with a really, really high quality score and a really low CSAT score. The scores really do go hand in hand – if an agent is providing a 100 percent quality response and the customer still isn't happy, then that's a process issue, and we go back to the drawing board and say, hey, our agent answered this perfectly, but why is the customer still not happy? Then we'd adjust the process, because it just isn't something that's helping our customer.
An example of this is our shipping process. Shipping is always at the forefront of our customers' minds. We used to tell customers that their boxes would ship within 10 business days, and usually right around that 10-business-day mark, we'd still be shipping the last 10% of all the boxes. People would call in really angry and ask, "why didn't my box ship out on day two or day five?" Usually the agent would have a beautiful response that would say, "we're so sorry that you feel this way," but the customer still wouldn't be happy, and CSAT scores would reflect that.
So we went back to the drawing board and we thought, you know, how do we manage our customers' expectations a little bit? Because we're shipping out hundreds of thousands of boxes and we can't get them all out in a single day, it's only going to become bigger and more of an issue as we scale.
So instead of trying to change that (which would have been impossible), we actually changed our service agreement to 30 days, which lowered their expectations and also lowered their stress and the need to contact us. We actually did see an increase in satisfaction around shipping because members were less stressed about that 10-day window.
Can you tell us how you think about using metrics to drive changes across departments?
Customer satisfaction is at the forefront of everything that every department is doing. So when we give other departments feedback on things like shipping, education for members when they cancel, additional feature requests, or better explanations around membership, they’re really open to the feedback.
A lot of things come in through customer support, and when we see things that aren't satisfying members, it gives us some ammo to go to the rest of the company and say, "Hey, our customer is our queen. She's not feeling that way for these reasons. How do we fix that?"