How to measure customer service training

You've sent your team through customer service training.

People seemed to like the training. It even feels like the team has a bit more energy than before, though you can't quite say for sure.

But you face a nagging question. Did the training really work?

Those post-training surveys don't seem like enough. Kirkpatrick's four levels of evaluation sound interesting, but your executives don’t care about levels. They just want results.

This guide can help you.

There's no advanced math or exotic statistics. These are straightforward techniques that will be credible with your CEO. I've repeatedly used them to demonstrate the impact of customer service training to executives.

Why evaluate customer service training?

Measuring your customer service training programs allows you to answer the tough questions you’ll inevitably get from executives.

I hear these three questions most often:

  • Does the training program work?

  • How can we make it even more effective?

  • Are there lessons that can be applied to other programs?

You can answer them all with a straightforward evaluation plan. Here's a step-by-step guide.

Step 1: Identify expectations

Start by meeting with the person who asked for the customer service training. The goal of this meeting is to learn what they expect the training to accomplish.

It’s important you do this before the training.

This allows you to focus the training on achieving their objectives. It also gives you a chance to build evaluation into your overall plan.

Don’t be afraid to press for specifics.

"Improve customer service" is too vague. Improve what, exactly? From where to where?

"We received some complaints" isn't measurable enough. "We need to reduce complaints by 10%" is getting somewhere somewhere.

Here are three questions I ask during this meeting:

  1. Why do you want to do this training?

  2. How will you know if this project is successful?

  3. What do employees need to do as a result of this training?

Those answers will provide you with the foundation of your measurement plan. They will also help you improve the training.

I was working for a parking management company when I received a call from the CEO with an urgent request to deliver customer service training at one of our locations. The three questions helped me get more information.

Why do you want to do this training?
Our client was a resort hotel. The hotel's general manager had complained about poor customer service and threatened to cancel the parking contract if performance didn't improve within 30 days.

How will you know if the project is successful?
The goal was to keep the contract.

What do employees need to do?
The team needed to average 90 points on mystery shopper evaluations within 30 days. The current average was 78 points.

This conversation made the evaluation plan crystal clear.

Notice my CEO didn’t care about the usual stuff that trainers obsess over measuring:

  • Pre- and post-training quizzes.

  • Training satisfaction surveys.

  • Number of people completing the training.

This program had one simple goal: keep the contract.

The valet team needed to bring its average mystery shopper score from 78 points to 90 in order to do that.

This short video provides more information about gathering stakeholder expectations. There’s even a sample conversation with an executive who requested interviewing skills training.

Although the topic isn’t customer service, it reflects the same approach I’d use for any training program.

Step 2: Gather data

Gather data that will help you determine whether the training program achieved its goals.

Start this process before you do any training. Getting data ahead of time gives you a baseline you can use to evaluate your results later on.

Data collection was simple for the valet parking project:

  1. The mystery shopper form provided specific standards the valets needed to meet.

  2. Recent mystery shopper results provided a baseline for their performance.

  3. Ongoing mystery shopper results allowed us to track the team’s progress.

Bonus tip: it's easiest to use data that's already being collected for something else. This can save you time, money, and effort since the data is already there.

Here are a few places you might look:

  • Customer service surveys

  • Quality assurance results

  • Customer complaints (by volume and type of issue)

  • Performance observations (observing employees serve customers)

  • KPIs such as revenue, customer retention, or productivity

Step 3: Analyze data

Once the training is complete, the next step is to use the data you collected to determine whether the training program achieved its goals.

This process was straightforward with the valet parking contract.

Mystery shopping scores immediately improved and continued to trend upward. They averaged 94 points by the 30-day deadline, which was above the 90-point target.

The hotel general manager agreed not to cancel the contract as a result. Keeping the contract was the primary objective, so my CEO considered the project a success.

Some projects require you to dig a little deeper. This short video will give you more ideas on analyzing your data.

You can often identify additional insights from analyzing your data. The valet parking evaluation revealed two important lessons that could help the rest of the business.

First, the valets’ improved performance came as a direct result of the manager sharing more frequent updates with the team about client service expectations. This became a best practice that was shared with the other locations and was emphasized in our manager training program.

Second, our client’s complaint caused the executive overseeing the account to check in with the hotel general manager more frequently.

The CEO asked his executives to spend more time with other clients and ask them for feedback about our operations. This immediately paid dividends, as several executives discovered other unhappy clients who hadn't yet voiced their concerns.

Take Action

Measuring the impact of your customer service training gets a lot easier if you plan for measurement at the start of your project. Take time to find out the goals behind the training request, and then design a plan to evaluate whether those goals have been achieved.

LinkedIn Learning subscribers can access my Measuring Learning Effectiveness course to get more in-depth techniques and examples.

Training course --> Measuring Learning Effectiveness

Here's a preview:

Alexander Salas: Why quizzes are a poor way to measure training

How do you evaluate customer service training?

Most trainers don't measure it beyond tracking attendance and giving people a smile survey at the end of every class. Employees attend training, then go back to work without any proof they learned something useful.

Some trainers use gut instinct.

For instance, a learner who asks fewer questions or displays greater confidence as the training goes along is considered trained. Never mind that some people who are completely incapable of doing the job ask no questions and display amazing amounts of confidence.

Still other trainers use quizzes. Participants are given tests to assess their knowledge of the content. The thought is that a good quiz score indicates the person can do the job.

But can they?

I recently discussed the use of quizzes with Alexander Salas, Chief of Awesomeness at eLearning Launch, an online academy for instructional designers. We talked about why quizzes are often a poor way to measure training, and what trainers can do instead.

Here are a few questions Salas addressed in our conversation:

  • Why are quizzes a poor way to measure training?

  • What should we do instead of quizzing learners?

  • Why should companies avoid corporate universities?

  • What metrics should I use to evaluate training?

  • How do I justify my training programs to executives?

You can watch the full interview or read some of the highlights below.

Why are quizzes a poor way to measure training?

"In terms of workplace learning, you have to ask yourself why you are asking people to take a quiz," said Salas. 

The goal of training is rarely for people to acquire and retain information. We usually give them information so they can use it to do a better job. That's where quizzes fall short—they don't show us whether an employee can do better work as a result of training.

Salas also discussed the challenge people have retaining information they learn in training. A quiz might assess a learner's knowledge today, but two weeks later they may have forgotten much of what they learned if it wasn't reinforced on the job in some way.

You can hear more on this topic at the 1:17 mark in the interview.

What should we be doing instead of quizzes to evaluate training?

Training should be evaluated by having participants demonstrate the performance you expect to see on the job. This can be done in a controlled training setting, or through on-the-job observations after training.

"Ideally, what you want to do is understand your purpose," said Salas. "What is your scope? Where are you ending?"

Salas argues that a quiz makes sense if your end goal is for people to have knowledge. Unless you're in academia, that's rarely the objective in the workplace.

In most cases, you really want people to be able to do something with that information. For example, if you want people to build better rapport with customers, then being able to identify rapport-building techniques on a multiple choice quiz is not enough. 

Your evaluation plan should include participants demonstrating that they are able to build rapport, either in an in-class simulation or with real customers.

This evaluation process starts when you first design a training program.

Decide what a fully trained person looks like, and then work backwards to create a program to get people to that goal. That picture of a fully trained person should describe what the person should be able to demonstrate after the training is complete. (Here's a guide to help you do that.)

Go to 2:27 in the interview to hear more.

Why should companies avoid corporate universities?

Salas argues that too many companies mirror academia when they set up a training function. "What is school called in the business world? Training."

Training departments are often called corporate universities. Content is organized in a series of classes. Classes are often grouped into "certificate" programs to reward participants for completing a certain amount of content. Quizzes are used to assess learning, just like in school.

I once ran a corporate university that was set up this way. Being a results person, I studied whether taking a certain number of classes correlated with better job performance. There turned out to be no correlation at all.

Some people who attended every class were indeed successful, while others who attended every class were mediocre performers. There were other employees that never went to a class, yet were objectively high performers.

This insight caused me to scrap the corporate university approach. 

What we did instead was focus on helping employees improve their job performance. We assessed employee skills gaps at an individual level, and created customized plans to help people grow.

Salas shares more at the 11:20 mark.

What metrics can we use to evaluate training?

"There's an evolution that you want professionals to go through," said Salas. "If they're beginners, and they're in customer service training programs, you want them to perform at a specific standard."

For example, if an employee is expected to respond to emails with a certain level of quality, there should be a clear standard that defines what a quality email looks like. Once that's defined, the employee's training should be evaluated by whether or not they can demonstrate the ability to write emails according to the quality standard.

More veteran employees might be evaluated a little differently. According to Salas, "You want them to progress to a level where they start creating their own improvements to the workflow, improvements to the way they do their work."

Using the same email example, you might evaluate an employee's learning by their contributions to updating or writing knowledge base articles that can help the entire team work faster and more accurately.

Go to 14:47 in the interview to hear more.

How do I justify my training programs to executives?

Salas suggests the process starts up front, when executives request training. "The question that you pose, when you get that request, is 'What do you want out of the training? Do you want performance, or do you want knowledge?'"

Trainers can then tailor the training program and evaluation strategy to meet the executive sponsor's expectations.

We talk more about this at 17:39 in the interview.

Additional Resources

Salas provides elearning consulting at Style Learn and runs an online academy for instructional designers at eLearning Launch.

He's also a good person to follow on LinkedIn for content and thought-provoking questions around training and instructional design.


An Easy Way to Evaluate Your Training

My reputation was getting battered, and I didn't like it.

This was years ago, when I supervised a training department in a contact center. New hires often struggled after completing our initial training program, and their supervisors would conveniently blame my team.

The training program wasn't the problem, but I couldn't prove it. The contact center director wasn’t interested in my perspective when she had multiple supervisors saying their agents had not been properly trained. Feelings became facts in the absence of data.

So I decided to find a way to get the data.

This led me to take a crash course on training evaluation. I didn't discover any of the models popular in the training industry—I'd learn about those several years later. My sole focus was proving new hires had been trained.

The solution turned out to be incredibly simple. It gave me observable, quantifiable data that irrefutably proved new hires had been trained. If they struggled after training, there had to be some other reason.

Here's the simple principle I discovered, and how you can use it to evaluate your training programs as well.

A trainer guiding contact center agents through an exercise in a training class.

Why is it important to evaluate training?

Evaluation can tell you whether or not training is working, and what needs to be improved. 

This can be done on a program level, where you evaluate the entire training program. Evaluation can also take place on an individual level, where you evaluate whether a participant has been fully trained.

It's an issue that gets executive attention. PwC's 2019 CEO survey discovered that CEOs see a lack of skills as a clear threat to the business:

  • 79 percent feel a skill gap is a threat to the business

  • 47 percent say a skill gap impacts customer experience

  • Just 29 percent believe they have adequate data to address the issue

In my case, my department's credibility was at stake. I needed to find data I could show to our contact center director that proved our training programs were working just fine.

This is where evaluation comes in. (You can find more reasons to evaluate training here.)

How to evaluate training

There's one thing you should do first if you want to evaluate training: set clear learning objectives. The objectives should be specific, observable, and measurable.

This was the approach I took in the contact center.

I asked the supervisors, their managers, and the contact center director to describe what a fully trained agent looked like. The goal was to get their agreement on what an agent should be able to do before they graduated training.

Our agreement centered around the contact center's quality monitoring process. The leadership team agreed that agents needed to achieve passing quality scores on live calls before they graduated new hire training.

The rest was now easy.

New hires already took live calls in class, under the watchful eye of a trainer. The only change we had to make was to have the trainer complete a monitoring form for several calls per agent.

This approach provided a lot of instant benefits:

  • Documented each agent had been trained.

  • Identified specific skills for agents to improve.

  • Tracked areas where the training program needed to improve.

Notice the goals we set centered around agents doing the actual job. Evaluating training based on passing a quiz or getting through a certain amount of content doesn't provide proof that someone has been trained. We need to see them do the work.

You can establish your own training goals using the A-B-C-D method. This model will guide you through a simple process.

More Resources

You can learn more about evaluating training programs from my Measuring Learning Effectiveness course on LinkedIn Learning.


How to Improve Training with Level One Feedback

Level one feedback is more commonly known as the survey you take at the end of a training program.

Some trainers derisively call these surveys "smile sheets" because they are often used for nothing more than confirming everyone had a great time. I must admit I haven't always put a lot of stock in them.

But I leaned heavily on level one feedback for a recent project.

My first full-length training video, Customer Service Foundations, launched on Lynda.com in 2014 and has garnered more than 2.4 million views. In late 2017, I was approached by the company and given the opportunity to update the course with a new version.

The revision included a tighter script, new scenes, and re-shooting the entire thing. Many of the revisions I made came directly from level one feedback. (You can see the finished course here.)

Here's what I did and how you can apply the same lessons to your next training project.

Three participants evaluating a training program with a four, five, and three respectively.

A Quick Overview of Level Ones

The term "level one" comes from a training evaluation model attributed to Donald Kirkpatrick. It's one of four levels in the model:

  • Level 1 = Reaction

  • Level 2 = Learning

  • Level 3 = Behavior

  • Level 4 = Results

Kirkpatrick defines level one specifically as "the degree to which participants find the training favorable, engaging and relevant to their jobs." You can watch a short primer on the Kirkpatrick model here.

There's not a ton to be gained from level one evaluations in terms of actual learning. I know plenty of examples where participants had a great time in training only to go back to work and do absolutely nothing with it.

The real value is in product development. 

If participants like your training programs, find them engaging, and believe they are relevant, they are more likely to tell other people about their favorable experience. That becomes helpful word-of-mouth marketing.

So yes, a level one evaluation is really a customer service survey. 

 

Search Feedback for Themes

The starting point is to search participant feedback for themes, just like you would a customer service survey. I analyzed comments from thousands of survey results from this course. (You can read this primer on analyzing survey comments if you aren't sure how.) 

Overall, the feedback was very positive. People really liked the course, which helps explain its popularity. Some people did have some constructive feedback, and my analysis quickly revealed three clear themes:


Theme #1: The course is too long.
Sample comment: "great information, but, very lengthy and would not show completed in my tasks."

You have to be a bit of a detective when analyzing survey comments. That last part of the comment made me suspect the participant was more interested in getting credit for watching the entire course than they were in learning new skills.

It's good to follow up on surveys and have a conversation with a sample group of participants. You will often get a lot more insight this way.

I talked to a lot of people who were watching my videos and discovered many watched the entire 1 hour 57 minute course from start to finish. The course is divided into short segments that are less than 5 minutes each, yet people just plowed all the way through.

I can see how that would be boring.

Here's a comment from a happy participant who used the training the way it was designed to be used:
"Great to have each segment short, so that you can take a little piece at a time."


Theme #2: Too basic
Sample comment: "If you have worked here or in customer services for any amount of time, 2 hours is an overkill. only took this class as it was mandatory."

The target audience for this course is new and inexperienced professionals. Even the title, Customer Service Foundations, implies this.

Some people, like this one, were really upset because they were mandated to take a course they didn't feel they needed. Other comments revealed people didn't clearly understand the course focused on the basics and was not intended to share more advanced skills.

Here's a comment from a happy participant who understood the target audience:
"Highly Recommended for customer service representatives with little to no experience."


Theme #3: Wish there was more detail on ___ topic
Sample comment: "There could be a bit more on serving difficult customers."

This one was a real challenge for two reasons. First, people tended to want more information on different topics. Second, you can only squeeze so much content into one course. I really had to think about this one.

Here is a comment from a happy participant:
"The level of detail and easily relatable material greatly exceeded my expectations."


Notice I compared happy and unhappy participants for each of these themes. This provided some important context that told me, in general, people who didn't like the course were either taking the wrong course or taking the right course the wrong way.

 

Turn Feedback into Action

It's essential to use participant feedback to improve your course. The challenge is to make improvements without breaking the elements that people really like.

For example, if I added more detail in all the areas people requested (theme #3), the course would be even longer, which probably wouldn't go over well for the people who felt it was already too long (theme #1).

Here's what I did:

Fix #1: Shortened the course

I was able to shorten the new course by 25 percent.

Run-time comparison of the new vs old customer service training video

A few tactics helped me do this:

  • Shortened scripts by getting to the point faster in each segment

  • Eliminated content that was non-essential

  • Spun off more detailed content into stand-alone courses

 

Fix #2: Created a how-to video

The new course kicks off with a welcome video and then moves to a short how-to video that explains who should watch the course and how to use it. You can watch the how-to video here.

I also created an "Additional Resources" guide that participants could download which contained resources to explore specific topics in greater detail. The resources included books, blogs, podcasts, and even other training videos.

 

Fix #3: Created educational campaign

I've also created my own ongoing campaign to educate customer service leaders and employees on the best way to use these videos.

The campaign has included working with individual clients, sharing best practices with people I know are using the videos, and writing blog posts. Here are a few sample posts:

 

Take Action!

You can gain a lot from those level one training surveys if you think of your training participants as customers. Take a close look at their feedback and use it to make improvements.

Five Reasons Why You Should Evaluate Your Training Programs

There's one question I always ask project sponsors who request training. It's a bit of a showstopper because 90 percent of the time my client hasn't thought of the answer.

How will we evaluate the success of this program?

A good answer can drive results. For instance, let's say you want to train employees to better handle customer complaints. 

There are a whole host of questions you would need to ask before doing the training if you wanted to evaluate it:

  • What are customers complaining about?

  • What is a successful complaint resolution?

  • What are employees doing now?

  • What do we want employees to be doing instead?

  • What other factors besides training might influence complaint handling?

These questions can move you from generic training to a targeted intervention that actually reduces complaints and keeps customers happy.

Getting better results is just one reason why you should evaluate your training program. Here are five more.

Two professionals analyzing the results of a training evaluation report.

Why You Should Evaluate Training

Reason #1: Learn whether it works. Training is not always effective. One company spent tens of thousands of dollars on leadership training. Participants gave the course high ratings on post-training surveys and some even described it as "life changing." Yet a closer analysis revealed participants were not actually becoming better leaders as a result of the training. Funding for the program was eventually cut because there were no results to justify the cost.

Reason #2: Develop credibility. Customer service representatives were skeptical about a procedure they were being trained to use. They weren't convinced it would work until the trainer shared evaluation data from a pilot class that showed their colleagues had dramatically improved results using the new procedure. This gave the training greater credibility and the participants agreed to try using the new process.

Reason #3: Improve your programs. A client recently hired me to develop a customized customer service training program. We did a pilot session and it received excellent reviews, but our evaluation also identified a number of places where the program could be improved. The result was a much better program once it was introduced to all of my client's employees.

Reason #4: Meet sponsor expectations. The CEO of a small company asked me to conduct training to help customer service reps convert more inquiries into sales. The current conversion rate was 33 percent and the CEO felt employees could achieve 35 percent after the training. A post-training evaluation revealed the conversion rate rose to 45 percent, which made the CEO extremely happy!

Reason #5: Get more funding. A client hired me to conduct customer service training with her staff. They had received numerous complaints and she knew they needed to improve. We were able to demonstrate the training helped significantly reduce complaints and dramatically improve service levels, which allowed my client to get her boss to approve funding for additional training programs.

 

Learn More

Here's a short video that explains more about the importance of evaluating training.

Evaluating training programs requires more than just a short survey at the end of the class. Trainers sometimes call those "smile sheets" because they are really a customer satisfaction survey and not a robust evaluation tool.

The good news is evaluating training does not have to be overly difficult.

  1. Set clear training goals

  2. Create a measurement plan before training

  3. Execute your plan after training

3 Deadly Evaluation Mistakes That Can Destroy Your Training

It's budgeting season for many companies, which means your training programs may be at risk. 

Many of my clients are looking for cheaper ways to deliver customer service training. They're facing pressure from executives to cut costs, but they don't have hard data to prove their training program is working.

Others are trying to get new funding for expansion, but they're having an equally tough time making their case.

Forget the lofty platitudes like "training is an investment" or "it will help our employees grow." You'll need to back up those statements with some real numbers if you want them to fly in the C-suite.

Here are three deadly evaluation mistakes to avoid if you want to make a solid case.

Mistake #1: No goals

If your training program lacks goals, you're sunk.

It's impossible to evaluate training if you haven't set any goals that provide a target to evaluate your program against. I don't mean fluffy goals like "inspire employees to WOW customers" or some other platitude. Trust me, most executives find these worthless.

I'm talking about concrete goals that are set using the SMART model (Specific, Measurable, Attainable, Relevant, and Time-Bound). Here are some examples:

  • Customer service employees will reduce monthly escalations 15% by 12/31.

  • We will reduce customer churn by 10% by 1/31.

  • The Support Team will improve customer satisfaction 5 points by 2/28.

Setting goals often results in another important activity.

You need to have baseline measurements in place before you set a goal. It's pretty hard to reduce monthly escalations by 15 percent if you don't know how many escalations you have now, or why they're happening. So, the goal-setting process often forces managers to start measuring how their department brings value to the business.

 

Mistake #2: No linkages

Many training programs fail to link the training to the goals. Here's how a typical organization approaches training evaluation.

  1. Survey participants after the class

  2. ???

  3. Customer service improves

That part in the middle is absolutely critical. 

In his book, Telling Training's Story, Robert Brinkerhoff outlines a simple method called a Training Impact Model for making that critical connection. You do it by working backwards from business goals to the training itself.

  1. Establish business goals (see Mistake #1)

  2. Determine results needed from employees to achieve the goals.

  3. List actions needed to accomplish desired results.

  4. Identify knowledge and skills needed for those actions.

Here's an example for reducing escalations:

  • Goal: Reduce monthly escalations 15% by 12/31

  • Results: Resolve issues to customers' satisfaction without escalation

  • Actions: Apply the LAURA technique

  • Knowledge & Skills: Active listening, expressing empathy

So, my training in this case should focus on developing active listening skills and empathy. I'll want to set clear learning objectives using the A-B-C-D model so I can easily evaluate whether training participants have actually learned the right skills.

And, I'll also want to develop a workshop plan to make sure employees aren't considered fully trained until I can observe them using the LAURA technique on the job.

 

Mistake #3: No financials

You'd better have some numbers if you're going to a budget meeting.

Many trainers are uncomfortable working with financials, so they avoid them. Or worse, they spout bogus metrics like telling people that ROI equals "Return on Inspiration." (Sadly, that's a true story.) 

Your CFO will laugh at you if you refer to ROI as Return on Inspiration.

You'll need to come correct with some real financial figures instead. Fortunately, this isn't too difficult if you've established clear goals that are linked to business results.

Let's go back to the escalations goal we've used as an example. The sample goal was to reduce monthly escalations 15% by 12/31.

Connecting those escalations to financial results should be easy. First, calculate the average cost of an escalation. There are a few places you might look:

Revenue: Look at how much your average customer spends (per order, per year, etc.) and compare that to how much customers spend after they've had an issue escalated. Those customers almost certainly spend less. Just for fun, let's say it's $100 less per customer, per year.

Cost: Calculate the average cost of an escalation. For instance, if the average escalated call takes 15 minutes and is handled by someone making $20 per hour, then each escalation costs $5.

Projected Savings: Now, determine how much more money customers would spend and how much money you'd save with 15 percent fewer escalations. Prepare a nice report (showing your work) and share it with key stakeholders like your CFO.

The summary might look like this:

  • Each escalation costs $105 ($100 in lost revenue, $5 servicing cost)

  • A 15% reduction in escalations would equal 180 fewer escalations per year (based on 100 escalations per month).

  • $105 x 180 = $18,900 projected annual savings

 

Learn More

This short video provides five reasons why you should measure your training programs. 

It's part of the How to Measure Learning Effectiveness Course on lynda.com and LinkedIn Learning. You'll need a lynda.com or LinkedIn Premium subscription to view the course, but you can get a 10-day trial account on lynda.com.