This is a guest post by Annette Wisniewski, Learning Strategist at Judge Learning Solutions. In this post she shares an experience building a better smile sheet for a client.

She also does a nice job showing how to improve questions by getting rid of Likert-like scales and replacing them with more concrete answer choices.

______________________________

Using a “Performance-Focused Smile Sheets” Approach for Evaluating a Training Program

Recently, one of our clients experienced an alarming drop in customer confidence, so they hired us, Judge Learning Solutions, to evaluate the effectiveness of their customer-support training program. I was the learning strategist assigned to the project. Since training never works in isolation, I convinced the client to let me evaluate both the training program and the work environment.

I wanted to create the best survey possible to gauge the effectiveness of the training program as well as evaluate the learners’ work environment, including relevant tools, processes, feedback, support, and incentives. I also wanted to create a report that included actionable recommendations on how to improve both the training program and workforce performance.

I had recently finished reading Will’s book, Performance-Focused Smile Sheets, so I knew that traditional Likert-based questions are problematic. They are highly subjective, don’t draw clear distinctions between answer choices, and limit respondents to one, sometimes insufficient, option.

For example, most smile sheets ask learners to evaluate their instructor. A traditional smile sheet question might ask learners to rate the instructor using a Likert scale.

   How would you rate your course instructor?

  1. Very ineffective
  2. Somewhat ineffective
  3. Somewhat effective
  4. Very effective

But the question leaves too much open to interpretation. What does “ineffective” mean? What does “effective” mean? One learner might have completely different criteria for an “effective” instructor than another. What is the difference between “somewhat ineffective” and “somewhat effective”? Could it be the snacks the instructor brought in mid-afternoon? It’s hard to tell. Also, how can the instructor use this feedback to improve next time? There’s just not enough information in this question to make it very useful.

For my evaluation project, I wrote the survey question using Will’s guidelines to provide distinct, meaningful options, and then allowed learners to select as many responses as they wanted.

   What statements are true about your course instructor? Select all that apply.

  1. Was OFTEN UNCLEAR or DISORGANIZED.
  2. Was OFTEN SOCIALLY AWKWARD OR INAPPROPRIATE.
  3. Exhibited UNACCEPTABLE LACK OF KNOWLEDGE.
  4. Exhibited LACK OF REAL-WORLD EXPERIENCE.
  5. Generally PERFORMED COMPETENTLY AS A TRAINER.
  6. Showed DEEP SUBJECT-MATTER KNOWLEDGE.
  7. Demonstrated HIGH LEVELS OF REAL-WORLD EXPERIENCE.
  8. MOTIVATED ME to ENGAGE DEEPLY IN LEARNING the concepts.
  9. Is a PERSON I CAME TO TRUST.

It’s still just one question, but in this case the learner was able to provide more useful feedback to both the instructor and the course sponsors. As Will recommended, I added proposed standards and then tracked the percentage of respondents who selected each answer choice to include in my report.
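As a rough illustration of that kind of tally (not the actual report data), here is a minimal sketch in Python. The answer labels, sample responses, and standard thresholds below are hypothetical placeholders.

```python
from collections import Counter

# Hypothetical select-all-that-apply responses: one set of choices per learner.
responses = [
    {"Performed competently as a trainer", "Showed deep subject-matter knowledge"},
    {"Performed competently as a trainer", "Motivated me to engage deeply"},
    {"Was often unclear or disorganized"},
]

# Hypothetical proposed standards: a direction and a threshold percentage per option.
standards = {
    "Performed competently as a trainer": ("at least", 75),
    "Was often unclear or disorganized": ("no more than", 10),
}

counts = Counter(choice for selections in responses for choice in selections)
total_respondents = len(responses)

for option, count in counts.items():
    pct = 100 * count / total_respondents
    if option in standards:
        direction, threshold = standards[option]
        print(f"{option}: {pct:.0f}% (proposed standard: {direction} {threshold}%)")
    else:
        print(f"{option}: {pct:.0f}%")
```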

I used this same approach when asking learners about the course learning objectives.

Instead of asking a question using a typical Likert scale:

   After taking the course, I am now able to navigate the system.

  1. Strongly agree
  2. Agree
  3. Neither agree nor disagree
  4. Disagree
  5. Strongly disagree

I created a more robust question that provided better information about how well the learner was able to navigate the system and what the learner felt they needed to become more proficient. I formatted the question as a matrix, so I could ask about all of the learning objectives at once. The learner perceived this to be one question, but I gleaned nine questions’ worth of data out of it. Here’s a redacted excerpt of that question as it appeared in my report, shortened to the first four learning objectives.
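As a rough illustration of how a single matrix question can yield a separate data series for each learning objective, here is a minimal sketch in Python. The objectives, ability levels, and responses shown are hypothetical, not the redacted data from the report.

```python
from collections import Counter, defaultdict

# Hypothetical matrix-question responses: each learner selects one ability level
# per learning objective.
responses = [
    {"Navigate the system": "Can do it on my own",
     "Create a support ticket": "Need more practice"},
    {"Navigate the system": "Need more practice",
     "Create a support ticket": "Can do it on my own"},
    {"Navigate the system": "Can do it on my own",
     "Create a support ticket": "Can do it on my own"},
]

# Tally answer levels per objective.
tallies = defaultdict(Counter)
for response in responses:
    for objective, level in response.items():
        tallies[objective][level] += 1

# One data series per learning objective, even though learners saw one question.
for objective, level_counts in tallies.items():
    total = sum(level_counts.values())
    print(objective)
    for level, count in level_counts.items():
        print(f"  {level}: {100 * count / total:.0f}%")
```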

The questions took a little more time to write but required the same amount of time for respondents to answer. At first, the client was hesitant to use this new approach to survey questions, but it didn’t take them long to see that I would be able to gather much more valuable data.

The descriptive answer choices of the survey, combined with interviews and extant data reviews, allowed me to provide my client with a very thorough evaluation report. The report not only included a clear picture of the current training program, but also provided detailed and prioritized recommendations on how to improve both the training program and the work environment.

The client was thrilled. I had given them not only actionable recommendations but also the evidence they needed to procure funding to make the improvements. When my colleague checked back with them several months later, they had already implemented several of my recommendations and were in the process of implementing more.

I was amazed at how easy it was to improve the quality of the data I gathered, and it certainly impressed my client. I will never write evaluation questions any other way again.

If you plan on conducting a survey, try using Will’s approach to writing performance-focused questions. Whether you are evaluating a training program or looking for insights on improving workforce performance, you will be happy you did!

I’m thrilled to announce that my Gold-Certification Workshop on Performance-Focused Smile Sheets is now open for registration, with access beginning in about a week, on Tuesday, May 14, 2019.

This certification workshop is the culmination of years of work and practice. First there was my work with clients on evaluation. Then there was the book. Then I gained extensive experience building and piloting smile sheets with a variety of organizations. I taught classroom and webinar workshops. I spoke at conferences and gave keynotes. And of course, I developed and launched LTEM (The Learning-Transfer Evaluation Model), which is revolutionizing the practice of workplace learning—and providing the first serious alternative to the Kirkpatrick-Katzell Four-Level Model.

Over the last year, I’ve been building an online, asynchronous workshop that is rigorous, comprehensive, and challenging enough to warrant a certification. It’s now ready to go!

I’d love it if you would enroll and join me and others in learning!

You can learn more about this Gold-Certification Workshop by clicking here.

 

Congratulations to Steve Semler who has become Work-Learning Research’s first certification earner by successfully completing the Work-Learning Academy course on Performance-Focused Smile Sheets!

The certification workshop is not yet available to the public, but Steve generously agreed to take the program before its release. Note that certification verification can be viewed here.

Those who want to be notified of the upcoming release date can do that here.

 

Links of Interest:

 

 

I’ve had the distinct honor of being invited to speak at the Learning Technologies conference in London for three years in a row. This year, I talked about two learning innovations:

  • Performance-Focused Learner Surveys
  • LTEM (The Learning-Transfer Evaluation Model)

It was a hand-raising experience!

Most importantly, they have done a great job capturing my talk on YouTube.

Although I’ve made some recent improvements in the way I talk about these two learning innovations, the video does an excellent job of capturing the main points I’ve been making about the state of learning evaluation and the two innovations that are tearing down some of the obstacles that have held us back from doing good evaluation.

Thanks to Stella Collins at Stellar Learning for organizing and facilitating my session!

Special thanks to the brilliant conference organizer and learning-industry influencer Robert Taylor for inviting and supporting me and my work.

Again, click here to see the video of my presentation at Learning Technologies London 2019.

While I was in London a few months ago to talk about learning evaluation, I was interviewed by LearningNews on the same topic.

Some of what I said:

  • “Most of us have been doing the same damn thing we’ve always done [in learning evaluation]. On the other hand, there is a breaking of the logjam.”
  • “A lot of us are defaulting to happy sheets, and happy sheets that aren’t effective.”
  • “Do we in L&D have the skills to be able to do evaluation in the first place?…. My short answer is NO WAY!”
  • “We can’t upskill ourselves fast enough [in terms of learning evaluation].”

It was a fun interview and LearningNews did a nice job in editing it. Special thanks to Rob Clarke for the interview, organizing, and video work (along with his great team)!!

Click here to see the interview.

Dani Johnson at RedThread Research has just released a wonderful synopsis of Learning Evaluation Models. Comprehensive, Thoughtful, Well-Researched! It also includes suggestions for articles to read!

This work is part of an ongoing effort to research the learning-evaluation space. With research sponsored by the folks at the adroit learning-evaluation company forMetris, RedThread is looking to uncover new insights about the way we do workplace learning evaluation.

Here’s what Dani says in her summary:

“What we hoped to see in the literature were new ideas – different ways of defining impact for the different conditions we find ourselves in. And while we did see some, the majority of what we read can be described as same. Same trends and themes based on the same models with little variation.”

 

“While we do not disparage any of the great work that has been done in the area of learning measurement and evaluation, many of the models and constructs are over 50 years old, and many of the ideas are equally as old.

On the whole, the literature on learning measurement and evaluation failed to take into account that the world has shifted – from the attitudes of our employees to the tools available to develop them to the opportunities we have to measure. Many articles focused on shoe-horning many of the new challenges L&D functions face into old constructs and models.”

 

“Of the literature we reviewed, several pieces stood out to us. Each of the following authors [detailed in the summary] and their work contained information that we found useful and mind-changing. We learned from their perspectives and encourage you to do the same.”

 

I also encourage you to look at this great review! You can see the summary here.

 

 

As I preach in my workshops on how to create better learner-survey questions (for example, my Gold-Certification workshop on Performance-Focused Smile Sheets), open-ended comment questions are very powerful. Indeed, they are critical in our attempts to truly understand our learners’ perspectives.

Unfortunately, to get the most benefit from comment questions, we have to take time to read every response and reflect on the meaning of all the comments taken together. Someday AI may be able to help us parse comment-question data, but currently the technology is not ready to give us a full understanding. Nor are word clouds or other basic text-processing algorithms useful enough to provide valid insights into our data.

It’s good to take the time to analyze our comment-question data, but if there were a way to quickly get a sense of that data, wouldn’t we consider using it? Of course!

As most of you know, I’ve been focusing a lot of my attention on learning evaluation over the last few years. While I’ve learned a lot, have been lauded by others as an evaluation thought leader, and have even created some useful innovations like LTEM, I’m still learning. Today, by filling out a survey after going to a CVS MinuteClinic to get a vaccine shot, I learned something pretty cool. Take a look.

This is a question on their survey, delivered to me right after I’d answered a comment question. It gives the survey analyzers a way to quickly categorize the comments. It DOES NOT REPLACE, and should not replace, a deeper look at the comments (for example, my comment was very specific and useful, I hope), but it does enable us to ascribe some overall meaning to the results.

Note that this is similar to what I’ve been calling a hybrid question, where we first give people a forced-choice question and then give them a comment question. The forced-choice question drives clarity, whereas the follow-up comment question enables more specificity and richness.

One warning! Adding a forced-choice question after a comment question should be seen as one tool in our toolbox. Let’s not overuse it. More pointedly, let’s use it when it is particularly appropriate.

If we’ve asked two open-ended comment questions—one asking for positive feedback and one asking for constructive criticism—we might not need a follow-up forced-choice question, because we’ve already prompted respondents to give us the good and the bad.

The bottom line is that we now have two types of hybrid questions to add to our toolbox:

  1. Forced-choice question followed by clarifying comment question.
  2. Comment question followed by categorizing forced-choice question.

Freakin’ Awesome!

 

Donald Taylor, learning-industry visionary, has just come out with his annual Global Sentiment Survey asking practitioners in the field what topics are the most important right now. The thing that struck me is that the results show that data is becoming more and more important to people, especially as represented in adaptive learning through personalization, artificial intelligence, and learning analytics.

Learning analytics was the most important category for the opinion leaders represented in social media. This seems right to me as someone who will be focused mostly on learning evaluation in 2019.

As Don said in the GoodPractice podcast with Ross Dickie and Owen Ferguson, “We don’t have to prove. We have to improve through learning analytics.”

What I love about Don Taylor’s work here is that he’s clear as sunshine about the strengths and limitations of this survey—and, most importantly, he takes the time to explain what things mean without over-hyping or sleight of hand. It’s a really simple survey, but the results are fascinating—not necessarily about what we should be doing, but about what people in our field think we should be paying attention to. This kind of information is critical to all of us who might need to persuade our teams and stakeholders on how we can be most effective in our learning interventions.

Other findings:

  • Businessy stuff fell in rated importance, for example, “consulting more deeply in the business,” “showing value,” and “developing the L&D function.”
  • Neuroscience/Cognitive Science fell in importance (most likely, I think, because some folks have been debunking the neuroscience-and-learning connections). And note: these really should not be one category, especially given that people in the know know that cognitive science (or, more generally, learning research) has proven value. Neuroscience, not so much.
  • Mobile delivery and artificial intelligence were the two biggest gainers in terms of popularity.
  • Very intriguing that people active on social media (perhaps thought leaders, perhaps the opinionated mob) have different views than a more general population of workplace learning professionals. There is an interesting analysis in the book and a nice discussion in the podcast mentioned above.

For those interested in Don Taylor’s work, check out his website.

 

I’d like to announce that the first certification workshop for my new Work-Learning Academy is almost ready to launch. The first course? Naturally, it’s a course on how to create effective learner surveys—on Performance-Focused Smile Sheets.

I’m thrilled—ecstatic really—because I’ve wanted to do something like this for years and years, but the elements weren’t quite available. I’ve always wanted to provide an online workshop, but the tools tended to push toward just making presentations. As a learning expert, I knew mere presentations—even if they included discussions and some minimal interactions like polling questions—just weren’t good enough to create real learning benefits. I’ve also always wanted a way to provide a meaningful credential—one that was actually worth something, one that went beyond giving people credit for attendance and completion. Finally, I figured out how to bring this all together.

And note that LTEM (the Learning-Transfer Evaluation Model) helped me clarify my credentialing strategy. You can read about using LTEM for credentialing here, but, in short, our entry-level certification—our Gold Certification—requires learners to pass a rigorous LTEM Tier-5 assessment, demonstrating competence through realistic decision-making. Those interested in the next-level credential—our Master Certification—will have to prove their competence at an LTEM Tier-6 designation. Further certification levels—our Artisan Certification and Research Certification—will require competence demonstrated at Tier-7 and/or Tier-8.

 

For over 20 years, I’ve been plying my research-to-practice craft through Work-Learning Research, Inc. I’m thrilled to announce that I’ll be certifying our first set of Gold Credential professionals within a few months. If you’d like to sign up to be notified when the credential workshop is available—or just learn more—follow this link:

Click here to go to our
Work-Learning Academy information page