Guest Post by Annette Wisniewski: Thrilling a Client with Better Smile-Sheet Questions

This is a guest post by Annette Wisniewski, Learning Strategist at Judge Learning Solutions. In this post she shares an experience building a better smile sheet for a client.

She also does a nice job showing how to improve questions by getting rid of Likert-like scales and replacing them with more concrete answer choices.

______________________________

Using a “Performance-focused Smile Sheets” Approach for Evaluating a Training Program

Recently, one of our clients experienced an alarming drop in customer confidence, so they hired us at Judge Learning Solutions to evaluate the effectiveness of their customer support training program. I was the learning strategist assigned to the project. Since training never works in isolation, I convinced the client to let me evaluate both the training program and the work environment.

I wanted to create the best survey possible to gauge the effectiveness of the training program as well as evaluate the learners’ work environment, including relevant tools, processes, feedback, support, and incentives. I also wanted to create a report that included actionable recommendations on how to improve both the training program and workforce performance.

I had recently finished reading Will’s book, Performance-focused Smile Sheets, so I knew that traditional Likert-based questions are problematic: they are highly subjective, don’t draw clear distinctions between answer choices, and limit respondents to a single, sometimes insufficient, option.

For example, most smile sheets ask learners to evaluate their instructor. A traditional smile-sheet question might ask learners to rate the instructor using a Likert scale.

   How would you rate your course instructor?

  1. Very ineffective
  2. Somewhat ineffective
  3. Somewhat effective
  4. Very effective

But the question leaves too much open to interpretation. What does “ineffective” mean? What does “effective” mean? One learner might have completely different criteria for an “effective” instructor than another. What is the difference between “somewhat ineffective” and “somewhat effective”? Could it be the snacks the instructor brought in mid-afternoon? It’s hard to tell. Also, how can the instructor use this feedback to improve next time? There’s just not enough information in this question to make it very useful.

For my evaluation project, I wrote the survey question using Will’s guidelines to provide distinct, meaningful options, and then allowed learners to select as many responses as they wanted.

   What statements are true about your course instructor? Select all that apply.

  1. Was OFTEN UNCLEAR or DISORGANIZED.
  2. Was OFTEN SOCIALLY AWKWARD or INAPPROPRIATE.
  3. Exhibited UNACCEPTABLE LACK OF KNOWLEDGE.
  4. Exhibited LACK OF REAL-WORLD EXPERIENCE.
  5. Generally PERFORMED COMPETENTLY AS A TRAINER.
  6. Showed DEEP SUBJECT-MATTER KNOWLEDGE.
  7. Demonstrated HIGH LEVELS OF REAL-WORLD EXPERIENCE.
  8. MOTIVATED ME to ENGAGE DEEPLY IN LEARNING the concepts.
  9. Is a PERSON I CAME TO TRUST.

It’s still just one question, but in this case, the learner was able to provide more useful feedback to both the instructor and the course sponsors. As Will recommended, I added proposed standards, and then tracked the percentage of respondents selecting each choice to include in my report.
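To make the tallying concrete, here is a minimal sketch of how percentages for a select-all-that-apply question can be computed and compared against proposed standards. The responses, the 75% and 20% thresholds, and the choice groupings are my own hypothetical illustration, not the actual data or standards from this project:

```python
from collections import Counter

# Hypothetical responses: each respondent selects any number of the
# nine answer choices (numbered 1-9 as in the instructor question above).
responses = [
    {5, 6, 8},      # competent, deep knowledge, motivating
    {5, 7},         # competent, real-world experience
    {1, 5},         # sometimes unclear, but competent overall
    {5, 6, 7, 9},   # competent, knowledgeable, experienced, trusted
]

NEGATIVE_CHOICES = {1, 2, 3, 4}  # the unfavorable answer choices

# Percentage of respondents who selected each choice.
counts = Counter(choice for resp in responses for choice in resp)
n = len(responses)
pct = {choice: 100 * counts[choice] / n for choice in counts}

# Hypothetical proposed standards: at least 75% should mark
# "performed competently" (choice 5); no negative choice should
# exceed 20%.
meets_competency_standard = pct.get(5, 0) >= 75
negative_flags = {c: pct[c] for c in NEGATIVE_CHOICES if pct.get(c, 0) > 20}
```

Because respondents can pick several choices, the percentages are per-choice (each can range from 0 to 100 independently) rather than a distribution that sums to 100 — which is exactly what makes the results easy to report against a standard.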

I used this same approach when asking learners about the course learning objectives.

Instead of asking a question using a typical Likert scale:

   After taking the course, I am now able to navigate the system.

  1. Strongly agree
  2. Agree
  3. Neither agree nor disagree
  4. Disagree
  5. Strongly disagree

I created a more robust question that provided better information about how well the learner was able to navigate the system and what the learner felt they needed to become more proficient. I formatted the question as a matrix, so I could ask about all of the learning objectives at once. The learner perceived this to be one question, but I gleaned nine questions’ worth of data from it. Here’s a redacted excerpt of that question as it appeared in my report, shortened to the first four learning objectives.
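A small sketch of why the matrix format is so data-rich: each column of responses yields its own distribution over competence levels, so one question produces one result per learning objective. The objective names (apart from navigating the system), the level labels, and the responses below are hypothetical stand-ins, not the actual survey wording or data:

```python
# Hypothetical competence levels a learner picks from for each objective.
LEVELS = [
    "Cannot perform without significant help",
    "Can perform with job aids or support",
    "Can perform independently",
    "Can perform well enough to teach others",
]

# Each row is one learner; each column is one learning objective.
# Values are indexes into LEVELS.
responses = [
    [2, 3, 1, 2],
    [3, 3, 2, 2],
    [2, 2, 1, 3],
]

# Hypothetical objective names (only "navigate the system" is from the course).
objectives = [
    "Navigate the system",
    "Create a case",
    "Escalate an issue",
    "Close a case",
]

# One matrix question unpacks into a per-objective distribution of levels.
results = {}
for col, name in enumerate(objectives):
    picks = [row[col] for row in responses]
    results[name] = {LEVELS[i]: picks.count(i) for i in range(len(LEVELS))}
```

Each entry in `results` is effectively the answer to a separate question, which is how a single matrix item can carry many questions’ worth of data without adding any burden on the respondent.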

The questions took a little more time to write, but the same amount of time for respondents to answer. At first, the client was hesitant to use this new approach to survey questions, but it didn’t take them long to see how much more valuable the resulting data would be.

The descriptive answer choices of the survey, combined with interviews and extant data reviews, allowed me to provide my client with a very thorough evaluation report. The report not only included a clear picture of the current training program, but also provided detailed and prioritized recommendations on how to improve both the training program and the work environment.

The client was thrilled. I had given them not only actionable recommendations but also the evidence they needed to procure funding to make the improvements. When my colleague checked back with them several months later, they had already implemented several of my recommendations and were in the process of implementing more.

I was amazed at how easy it was to improve the quality of the data I gathered, and it certainly impressed my client. I will never write evaluation questions any other way again.

If you plan on conducting a survey, try using Will’s approach to writing performance-focused questions. Whether you are evaluating a training program or looking for insights on improving workforce performance, you will be happy you did!