This is a guest post by Annette Wisniewski, Learning Strategist at Judge Learning Solutions. In this post she shares an experience building a better smile sheet for a client.

She also does a nice job showing how to improve questions by getting rid of Likert-like scales and replacing them with more concrete answer choices.

______________________________

Using a “Performance-focused Smile Sheets” Approach for Evaluating a Training Program

Recently, one of our clients had experienced an alarming drop in customer confidence, so they hired us, Judge Learning Solutions, to evaluate the effectiveness of their customer support training program. I was the learning strategist assigned to the project. Since training never works in isolation, I convinced the client to let me evaluate both the training program and the work environment.

I wanted to create the best survey possible to gauge the effectiveness of the training program as well as evaluate the learners’ work environment, including relevant tools, processes, feedback, support, and incentives. I also wanted to create a report that included actionable recommendations on how to improve both the training program and workforce performance.

I had recently finished reading Will’s book, Performance-focused Smile Sheets, so I knew that traditional Likert-based questions are problematic. They are highly subjective, don’t draw clear distinctions between answer choices, and limit respondents to one, sometimes insufficient, option.

For example, most smile sheets ask learners to evaluate their instructor. A traditional smile-sheet question might ask learners to rate the instructor using a Likert scale.

   How would you rate your course instructor?

  1. Very ineffective
  2. Somewhat ineffective
  3. Somewhat effective
  4. Very effective

But the question leaves too much open to interpretation. What does “ineffective” mean? What does “effective” mean? One learner might have completely different criteria for an “effective” instructor than another. What is the difference between “somewhat ineffective” and “somewhat effective”? Could it be the snacks the instructor brought in mid-afternoon? It’s hard to tell. Also, how can the instructor use this feedback to improve next time? There’s just not enough information in this question to make it very useful.

For my evaluation project, I wrote the survey question using Will’s guidelines to provide distinct, meaningful options, and then allowed learners to select as many responses as they wanted.

   What statements are true about your course instructor? Select all that apply.

  1. Was OFTEN UNCLEAR or DISORGANIZED.
  2. Was OFTEN SOCIALLY AWKWARD OR INAPPROPRIATE.
  3. Exhibited UNACCEPTABLE LACK OF KNOWLEDGE.
  4. Exhibited LACK OF REAL-WORLD EXPERIENCE.
  5. Generally PERFORMED COMPETENTLY AS A TRAINER.
  6. Showed DEEP SUBJECT-MATTER KNOWLEDGE.
  7. Demonstrated HIGH LEVELS OF REAL-WORLD EXPERIENCE.
  8. MOTIVATED ME to ENGAGE DEEPLY IN LEARNING the concepts.
  9. Is a PERSON I CAME TO TRUST.

It’s still just one question, but in this case the learner was able to provide more useful feedback to both the instructor and the course sponsors. As Will recommended, I added proposed standards and then tracked the percentage of respondents selecting each answer choice to include in my report.
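
For readers who want to see the mechanics, here is a minimal Python sketch of how percentages for a select-all-that-apply question might be tallied; the response data is made up for illustration, and this is not the actual code or data from the project.

    # Minimal sketch: tally a select-all-that-apply question.
    # The response data is hypothetical; each respondent's answer is the
    # set of option numbers (1-9) they selected.
    from collections import Counter

    OPTIONS = {
        1: "Was OFTEN UNCLEAR or DISORGANIZED",
        2: "Was OFTEN SOCIALLY AWKWARD OR INAPPROPRIATE",
        3: "Exhibited UNACCEPTABLE LACK OF KNOWLEDGE",
        4: "Exhibited LACK OF REAL-WORLD EXPERIENCE",
        5: "Generally PERFORMED COMPETENTLY AS A TRAINER",
        6: "Showed DEEP SUBJECT-MATTER KNOWLEDGE",
        7: "Demonstrated HIGH LEVELS OF REAL-WORLD EXPERIENCE",
        8: "MOTIVATED ME to ENGAGE DEEPLY IN LEARNING the concepts",
        9: "Is a PERSON I CAME TO TRUST",
    }

    responses = [  # hypothetical data: one set of selections per respondent
        {5, 6, 7},
        {5, 8, 9},
        {4, 5},
        {5, 6, 8, 9},
    ]

    counts = Counter(option for selections in responses for option in selections)

    for number, label in OPTIONS.items():
        pct = 100 * counts[number] / len(responses)
        print(f"{pct:5.1f}%  {label}")

With percentages like these in hand, you can compare each answer choice, or groups of choices, against whatever standards you and your stakeholders have agreed on.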

I used this same approach when asking learners about the course learning objectives.

Instead of asking a question using a typical Likert scale:

   After taking the course, I am now able to navigate the system.

  1. Strongly agree
  2. Agree
  3. Neither agree nor disagree
  4. Disagree
  5. Strongly disagree

I created a more robust question that provided better information about how well the learner was able to navigate the system and what the learner felt he or she needed to become more proficient. I formatted the question as a matrix, so I could ask about all of the learning objectives at once. The learner perceived this to be one question, but I gleaned nine questions’ worth of data out of it. Here’s a redacted excerpt of that question as it appeared in my report, shortened to the first four learning objectives.
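
To make the “nine questions’ worth of data” point concrete, here is a rough Python sketch of how responses to a matrix question might be tallied per learning objective; the objective names (other than system navigation), the answer scale, and the data are placeholders of my own, not the client’s actual survey.

    # Sketch: a matrix question with one row per learning objective.
    # Each respondent picks one proficiency level per objective, so a
    # single "question" yields one data series per objective.
    from collections import Counter, defaultdict

    LEVELS = [  # placeholder answer scale
        "Cannot do this yet",
        "Can do this with more training or practice",
        "Can do this, but need more hands-on experience",
        "Can do this at a fully competent level",
    ]

    # Hypothetical responses: {objective: chosen level} for each respondent.
    responses = [
        {"Navigate the system": LEVELS[3], "Run a report": LEVELS[1]},
        {"Navigate the system": LEVELS[2], "Run a report": LEVELS[3]},
        {"Navigate the system": LEVELS[3], "Run a report": LEVELS[2]},
    ]

    tallies = defaultdict(Counter)
    for answer in responses:
        for objective, level in answer.items():
            tallies[objective][level] += 1

    for objective, counter in tallies.items():
        print(objective)
        for level in LEVELS:
            print(f"  {100 * counter[level] / len(responses):5.1f}%  {level}")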

The questions took a little more time to write, but the same amount of time for respondents to answer. At first, the client was hesitant to use this new approach to survey questions, but it didn’t take them long to see how I would be able to gather much more valuable data.

The descriptive answer choices of the survey, combined with interviews and extant data reviews, allowed me to provide my client with a very thorough evaluation report. The report not only included a clear picture of the current training program, but also provided detailed and prioritized recommendations on how to improve both the training program and the work environment.

The client was thrilled. I had given them not only actionable recommendations but also the evidence they needed to procure funding to make the improvements. When my colleague checked back with them several months later, they had already implemented several of my recommendations and were in the process of implementing more.

I was amazed at how easy it was to improve the quality of the data I gathered, and it certainly impressed my client. I will never write evaluation questions again any other way.

If you plan on conducting a survey, try using Will’s approach to writing performance-focused questions. Whether you are evaluating a training program or looking for insights on improving workforce performance, you will be happy you did!

This is a guest post by Brett Christensen of Workplace Performance Consulting (www.workplaceperformance.ca/).

In this post, Brett tells us a story he recounted at a gathering of Debunker Club members at the 2018 ISPI conference in Seattle. It was such a telling story that I asked him if he would write a blog post sharing his lessons learned with you. It’s a cautionary tale about how easy it is to be fooled by information about learning that is too good to be true.

One thing to know before you read Brett’s post. He’s Canadian, which explains two things about what you will read, one of which is that he uses Canadian spellings. I’ll let you figure out the other thing.

______________________________

How I Was Fooled by Dale’s Cone

Why do we debunk?

A handful of members of the Debunker Club had the rare opportunity to meet in person on the morning of 09 April 2018 at the Starbucks Reserve Roastery in sunny (sic) Seattle prior to the second day of the International Society of Performance Improvement’s (ISPI) annual conference.

After introducing ourselves and learning that we had a “newbie” in our midst who had learned about the meeting from a friend’s re-tweet (see Networking Power on my blog), Will asked, “Why do you debunk?” I somewhat sheepishly admitted that the root cause of my debunking desires could be traced back to a presentation I had done with a couple of colleagues in 2006, very early in my training and performance career. This was before I had discovered ISPI and before I understood and embraced the principles of evidence-based practice and scientific rigour.

We were working as e-Learning Instructional Designers (evangelists?) at the time, and we were trying hard to communicate the benefits of e-Learning when it was designed correctly, which, as we all know, includes designing activities that assist in the transfer of learning. When we discovered Dale’s Cone, with the bad, bad, bad numbers, it made total sense to us. Insert foreboding music here.

The following image is an example of what we had seen (a problematic version of Dale’s Cone):

One of many bogus versions of Dale’s Cone

Our aim was to show our training development colleagues that Dale’s Cone (with the numbers) was valid and that we should all endeavour to design activity into our training. We developed three different scenarios, one for each group. One group would read silently, one would read to each other out loud, and the last group would have an activity included. Everyone would then complete a short assessment to measure transfer. The hope (Hypothesis? Pipe Dream?) was to show that the farther down the cone you went, the higher the transfer would be.

Well! That was not the outcome at all. In fact, if I remember correctly, everyone had similar scores on the assessment, and the result was the exact opposite of what we were looking for. Rather than dig deeper into that when we got back home, we were on to the next big thing, and Dale’s Cone faded in my memory. Before I go on, I’d like to point out that we weren’t total “hacks!” Our ISD process was based on valid models, and we applied Mayer and Clark’s (2007) principles in all our work. We even received a “Gold e-Learning Award” from the Canadian Society for Training and Development, now the Institute for Performance and Learning (I4PL).

It wasn’t until much later, after being in ISPI for a number of years, that I got to know Will, our head debunker, and read his research on Dale’s Cone! I was enlightened and a bit embarrassed that I had been a contributor to spreading bad “ju-ju” in the field. But hey – you don’t know what you don’t know. A couple of years after I found Will and finished my MSc, he started The Debunker Club. I knew I had to right my wrongs of the past and help spread the word to raise awareness of the myths and fads that continue to permeate our profession.

That’s why I am a debunker. Thank you, Will, for making me smarter in the work I do.

______________________________

Will’s Note: Brett is being much too kind. There are many people who take debunking very seriously these days. There are folks like De Bruyckere, Kirschner, and Hulshof, who wrote a book on learning myths. There is Clark Quinn, whose new debunking book is being released this month. There are Guy Wallace, Patti Shank, Julie Dirksen, Mirjam Neelen, Ruth Clark, Jane Bozarth, and many, many, many others (sorry if I’m forgetting you!). Now, there is also Brett Christensen, who has been very active on social media over the last few years, debunking myths and more. The Debunker Club has over 600 members, and over 50 people have applied for membership in the last month alone. And note, you are all invited to join.

Of course, debunking works most effectively if everybody jumps in and takes a stand. We must all stay current with the learning research and speak up gently and respectfully when we see bogus information being passed around.

Thanks Brett for sharing your story!! Most of us must admit that we have been taken in by bogus learning myths at some point in our careers. I know I have, and it’s a great reminder to stay humble and skeptical.

And let me point out a feature of Brett’s story that is easy to miss. Did you notice that Brett and his team actually did a rigorous evaluation of their learning intervention? It was this evaluation that enabled Brett and his colleagues to learn how well things had gone. Now imagine if Brett and his team hadn’t done a good evaluation. They would never have learned that the methods they tried were not helpful in maximizing learning outcomes! Indeed, who knows what would have happened years later when they learned that the Dale’s Cone numbers were bogus? They might not have believed the truth of it!

Finally, let me say that Dale’s Cone itself, although not really research-based, is not the myth we’re talking about. It’s when Dale’s Cone is bastardized with the bogus numbers that it becomes truly problematic. See the link above entitled “research on Dale’s Cone” for many other examples of bastardized cones.

Thanks again, Brett, for reminding us about what’s at stake. When myths are shared, the learning field loses trust, we learning professionals waste time, and our organizations bear the cost of misspent funds. Our learners are also subjected to willy-nilly experimentation that hurts their learning.

This is a guest post by Robert O. Brinkerhoff (www.BrinkerhoffEvaluationInstitute.com).

Rob is a renowned expert on learning evaluation and performance improvement. His books, Telling Training’s Story and Courageous Training, are classics.

______________________________

70-20-10: The Good, the Bad, and the Ugly

The 70-20-10 framework may not have much, if any, research basis, but it is still a good reminder to all of us in the L&D and performance improvement professions that the work-space is a powerful teacher and offers many opportunities for practice, feedback, and improvement.

But we must also recognize that a lot of the learning that is taking place on the job may not be for the good. I have held jobs in agencies, corporations and the military where I learned many things that were counter to what the organization wanted me to learn: how to fudge records, how to take unfair advantage of reimbursement policies, how to extend coffee breaks well beyond their prescribed limits, how to stretch sick leave, and so forth.

These were relatively benign instances. Consider this: Where did VW engineers learn how to falsify engine emission results? Where did Wells Fargo staff learn how to create and sell fake accounts to their unwitting customers?

Besides these egregiously ugly examples, we also have to recognize that in the case of L&D programming intended to support new strategic and other change initiatives, the last thing the organization needs is more people learning how to do their jobs in the old way. AT&T, for example, worked very hard to drive new beliefs and actions to enable the business to shift from landline technologies to wireless; on-the-job learning dragged them backwards and still creates problems today. As Allstate Insurance tries to shift its sales focus away from casualty policies to financial planning services, the old guard teaches the opposite actions, as they continue to harvest the financial benefits of policy renewals. Any organization that has to make wholesale and fundamental shifts to execute new strategies will have to cope with the negative effects of years of on-the-job learning.

When strategy is new, there are few if any on-the-job pockets of expertise and role models. Training new employees for existing jobs is a different story. Here, obviously, the on-job space is an entirely appropriate learning resource.

In short, we have to recognize that not all on-the-job learning is learning that we want. Yet on-the-job learning remains an inexorable force that we in L&D must learn how to understand, leverage, guide, and manage.

This is a guest post by Laurel Norris (https://twitter.com/neutrinosky).

Laurel is a Training Specialist at Widen Enterprises, where she is involved in developing and delivering training, focusing on data, reporting, and strategy.

______________________________

Robust Responses to Open-Ended Questions: Good Surveys Prime Respondents to Think Critically


I’ve always been a fan of evaluation. It’s a way to better understand the effectiveness of programs, determine if learning objectives are being met, and reveal ways to improve web workshops and live trainings.

Or so I thought.

It turns out that most evaluations don’t do those things. Performance-Focused Smile Sheets (the book is available at http://SmileSheets.com) taught me that, and when I implemented the recommendations from the book, I discovered something interesting. Using Dr. Thalheimer’s method improved the quality and usefulness of my survey data – and provided me with much more robust responses to open-ended questions.

By more robust, I mean that respondents revealed what was helpful and why, talked about the challenges they expected when trying it themselves, discussed which areas they thought could use more emphasis, and shared where they would have appreciated more examples. In short, they provided a huge amount of useful information.


Before using Dr. Thalheimer’s method, only a few open-ended responses were helpful. Most were along the lines of “Thanks!”, “Good webinar”, or “Well presented”. While those kinds of answers make me feel good, they don’t help me improve trainings.

I’m convinced that the improved survey primed people to be more engaged with the evaluation process and enabled them to easily provide useful information to me.

So what did I do differently? I’ll use real examples from web workshops I conducted. Both workshops ran around 45 minutes and had 30 responses to the end-of-workshop survey. They did differ in style, something I will discuss towards the end of this article.

The Old Method

Let’s talk about the old method, what Dr. Thalheimer might call a traditional smile sheet. It was (luckily) short, with three multiple-choice questions and two open-ended ones. The multiple-choice questions were:

  • How satisfied are you with the content of this web workshop?
  • How satisfied are you with the presenter’s style?
  • How closely did this web workshop align with your expectations?

Participants answered the questions with options on Likert-like scales ranging from “Very Unsatisfied” to “Very Satisfied” or “Not at all Closely” to “Very Closely”. Of course, in true smile-sheet style, the multiple-choice questions yielded no useful information. People were satisfied with the content of the webinar at a level of 4.1, “data” which did not enable me to make any useful changes to the information I provided.

Open-ended questions invited people to “Share your ideas for web workshop topics” and offer “Additional Comments”. Of the thirteen open-ended responses I got, five provided useful information. The rest were either a thank you or some form of praise.

The New Method

Respondents were asked four multiple-choice questions that gauged the effectiveness of the web workshop, how much the concepts would help them improve work outcomes, how well they understood the concepts taught, and whether or not they would use the skills they learned in the workshop at their job.

The web workshop was about user engagement, in particular how administrators can increase engagement with the systems they manage. The questions were:

  • In regard to user engagement, how able are you to put what you’ve learned into practice on the job?
  • From your perspective, how valuable are the concepts taught in the workshop? How much will they help improve engagement with your site?
  • How well do you feel you understand user engagement?
  • How motivated will you be to utilize these user engagement skills at your work?

Responses were specific and adapted from Dr. Thalheimer’s book. For example, here were the response options for the question “In regard to user engagement, how able are you to put what you’ve learned into practice on the job?”

  • I’m not at all able to put the concepts into practice.
  • I have general awareness of the concepts taught, but I will need more training or practice to complete user engagement projects.
  • I am able to work on user engagement projects, but I’ll need more hands-on experience to be fully competent in using the concepts taught.
  • I am able to complete user engagement projects at a fully competent level in using the concepts taught.
  • I am able to complete user engagement projects at an expert level in using the concepts taught.

All four multiple-choice questions had similarly complete options to choose from. From those responses, I was able to more appropriately determine the effectiveness of the workshop and whether my training content was performing as expected.
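
As an illustration of how such a distribution might be judged against a standard, here is a small Python sketch; the response counts and the 70% target are invented for this example, not figures from the workshop.

    # Sketch: judge a competence question against an acceptability standard.
    # Counts are invented; the standard (70% of respondents at "able to work
    # on user engagement projects" or better) is an illustrative choice.
    ANSWER_COUNTS = {  # answer choice -> number of respondents selecting it
        "not at all able": 2,
        "need more training or practice": 7,
        "able, but need more hands-on experience": 12,
        "fully competent": 6,
        "expert level": 1,
    }
    ACCEPTABLE = {
        "able, but need more hands-on experience",
        "fully competent",
        "expert level",
    }
    TARGET_PCT = 70

    total = sum(ANSWER_COUNTS.values())
    acceptable = sum(n for answer, n in ANSWER_COUNTS.items() if answer in ACCEPTABLE)
    acceptable_pct = 100 * acceptable / total

    verdict = "meets" if acceptable_pct >= TARGET_PCT else "falls short of"
    print(f"{acceptable_pct:.0f}% of respondents reported an acceptable level, "
          f"which {verdict} the {TARGET_PCT}% target.")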

The open-ended question was relatively bland. I asked “What else would you like to share about your experience during the webinar today?” and received twelve specific, illuminating responses, such as:

“Loved the examples shown from other sites. Highly useful!”

“It validated some of the meetings I have had with my manager about user engagement and communication about our new site structure. It will be valuable for upcoming projects about asset distribution throughout the company.”

“I think the emphasis on planning the plan is helpful. I think I lack confidence in designing desk drops for Design teams. Also – I’m heavily engaged with my users now as it is – I am reached out to multiple times per day…but I think some of these suggestions will be valuable for more precision in those engagements.”

Even responses that didn’t give me direct feedback on the workshop, like “Still implementing our site, so a lot of today’s content isn’t yet relevant”, gave me information about my audience.

Conclusion

Clearly, I’m thrilled with the kind of information I am getting from using Dr. Thalheimer’s methods. I get useful, rich data from respondents that helps me better evaluate my content and understand my audience.

There is one aspect of using the new method that might have skewed the data. I designed the second web workshop after I read the book, and Dr. Thalheimer’s Training Effectiveness Taxonomy influenced the design. I thought more about the goals for the workshop, provided cognitive supports, repeated key messages, and did some situation-action triggering.

Based on those changes, the second web workshop was probably better than the first and it’s possible that the high-quality, engaging workshop contributed to the robust responses to open-ended questions I saw.

Either way, my evaluations (and learner experiences) are revolutionized. Has anyone seen a similar improvement in open-ended response rates since implementing performance-focused smile sheets?