Guest Post by Brett Christensen: How I Was Fooled by Dale’s Cone


This is a guest post by Brett Christensen of Workplace Performance Consulting (www.workplaceperformance.ca/).

In this post, Brett tells us a story he recounted at a gathering of Debunker Club members at the 2018 ISPI conference in Seattle. It was such a telling story that I asked him if he would write a blog post sharing his lessons learned with you. It’s a cautionary tale about how easy it is to be fooled by information about learning that is too good to be true.

One thing to know before you read Brett’s post. He’s Canadian, which explains two things about what you will read, one of which is that he uses Canadian spellings. I’ll let you figure out the other thing.

______________________________

How I Was Fooled by Dale’s Cone

Why do we debunk?

A handful of members of the Debunker Club had the rare opportunity to meet in person on the morning of 09 April 2018 at the Starbucks Reserve Roastery in sunny (sic) Seattle prior to the second day of the International Society of Performance Improvement’s (ISPI) annual conference.

After introducing ourselves and learning that we had a “newbie” in our midst who had learned about the meeting from a friend’s re-tweet (see Networking Power on my blog), Will asked, “Why do you debunk?” I somewhat sheepishly admitted that the root cause of my debunking desires could be traced back to a presentation I had done with a couple of colleagues in 2006, very early in my training and performance career. This was before I had discovered ISPI and before I understood and embraced the principles of evidence-based practice and scientific rigour.

We were working as e-Learning Instructional Designers (evangelists?) at the time, and we were trying hard to communicate the benefits of e-Learning when it was designed correctly, which as we all know includes the design of activities that assist in transfer of learning. When we discovered Dale’s Cone (with the bad, bad, bad numbers), it made total sense to us. Insert foreboding music here.

The following image is an example of what we had seen (a problematic version of Dale’s Cone):

One of many bogus versions of Dale’s Cone

Our aim was to show our training development colleagues that Dale’s Cone (with the numbers) was valid and that we should all endeavour to design activity into our training. We developed three different scenarios, one for each group. One group would read silently, one would read to each other out loud, and the last group would have an activity included. Everyone would then do a short assessment to measure transfer. The hope (Hypothesis? Pipe Dream?) was to show that the farther down the cone you went, the higher the transfer would be.

Well! That was not the outcome at all. In fact, if I remember correctly, everyone had similar scores on the exercise, and the result was the exact opposite of what we were looking for. Rather than dig deeper into that when we got back home, we were on to the next big thing, and Dale’s Cone faded in my memory. Before I go on, I’d like to point out that we weren’t total “hacks!” Our ISD process was based on valid models and we applied Mayer and Clark’s (2007) principles in all our work. We even received a Gold e-Learning Award from the Canadian Society for Training and Development, now the Institute for Performance and Learning (I4PL).

It wasn’t until much later, after being in ISPI for a number of years, that I got to know Will, our head debunker, and read his research on Dale’s Cone! I was enlightened and a bit embarrassed that I had been a contributor to spreading bad “ju-ju” in the field. But hey – you don’t know what you don’t know. A couple of years after I found Will and finished my MSc, he started The Debunker Club. I knew I had to right my wrongs of the past and help spread the word to raise awareness of the myths and fads that continue to permeate our profession.

That’s why I am a debunker. Thank you, Will, for making me smarter in the work I do.

______________________________

Will’s Note: Brett is being much too kind. There are many people who take debunking very seriously these days. There are folks like De Bruyckere, Kirschner, and Hulshof, who wrote a book on learning myths. There is Clark Quinn, whose new debunking book is being released this month. There are Guy Wallace, Patti Shank, Julie Dirksen, Mirjam Neelen, Ruth Clark, Jane Bozarth, and many, many, many others (sorry if I’m forgetting you!). Now, there is also Brett Christensen, who has been very active on social media over the last few years, debunking myths and more. The Debunker Club has over 600 members, and over 50 people have applied for membership in the last month alone. And note, you are all invited to join.

Of course, debunking works most effectively if everybody jumps in and takes a stand. We must all stay current with the learning research and speak up gently and respectfully when we see bogus information being passed around.

Thanks Brett for sharing your story!! Most of us must admit that we have been taken in by bogus learning myths at some point in our careers. I know I have, and it’s a great reminder to stay humble and skeptical.

And let me point out a feature of Brett’s story that is easy to miss. Did you notice that Brett and his team actually did a rigorous evaluation of their learning intervention? It was this evaluation that enabled Brett and his colleagues to learn how well things had gone. Now imagine if Brett and his team hadn’t done a good evaluation. They would never have learned that the methods they tried were not helpful in maximizing learning outcomes! Indeed, who knows how they would have reacted when they learned, years later, that the Dale’s Cone numbers were bogus. They might not have believed the truth of it!

Finally, let me say that Dale’s Cone itself, although not really research-based, is not the myth we’re talking about. It’s when Dale’s Cone is bastardized with the bogus numbers that it becomes truly problematic. Follow the link above entitled “research on Dale’s Cone” for many other examples of bastardized cones.

Thanks again, Brett, for reminding us about what’s at stake. When myths are shared, the learning field loses trust, we learning professionals waste time, and our organizations bear the cost of misspent funds. Our learners are also subjected to willy-nilly experimentation that hurts their learning.

Guest Post from Robert O. Brinkerhoff: 70-20-10: The Good, the Bad, and the Ugly


This is a guest post by Robert O. Brinkerhoff (www.BrinkerhoffEvaluationInstitute.com).

Rob is a renowned expert on learning evaluation and performance improvement. His books, Telling Training’s Story and Courageous Training, are classics.

______________________________

70-20-10: The Good, the Bad, and the Ugly

The 70-20-10 framework may not have much, if any, research basis, but it is still a good reminder to all of us in the L&D and performance improvement professions that the workplace is a powerful teacher and poses many opportunities for practice, feedback, and improvement.

But we must also recognize that a lot of the learning that is taking place on the job may not be for the good. I have held jobs in agencies, corporations and the military where I learned many things that were counter to what the organization wanted me to learn: how to fudge records, how to take unfair advantage of reimbursement policies, how to extend coffee breaks well beyond their prescribed limits, how to stretch sick leave, and so forth.

These were relatively benign instances. Consider this: Where did VW engineers learn how to falsify engine emission results? Where did Wells Fargo staff learn how to create and sell fake accounts to their unwitting customers?

Besides these egregiously ugly examples, we also have to recognize that in the case of L&D programming intended to support new strategic and other change initiatives, the last thing the organization needs is more people learning how to do their jobs in the old way. AT&T, for example, worked very hard to drive new beliefs and actions to enable the business to shift from landline technologies to wireless; on-the-job learning dragged them backwards and still creates problems today. As Allstate Insurance tries to shift its sales focus away from casualty policies to financial planning services, the old guard teaches the opposite actions, as they continue to harvest the financial benefits of policy renewals. Any organization that has to make wholesale and fundamental shifts to execute new strategies will have to cope with the negative effects of years of on-the-job learning.

When strategy is new, there are few if any on-the-job pockets of expertise and role models. Training new employees for existing jobs is a different story. Here, obviously, the on-job space is an entirely appropriate learning resource.

In short, we have to recognize that not all on-the-job learning is learning that we want. Yet on-the-job learning remains an inexorable force that we in L&D must learn to understand, leverage, guide, and manage.

Guest Post by Laurel Norris: Robust Responses to Open-Ended Questions


This is a guest post by Laurel Norris (https://twitter.com/neutrinosky).

Laurel is a Training Specialist at Widen Enterprises, where she is involved in developing and delivering training, focusing on data, reporting, and strategy.

______________________________

Robust Responses to Open-Ended Questions: Good Surveys Prime Respondents to Think Critically

By Laurel Norris


I’ve always been a fan of evaluation. It’s a way to better understand the effectiveness of programs, determine if learning objectives are being met, and reveal ways to improve web workshops and live trainings.

Or so I thought.

It turns out that most evaluations don’t do those things. Performance-Focused Smile Sheets (the book is available at http://SmileSheets.com) taught me that, and when I implemented the recommendations from the book, I discovered something interesting. Using Dr. Thalheimer’s method improved the quality and usefulness of my survey data, and it provided me with much more robust responses to open-ended questions.

By more robust, I mean that respondents revealed what was helpful and why, talked about what they thought their challenges would be in trying it themselves, discussed which areas they thought could use more emphasis, and shared where they would have appreciated more examples. In short, they provided a huge amount of useful information.


Before I used Dr. Thalheimer’s method, only a few open-ended responses were helpful. Most were along the lines of “Thanks!”, “Good webinar”, or “Well presented”. While those kinds of answers make me feel good, they don’t help me improve trainings.

I’m convinced that the improved survey primed people to be more engaged with the evaluation process and enabled them to easily provide useful information to me.

So what did I do differently? I’ll use real examples from web workshops I conducted. Both workshops ran around 45 minutes and had 30 responses to the end-of-workshop survey. They did differ in style, something that I will discuss towards the end of this article.

The Old Method

Let’s talk about the old method, what Dr. Thalheimer might call a traditional smile sheet. It was (luckily) short, with three multiple choice questions and two open-ended. Multiple choice questions included:

  • How satisfied are you with the content of this web workshop?
  • How satisfied are you with the presenter’s style?
  • How closely did this web workshop align with your expectations?

Participants answered the questions with options on Likert-like scales ranging from “Very Unsatisfied” to “Very Satisfied” or “Not at all Closely” to “Very Closely”. Of course, in true smile-sheet style, the multiple choice questions yielded no useful information. People were 4.1-level satisfied with the content of the webinar, “data” which did not enable me to make any useful changes to the information I provided.

Open-ended questions invited people to “Share your ideas for web workshop topics” and offer “Additional Comments”. Of the thirteen open-ended responses I got, five provided useful information. The other eight were either a thank you or some form of praise.
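
To make that limitation concrete, here is a minimal sketch in Python of how a traditional Likert item collapses into a single average. The counts and the 1-to-5 coding are invented for illustration (chosen so the average works out to the 4.1 mentioned above); they are not the actual survey data.

  # Hypothetical tally for "How satisfied are you with the content of this web workshop?"
  # Coding: 1 = Very Unsatisfied ... 5 = Very Satisfied (illustrative only)
  responses = {1: 1, 2: 2, 3: 4, 4: 9, 5: 14}  # invented counts for 30 respondents

  total = sum(responses.values())
  average = sum(code * count for code, count in responses.items()) / total
  print(f"Average satisfaction: {average:.1f} out of 5")  # prints 4.1
  # A single averaged score gives no hint about what to change in the workshop.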

The New Method

Respondents were asked four multiple choice questions that judged the effectiveness of the web workshop, how much the concepts would help them improve work outcomes, how well they understood the concepts taught, and whether or not they would use the skills they learned in the workshop at their job.

The web workshop was about user engagement, in particular, how administrators can increase engagement with the systems they manage. The questions were:

  • In regard to user engagement, how able are you to put what you’ve learned into practice on the job?
  • From your perspective, how valuable are the concepts taught in the workshop? How much will they help improve engagement with your site?
  • How well do you feel you understand user engagement?
  • How motivated will you be to utilize these user engagement skills at your work?

Responses were specific and adapted from Dr. Thalheimer’s book. For example, here were the response options for the question “In regard to user engagement, how able are you to put what you’ve learned into practice on the job?”

  • I’m not at all able to put the concepts into practice.
  • I have general awareness of the concepts taught, but I will need more training or practice to complete user engagement projects.
  • I am able to work on user engagement projects, but I’ll need more hands-on experience to be fully competent in using the concepts taught.
  • I am able to complete user engagement projects at a fully competent level in using the concepts taught.
  • I am able to complete user engagement projects at an expert level in using the concepts taught.

All four multiple choice questions had similarly complete options to choose from. From those responses, I was able to more appropriately determine the effectiveness of the workshop and whether my training content was performing as expected.
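
For contrast, here is a minimal sketch, again in Python and with invented counts, of how answers to a question like this can be reported as a per-option distribution rather than collapsed into an average. The option wording is abridged from the list above; the numbers are hypothetical.

  # Hypothetical tally for "In regard to user engagement, how able are you to put
  # what you've learned into practice on the job?" (option wording abridged)
  options = [
      "Not at all able",
      "General awareness; need more training or practice",
      "Able to work on projects; need more hands-on experience",
      "Able to complete projects at a fully competent level",
      "Able to complete projects at an expert level",
  ]
  counts = [1, 6, 14, 8, 1]  # invented counts for 30 respondents

  total = sum(counts)
  for option, count in zip(options, counts):
      print(f"{count:>2} ({count / total:4.0%})  {option}")
  # Reporting the full distribution shows where learners actually stand,
  # which is far more actionable than a single averaged score.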

The open-ended question was relatively bland. I asked “What else would you like to share about your experience during the webinar today?” and received twelve specific, illuminating responses, such as:

“Loved the examples shown from other sites. Highly useful!”

“It validated some of the meetings I have had with my manager about user engagement and communication about our new site structure. It will be valuable for upcoming projects about asset distribution throughout the company.”

“I think the emphasis on planning the plan is helpful. I think I lack confidence in designing desk drops for Design teams. Also – I’m heavily engaged with my users now as it is – I am reached out to multiple times per day…but I think some of these suggestions will be valuable for more precision in those engagements.”

Even responses that didn’t give me direct feedback on the workshop, like “Still implementing our site, so a lot of today’s content isn’t yet relevant”, gave me information about my audience.

Conclusion

Clearly, I’m thrilled with the kind of information I am getting from using Dr. Thalheimer’s methods. I get useful, rich data from respondents that helps me better evaluate my content and understand my audience.

There is one positive aspect of using the new method that might have skewed the data. I designed the second web workshop after I read the book, and Dr. Thalheimer’s Training Effectiveness Taxonomy influenced the design. I thought more about the goals for the workshop, provided cognitive supports, repeated key messages, and did some situation-action triggering.

Based on those changes, the second web workshop was probably better than the first and it’s possible that the high-quality, engaging workshop contributed to the robust responses to open-ended questions I saw.

Either way, my evaluations (and learner experiences) are revolutionized. Has anyone seen a similar improvement in open-ended response rates since implementing performance-focused smile sheets?