Common Mistakes in Workplace Learning Evaluation


Dateline: This article will be updated periodically. As presented here, it is in its first iteration. I invite you to share your ideas and comments below.

By Will Thalheimer

Introduction

I’ve been in the workplace learning field for over 30 years, have made a lot of mistakes myself and have seen other mistakes get made over and over. In the last decade, as I’ve turned my attention more and more to learning evaluation, I see us making a number of critical mistakes. Because the biggest problem with these mistakes is that we continue to make them—often without realizing our errors—I aim to capture a list of common evaluation mistakes here, and update the list from time to time. I welcome your ideas. In the comment section below, please add your thoughts. Thanks!

Common Evaluation Mistakes

Listed in no particular order… and with common themes sometimes repeated across items…

When Measuring Learner Perceptions

  1. We rely on smile sheets that only tell us about learner satisfaction and course reputation—they don’t tell us enough about learning effectiveness.
  2. We rely on smile sheets as our only metric.
  3. We look at our smile sheet results and forget that what we’re seeing is not all that might be seen. That is, we may not realize that our results might be neglecting critical learning results such as learners’ comprehension, their motivation to apply what they’ve learned, their ability to remember, their success in applying what they’ve learned, etc.
  4. We ask learners about their learning, about their on-the-job performance, and about organizational results; and think we’ve actually measured learning, on-the-job performance, and organizational results—but we only have learners’ subjective opinions about these constructs.
  5. We ask learners questions they won’t be good at answering. (For example: “What percentage of your learning will you use in your job?” “Did the learning help you achieve the learning objectives?” “Did your instructor help you learn?”).
  6. We use Likert-like scales and numeric scales, both of which are too fuzzy to support good respondent decision-making, to motivate attention to the questions, or to produce results that are clear and actionable (see the sketch after this list).
  7. We don’t often use after-training learner surveys to get insights into learning application.
  8. We use affirmations in our questions, biasing our results toward the positive.
  9. In using Likert-like scales, we put the positive choices first, biasing responses toward the positive.
  10. We don’t follow up with learners to let them know what we’ve learned and the design improvements we’ve been able to target based on their feedback.
  11. We don’t attempt to persuade our learners of the importance of the learner surveys we are asking them to complete.
  12. We don’t use our survey questions as opportunities to send stealth messages to our key stakeholders about important learning-design imperatives.
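
To make items 6, 8, and 9 concrete, here is a minimal sketch of a better-behaved smile-sheet question. The wording and answer options are hypothetical illustrations, not published survey items: the fuzzy 1-to-5 scale is replaced with concrete, behaviorally anchored options, the stem is neutral rather than an affirmation, the less favorable options come first, and results are reported as counts per option rather than an average.

```python
# Hypothetical smile-sheet question (illustrative wording, not a published item).
# Concrete, behaviorally anchored options replace a fuzzy 1-to-5 scale, and the
# less favorable options are listed first to counter positivity bias.
question = {
    "id": "application_readiness",
    "stem": "How able will you be to put what you learned into practice on the job?",
    "options": [  # deliberately ordered least favorable first
        "I am unlikely to use this in my work.",
        "I understand the concepts but would need more guidance before using them.",
        "I can use this, but only with a job aid or help from others.",
        "I can use this on my own in typical situations.",
        "I can use this on my own, even in difficult or unusual situations.",
    ],
}

def summarize(responses, question):
    """Report raw counts per option so results stay concrete and actionable."""
    counts = {option: 0 for option in question["options"]}
    for response in responses:
        counts[response] += 1
    return counts

# Illustrative usage: three made-up responses, reported as counts (not a "4.2 average")
sample = [question["options"][3], question["options"][1], question["options"][3]]
print(summarize(sample, question))
```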

Biases in Measuring Learning

  1. We measure learning in the learning context where learners are artificially triggered by contextual stimuli that help them remember more than they’ll remember when they are in a different context—for example, at their worksite.
  2. We measure learning near the end of learning, when learners have a relatively easy time remembering—so we fail to measure our learning interventions’ ability to minimize forgetting and support remembering.
  3. We measure low-importance learning metrics (like knowledge questions) rather than learning as represented in realistic decision-making and task performance.

Failing to Measure Learning Factors

  1. When we focus on measuring on-the-job performance and/or business results WHILE NEGLECTING to measure learning factors, we create for ourselves an inability to figure out how to improve our learning designs.
  2. When we don’t measure learning factors on a routine basis, we leave ourselves in the dark, we make it impossible to create a cycle of continuous improvement, and we are essentially abdicating our responsibility as professionals.

Failing to Compare Learning Factors

  1. We rarely, if ever, compare one learning method with another, as marketers do with A/B testing, for example.
  2. Even in elearning, where it would not be too difficult to randomize learners across different methods, we fail to take advantage (a minimal sketch follows this list).
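
Here is a minimal sketch of what such a comparison could look like in elearning: randomly assign learners to one of two course versions and compare their scores on the same delayed, scenario-based test. The learner IDs, scores, and version labels are illustrative, not data from a real course.

```python
# A/B comparison sketch for elearning (illustrative data only).
import random
from statistics import mean

def assign_versions(learner_ids, seed=42):
    """Randomly assign each learner to course version "A" or "B"."""
    rng = random.Random(seed)
    return {learner: rng.choice(["A", "B"]) for learner in learner_ids}

def compare_versions(delayed_test_scores, assignments):
    """Compare mean delayed-test scores between the two versions."""
    by_version = {"A": [], "B": []}
    for learner, score in delayed_test_scores.items():
        by_version[assignments[learner]].append(score)
    return {version: mean(scores) for version, scores in by_version.items() if scores}

# Illustrative usage with made-up scores (0-100) on the same delayed scenario test
learners = [f"learner_{i}" for i in range(1, 9)]
assignments = assign_versions(learners)
made_up_scores = {learner: random.randint(55, 95) for learner in learners}
print(compare_versions(made_up_scores, assignments))
```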

Not Seeing Behind the Pretty Curtain

  1. We too often get sucked into gorgeous data visualizations without appreciating that the underlying data might be misleading, worthless, irrelevant, etc.
  2. Dashboardism is a version of this. If it looks sophisticated, we assume there is intelligence underneath.
  3. Big data and artificial intelligence may hold promise, but rarely in learning do we have big data. Certainly, in evaluating a single course, there is no big data. Even when we do have lots of data, the data has to be meaningful to be of use. Machine learning doesn’t work well if the most important factors aren’t collected as data. When we measure what is easy to measure instead of what is important to measure, we will discover mere trinkets of meaning.

Measuring On-the-Job Learning

  1. While it would be great to capture data on people’s efforts in learning on the job, so far it seems we are measuring what’s easy to measure, but perhaps not what is important to measure.
  2. We have failed to consider that measuring on-the-job learning could have as many negative consequences as positive consequences.
  3. Even with the promise of xAPI, the big obstacle is how to capture on-the-job learning data without such data-capture being onerous.
  4. Sometimes we forget that managers have been responsible for their teams’ learning ever since the modern organization was born. A mistake we make is creating another layer of learning infrastructure instead of leveraging managers.

The Biasing Effects of Pretests

  1. We forget that pretests—even if there is no feedback given—produce learning effects; perhaps activating interest, triggering future knowledge-seeking behavior, creating schemas that support knowledge formation, etc. This is problematic when we take pretested learning programs as representative of non-pretested learning results. For example, when we assume that a course piloted with a pretest-posttest design will produce similar results to the same course without the pretest.
  2. There is a similar problem with time-series evaluation designs, as earlier assessments can affect learning for both good and ill. So, for example, if we see improvements in learning over time, it might be due to the assessment intervention itself rather than to the actual learning program (a comparison sketch follows this list).
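
A minimal sketch of the comparison these items imply: run the same course with and without a pretest, give both groups the same posttest, and see how much of the apparent result is attributable to the pretest itself. The numbers below are hypothetical.

```python
# Hypothetical numbers only: same course, same posttest, with and without a pretest.
from statistics import mean

pretested_group_posttest = [78, 82, 75, 80]    # took the pretest and the course
no_pretest_group_posttest = [70, 74, 69, 72]   # took the course only

pretest_effect = mean(pretested_group_posttest) - mean(no_pretest_group_posttest)
print(f"Posttest points plausibly attributable to the pretest: {pretest_effect:.1f}")
```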

Not Focusing First on Evaluation Goals

  1. We too often measure just to measure. We don’t think about what decisions we want to be able to make based on our evaluation work.
  2. Too often we don’t start with the questions we want answers to and design our evaluations to answer those questions.
  3. We ask questions on learner surveys that give us information we cannot act on, or are unlikely to act on even if we can.

Not Using Evaluation as a Golden Opportunity to Educate or Nudge Our Key Stakeholders (Including Ourselves)

  1. We fail to use the rare opportunity that evaluation provides—the opportunity to have meaningful conversations with key stakeholders—to push specific goals we have for action.
  2. We fail to use evaluation to promote a brand-like idea of who we are as a learning organization. For example, by asking questions about the support we provide to help learners apply their learning to their work, we could burnish our brand image as a learning department that is also a performance-improvement department.

Not Integrating Evaluation into Our Design and Development Process from the Beginning

  1. Too often we begin thinking about evaluation after we’ve already designed a learning program.
  2. Ideally, we would start with a set of evaluation objectives (that is, clear descriptions of the metrics and evaluation methods we will use), so we know in advance—and can negotiate with stakeholders in advance—how we will measure our learning outcomes (a sketch of such objectives follows this list).
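
Here is a rough sketch of what evaluation objectives might look like if we captured them as data before design begins. The fields and values are hypothetical illustrations, not a published template; the point is simply that each objective names a metric, a method, a timing, and the decision it will inform.

```python
# Hypothetical evaluation objectives captured as data before design begins.
evaluation_objectives = [
    {
        "outcome": "Learners make correct triage decisions",
        "metric": "scenario-based decision test, percent correct",
        "method": "delayed online assessment",
        "when": "3 weeks after the course",
        "decision_supported": "revise practice scenarios if fewer than 80% are correct",
    },
    {
        "outcome": "Learners apply the checklist on the job",
        "metric": "manager-verified use of the job aid",
        "method": "brief manager survey",
        "when": "30 days after the course",
        "decision_supported": "add prompting mechanisms if reported use is low",
    },
]

for objective in evaluation_objectives:
    print(f"{objective['outcome']} -> {objective['metric']} ({objective['when']})")
```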

Designing Evaluation Items from Poorly Defined Objectives

  1. Too often we begin the evaluation process by specifying low-level learning objectives that utilize action verbs (e.g., list, explain, etc.) and then derive our evaluation items from those low-level constructs—causing our evaluations to be focused on less-than-meaningful metrics.
  2. Too often we utilize Bloom’s taxonomy to design our learning-evaluation assessments, distracting us from focusing on more powerful research-inspired considerations like contextually-realistic decisions and tasks.
  3. Ideally, instead of starting from poorly defined instructional objectives, we should be starting from more performance-focused evaluation objectives.

Measuring Only Obtrusively

  1. We focus mostly on obtrusive measures of learning (like knowledge checks pasted on at the end of modules) when we could also use unobtrusive measures of learning (challenging tasks incorporated as part of the learning).
  2. We fail to utilize subscription-learning opportunities (short learning sessions spread over time) to measure learning, where challenges feel like learning to learners but are also used by us to evaluate the strengths and weaknesses of our learning designs.

Failing to Distinguish between Validating Data and Non-Validating Data

  1. We too often fail to distinguish between data that can validly assess the success of a learning intervention and data that is a poor indicator of success. Some data may be useful to us, but not indicative of the success of learning. For example, the number of people who attend a training tells us nothing about whether the learning was well designed, but it can give us data to ensure that we have a large enough room the next time we run the class.
  2. We too often give ourselves credit for success when it is unwarranted; for example by capturing and reporting data on attendance, learner attention, learner interest, and learner participation—all of which are non-validating data. They can tell us things, but they cannot provide a valid indication of whether learning was successful.

Failing to Consider the Importance of Remembering

  1. We fail to see remembering as a critical node on the causal pathway from comprehension to remembering to work performance to results. By ignoring the critical importance of remembering, we leave a big blind spot in our evaluation systems.
  2. We too often measure learner comprehension and assume the result will be indicative of learners’ later ability to remember what they’ve learned. This is a huge blind spot because people can demonstrate understanding today but forget that understanding tomorrow or a week from now. By fooling ourselves with short-term tests of memory, we enable our learning interventions to continue with designs that fail to support long-term remembering (and fail to minimize forgetting).
  3. Our reliance on the Kirkpatrick-Katzell Four-Level Model exacerbates this tendency, as the Four Levels completely ignore the importance of remembering.

Failing to Evaluate Our Use of Prompting Mechanisms

  1. While we know we should measure learning, we almost always completely forget to measure the use and value of prompting mechanisms like job aids, performance support, signage, and other devices for directly prompting or guiding performance.
  2. Similarly, we fail to measure the synergy between training and prompting mechanisms. Certainly there are better ways to mix training and job aids for example, yet we rarely test different ways to use job aids to support training results.
  3. We also fail to examine grassroots prompting mechanisms—those crafted not through some formal authority, but by people doing the work. By gathering grassroots job aids and evaluating them against the more formal ones we’ve developed, we can make better decisions about which ones to use.

Failing to Measure when Learning Technologies Give Us Obvious Opportunities

  1. As more and more learning interventions utilize some form of digital technology, we are failing to leverage the data-gathering capabilities of these technologies for learning evaluation. Even the simplest affordances are going unused. For example, we could easily keep track of how long it takes a learner to complete a task, we could diagnose knowledge through relatively simple mechanisms, and we could provide follow-up assessment items after delays—and yet too few of us are using these capabilities, and our authoring tools have not been redesigned to intuitively enable this functionality (a minimal sketch follows this list).
  2. Too often we fail to use the power of technology to enable social evaluation methods. For example, we know from the research that peers often provide better feedback than experts to support learning—surely we can bring this capability to our evaluation practice as well.
  3. We fail to use technology to enable random assignment of learners to treatments—to different learning methods—to give us insights into what works best for our particular learners, content, situation, etc.
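
As a small illustration of the first item, here is a sketch of the kind of data a digital lesson could capture with almost no effort: how long a practice task took, packaged as an xAPI-style statement, plus a note of when a delayed follow-up assessment is due. The helper functions, email address, activity URL, and scheduling approach are hypothetical, and actually sending the statement to a learning record store is not shown.

```python
# Sketch of easy-to-capture data: task completion time as an xAPI-style statement,
# plus a reminder of when a delayed follow-up assessment is due. Learner email,
# activity URL, and scheduling approach are hypothetical placeholders.
import time
from datetime import datetime, timedelta, timezone

def build_completion_statement(learner_email, activity_id, seconds_taken):
    """Package how long a practice task took, in xAPI-style statement form."""
    return {
        "actor": {"mbox": f"mailto:{learner_email}"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
        "object": {"id": activity_id},
        "result": {"duration": f"PT{int(seconds_taken)}S"},  # ISO 8601 duration
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def schedule_followup(learner_email, activity_id, delay_days=21):
    """Note when a delayed assessment should be offered to this learner."""
    due = datetime.now(timezone.utc) + timedelta(days=delay_days)
    return {"learner": learner_email, "activity": activity_id, "due": due.isoformat()}

# Illustrative usage: time a practice task, record it, and queue a delayed check
start = time.monotonic()
# ... the learner works through the scenario here ...
elapsed = time.monotonic() - start
statement = build_completion_statement(
    "pat@example.com", "https://example.com/activities/triage-scenario", elapsed)
followup = schedule_followup(
    "pat@example.com", "https://example.com/activities/triage-scenario")
print(statement["result"]["duration"], followup["due"])
```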

Failing to Push Against Poor Evaluation Practices

  1. Too often we report out evaluation data that are of dubious merit. For example, we highlight the number of learners who completed our programs, their general level of satisfaction, the number of words they utilized in a discussion forum, whether they were paying attention during a page-turning elearning program. By reporting these out, we venerate these measures as important, when they are not—or when they are not as important as other evaluation metrics.
  2. Our trade organizations are guilty of this as well, honoring organizations with “best of” awards that highlight the number of people who were trained, etc.
  3. The Kirkpatrick-Katzell Four-Level model is silent on poor practices, except that it does rightly cast suspicion on learner reaction data by putting it only at Level 1.

Please Add Ideas or Comment

This is some of what I’ve seen, but I’m sure some of you have seen other mistakes in learning evaluation. Please add them below… Also, feel free to comment on the items in my list, improving them, adding contingencies, attempting to refute that they are mistakes, etc. Thanks for your insights!

Will Thalheimer

Michael Allen’s Question about the Last Thing in Training


At a recent online discussion held by the Minnesota Chapter of ISPI, where they were discussing the Serious eLearning Manifesto, Michael Allen offered a brilliant idea for learning professionals.

Michael’s company, Allen Interactions, talks regularly with prospective clients. It is in this capacity that Michael often asks this question (or one with this gist):

What is the last thing you want your learners to be doing in training before they go back to their work?

Michael knows the answer—he is using Socratic questioning here—and the answer should be obvious to those skilled in developing learning. We want people to be practicing what they’ve learned, and hopefully practicing in as realistic a way as possible. Of course!

Of course, but too often we don’t think like this. We have our instructional objectives and we plow through to cover content, hoping against hope that the knowledge seeds we plant will magically turn into performance on the job—as if knowledge can be harvested without any further nurturance.

We must remember the wisdom behind Michael’s question, that it is our job as learning professionals to ensure our learners are not only gaining knowledge, but that they are getting practice in making decisions and practicing realistic tasks.

One way to encourage yourself to engage in these good practices is to utilize the LTEM model, a learning evaluation model designed to support us as learning professionals in measuring what’s truly important in learning. LTEM’s Tiers 5 and 6 encourage us to evaluate learners’ proficiency in making work-relevant decisions (Tier 5) and performing work-relevant tasks (Tier 6).

Whatever method you use to encourage your organization and team to engage in this research-based best practice, remember that we are harming our learners when we just teach content. Without practice, very little learning will transfer to the workplace.

Article on Training and Climate Change


Just today I wrote an article on Training and Climate Change and what, if anything, we workplace learning professionals can do about it.

See and comment on LinkedIn, where I published the article.

Who Will Rule Our Conferences? Truth or Bad-Faith Vendors?


You won’t believe what a vendor said about a speaker at a conference—when that speaker spoke the truth.


Conferences are big business in the workplace learning field.

Conferences make organizers a ton of money. That’s great, because pulling off a good conference is not as easy as it looks. In addition to finding a venue and attracting people to come to your event, you also have to find speakers. Some speakers are known quantities, but others are unknown.

In the learning field, where we are inundated with fads, myths, and misconceptions, finding speakers who will convey the most helpful messages, and avoid harmful ones, is particularly difficult. Ideally, as attendees, we’d like to hear truth from our speakers rather than fluff and falsehoods.

On the other hand, vendors pay big money to exhibit their products and services at a conference. Their goal is connecting with attendees who are buyers or who can influence buyers. Even conferences that don’t have exhibit halls usually get money from vendors in one way or another.

So, conference owners have two groups of customers to keep happy: attendees and vendors. In an ideal world, both groups would want the most helpful messages to be conveyed. Truth would be a common goal. So for example, let’s say new research is done that shows that freep learning is better than traditional elearning. A speaker at a conference shares the news that freep learning is great. Vendors in the audience hear the news. What will they do?

  • Vendor A hires a handsome and brilliant research practitioner to verify the power of freep learning with the idea of moving forward quickly and providing this powerful new tool to their customers.
  • Vendor B jumps right in and starts building freep learning to ensure their customers get the benefits of this powerful new learning method.
  • Vendor C pulls the conference organizers aside and tells them, “If you ever use that speaker again, we will not be back; you will not get our money any more.”

Impossible, you say!

Would never happen, you think!

You’re right. Not enough vendors are hiring fadingly good-looking, brilliant research-to-practice experts!

Here’s a true story from a conference that took place within the last year or so.

Clark Quinn spoke about learning myths and misconceptions during his session, describing the findings from his wonderful book. Later, when he read his conference evaluations, he found the following comment among the more admiring testimonials:

“Not cool to debunk some tools that exhibitors pay a lot of money to sell at [this conference] only to hear from a presenter at the conference that in his opinion should be debunked. Why would I want to be an exhibitor at a conference that debunks my products? I will not exhibit again if this speaker speaks at [conference name]” (emphasis added).

This story was recounted by Clark and captured by Jane Bozarth in an article on the myth of learning styles she wrote as the head of research for the eLearning Guild. Note that the conference in question was NOT an eLearning Guild conference.

What can we do?

Corruption is everywhere. Buyer beware! As adults, we know this! We know politicians lie (some more than others!!). We know that we have to take steps not to be ripped off. We get three estimates when we need a new roof. We ask for personal references. We look at the video replay. We read TripAdvisor reviews. We look for iron-clad guarantees that we can return products we purchased.

We don’t get flustered or worried; we take precautions. In the learning field, you can do the following:

  • Look for conference organizers who regularly include research-based sessions (scientific research NOT opinion research).
  • Look for the conferences that host the great research-to-practice gurus. People like Patti Shank, Julie Dirksen, Clark Quinn, Mirjam Neelen, Ruth Clark, Karl Kapp, Jane Bozarth, Dick Clark, Paul Kirschner, and others.
  • Look for conferences that do NOT have sessions—or have fewer sessions—that propagate common myths and misinformation (learning styles, the learning pyramid, MBTI, DISC, millennials learn differently, people only use 10% of their brains, only 10% of learning transfers, neuroscience as a panacea, people have the attention span of a goldfish, etc.).
  • If you want to look into Will’s Forbidden Future, you might look for the following:
    • conferences and/or trade organizations that have hired a content trustee, someone with a research background to promote valid information and cull bad information.
    • conferences that point speakers to a list of learning myths to avoid.
    • conferences that evaluate sessions based on the quality of the content.

Being exposed to false information isn’t just bad for us as professionals. It’s also bad for our organizations. Think of all the wasted effort—the toil, the time, the money—that was flushed down the toilet trying to redesign all our learning to meet the so-called learning-styles approach. Egads! If you need to persuade your management about the danger of learning myths you might try this.

In a previous blog post, I talked about what we can do as attendees of conferences to avoid learning bad information. That’s good reading as well. Check it out here.

Who Will Rule Our Conferences? Truth or Bad-Faith Vendors?

That’s a damn good question!


Brinkerhoff Case Method — A Better Name for a Great Learning-Evaluation Innovation


Updated July 3rd, 2018—a week after the original post. See end of post for the update, featuring Rob Brinkerhoff’s response.

Rob Brinkerhoff’s “Success Case Method” needs a subtle name change. I think a more accurate name would be the “Brinkerhoff Case Method.”

I’m one of Rob’s biggest fans, having selected him in 2008 as the Neon Elephant Award Winner for his evaluation work.

Thirty-five years ago, in 1983, Rob published an article where he introduced the “Success Case Method.” Here is a picture of the first page of that article:

In that article, the Success-Case Method was introduced as a way to find the value of training when it works. Rob wrote, “The success-case method does not purport to produce a balanced assessment of the total results of training. It does, however, attempt to answer the question: When training works, how well does it work?” (page 58, which is visible above).

The Success-Case Method didn’t stand still. It evolved and improved as Rob refined it based on his research and his work with clients. In his landmark 2006 book detailing the methodology, Telling Training’s Story: Evaluation Made Simple, Credible, and Effective, Rob describes how to first survey learners and then sample some of them for interviews, selecting them based on their level of success in applying the training. “Once the sorting is complete, the next step is to select the interviewees from among the high and low success candidates, and perhaps from the middle categories.” (page 102).
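
To make that sorting-and-sampling step concrete, here is a minimal sketch with made-up survey data: rank learners by their self-reported application success, then pull interviewees from the high and low ends and, perhaps, the middle. This is my illustration of the idea, not Rob’s procedure verbatim.

```python
# Made-up survey data: learner -> self-reported application-success score (0-10).
def select_interviewees(survey_scores, n_per_group=3, include_middle=True):
    """Sort learners by reported success, then sample the extremes (and the middle)."""
    ranked = sorted(survey_scores, key=survey_scores.get)
    low = ranked[:n_per_group]
    high = ranked[-n_per_group:]
    middle = []
    if include_middle:
        mid_start = len(ranked) // 2 - n_per_group // 2
        middle = ranked[mid_start:mid_start + n_per_group]
    return {"high_success": high, "low_success": low, "middle": middle}

scores = {"Ana": 9, "Ben": 2, "Caro": 7, "Dev": 4, "Eli": 8, "Fay": 1, "Gus": 5, "Hoa": 6}
print(select_interviewees(scores, n_per_group=2))
```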

To call this the success-case method seems more aligned with the original naming than with the actual recommended practice. For that reason, I recommend that we simply call it the Brinkerhoff Case Method. This gives Rob the credit he deserves, and it more accurately reflects the rigor and balance of the method itself.

As soon as I posted the original post, I reached out to Rob Brinkerhoff to let him know. After some reflection, Rob wrote this and asked me to post it:

“Thank you for raising the issue of the currency of the name Success Case Method (SCM). It is kind of you to also think about identifying it more closely with my name. Your thoughts are not unlike others and on occasion even myself. 

It is true the SCM collects data from extreme portions of the respondent distribution including likely successes, non-successes, and ‘middling’ users of training. Digging into these different groups yields rich and useful information. 

Interestingly the original name I gave to the method some 40 years ago when I first started forging it was the “Pioneer” method since when we studied the impact of a new technology or innovation we felt we learned the most from the early adopters – those out ahead of the pack that tried out new things and blazed a trail for others to follow. I refined that name to a more familiar term but the concept and goal remained identical: accelerate the pace of change and learning by studying and documenting the work of those who are using it the most and the best. Their experience is where the gold is buried. 

Given that, I choose to stick with the “success” name. It expresses our overall intent: to nurture and learn from and drive more success. In a nutshell, this name expresses best not how we do it, but why we do it. 

Thanks again for your thoughtful reflections. We’re on the same page.”

Rob’s response is thoughtful, as usual. Yet my feelings on this remain steady. As I’ve written in my report on the new Learning-Transfer Evaluation Model (LTEM), our models should nudge appropriate actions. The same is true for the names we give things. Mining for success stories is good, but it has to be balanced. After all, if evaluation doesn’t look for the full truth—without putting a thumb on the scale—then we are not evaluating; we are doing something else.

I know Rob’s work. I know that he is not advocating for, nor does he engage in, unbalanced evaluations. I do fear that the name Success Case Method may give permission or unconsciously nudge lesser practitioners to find more success and less failure than is warranted by the facts.

Of course, the term “Success Case Method” has one brilliant advantage. Where people are hesitant to evaluate for fear of uncovering unpleasant results, the name “Success Case Method” may lessen the worry of moving forward and engaging in evaluation—and so it may actually enable the balanced evaluation that is necessary to uncover the truth of learning’s level of success.

Whatever we call it, the Success Case Method or the Brinkerhoff Case Method—and this is the most important point—it is one of the best learning-evaluation innovations in the past half century.

I also agree that since Rob is the creator, his voice should have the most influence in terms of what to call his invention.

I will end with one of my all-time favorite quotations from the workplace learning field, from Tim Mooney and Robert Brinkerhoff’s excellent book, Courageous Training:

“The goal of training evaluation is not to prove the value of training; the goal of evaluation is to improve the value of training.” (p. 94-95)

On this we should all agree!

Thankful for So Much!! Paying Off My Student Loans at 60 Years of Age


Today, after turning 60 a few months ago, I finally paid off my student loans—the loans that made it possible for me to get my doctorate from Columbia University. I was in school for eight years from 1988 to 1996, studying with some of the brightest minds in learning, development, and psychology (Rothkopf, Black, Peverly, Kuhn, Higgins, Dweck, Mischel, Darling-Hammond, not to mention my student cohort). If my math is right, that’s 22 years to pay off my student-loan debt. A ton of interest paid too!

I’m eternally grateful! Without the federal government funding my education, my life would have been so much different. I would never have learned how to understand the research on learning. My work at Work-Learning Research, Inc.—attempting to bridge the gap between research and practice—would not have been possible. Thank you to my country—the United States of America—and fellow citizens for giving me the opportunity of a lifetime!! Thanks also must go to my wife for marrying into the forever-string of monthly payments. Without her tolerance and support I certainly would be lost in a different life.

I’ve often reflected on my good fortune in being able to pursue my interests, and wondered why we as a society don’t do more to give our young people an easier road to pursue their dreams. Even when I hear about the brilliant people winning MacArthur fellowships, I wonder why only those who have proven their genius are being boosted. They are deserving of course, but where is our commitment to those who might be teetering on a knife edge of opportunity and economic desperation? I was even lucky as an undergrad back in the late 1970’s, paying relatively little for a good education at a state school and having parents who funded my tuition and living expenses. College today is wicked expensive, cutting out even more of our promising youth from realizing their potential.

Economic mobility is not as easy to come by as we might like. The World Bank just released a report showing that worldwide only 12% of young adults have been able to obtain more education than their parents. The United States is no longer the land of opportunity we once liked to imagine.

This is crazy short-sighted, and combined with our tendency to underfund our public schools, it has the smell of societal suicide.

That’s depressing! Today I’m celebrating my ability to get student loans two-and-a-half decades ago and pay them off over the last twenty-some years! Hooray!

Seems not so important when put into perspective. It’s something though.


Reflections This Morning On Brushing My Teeth


I use a toothbrush that has a design that research shows maximizes the benefits of brushing. It spins, and spinning is better than oscillating. It also has a timer, telling me when I’ve brushed for two minutes. Ever since a hockey stick broke up my mouth when I was twenty, I’ve been sensitive about the health of my teeth.

But what the heck does this have to do with learning and development? Well, let’s see.

Maybe my toothbrush is a performance-support exemplar. Maybe no training is needed. I didn’t read any instructions. I just used it. The design is intuitive. There’s an obvious button that turns it on, an obvious place to put toothpaste (on the bristles), and it’s obvious that the bristles should be placed against the teeth. So, the tool itself seems like it needs no training.

But I’m not so sure. Let’s do a thought experiment. If I give a spinning toothbrush to a person who’s never brushed their teeth, would they use it correctly? Would they use it at all? Doubtful!

What is needed to encourage or enable good tooth-brushing?

  • People probably need something to compel them to brush, perhaps knowledge that brushing prevents dental calamities like tooth decay, gum disease, bad breath—and may even prevent cognitive decline as in Alzheimer’s. Training may help motivate action.
  • People will probably be more likely to brush if they know other people are brushing. Tons of behavioral economics studies have shown that people are very attuned to social comparisons. Again, training may help motivate action. Interestingly, people may be more likely to brush with a spinning toothbrush if others around them are also brushing with spinning toothbrushes. Training coworkers (or in this case other family members) may also help motivate action.
  • People will probably brush more effectively if they know to brush all their teeth, and to brush near their gums as well—not just the biting surfaces of their teeth. Training may provide this critical knowledge.
  • People will probably brush more effectively if they are set up—probably if they set themselves up—to be triggered by environmental cues. For example, tooth-brushing is often most effectively triggered when people brush right after breakfast and right before they go to bed. Training people to set up situation-action triggering may increase later follow through.
  • People will probably brush more effectively if they know that they should brush for two minutes or so rather than just brushing quickly. Training may provide this critical knowledge. Note, of course, that the toothbrush’s two-minute timer may act to support this behavior. Training and performance support can work together to enable effective behavior.
  • People will be more likely to use an effective toothbrush if the cost of the toothbrush is reasonable given the benefits. The costs of people’s tools will affect their use.
  • People will be more likely to use a toothbrush if the design is intuitive and easy to use. The design of tools will affect their use.

I’m probably missing some things in the list above, but it should suffice to show the complex interplay between our workplace tools/practices/solutions and training and prompting mechanisms (i.e., performance support and the like).

But what insights, or dare we say wisdom, can we glean from these reflections? How about these for starters:

  • We could provide excellent training, but if our tools/practices/solutions are poorly designed they won’t get used.
  • We could provide excellent training, but if our tools/practices/solutions are too expensive they won’t get used.
  • Let’s not forget the importance of prior knowledge. Most of us know the basics of tooth brushing. It would waste time, and be boring, to repeat that in a training. The key is to know, to really know, not just guess, what our learners know—and compare that to what they really need to know.
  • Even when we seem to have a perfectly intuitive, well-designed tool/practice/solution let’s not assume that no training is needed. There might be knowledge or motivational gaps that need to be bridged (yes, the pun was intended! SMILE). There might be situation-action triggering sets that can be set up. There might be reminders that would be useful to maintain motivation and compel correct technique.
  • Learning should not be separated from the design of tools/practices/solutions. We can support better designs by reminding the designers and developers of these objects/procedures that training can’t fix a bad design. Better yet, we can work hand in hand with them in prototyping the tool/training bundle to enable the most pertinent feedback during the design process itself.
  • Training isn’t just about knowledge, it’s also about motivation.
  • Motivation isn’t just the responsibility of training. Motivation is an affordance of the tools/practices/solutions themselves, it is borne in the social environment, it is subject to organizational influence, particularly through managers and peers.
  • Training shouldn’t be thought of as a one-time event. Reminders may be valuable as well, particularly around the motivational aspects (for simple tasks), and to support remembering (for tasks that are easily forgotten or misunderstood).

One final note. We might also train people to use the time when they are engaged in automated tasks—tooth-brushing for example—to reflect on important aspects of their lives, gaining from the learning that might occur or the thoughts that may enable future learning. And adding a little fun into mundane tasks. Smile for the tiny nooks and crannies of our lives that may illuminate our thinking!


Dealing with Emotional Readiness — What Should We be Doing?


I included this piece in my newsletter this morning (which you can sign up for here) and it seemed to really resonate with people, so I’m including it here.

I’ve always had a high tolerance for pain, but breaking my collarbone at the end of February really sent me crashing down a mountain. Lying in bed, I got thinking about the emotional side of workplace performance. I don’t have brilliant insights here, just maybe some thoughts that will get you thinking.

Skiing with my family in Vermont, it had been a very good week. My wife and I, skiing together on our next-to-last day on the mountain, went to look for the kids who told us they’d be skiing in the terrain park (where the jumps are). My wife skied down first, then I went. There was a little jump, about a foot high, of the kind I’d jumped many times. But this time would be different.

As I sailed over the jump — slowly because I’m wary of going too fast and flying too far — I looked down and saw, NOT snow, but stairs. WTF? Every other time I took a small jump there was snow on the other side. Not metal stairs. Not dry metal stairs. In mid-air my thought was, “okay, just stay calm, you’ll ski over the stairs back to snow.” Alas, what happened was that I came crashing down on my left shoulder, collarbone splintering into five or six pieces, and lay 20 feet down the hill. I knew right away that things were bad. I knew that my life would be upended for weeks or months. I knew that miserable times lay ahead.

I got up quickly. I was in shock and knew it. I looked up the mountain back at the jump. Freakin’ stairs!! What the hell were they doing there? I was rip-roaring mad! One of my skis was still on the stairs. The dry surface must have grabbed it, preventing me from skiing further down the slope. I retrieved my ski. A few people skied by me. My wife was long gone down the mountain. I was in shock and I was mad as hell and I couldn’t think straight, but I knew I shouldn’t sit down so I just stood there for five or ten minutes in a daze. Finally someone asked if I was okay, and I yelled crazy loud for the whole damn mountain to hear, “NO!” He was nice, said he’d contact the ski patrol.

I’ll spare you the details of the long road to recovery — a recovery not yet complete — but the notable events are that I had badly broken my collarbone, badly sprained my right thumb and mildly sprained my left thumb, couldn’t button my shirts or pants for a while, had to lie in bed in one position or the pain would be too great, watched a ton of Netflix (I highly recommend Seven Seconds!), couldn’t do my work, couldn’t help around the house, got surgery on my collarbone, got pneumonia, went to physical therapy, etc… Enough!

Feeling completely useless, I couldn’t help reflecting on the emotional side of learning, development, and workplace performance in general. In L&D, we tend to be helping people who are able to learn and take action — but maybe not all the people we touch are emotionally present and able. Some are certainly dealing with family crises, personal insecurities, previous job setbacks, and the like. Are we doing enough for them?

I’m not a person prone to depression, but I was clearly down for the count. My ability to do meaningful work was nil. At first it was the pain and the opiates. Later it was the knowledge that I just couldn’t get much work done, that I was unable to keep up with promises I’d made, that I was falling behind. I knew, intellectually, that I just had to wait it out — and this was a great comfort. But still, my inability to think and to work reminded me that as a learning professional I ought to be more empathetic with learners who may be suffering as well.

Usually, dealing with emotional issues of an employee falls to the employee and his or her manager. I used to be a leadership trainer and I don’t remember preparing my learners for how to deal with direct reports who might be emotionally unready to fully engage with work. Fortunately today we are willing to talk about individual differences, but I think we might be forgetting the roller-coaster ride of being human, that we may differ in our emotional readiness on any given day. Managers/supervisors rightly are the best resource for dealing with such issues, but we in L&D might have a role to play as well.

I don’t have answers here. I wish I did. Probably it begins with empathy. We also can help more when we know our learners more — and when we can look them in the eyes. This is tricky business though. We’re not qualified to be therapists, and simple solutions like being nice and kind and keeping things positive are not always the answer. We know from the research that challenging people with realistic decision-making challenges is very beneficial. Giving honest feedback on poor performance is beneficial.

We should probably avoid scolding and punishment and reprimands. Competition has been shown to be harmful in at least some learning situations. Leaderboards may make emotional issues worse, and generally the limited research suggests they aren’t very useful anyway. But these negative actions are rarely invoked, so we have to look deeper.

I wish I had more wisdom about this. I wish there was research-based evidence I could draw on. I wish I could say more than just be human, empathetic, understanding.

Now that I’m aware of this, I’m going to keep my eyes and ears open to learning more about how we as learning professionals can design learning interventions to be more sensitive to the ups and downs of our fellow travelers.

If you’ve got good ideas, please send them my way or use the LinkedIn Post generated from this to join the discussion.

Will Thalheimer Interviewed by Jeffrey Dalto


Series of Four Interviews

I was recently interviewed by Jeffrey Dalto of Convergence Training. Jeffrey is a big fan of research-based practice. He did a great job compiling the interviews.

Click on the title of each one to read the interview:

The Backfire Effect is NOT Prevalent: Good News for Debunkers, Humans, and Learning Professionals!


An exhaustive new research study reveals that the backfire effect is not as prevalent as previous research once suggested. This is good news for debunkers, those who attempt to correct misconceptions. This may be good news for humanity as well. If we cannot reason from truth, if we cannot reliably correct our misconceptions, we as a species will certainly be diminished—weakened by realities we have not prepared ourselves to overcome. For those of us in the learning field, the removal of the backfire effect as an unbeatable Goliath is good news too. Perhaps we can correct the misconceptions about learning that every day wreak havoc on our learning designs, hurt our learners, push ineffective practices, and cause an untold waste of time and money spent chasing mythological learning memes.


The Backfire Effect

The backfire effect is a fascinating phenomenon. It occurs when a person confronted with information that contradicts an incorrect belief they hold ends up believing the untruth even more strongly than if they hadn’t been confronted in the first place. The surprising finding is that attempts at persuading others with truthful information may actually backfire.

The term “backfire effect” was coined by Brendan Nyhan and Jason Reifler in a 2010 scientific article on political misperceptions. Their article caused an international sensation, both in the scientific community and in the popular press. At a time when dishonesty in politics seems to be at historically high levels, this is no surprise.

In their article, Nyhan and Reifler concluded:

“The experiments reported in this paper help us understand why factual misperceptions about politics are so persistent. We find that responses to corrections in mock news articles differ significantly according to subjects’ ideological views. As a result, the corrections fail to reduce misperceptions for the most committed participants. Even worse, they actually strengthen misperceptions among ideological subgroups in several cases.”

Subsequently, other researchers found similar backfire effects, and notable researchers working in the area (e.g., Lewandowsky) have expressed the rather fatalistic view that attempts at correcting misinformation were unlikely to work—that believers would not change their minds even in the face of compelling evidence.


Debunking the Myths in the Learning Field

As I have communicated many times, there are dozens of dangerously harmful myths in the learning field, including learning styles, neuroscience as fundamental to learning design, and the myth that “people remember 10% of what they read, 20% of what they hear, 30% of what they see…etc.” I even formed a group to confront these myths (The Debunker Club), although, and I must apologize, I have not had the time to devote to enabling our group to be more active.

The “backfire effect” was a direct assault on attempts to debunk myths in the learning field. Why bother if we would make no difference? If believers of untruths would continue to believe? If our actions to persuade would have a boomerang effect, causing false beliefs to be believed even more strongly? It was a leg-breaking, breath-taking finding. I wrote a set of recommendations to debunkers in the learning field on how best to be successful in debunking, but admittedly many of us, me included, were left feeling somewhat paralyzed by the backfire finding.

Ironically perhaps, I was not fully convinced. Indeed, some may think I suffered from my own backfire effect. In reviewing a scientific research review in 2017 on how to debunk, I implored that more research be done so we could learn more about how to debunk successfully, but I also argued that misinformation simply couldn’t be a permanent condition, that there was ample evidence to show that people could change their minds even on issues that they once believed strongly. Racist bigots have become voices for diversity. Homophobes have embraced the rainbow. Religious zealots have become agnostic. Lovers of technology have become anti-technology. Vegans have become paleo meat lovers. Devotees of Coke have switched to Pepsi.

The bottom line is that organizations waste millions of dollars every year when they use faulty information to guide their learning designs. As professionals in the learning field, it is our responsibility to avoid the danger of misinformation! But is this even possible?


The Latest Research Findings

There is good news in the latest research! Thomas Wood and Ethan Porter just published an article (2018) that could not find any evidence for a backfire effect. They replicated the Nyhan and Reifler research, they expanded tenfold the number of misinformation instances studied, they modified the wording of their materials, they utilized over 10,000 participants in their research, and they varied their methods for obtaining those participants. They did not find any evidence for a backfire effect.

“We find that backfire is stubbornly difficult to induce, and is thus unlikely to be a characteristic of the public’s relationship to factual information. Overwhelmingly, when presented with factual information that corrects politicians—even when the politician is an ally—the average subject accedes to the correction and distances himself from the inaccurate claim.”

There is additional research to show that people can change their minds, that fact-checking can work, that feedback can correct misconceptions. Rich and Zaragoza (2016) found that misinformation can be fixed with corrections. Rich, Van Loon, Dunlosky, and  Zaragoza (2017) found that corrective feedback could work, if it was designed to be believed. More directly, Nyhan and Reifler (2016), in work cited by the American Press Institute Accountability Project, found that fact checking can work to debunk misinformation.


Some Perspective

First of all, let’s acknowledge that science sometimes works slowly. We don’t yet know all we will know about these persuasion and information-correction effects.

Also, let’s please be careful to note that backfire effects, when they are actually evoked, are typically found in situations where people are ideologically inclined toward a system of beliefs with which they strongly identify. Backfire effects have been studied most often in situations where someone identifies as a conservative or a liberal—when this identity is singularly or strongly important to their self-identity. Are folks in the learning field such strong believers in a system of beliefs and self-identity that they would easily suffer from the backfire effect? Maybe sometimes, but perhaps less likely than in the area of political belief, which seems to consume many of us.

Here are some learning-industry beliefs that may be so deeply held that the light of truth may not penetrate easily:

  • Belief that learners know what is best for their learning.
  • Belief that learning is about conveying information.
  • Belief that we as learning professionals must kowtow to our organizational stakeholders, that we have no grounds to stand by our own principles.
  • Belief that our primary responsibility is to our organizations not our learners.
  • Belief that learner feedback is sufficient in revealing learning effectiveness.

These beliefs seem to undergird other beliefs, and I’ve seen in my work how they can make it difficult to convey important truths. Let me be clear, though: it is speculative on my part that these beliefs have substantial influence; this is a conjecture. Note also that, given that the research on the “backfire effect” has now been shown to be tenuous, I’m not claiming that fighting such foundational beliefs will cause damage. On the contrary, it seems like it might be worth doing.


Knowledge May Be Modifiable, But Attitudes and Belief Systems May Be Harder to Change

The original backfire research suggested that people believed falsehoods more strongly after being confronted with correct information, but this framing misses an important distinction. There are facts, and then there are attitudes, belief systems, and policy preferences.

A fascinating thing happened when Wood and Porter looked for—but didn’t find—the backfire effect. They talked with the original researchers, Nyhan and Reifler, and they began working together to solve the mystery. Why did the backfire effect happen sometimes but not regularly?

In a recent episode (January 28, 2018) of the “You Are Not So Smart” podcast, Wood, Porter, and Nyhan were interviewed by David McRaney, and they nicely clarified the distinction between factual backfire and attitudinal backfire.

Nyhan:

“People often focus on changing factual beliefs with the assumption that it will have consequences for the opinions people hold, or the policy preferences that they have, but we know from lots of social science research…that people can change their factual beliefs and it may not have an effect on their opinions at all.”

“The fundamental misconception here is that people use facts to form opinions and in practice that’s not how we tend to do it as human beings. Often we are marshaling facts to defend a particular opinion that we hold and we may be willing to discard a particular factual belief without actually revising the opinion that we’re using it to justify.”

Porter:

“Factual backfire, if it exists, would be especially worrisome, right? I don’t really believe we are going to find it anytime soon… Attitudinal backfire is less worrisome, because in some ways attitudinal backfire is just another description for failed persuasion attempts… that doesn’t mean that it’s impossible to change your attitude. That may very well just mean that what I’ve done to change your attitude has been a failure. It’s not that everyone is immune to persuasion, it’s just that persuasion is really, really hard.”

McRaney (Podcast Host):

“And so the facts suggest that the facts do work, and you absolutely should keep correcting people’s misinformation, because people do update their beliefs and that’s important. But when we try to change people’s minds by only changing their [factual] beliefs, you can expect to end up engaging in belief whack-a-mole, correcting bad beliefs left and right as the person on the other side generates new ones to support, justify, and protect the deeper psychological foundations of the self.”

Nyhan:

“True backfire effects, when people are moving overwhelmingly in the opposite direction, are probably very rare, they are probably on issues where people have very strong fixed beliefs….”


Rise Up! Debunk!

Here’s the takeaway for us in the learning field who want to be helpful in moving practice to more effective approaches.

  • While there may be some underlying beliefs that influence thinking in the learning field, they are unlikely to be as strongly held as the political beliefs that researchers have studied.
  • The research seems fairly clear that factual backfire effects are extremely unlikely to occur, so we should not be afraid to debunk factual inaccuracies.
  • Persuasion is difficult but not impossible, so it is worth making attempts to debunk. Such attempts are likely to be more effective if we take a change-management approach, look to the science of persuasion, and persevere respectfully and persistently over time.

Here is the message that one of the researchers, Tom Wood, wants to convey:

“I want to affirm people. Keep going out and trying to provide facts in your daily lives and know that the facts definitely make some difference…”

Here are some methods of persuasion from a recent article by Flynn, Nyhan, and Reifler (2017) that have worked even with people’s strongly-held beliefs:

  • When the persuader is seen to be ideologically sympathetic with those who might be persuaded.
  • When the correct information is presented in a graphical form rather than a textual form.
  • When an alternative causal account of the original belief is offered.
  • When credible or professional fact-checkers are utilized.
  • When multiple “related stories” are also encountered.

The stakes are high! Bad information permeates the learning field and makes our learning interventions less effective, harming our learners and our organizations while wasting untold resources.

We owe it to our organizations, our colleagues, and our fellow citizens to debunk bad information when we encounter it!

Let’s not be assholes about it! Let’s do it with respect, with openness to being wrong, and with all our persuasive wisdom. But let’s do it. It’s really important that we do!


Research Cited

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303-330.

Nyhan, B., & Reifler, J. (2016). Do people actually learn from fact-checking? Evidence from a longitudinal study during the 2014 campaign. Available at: www.dartmouth.edu/~nyhan/fact-checking-effects.pdf

Rich, P. R., Van Loon, M. H., Dunlosky, J., & Zaragoza, M. S. (2017). Belief in corrective feedback for common misconceptions: Implications for knowledge revision. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(3), 492-501.

Rich, P. R., & Zaragoza, M. S. (2016). The continued influence of implied and explicitly stated misinformation in news reports. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(1), 62-74. http://dx.doi.org/10.1037/xlm0000155

Wood, T., & Porter, E. (2018). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior. Advance online publication.