
When Machine Learning Uses Data Unrepresentative of the Domain


I read a brilliantly clear article today by Karen Hao from the MIT Technology Review. It explains what machine learning is and provides a very clear diagram, which I really like.

Now, I am not a machine learning expert, but I have a hypothesis that has a ton of face validity when I look in the mirror. My hypothesis is this:

Machine learning will return meaningful results to the extent that the data it uses is representative of the domain of interest.

A simple thought experiment will demonstrate my point. If a learning machine is given data about professional baseball in the United States from 1890 to 2000, it would learn all kinds of things, including the benefits of pulling the ball as a batter. Pulling the ball occurs when a right-handed batter hits the ball to left field or a left-handed batter hits the ball to right field. In the long history of baseball, many hitters benefited by trying to pull the ball because it produces a more natural swing and one that generates more power. Starting in the 2000s, with the advent of advanced analytics that show where each player is likely to hit the ball, a maneuver called “the shift” has been used more and more, and pulling the ball consistently has become a disadvantage. In the shift, players in the field migrate to positions where the batter is most likely to hit the ball, thus negating the power benefits of pulling the ball. Our learning machine would not know about the decreased benefits of pulling the ball because it would never have seen that data (the data from 2000 to now).
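To make my hypothesis a bit more concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the numbers, the size of the shift effect, and the use of a simple scikit-learn classifier); it is not anyone's real analysis, just the thought experiment in code.

```python
# A purely hypothetical illustration (invented data, not real baseball statistics):
# a model trained only on pre-2000 at-bats "learns" that pulling the ball helps,
# and keeps giving that advice on shift-era data where the advantage has reversed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate_at_bats(n, shift_era):
    """Simulate at-bats: feature = pulled (1/0), label = hit (1/0)."""
    pulled = rng.integers(0, 2, size=n)
    bonus = -0.05 if shift_era else 0.10          # pulling helps pre-2000, hurts after
    p_hit = 0.25 + np.where(pulled == 1, bonus, 0.0)
    hit = (rng.random(n) < p_hit).astype(int)
    return pulled.reshape(-1, 1), hit

X_old, y_old = simulate_at_bats(50_000, shift_era=False)   # 1890-2000: training data
X_new, y_new = simulate_at_bats(50_000, shift_era=True)    # 2000-now: never seen

model = LogisticRegression().fit(X_old, y_old)

# The model still rates pulled balls as the better bet...
print("predicted hit prob, pulled:", model.predict_proba([[1]])[0, 1])
print("predicted hit prob, not pulled:", model.predict_proba([[0]])[0, 1])

# ...even though in the shift era pulling has become the worse option.
print("actual shift-era avg, pulled:", y_new[X_new.ravel() == 1].mean())
print("actual shift-era avg, not pulled:", y_new[X_new.ravel() == 0].mean())
```

The particular model doesn't matter; any learner confined to the 1890-2000 slice would be equally blind to what changed after 2000.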

Machine Learning about Learning

I raise this point because of a creeping danger in the world of learning and education. My concern is relevant to all domains where it is difficult to collect data on the most meaningful factors and outcomes but easy to collect data on less meaningful ones. In such cases, our learning machines will only have access to the data that is easy to collect and will not have access to the data that is difficult or impossible to collect. People using machine learning on inadequate data sets will certainly find some interesting relationships in the data, but they will have no way of knowing what they're missing. The worst part is that they'll report some fanciful finding, we'll all jump up and down in excitement, and then we'll make bad decisions based on the bad learning caused by the incomplete data.

In the learning field—where trainers, instructional designers, elearning developers, and teachers reside—we have learned a great deal about research-based methods of improving learning results, but we don't know everything. And many of the factors that we know work are not tracked in most big data sets. Do we track the spacing effect, the number of concepts repeated with attention-grabbing variation, or the alignment between the contextual cues present in learning materials and the cues that will be present in our learners' future performance situations? Ha! Our large data sets certainly miss many of these causal factors.

Our large data sets also fail to capture the most important outcomes metrics. Indeed, as I have been regularly recounting for years now, typical learning measurements are often biased by measuring immediately at the end of learning (before memories fade), by measuring in the learning context (where contextual cues offer inauthentic hints or subconscious triggering of recall targets), and by measuring with tests of low-level knowledge (compared to more relevant skill-focused decision-making or task performances). We also overwhelmingly rely on learner feedback surveys, both in workplace learning and in higher education. Learner surveys—at least traditional ones—have been found virtually uncorrelated with learning results. To use these meaningless metrics as a primary dependent variable (or just a variable) in a machine-learning data set is complete malpractice.
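To show why that matters, here is a hypothetical sketch (the numbers are invented, not drawn from any real survey data). If the smile-sheet score in a data set is driven mostly by factors like presenter charisma and is nearly uncorrelated with what learners can later do, then a model built to predict that score, however accurately, tells us almost nothing about learning.

```python
# A hypothetical sketch (invented numbers, not real survey data): a smile-sheet
# score driven mostly by presenter charisma, nearly uncorrelated with the
# delayed, realistic performance we actually care about.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

true_learning = rng.normal(size=n)        # what we wish our data set contained
charisma = rng.normal(size=n)             # what actually drives the ratings
smile_sheet = 0.05 * true_learning + 0.9 * charisma + rng.normal(size=n)

print("corr(smile sheet, true learning):",
      round(np.corrcoef(smile_sheet, true_learning)[0, 1], 3))   # ~0.04
print("corr(smile sheet, charisma):",
      round(np.corrcoef(smile_sheet, charisma)[0, 1], 3))        # ~0.67

# Any model optimized to predict smile_sheet will mostly learn about charisma,
# not about whether anyone learned anything.
```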

So if our machine learning data sets have a poor handle on both the inputs and outputs to learning, how can we see machine learning interpretations of learning data as anything but a shiny new alchemy?

 

Measurement Illuminates Some Things But Leaves Others Hidden

In my learning-evaluation workshops, I often show this image.

The theme expressed in the picture is relevant to all types of evaluation, but it is especially relevant for machine learning.

When we review our smile-sheet data, we should not fool ourselves into thinking that we have learned the truth about the success of our learning. When we see a beautiful data-visualized dashboard, we should not deceive ourselves and our organizations that what we see is all there is to see.

So it is with machine learning, especially in domains where the data is not all the data, where the data is flawed, and where the boundaries of the full population of domain data are not known.

 

With Apologies to Karen Hao

I don’t know Karen, but I do love her diagram. It’s clear and makes some very cogent points—as does her accompanying article.

Here is her diagram, which you can see in the original at this URL.

Like measurement itself, I think the diagram illuminates some aspects of machine learning but fails to illuminate the danger of incomplete or unrepresentative data sets. So, I made a modification in the flow chart.

And yes, that seven-letter provocation is a new machine-learning term that arises from the data as I see it.

Corrective Feedback Welcome

As I said to start this invective, my hypothesis about machine learning and data is just that—a semi-educated hypothesis that deserves a review from people more knowledgeable than me about machine learning. So, what do you think, machine-learning gurus?

 

Karen Hao Responds

I’m so delighted! One day after I posted this, Karen Hao responded:

 

 

 

Article on Training and Climate Change


Just today I wrote an article on Training and Climate Change and what, if anything, we workplace learning professionals can do about it.

See and comment on LinkedIn where I published the article. Click to go there now.

Collecting Learning Bloggers Most Popular Posts

One of my blog posts is much, much, much more popular than any of my other blog posts. It’s the blog post on the fake numbers on the learning pyramid sometimes associated with Edgar Dale’s Cone.

The disproportionate popularity of this blog post has gone on for years, and it fascinates me. It says something. I can't be sure exactly what it says, but I have some good guesses.

This has me thinking. What if all of us bloggers in the learning industry shared our most popular blog posts? Maybe this would say something about our industry.

Let’s try it!!

 

If You’ve Been Blogging in the Learning Industry for One Year or More,
Please Answer The Questions Below.

Or use this link: https://www.surveymonkey.com/r/bigposts

 


Reflections for Labor Day, Inspired by Long-Time Efforts Channeling Brilliant Researchers

My research-and-consulting practice, Work-Learning Research, was 20 years old last Saturday. This has prompted me to reflect on where I've been and how learning research has evolved over the past two decades.

Today, as I’m preparing a conference proposal for next year’s ISPI conference, I found an early proposal I put together for the Great Valley chapter of ISPI to speak at one of their monthly meetings back in 2002. I don’t remember whether they actually accepted my proposal, but here is an excerpt:

 

 

Interesting that even way back then, I had found and compiled research on retrieval practice, spacing, feedback, etc. from the scientific journals and the exhaustive labor of hundreds of academic researchers. I am still talking about these foundational learning principles even today—because they are fundamental and because research and practice continue to demonstrate their power. You can look at recent books and websites that are now celebrating these foundational learning factors (Make it Stick, Design for How People Learn, The Ingredients for Great Teaching, Learning Scientists website, etc.).

Feeling blessed today, as we here in the United States move into a weekend where we honor our workers, that I have been able to use my labor to advance these proven principles, uncovered first by brilliant academic researchers such as Bjork, Bahrick, Mayer, Ebbinghaus, Crowder, Sweller, van Merriënboer, Rothkopf, Runquist, Izawa, Smith, Roediger, Melton, Hintzman, Glenberg, Dempster, Estes, Eich, Ericsson, Davies, Garner, Chi, Godden, Baddeley, Hall, Herz, Karpicke, Butler, Kirschner, Clark, Kulhavy, Moreno, Pashler, Cepeda, and many others.

From these early beginnings, I created a listing of twelve foundational learning factors—factors that I have argued should be our first priority in creating great learning—reviewed here in this document.

Happy Labor Day everyone and special thanks to the researchers who continue to make my work possible—and enable learning professionals of all stripes to build increasingly effective learning!

If you’d like to leave a remembrance in regard to Work-Learning Research’s 20th anniversary, or just read my personal reflections about it, you can do that here.

 

Who Will Rule Our Conferences? Truth or Bad-Faith Vendors?


You won’t believe what a vendor said about a speaker at a conference—when that speaker spoke the truth.

 

Conferences are big business in the workplace learning field.

Conferences make organizers a ton of money. That’s great because pulling off a good conference is not as easy as it looks. In addition to finding a venue and attracting people to come to your event, you also have to find speakers. Some speakers are well-known quantities, but others are unknown.

In the learning field, where we are inundated with fads, myths, and misconceptions, finding speakers who will convey the most helpful messages, and avoid harmful ones, is particularly difficult. Ideally, as attendees, we'd like to hear truth from our speakers rather than fluff and falsehoods.

On the other hand, vendors pay big money to exhibit their products and services at a conference. Their goal is connecting with attendees who are buyers or who can influence buyers. Even conferences that don’t have exhibit halls usually get money from vendors in one way or another.

So, conference owners have two groups of customers to keep happy: attendees and vendors. In an ideal world, both groups would want the most helpful messages to be conveyed. Truth would be a common goal. So for example, let’s say new research is done that shows that freep learning is better than traditional elearning. A speaker at a conference shares the news that freep learning is great. Vendors in the audience hear the news. What will they do?

  • Vendor A hires a handsome and brilliant research practitioner to verify the power of freep learning with the idea of moving forward quickly and providing this powerful new tool to their customers.
  • Vendor B jumps right in and starts building freep learning to ensure their customers get the benefits of this powerful new learning method.
  • Vendor C pulls the conference organizers aside and tells them, “If you ever use that speaker again, we will not be back; you will not get our money any more.”

Impossible you say!

Would never happen you think!

You’re right. Not enough vendors are hiring fadingly-good-lookingly brilliant research-to-practice experts!

Here’s a true story from a conference that took place within the last year or so.

Clark Quinn spoke about learning myths and misconceptions during his session, describing the findings from his wonderful book. Later when he read his conference evaluations he found the following comment among the more admiring testimonials:

“Not cool to debunk some tools that exhibitors pay a lot of money to sell at [this conference] only to hear from a presenter at the conference that in his opinion should be debunked. Why would I want to be an exhibitor at a conference that debunks my products? I will not exhibit again if this speaker speaks at [conference name]” (emphasis added).

This story was recounted by Clark and captured by Jane Bozarth in an article on the myth of learning styles she wrote as the head of research for the eLearning Guild. Note that the conference in question was NOT an eLearning Guild conference.

What can we do?

Corruption is everywhere. Buyer beware! As adults, we know this! We know politicians lie (some more than others!!). We know that we have to take steps not to be ripped off. We get three estimates when we need a new roof. We ask for personal references. We look at the video replay. We read TripAdvisor reviews. We look for iron-clad guarantees that we can return products we purchased.

We don't get flustered or worried; we take precautions. In the learning field, you can do the following:

  • Look for conference organizers who regularly include research-based sessions (scientific research NOT opinion research).
  • Look for the conferences that host the great research-to-practice gurus. People like Patti Shank, Julie Dirksen, Clark Quinn, Mirjam Neelen, Ruth Clark, Karl Kapp, Jane Bozarth, Dick Clark, Paul Kirschner, and others.
  • Look for conferences that do NOT have sessions—or have fewer sessions—that propagate common myths and misinformation (learning styles, the learning pyramid, MBTI, DISC, millennials learn differently, people only use 10% of their brains, only 10% of learning transfers, neuroscience as a panacea, people have the attention span of a goldfish, etc.).
  • If you want to look into Will’s Forbidden Future, you might look for the following:
    • conferences and/or trade organizations that have hired a content trustee, someone with a research background to promote valid information and cull bad information.
    • conferences that point speakers to a list of learning myths to avoid.
    • conferences that evaluate sessions based on the quality of the content.

Being exposed to false information isn’t just bad for us as professionals. It’s also bad for our organizations. Think of all the wasted effort—the toil, the time, the money—that was flushed down the toilet trying to redesign all our learning to meet the so-called learning-styles approach. Egads! If you need to persuade your management about the danger of learning myths you might try this.

In a previous blog post, I talked about what we can do as attendees of conferences to avoid learning bad information. That’s good reading as well. Check it out here.

Who Will Rule Our Conferences? Truth or Bad-Faith Vendors?

That’s a damn good question!

 

 

Reflections This Morning On Brushing My Teeth


I use a toothbrush that has a design that research shows maximizes the benefits of brushing. It spins, and spinning is better than oscillations. It also has a timer, telling me when I’ve brushed for two minutes. Ever since a hockey stick broke up my mouth when I was twenty, I’ve been sensitive about the health of my teeth.

But what the heck does this have to do with learning and development? Well, let's see.

Maybe my toothbrush is a performance-support exemplar. Maybe no training is needed. I didn’t read any instructions. I just used it. The design is intuitive. There’s an obvious button that turns it on, an obvious place to put toothpaste (on the bristles), and it’s obvious that the bristles should be placed against the teeth. So, the tool itself seems like it needs no training.

But I’m not so sure. Let’s do a thought experiment. If I give a spinning toothbrush to a person who’s never brushed their teeth, would they use it correctly? Would they use it at all? Doubtful!

What is needed to encourage or enable good tooth-brushing?

  • People probably need something to compel them to brush, perhaps knowledge that brushing prevents dental calamities like tooth decay, gum disease, bad breath—and may even prevent cognitive decline as in Alzheimer’s. Training may help motivate action.
  • People will probably be more likely to brush if they know other people are brushing. Tons of behavioral economics studies have shown that people are very attuned to social comparisons. Again, training may help motivate action. Interestingly, people may be more likely to brush with a spinning toothbrush if others around them are also brushing with spinning toothbrushes. Training coworkers (or in this case other family members) may also help motivate action.
  • People will probably brush more effectively if they know to brush all their teeth, and to brush near their gums as well—not just the biting surfaces of their teeth. Training may provide this critical knowledge.
  • People will probably brush more effectively if they are set up—probably if they set themselves up—to be triggered by environmental cues. For example, tooth-brushing is often most effectively triggered when people brush right after breakfast and right before they go to bed. Training people to set up situation-action triggering may increase later follow through.
  • People will probably brush more effectively if they know that they should brush for two minutes or so rather than just brushing quickly. Training may provide this critical knowledge. Note, of course, that the toothbrush’s two-minute timer may act to support this behavior. Training and performance support can work together to enable effective behavior.
  • People will be more likely to use an effective toothbrush if the cost of the toothbrush is reasonable given the benefits. The costs of people’s tools will affect their use.
  • People will be more likely to use a toothbrush if the design is intuitive and easy to use. The design of tools will affect their use.

I’m probably missing some things in the list above, but it should suffice to show the complex interplay between our workplace tools/practices/solutions and training and prompting mechanisms (i.e., performance support and the like).

But what insights, or dare we say wisdom, can we glean from these reflections? How about these for starters:

  • We could provide excellent training, but if our tools/practices/solutions are poorly designed they won’t get used.
  • We could provide excellent training, but if our tools/practices/solutions are too expensive they won’t get used.
  • Let’s not forget the importance of prior knowledge. Most of us know the basics of tooth brushing. It would waste time, and be boring, to repeat that in a training. The key is to know, to really know, not just guess, what our learners know—and compare that to what they really need to know.
  • Even when we seem to have a perfectly intuitive, well-designed tool/practice/solution let’s not assume that no training is needed. There might be knowledge or motivational gaps that need to be bridged (yes, the pun was intended! SMILE). There might be situation-action triggering sets that can be set up. There might be reminders that would be useful to maintain motivation and compel correct technique.
  • Learning should not be separated from the design of tools/practices/solutions. We can support better designs by reminding the designers and developers of these objects/procedures that training can't fix a bad design. Better yet, we can work hand in hand with them in prototyping the tool/training bundle, enabling the most pertinent feedback during the design process itself.
  • Training isn’t just about knowledge, it’s also about motivation.
  • Motivation isn't just the responsibility of training. Motivation is an affordance of the tools/practices/solutions themselves; it is born in the social environment; and it is subject to organizational influence, particularly through managers and peers.
  • Training shouldn’t be thought of as a one-time event. Reminders may be valuable as well, particularly around the motivational aspects (for simple tasks), and to support remembering (for tasks that are easily forgotten or misunderstood).

One final note. We might also train people to use the time when they are engaged in automated tasks—tooth-brushing for example—to reflect on important aspects of their lives, gaining from the learning that might occur or the thoughts that may enable future learning. And adding a little fun into mundane tasks. Smile for the tiny nooks and crannies of our lives that may illuminate our thinking!

 

Dealing with Emotional Readiness — What Should We be Doing?


I included this piece in my newsletter this morning (which you can sign up for here) and it seemed to really resonate with people, so I’m including it here.

I’ve always had a high tolerance for pain, but breaking my collarbone at the end of February really sent me crashing down a mountain. Lying in bed, I got thinking about the emotional side of workplace performance. I don’t have brilliant insights here, just maybe some thoughts that will get you thinking.

Skiing with my family in Vermont, it had been a very good week. My wife and I, skiing together on our next-to-last day on the mountain, went to look for the kids who told us they’d be skiing in the terrain park (where the jumps are). My wife skied down first, then I went. There was a little jump, about a foot high, of the kind I’d jumped many times. But this time would be different.

As I sailed over the jump — slowly because I’m wary of going too fast and flying too far — I looked down and saw, NOT snow, but stairs. WTF? Every other time I took a small jump there was snow on the other side. Not metal stairs. Not dry metal stairs. In mid-air my thought was, “okay, just stay calm, you’ll ski over the stairs back to snow.” Alas, what happened was that I came crashing down on my left shoulder, collarbone splintering into five or six pieces, and lay 20 feet down the hill. I knew right away that things were bad. I knew that my life would be upended for weeks or months. I knew that miserable times lay ahead.

I got up quickly. I was in shock and knew it. I looked up the mountain back at the jump. Freakin' stairs!! What the hell were they doing there? I was rip-roaring mad! One of my skis was still on the stairs. The dry surface must have grabbed it, preventing me from skiing further down the slope. I retrieved my ski. A few people skied by me. My wife was long gone down the mountain. I was in shock and I was mad as hell and I couldn't think straight, but I knew I shouldn't sit down so I just stood there for five or ten minutes in a daze. Finally someone asked if I was okay, and I yelled crazy loud for the whole damn mountain to hear, "NO!" He was nice, said he'd contact the ski patrol.

I’ll spare you the details of the long road to recovery — a recovery not yet complete — but the notable events are that I had badly broken my collarbone, badly sprained my right thumb and mildly sprained my left thumb, couldn’t button my shirts or pants for a while, had to lie in bed in one position or the pain would be too great, watched a ton of Netflix (I highly recommend Seven Seconds!), couldn’t do my work, couldn’t help around the house, got surgery on my collarbone, got pneumonia, went to physical therapy, etc… Enough!

Feeling completely useless, I couldn't help reflecting on the emotional side of learning, development, and workplace performance in general. In L&D, we tend to be helping people who are able to learn and take action — but maybe not all the people we touch are emotionally present and able. Some are certainly dealing with family crises, personal insecurities, previous job setbacks, and the like. Are we doing enough for them?

I’m not a person prone to depression, but I was clearly down for the count. My ability to do meaningful work was nil. At first it was the pain and the opiates. Later it was the knowledge that I just couldn’t get much work done, that I was unable to keep up with promises I’d made, that I was falling behind. I knew, intellectually, that I just had to wait it out — and this was a great comfort. But still, my inability to think and to work reminded me that as a learning professional I ought to be more empathetic with learners who may be suffering as well.

Usually, dealing with emotional issues of an employee falls to the employee and his or her manager. I used to be a leadership trainer and I don’t remember preparing my learners for how to deal with direct reports who might be emotionally unready to fully engage with work. Fortunately today we are willing to talk about individual differences, but I think we might be forgetting the roller-coaster ride of being human, that we may differ in our emotional readiness on any given day. Managers/supervisors rightly are the best resource for dealing with such issues, but we in L&D might have a role to play as well.

I don't have answers here. I wish I did. Probably it begins with empathy. We also can help more when we know our learners more — and when we can look them in the eyes. This is tricky business though. We're not qualified to be therapists, and simple solutions like being nice and kind and keeping things positive are not always the answer. We know from the research that challenging people with realistic decision-making challenges is very beneficial. Giving honest feedback on poor performance is beneficial.

We should probably avoid scolding and punishment and reprimands. Competition has been shown to be harmful in at least some learning situations. Leaderboards may make emotional issues worse, and generally the limited research suggests they aren't very useful anyway. But these negative actions are rarely invoked, so we have to look deeper.

I wish I had more wisdom about this. I wish there was research-based evidence I could draw on. I wish I could say more than just be human, empathetic, understanding.

Now that I’m aware of this, I’m going to keep my eyes and ears open to learning more about how we as learning professionals can design learning interventions to be more sensitive to the ups and downs of our fellow travelers.

If you’ve got good ideas, please send them my way or use the LinkedIn Post generated from this to join the discussion.

Preparing for Attending a Learning Conference in 2018 and Beyond


Conferences can be beautiful things—helping us learn, building relationships that help us grow and bring us joy, prompting us to see patterns in our industry we might miss otherwise, helping us set our agenda for what we need to learn more fully.

 

Conferences can be ugly things—teaching us myths, reinforcing our misconceptions, connecting us to people who steer us toward misinformation, echo chambers of bad thinking, a vendor-infested shark tank that can lead us to buy stuff that’s not that helpful or is actually harmful, pushing us to set our learning agenda on topics that distract us from what’s really important.

Given this dual reality, your job as a conference attendee is to be smart and skeptical, and work to validate your learning. In the Training Maximizers model, the first goal is ensuring our learning interventions are built from a base of “valid, credible content.” In conferences, where we curate our own learning, we have to be sure we are imbibing the good stuff and avoiding the poison. Here, I’ll highlight a few things to keep in mind as you attend a conference. I’ll aim to make this especially relevant for this year, 2018, when you are likely to encounter certain memes and themes.

Drinking the Good Stuff

  • Look for speakers who have a background doing two things: (1) studying the scientific research (not opinion research), and (2) working with real-world learning professionals in implementing research-based practices.
  • If speakers make statements without evidence, ask for the evidence or the research—or be highly skeptical.
  • If things seem almost too good to be true, warn yourself that learning is complicated and there are no magic solutions.
  • Be careful not to get sucked into group-think. Just because others seem to like something, doesn’t necessarily make it good. Think for yourself.
  • Remember that correlation does not mean causation. Just because two factors seem to move in the same direction doesn't mean that one caused the other. It could be the other way around. Or some third factor may have caused both to move in the same direction (see the short simulation just after this list).
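Here is a tiny, hypothetical simulation of that last point (the scenario and the numbers are invented purely for illustration): two variables track each other only because a hidden third factor drives both, so reading their correlation causally would be a mistake.

```python
# A hypothetical confounding example (all numbers invented): ice-cream sales and
# sunburn counts rise and fall together only because sunshine drives both.
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

sunshine = rng.normal(size=n)                      # the hidden third factor
ice_cream = 2.0 * sunshine + rng.normal(size=n)    # caused by sunshine
sunburn = 1.5 * sunshine + rng.normal(size=n)      # also caused by sunshine

print("corr(ice cream, sunburn):",
      round(np.corrcoef(ice_cream, sunburn)[0, 1], 2))        # ~0.74, yet no causal link

# Hold sunshine constant and the apparent relationship largely disappears.
print("corr with sunshine removed:",
      round(np.corrcoef(ice_cream - 2.0 * sunshine,
                        sunburn - 1.5 * sunshine)[0, 1], 2))   # ~0.0
```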

Prepare Yourself for This Year’s Shiny Objects

  • Learning Styles — Learning Styles is bogus, but it keeps coming up every year. Don't buy into it. Learn about it first. The Debunker.Club has a nice post on why we should avoid learning styles. Read it. And don't let people tell you that learning styles is bad but learning preferences is good. They're pulling the wool over your eyes.
  • Dale's Cone with Percentages — People do NOT remember 10% of what they read, 20% of what they hear, 30% of what they see (or anything similar). Here's the Internet's #1 URL debunking this silly myth.
  • Neuroscience and Learning — It’s a very hot topic with vendors touting neuroscience to entice you to be impressed. But neuroscience at this time has nothing to say about learning.
  • Microlearning — Because it's a hot topic, vendors and consultants are yapping about microlearning endlessly. But microlearning is not a thing. It's many things. Here's the definitive definition of microlearning, if I do say so myself.
  • AI, Machine Learning, and Big Data — Sexy stuff certainly, but it’s not clear whether these things can be applied to learning, or whether they can be applied now (given the state of our knowledge). Beware of taking these claims too seriously. Be open, but skeptical.
  • Gamification — We are almost over this fad thankfully. Still, keep in mind that gamification, like microlearning, is comprised of multiple learning methods. Gamification is NOT a thing.
  • Personalization — Personalization is a great idea, if carried out properly. Be careful if what someone calls personalization is just another way of saying learning styles. Also, don't buy into the idea that personalization is new. It's quite old. See Skinner and Keller back in the mid-1900s.
  • Learning Analytics — There is a lot of movement in learning evaluation, but much of it is wrong-headed focus on pretty dashboards, and a focus only on business impact. Look for folks who are talking about how to get better feedback to make learning better. I’ll tout my own effort to develop a new approach to gathering learner feedback. But beware and do NOT just do smile sheets (said by the guy who wrote a book on smile sheets)! Beware of vendors telling you to focus only on measuring behavior and business results. Read why here.
  • Kirkpatrick-Katzell Four-Level Model of Evaluation — Always a constant in the workplace learning field for the past 60 years. But even with recent changes it still has too many problems to be worthwhile. See the new Learning-Transfer Evaluation Model (LTEM), a worthy replacement.

Wow! So much to be worried about.

Well, sorry to say, I'm surely missing some stuff. It's up to you to be smart and skeptical at the same time you stay open to new ideas.

You might consider joining the Debunker Club, folks who have agreed on the importance of debunking myths in the learning field.

The Learning-Transfer Evaluation Model (LTEM)

NOTICE OF UPDATE (17 May 2018):

The LTEM Model and accompanying Report were updated today and can be found below.

Two major changes were included:

  • The model has been inverted to put the better evaluation methods at the top instead of at the bottom.
  • The model now uses the word “Tier” to refer to the different levels within the model—to distinguish these from the levels of the Kirkpatrick-Katzell model.

This will be the last update to LTEM for the foreseeable future.

 

This blog post introduces a new learning-evaluation model, the Learning-Transfer Evaluation Model (LTEM).

 

Why We Need a New Evaluation Model

It is well past time for a new learning-evaluation model for the workplace learning field. The Kirkpatrick-Katzell Model is over 60 years old. It was born in a time before computers, before cognitive psychology revolutionized the learning field, before the training field was transformed from one that focused on the classroom learning experience to one focused on work performance.

The Kirkpatrick-Katzell model—created by Raymond Katzell and popularized by Donald Kirkpatrick—is the dominant standard in our field. It has also done a tremendous amount of harm, pushing us to rely on inadequate evaluation practices and poor learning designs.

I am not the only critic of the Kirkpatrick-Katzell model. There are legions of us. If you do a Google search starting with these letters, “Criticisms of the Ki,” Google anticipates the following: “Criticisms of the Kirkpatrick Model” as one of the most popular searches.

Here’s what a seminal research review said about the Kirkpatrick-Katzell model (before the model’s name change):

The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders…

The New Model

For the past year or so I’ve been working to develop a new learning-evaluation model. The current version is the eleventh iteration, improved after reflection, after asking some of the smartest people in our industry to provide feedback, after sharing earlier versions with conference attendees at the 2017 ISPI innovation and design-thinking conference and the 2018 Learning Technologies conference in London.

Special thanks to the following people who provided significant feedback that improved the model and/or the accompanying article:

Julie Dirksen, Clark Quinn, Roy Pollock, Adam Neaman, Yvon Dalat, Emma Weber, Scott Weersing, Mark Jenkins, Ingrid Guerra-Lopez, Rob Brinkerhoff, Trudy Mandeville, Mike Rustici

The model, which I’ve named the Learning-Transfer Evaluation Model (LTEM, pronounced L-tem) is a one page, eight-level model, augmented with color coding and descriptive explanations. In addition to the model itself, I’ve prepared a 34-page report to describe the need for the model, the rationale for its design, and recommendations on how to use it.

You can access the model and the report by clicking on the following links:

 

 

Release Notes

The LTEM model and report were researched, conceived, and written by Dr. Will Thalheimer of Work-Learning Research, Inc., with significant and indispensable input from others. No one sponsored or funded this work. It was a labor of love and is provided as a valentine for the workplace learning field on February 14th, 2018 (Version 11). Version 12 was released on May 17th, 2018 based on feedback from its use. The model and report are copyrighted by Will Thalheimer, but you are free to share them as is, as long as you don’t sell them.

If you would like to contact me (Will Thalheimer), you can do that at this link: https://www.worklearning.com/contact/

If you would like to sign up for my list, you can do that here: https://www.worklearning.com/sign-up/

 

 

The Backfire Effect is NOT Prevalent: Good News for Debunkers, Humans, and Learning Professionals!


An exhaustive new research study reveals that the backfire effect is not as prevalent as previous research once suggested. This is good news for debunkers, those who attempt to correct misconceptions. This may be good news for humanity as well. If we cannot reason from truth, if we cannot reliably correct our misconceptions, we as a species will certainly be diminished—weakened by realities we have not prepared ourselves to overcome. For those of us in the learning field, the removal of the backfire effect as an unbeatable Goliath is good news too. Perhaps we can correct the misconceptions about learning that every day wreak havoc on our learning designs, hurt our learners, push ineffective practices, and cause an untold waste of time and money spent chasing mythological learning memes.

 

 

The Backfire Effect

The backfire effect is a fascinating phenomenon. It occurs when a person is confronted with information that contradicts an incorrect belief that they hold. The backfire effect is the surprising finding that attempts at persuading others with truthful information may actually make believers believe the untruth even more strongly than if they hadn't been confronted in the first place.

The term “backfire effect” was coined by Brendan Nyhan and Jason Reifler in a 2010 scientific article on political misperceptions. Their article caused an international sensation, both in the scientific community and in the popular press. At a time when dishonesty in politics seems to be at historically high levels, this is no surprise.

In their article, Nyhan and Reifler concluded:

“The experiments reported in this paper help us understand why factual misperceptions about politics are so persistent. We find that responses to corrections in mock news articles differ significantly according to subjects’ ideological views. As a result, the corrections fail to reduce misperceptions for the most committed participants. Even worse, they actually strengthen misperceptions among ideological subgroups in several cases.”

Subsequently, other researchers found similar backfire effects, and notable researchers working in the area (e.g., Lewandowsky) have expressed the rather fatalistic view that attempts at correcting misinformation were unlikely to work—that believers would not change their minds even in the face of compelling evidence.

 

Debunking the Myths in the Learning Field

As I have communicated many times, there are dozens of dangerously harmful myths in the learning field, including learning styles, neuroscience as fundamental to learning design, and the myth that “people remember 10% of what they read, 20% of what they hear, 30% of what they see…etc.” I even formed a group to confront these myths (The Debunker Club), although, and I must apologize, I have not had the time to devote to enabling our group to be more active.

The “backfire effect” was a direct assault on attempts to debunk myths in the learning field. Why bother if we would make no difference? If believers of untruths would continue to believe? If our actions to persuade would have a boomerang effect, causing false beliefs to be believed even more strongly? It was a leg-breaking, breath-taking finding. I wrote a set of recommendations to debunkers in the learning field on how best to be successful in debunking, but admittedly many of us, me included, were left feeling somewhat paralyzed by the backfire finding.

Ironically perhaps, I was not fully convinced. Indeed, some may think I suffered from my own backfire effect. In reviewing a scientific research review in 2017 on how to debunk, I implored that more research be done so we could learn more about how to debunk successfully, but I also argued that misinformation simply couldn’t be a permanent condition, that there was ample evidence to show that people could change their minds even on issues that they once believed strongly. Racist bigots have become voices for diversity. Homophobes have embraced the rainbow. Religious zealots have become agnostic. Lovers of technology have become anti-technology. Vegans have become paleo meat lovers. Devotees of Coke have switched to Pepsi.

The bottom line is that organizations waste millions of dollars every year when they use faulty information to guide their learning designs. As professionals in the learning field, it is our responsibility to avoid the danger of misinformation! But is this even possible?

 

The Latest Research Findings

There is good news in the latest research! Thomas Wood and Ethan Porter just published an article (2018) reporting that they could not find any evidence for a backfire effect. They replicated the Nyhan and Reifler research, they expanded tenfold the number of misinformation instances studied, they modified the wording of their materials, they utilized over 10,000 participants in their research, and they varied their methods for obtaining those participants. Across all of these variations, they found no evidence for a backfire effect.

“We find that backfire is stubbornly difficult to induce, and is thus unlikely to be a characteristic of the public’s relationship to factual information. Overwhelmingly, when presented with factual information that corrects politicians—even when the politician is an ally—the average subject accedes to the correction and distances himself from the inaccurate claim.”

There is additional research to show that people can change their minds, that fact-checking can work, that feedback can correct misconceptions. Rich and Zaragoza (2016) found that misinformation can be fixed with corrections. Rich, Van Loon, Dunlosky, and  Zaragoza (2017) found that corrective feedback could work, if it was designed to be believed. More directly, Nyhan and Reifler (2016), in work cited by the American Press Institute Accountability Project, found that fact checking can work to debunk misinformation.

 

Some Perspective

First of all, let’s acknowledge that science sometimes works slowly. We don’t yet know all we will know about these persuasion and information-correction effects.

Also, let's please be careful to note that backfire effects, when they are actually evoked, are typically found in situations where people are ideologically inclined toward a system of beliefs with which they strongly identify. Backfire effects have been studied most often in situations where someone identifies as a conservative or a liberal—when that identity is singularly or strongly important to their sense of self. Are folks in the learning field so strongly attached to a system of beliefs and self-identity that they would easily suffer from the backfire effect? Maybe sometimes, but perhaps less likely than in the arena of political belief, which seems to consume many of us.

Here are some learning-industry beliefs that may be so deeply held that the light of truth may not penetrate easily:

  • Belief that learners know what is best for their learning.
  • Belief that learning is about conveying information.
  • Belief that we as learning professionals must kowtow to our organizational stakeholders, that we have no grounds to stand by our own principles.
  • Belief that our primary responsibility is to our organizations not our learners.
  • Belief that learner feedback is sufficient in revealing learning effectiveness.

These beliefs seem to undergird other beliefs, and I've seen in my work how they can make it difficult to convey important truths. Let me be clear, though: it is speculative on my part that these beliefs have substantial influence; this is a conjecture. Note also that, given that the research on the "backfire effect" has now been shown to be tenuous, I'm not claiming that challenging such foundational beliefs will cause damage. On the contrary, it seems like it might be worth doing.

 

Knowledge May Be Modifiable, But Attitudes and Belief Systems May Be Harder to Change

The original backfire-effect research suggested that people believed their misperceptions more strongly when confronted with corrective information, but this framing misses an important distinction. There are facts, and there are attitudes, belief systems, and policy preferences.

A fascinating thing happened when Wood and Porter looked for—but didn’t find—the backfire effect. They talked with the original researchers, Nyhan and Reifler, and they began working together to solve the mystery. Why did the backfire effect happen sometimes but not regularly?

In a recent episode (January 28, 2018) of the "You Are Not So Smart" podcast, Wood, Porter, and Nyhan were interviewed by David McRaney, and they nicely clarified the distinction between factual backfire and attitudinal backfire.

Nyhan:

“People often focus on changing factual beliefs with the assumption that it will have consequences for the opinions people hold, or the policy preferences that they have, but we know from lots of social science research…that people can change their factual beliefs and it may not have an effect on their opinions at all.”

“The fundamental misconception here is that people use facts to form opinions and in practice that’s not how we tend to do it as human beings. Often we are marshaling facts to defend a particular opinion that we hold and we may be willing to discard a particular factual belief without actually revising the opinion that we’re using it to justify.”

Porter:

"Factual backfire, if it exists, would be especially worrisome, right? I don't really believe we are going to find it anytime soon… Attitudinal backfire is less worrisome, because in some ways attitudinal backfire is just another description for failed persuasion attempts… that doesn't mean that it's impossible to change your attitude. That may very well just mean that what I've done to change your attitude has been a failure. It's not that everyone is immune to persuasion, it's just that persuasion is really, really hard."

McRaney (Podcast Host):

“And so the facts suggest that the facts do work, and you absolutely should keep correcting people’s misinformation because people do update their beliefs and that’s important, but when we try to change people’s minds by only changing their [factual] beliefs, you can expect to end up, and engaging in, belief whack-a-mole, correcting bad beliefs left and right as the person on the other side generates new ones to support, justify, and protect the deeper psychological foundations of the self.”

Nyhan:

“True backfire effects, when people are moving overwhelmingly in the opposite direction, are probably very rare, they are probably on issues where people have very strong fixed beliefs….”

 

Rise Up! Debunk!

Here’s the takeaway for us in the learning field who want to be helpful in moving practice to more effective approaches.

  • While there may be some underlying beliefs that influence thinking in the learning field, they are unlikely to be as strongly held as the political beliefs that researchers have studied.
  • The research seems fairly clear that factual backfire effects are extremely unlikely to occur, so we should not be afraid to debunk factual inaccuracies.
  • Persuasion is difficult but not impossible, so it is worth making attempts to debunk. Such attempts are likely to be more effective if we take a change-management approach, look to the science of persuasion, and persevere respectfully and persistently over time.

Here is the message that one of the researchers, Tom Wood, wants to convey:

“I want to affirm people. Keep going out and trying to provide facts in your daily lives and know that the facts definitely make some difference…”

Here are some methods of persuasion from a recent article by Flynn, Nyhan, and Reifler (2017) that have worked even with people’s strongly-held beliefs:

  • When the persuader is seen to be ideologically sympathetic with those who might be persuaded.
  • When the correct information is presented in a graphical form rather than a textual form.
  • When an alternative causal account of the original belief is offered.
  • When credible or professional fact-checkers are utilized.
  • When multiple “related stories” are also encountered.

The stakes are high! Bad information permeates the learning field and makes our learning interventions less effective, harming our learners and our organizations while wasting untold resources.

We owe it to our organizations, our colleagues, and our fellow citizens to debunk bad information when we encounter it!

Let’s not be assholes about it! Let’s do it with respect, with openness to being wrong, and with all our persuasive wisdom. But let’s do it. It’s really important that we do!

 

Research Cited

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions.
Political Behavior, 32(2), 303–330.

Nyhan, B., & Reifler, J. (2016). Do people actually learn from fact-checking? Evidence from a longitudinal study during the 2014 campaign. Available at: www.dartmouth.edu/~nyhan/fact-checking-effects.pdf.
Rich, P. R., Van Loon, M. H., Dunlosky, J., & Zaragoza, M. S. (2017). Belief in corrective feedback for common misconceptions: Implications for knowledge revision. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(3), 492-501.
Rich, P. R., & Zaragoza, M. S. (2016). The continued influence of implied and explicitly stated misinformation in news reports. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(1), 62-74. http://dx.doi.org/10.1037/xlm0000155
Wood, T., & Porter, E. (2018). The elusive backfire effect: Mass attitudes’ steadfast factual adherence, Political Behavior, Advance Online Publication.