Who Will Rule Our Conferences? Truth or Bad-Faith Vendors?


You won’t believe what a vendor said about a speaker at a conference—when that speaker spoke the truth.

 

Conferences are big business in the workplace learning field.

Conferences make organizers a ton of money. That’s great because pulling off a good conference is not as easy as it looks. In addition to finding a venue and attracting people to come to your event, you also have to find speakers. Some speakers are known quantities, but others are unknown.

In the learning field, where we are inundated with fads, myths, and misconceptions, finding speakers who will convey the most helpful messages, and avoid harmful messages, is particularly difficult. Ideally, as attendees, we’d like to hear truth from our speakers rather than fluff and falsehoods.

On the other hand, vendors pay big money to exhibit their products and services at a conference. Their goal is connecting with attendees who are buyers or who can influence buyers. Even conferences that don’t have exhibit halls usually get money from vendors in one way or another.

So, conference owners have two groups of customers to keep happy: attendees and vendors. In an ideal world, both groups would want the most helpful messages to be conveyed. Truth would be a common goal. So for example, let’s say new research is done that shows that freep learning is better than traditional elearning. A speaker at a conference shares the news that freep learning is great. Vendors in the audience hear the news. What will they do?

  • Vendor A hires a handsome and brilliant research practitioner to verify the power of freep learning with the idea of moving forward quickly and providing this powerful new tool to their customers.
  • Vendor B jumps right in and starts building freep learning to ensure their customers get the benefits of this powerful new learning method.
  • Vendor C pulls the conference organizers aside and tells them, “If you ever use that speaker again, we will not be back; you will not get our money any more.”

Impossible you say!

Would never happen you think!

You’re right. Not enough vendors are hiring handsome, brilliant research-to-practice experts!

Here’s a true story from a conference that took place within the last year or so.

Clark Quinn spoke about learning myths and misconceptions during his session, describing the findings from his wonderful book. Later, when he read his conference evaluations, he found the following comment among the more admiring testimonials:

“Not cool to debunk some tools that exhibitors pay a lot of money to sell at [this conference] only to hear from a presenter at the conference that in his opinion should be debunked. Why would I want to be an exhibitor at a conference that debunks my products? I will not exhibit again if this speaker speaks at [conference name]” (emphasis added).

This story was recounted by Clark and captured by Jane Bozarth in an article on the myth of learning styles she wrote as the head of research for the eLearning Guild. Note that the conference in question was NOT an eLearning Guild conference.

What can we do?

Corruption is everywhere. Buyer beware! As adults, we know this! We know politicians lie (some more than others!!). We know that we have to take steps not to be ripped off. We get three estimates when we need a new roof. We ask for personal references. We look at the video replay. We read TripAdvisor reviews. We look for iron-clad guarantees that we can return products we purchased.

We don’t get flustered or worried; we take precautions. In the learning field, you can do the following:

  • Look for conference organizers who regularly include research-based sessions (scientific research NOT opinion research).
  • Look for the conferences that host the great research-to-practice gurus. People like Patti Shank, Julie Dirksen, Clark Quinn, Mirjam Neelen, Ruth Clark, Karl Kapp, Jane Bozarth, Dick Clark, Paul Kirschner, and others.
  • Look for conferences that do NOT have sessions—or have fewer sessions—that propagate common myths and misinformation (learning styles, the learning pyramid, MBTI, DISC, millennials learn differently, people only use 10% of their brains, only 10% of learning transfers, neuroscience as a panacea, people have the attention span of a goldfish, etc.).
  • If you want to look into Will’s Forbidden Future, you might look for the following:
    • conferences and/or trade organizations that have hired a content trustee, someone with a research background to promote valid information and cull bad information.
    • conferences that point speakers to a list of learning myths to avoid.
    • conferences that evaluate sessions based on the quality of the content.

Being exposed to false information isn’t just bad for us as professionals. It’s also bad for our organizations. Think of all the wasted effort—the toil, the time, the money—that was flushed down the toilet trying to redesign all our learning to meet the so-called learning-styles approach. Egads! If you need to persuade your management about the danger of learning myths you might try this.

In a previous blog post, I talked about what we can do as attendees of conferences to avoid learning bad information. That’s good reading as well. Check it out here.

Who Will Rule Our Conferences? Truth or Bad-Faith Vendors?

That’s a damn good question!

 

 


The Snake Oil Story—Preface to Clark Quinn’s Book on Debunking


This is my preface to Clark Quinn’s book on debunking the myths in the learning field, Millennials, Goldfish & Other Training Misconceptions: Debunking Learning Myths and Superstitions (available from Amazon here).

Clark Stanley worked as a cowboy and later became a very successful entrepreneur, selling medicine in the United States that he claimed was based on secrets he learned from an Arizona Hopi Indian medicine man. His elixir was made from rattlesnake oil and was marketed in the 1890s through public events in which Stanley killed live rattlesnakes and squeezed out their oil in front of admiring crowds. After his medicine gained wide popularity, Stanley was able to set up production facilities in Massachusetts and Rhode Island with the help of a pharmacist. Stanley made himself a rich man.

You may not know his name, but you’ve certainly heard of his time and place. It was the era of patent medicines—false and sometimes dangerous elixirs sold to men and women of all stripes. Dr. Kilmer’s Swamp Root. Oxien. Kickapoo Indian Sagwa. Dr. Morse’s Indian Root Pills. Enzyte. Bonnore’s Electro Magnetic Bathing Fluid. Radithor. Liquozone. And of course, Clark Stanley’s Snake Oil Liniment.

These medicines were bought by the millions. Fortunes were made. Millions of people were bamboozled, made sick, killed or murdered depending on how you see it. It turns out that, upon being tested, Stanley’s elixir was found to be made mostly from mineral oil, a worthless potion sold by a charlatan. His story of the medicine man and the rattlesnake juice was a more potent concoction than his famous elixir, which when tested was found to have no snake oil anyway.

What causes men and women to miss the truth, to fail to see, to continue happily in harming themselves and those around them? This, unfortunately, is not a question just for the era of patent medicines. It is eternal. It goes back to the dawn of humanity and continues today as well. I have no answer except to assume that our credulity is part of our humanity—and should guide us to be on guard at all times.

What stopped the patent-medicine pandemic of poison, persuasion, and placebo? Did we the people rise up on our own and throw out the scoundrels, the money-grubbers, the snake-oil salesmen? Did we see that we were deceived, or too hopeful, or too blind? Did we as a community heed our senses and find a way to overcome the dangers hidden from us?

No! We did not!

It was not a mass movement back to rationality and truth that saved us. It was the work of a few intrepid agitators who made all the difference. Journalists began reporting on deaths, sicknesses, and addictions resulting from the use of patent medicines. In 1905, Collier’s Weekly published a cover story that exploded the industry. Written by Samuel Hopkins Adams, a former crime reporter, and titled “The Great American Fraud: The Patent Medicine Evil,” the long exposé contained sections with headings like “Medicine or Liquor?”, “The Men Who Back the Fake,” “Absolutely False Claims,” “Drugs that Deprave,” “Prescribing Without Authority,” and “Where the Money Goes.”

The article—or series of articles that today we would call investigative journalism—opened the floodgates and led directly to the Pure Food and Drug Act in 1906, followed later by additional regulations and requirements that continue to this day, with some success, protecting our health and safety.

The ugly truth is that we need help in seeing what we don’t see. This is true in the learning industry too, and has been true since at least the early 1900s, when thought leaders in our industry floated bogus claims that people remember 10% of what they read, 20% of what they hear, 30% of what they see, et cetera. Indeed, it was partly the bogus claims floating around the learning industry in the late 1990s that made me optimistic that starting a research-based consulting practice would find an audience, that perhaps the learning field could be protected from snake oil charlatans.

Bogus claims are not merely inert flotsam to be navigated around. At a minimum, they take attention away from learning practices that are more fundamental and effective, pushing us to waste time and resources. More insidious is that they proactively cause harm, hurting learners and weakening our learning outcomes.

I wish I could report that starting Work-Learning Research twenty years ago has had the influence that Samuel Hopkins Adams had in his journalism. Alas, I am a faint voice in the howling wind of our industry. Fortunately, there are many muckraking research-to-practice practitioners today, including folks like Paul Kirschner, Patti Shank, Guy Wallace, Pedro De Bruyckere, Julie Dirksen, Donald Clark, Ruth Clark, Mirjam Neelen, Jane Bozarth, and more. There are also legions of academic researchers who do the science necessary to enable research-to-practice wisdom to be compiled and conveyed to trainers, instructional designers, elearning developers and learning executives.

I am especially optimistic now that Clark Quinn has compiled, for the first time, the myths, misconceptions, and confusions that imbue the workplace learning field with faulty decision making and ineffective learning practices. As Clark rightly advises, don’t read the book in one sitting. You will find it too much—too many misconceptions and malingering falsehoods, and too much heartache to think that our field could tolerate so much snake oil.

Here’s what we don’t realize. Today’s workplace-learning snake oil is costing us billions of dollars in wasted effort, misspent resources, ill-advised decisions, and distraction from the science-of-learning fundamentals that have proven to be effective! Every time a trainer reads an article on learning styles and adjusts his or her training to make it suitable for visual, auditory, kinesthetic, and olfactory learners; time is wasted, money is spent, and learning is hurt. Every time an instructional designer goes to a conference and hears that neuroscience should guide learning design, he or she takes this faulty meme back to colleagues and infects them with false hope and ineffective learning strategies. Every time a Chief Learning Officer hears that learning events should be shrunk to 4-minute microlearning videos, that storytelling is everything, that all learning is social, that virtual reality is the future of learning—every time our learning executives jump on a bandwagon and send us to training or conferences or hire experts in these multitudinous fascinations—we are diverted from the veritable essence of learning. We waste our own developmental budgets with snake-oil nostrums. We waste time organizing ourselves around wrong-headed initiatives. We ignore what really works, all the while costing our organizations billions of dollars in waste and ineffective learning practices.

Let us start anew today. We can begin with Clark’s book. It is a veritable treasure chest of wisdom. But let’s keep going. Let’s stay skeptical. Let’s look to the scientific research for knowledge. Let’s become more demanding and knowledgeable ourselves, knowing that we all have more to learn. Let’s look to the research translators who know the work that we do as instructional designers, trainers, and developers. Let’s do our own testing. Let’s improve our evaluation systems so that we get better feedback day by day. Let’s pilot, rework, improve, and continue to learn!

As the history of patent medicine shows, we must be forever vigilant against our own blindness and against those who will sell us the miraculous hope of snake-oil cure-alls.

Debunker Club Webinar with Clark Quinn


The Debunker Club, with over 600 members devoted to squashing the myths in the learning field, is offering a FREE webinar with noted author and learning guru Dr. Clark Quinn on myths and misconceptions in the learning field, based on his new book, released just last month, Millennials, Goldfish & Other Training Misconceptions: Debunking Learning Myths and Superstitions (available from Amazon here).

DATE:

  • June 6th

TIME:

  • 10AM (San Francisco, USA)
  • 1PM (New York, USA)
  • 6PM (London, UK)
  • 10:30PM (Mumbai, India)
  • 3AM June 7th (Sydney, Australia)

REGISTER NOW:

Debunker Club Social Gathering at ATD 2018

In San Diego on Tuesday Morning May 8th at 7AM!

With Special Guest! (Revealed Below)

ATD is holding its annual conference starting this weekend in San Diego. On Tuesday morning (May 8), I’m going to convene us debunkers for a brief social gathering over coffee at Copa Vida, just a few short blocks from the San Diego Convention Center.

Come join me for coffee and caffeinated conversation…just for the fun of it.

We had a great gathering at the ISPI conference in Seattle a few weeks ago. Great conversation, friendly, low key, just a little debunking. Let’s do it again!

JOIN the GATHERING:

  • 7AM to 7:50AM
  • At Copa Vida,
    Ninth is the nearest cross street.
  • Feel free to join whether you’re attending the ATD conference or not…
  • Look for me, Will Thalheimer

Special Guest at our gathering:

Clark Quinn, author of the brand new debunking book:

Millennials, Goldfish & Other Training Misconceptions: Debunking Learning Myths and Superstitions

Preparing for Attending a Learning Conference in 2018 and Beyond


Conferences can be beautiful things—helping us learn, building relationships that help us grow and bring us joy, prompting us to see patterns in our industry we might miss otherwise, helping us set our agenda for what we need to learn more fully.

 

Conferences can be ugly things—teaching us myths, reinforcing our misconceptions, connecting us to people who steer us toward misinformation, echo chambers of bad thinking, a vendor-infested shark tank that can lead us to buy stuff that’s not that helpful or is actually harmful, pushing us to set our learning agenda on topics that distract us from what’s really important.

Given this dual reality, your job as a conference attendee is to be smart and skeptical, and work to validate your learning. In the Training Maximizers model, the first goal is ensuring our learning interventions are built from a base of “valid, credible content.” In conferences, where we curate our own learning, we have to be sure we are imbibing the good stuff and avoiding the poison. Here, I’ll highlight a few things to keep in mind as you attend a conference. I’ll aim to make this especially relevant for this year, 2018, when you are likely to encounter certain memes and themes.

Drinking the Good Stuff

  • Look for speakers who have a background doing two things: (1) studying the scientific research (not opinion research), and (2) working with real-world learning professionals in implementing research-based practices.
  • If speakers make statements without evidence, ask for the evidence or the research—or be highly skeptical.
  • If things seem almost too good to be true, warn yourself that learning is complicated and there are no magic solutions.
  • Be careful not to get sucked into group-think. Just because others seem to like something doesn’t necessarily make it good. Think for yourself.
  • Remember that correlation does not mean causation. Just because two factors seem to move in the same direction doesn’t mean that one caused the other. It could be the other way around, or some third factor may have caused both to move in the same direction (see the sketch after this list for a simple illustration).
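To make the third-factor point concrete, here is a minimal sketch in Python using made-up, hypothetical data (none of the variable names or numbers come from any study). A hidden "experience" factor drives both course ratings and later job performance, so the two measures correlate strongly even though neither causes the other.

```python
# Hypothetical illustration: a hidden third factor produces a correlation
# between two variables that have no causal link to each other.
import random
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
experience = [random.uniform(0, 10) for _ in range(1000)]    # hidden third factor
ratings = [e + random.gauss(0, 2) for e in experience]       # driven by experience
performance = [e + random.gauss(0, 2) for e in experience]   # also driven by experience

print("ratings vs. performance r =", round(pearson_r(ratings, performance), 2))
# Prints a sizable positive correlation even though ratings never caused
# performance (or vice versa); the hidden factor produced both.
```

The same pattern shows up in conference claims: two metrics that rise together are often both riding on something else entirely.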

Prepare Yourself for This Year’s Shiny Objects

  • Learning Styles — Learning Styles is bogus, but it keeps coming up every year. Don’t buy into it. Learn about it first. The Debunker.Club has a nice post on why we should avoid learning styles. Read it. And don’t let people tell you that learning styles is bad but learning preferences is good. They’re pulling the wool over your eyes.
  • Dale’s Cone with Percentages — People do NOT remember 10% of what they read, 20% of what they hear, 30% of what they see (or anything similar). Here’s the Internet’s #1 URL debunking this silly myth.
  • Neuroscience and Learning — It’s a very hot topic with vendors touting neuroscience to entice you to be impressed. But neuroscience at this time has nothing to say about learning.
  • Microlearning — Because it’s a hot topic, vendors and consultants are yapping about microlearning endlessly. But microlearning is not a thing. It’s many things. Here’s the definitive definition of microlearning, if I do say so myself.
  • AI, Machine Learning, and Big Data — Sexy stuff certainly, but it’s not clear whether these things can be applied to learning, or whether they can be applied now (given the state of our knowledge). Beware of taking these claims too seriously. Be open, but skeptical.
  • Gamification — We are almost over this fad thankfully. Still, keep in mind that gamification, like microlearning, is comprised of multiple learning methods. Gamification is NOT a thing.
  • Personalization — Personalization is a great idea, if carried out properly. Be careful if what someone calls personalization is just another way of saying learning styles. Also, don’t buy into the idea that personalization is new. It’s quite old. See Skinner and Keller back in the mid-1900s.
  • Learning Analytics — There is a lot of movement in learning evaluation, but much of it is a wrong-headed focus on pretty dashboards or a focus only on business impact. Look for folks who are talking about how to get better feedback to make learning better. I’ll tout my own effort to develop a new approach to gathering learner feedback. But beware and do NOT just do smile sheets (said by the guy who wrote a book on smile sheets)! Beware of vendors telling you to focus only on measuring behavior and business results. Read why here.
  • Kirkpatrick-Katzell Four-Level Model of Evaluation — It has been a constant in the workplace learning field for the past 60 years, but even with recent changes it still has too many problems to be worthwhile. See the new Learning-Transfer Evaluation Model (LTEM), a worthy replacement.

Wow! So much to be worried about.

Well, sorry to say, I’m surely missing some stuff. It’s up to you to be smart and skeptical at the same time you stay open to new ideas.

You might consider joining the Debunker Club, folks who have agreed on the importance of debunking myths in the learning field.

Guest Post by Brett Christensen: How I Was Fooled by Dale’s Cone


This is a guest post by Brett Christensen of Workplace Performance Consulting (www.workplaceperformance.ca/)

In this post, Brett tells us a story he recounted at a gathering of Debunker Club members at the 2018 ISPI conference in Seattle. It was such a telling story that I asked him if he would write a blog post sharing his lessons learned with you. It’s a cautionary tale about how easy it is to be fooled by information about learning that is too good to be true.

One thing to know before you read Brett’s post. He’s Canadian, which explains two things about what you will read, one of which is that he uses Canadian spellings. I’ll let you figure out the other thing.

______________________________

How I Was Fooled by Dale’s Cone

Why do we debunk?

A handful of members of the Debunker Club had the rare opportunity to meet in person on the morning of 09 April 2018 at the Starbucks Reserve Roastery in sunny (sic) Seattle prior to the second day of the International Society of Performance Improvement’s (ISPI) annual conference.

After introducing ourselves and learning that we had a “newbie” in our midst who had learned about the meeting from a friend’s re-tweet (see Networking Power on my blog), Will asked, “Why do you debunk?” I somewhat sheepishly admitted that the root cause of my debunking desires could be traced back to a presentation I had done with a couple of colleagues in 2006, which was very early in my training and performance career. This was before I had discovered ISPI and before I understood and embraced the principles of evidence-based practice and scientific rigour.

We were working as e-Learning Instructional Designers (evangelists?) at the time, and we were trying hard to communicate the benefits of e-Learning when it was designed correctly, which, as we all know, includes the design of activities that assist in the transfer of learning. When we discovered Dale’s Cone – with the bad, bad, bad numbers – it made total sense to us. Insert foreboding music here.

The following image is an example of what we had seen (a problematic version of Dale’s Cone):

One of many bogus versions of Dale’s Cone

Our aim was to show to our training development colleagues that Dale’s Cone (with the numbers) was valid and that we should all endeavour to design activity into our training. We developed three different scenarios, one for each group. One group would read silently, one would read to each other out loud, and the last group would have an activity included. Everyone would then do a short assessment to measure transfer. The hope (Hypothesis? Pipe Dream?) was to show that the farther down the cone you went, the higher the transfer would be.

Well! That was not the outcome at all. In fact, if I remember correctly, everyone had similar scores on the exercise, and the result was the exact opposite of what we were looking for. Rather than dig deeper into that when we got back home, we were on to the next big thing, and Dale’s Cone faded in my memory. Before I go on, I’d like to point out that we weren’t total “hacks!” Our ISD process was based on valid models, and we applied Clark and Mayer’s (2007) principles in all our work. We even received a “Gold e-Learning Award” from the Canadian Society for Training and Development, now the Institute for Performance and Learning (I4PL).

It wasn’t until much later, after being in ISPI for a number of years, that I had gotten to know Will, our head debunker, and read his research on Dale’s Cone! I was enlightened and a bit embarrassed that I had been a contributor to spreading bad “ju-ju” in the field. But hey – you don’t know what you don’t know. A couple of years after I found Will and finished my MSc, he started The Debunker Club. I knew I had to right my wrongs of the past and help spread the word to raise awareness of the myths and fads that continue to permeate our profession.

That’s why I am a debunker. Thank you, Will, for making me smarter in the work I do.

______________________________

Will’s Note: Brett is being much too kind. There are many people who take debunking very seriously these days. There are folks like De Bruyckere, Kirschner, and Hulshof, who wrote a book on learning myths. There is Clark Quinn, whose new debunking book is being released this month. There are Guy Wallace, Patti Shank, Julie Dirksen, Mirjam Neelen, Ruth Clark, Jane Bozarth, and many, many, many others (sorry if I’m forgetting you!). Now, there is also Brett Christensen, who has been very active on social media over the last few years, debunking myths and more. The Debunker Club has over 600 members, and over 50 people have applied for membership in the last month alone. And note, you are all invited to join.

Of course, debunking works most effectively if everybody jumps in and takes a stand. We must all stay current with the learning research and speak up gently and respectfully when we see bogus information being passed around.

Thanks Brett for sharing your story!! Most of us must admit that we have been taken in by bogus learning myths at some point in our careers. I know I have, and it’s a great reminder to stay humble and skeptical.

And let me point out a feature of Brett’s story that is easy to miss. Did you notice that Brett and his team actually did rigorous evaluation of their learning intervention? It was this evaluation that enabled Brett and his colleagues to learn how well things had gone. Now imagine if Brett and his team hadn’t done a good evaluation. They would never have learned that the methods they tried were not helpful in maximizing learning outcomes! Indeed, who knows what would have happened when they learned years later that the Dale’s Cone numbers were bogus. They might not have believed the truth of it!

Finally, let me say that Dale’s Cone itself, although not really research-based, is not the myth we’re talking about. It’s when Dale’s Cone is bastardized with the bogus numbers that it becomes truly problematic. See the link above entitled “research on Dale’s Cone” to see many other examples of bastardized cones.

Thanks again Brett for reminding us about what’s at stake. When myths are shared, the learning field loses trust, we learning professionals waste time, and our organizations bear the costs of many misspent funds. Our learners are also subjected to willy-nilly experimentation that hurts their learning.

 

 

Vendors Seeking Confirmatory Research in the Learning Field


I’ve been at the helm of Work-Learning Research, Inc. for almost 20 years. Ever since I began to have a following as a research-to-practice consultant, I’ve been approached by vendors to “research” their products. A great majority who approach me are basically asking me to tell the industry that their products are good. I tell these vendors that I don’t do that kind of “research,” but if they want a fair, honest, and research-based evaluation of their product for their own benefit—advice not for public consumption but for their own feedback and deliberations—I can do that for them. Some take me up on this, but most don’t.

I recently got another request and I thought I’d share what this looks like (I’ve removed identifying information):

Vendor:

I’m reaching out as the co-founder of [GreatNewCompany], a [high-tech blankety-bling] platform. We’re trying to create a product that [does incredibly wonderful things to change the world of learning].

I wanted to ask if you’d consider reviewing our product? I know you’ve spoken to [this industry luminary about such-and-such] and wondered if this was an area of research you’d planned to do more work in?

A free account has access to almost all features but is just limited to [25] unique recipients [https URL generously offered]. If you need more access to perform a comprehensive review or have any questions then please let me know.

I understand that this isn’t a small ask as it’d take a decent amount of your time but thought I’d see if you found us interesting.

Gentleman Researcher/Consultant:

I do review products, but not for public consumption. I do it to provide feedback to developers, not for marketing purposes.

My cost is [such-and-such] per hour.

Let me know if you’re interested.

Vendor:

Thanks for letting me know – it’s appreciated.

We’d be interested in some consultancy on helping raise awareness of our product and to better reach more customers. We’re not sure if we’re just failing at marketing or whether our product just doesn’t have the broad appeal. Do you think you’d be a good fit helping us with that?

Thanks.

Gentleman Researcher/Consultant:

It’s a crazy market now, with lots of new entries. Very hard to gain visibility and traction.

I don’t schlep for others. I run a high-integrity consultancy here. SMILE.

One recommendation I make is to actually do good research on your product. This helps you to learn more and it gives you something to talk about in your content marketing efforts. A way to stand above the screaming crowd.

I can help you with high-integrity research, but this usually costs a ton…

Vendor:

Hi Will,

Thanks again for the thoughts, sounds like we’re a bad fit for the kind of consultancy that we need so I appreciate you being open about that.

Cheers!

THE END

A happy ending?

================

Conclusions:

  • Be careful when you hear about product endorsements. They may be paid for.
  • Remember, not all communications that are called “research” are created equal.
  • Look for consultants who can’t be bought. You want valid advice, not advice tilted toward those who pay the consultants.
  • Look for vendors who tell true stories, who honestly research their products, who learn from their experience.
  • Be skeptical of communications coming out of trade associations when those messages are paid for directly or indirectly (through long commercial association between the vendor and the association).
  • Be even more skeptical of best-in-industry lists where those listed pay to be listed. Yes! These exist!
  • In general, be skeptical and look to work with those who have integrity. They exist too!

 

The Backfire Effect is NOT Prevalent: Good News for Debunkers, Humans, and Learning Professionals!


An exhaustive new research study reveals that the backfire effect is not as prevalent as previous research once suggested. This is good news for debunkers, those who attempt to correct misconceptions. This may be good news for humanity as well. If we cannot reason from truth, if we cannot reliably correct our misconceptions, we as a species will certainly be diminished—weakened by realities we have not prepared ourselves to overcome. For those of us in the learning field, the removal of the backfire effect as an unbeatable Goliath is good news too. Perhaps we can correct the misconceptions about learning that every day wreak havoc on our learning designs, hurt our learners, push ineffective practices, and cause an untold waste of time and money spent chasing mythological learning memes.

 

 

The Backfire Effect

The backfire effect is a fascinating phenomenon. It occurs when a person is confronted with information that contradicts an incorrect belief they hold. The surprising finding is that attempts to persuade others with truthful information may actually make believers believe the untruth even more strongly than if they hadn’t been confronted in the first place.

The term “backfire effect” was coined by Brendan Nyhan and Jason Reifler in a 2010 scientific article on political misperceptions. Their article caused an international sensation, both in the scientific community and in the popular press. At a time when dishonesty in politics seems to be at historically high levels, this is no surprise.

In their article, Nyhan and Reifler concluded:

“The experiments reported in this paper help us understand why factual misperceptions about politics are so persistent. We find that responses to corrections in mock news articles differ significantly according to subjects’ ideological views. As a result, the corrections fail to reduce misperceptions for the most committed participants. Even worse, they actually strengthen misperceptions among ideological subgroups in several cases.”

Subsequently, other researchers found similar backfire effects, and notable researchers working in the area (e.g., Lewandowsky) have expressed the rather fatalistic view that attempts at correcting misinformation were unlikely to work—that believers would not change their minds even in the face of compelling evidence.

 

Debunking the Myths in the Learning Field

As I have communicated many times, there are dozens of dangerously harmful myths in the learning field, including learning styles, neuroscience as fundamental to learning design, and the myth that “people remember 10% of what they read, 20% of what they hear, 30% of what they see…etc.” I even formed a group to confront these myths (The Debunker Club), although, and I must apologize, I have not had the time to devote to enabling our group to be more active.

The “backfire effect” was a direct assault on attempts to debunk myths in the learning field. Why bother if we would make no difference? If believers of untruths would continue to believe? If our actions to persuade would have a boomerang effect, causing false beliefs to be believed even more strongly? It was a leg-breaking, breath-taking finding. I wrote a set of recommendations to debunkers in the learning field on how best to be successful in debunking, but admittedly many of us, me included, were left feeling somewhat paralyzed by the backfire finding.

Ironically perhaps, I was not fully convinced. Indeed, some may think I suffered from my own backfire effect. In reviewing a scientific research review in 2017 on how to debunk, I implored that more research be done so we could learn more about how to debunk successfully, but I also argued that misinformation simply couldn’t be a permanent condition, that there was ample evidence to show that people could change their minds even on issues that they once believed strongly. Racist bigots have become voices for diversity. Homophobes have embraced the rainbow. Religious zealots have become agnostic. Lovers of technology have become anti-technology. Vegans have become paleo meat lovers. Devotees of Coke have switched to Pepsi.

The bottom line is that organizations waste millions of dollars every year when they use faulty information to guide their learning designs. As a professional in the learning field, it’s our professional responsibility to avoid the danger of misinformation! But is this even possible?

 

The Latest Research Findings

There is good news in the latest research! Thomas Wood and Ethan Porter just published an article (2018) that could not find any evidence for a backfire effect. They replicated the Nyhan and Reifler research, they expanded tenfold the number of misinformation instances studied, they modified the wording of their materials, they utilized over 10,000 participants in their research, and they varied their methods for obtaining those participants. They did not find any evidence for a backfire effect.

“We find that backfire is stubbornly difficult to induce, and is thus unlikely to be a characteristic of the public’s relationship to factual information. Overwhelmingly, when presented with factual information that corrects politicians—even when the politician is an ally—the average subject accedes to the correction and distances himself from the inaccurate claim.”

There is additional research to show that people can change their minds, that fact-checking can work, that feedback can correct misconceptions. Rich and Zaragoza (2016) found that misinformation can be fixed with corrections. Rich, Van Loon, Dunlosky, and Zaragoza (2017) found that corrective feedback could work, if it was designed to be believed. More directly, Nyhan and Reifler (2016), in work cited by the American Press Institute Accountability Project, found that fact checking can work to debunk misinformation.

 

Some Perspective

First of all, let’s acknowledge that science sometimes works slowly. We don’t yet know all we will know about these persuasion and information-correction effects.

Also, let’s please be careful to note that backfire effects, when they are actually evoked, are typically found in situations where people are ideologically committed to a system of beliefs with which they strongly identify. Backfire effects have been studied mostly in situations where someone identifies as a conservative or liberal—when this identity is singularly or strongly important to their self-identity. Are folks in the learning field such strong believers in a system of beliefs and self-identity that they would easily suffer from the backfire effect? Maybe sometimes, but perhaps less likely than in the area of political belief, which seems to consume many of us.

Here are some learning-industry beliefs that may be so deeply held that the light of truth may not penetrate easily:

  • Belief that learners know what is best for their learning.
  • Belief that learning is about conveying information.
  • Belief that we as learning professionals must kowtow to our organizational stakeholders, that we have no grounds to stand by our own principles.
  • Belief that our primary responsibility is to our organizations not our learners.
  • Belief that learner feedback is sufficient in revealing learning effectiveness.

These beliefs seem to undergird other beliefs, and I’ve seen in my work how they can make it difficult to convey important truths. Let me be clear, though: it is speculative on my part, a conjecture, that these beliefs have substantial influence. Note also that, given that the research on the “backfire effect” has now been shown to be tenuous, I’m not claiming that fighting such foundational beliefs will cause damage. On the contrary, it seems like it might be worth doing.

 

Knowledge May Be Modifiable, But Attitudes and Belief Systems May Be Harder to Change

The original backfire-effect research suggested that people believed falsehoods more strongly when confronted with corrective information, but even this misses an important distinction. There are facts, and there are attitudes, belief systems, and policy preferences.

A fascinating thing happened when Wood and Porter looked for—but didn’t find—the backfire effect. They talked with the original researchers, Nyhan and Reifler, and they began working together to solve the mystery. Why did the backfire effect happen sometimes but not regularly?

In a recent podcast (January 28, 2018) from the “You Are Not So Smart” podcast, Wood, Porter, and Nyhan were interviewed by David McRaney and they nicely clarified the distinction between factual backfire and attitudinal backfire.

Nyhan:

“People often focus on changing factual beliefs with the assumption that it will have consequences for the opinions people hold, or the policy preferences that they have, but we know from lots of social science research…that people can change their factual beliefs and it may not have an effect on their opinions at all.”

“The fundamental misconception here is that people use facts to form opinions and in practice that’s not how we tend to do it as human beings. Often we are marshaling facts to defend a particular opinion that we hold and we may be willing to discard a particular factual belief without actually revising the opinion that we’re using it to justify.”

Porter:

“Factual backfire if it exists would be especially worrisome, right? I don’t really believe we are going to find it anytime soon… Attitudinal backfire is less worrisome, because in some ways attitudinal backfire is just another description for failed persuasion attempts… that doesn’t mean that it’s impossible to change your attitude. That may very well just mean that what I’ve done to change your attitude has been a failure. It’s not that everyone is immune to persuasion, it’s just that persuasion is really, really hard.”

McRaney (Podcast Host):

“And so the facts suggest that the facts do work, and you absolutely should keep correcting people’s misinformation because people do update their beliefs and that’s important, but when we try to change people’s minds by only changing their [factual] beliefs, you can expect to end up, and engaging in, belief whack-a-mole, correcting bad beliefs left and right as the person on the other side generates new ones to support, justify, and protect the deeper psychological foundations of the self.”

Nyhan:

“True backfire effects, when people are moving overwhelmingly in the opposite direction, are probably very rare, they are probably on issues where people have very strong fixed beliefs….”

 

Rise Up! Debunk!

Here’s the takeaway for us in the learning field who want to be helpful in moving practice to more effective approaches.

  • While there may be some underlying beliefs that influence thinking in the learning field, they are unlikely to be as strongly held as the political beliefs that researchers have studied.
  • The research seems fairly clear that factual backfire effects are extremely unlikely to occur, so we should not be afraid to debunk factual inaccuracies.
  • Persuasion is difficult but not impossible, so it is worth making attempts to debunk. Such attempts are likely to be more effective if we take a change-management approach, look to the science of persuasion, and persevere respectfully and persistently over time.

Here is the message that one of the researchers, Tom Wood, wants to convey:

“I want to affirm people. Keep going out and trying to provide facts in your daily lives and know that the facts definitely make some difference…”

Here are some methods of persuasion from a recent article by Flynn, Nyhan, and Reifler (2017) that have worked even with people’s strongly-held beliefs:

  • When the persuader is seen to be ideologically sympathetic with those who might be persuaded.
  • When the correct information is presented in a graphical form rather than a textual form.
  • When an alternative causal account of the original belief is offered.
  • When credible or professional fact-checkers are utilized.
  • When multiple “related stories” are also encountered.

The stakes are high! Bad information permeates the learning field and makes our learning interventions less effective, harming our learners and our organizations while wasting untold resources.

We owe it to our organizations, our colleagues, and our fellow citizens to debunk bad information when we encounter it!

Let’s not be assholes about it! Let’s do it with respect, with openness to being wrong, and with all our persuasive wisdom. But let’s do it. It’s really important that we do!

 

Research Cited

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.

Nyhan, B., & Reifler, J. (2016). Do people actually learn from fact-checking? Evidence from a longitudinal study during the 2014 campaign. Available at: www.dartmouth.edu/~nyhan/fact-checking-effects.pdf.
Rich, P. R., Van Loon, M. H., Dunlosky, J., & Zaragoza, M. S. (2017). Belief in corrective feedback for common misconceptions: Implications for knowledge revision. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(3), 492-501.
Rich, P. R., & Zaragoza, M. S. (2016). The continued influence of implied and explicitly stated misinformation in news reports. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(1), 62-74. http://dx.doi.org/10.1037/xlm0000155
Wood, T., & Porter, E. (2018). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior, Advance Online Publication.

 

Donald Kirkpatrick was NOT the Originator of the Four-Level Model of Learning Evaluation


Donald Kirkpatrick (1924-2014) was a giant in the workplace learning and development field, widely known for creating the four-level model of learning evaluation. Evidence, however, contradicts this creation myth and points to Raymond Katzell, a distinguished industrial-organizational psychologist, as the true originator. This, of course, does not diminish Don Kirkpatrick’s contribution to framing and popularizing the four-level framework of learning evaluation.

The Four-Levels Creation Myth

The four-level model is traditionally traced back to a series of four articles Donald Kirkpatrick wrote in 1959 and 1960, each article covering one of the four levels: Reaction, Learning, Behavior, and Results. These articles were published in the magazine of ASTD (then called the American Society of Training Directors). Here’s a picture of the first page of the first article:

In June of 1977, ASTD (known by then as the American Society of Training and Development, now ATD, the Association for Talent Development) reissued Kirkpatrick’s original four articles, combining them into one article in the Training and Development Journal. The story has always been that it was those four articles that introduced the world to the four-level model of training evaluation.

In 1994, in the first edition of his book, Evaluating Training Programs: The Four Levels, Donald Kirkpatrick wrote:

“In 1959, I wrote a series of four articles called ‘Techniques for Evaluating Training Programs,’ published in Training and Development, the journal of the American Society for Training and Development (ASTD). The articles described the four levels of evaluation that I had formulated. I am not sure where I got the idea for this model, but the concept originated with work on my Ph.D. dissertation at the University of Wisconsin, Madison.” (p. xiii). [Will’s Note: Kirkpatrick was slightly inaccurate here. At the time of his four articles, the initials ASTD stood for the American Society of Training Directors and the four articles were published in the Journal of the American Society of Training Directors. This doesn’t diminish Kirkpatrick’s central point: that he was the person who formulated the four levels of learning evaluation].

In 2011, in a video tribute to Dr. Kirkpatrick, he was asked how he came up with the four levels. This is what he said:

“[after I finished my dissertation in 1954], between 54 and 59 I did some research on behavior and results. I went into companies. I found out are you using what you learned and if so what can you show any evidence of productivity or quality or more sales or anything from it. So I did some research and then in 1959 Bob Craig, editor of the ASTD journal, called me and said, ‘Don, I understand you’ve done some research on evaluation would you write an article?’ I said, ‘Bob, I’ll tell you what I’ll do, I’ll write four articles, one on reaction, one on learning, one on behavior, and one on results.'”

In 2014, when asked to reminisce on his legacy, Dr. Kirkpatrick said this:

“When I developed the four levels in the 1950s, I had no idea that they would turn into my legacy. I simply needed a way to determine if the programs I had developed for managers and supervisors were successful in helping them perform better on the job. No models available at that time quite fit the bill, so I created something that I thought was useful, implemented it, and wrote my dissertation about it.” (Quote from blog post published January 22, 2014).

As recently as this month (January 2018), on the Kirkpatrick Partners website, the following is written:

“Don was the creator of the Kirkpatrick Model, the most recognized and widely used training evaluation model in the world. The four levels were developed in the writing of his Ph.D. dissertation, Evaluating a Human Relations Training Program for Supervisors.”

Despite these public pronouncements, Kirkpatrick’s legendary 1959-1960 articles were not the first published evidence of a four-level evaluation approach.

Raymond Katzell’s Four-Step Framework of Evaluation

In an article written by Donald Kirkpatrick in 1956, the following “steps” were laid out and were attributed to “Raymond Katzell, a well known authority in the field [of training evaluation].”

  1. To determine how the trainees feel about the program.
  2. To determine how much the trainees learn in the form of increased knowledge and understanding.
  3. To measure the changes in the on-the-job behavior of the trainees.
  4. To determine the effects of these behavioral changes on objective criteria such as production, turnover, absenteeism, and waste.

These four steps are the same as Kirkpatrick’s four levels, except there are no labels.

Raymond Katzell went on to a long and distinguished career as an industrial-organizational psychologist, even winning the Society for Industrial and Organizational Psychology’s Distinguished Scientific Contributions award.

Raymond Katzell. Picture used by SIOP (Society for Industrial and Organizational Psychology) when they talk about The Raymond A. Katzell Media Award in I-O Psychology.

The first page of Kirkpatrick’s 1956 article—written three years before his famous 1959 introduction to the four levels—is pictured below:

And here is a higher-resolution view of the quote from that front page, regarding Katzell’s contribution:

So Donald Kirkpatrick mentions Katzell’s four-step model in 1956, but not in 1959 when he—Kirkpatrick—introduces the four labels in his classic set of four articles.

It Appears that Kirkpatrick Never Mentions Katzell’s Four Steps Again

As far as I can tell, after searching for and examining many publications, Donald Kirkpatrick never mentioned Katzell’s four steps after his 1956 article.

Three years after the 1956 article, Kirkpatrick did not mention Katzell’s taxonomy when he wrote his four famous articles in 1959. He did mention an unrelated article where Katzell was a co-author (Merrihue & Katzell, 1955), but he did not mention Katzell’s four steps.

Neither did Kirkpatrick mention Katzell in his 1994 book, Evaluating Training Programs: The Four Levels.

Nor did Kirkpatrick mention Katzell in the third edition of the book, written with Jim Kirkpatrick, his son.

Nor was Katzell mentioned in a later version of the book written by Jim and Wendy Kirkpatrick in 2016. I spoke with Jim and Wendy recently (January 2018), and they seemed as surprised as I was about the 1956 article and about Raymond Katzell.

Nor did Donald Kirkpatrick mention Katzell in any of the interviews he did to mark the many anniversaries of his original 1959-1960 articles.

To summarize, Katzell, despite coming up with the four-step taxonomy of learning evaluation, was only given credit by Kirkpatrick once, in the 1956 article, three years prior to the articles that introduced the world to the Kirkpatrick Model’s four labels.

Kirkpatrick’s Dissertation

Kirkpatrick did not introduce the four levels in his 1954 dissertation. There is not even a hint of a four-level framework.

In his dissertation, Kirkpatrick cited two publications by Katzell. The first was an article from 1948, “Testing a Training Program in Human Relations.” That article studies the effect of leadership training, but makes no mention of Katzell’s four steps. It does, however, hint at the value of measuring on-the-job performance, in this case the value of leadership behaviors. Katzell writes, “Ideally, a training program of this sort [a leadership training program] should be evaluated in terms of the on-the-job behavior of those with whom the trainees come in contact.”

The second Katzell article cited by Kirkpatrick in his dissertation was an article entitled “Can We Evaluate Training?” from 1952. Unfortunately, it was a mimeographed article published by the Industrial Management Institute at the University of Wisconsin, and it seems to be lost to history. Even after several weeks of effort (in late 2017), the University of Wisconsin Archives could not locate the article. Interestingly, in a 1955 publication entitled “Monthly Checklist of State Publications,” a subtitle was added to Katzell’s Can We Evaluate Training? The subtitle was: “A summary of a one day Conference for Training Managers,” from April 23, 1952.

To be clear, Kirkpatrick did not mention the four levels in his 1954 dissertation. The four levels notion came later.

How I Learned about Katzell’s Contribution

I’ve spent the last several years studying learning evaluation, and as part of these efforts, I decided to find Kirkpatrick’s original four articles and reread them. In 2017, ATD (the Association for Talent Development) had a wonderful archive of the articles it had published over the years. As I searched for “Kirkpatrick,” several other articles—besides the famous four—came up, including the 1956 article. I was absolutely freaking stunned when I read it. Donald Kirkpatrick had cited Katzell as the originator of the four-level notion!!!

I immediately began searching for more information on the Kirkpatrick-Katzell connection and found that I wasn’t the first person to uncover it. I found an article by Stephen Smith, who acknowledged Katzell’s contribution in 2008, also in an ASTD publication. I communicated with Smith recently (December 2017) and he had nothing but kind words to say about Donald Kirkpatrick, who, he said, coached him on training evaluations. Here is a graphic taken directly from Smith’s 2008 article:

Smith’s article was not focused on Katzell’s contribution to the four levels, which is probably why it wasn’t more widely cited. In 2011, Cynthia Lewis wrote a dissertation and directly compared the Katzell and Kirkpatrick formulations. She appears to have learned about Katzell’s contribution from Smith’s 2008 article. Lewis’s (2011) comparison chart is reproduced below:

In 2004, four years before Smith wrote his article with the Katzell sidebar, ASTD republished Kirkpatrick’s 1956 article—the one in which Kirkpatrick acknowledges Katzell’s four steps. Here is the front page of that article:

In 2016, an academic article appeared in a book that referred to the Katzell-Kirkpatrick connection. The book is only available in French and the article appears to have had little impact in the English-speaking learning field. Whereas neither Kirkpatrick’s 2004 reprint nor Smith’s 2008 article offered commentary about Katzell’s contribution except to acknowledge it, Bouteiller, Cossette, & Bleau (2016) were clear in stating that Katzell deserves to be known as the person who conceptualized the four levels of training evaluation, while Kirkpatrick should get credit for popularizing it. The authors also lamented that Kirkpatrick, who himself recognized Katzell as the father of the four-level model of evaluation in his 1956 article, completely ignored Katzell for the next 55 years and declared himself in all his books and on his website as the sole inventor of the model. I accessed their chapter through Google Scholar and used Google Translate to make sense of it. I also followed up with two of the authors (Bouteiller and Cossette in January 2018) to confirm I was understanding their messaging clearly.

Is There Evidence of a Transgression?

Raymond Katzell seems to be the true originator of the four-level framework of learning evaluation and yet Donald Kirkpatrick on multiple occasions claimed to be the creator of the four-level model.

Of course, we can never know the full story. Kirkpatrick and Katzell are dead. Perhaps Katzell willingly gave his work away. Perhaps Kirkpatrick asked Katzell if he could use it. Perhaps Kirkpatrick cited Katzell because he wanted to bolster the credibility of a framework he developed himself. Perhaps Kirkpatrick simply forgot Katzell’s four steps when he went on to write his now-legendary 1959-1960 articles. This last explanation may seem a bit forced given that Kirkpatrick referred to the Merrihue and Katzell work in the last of his four articles—and we might expect that the name “Katzell” would trigger memories of Katzell’s four steps, especially given that Katzell was cited by Kirkpatrick as a “well known authority.” This forgetting hypothesis also doesn’t explain why Kirkpatrick would continue to fail to acknowledge Katzell’s contribution after ASTD republished Kirkpatrick’s 1956 article in 2004 or after Stephen Smith’s 2008 article showed Katzell’s four steps. Smith was well-known to Kirkpatrick and is likely to have at least mentioned his article to Kirkpatrick.

We can’t know for certain what transpired, but we can analyze the possibilities. Plagiarism means that we take another person’s work and claim it as our own. Plagiarism, then, has two essential features (see this article for details). First, an idea or creation is copied in some way. Second, no attribution is offered. That is, no credit is given to the originator. Kirkpatrick had clear contact with the essential features of Katzell’s four-level framework. He wrote about them in 1956! This doesn’t guarantee that he copied them intentionally. He could have generated the four levels subconsciously, without knowing that Katzell’s ideas were influencing his thinking. Alternatively, he could have spontaneously created them without any influence from Katzell’s ideas. People often generate similar ideas when the stimuli they encounter are similar. How many people claim that they invented the term, “email?” Plagiarism does not require intent, but intentional plagiarism is generally considered a higher-level transgression than sloppy scholarship.

A personal example of how easy it is to think you invented something: In the 1990s or early 2000s, I searched for just the right words to explain a concept. I wrestled with it for several weeks. Finally, I came up with the perfect wording, with just the right connotation: “Retrieval Practice.” It was better than the prevailing terminology at the time—the testing effect—because people can retrieve without being tested. Eureka, I thought! Brilliant, I thought! It was several years later, rereading Robert Bjork’s 1988 article, “Retrieval practice and the maintenance of knowledge,” that I realized my label was not original to me, and that even if I did generate it without consciously thinking of Bjork’s work, my previous contact with the term “retrieval practice” almost certainly influenced my creative construction.

The second requirement for plagiarism is that the original creator is not given credit. This is evident in the case of the four levels of learning evaluation. Donald Kirkpatrick never mentioned Katzell after 1956. He certainly never mentioned Katzell when it would have been most appropriate, for example when he first wrote about the four levels in 1959, when he first published a book on the four levels in 1994, and when he received awards for the four levels.

Finally, one comment may be telling. In his 1994 book, Kirkpatrick wrote: “I am not sure where I got the idea for this model, but the concept originated with work on my Ph.D. dissertation at the University of Wisconsin, Madison.” The statement seems to suggest that Kirkpatrick recognized that there was a source for the four-level model—a source that was not Kirkpatrick himself.

Here is the critical timeline:

  • Katzell was doing work on learning evaluation as early as 1948.
  • Kirkpatrick’s 1954 dissertation offers no trace of a four-part learning-evaluation framework.
  • In 1956, the first reference to a four-part learning evaluation framework was offered by Kirkpatrick and attributed to Raymond Katzell.
  • In 1959, the first mention of the Kirkpatrick terminology (i.e., Reaction, Learning, Behavior, Results) was published, but Katzell was not credited.
  • In 1994, Kirkpatrick published his book on the four levels, saying specifically that he formulated the four levels. He did not mention Katzell’s contribution.
  • In 2004, Kirkpatrick’s 1956 article was republished, repeating Kirkpatrick’s acknowledgement that Katzell invented the four-part framework of learning evaluation.
  • In 2008, Smith published the article where he cited Katzell’s contribution.
  • In 2014, Kirkpatrick claimed to have developed the four levels in the 1950s.
  • As far as I’ve been able to tell—corroborated by Bouteiller, Cossette, & Bleau (2016)—Donald Kirkpatrick never mentioned Katzell’s four-step formulation after 1956.

Judge Not Too Quickly

I have struggled in writing this article and have rewritten it dozens of times. I shared an earlier version with four trusted colleagues in the learning field and asked them if I was being fair. I’ve searched exhaustively for source documents. I reached out to key players to see if I was missing something.

It is not a trifle to curate evidence that impacts other people’s reputations. It is a sacred responsibility. I, as the writer, bear the most responsibility, but you, as a reader, have a responsibility too: to weigh the evidence and make your own judgments.

First, we should not be too quick to judge. We simply don’t know why Donald Kirkpatrick never mentioned Katzell after the original 1956 article. Indeed, perhaps he did mention Katzell in his workshops and teachings. We just don’t know.

Here are some distinct possibilities:

  • Perhaps Katzell and Kirkpatrick had an agreement that Kirkpatrick could write about the four levels. Let’s remember the 1959-1960 articles were not written to boost Kirkpatrick’s business interests. He didn’t have any business interests at that time—he was an employee—and his writing seemed aimed specifically at helping others do better evaluation.
  • Perhaps Kirkpatrick, being a young man without much of a résumé in 1956, had developed a four-level framework but felt he needed to cite Katzell in 1956 to add credibility to his own ideas. Perhaps later, in 1959, he dropped this false attribution to give himself the credit he deserved.
  • Perhaps Kirkpatrick felt that citing Katzell once was enough. While many academics and researchers see plagiarism as one of the deadly sins, others have not been acculturated into the strongest form of this ethos. Let’s remember that in 1959 Kirkpatrick was not intending to create a legendary meme; he was just writing some articles. Perhaps at the time it didn’t seem important to acknowledge Katzell’s contribution. I don’t mean to dismiss this lightly. All of us are raised to believe in fairness and giving credit where credit is due. Indeed, research suggests that even the youngest infants have a sense of fairness. Kirkpatrick earned his doctorate at a prestigious research university. He should have been aware of the ethic of attribution, but perhaps, because the 1959-1960 articles seemed so insignificant at the time, it didn’t seem important to cite Katzell.
  • Perhaps Kirkpatrick intended to cite Katzell’s contribution in his 1959-1960 articles but the journal editor talked him out of it or disallowed it.
  • Perhaps Kirkpatrick realized that Katzell’s four steps were simply not resonant enough to be important. Let’s admit that Kirkpatrick’s framing of the four levels into the four labels was a brilliant marketing masterstroke. If Kirkpatrick believed this, he might have seen Katzell’s contribution as minimal and not deserving of acknowledgement.
  • Perhaps Kirkpatrick completely forgot Katzell’s four-step taxonomy. Perhaps it didn’t influence him when he created his four labels; perhaps he didn’t think of Katzell’s contribution when he wrote about Katzell’s article with Merrihue; perhaps for the rest of his life he never remembered Katzell’s formulation, never saw the 2004 reprinting of his 1956 article, never saw Smith’s 2008 article, and never talked with Smith about Katzell’s work even though Smith has claimed a working relationship. Admittedly, this possibility seems unlikely.

Let us also not judge Jim and Wendy Kirkpatrick, proprietors of Kirkpatrick Partners, a global provider of learning-evaluation workshops and consulting. None of this is on them! They were genuinely surprised to hear the news when I told them. They seemed to have no idea about Katzell or his contribution. What is past is past, and Jim and Wendy bear no responsibility for the history recounted here. What they do henceforth is their responsibility. Already, since we spoke last week, they have updated their website to acknowledge Katzell’s contribution!

Article Update (two days after original publication of this article): Yesterday, on the 31st of January 2018, Jim and Wendy Kirkpatrick posted a blog entry (copied here for the historic record) that admitted Katzell’s contribution but ignored Donald Kirkpatrick’s failure to acknowledge Katzell’s contribution as the originator of the four-level concept.

What about our trade associations and their responsibilities? It seems that ASTD bears some responsibility for its actions over the years: as the American Society of Training Directors, it published the 1959-1960 articles without insisting that Katzell be acknowledged, even though it had itself published the 1956 article in which Katzell’s four-step framework appeared on the first page; and as the American Society for Training and Development, it republished the 1959-1960 articles in 1977 and Kirkpatrick’s 1956 article in 2004. Recently rebranded as ATD (the Association for Talent Development), the organization should now make amends. Other trade associations should also help set the record straight by acknowledging Katzell’s contribution to the four-level model of learning evaluation.

Donald Kirkpatrick’s Enduring Contribution

Regardless of who invented the four-level model of evaluation, it was Donald Kirkpatrick who framed it to perfection with the four labels and popularized it, helping it spread worldwide throughout the workplace learning and performance field.

As I have communicated elsewhere, I think the four-level model has issues—that it sends messages about learning evaluation that are not helpful.

On the other hand, the four-level model has been instrumental in pushing the field toward a focus on performance improvement. This shift—away from training as our sole responsibility, toward a focus on how to improve on-the-job performance—is one of the most important paradigm shifts in the long history of workplace learning. Kirkpatrick’s popularization of the four levels enabled us—indeed, it pushed us—to see the importance of focusing on work outcomes. For this, we owe Donald Kirkpatrick a debt of gratitude.

And we owe Raymond Katzell our gratitude as well. Not only did he originate the four levels, but he also put forth the idea that it was valuable to measure the impact learners have on their organizations.

What Should We Do Now?

What now is our responsibility as workplace learning professionals? What is ethical? The preponderance of the evidence points to Katzell as the originator of the four levels and Donald Kirkpatrick as the creator of the four labels (Reaction, Learning, Behavior, Results) and the person responsible for the popularization of the four levels. Kirkpatrick himself in 1956 acknowledged Katzell’s contribution, so it seems appropriate that we acknowledge it too.

Should we call them Katzell’s Four Levels of Evaluation? Or, the Katzell-Kirkpatrick Four Levels? I can’t answer this question for you, but it seems that we should acknowledge that Katzell was the first to consider a four-part taxonomy for learning evaluation.

For me, for the foreseeable future, I will either call it the Kirkpatrick Model and then explain that Raymond Katzell was the originator of the four levels, or I’ll simply call it the Kirkpatrick-Katzell Model.

Indeed, I think in fairness to both men—Kirkpatrick for the powerful framing of his four labels and his exhaustive efforts to popularize the model and Katzell for the original formulation—I recommend that we call it the Kirkpatrick-Katzell Four-Level Model of Training Evaluation. Or simply, the Kirkpatrick-Katzell Model.

Research Cited

Bjork, R. A. (1988). Retrieval practice and the maintenance of knowledge. In M. M. Gruneberg, P. E. Morris, R. N. Sykes (Eds.), Practical Aspects of Memory: Current Research and Issues, Vol. 1., Memory in Everyday Life (pp. 396-401). NY: Wiley.

Bouteiller, D., Cossette, M., & Bleau, M-P. (2016). Modèle d’évaluation de la formation de Kirkpatrick: retour sur les origines et mise en perspective. Dans M. Lauzier et D. Denis (éds.), Accroître le transfert des apprentissages: Vers de nouvelles connaissances, pratiques et expériences. Presses de l’Université du Québec, Chapitre 10, 297-339. [In English: Bouteiller, D., Cossette, M., & Bleau, M-P. (2016). Kirkpatrick’s training evaluation model: Revisiting its origins and putting it into perspective. In M. Lauzier and D. Denis (eds.), Increasing the Transfer of Learning: Towards New Knowledge, Practices and Experiences. Presses de l’Université du Québec, Chapter 10, 297-339.]

Katzell, R. A. (1948). Testing a training program in human relations. Personnel Psychology, 1, 319-329.

Katzell, R. A. (1952). Can we evaluate training? A summary of a one day conference for training managers. A publication of the Industrial Management Institute, University of Wisconsin, April, 1952.

Kirkpatrick, D. L. (1956). How to start an objective evaluation of your training program. Journal of the American Society of Training Directors, 10, 18-22.

Kirkpatrick, D. L. (1959a). Techniques for evaluating training programs. Journal of the American Society of Training Directors, 13(11), 3-9.

Kirkpatrick, D. L. (1959b). Techniques for evaluating training programs: Part 2—Learning. Journal of the American Society of Training Directors, 13(12), 21-26.

Kirkpatrick, D. L. (1960a). Techniques for evaluating training programs: Part 3—Behavior. Journal of the American Society of Training Directors, 14(1), 13-18.

Kirkpatrick, D. L. (1960b). Techniques for evaluating training programs: Part 4—Results. Journal of the American Society of Training Directors, 14(2), 28-32.

Kirkpatrick, D. L. (1956-2004). A T+D classic: How to start an objective evaluation of your training program. T+D, 58(5), 1-3.

Lewis, C. J. (2011). A study of the impact of the workplace learning function on organizational excellence by examining the workplace learning practices of six Malcolm Baldridge Quality Award recipients. San Diego, CA. Available at http://sdsu-dspace.calstate.edu/bitstream/handle/10211.10/1424/Lewis_Cynthia.pdf.

Merrihue, W. V., & Katzell, R. A. (1955). ERI: Yardstick of employee relations. Harvard Business Review, 33, 91-99.

Salas, E., Tannenbaum, S. I., Kraiger, K., & Smith-Jentsch, K. A. (2012). The science of training and development in organizations: What matters in practice. Psychological Science in the Public Interest, 13(2), 74–101.

Smith, S. (2008). Why follow levels when you can build bridges? T+D, September 2008, 58-62.