
Released Today: Research Report on Learning Evaluation Conducted with The eLearning Guild.

Report Title: Evaluating Learning: Insights from Learning Professionals.

I am delighted to announce that the report from a research effort I led in conjunction with Dr. Jane Bozarth and the eLearning Guild was released today. I’ll be blogging about our findings over the next couple of months.

This is a major report — packed into 39 pages — and should be read by everyone in the workplace learning field interested in learning evaluation!

Just a teaser here:

We asked folks to consider the last three learning programs their units developed and to reflect on the learning-evaluation approaches they used.

While a majority were generally happy with their evaluation methods on these recent learning programs, about 40% were dissatisfied. Later, in a more general question about whether learning professionals are able to do the learning measurement they want to do, fully 52% said they were NOT able to do the kind of evaluation they thought was right to do.

In the full report, available only to Guild members, we dig down and explore the practices and perspectives that drive our learning-evaluation efforts. I encourage you to get the full report, as it touches on the methods we use, how we communicate with senior business leaders, what we’d like to do differently, and what we think we’re good at. Also, the report concludes with 12 powerful action strategies for getting the most out of our learning-evaluation efforts.

You can get the full report by clicking here.

 

 

Respondents

Over 200 learning professionals responded to Work-Learning Research’s 2017-2018 survey on current practices in gathering learner feedback, and today I will reveal the results. The survey ran from November 29th, 2017 to September 16th, 2018. The sample of respondents was drawn from Work-Learning Research’s mailing list and through extensive calls for participation in a variety of social media. Because of this sampling methodology, the survey results are likely skewed toward professionals who care about and/or pay attention to research-based practice recommendations more than the workplace learning field as a whole does. They are also likely more interested and experienced in learning evaluation.

Feel free to share this link with others.

Goal of the Research

The goal of the research was to determine how people evaluate their learning interventions by asking learners for their perspectives.

Questions the Research Hoped to Answer

  1. Are smile sheets (learner-feedback questions) still the most common method of doing learning evaluation?
  2. How does their use compare with other methods? Are other methods growing in prominence/use?
  3. How satisfied are learning professionals with their organizations’ learner-feedback methods?
  4. To what extent are organizations looking for alternatives to their current learner-feedback methods?
  5. What kinds of questions are used on smile sheets? Has Thalheimer’s new approach, performance-focused questioning, gained any traction?
  6. What do learning professionals think their current smile sheets are good at measuring (Satisfaction, Reputation, Effectiveness, Nothing)?
  7. What tools are organizations using to gather learner feedback?
  8. How useful are current learner-feedback questions in helping guide improvements in learning design and delivery?
  9. How widely are the target metrics of LTEM (The Learning-Transfer Evaluation Model) currently being measured?

A summary of the findings indexed to these questions can be found at the end of this post.

Situating the Practice of Gathering Learner Feedback

When we gather feedback from learners, we are using a Tier 3 methodology on the LTEM (Learning-Transfer Evaluation Model) or Level 1 on the Kirkpatrick-Katzell Four-Level Model of Training Evaluation.

Demographic Background of Respondents

Respondents came from a wide range of organizations, including small, midsize, and large organizations.

Respondents play a wide range of roles in the learning field.

Most respondents live in the United States and Canada, but there was significant representation from other predominantly English-speaking countries.

Learner-Feedback Findings

About 67% of respondents report that learners are asked about their perceptions on more than half of their organization’s learning programs, including elearning. Only about 22% report that they survey learners on less than half of their learning programs. This finding is consistent with past findings—surveying learners is the most common form of learning evaluation and is widely practiced.

The two most common question types in use are Likert-like questions and numeric-scale questions. I have argued against their use* and I am pleased that Performance-Focused Smile Sheet questions have been adopted by so many so quickly. Of course, this sample of respondents is composed of folks on my mailing list, so this result surely doesn’t represent current practice in the field as a whole. Not yet! LOL.

*Likert-like questions and numeric-scale questions are problematic for several reasons. First, because they offer fuzzy response choices, learners have a difficult time deciding between them, which likely makes their responses less precise. Second, such fuzziness may inflate bias, as there are no concrete anchors to minimize the biasing effects of the question stems. Third, Likert-like options and numeric scales likely deflate learner responding, because learners are habituated to such scales and because they may be skeptical that data from such scales will actually be useful. Finally, Likert-like options and numeric scales produce indistinct results—averages all in the same range. Such results are difficult to assess, failing to support decision-making—the whole purpose of evaluation in the first place. To learn more, check out Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form (book website here).
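
To make the "indistinct averages" problem concrete, here is a minimal illustrative sketch in Python (with entirely hypothetical numbers and hypothetical answer wording, not data from our survey and not wording from the book), showing how two courses can earn nearly identical Likert averages while concrete, behavior-anchored response counts point toward action:

    from statistics import mean
    from collections import Counter

    # Hypothetical 5-point Likert ratings for two different courses.
    course_a = [4, 5, 4, 3, 5, 4, 4, 3, 5, 4]
    course_b = [5, 4, 4, 4, 3, 4, 5, 4, 4, 4]
    print(mean(course_a), mean(course_b))  # both 4.1 -- hard to act on

    # The same ten learners answering a competence-style item with
    # concrete response options (wording is illustrative only).
    course_a_claims = [
        "can do unaided", "can do with help", "can do with help",
        "cannot do yet", "can do unaided", "can do with help",
        "can do with help", "cannot do yet", "can do unaided", "can do with help",
    ]
    print(Counter(course_a_claims))  # tallies reveal who still needs support

The two Likert averages are indistinguishable, while the tallied competence claims at least suggest a concrete follow-up action, such as added practice or job aids for those who report they cannot yet do the task.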

The most common tools used to gather feedback from learners were paper surveys and SurveyMonkey. Questions delivered from within an LMS were the next most common. High-end evaluation systems like Metrics that Matter were not well represented among our respondents.

Our respondents did not rate their learner-feedback efforts as very effective. Their learner surveys were seen as most effective in gauging learner satisfaction. Only about 33% of respondents thought their learner surveys gave them insights on the effectiveness of the learning.

Only about 15% of respondents found their data very useful in providing them feedback about how to improve their learning interventions.

Respondents report that their organizations are somewhat open to alternatives to their current learner-feedback approaches, but overall they are not actively looking for alternatives.

Most respondents report that their organizations are at least “modestly happy” with their learner-feedback assessments. Yet only 22% reported being “generally happy” with them. Combining this finding with the one above showing that lots of organizations are open to alternatives, it seems that organizational satisfaction with current learner-feedback approaches is soft.

We asked respondents about their organizations’ attempts to measure the following:

  • Learner Attendance
  • Whether Learner is Paying Attention
  • Learner Perceptions of the Learning (e.g., Smile Sheets, Learner Feedback)
  • Amount or Quality of Learner Participation
  • Learner Knowledge of the Content
  • Learner Ability to Make Realistic Decisions
  • Learner Ability to Complete Realistic Tasks
  • Learner Performance on the Job (or in another future performance situation)
  • Impact of Learning on the Learner
  • Impact of Learning on the Organization
  • Impact of Learning on Coworkers, Family, Friends of the Learner
  • Impact of Learning on the Community or Society
  • Impact of Learning on the Environment

These evaluation targets are encouraged in LTEM (The Learning-Transfer Evaluation Model).

Results are difficult to show—because our question was very complicated (admittedly too complicated)—but I will summarize the findings below.

Learner attendance and learner perceptions (smile sheets) were the most commonly measured factors, with learner knowledge a distant third. The least common measures involved the impact of the learning on the environment, community/society, and the learner’s coworkers/family/friends.

The flip side—methods rarely utilized in respondents’ organizations—shows pretty much the same thing.

Note that the question above, because it was too complicated, probably produced some spurious results, even if the trends at the extremes are likely indicative of the whole range. In other words, it’s likely that attendance and smile sheets are the most utilized and that measures of impact on the environment, community/society, and learners’ coworkers/family/friends are the least utilized.

Questions Answered Based on Our Sample

  1. Are smile sheets (learner-feedback questions) still the most common method of doing learning evaluation?

    Yes! Smile sheets are clearly the most popular evaluation method, along with measuring attendance (if we include that as a metric).

  2. How does their use compare with other methods? Are other methods growing in prominence/use?

    Except for Attendance, nothing else comes close. The next most common method is measuring knowledge. Remarkably, given the known importance of decision-making (Tier 5 in LTEM) and task competence (Tier 6 in LTEM), these are used in evaluation at a relatively low level. Similar low levels are found in measuring work performance (Tier 7 in LTEM) and organizational results (part of Tier 8 in LTEM). We’ve known about these relatively low levels from many previous research surveys.

    Hardly any measurement is being done on the impact of learning on the learner or his/her coworkers/family/friends, the impact of the learning on the community/society/environment, or on learner participation/attention.

  3. How satisfied are learning professionals with their organizations’ learner-feedback methods?

    Learning professionals are moderately satisfied.

  4. To what extent are organizations looking for alternatives to their current learner-feedback methods?

    Organizations are open to alternatives, with some actively seeking alternatives and some not looking.

  5. What kinds of questions are used on smile sheets? Has Thalheimer’s new approach, performance-focused questioning, gained any traction?

    Likert-like options and numeric scales are the most commonly used. Thalheimer’s performance-focused smile-sheet method has gained traction in this sample of respondents—people likely more in the know about Thalheimer’s approach than the industry at large.

  6. What do learning professionals think their current smile sheets are good at measuring (Satisfaction, Reputation, Effectiveness, Nothing)?

    Learning professionals think their current smile sheets are fairly good at measuring the satisfaction of learners. A full one-third of respondents feel that their current approaches are not valid enough to provide them with meaningful insights about the learning interventions.

  7. What tools are organizations using to gather learner feedback?

    The two most common methods for collecting learner feedback are paper surveys and SurveyMonkey. Questions from LMSs are the next most widely used. Sophisticated evaluation tools are not much in use in our respondent sample.

  8. How useful are current learner-feedback questions in helping guide improvements in learning design and delivery?

    This may be the most important question of all, given that evaluation is supposed to aid us in maintaining our successes and improving on our deficiencies. Only 15% of respondents found learner feedback “very helpful” in improving their learning. Many found the feedback “somewhat helpful,” but a full one-third found it “not very useful” in enabling them to improve learning.

  9. How widely are the target metrics of LTEM (The Learning-Transfer Evaluation Model) currently being measured?

    As described in Question 2 above, many of the targets of LTEM are not being adequately measured at this point in time (November 2017 to September 2018, during the time immediately before and after LTEM was introduced). This indicates that LTEM is poised to help organizations uncover evaluation targets that can be helpful in setting goals for learning improvements.

Lessons to be Drawn

The results of this survey reinforce what we’ve known for years. In the workplace learning industry, we default to learner-feedback questions (smile sheets) as our most common learning-evaluation method. This is a big freakin’ problem for two reasons. First, our learner-feedback methods are inadequate. We often use poor survey methodologies and ones particularly unsuited to learner feedback, including the use of fuzzy Likert-like options and numeric scales. Second, even if we used the most advanced learner-feedback methods, we still would not be doing enough to gain insights into the strengths and weaknesses of our learning interventions.

Evaluation is meant to provide us with data we can use to make our most critical decisions. We need to know, for example, whether our learning designs are supporting learner comprehension, learner motivation to apply what they’ve learned, learner ability to remember what they’ve learned, and the supports available to help learners transfer their learning to their work. We typically don’t know these things. As a result, we don’t make the design decisions we ought to make. We don’t make improvements in the learning methods we use or in the way we deploy learning. The research captured here should be seen as a wake-up call.

The good news from this research is that learning professionals are often aware of and sensitized to the deficiencies of their learning-evaluation methods. This seems like a good omen. When improved methods are introduced, they will likely seek to encourage their use.

LTEM, the new learning-evaluation model (which I developed with the help of some of the smartest folks in the workplace learning field), targets some of the most critical learning metrics—metrics that have too often been ignored. It is too new to be certain of its impact, but it seems like a promising tool.

Why I Have Turned My Attention to Evaluation (and Why You Should Too!)

For 20 years, I’ve focused on compiling scientific research on learning in the belief that research-based information—when combined with a deep knowledge of practice—can drastically improve learning results. I still believe that wholeheartedly! What I’ve also come to understand is that we as learning professionals must get valid feedback on our everyday efforts. It’s simply our responsibility to do so.

We have to create learning interventions based on the best blend of practical wisdom and research-based guidance. We have to measure key indices that tell us how our learning interventions are doing. We have to find out what their strengths are and what their weaknesses are. Then we have to analyze and assess and make decisions about what to keep and what to improve. Then we have to make improvements and again measure our results and continue the cycle—working always toward continuous improvement.

Here’s a quick-and-dirty outline of the recommended cycle for using learning to improve work performance. “Quick-and-dirty” means I might be missing something!

  1. Learn about and/or work to uncover performance-improvement needs.
  2. If you determine that learning can help, continue. Otherwise, build or suggest alternative methods to get to improved work performance.
  3. Deeply understand the work-performance context.
  4. Sketch out a very rough draft for your learning intervention.
  5. Specify your evaluation goals—the metrics you will use to measure your intervention’s strengths and weaknesses.
  6. Sketch out a rough draft for your learning intervention.
  7. Specify your learning objectives (notice that evaluation goals come first!).
  8. Review the learning research and consider your practical constraints (two separate efforts subsequently brought together).
  9. Sketch out a reasonably good draft for your learning intervention.
  10. Build your learning intervention and your learning-evaluation instruments (iteratively testing and improving).
  11. Deploy your “ready-to-go” learning intervention.
  12. Measure your results using the previously determined evaluation instruments, which were based on your previously determined evaluation objectives.
  13. Analyze your results.
  14. Determine what to keep and what to improve.
  15. Make improvements.
  16. Repeat (maybe not every step, but at least from Step 6 onward).

And here is a shorter version:

  1. Know the learning research.
  2. Understand your project needs.
  3. Outline your evaluation objectives—the metrics you will use.
  4. Design your learning.
  5. Deploy your learning and your measurement.
  6. Analyze your results.
  7. Make improvements.
  8. Repeat.

More Later Maybe

The results shared here come from all respondents combined. If I get the time, I’d like to look at subsets of respondents. For example, I’d like to look at how learning executives and managers might differ from learning practitioners. Let me know how interested you would be in these results.

Also, I will be conducting other surveys on learning-evaluation practices, so stay tuned. We have been frustrated with our evaluation practices for too long, and more work needs to be done to understand the forces that keep us from doing what we want to do. We could also use more and better learning-evaluation tools, because the truth is that learning evaluation is still a nascent field.

Finally, because I learn a ton by working with clients who challenge themselves to do more effective interventions, please get in touch with me if you’d like a partner in thinking things through and trying new methods to build more effective evaluation practices. Also, please let me know how you’ve used LTEM (The Learning-Transfer Evaluation Model).


Appreciations

As always, I am grateful to all the people I learn from, including clients, researchers, thought leaders, conference attendees, and more… Thanks also to all who acknowledge and share my work! It means a lot!

My research-and-consulting practice, Work-Learning Research, turned 20 years old last Saturday. This has given me pause to reflect on where I’ve been and how learning research has evolved over the past two decades.

Today, as I was preparing a conference proposal for next year’s ISPI conference, I found an early proposal I put together back in 2002 to speak at one of the monthly meetings of ISPI’s Great Valley chapter. I don’t remember whether they actually accepted my proposal, but here is an excerpt:

 

 

Interesting that even way back then, I had found and compiled research on retrieval practice, spacing, feedback, etc. from the scientific journals and the exhaustive labor of hundreds of academic researchers. I am still talking about these foundational learning principles even today—because they are fundamental and because research and practice continue to demonstrate their power. You can look at recent books and websites that are now celebrating these foundational learning factors (Make it Stick, Design for How People Learn, The Ingredients for Great Teaching, Learning Scientists website, etc.).

Feeling blessed today, as we here in the United States move into a weekend where we honor our workers, that I have been able to use my labor to advance these proven principles, uncovered first by brilliant academic researchers such as Bjork, Bahrick, Mayer, Ebbinghaus, Crowder, Sweller, van Merriënboer, Rothkopf, Runquist, Izawa, Smith, Roediger, Melton, Hintzman, Glenberg, Dempster, Estes, Eich, Ericsson, Davies, Garner, Chi, Godden, Baddeley, Hall, Herz, Karpicke, Butler, Kirschner, Clark, Kulhavy, Moreno, Pashler, Cepeda, and many others.

From these early beginnings, I created a listing of twelve foundational learning factors—factors that I have argued should be our first priority in creating great learning—reviewed here in this document.

Happy Labor Day everyone and special thanks to the researchers who continue to make my work possible—and enable learning professionals of all stripes to build increasingly effective learning!

If you’d like to leave a remembrance in regard to Work-Learning Research’s 20th anniversary, or just read my personal reflections about it, you can do that here.

 

Back in 2008, I began discussing the scientific research on “implementation intentions.” I did this first at an eLearning Guild conference in March of 2008. I also spoke about it that year in a talk at Salem State University, in a Chicago workshop entitled Creating and Measuring Learning Transfer, and in one of my Brown Bag Lunch sessions delivered online.

In 2014, I wrote about implementation intentions specifically as a way to increase after-training follow-through. Thinking the term “Implementation Intentions” was too opaque and too general, I coined the term “Triggered Action Planning,” and argued that goal-setting at the end of training—what was often called action planning—would not be as effective as triggered action planning. Indeed, in recounting the scientific research on implementation intentions, I often talked about how researchers were finding that setting situation-action triggers could create results that were twice as good as goal-setting alone. Doubling the benefits of goal setting! These kinds of results are huge!

I just came across a scientific study that supports the benefits of triggered action planning.

 

Shlomit Friedman and Simcha Ronen conducted two experiments and found similar results in each. I’m going to focus on their second one because it involved a real training class with real employees. They used a class that taught retail sales managers how to improve interactions with customers. All the participants got the exact same training and were then randomly assigned to two different experimental groups:

  • Triggered Action Planning—Participants were asked to visualize situations with customers and how they would respond to seven typical customer objections.
  • Goal-Reminding Action Planning—Participants were asked to write down the goals of the training program and the aspects of the training program that they felt were most important.

Four weeks after the training, secret shoppers were used. They interacted with the supervisors using the key phrases and rated each supervisor on dichotomously-anchored rating scales from 1 to 10, with 10 being best. The secret shoppers were blind to condition—that is, they did not know which supervisors had gotten triggered action planning and which had received the goal-reminding instructions. The findings showed that triggered action planning produced results 76% better than those in the goal-reminding condition, almost doubling them.
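
To unpack that arithmetic (my own framing of the reported percentage, not additional figures from the paper): if GR is the goal-reminding group’s average rating and TAP is the triggered-action-planning group’s average rating, then

    improvement = (TAP − GR) / GR = 0.76, which means TAP = 1.76 × GR

In other words, the triggered-action-planning group scored 1.76 times as high, which is why a 76% gain reads as “almost doubling.”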

It should be pointed out that this experiment could have been better designed by having the control group set their own goals. There may be some benefit to actual goal-setting compared with merely being reminded about the goals of the course. The experiment had its strengths too, most notably (1) the use of observers to record real-world performance four weeks after the training, and (2) the fact that all the supervisors had gone through the exact same training and were randomly assigned to either triggered action planning or the goal-reminding condition.

Triggered Action Planning

Triggered Action Planning has great potential to radically improve the likelihood that your learners will actually use what you’ve taught them. The reason it works so well is that it is based on a fundamental characteristic of human cognition. We are triggered to think and act based on cues in our environment. As learning professionals we should do whatever we can to:

  • Figure out what cues our learners will face in their work situations.
  • Teach them what to do when they encounter these cues.
  • Give them a rich array of spaced, repeated practice in handling these situations.

To learn more about how to implement triggered action planning, see my original blog post.

Research Cited

Friedman, S., & Ronen, S. (2015). The effect of implementation intentions on transfer of training. European Journal of Social Psychology, 45(4), 409-416.

This blog post took three hours to write.

An exhaustive new research study reveals that the backfire effect is not as prevalent as previous research once suggested. This is good news for debunkers, those who attempt to correct misconceptions. This may be good news for humanity as well. If we cannot reason from truth, if we cannot reliably correct our misconceptions, we as a species will certainly be diminished—weakened by realities we have not prepared ourselves to overcome. For those of us in the learning field, the removal of the backfire effect as an unbeatable Goliath is good news too. Perhaps we can correct the misconceptions about learning that every day wreak havoc on our learning designs, hurt our learners, push ineffective practices, and cause an untold waste of time and money spent chasing mythological learning memes.

 

 

The Backfire Effect

The backfire effect is a fascinating phenomenon. It occurs when a person is confronted with information that contradicts an incorrect belief that they hold. The surprising finding is that attempts to persuade others with truthful information may actually make them believe the untruth even more strongly than if they hadn’t been confronted in the first place.

The term “backfire effect” was coined by Brendan Nyhan and Jason Reifler in a 2010 scientific article on political misperceptions. Their article caused an international sensation, both in the scientific community and in the popular press. At a time when dishonesty in politics seems to be at historically high levels, this is no surprise.

In their article, Nyhan and Reifler concluded:

“The experiments reported in this paper help us understand why factual misperceptions about politics are so persistent. We find that responses to corrections in mock news articles differ significantly according to subjects’ ideological views. As a result, the corrections fail to reduce misperceptions for the most committed participants. Even worse, they actually strengthen misperceptions among ideological subgroups in several cases.”

Subsequently, other researchers found similar backfire effects, and notable researchers working in the area (e.g., Lewandowsky) have expressed the rather fatalistic view that attempts at correcting misinformation were unlikely to work—that believers would not change their minds even in the face of compelling evidence.

 

Debunking the Myths in the Learning Field

As I have communicated many times, there are dozens of dangerously harmful myths in the learning field, including learning styles, neuroscience as fundamental to learning design, and the myth that “people remember 10% of what they read, 20% of what they hear, 30% of what they see…etc.” I even formed a group to confront these myths (The Debunker Club), although, and I must apologize, I have not had the time to devote to making the group more active.

The “backfire effect” was a direct assault on attempts to debunk myths in the learning field. Why bother if we would make no difference? If believers of untruths would continue to believe? If our actions to persuade would have a boomerang effect, causing false beliefs to be believed even more strongly? It was a leg-breaking, breath-taking finding. I wrote a set of recommendations to debunkers in the learning field on how best to be successful in debunking, but admittedly many of us, me included, were left feeling somewhat paralyzed by the backfire finding.

Ironically perhaps, I was not fully convinced. Indeed, some may think I suffered from my own backfire effect. In reviewing a scientific research review in 2017 on how to debunk, I implored that more research be done so we could learn more about how to debunk successfully, but I also argued that misinformation simply couldn’t be a permanent condition, that there was ample evidence to show that people could change their minds even on issues that they once believed strongly. Racist bigots have become voices for diversity. Homophobes have embraced the rainbow. Religious zealots have become agnostic. Lovers of technology have become anti-technology. Vegans have become paleo meat lovers. Devotees of Coke have switched to Pepsi.

The bottom line is that organizations waste millions of dollars every year when they use faulty information to guide their learning designs. As professionals in the learning field, it is our responsibility to avoid the danger of misinformation! But is this even possible?

 

The Latest Research Findings

There is good news in the latest research! Thomas Wood and Ethan Porter just published an article (2018) reporting that they could not find any evidence for a backfire effect. They replicated the Nyhan and Reifler research, expanded tenfold the number of misinformation instances studied, modified the wording of their materials, used over 10,000 participants, and varied their methods for obtaining those participants. Despite all of this, they did not find any evidence for a backfire effect.

“We find that backfire is stubbornly difficult to induce, and is thus unlikely to be a characteristic of the public’s relationship to factual information. Overwhelmingly, when presented with factual information that corrects politicians—even when the politician is an ally—the average subject accedes to the correction and distances himself from the inaccurate claim.”

There is additional research to show that people can change their minds, that fact-checking can work, and that feedback can correct misconceptions. Rich and Zaragoza (2016) found that misinformation can be fixed with corrections. Rich, Van Loon, Dunlosky, and Zaragoza (2017) found that corrective feedback could work if it was designed to be believed. More directly, Nyhan and Reifler (2016), in work cited by the American Press Institute Accountability Project, found that fact-checking can work to debunk misinformation.

 

Some Perspective

First of all, let’s acknowledge that science sometimes works slowly. We don’t yet know all we will know about these persuasion and information-correction effects.

Also, let’s please be careful to note that backfire effects, when they are actually evoked, are typically found in situations where people are ideologically committed to a system of beliefs with which they strongly identify. Backfire effects have been studied mostly in situations where people identify themselves as conservative or liberal—when this identification is singularly or strongly important to their self-identity. Are folks in the learning field such strong believers in a system of beliefs and identity that they would easily suffer from the backfire effect? Maybe sometimes, but it seems less likely than in the arena of political belief, which seems to consume many of us.

Here are some learning-industry beliefs that may be so deeply held that the light of truth may not penetrate easily:

  • Belief that learners know what is best for their learning.
  • Belief that learning is about conveying information.
  • Belief that we as learning professionals must kowtow to our organizational stakeholders, that we have no grounds to stand by our own principles.
  • Belief that our primary responsibility is to our organizations not our learners.
  • Belief that learner feedback is sufficient in revealing learning effectiveness.

These beliefs seem to undergird other beliefs, and I’ve seen in my work how they can make it difficult to convey important truths. Let me be clear, though: it is speculation on my part that these beliefs have substantial influence. Note also that, given that the research on the “backfire effect” has now been shown to be tenuous, I’m not claiming that fighting such foundational beliefs will cause damage. On the contrary, it seems like it might be worth doing.

 

Knowledge May Be Modifiable, But Attitudes and Belief Systems May Be Harder to Change

The original backfire research showed people believing their mistaken facts even more strongly after being confronted with corrective information, but focusing on facts alone misses an important distinction. There are facts, and then there are attitudes, belief systems, and policy preferences.

A fascinating thing happened when Wood and Porter looked for—but didn’t find—the backfire effect. They talked with the original researchers, Nyhan and Reifler, and they began working together to solve the mystery. Why did the backfire effect happen sometimes but not regularly?

In a recent episode (January 28, 2018) of the “You Are Not So Smart” podcast, Wood, Porter, and Nyhan were interviewed by David McRaney, and they nicely clarified the distinction between factual backfire and attitudinal backfire.

Nyhan:

“People often focus on changing factual beliefs with the assumption that it will have consequences for the opinions people hold, or the policy preferences that they have, but we know from lots of social science research…that people can change their factual beliefs and it may not have an effect on their opinions at all.”

“The fundamental misconception here is that people use facts to form opinions and in practice that’s not how we tend to do it as human beings. Often we are marshaling facts to defend a particular opinion that we hold and we may be willing to discard a particular factual belief without actually revising the opinion that we’re using it to justify.”

Porter:

“Factual backfire, if it exists, would be especially worrisome, right? I don’t really believe we are going to find it anytime soon… Attitudinal backfire is less worrisome, because in some ways attitudinal backfire is just another description for failed persuasion attempts… that doesn’t mean that it’s impossible to change your attitude. That may very well just mean that what I’ve done to change your attitude has been a failure. It’s not that everyone is immune to persuasion, it’s just that persuasion is really, really hard.”

McRaney (Podcast Host):

“And so the facts suggest that the facts do work, and you absolutely should keep correcting people’s misinformation because people do update their beliefs and that’s important, but when we try to change people’s minds by only changing their [factual] beliefs, you can expect to end up engaging in belief whack-a-mole, correcting bad beliefs left and right as the person on the other side generates new ones to support, justify, and protect the deeper psychological foundations of the self.”

Nyhan:

“True backfire effects, when people are moving overwhelmingly in the opposite direction, are probably very rare, they are probably on issues where people have very strong fixed beliefs….”

 

Rise Up! Debunk!

Here’s the takeaway for us in the learning field who want to be helpful in moving practice to more effective approaches.

  • While there may be some underlying beliefs that influence thinking in the learning field, they are unlikely to be as strongly held as the political beliefs that researchers have studied.
  • The research seems fairly clear that factual backfire effects are extremely unlikely to occur, so we should not be afraid to debunk factual inaccuracies.
  • Persuasion is difficult but not impossible, so it is worth making attempts to debunk. Such attempts are likely to be more effective if we take a change-management approach, look to the science of persuasion, and persevere respectfully and persistently over time.

Here is the message that one of the researchers, Tom Wood, wants to convey:

“I want to affirm people. Keep going out and trying to provide facts in your daily lives and know that the facts definitely make some difference…”

Here are some conditions, identified in a recent article by Flynn, Nyhan, and Reifler (2017), under which persuasion has worked even with people’s strongly held beliefs:

  • When the persuader is seen to be ideologically sympathetic with those who might be persuaded.
  • When the correct information is presented in a graphical form rather than a textual form.
  • When an alternative causal account of the original belief is offered.
  • When credible or professional fact-checkers are utilized.
  • When multiple “related stories” are also encountered.

The stakes are high! Bad information permeates the learning field and makes our learning interventions less effective, harming our learners and our organizations while wasting untold resources.

We owe it to our organizations, our colleagues, and our fellow citizens to debunk bad information when we encounter it!

Let’s not be assholes about it! Let’s do it with respect, with openness to being wrong, and with all our persuasive wisdom. But let’s do it. It’s really important that we do!

 

Research Cited

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.

Nyhan, B., & Reifler, J. (2016). Do people actually learn from fact-checking? Evidence from a longitudinal study during the 2014 campaign. Available at: www.dartmouth.edu/~nyhan/fact-checking-effects.pdf

Rich, P. R., Van Loon, M. H., Dunlosky, J., & Zaragoza, M. S. (2017). Belief in corrective feedback for common misconceptions: Implications for knowledge revision. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(3), 492-501.

Rich, P. R., & Zaragoza, M. S. (2016). The continued influence of implied and explicitly stated misinformation in news reports. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(1), 62-74. http://dx.doi.org/10.1037/xlm0000155

Wood, T., & Porter, E. (2018). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior. Advance online publication.

 

I added these words to the sidebar of my blog, and I like them so much that I’m sharing them as a blog post in their own right.

Please seek wisdom from research-to-practice experts — the dedicated professionals who spend time in two worlds to bring the learning field insights based on science. These folks are my heroes, given their often quixotic efforts to navigate through an incomprehensible jungle of business and research obstacles.

These research-to-practice professionals should be your heroes as well. Not mythological heroes, not heroes etched into the walls of faraway mountains. These heroes should be sought out as our partners, our fellow travelers in learning, as people we hire as trusted advisors to bring us fresh research-based insights.

The business case is clear. Research-to-practice experts not only enlighten and challenge us with ideas we might not have considered — ideas that make our learning efforts more effective in producing business results — they also prevent us from engaging in wasted efforts, saving our organizations time and money, all while enabling us to focus more productively on learning factors that actually matter.

 

Another research brief. Answer the question and only then read what the research says:


 

In a recent study with teenagers playing a game to learn history, adding the learning instructions hurt learning outcomes for questions that assessed transfer, but NOT for questions that assessed recall. Give yourself some credit if you chose the second or third choice.

Caveats:

  • This is only one study.
  • It was done using only one type of learner.
  • It was done using only one type of learning method.
  • It was done with teenagers.

Important Point:

  • Don’t assume that adding instructions to encourage learning will facilitate learning.

Research:

Hawlitschek, A., & Joeckel, S. (2017). Increasing the effectiveness of digital educational games: The effects of a learning instruction on students’ learning, motivation and cognitive load. Computers in Human Behavior, 72, 79-86.

The learning profession has been blessed in recent years with a steady stream of scientific research that points to practical recommendations for designers of learning. If you or your organization are NOT hooked into the learning research, find yourself a research translator to help you! Call me, for example!

That’s the good news, but I have bad news for you too. In the old days, it wasn’t hard to create a competitive advantage for your company by staying abreast of the research and using it to design your learning products and services. Pretty soon, that won’t be enough. As the research becomes more widely known, you’ll have to do more to gain a competitive advantage. Vendors especially will have to differentiate their products — NOT just by basing them on the research — but also by conducting research (A-B testing at a minimum) on their own products.

I know of at least a few companies right now who are conducting research on their own products. They aren’t advertising their research, because they want to get a jumpstart on the competition. But eventually, they’ll begin sharing what they’ve done.

Do you need an example of a company that’s had its product tested? Check out this page. Scroll down to the bottom and look at the 20 or so research studies that have been done using the product. Looks pretty impressive, right?

To summarize, there are at least five benefits to doing research on your own products:

  1. Gain a competitive advantage by learning to make your product better.
  2. Gain a competitive advantage by supporting a high-quality brand image.
  3. Gain a competitive advantage by enabling the creation of unique and potent content marketing.
  4. Gain a competitive advantage by supporting creativity and innovation within your team.
  5. Gain a competitive advantage by creating an engaging and learning-oriented team environment.

A 2003 meta-analysis found that fitness training was likely to improve cognitive functioning in older adults.

I'm reprising this because, as recently as September 1, 2016, it was one of Psychological Science's most-cited articles.

Fitness and Aging

The researchers examined 18 scientific studies and 197 separate effect sizes. They grouped measures of cognitive functioning into four categories:

  • Executive functioning (the ability to plan, schedule, and generally engage in high-level decision-making).
  • Controlled processing (the ability to engage in simple decision-making).
  • Visuospatial processing (the ability to transform visual or spatial information).
  • Speed processing (the ability to make quick reactions).

Overall, the groups that exercised outperformed those that didn't.

 

Some Details:

  • Results were stronger for people 66-80 than for those 55-65 (judged by effect size; see the note after this list), although all groups showed significant benefits from exercise.
  • Exercise for less than 30 minutes produced very little benefit compared to exercise for 30-60 minutes.
  • Females seemed to get more benefits from exercising, but the way comparisons were made makes this conclusion somewhat sketchy.
  • Those who engaged in both weight-training and cardio-training had slightly better results than those who did cardio alone.
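
A quick note on the metric (background I'm adding here, not a quotation from the paper): an effect size in a meta-analysis like this one is typically a standardized mean difference, such as Cohen's d or a close variant, computed roughly as

    d = (M_exercise − M_control) / SD_pooled

By common convention, values near 0.2, 0.5, and 0.8 are interpreted as small, medium, and large effects, respectively.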

 

Research Citation:

Colcombe, S., & Kramer, A. F. (2003). Fitness effects on the cognitive function of older adults: A meta-analytic study. Psychological Science, 14, 125-130.

 

More Information

Check out a 2009 blog post I wrote on aging and cognition, and test your knowledge with the quiz!!

For millennia, scholars and thinkers of all sorts — from scientists to men and women on the street — thought that memories simply faded with time.

Locke said:

"The memory of some men, it is true, is very tenacious, even to a miracle; but yet there seems to be a constant decay of all our ideas, even of those which are struck deepest, and in the minds of the most retentive; so that if they be not sometimes renewed by repeated exercise of the senses, or reflection on those kinds of objects which at first occasioned them, the print wears out, and at last there remains nothing to be seen."  John Locke quoted by William James in Principles of Psychology (p. 445, the 1952 Great Books edition, original 1891).

However, in the twentieth century, research by McGeoch (1932), Underwood (1957), and others found that memories can fade when other learning interferes with them. Things learned earlier can interfere with current learning (proactive interference), and subsequent learning can interfere with what was learned before (retroactive interference).

The debate between decay and interference went on for over a century! Indeed, it paralleled the debate in physics over the nature of light: is it a wave or a particle?

The first ever photograph of light as both a particle and wave

In physics, the question was so important that Albert Einstein won the Nobel Prize for work at its heart: his explanation of the photoelectric effect, which showed that light behaves as particles as well as waves. The resolution was simple in the end: light is BOTH a wave and a particle. The picture above is reported by Phys.org to be the first photograph demonstrating light's dual properties.

Now in the psychological research, we have the first experimental evidence that forgetting may be caused by BOTH decay and interference.

In a clever experiment, published just this month, Talya Sadeh, Jason Ozubko, Gordon Winocur, and Morris Moscovitch found evidence for both interference and decay.

Their research appears to be inspired, at least partially, by neuroscience findings. Here's what the authors say:

"Two approaches have guided current thinking regarding the functional distinction between hippocampal and extrahippocampal memories. The first approach maintains that the hippocampus supports a mnemonic process termed recollection, whereas extrahippocampal structures, especially the perirhinal cortex, support a process termed familiarity… Recollection is a mnemonic process that involves reinstatement of memory traces within the context in which they were formed. Familiarity is a mnemonic process that manifests itself in the feeling that a studied item has been experienced, but without reinstating the original context." (p. 2)

To be clear, this was NOT a neuroscience experiment. They did not measure brain activity in any way. They measured behavioral findings only.

In their experiment, they had people learn a list of words and then gave them either (1) another word-learning task, (2) a short music task, or (3) a long music task. The first group's word-learning task was designed to create the most interference. The longer music task was designed to create the most decay (because it took longer).

The results of the experiment were consistent with the researchers' hypotheses. They claimed to have found evidence for both decay and interference.

Caveats

Every scientific experiment has caveats. Usually these are pointed out by the researchers themselves. Often, it takes an outside set of eyes to provide caveats.

Did the researchers prove, beyond the shadow of a doubt, that forgetting has two causes? Short answer: No! Did they produce some interesting findings? Maybe!

My big worry from a research-design perspective is that their manipulation distinguishing between recollection and familiarity is somewhat dubious, seemingly splitting hairs in the questions they asked the learners. My big worry from a practical learning-design perspective is that they used words as learning materials. First, most important learning situations utilize more complicated materials. Second, words are associative by their very nature — and thus more susceptible to interference than typical learning materials. Third, the final "test" of learning was a recognition-memory task that involved learners determining whether they remembered seeing the words before — again, not very relevant to practical learning situations.

Practical Ramifications for Learning Professionals

Since there are potential experimental-design issues, particularly from a practical standpoint, it would be an extremely dubious enterprise to draw practical ramifications. Let me be dubious then (because it's fun, not because it's wise). If the researchers are correct that context-based memories are less subject to interference effects, we might want to follow the general recommendation — often made today by research-focused learning experts — to provide learners with realistic practice using stimuli that have contextual relevance. In short, teach "if situation–then action" rather than teaching isolated concepts. Of course, we didn't need this experiment to tell us that. There is a ton of relevant research to back this up. For example, see The Decisive Dozen research review.

Beyond the experimental results, the concepts of decay and interference are intriguing in and of themselves. We know people tend to slide down a forgetting curve. Perhaps from interference, perhaps from decay. Indeed, as the authors say, "it is important to note that interference and decay are inherently confounded."

Research

The experiment:

Sadeh, T., Ozubko, J. D., Winocur, G., & Moscovitch, M. (2016). Forgetting patterns differentiate between two forms of memory representation. Psychological Science, OnlineFirst, published May 6, 2016. doi:10.1177/0956797616638307

The research review:

Sadeh, T., Ozubko, J. D., Winocur, G., & Moscovitch, M. (2014). How we forget may depend on how we remember. Trends in Cognitive Sciences, 18, 26–36.