
The Backfire Effect is NOT Prevalent: Good News for Debunkers, Humans, and Learning Professionals!


An exhaustive new research study reveals that the backfire effect is not as prevalent as previous research once suggested. This is good news for debunkers, those who attempt to correct misconceptions. This may be good news for humanity as well. If we cannot reason from truth, if we cannot reliably correct our misconceptions, we as a species will certainly be diminished—weakened by realities we have not prepared ourselves to overcome. For those of us in the learning field, the removal of the backfire effect as an unbeatable Goliath is good news too. Perhaps we can correct the misconceptions about learning that every day wreak havoc on our learning designs, hurt our learners, push ineffective practices, and cause an untold waste of time and money spent chasing mythological learning memes.

 

 

The Backfire Effect

The backfire effect is a fascinating phenomenon. It occurs when a person is confronted with information that contradicts an incorrect belief they hold. The surprising finding is that such attempts to persuade with truthful information may actually make the believer hold the untruth even more strongly than if they hadn’t been confronted in the first place.

The term “backfire effect” was coined by Brendan Nyhan and Jason Reifler in a 2010 scientific article on political misperceptions. Their article caused an international sensation, both in the scientific community and in the popular press. At a time when dishonesty in politics seems to be at historically high levels, this is no surprise.

In their article, Nyhan and Reifler concluded:

“The experiments reported in this paper help us understand why factual misperceptions about politics are so persistent. We find that responses to corrections in mock news articles differ significantly according to subjects’ ideological views. As a result, the corrections fail to reduce misperceptions for the most committed participants. Even worse, they actually strengthen misperceptions among ideological subgroups in several cases.”

Subsequently, other researchers found similar backfire effects, and notable researchers working in the area (e.g., Lewandowsky) have expressed the rather fatalistic view that attempts at correcting misinformation were unlikely to work—that believers would not change their minds even in the face of compelling evidence.

 

Debunking the Myths in the Learning Field

As I have communicated many times, there are dozens of dangerously harmful myths in the learning field, including learning styles, neuroscience as fundamental to learning design, and the myth that “people remember 10% of what they read, 20% of what they hear, 30% of what they see…etc.” I even formed a group to confront these myths (The Debunker Club), although I must apologize that I have not had the time to help the group become more active.

The “backfire effect” was a direct assault on attempts to debunk myths in the learning field. Why bother if we would make no difference? If believers of untruths would continue to believe? If our actions to persuade would have a boomerang effect, causing false beliefs to be believed even more strongly? It was a leg-breaking, breath-taking finding. I wrote a set of recommendations to debunkers in the learning field on how best to be successful in debunking, but admittedly many of us, me included, were left feeling somewhat paralyzed by the backfire finding.

Ironically perhaps, I was not fully convinced. Indeed, some may think I suffered from my own backfire effect. In reviewing a scientific research review in 2017 on how to debunk, I implored that more research be done so we could learn more about how to debunk successfully, but I also argued that misinformation simply couldn’t be a permanent condition, that there was ample evidence to show that people could change their minds even on issues that they once believed strongly. Racist bigots have become voices for diversity. Homophobes have embraced the rainbow. Religious zealots have become agnostic. Lovers of technology have become anti-technology. Vegans have become paleo meat lovers. Devotees of Coke have switched to Pepsi.

The bottom line is that organizations waste millions of dollars every year when they use faulty information to guide their learning designs. As professionals in the learning field, it is our responsibility to avoid the dangers of misinformation! But is this even possible?

 

The Latest Research Findings

There is good news in the latest research! Thomas Wood and Ethan Porter (2018) just published an article reporting that they could not find any evidence for a backfire effect. They replicated the Nyhan and Reifler research, expanded tenfold the number of misinformation instances studied, modified the wording of their materials, utilized over 10,000 participants, and varied their methods for obtaining those participants. They still did not find any evidence for a backfire effect.

“We find that backfire is stubbornly difficult to induce, and is thus unlikely to be a characteristic of the public’s relationship to factual information. Overwhelmingly, when presented with factual information that corrects politicians—even when the politician is an ally—the average subject accedes to the correction and distances himself from the inaccurate claim.”

There is additional research showing that people can change their minds, that fact-checking can work, and that feedback can correct misconceptions. Rich and Zaragoza (2016) found that misinformation can be fixed with corrections. Rich, Van Loon, Dunlosky, and Zaragoza (2017) found that corrective feedback could work if it was designed to be believed. More directly, Nyhan and Reifler (2016), in work cited by the American Press Institute Accountability Project, found that fact-checking can work to debunk misinformation.

 

Some Perspective

First of all, let’s acknowledge that science sometimes works slowly. We don’t yet know all we will know about these persuasion and information-correction effects.

Also, let’s please be careful to note that backfire effects, when they are actually evoked, are typically found in situations where people are ideologically committed to a system of beliefs with which they strongly identify. Backfire effects have been studied mostly in situations where someone identifies as a conservative or a liberal—when that identity is singularly or strongly important to their sense of self. Are folks in the learning field so strongly invested in a belief system and identity that they would easily suffer from the backfire effect? Maybe sometimes, but probably less often than in the arena of political belief, which seems to consume so many of us.

Here are some learning-industry beliefs that may be so deeply held that the light of truth may not penetrate easily:

  • Belief that learners know what is best for their learning.
  • Belief that learning is about conveying information.
  • Belief that we as learning professionals must kowtow to our organizational stakeholders, that we have no grounds to stand by our own principles.
  • Belief that our primary responsibility is to our organizations not our learners.
  • Belief that learner feedback is sufficient in revealing learning effectiveness.

These beliefs seem to undergird other beliefs, and I’ve seen in my work how they can make it difficult to convey important truths. Let me be clear, though: the idea that these beliefs have substantial influence is speculative on my part, a conjecture. Note also that, given that the research on the “backfire effect” has now been shown to be tenuous, I’m not claiming that fighting such foundational beliefs will cause damage. On the contrary, it seems like it might be worth doing.

 

Knowledge May Be Modifiable, But Attitudes and Belief Systems May Be Harder to Change

The original backfire-effect research suggested that people believed falsehoods more strongly after being confronted with corrective facts, but this framing misses an important distinction. There are facts, and then there are attitudes, belief systems, and policy preferences.

A fascinating thing happened when Wood and Porter looked for—but didn’t find—the backfire effect. They talked with the original researchers, Nyhan and Reifler, and they began working together to solve the mystery. Why did the backfire effect happen sometimes but not regularly?

In a recent episode (January 28, 2018) of the “You Are Not So Smart” podcast, Wood, Porter, and Nyhan were interviewed by David McRaney, and they nicely clarified the distinction between factual backfire and attitudinal backfire.

Nyhan:

“People often focus on changing factual beliefs with the assumption that it will have consequences for the opinions people hold, or the policy preferences that they have, but we know from lots of social science research…that people can change their factual beliefs and it may not have an effect on their opinions at all.”

“The fundamental misconception here is that people use facts to form opinions and in practice that’s not how we tend to do it as human beings. Often we are marshaling facts to defend a particular opinion that we hold and we may be willing to discard a particular factual belief without actually revising the opinion that we’re using it to justify.”

Porter:

“Factual backfire, if it exists, would be especially worrisome, right? I don’t really believe we are going to find it anytime soon… Attitudinal backfire is less worrisome, because in some ways attitudinal backfire is just another description for failed persuasion attempts… that doesn’t mean that it’s impossible to change your attitude. That may very well just mean that what I’ve done to change your attitude has been a failure. It’s not that everyone is immune to persuasion, it’s just that persuasion is really, really hard.”

McRaney (Podcast Host):

“And so the facts suggest that the facts do work, and you absolutely should keep correcting people’s misinformation because people do update their beliefs and that’s important, but when we try to change people’s minds by only changing their [factual] beliefs, you can expect to end up, and engaging in, belief whack-a-mole, correcting bad beliefs left and right as the person on the other side generates new ones to support, justify, and protect the deeper psychological foundations of the self.”

Nyhan:

“True backfire effects, when people are moving overwhelmingly in the opposite direction, are probably very rare, they are probably on issues where people have very strong fixed beliefs….”

 

Rise Up! Debunk!

Here’s the takeaway for us in the learning field who want to be helpful in moving practice to more effective approaches.

  • While there may be some underlying beliefs that influence thinking in the learning field, they are unlikely to be as strongly held as the political beliefs that researchers have studied.
  • The research seems fairly clear that factual backfire effects are extremely unlikely to occur, so we should not be afraid to debunk factual inaccuracies.
  • Persuasion is difficult but not impossible, so it is worth making attempts to debunk. Such attempts are likely to be more effective if we take a change-management approach, look to the science of persuasion, and persevere respectfully and persistently over time.

Here is the message that one of the researchers, Tom Wood, wants to convey:

“I want to affirm people. Keep going out and trying to provide facts in your daily lives and know that the facts definitely make some difference…”

Here are some conditions under which persuasion has worked, even with people’s strongly held beliefs, drawn from a recent article by Flynn, Nyhan, and Reifler (2017):

  • When the persuader is seen to be ideologically sympathetic with those who might be persuaded.
  • When the correct information is presented in a graphical form rather than a textual form.
  • When an alternative causal account of the original belief is offered.
  • When credible or professional fact-checkers are utilized.
  • When multiple “related stories” are also encountered.

The stakes are high! Bad information permeates the learning field and makes our learning interventions less effective, harming our learners and our organizations while wasting untold resources.

We owe it to our organizations, our colleagues, and our fellow citizens to debunk bad information when we encounter it!

Let’s not be assholes about it! Let’s do it with respect, with openness to being wrong, and with all our persuasive wisdom. But let’s do it. It’s really important that we do!

 

Research Cited

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303-330.
Nyhan, B., & Reifler, J. (2016). Do people actually learn from fact-checking? Evidence from a longitudinal study during the 2014 campaign. Available at: www.dartmouth.edu/~nyhan/fact-checking-effects.pdf
Rich, P. R., Van Loon, M. H., Dunlosky, J., & Zaragoza, M. S. (2017). Belief in corrective feedback for common misconceptions: Implications for knowledge revision. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(3), 492-501.
Rich, P. R., & Zaragoza, M. S. (2016). The continued influence of implied and explicitly stated misinformation in news reports. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(1), 62-74. http://dx.doi.org/10.1037/xlm0000155
Wood, T., & Porter, E. (2018). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior. Advance online publication.

 

One of the Biggest Lies in Learning Evaluation — Asking Learners about Level 3 and 4.


The Kirkpatrick four-level model of evaluation includes Level 1 learner reactions, Level 2 learning, Level 3 behavior, and Level 4 results. Because of the model’s ubiquity and popularity, many learning professionals and organizations are influenced or compelled by the model to measure the two higher levels—Behavior and Results—even when it doesn’t make sense to do so and even if poor methods are used to do the measurement. This pressure has led many of us astray. It has also enabled vendors to lie to us.

Let me get right to the point. When we ask learners whether a learning intervention will improve their job performance, we are getting their Level 1 reactions. We are NOT getting Level 3 data. More specifically, we are not getting information we can trust to tell us whether a person’s on-the-job behavior has improved due to the learning intervention.

Similarly, when we ask learners about the organizational results that might come from a training or elearning program, we are getting learners’ Level 1 reactions. We are NOT getting Level 4 data. More specifically, we are not getting information we can trust to tell us whether organizational results improved due to the learning intervention.

One key question is, “Are we getting information we can trust?” Another is, “Are we sure the learning intervention caused the outcome we’re targeting—or whether, at least, it was significant in helping to create the targeted outcomes?”

Whenever we gather learner answers, we have to remember that people’s subjective opinions are not always accurate. First, there are general problems with human subjectivity, including people’s tendencies to want to be nice, to see themselves and their organizations in a positive light, and to believe they are more productive, intelligent, and capable than they actually are. In addition, learners don’t always know how different learning methods affect learning outcomes, so asking them to assess learning designs has to be done with great care to avoid bias.

The Foolishness of Measuring Level 3 and 4 with Learner-Input Alone

There are also specific difficulties in having learners rate Level 3 and 4 results.

  • Having learners assess Level 3 is fraught with peril because of all the biases that are entailed. Learners may want to look good to others or to themselves. They may suffer from the Dunning-Kruger effect and rate their performance at a higher level than what is deserved.
  • Assessing Level 4 organizational results is particularly problematic. First, it is very difficult to track all the things that influence organizational performance. Asking learners for Level 4 results is a dubious enterprise because most employees cannot observe or may not fully understand the many influences that impact organizational outcomes.

Many questions we ask learners in measuring Level 3 and 4 are biased in and of themselves. These four questions are highly biasing, and yet sadly they were taken directly from two of our industry’s best-known learning-evaluation vendors:

  • “Estimate the degree to which you improved your performance related to this course?” (rated on a scale of percentages up to 100)
  • “The training has improved my job performance.” (rated on a numeric scale)
  • “I will be able to apply on the job what I learned during this session.” (rated with a Likert-like scale)
  • “I anticipate that I will eventually see positive results as a result of my efforts.” (rated with a Likert-like scale)

At least two of our top evaluation vendors make the case explicitly that smile sheets can gather Level 3 and 4 data. This is one of the great lies in the learning industry. A smile sheet garners Level 1 results! It does not capture data at any other levels.

What about delayed smile sheets—questions delivered to learners weeks or months after a learning experience? Can these get Level 2, 3, and 4 data? No! Asking learners for their perspectives, regardless of when their answers are collected, still gives us only Level 1 outcomes! Yes, learners’ answers can provide hints, but the data can only be a proxy for outcomes beyond Level 1.

On top of that, the problems cited above regarding learner perspectives on their job performance and on organizational results still apply even when questions are asked well after a learning event. Remember, the key to measurement is always whether we can trust the data we are collecting! To reiterate, asking learners for their perspectives on behavior and results suffers from the following:

  • Learners’ biases skew the data
  • Learners’ blind spots make their answers suspect
  • Biased questioning spoils the data
  • The complexity in determining the network of causal influences makes assessments of learning impact difficult or impossible

In situations where learner perspectives are so in doubt, asking learners questions may generate some reasonable hypotheses, but then these hypotheses must be tested with other means.

The Ethics of the Practice

It is unfair to call Level 1 data Level 3 data or Level 4 data.

In truth, it is not only unfair, it is deceptive, disingenuous, and harmful to our learning efforts.

How Widespread is this Misconception?

If two of our top vendors are spreading this misconception, we can be pretty sure that our friend-and-neighbor foot soldiers are marching to the beat.

Last week, I posted a Twitter poll asking the following question:

If you ask your learners how the training will impact their job performance, what #Kirkpatrick level is it?

Twitter polls only allow four options, so I gave people the choice of Level 1 — Reaction, Level 2 — Learning, Level 3 — Behavior, or Level 4 — Results.

Over 250 people responded (253). Here are the results:

  • Level 1 — Reaction (garnered 31% of the votes)
  • Level 2 — Learning (garnered 15% of the votes)
  • Level 3 — Behavior (garnered 38% of the votes)
  • Level 4 — Results (garnered 16% of the votes)

Level 1 is the correct answer! Level 3 is the most common misconception!

And note, given that Twitter is built on a social-media follower-model—and many people who follow me have read my book on Performance-Focused Smile Sheets, where I specifically debunk this misconception—I’m sure this result is NOT representative of the workplace learning field in general. I’m certain that in the field, more people believe that the question represents a Level 3 measure.

Yes, it is true what they say! People like you who read my work are more informed and less subject to the vagaries of vendor promotions. Also better looking, more bold, and more likely to be humble humanitarians!

My tweet offered one randomly-chosen winner a copy of my award-winning book. And the winner is:

Sheri Kendall-DuPont, known on Twitter as:

Thanks to everyone who participated in the poll…

Replacement for the Net Promoter Score—For Learning Assessments

The Net Promoter Score is one of the most popular smile-sheet questions in use. Unfortunately, it is fatally flawed for learning. I’ve written about NPS’s problems before. Essentially, NPS was designed for marketing purposes to get people’s feelings about the products they were using. NPS was NOT designed for learning. Also, the wording and choices of the question are too fuzzy to be meaningful. Finally, and most damning, NPS follows traditional smile sheets in focusing on learner satisfaction and course reputation—even though research has shown that traditional smile sheets are uncorrelated with learning!!

Despite these problems, organizations continue their blind allegiance to NPS.

Oftentimes, we are forced into doing stupid things by our organizational stakeholders, mostly because there seems to be no alternative. Let me provide one.

Can we gauge learner satisfaction in a way that focuses the question toward learning effectiveness and less on entertainment, enjoyment, ease of attendance, etc.? Yes. We. Can!

 

Net Effectiveness Score (NES)

Here’s the question:

If someone asked you about the effectiveness of the learning experience, would you recommend the learning to them? CHOOSE ONE.

  • The learning was TOO INEFFECTIVE to recommend.
  • The learning was INEFFECTIVE ENOUGH THAT I WOULD BE HESITANT to recommend it.
  • The learning was NOT FULLY EFFECTIVE, BUT I would recommend it IF IMPROVEMENTS WERE MADE to the learning.
  • The learning was NOT FULLY EFFECTIVE, BUT I would still recommend it EVEN IF NO CHANGES WERE MADE to the learning.
  • The learning was EFFECTIVE, SO I WOULD RECOMMEND IT.
  • The learning was VERY EFFECTIVE, SO I WOULD HIGHLY RECOMMEND IT.

This question has several benefits over the NPS question.

  1. It focuses on learning.
  2. It prompts learners to think about learning effectiveness.
  3. It has concrete answer choices, not fuzzy numeric ones.
  4. It will create meaningful results.

By the way, this question should be delivered after other smile-sheet questions that nudge learners to think about learning factors that really matter.
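
If you’re wondering how the results of this question might be reported, here is a minimal illustrative sketch in Python, assuming we simply tabulate the percentage of respondents selecting each answer choice (the abbreviated choice labels, the tallying function, and the sample responses below are hypothetical examples, not a prescribed scoring formula):

```python
from collections import Counter

# Hypothetical short labels for the six NES answer choices, ordered from
# least to most favorable (wording abbreviated from the question above).
CHOICES = [
    "TOO INEFFECTIVE to recommend",
    "INEFFECTIVE enough that I would be hesitant to recommend",
    "NOT FULLY EFFECTIVE; recommend only IF IMPROVEMENTS WERE MADE",
    "NOT FULLY EFFECTIVE, but recommend EVEN IF NO CHANGES WERE MADE",
    "EFFECTIVE, so I would recommend it",
    "VERY EFFECTIVE, so I would highly recommend it",
]

def tabulate_nes(responses):
    """Return the percentage of respondents selecting each answer choice.

    `responses` is a list of indices (0-5) into CHOICES.
    """
    counts = Counter(responses)
    total = len(responses)
    return {choice: 100.0 * counts[i] / total for i, choice in enumerate(CHOICES)}

# Twenty hypothetical learner responses to the NES question.
example_responses = [5, 4, 4, 3, 4, 5, 2, 4, 4, 3, 5, 4, 1, 4, 5, 4, 3, 4, 5, 4]
for choice, pct in tabulate_nes(example_responses).items():
    print(f"{pct:5.1f}%  {choice}")
```

Reporting the full distribution of answer choices keeps the concrete wording front and center, rather than collapsing responses into a single fuzzy number.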

To learn more about performance-focused learner-feedback questions, either get in touch with me or check out my book.

 

Big Data and Learning — A Wild Goose Chase?


Geese are everywhere these days, crapping all over everything. Where we might have nourishment, we get poop on our shoes.

Big data is everywhere these days…

Even flocking into the learning field.

For big-data practitioners to NOT crap up the learning field, they’ll need to find good sources of data (good luck with that!), use intelligence about learning to know what it means (blind arrogance will prevent this, at least at first), and then find where the data is actually useful in practice (will there be patience and practice or just shiny new objects for sale?).

Beware of the wild goose chase! It’s already here.

Brett Christensen Uses the Performance-Focused Smile Sheet Methodology

This week, Brett Christensen published an article on how he’s used a Performance-Focused Smile Sheet to support him in teaching one of ISPI’s flagship workshops.

What I found particularly striking is how Brett used the smile-sheet results to make sense of learning effectiveness. His goal was to help his learners be able to take what they’ve learned and use it back on the job.

One smile-sheet question he used pointed to results suggesting that learners felt they had gained awareness of the concepts but might not be fully able to put what they learned into practice. This raised a red flag, so Brett examined results from another question on the amount of practice received in the workshop. The learners told him that practice made up only a little more than 50% of the workshop, and Brett used this information to consider adding more practice.

He also used a question to get a sense of whether the spacing effect was utilized to support long-term remembering, a key research-based learning approach. He got good news there: even in a one-day workshop, many learners felt repetitions were delivered after a delay of an hour or more. Good instructional design!

For a century or more, our learner-feedback questions have focused on satisfaction, course reputation, and other factors that are NOT directly related to learning effectiveness. Now we have a new methodology, first described in the award-winning book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form. We ought to use this to get feedback about what we can do better.

Brett offers a wonderful case study from his work teaching a course offered through ISPI (developed by Dr. Roger Chevalier). We are no longer hogtied by evaluations that provide us with bogus information. We can look for ways to get better feedback, improve our learning interventions, and get better results.

To read Brett’s full article, click here…

Recording of Webinar — On Transfer Research for 2018


Holy Cow, Batman! Yesterday’s webinar, which I co-hosted with Emma Weber of Lever Learning, was overbooked and some people were unable to connect. To help make amends, here is the recording of the webinar:

 

 

Click Here to View Webinar on YouTube

 

Apologies that we were not able to record the actual polling results (the responses of those who attended live to the questions we asked). Still, I think it’s pretty good as webinar recordings go.

Emma and I send our heartfelt apologies. We know some of you notified your teams, changed your schedules, and stayed up late or stayed late at work to watch. We are considering offering an encore engagement in January for those who might want to participate more intimately than a recording can provide. Watch this blog for details or sign up for my list to be notified.

Getting Better Responses on Your Smile Sheets.


One of the most common questions I get when I speak about the Performance-Focused Smile-Sheet approach (see the book’s website at SmileSheets.com) is “What can be done to get higher response rates from my smile sheets?”

Of course, people also refer to smile sheets as evals, level 1’s, happy sheets, hot or warm evaluation, response forms, reaction forms, etc. They also refer to both paper-and-pencil forms and online surveys. Indeed, as smile sheets go online, more and more people are finding that online surveys get a much lower response rate than in-classroom paper surveys.

Before I give you my list for how to get a higher response rate, let me blow this up a bit. The thing is, while we want high response rates, there’s something even more important: response relevance and precision. We want the questions to relate to learning effectiveness, not just learning reputation and learner satisfaction. We also want the learners to be able to answer the questions knowledgeably and give our questions their full attention.

If we have bad questions — ones that use Likert-like or numeric scales, for example — it won’t matter that we have high response rates. In this post, I’m NOT going to focus on how to write better questions. Instead, I’m tackling how we can motivate our learners to give our questions more of their full attention, increasing the precision of their responses while also increasing our response rates.

How to get Better Responses and Higher Response Rates

  1. Ask with enthusiasm, while also explaining the benefits.
  2. Have a trusted person make the request (often an instructor who our learners have bonded with).
  3. Mention the coming smile sheet early in the learning (and more than once) so that learners know it is an integral part of the learning, not just an add-on.
  4. While mentioning the smile sheet, let folks know what you’ve learned from previous smile sheets and what you’ve changed based on the feedback.
  5. Tell learners what you’ll do with the data, and how you’ll let them know the results of their feedback.
  6. Highlight the benefits to the instructor, to the instructional designers, and to the organization. Those who ask can mention how they’ve benefited in the past from smile sheet results.
  7. Acknowledge the effort that they — your learners — will be making, maybe even commiserating with them that you know how hard it can be to give their full attention when it’s the end of the day or when they are back to work.
  8. Put the time devoted to the survey in perspective, for example, “We spent 7 hours today in learning, that’s 420 minutes, and now we’re asking you for 10 more minutes.”
  9. Assure your learners that the data will be confidential and that it will be aggregated so that an individual’s responses are never shared.
  10. Let your learners know the percentage of people like them who typically complete the survey (caveat: if it’s relatively high).
  11. Use more distinctive answer choices. Avoid Likert-like answer choices and numerical scales — because learners instinctively know they aren’t that useful.
  12. Ask more meaningful questions. Use questions that learners can answer with confidence. Ask questions that focus on meaningful information. Avoid obviously biased questions — as these may alienate your learners.

How to get Better Responses and Higher Response Rates on DELAYED SMILE SHEETS

Sometimes, we’ll want to survey our learners well after a learning event, for example three to five weeks later. Delayed smile sheets are perfectly positioned to find out more about how the learning is relevant to the actual work or to our learners’ post-learning application efforts. Unfortunately, prompting action — that is getting learners to engage our delayed smile sheets — can be particularly difficult when asking for this favor well after learning. Still, there are some things we can do — in addition to the list above — that can make a difference.

  1. Tell learners what you learned from the end-of-learning smile sheet they previously completed.
  2. Ask the instructor who bonded with them to send the request (instead of an unknown person from the learning unit).
  3. Send multiple requests, preferably using a mechanism that only sends these requests to those who still need to complete the survey.
  4. Have the course officially end sometime AFTER the delayed smile sheet is completed, even if that is largely just a perception. Note that multiple-event learning experiences lend themselves to this approach, whereas single-event learning experiences do not.
  5. Share with your learners a small portion of the preliminary data from the delayed smile sheet. “Already, 46% of your fellow learners have completed the survey, with some intriguing tentative results. Indeed, it looks like the most relevant topic as rated by your fellow learners is… and the least relevant is…”
  6. Share with them the names or job titles of some of the people who have completed the survey already.
  7. Share with them the percentage of folks from their unit who have responded already or share a comparison across units.

What about INCENTIVES?

When I ask audiences for their ideas for improving responses and increasing response rates, they often mention some sort of incentive, usually based on a lottery or raffle. “If you complete the survey, your name will be submitted for a chance to win the latest tech gadget, a book, time off, lunch with an executive, etc.”

I’m a skeptic. I’m open to being wrong, but I’m still skeptical about the cost/benefit calculation. Certainly, for some audiences an incentive will increase rates of completion. And for some audiences, the harms that come with incentives may be worth enduring.

What harms, you might ask? When we provide an external incentive, we might be sending a message to some learners that we know the task has no redeeming value or is tedious or difficult. People who see their own motivation as caused by the external incentive are potentially less likely to seriously engage our questions, producing bad data. We’re also not just having an effect on the current smile sheet. When we incentivize people today, they may be less willing next time to engage in answering our questions. They may also be pushed into believing that smile sheets are difficult, worthless, or worse.

Ideally, we’d like our learners to want to provide us with data, to see answering our questions as a worthy and helpful exercise, one that is valuable to them, to us, and to our organization. Incentives push against this vision.

 

Prompting Learning When Our Learners Play Games

 

Another research brief. Answer the question and only then read what the research says:


 

In a recent study with teenagers playing a game to learn history, adding the learning instructions hurt learning outcomes for questions that assessed transfer, but NOT recall. The first choice hurt transfer but not recall. Give yourself some credit if you chose the second or third choices.

Caveats:

  • This is only one study.
  • It was done using only one type of learner.
  • It was done using only one type of learning method.
  • It was done with teenagers.

Important Point:

  • Don’t assume that adding instructions to encourage learning will facilitate learning.

Research:

Hawlitschek, A., & Joeckel, S. (2017). Increasing the effectiveness of digital educational games: The effects of a learning instruction on students’ learning, motivation and cognitive load. Computers in Human Behavior, 72, 79-86.

Doing Research On Our Learning Products


The learning profession has been blessed in recent years with a steady stream of scientific research that points to practical recommendations for designers of learning. If you or your organization are NOT hooked into the learning research, find yourself a research translator to help you! Call me, for example!

That’s the good news, but I have bad news for you too. In the old days, it wasn’t hard to create a competitive advantage for your company by staying abreast of the research and using it to design your learning products and services. Pretty soon, that won’t be enough. As the research becomes more widely known, you’ll have to do more to get a competitive advantage. Vendors especially will have to differentiate their products — NOT just by basing them on the research — but also by conducting research (A-B testing at a minimum) on their own products.
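
To make the A-B testing idea concrete, here is a minimal sketch, assuming we compare the pass rates of two versions of a course on a delayed assessment using a simple two-proportion z-test (the function, the numbers, and the outcome below are hypothetical illustrations, not results from any actual product):

```python
import math

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Compare pass rates (or any yes/no outcome) between version A and
    version B of a learning product using a pooled two-proportion z-test.

    Returns the z statistic and the two-sided p-value.
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical A-B test: 120 of 200 learners passed a delayed assessment after
# course version A; 141 of 200 passed after version B (which added spaced practice).
z, p = two_proportion_ztest(120, 200, 141, 200)
print(f"Version A pass rate: {120/200:.1%}, Version B pass rate: {141/200:.1%}")
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

Real product research would of course go further, with stronger outcome measures, random assignment, and enough learners in each condition to detect meaningful differences.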

I know of at least a few companies right now that are conducting research on their own products. They aren’t advertising their research, because they want to get a jumpstart on the competition. But eventually, they’ll begin sharing what they’ve done.

Do you need an example of a company that has had its product tested? Check out this page. Scroll down to the bottom and look at the 20 or so research studies that have been done using the product. Looks pretty impressive, right?

To summarize, there are at least five benefits to doing research on your own products:

  1. Gain a competitive advantage by learning to make your product better.
  2. Gain a competitive advantage by supporting a high-quality brand image.
  3. Gain a competitive advantage by enabling the creation of unique and potent content marketing.
  4. Gain a competitive advantage by supporting creativity and innovation within your team.
  5. Gain a competitive advantage by creating an engaging and learning-oriented team environment.

Research Reviews of the Spacing Effect

I’ve been following the spacing effect for over a decade, writing a research-to-practice report on it in 2006 and recommending it to my clients, including in the guise of subscription learning (threaded microlearning).

One of the fascinating things is that researchers continue to be drawn to the spacing effect, producing about 10 new studies every year and many research reviews.

Here is a list of the research reviews, from most recent to earliest.

  • Maddox, G. B. (2016). Understanding the underlying mechanism of the spacing effect in verbal learning: A case for encoding variability and study-phase retrieval. Journal of Cognitive Psychology, 28(6), 684-706.
  • Vlach, H. A. (2014). The spacing effect in children’s generalization of knowledge: Allowing children time to forget promotes their ability to learn. Child Development Perspectives, 8(3), 163-168.
  • Küpper-Tetzel, C. E. (2014). Understanding the distributed practice effect: Strong effects on weak theoretical grounds. Zeitschrift für Psychologie, 222(2), 71-81.
  • Carpenter, S. K. (2014). Spacing and interleaving of study and practice. In V. A. Benassi, C. E. Overson, & C. M. Hakala (Eds.), Applying science of learning in education: Infusing psychological science into the curriculum (pp. 131-141). Washington, DC: Society for the Teaching of Psychology.
  • Toppino, T. C., & Gerbier, E. (2014). About practice: Repetition, spacing, and abstraction. In B. H. Ross (Ed.), The psychology of learning and motivation: Vol. 60. The psychology of learning and motivation (pp. 113-189). San Diego, CA: Elsevier Academic Press.
  • Carpenter, S. K., Cepeda, N. J., Rohrer, D., Kang, S. H. K., & Pashler, H. (2012). Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction. Educational Psychology Review, 24(3), 369-378.
  • Kornmeier, J., & Sosic-Vasic, Z. (2012). Parallels between spacing effects during behavioral and cellular learning. Frontiers in Human Neuroscience, 6, Article ID 203.
  • Delaney, P. F., Verkoeijen, P. P. J. L., & Spirgel, A. (2010). Spacing and testing effects: A deeply critical, lengthy, and at times discursive review of the literature. In B. H. Ross (Ed.), The psychology of learning and motivation: Vol. 53. The psychology of learning and motivation: Advances in research and theory (pp. 63-147).
  • Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3), 354-380.
  • Janiszewski, C., Noel, H., & Sawyer, A. G. (2003). A meta-analysis of the spacing effect in verbal learning: Implications for research on advertising repetition and consumer memory. Journal of Consumer Research, 30(1), 138-149.
  • Dempster, F. N., & Farris, R. (1990). The spacing effect: Research and practice. Journal of Research & Development in Education, 23(2), 97-101.
  • Underwood, B. J. (1961). Ten years of massed practice on distributed practice. Psychological Review, 68(4), 229-247.
  • Ruch, T. C. (1928). Factors influencing the relative economy of massed and distributed practice in learning. Psychological Review, 35(1), 19-45.