New Meta-Analysis on Debunking — Still No Clear Path to Potency


A new meta-analysis on debunking was released last week, and I was hoping to get clear guidelines on how to debunk misinformation. Unfortunately, the science still seems somewhat equivocal about how to debunk. Either that, or there’s just no magic bullet.

Let’s break this down. We all know misinformation exists. People lie, people get confused and share bad information, people don’t vet their sources, incorrect information is easily spread, et cetera. Debunking is the act of providing information or inducing interactions intended to correct misinformation.

Misinformation is a huge problem in the world today, especially in our political systems. Democracy is difficult if political debate and citizen conversations are infused with bad information. Misinformation is also a huge problem for citizens themselves and for organizations. People who hear false health-related information can make themselves sick. Organizations whose employees make decisions based on bad information can hurt their bottom line.

In the workplace learning field, there’s a ton of misinformation that has incredibly damaging effects. People believe in the witchcraft of learning styles, neuroscience snake oil, traditional smile sheets, and all kinds of bogus information.

It would be nice if misinformation could be easily thwarted, but too often it lingers. For example, the idea that people remember 10% of what they read, 20% of what they hear, 30% of what they see, etc., has been around since 1913 if not before, but it still gets passed around every year on bastardized versions of Dale’s Cone.

A meta-analysis is a scientific study that compiles many other scientific studies using advanced statistical procedures to enable overall conclusions to be drawn. The study I reviewed (the one that was made available online last week) is:

Chan, M. S., Jones, C. R., Jamieson, K. H., & Albarracin, D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 28(11), 1531–1546. Available here (if you have journal access: http://journals.sagepub.com/doi/10.1177/0956797617714579).
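Mechanically, the heart of a meta-analysis is pooling each study's effect size into one overall estimate, typically weighting each study by the inverse of its variance so that larger, more precise studies count for more. Here is a minimal sketch of fixed-effect pooling; the effect sizes and variances are invented for illustration and are not taken from the Chan et al. paper.

```python
# Minimal sketch of fixed-effect meta-analytic pooling via inverse-variance
# weighting. All numbers below are hypothetical, for illustration only.

def pool_fixed_effect(effects, variances):
    """Combine per-study effect sizes into one weighted average.

    Each study is weighted by 1/variance, so larger, more precise
    studies pull the pooled estimate toward themselves.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical debunking studies (standardized mean differences):
effects = [0.80, 0.45, 1.10]
variances = [0.04, 0.02, 0.09]

pooled, pooled_var = pool_fixed_effect(effects, variances)
print(pooled, pooled_var)
```

Real meta-analyses (including this one) use more elaborate random-effects models to account for differences between studies, but the underlying idea, precision-weighted averaging, is the same.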

This study compiled scientific studies that:

  1. First presented people with misinformation (except a control group that got no misinformation).
  2. Then presented them with a debunking procedure.
  3. Then looked at what effect the debunking procedure had on people’s beliefs.

There are three types of effects examined in the study:

  1. Misinformation effect = Difference between the group that just got misinformation and a control group that didn’t get misinformation. This determined how much the misinformation hurt.
  2. Debunking effect = Difference between the group that just got misinformation and a group that got misinformation and later debunking. This determined how much debunking could lessen the effects of the misinformation.
  3. Misinformation-Persistence effect = Difference between the group that got misinformation-and-debunking and the control group that didn’t get misinformation. This determined how much debunking could fully reverse the effects of the misinformation.
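The three contrasts above boil down to simple differences between group averages. The sketch below makes the logic concrete; the belief scores are hypothetical, chosen only to show how the three effects relate.

```python
# Hypothetical mean belief-in-misinformation scores (0-10 scale) for the
# three groups in the paradigm described above. Numbers are invented.
control = 2.0          # got no misinformation
misinfo_only = 7.0     # got misinformation, no debunking
misinfo_debunked = 4.0 # got misinformation followed by debunking

# How much the misinformation hurt:
misinformation_effect = misinfo_only - control
# How much debunking lessened the damage:
debunking_effect = misinfo_only - misinfo_debunked
# How much misinformation survived despite debunking:
persistence_effect = misinfo_debunked - control

print(misinformation_effect, debunking_effect, persistence_effect)
```

A persistence effect above zero, as in these made-up numbers, is the pattern the meta-analysis found troubling: debunking reduced the misinformation but did not fully reverse it.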

They looked at three sets of factors.

First, the study examined what happens when people encounter misinformation. They found that the more people thought of explanations for the false information, the more they would believe this misinformation later, even in the face of debunking. From a practical standpoint then, if people are receiving misinformation, we should hope they don’t think too deeply about it. Of course, this is largely out of our control as learning practitioners, because people come to us after they’ve gotten misinformation. On the other hand, it may provide hints for us as we use knowledge management or social media. The research findings suggest that we might need to intervene immediately when bad information is encountered to prevent people from elaborating on the misinformation.

Second, the meta-analysis examined whether debunking messages that included procedures to induce people to make counter-arguments to the misinformation would outperform debunking messages that did not include such procedures (or that included less potent counter-argument-inducing procedures). They found consistent benefits: the counter-argument-inducing procedures helped reduce misinformation. This strongly suggests that debunking should induce counter-arguments to the misinformation. And though specific mechanisms for doing this may be difficult to design, it is probably not enough to present the counter-arguments ourselves; we need our learners to process the counter-arguments themselves, to a sufficient level of mathemagenic (learning-producing) processing.

Third, the meta-analysis looked at whether debunking messages that included explanatory information for why the misinformation was wrong would outperform debunking messages that included just contradictory claims (for example, statements to the effect that the misinformation was wrong). They found mixed results here. Providing debunking messages with explanatory information was more effective in debunking misinformation (getting people to move from being misinformed to being less misinformed), but these more explanatory messages were actually less effective in fully ridding people of the misinformation. Given this conflicting finding, it's not clear whether greater explanations make a difference, or how they might be designed to make a difference. One wild conjecture: perhaps where the explanations can induce relevant counter-arguments to the misinformation, they will be effective.

Overall, I came away disappointed that we haven’t been able to learn more about how to debunk. This is NOT these researchers’ fault. The data is the data. Rather, the research community as a whole has to double down on debunking and persuasion and figure out what works.

People certainly change their minds on heartfelt issues. Just think about the acceptance of gays and lesbians over the last twenty years. Dramatic changes! Many people are much more open and embracing. Well, how the hell did this happen? Some people died out, but many other people’s minds were changed.

My point is that misinformation cannot possibly be a permanent condition and it behooves the world to focus resources on fixing this problem — because it’s freakin’ huge!

————

Note that a review of this research in the New York Times painted this in a more optimistic light.

————

Some additional thoughts (added one day after original post).

To do a thorough job of analyzing any research paradigm, we should, of course, go beyond meta-analyses to the original studies being meta-analyzed. Most of us don't have time for that, so we often take the shortcut of just reading the meta-analysis or research reviews. This is generally okay, but the caveat is that we may miss something important.

One thing that struck me in reading the meta-analysis is that the authors commented on the typical experimental paradigm used in the research. It appeared that the actual experiment might have lasted 30 minutes or less, maybe 60 minutes at most. This includes reading (learning) the misinformation, completing a ten-minute distractor task, receiving the treatment manipulation (that is, one of the debunking methods), and answering questions to assess the final state of belief. To ensure I wasn't misinterpreting the authors' message that the experiments were short, I looked at several of the studies compiled in the meta-analysis. The research I looked at used very short experimental sessions. Here is one of the treatments the experimental participants received (it includes both misinformation and a corrective, so it is one of the longer treatments):

Health Care Reform and Death Panels: Setting the Record Straight

By JONATHAN G. PRATT
Published: November 15, 2009

WASHINGTON, DC – With health care reform in full swing, politicians and citizen groups are taking a close look at the provisions in the Affordable Health Care for America Act (H.R. 3962) and the accompanying Medicare Physician Payment Reform Act (H.R. 3961).

Discussion has focused on whether Congress intends to establish “death panels” to determine whether or not seniors can get access to end-of-life medical care. Some have speculated that these panels will force the elderly and ailing into accepting minimal end-of-life care to reduce health care costs. Concerns have been raised that hospitals will be forced to withhold treatments simply because they are costly, even if they extend the life of the patient. Now talking heads and politicians are getting into the act.

Betsy McCaughey, the former Lieutenant Governor of New York State has warned that the bills contain provisions that would make it mandatory that “people in Medicare have a required counseling session that will tell them how to end their life sooner.”

Iowa Senator Chuck Grassley, the ranking Republican member of the Senate Finance Committee, chimed into the debate as well at a town-hall meeting, telling a questioner, “You have every right to fear…[You] should not have a government-run plan to decide when to pull the plug on Grandma.”

However, a close examination of the bill by non-partisan organizations reveals that the controversial proposals are not death panels at all. They are nothing more than a provision that allows Medicare to pay for voluntary counseling.

The American Medical Association and the National Hospice and Palliative Care Organization support the provision. For years, federal laws and policies have encouraged Americans to think ahead about end-of-life decisions.

The bills allow Medicare to pay doctors to provide information about living wills, pain medication, and hospice care. John Rother, executive vice president of AARP, the seniors’ lobby, repeatedly has declared the “death panel” rumors false.

The new provision is similar to a proposal in the last Congress to cover an end-of-life planning consultation. That bill was co-sponsored by three Republicans, including John Isakson, a Republican Senator from Georgia.

Speaking about the end of life provisions, Senator Isakson has said, “It’s voluntary. Every state in America has an end of life directive or durable power of attorney provision… someone said Sarah Palin’s web site had talked about the House bill having death panels on it where people would be euthanized. How someone could take an end of life directive or a living will as that is nuts.”

That’s it. That’s the experimental treatment.

Are we truly to believe that such short exposures are representative of real-world debunking? Surely not! In the real world, people who get misinformation often hold it over months or years, occasionally rehearsing it, encountering additional supportive misinformation, or encountering contrary information that may modify their initial beliefs. All of this happens before we try our debunking treatments.

Finally, it should be emphasized that the meta-analysis compiled only eight research articles, many using the same (or similar) experimental paradigm. This is further cause for skepticism. We should be very skeptical of these findings, and my plea above for more study of debunking, especially in more ecologically valid situations, is reinforced!