A new meta-analysis on debunking was released last week, and I was hoping to get clear guidelines on how to debunk misinformation. Unfortunately, the science still seems somewhat equivocal about how to debunk. Either that, or there’s just no magic bullet.
Let’s break this down. We all know misinformation exists. People lie, people get confused and share bad information, people don’t vet their sources, incorrect information is easily spread, et cetera. Debunking is the act of providing information or inducing interactions intended to correct misinformation.
Misinformation is a huge problem in the world today, especially in our political systems. Democracy is difficult if political debate and citizen conversations are infused with bad information. Misinformation is also a huge problem for citizens themselves and for organizations. People who hear false health-related information can make themselves sick. Organizations whose employees make decisions based on bad information can see their bottom line suffer.
In the workplace learning field, there’s a ton of misinformation that has incredibly damaging effects. People believe in the witchcraft of learning styles, neuroscience snake oil, traditional smile sheets, and all kinds of bogus information.
It would be nice if misinformation could be easily thwarted, but too often it lingers. For example, the idea that people remember 10% of what they read, 20% of what they hear, 30% of what they see, etc., has been around since 1913 if not before, but it still gets passed around every year on bastardized versions of Dale’s Cone.
A meta-analysis is a scientific study that compiles many other scientific studies using advanced statistical procedures to enable overall conclusions to be drawn. The study I reviewed (the one that was made available online last week) is:
Chan, M. S., Jones, C. R., Jamieson, K. H., & Albarracin, D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 28(11), 1531–1546. Available here (if you have journal access: http://journals.sagepub.com/doi/10.1177/0956797617714579).
This study compiled scientific studies that:
- First presented people with misinformation (except a control group that got no misinformation).
- Then presented them with a debunking procedure.
- Then looked at what effect the debunking procedure had on people’s beliefs.
There are three types of effects examined in the study:
- Misinformation effect = Difference between the group that just got misinformation and a control group that didn’t get misinformation. This determined how much the misinformation hurt.
- Debunking effect = Difference between the group that just got misinformation and a group that got misinformation and later debunking. This determined how much debunking could lessen the effects of the misinformation.
- Misinformation-Persistence effect = Difference between the group that got misinformation-and-debunking and the control group that didn’t get misinformation. This determined how much debunking could fully reverse the effects of the misinformation. (See the sketch just below for one way to write these contrasts.)
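To make these contrasts easier to hold in mind, here is one way to write them in simple notation. This is my own shorthand sketch, not the authors’ formulas (the meta-analysis itself works with standardized effect sizes):

```latex
% My shorthand, not the authors' notation: let B_mis, B_debunk, and B_ctrl be
% average belief in the misinformation for the misinformation-only group, the
% misinformation-plus-debunking group, and the no-misinformation control group.
\begin{align*}
  \text{Misinformation effect}             &= \bar{B}_{\mathrm{mis}}    - \bar{B}_{\mathrm{ctrl}}\\
  \text{Debunking effect}                  &= \bar{B}_{\mathrm{mis}}    - \bar{B}_{\mathrm{debunk}}\\
  \text{Misinformation-persistence effect} &= \bar{B}_{\mathrm{debunk}} - \bar{B}_{\mathrm{ctrl}}
\end{align*}
```

In these terms, debunking helps to the extent the debunking effect is large, but it fully reverses the misinformation only when the persistence effect shrinks to zero.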
They looked at three sets of factors.
First, the study examined what happens when people encounter misinformation. They found that the more people thought of explanations for the false information, the more they would believe this misinformation later, even in the face of debunking. From a practical standpoint then, if people are receiving misinformation, we should hope they don’t think too deeply about it. Of course, this is largely out of our control as learning practitioners, because people come to us after they’ve gotten misinformation. On the other hand, it may provide hints for us as we use knowledge management or social media. The research findings suggest that we might need to intervene immediately when bad information is encountered to prevent people from elaborating on the misinformation.
Second, the meta-analysis examined whether debunking messages that included procedures to induce people to make counter-arguments to the misinformation would outperform debunking messages that did not include such procedures (or that included less potent counter-argument-inducing procedures). They found consistent benefits: the counter-argument-inducing procedures helped reduce misinformation. This strongly suggests that debunking should induce counter-arguments to the misinformation. Though specific mechanisms for doing this may be difficult to design, it is probably not enough to present the counter-arguments ourselves; we must get our learners to process those counter-arguments themselves, to a sufficient level of mathemagenic (learning-producing) processing.
Third, the meta-analysis looked at whether debunking messages that included explanatory information for why the misinformation was wrong would outperform debunking messages that included just contradictory claims (for example, statements to the effect that the misinformation was wrong). They found mixed results here. Providing debunking messages with explanatory information was more effective in debunking misinformation (getting people to move from being misinformed to being less misinformed), but these more explanatory messages were actually less effective in fully ridding people of the misinformation. These findings conflict, so it’s not clear whether greater explanations make a difference, or how they might be designed to make a difference. One wild conjecture: perhaps where the explanations can induce relevant counter-arguments to the misinformation, they will be effective.
Overall, I came away disappointed that we haven’t been able to learn more about how to debunk. This is NOT these researchers’ fault. The data is the data. Rather, the research community as a whole has to double down on debunking and persuasion and figure out what works.
People certainly change their minds on heartfelt issues. Just think about the acceptance of gays and lesbians over the last twenty years. Dramatic changes! Many people are much more open and embracing. Well, how the hell did this happen? Some people died out, but many other people’s minds were changed.
My point is that misinformation cannot possibly be a permanent condition and it behooves the world to focus resources on fixing this problem — because it’s freakin’ huge!
————
Note that a review of this research in the New York Times painted this in a more optimistic light.
————
Some additional thoughts (added one day after original post).
To do a thorough job of analyzing any research paradigm, we should, of course, go beyond meta-analyses to the original studies being meta-analyzed. Most of us don’t have time for that, so we often take the short-cut of just reading the meta-analysis or just reading research reviews, etc. This is generally okay, but there is a caveat that we might be missing something important.
One thing that struck me in reading the meta-analysis is that the authors commented on the typical experimental paradigm used in the research. It appeared that the actual experiment might have lasted 30 minutes or less, maybe 60 minutes at most. This includes reading (learning) the misinformation, completing a ten-minute distractor task, undergoing the treatment manipulations (that is, the various debunking methods), and answering questions to assess their final state of belief. To ensure I wasn’t misinterpreting the authors’ message that the experiments were short, I looked at several of the studies compiled in the meta-analysis. The research I looked at used very short experimental sessions. Here is one of the treatments the experimental participants received (it includes both misinformation and a corrective, so it is one of the longer treatments):
Health Care Reform and Death Panels: Setting the Record Straight
By JONATHAN G. PRATT
Published: November 15, 2009
WASHINGTON, DC – With health care reform in full swing, politicians and citizen groups are taking a close look at the provisions in the Affordable Health Care for America Act (H.R. 3962) and the accompanying Medicare Physician Payment Reform Act (H.R. 3961).
Discussion has focused on whether Congress intends to establish “death panels” to determine whether or not seniors can get access to end-of-life medical care. Some have speculated that these panels will force the elderly and ailing into accepting minimal end-of-life care to reduce health care costs. Concerns have been raised that hospitals will be forced to withhold treatments simply because they are costly, even if they extend the life of the patient. Now talking heads and politicians are getting into the act.
Betsy McCaughey, the former Lieutenant Governor of New York State, has warned that the bills contain provisions that would make it mandatory that “people in Medicare have a required counseling session that will tell them how to end their life sooner.”
Iowa Senator Chuck Grassley, the ranking Republican member of the Senate Finance Committee, chimed into the debate as well at a town-hall meeting, telling a questioner, “You have every right to fear…[You] should not have a government-run plan to decide when to pull the plug on Grandma.”
However, a close examination of the bill by non-partisan organizations reveals that the controversial proposals are not death panels at all. They are nothing more than a provision that allows Medicare to pay for voluntary counseling.
The American Medical Association and the National Hospice and Palliative Care Organization support the provision. For years, federal laws and policies have encouraged Americans to think ahead about end-of-life decisions.
The bills allow Medicare to pay doctors to provide information about living wills, pain medication, and hospice care. John Rother, executive vice president of AARP, the seniors’ lobby, repeatedly has declared the “death panel” rumors false.
The new provision is similar to a proposal in the last Congress to cover an end-of-life planning consultation. That bill was co-sponsored by three Republicans, including John Isakson, a Republican Senator from Georgia.
Speaking about the end of life provisions, Senator Isakson has said, “It’s voluntary. Every state in America has an end of life directive or durable power of attorney provision… someone said Sarah Palin’s web site had talked about the House bill having death panels on it where people would be euthanized. How someone could take an end of life directive or a living will as that is nuts.”
That’s it. That’s the experimental treatment.
Are we truly to believe that such short exposures are representative of real-world debunking? Surely not! In the real world, people who get misinformation often hold that misinformation over months or years while occasionally thinking about the misinformation again or encountering additional supportive misinformation or non-supportive information that may modify their initial beliefs in the misinformation. This all happens and then we try our debunking treatments.
Finally, it should be emphasized that the meta-analysis compiled only eight research articles, many using the same (or similar) experimental paradigm. This is further inducement to skepticism about these findings, and it reinforces my plea above for more study of debunking — especially in more ecologically-valid situations!
Is elearning effective? As effective as classroom instruction — more or less effective? What about blended learning — when elearning and classroom learning are combined?
These critical questions have now been answered, and the answers are available in the research report, Does eLearning Work? What the Scientific Research Says!
In this research review, I looked at meta-analyses and individual research studies, and was able to derive clear conclusions. The report is available for free, it includes an executive summary, and research jargon is kept to a minimum.
Click here to download the report…
Note that the August 10, 2017 version of this report incorrectly cited the Rowland (2014) study in a footnote and omitted it from the list of research citations. These issues were fixed on August 11, 2017. Special thanks to Elizabeth Dalton who notified me of the issues.
As professionals in the learning field, we know memory is central to our work. If we don’t help our learners preserve their memories (of what they learned), we have not really done our job. I’m oversimplifying here — sometimes we want to guide our learners toward external memory aids instead of memory. But mostly, we aim to support learning and memory.
You might have learned that people who take photographs will remember less than those who don’t. Several research studies showed this (see, for example, Henkel, 2014).
The internet buzzed with this information a few years ago:
- The Telegraph — http://www.telegraph.co.uk/news/science/science-news/10507146/Taking-photographs-ruins-the-memory-research-finds.html
- NPR — http://www.npr.org/2014/05/22/314592247/overexposed-camera-phones-could-be-washing-out-our-memories
- Slate — http://www.slate.com/blogs/the_slatest/2013/12/09/a_new_study_finds_taking_photos_hurts_memory_of_the_thing_you_were_trying.html
- CNN — http://www.cnn.com/2013/12/10/health/memory-photos-psychology/index.html
- Fox News — http://www.foxnews.com/health/2013/12/11/taking-pictures-may-impair-memories-study-shows.html
Well, that was then. This is now.
Research Wisdom
There are CRITICAL LESSONS to be learned here — about using science intelligently… with wisdom.
Science is a self-correcting system that, with the arc of time, bends toward the truth. So, at any point in time, when we ask science for its conclusions, it tells us what it knows, while it apologizes for not knowing everything. Scientists can be wrong. Science can take wrong turns on the long road toward better understanding.
Does this mean we should reject scientific conclusions because they can’t guarantee omniscience, because they can’t guarantee truth? I’ve written about this in more depth elsewhere, but I’ll say it here briefly — recommendations from science are better than our own intuitions, especially in regard to learning, given all the ways we humans are blind to how learning works.
Memory With Photography
Earlier studies showed that people who photographed images were less able to remember them than people who simply examined the images. Researchers surmised that people who off-loaded their memories to an external memory aid — to the photographs — freed up memory for other things.
We can look back at this now and see that this was a time of innocence; that science had kept some confidences hidden. New research by Barasch, Diehl, Silverman, and Zauberman (2017) found that people “who could freely take photographs during an experience recognized more of what they saw” and that those “with a camera had better recognition of aspects of the scene that they photographed than of aspects they did not photograph.”
Of course, this is just one set of studies… we must be patient with science. More research will be done, and you and I will benefit in knowing more than we know now and with more confidence… but this will take time.
What is the difference between the earlier studies and this latest set of studies? As argued by Barasch, Diehl, Silverman, and Zauberman (2017), the older studies did not give people the choice of which objects to photograph. In the words of the researchers, people did not have volitional control of their photographing experience. They didn’t go through the normal process we might go through in our real-world situations, where we must decide what to photograph and determine how to photograph the objects we target (i.e., the angles, borders, focus, etc.).
In a series of four experiments, the new research showed that attention was at the center of the memory effect. Indeed, people taking photographs “recognized more of what they saw and less of what they heard,” compared with those who could not take any photographs (I added the bold underlines).
Interestingly, some of the same researchers, just the year before, had found that taking photographs actually improved people’s enjoyment of their experiences (Diehl, Zauberman, & Barasch, 2016).
Practical Considerations for Learning Professionals
You might be asking yourself, “How should I handle the research-based recommendations I encounter?” Here is my advice:
- Be skeptical, but not too skeptical.
- Determine whether the research comes from a trusted source. Best sources are top-tier refereed scientific journals — especially where many studies find the same results. Worst sources are survey-based compilations of opinions. Beware of recommendations based on one scientific article. Beware of vendor-sponsored research. Beware of research that is not refereed; that is, not vetted by other researchers.
- Find yourself a trusted research translator. These people — and I count myself among them — have spent substantial time exploring the practical aspects of the research, so they are likely to have wisdom about what the research means — and what its boundary conditions might be.
- Pay your research translators — so they can continue doing their work.
- Be good and prosper. Use the research in your learning programs and test it. Do good evaluation so you can get valid feedback to make your learning initiatives maximally effective.
Inscribed in My High School Yearbook in 1976
Time it was, and what a time it was, it was
A time of innocence, A time of confidences
Long ago, it must be, I have a photograph
Preserve your memories; They’re all that’s left you
Written by Paul Simon
The Photograph Above
Taken in Glacier National Park, Montana, USA; July 1, 2017
And incidentally, the glaciers are shrinking permanently.
Research Cited
Barasch, A., Diehl, K., Silverman, J., & Zauberman, G. (2017). Photographic memory: The effects of volitional photo taking on memory for visual and auditory aspects of an experience. Psychological Science, early online publication.
Diehl, K., Zauberman, G., & Barasch, A. (2016). How taking photos increases enjoyment of experiences. Journal of Personality and Social Psychology, 111, 119–140.
Henkel, L. A. (2014). Point-and-shoot memories: The influence of taking photos on memory for a museum tour. Psychological Science, 25, 396–402.
Another research brief. Answer the question and only then read what the research says:
In a recent study with teenagers playing a game to learn history, adding the learning instructions hurt learning outcomes for questions that assessed transfer, but NOT for those that assessed recall. So the first choice was correct: it hurt transfer but not recall. Give yourself some credit if you chose the second or third choice.
Caveats:
- This is only one study.
- It was done using only one type of learner.
- It was done using only one type of learning method.
- It was done with teenagers.
Important Point:
- Don’t assume that adding instructions to encourage learning will facilitate learning.
Research:
Hawlitschek, A., & Joeckel, S. (2017). Increasing the effectiveness of digital educational games: The effects of a learning instruction on students’ learning, motivation and cognitive load. Computers in Human Behavior, 72, 79-86.
The learning profession has been blessed in recent years with a steady stream of scientific research that points to practical recommendations for designers of learning. If you or your organization are NOT hooked into the learning research, find yourself a research translator to help you! Call me, for example!
That’s the good news, but I have bad news for you too. In the old days, it wasn’t hard to create a competitive advantage for your company by staying abreast of the research and using it to design your learning products and services. Pretty soon, that won’t be enough. As the research becomes more widely known, you’ll have to do more to get a competitive advantage. Vendors especially will have to differentiate their products — NOT just by basing them on the research — but also by conducting research (A-B testing at a minimum) on their own products.
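For those wondering what A-B testing at a minimum might look like, here is one possible sketch in Python. The data, the variable names, and the choice of a Welch two-sample t-test are all my illustrative assumptions, not a prescription:

```python
# A minimal sketch of an A-B test on a learning product (hypothetical data).
# Learners are randomly assigned to version A or version B of a course,
# and each learner produces a post-test score.
from scipy import stats

scores_a = [72, 68, 75, 80, 64, 77, 71, 69]  # version A post-test scores (hypothetical)
scores_b = [81, 78, 85, 74, 88, 79, 83, 76]  # version B post-test scores (hypothetical)

# Welch's t-test compares the two group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(scores_b, scores_a, equal_var=False)

print(f"Mean A = {sum(scores_a) / len(scores_a):.1f}, "
      f"Mean B = {sum(scores_b) / len(scores_b):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With real products you’d want larger samples, true random assignment, and outcome measures that matter (transfer, not just recall), but even this level of rigor beats shipping designs untested.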
I know of at least a few companies right now that are conducting research on their own products. They aren’t advertising their research, because they want to get a jumpstart on the competition. But eventually, they’ll begin sharing what they’ve done.
Do you need an example of a company that’s had its product tested? Check out this page. Scroll down to the bottom and look at the 20 or so research studies that have been done using the product. Looks pretty impressive, right?
To summarize, there are at least five benefits to doing research on your own products:
- Gain a competitive advantage by learning to make your product better.
- Gain a competitive advantage by supporting a high-quality brand image.
- Gain a competitive advantage by enabling the creation of unique and potent content marketing.
- Gain a competitive advantage by supporting creativity and innovation within your team.
- Gain a competitive advantage by creating an engaging and learning-oriented team environment.
I’ve been following the spacing effect for over a decade, writing a research-to-practice report in 2006 and recommending the spacing effect to my clients, including in the guise of subscription learning (threaded microlearning).
One of the fascinating things is that researchers remain captivated by the spacing effect, producing about 10 new studies every year and many research reviews.
Here is a list of the research reviews, from most recent to earliest.
- Maddox, G. B. (2016). Understanding the underlying mechanism of the spacing effect in verbal learning: A case for encoding variability and study-phase retrieval. Journal of Cognitive Psychology, 28(6), 684-706.
- Vlach, H. A. (2014). The spacing effect in children’s generalization of knowledge: Allowing children time to forget promotes their ability to learn. Child Development Perspectives, 8(3), 163-168.
- Küpper-Tetzel, C. E. (2014). Understanding the distributed practice effect: Strong effects on weak theoretical grounds. Zeitschrift für Psychologie, 222(2), 71-81.
- Carpenter, S. K. (2014). Spacing and interleaving of study and practice. In V. A. Benassi, C. E. Overson, & C. M. Hakala (Eds.), Applying science of learning in education: Infusing psychological science into the curriculum (pp. 131-141). Washington, DC: Society for the Teaching of Psychology.
- Toppino, T. C., & Gerbier, E. (2014). About practice: Repetition, spacing, and abstraction. In B. H. Ross (Ed.), The psychology of learning and motivation: Vol. 60. The psychology of learning and motivation (pp. 113-189). San Diego, CA: Elsevier Academic Press.
- Carpenter, S. K., Cepeda, N. J., Rohrer, D., Kang, S. H. K., & Pashler, H. (2012). Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction. Educational Psychology Review, 24(3), 369-378.
- Kornmeier, J., & Sosic-Vasic, Z. (2012). Parallels between spacing effects during behavioral and cellular learning. Frontiers in Human Neuroscience, 6, Article ID 203.
- Delaney, P. F., Verkoeijen, P. P. J. L., & Spirgel, A. (2010). Spacing and testing effects: A deeply critical, lengthy, and at times discursive review of the literature. In B. H. Ross (Ed.), The psychology of learning and motivation: Vol. 53. The psychology of learning and motivation: Advances in research and theory (pp. 63-147). San Diego, CA: Elsevier Academic Press.
- Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3), 354-380.
- Janiszewski, C., Noel, H., & Sawyer, A. G. (2003). A meta-analysis of the spacing effect in verbal learning: Implications for research on advertising repetition and consumer memory. Journal of Consumer Research, 30(1), 138-149.
- Dempster, F. N., & Farris, R. (1990). The spacing effect: Research and practice. Journal of Research & Development in Education, 23(2), 97-101.
- Underwood, B. J. (1961). Ten years of massed practice on distributed practice. Psychological Review, 68(4), 229-247.
- Ruch, T. C. (1928). Factors influencing the relative economy of massed and distributed practice in learning. Psychological Review, 35(1), 19-45.
Does giving learners more control of the way they navigate through an elearning program help them or hurt them?
Before I tell you what the research says, challenge yourself with this one-item quiz question:
How did you do? I ask because I want to give you maximum control of this learning experience.
Or maybe, I just like telling a joke. SMILE.
A recent meta-analysis (a scientific study that looks at many other scientific studies) found NO benefit for learner control. And contrary to what we were told by Malcolm Knowles and all those who tried to sell us on the adult-learner-knows-best baloney, the meta-analysis showed slightly more of a tendency for learner control to be beneficial for kids, NOT adults (but still a virtually non-existent benefit).
Our job then as elearning designers is NOT to give up control to our learners, but to design a learning experience that uses proven research-based techniques to guide learners through an effective repertoire of learning experiences. Certainly, the research finding shouldn’t be construed to mean that we should never give learners control, but it does mean that, as an overriding design principle, it’s a bad idea.
Interestingly, the meta-analysis broke down elearning into its methodological parts, and found no benefit to learner control in any of the components.
“The current study, in keeping with the previous meta-analysis (Niemiec et al., 1996), found near zero effects for all components of instruction (pacing, time, sequence, practice, review). Thus, there does not seem to be an advantage to giving the learner control over any particular instructional component.” (p. 404)
Of course, with research as with most things in life, some circumspection is in order. My first worry is that research on elearning tends to be done on very short learning programs, where learner motivation doesn’t really come into play. Who can’t stay attentive for 15 minutes? Some learner control might help in longer learning programs, where it might support motivation to engage with the learning. Also, we wouldn’t want to throw out the idea of learner control completely. There may be some specific opportunities where it is worthwhile.
The Research Reviewed
Karich, A. C., Burns, M. K., & Maki, K. E. (2014). Updated meta-analysis of learner control within educational technology. Review of Educational Research, 84(3), 392-410.
Remember This!
If you’re developing learning or developing your learning team, don’t forget to seek out trusted research-to-practice experts to help you.
The spacing effect, if not the most studied learning factor, is certainly in the top five. As Harry Bahrick and Lynda Hall said in 2005, “The spacing effect is one of the oldest and best documented phenomena in the history of learning and memory research.”
The spacing effect is the finding that repetitions that are spaced over time produce better long-term remembering than the exact same repetitions spaced over a shorter amount of time or massed all together.
About 10 new scientific studies are carried out each year on the spacing effect (I counted 31 in the last three years). Why such frenzied dedication to exploring the spacing effect? Scientists want to know what causes it! It’s really rather fascinating!
To prepare for my upcoming conference presentation at the UK Learning Technologies conference (my presentation is now available on YouTube by clicking here), where they’ve asked me to speak on the spacing effect as it relates to mobile learning and microlearning, I’m doing another review of the scientific research. Here, on my blog, I’ll share tidbits of my translational research effort, especially when I find articles that are particularly interesting or informative.
In this post, I’m looking at Geoffrey Maddox’s review of the spacing research, which was published just last year in 2016 and is the most recent review available (although given the interest in spacing, there are many reviews in the scientific literature). He does a spectacular job making sense of the many strands of research.
In it, he finds that there are six main theories for why the spacing effect occurs. I’ve simplified his list into five theoretical explanations and ignored his somewhat jargony labels to help normal folks like you and me grok the meaning.
Five Theoretical Explanations for the Spacing Effect
- Spacing Prompts More Attention: Learners may exert more attention to spaced items (compared with massed items).
- Spacing Prompts Retrieval: Learners may be forced to retrieve spaced items (compared with massed items, which need no retrieval because they are still top of mind).
- Spacing Prompts More-Difficult Retrieval: Learners may be prompted to engage in more difficult retrieval of spaced items (compared with massed items) and longer-spaced items (compared with shorter-spaced items).
- Spacing Involves More Contextual Variability: Learners may create more retrieval routes (or more varied retrieval routes) when prompted with spaced items (compared with massed items).
- Spacing Prompts Retrieval and Variability: Spacing benefits learners through both retrieval and variability; but variability, because it induces weaker traces, may lead to more retrieval failure, thus lowering retrieval rates when spaced intervals are too long.
Maddox concludes that the only one that even comes close to explaining all the phenomena in the scientific literature on spacing is the last one, which is really a combination of #2 and #4.
Count me as a skeptic. I just think we may be asking too much to push ourselves toward a unifying theory of spacing at this point. While a ton of research has been done, there are so many aspects to spacing and so much more authentically realistic research left to do that we ought to hold off on tying a pretty bow around one theory or another.
Evidence supporting my skepticism turned up in the very next article I read, in which Metcalfe and Xu found more mind-wandering during massed practice than during spaced practice. That finding would fall under theory #1 above, not #5 (the one Maddox favors), nor #2 or #4, which make up #5.
Practical Implications
This article was not focused on providing practical implications, so it’s probably too much to expect them from it. Nevertheless, it does show the complexity of spacing at a cognitive level.
Also, Maddox was pretty clear in describing how robust the scientific research is in terms of the spacing effect. He wrote, “Because of its robustness, the spacing effect has the potential to be applied across a variety of contexts as a way of improving learning and memory.”
He also detailed the ways that the science of spacing is so strong, including the following:
- The spacing effect is “observed in different animal species,”
- “across the human lifespan”
- “with numerous experimental manipulations”
- “observed with educationally relevant verbal materials”
- “observed in the classroom”
- and “observed with memory impaired populations.”
We as learning professionals can conclude that the spacing effect (1) is real, (2) applies to all human beings, (3) is relevant to most situations, (4) is a powerful learning factor, and (5) ought to be utilized in our learning designs!
So folks, as I wrote in 2006, we ought to figure out ways to Space Learning Over Time, using spaced repetitions, perhaps in a subscription-learning format.
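To make that concrete, here is a minimal sketch of an expanding-interval repetition schedule, the kind of mechanism a subscription-learning design might use. The specific gaps (1, 3, 7, 14, and 30 days) are my illustrative assumptions; the research supports spreading repetitions out, not these exact intervals:

```python
# A minimal sketch of an expanding-interval repetition schedule.
# The gaps are illustrative; the spacing research does not prescribe
# exact intervals, only that repetitions be spread out over time.
from datetime import date, timedelta

def spaced_schedule(first_exposure: date, gaps_in_days=(1, 3, 7, 14, 30)):
    """Return the dates on which to re-present a learning point."""
    dates = []
    current = first_exposure
    for gap in gaps_in_days:
        current += timedelta(days=gap)
        dates.append(current)
    return dates

# Example: a learning point first taught on March 1, 2017
for repetition_date in spaced_schedule(date(2017, 3, 1)):
    print(repetition_date.isoformat())
```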
Research Cited:
Bahrick, H. P., & Hall, L. K. (2005). The importance of retrieval failures to long-term retention: A metacognitive explanation of the spacing effect. Journal of Memory and Language, 52, 566-577.
Maddox, G. B. (2016). Understanding the underlying mechanism of the spacing effect in verbal learning: A case for encoding variability and study-phase retrieval. Journal of Cognitive Psychology, 28(6), 684-706.
Metcalfe, J., & Xu, J. (2016). People mind wander more during massed than spaced inductive learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(6), 978-984.
Thalheimer, W. (2006, February). Spacing Learning Events Over Time: What the Research Says. Available at: http://work-learning.com/catalog.html.
As you know, if you’ve dabbled in my work for a few years, I’ve closely followed the research finding, “The Spacing Effect,” both from a research perspective and a practical perspective. Indeed, I was one of the first in the workplace learning field to recognize its practical significance, which I wrote about as early as 2002. In 2006 I published the research-to-practice report entitled Spacing Learning Over Time, which should — if there was justice in the world (LOL) or viable trade organizations (OUCH) — be enshrined in the Learning and Development Hall of Fame. Snickers are welcome. Taza Chocolate even better.
A few years ago, still wanting to advocate for the practical use of the spacing effect, I began speaking about Subscription Learning at conferences and I developed a website (SubscriptionLearning.com) to encourage folks in the learning field to utilize the spacing effect in their learning designs. SubscriptionLearning.com is being retired in 2017, as it is no longer needed. Blog posts from the website are incorporated in this blog.
I am grateful to the enlightened organizations who have supported my work over the years and specifically to the individuals who continue to encourage the reading of the 2006 research report. Feel free to share yourself.
Now in 2017, I am grateful to another organization, Learning Technologies (in the UK), which is sponsoring me to speak on the spacing effect at their conference starting in a few weeks. As part of my efforts, I am developing a new presentation and updating my research compilation on the spacing effect. Stay tuned to this blog, as I’m likely to share a few of my findings as I dig into the research.
Indeed, the research on spacing is some of the most interesting I’ve studied over the years. The first thing that fascinates is that there is so much damn research on the spacing effect, also referred to as spaced learning, distributed practice, and interleaving. In 1992, Bruce and Bahrick counted up the number of scientific studies on spacing and found over 300 articles at that time. Every year, there are more and more scientific articles published on spacing. By my rough count of journal articles cited on PsycINFO (a primary social-science database), over the last three years there have been 31 new articles published on the spacing effect (7 in 2014, 14 in 2015, and 10 in 2016).
One of the main reasons that so many research articles are published on the spacing effect is that the phenomenon is so intriguing. Why would spacing repetitions over time produce so much more remembering than giving the learners the exact same repetitions but simply massing them all at once or spacing them with less time in between? Freakin’ fascinating! So researchers keep digging into the complexities.
Harry Bahrick and Lynda Hall announced in 2005 that, “The spacing effect is one of the oldest and best documented phenomena in the history of learning and memory research.” And, just last year in a scientific review article, Geoffrey Maddox wrote, “Because of its robustness, the spacing effect has the potential to be applied across a variety of contexts as a way of improving learning and memory.”
Stay tuned, as I hope to be spacing my research compilations over time…
Research
Bahrick, H. P., & Hall, L. K. (2005). The importance of retrieval failures to long-term retention: A metacognitive explanation of the spacing effect. Journal of Memory and Language, 52, 566-577.
Maddox, G. B. (2016). Understanding the underlying mechanism of the spacing effect in verbal learning: A case for encoding variability and study-phase retrieval. Journal of Cognitive Psychology, 28(6), 684-706.
Thalheimer, W. (2006, February). Spacing Learning Events Over Time: What the Research Says. Available at: http://work-learning.com/catalog.html.