Learning Styles Notion Still Prevalent on Google


Two and a half years ago, in writing a blog post on learning styles, I did a Google search using the words “learning styles.” I found that the top 17 search items were all advocating for learning styles, even though there was clear evidence that learning-styles approaches DO NOT WORK.

Today, I replicated that search and found the following in the top 17 search items:

  • 13 advocated/supported the learning-styles idea.
  • 4 debunked it.

That’s progress, but clearly Google is not up to the task of providing valid information on learning styles.

Scientific Research that clearly Debunks the Learning-Styles Notion:

  • Kirschner, P. A. (2017). Stop propagating the learning styles myth. Computers & Education, 106, 166-171.
  • Willingham, D. T., Hughes, E. M., & Dobolyi, D. G. (2015). The scientific status of learning styles theories. Teaching of Psychology, 42(3), 266-271.
  • Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.
  • Rohrer, D., & Pashler, H. (2012). Learning styles: Where’s the evidence? Medical Education, 46(7), 634-635.

Follow the Money

  • Still, no one has come forward to prove the benefits of learning styles, even though $1,000 was offered over 10 years ago and $5,000 was offered 3 years ago.

Great 70-20-10 Debate

The Debunker Club (http://www.debunker.club/), one of my hobbies, is hosting a debate about the potency/viability of the 70-20-10 model. For more information, go directly to the Debunker Club’s event page.

New Meta-Analysis on Debunking — Still an Unclear Way to Potency


A new meta-analysis on debunking was released last week, and I was hoping to get clear guidelines on how to debunk misinformation. Unfortunately, the science still seems somewhat equivocal about how to debunk. Either that, or there’s just no magic bullet.

Let’s break this down. We all know misinformation exists. People lie, people get confused and share bad information, people don’t vet their sources, incorrect information is easily spread, et cetera. Debunking is the act of providing information or inducing interactions intended to correct misinformation.

Misinformation is a huge problem in the world today, especially in our political systems. Democracy is difficult if political debate and citizen conversations are infused with bad information. Misinformation is also a huge problem for citizens themselves and for organizations. People who hear false health-related information can make themselves sick. Organizations whose employees make decisions based on bad information can hurt their own bottom line.

In the workplace learning field, there’s a ton of misinformation that has incredibly damaging effects. People believe in the witchcraft of learning styles, neuroscience snake oil, traditional smile sheets, and all kinds of bogus information.

It would be nice if misinformation could be easily thwarted, but too often it lingers. For example, the idea that people remember 10% of what they read, 20% of what they hear, 30% of what they see, etc., has been around since 1913 if not before, but it still gets passed around every year on bastardized versions of Dale’s Cone.

A meta-analysis is a scientific study that compiles many other scientific studies using advanced statistical procedures to enable overall conclusions to be drawn. The study I reviewed (the one that was made available online last week) is:

Chan, M. S., Jones, C. R., Jamieson, K. H., & Albarracin, D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, early online publication (print page numbers not yet determined). Available, if you have journal access, at http://journals.sagepub.com/doi/10.1177/0956797617714579.
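For readers who want a concrete sense of what compiling studies "using advanced statistical procedures" involves, here is a minimal sketch of the simplest version of the idea: fixed-effect, inverse-variance pooling of effect sizes. The study values below are invented for illustration, and Chan and colleagues used considerably more sophisticated procedures.

```python
# A toy fixed-effect meta-analysis: pool effect sizes from several studies,
# weighting each by the inverse of its variance so that more precise studies
# count more. The numbers are hypothetical, purely for illustration.
studies = [(0.45, 0.04), (0.30, 0.02), (0.60, 0.09)]  # (effect size d, variance of d)

weights = [1 / var for _, var in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect size: {pooled:.2f} (SE = {pooled_se:.2f})")
```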

This study compiled scientific studies that:

  1. First presented people with misinformation (except a control group that got no misinformation).
  2. Then presented them with a debunking procedure.
  3. Then looked at what effect the debunking procedure had on people’s beliefs.

There are three types of effects examined in the study:

  1. Misinformation effect = Difference between the group that just got misinformation and a control group that didn’t get misinformation. This determined how much the misinformation hurt.
  2. Debunking effect = Difference between the group that just got misinformation and a group that got misinformation and later debunking. This determined how much debunking could lessen the effects of the misinformation.
  3. Misinformation-Persistence effect = Difference between the group that got misinformation-and-debunking and the control group that didn't get misinformation. This determined how much debunking could fully reverse the effects of the misinformation. (See the toy numerical illustration below.)
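To make the three contrasts concrete, here is a toy calculation. The belief ratings are invented, not numbers from the meta-analysis; they simply show how each effect is computed as a difference between group means.

```python
# Hypothetical mean belief-in-misinformation ratings (0 = no belief, 10 = full belief)
control_only        = 2.0   # never saw the misinformation
misinfo_only        = 7.0   # saw the misinformation, no debunking
misinfo_plus_debunk = 4.5   # saw the misinformation, then a debunking message

misinformation_effect = misinfo_only - control_only         # 5.0 -> how much the misinformation hurt
debunking_effect      = misinfo_only - misinfo_plus_debunk  # 2.5 -> how much debunking helped
persistence_effect    = misinfo_plus_debunk - control_only  # 2.5 -> belief the debunking failed to undo

print(misinformation_effect, debunking_effect, persistence_effect)
```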

They looked at three sets of factors.

First, the study examined what happens when people encounter misinformation. They found that the more people thought of explanations for the false information, the more they would believe this misinformation later, even in the face of debunking. From a practical standpoint then, if people are receiving misinformation, we should hope they don’t think too deeply about it. Of course, this is largely out of our control as learning practitioners, because people come to us after they’ve gotten misinformation. On the other hand, it may provide hints for us as we use knowledge management or social media. The research findings suggest that we might need to intervene immediately when bad information is encountered to prevent people from elaborating on the misinformation.

Second, the meta-analysis examined whether debunking messages that included procedures to induce people to make counter-arguments to the misinformation would outperform debunking messages that did not include such procedures (or that included less potent counter-argument-inducing procedures). They found consistent benefits for these counter-argument-inducing procedures; they helped reduce misinformation. This strongly suggests that debunking should induce counter-arguments to the misinformation. And though specific mechanisms for doing this may be difficult to design, it is probably not enough to present the counter-arguments ourselves; we need to get our learners to process those counter-arguments deeply enough to produce mathemagenic (learning-producing) processing.

Third, the meta-analysis looked at whether debunking messages that included explanatory information for why the misinformation was wrong would outperform debunking messages that included just contradictory claims (for example, statements to the effect that the misinformation was wrong). They found mixed results here. Providing debunking messages with explanatory information was more effective in debunking misinformation (getting people to move from being misinformed to being less misinformed), but these more explanatory messages were actually less effective in fully ridding people of the misinformation. This was a conflicting finding, so it's not clear whether greater explanations make a difference, or how they might be designed to make a difference. One wild conjecture: perhaps where the explanations induce relevant counter-arguments to the misinformation, they will be effective.

Overall, I came away disappointed that we haven’t been able to learn more about how to debunk. This is NOT these researchers’ fault. The data is the data. Rather, the research community as a whole has to double down on debunking and persuasion and figure out what works.

People certainly change their minds on heartfelt issues. Just think about the acceptance of gays and lesbians over the last twenty years. Dramatic changes! Many people are much more open and embracing. Well, how the hell did this happen? Some people died out, but many other people’s minds were changed.

My point is that misinformation cannot possibly be a permanent condition and it behooves the world to focus resources on fixing this problem — because it’s freakin’ huge!

————

Note that a review of this research in the New York Times painted this in a more optimistic light.

————

Some additional thoughts (added one day after original post).

To do a thorough job of analyzing any research paradigm, we should, of course, go beyond meta-analyses to the original studies being meta-analyzed. Most of us don’t have time for that, so we often take the short-cut of just reading the meta-analysis or just reading research reviews, etc. This is generally okay, but there is a caveat that we might be missing something important.

One thing that struck me in reading the meta-analysis is that the authors commented on the typical experimental paradigm used in the research. It appeared that the actual experiment might have lasted 30 minutes or less, maybe 60 minutes at most. This includes reading (learning) the misinformation, completing a ten-minute distractor task, and answering a few questions (the treatment manipulations, that is, the various debunking methods, plus an assessment of participants' final beliefs). To ensure I wasn't misinterpreting the authors' message that the experiments were short, I looked at several of the studies compiled in the meta-analysis. The research I looked at used very short experimental sessions. Here is one of the treatments the experimental participants received (it includes both misinformation and a corrective, so it is one of the longer treatments):

Health Care Reform and Death Panels: Setting the Record Straight

By JONATHAN G. PRATT
Published: November 15, 2009

WASHINGTON, DC – With health care reform in full swing, politicians and citizen groups are taking a close look at the provisions in the Affordable Health Care for America Act (H.R. 3962) and the accompanying Medicare Physician Payment Reform Act (H.R. 3961).

Discussion has focused on whether Congress intends to establish “death panels” to determine whether or not seniors can get access to end-of-life medical care. Some have speculated that these panels will force the elderly and ailing into accepting minimal end-of-life care to reduce health care costs. Concerns have been raised that hospitals will be forced to withhold treatments simply because they are costly, even if they extend the life of the patient. Now talking heads and politicians are getting into the act.

Betsy McCaughey, the former Lieutenant Governor of New York State has warned that the bills contain provisions that would make it mandatory that “people in Medicare have a required counseling session that will tell them how to end their life sooner.”

Iowa Senator Chuck Grassley, the ranking Republican member of the Senate Finance Committee, chimed into the debate as well at a town-hall meeting, telling a questioner, “You have every right to fear…[You] should not have a government-run plan to decide when to pull the plug on Grandma.”

However, a close examination of the bill by non-partisan organizations reveals that the controversial proposals are not death panels at all. They are nothing more than a provision that allows Medicare to pay for voluntary counseling.

The American Medical Association and the National Hospice and Palliative Care Organization support the provision. For years, federal laws and policies have encouraged Americans to think ahead about end-of-life decisions.

The bills allow Medicare to pay doctors to provide information about living wills, pain medication, and hospice care. John Rother, executive vice president of AARP, the seniors’ lobby, repeatedly has declared the “death panel” rumors false.

The new provision is similar to a proposal in the last Congress to cover an end-of-life planning consultation. That bill was co-sponsored by three Republicans, including John Isakson, a Republican Senator from Georgia.

Speaking about the end of life provisions, Senator Isakson has said, “It’s voluntary. Every state in America has an end of life directive or durable power of attorney provision… someone said Sarah Palin’s web site had talked about the House bill having death panels on it where people would be euthanized. How someone could take an end of life directive or a living will as that is nuts.”

That’s it. That’s the experimental treatment.

Are we truly to believe that such short exposures are representative of real-world debunking? Surely not! In the real world, people who get misinformation often hold that misinformation over months or years while occasionally thinking about the misinformation again or encountering additional supportive misinformation or non-supportive information that may modify their initial beliefs in the misinformation. This all happens and then we try our debunking treatments.

Finally, it should be emphasized that the meta-analysis compiled only eight research articles, many using the same (or similar) experimental paradigm. This is a further inducement to skepticism. We should be very skeptical of these findings, and my plea above for more study of debunking — especially in more ecologically-valid situations — is reinforced!

Research Reflections — Take a Selfie Here; The Examined Life is Worth Living!


As professionals in the learning field, memory is central to our work. If we don’t help our learners preserve their memories (of what they learned), we have not really done our job. I’m oversimplifying here — sometimes we want to guide our learners toward external memory aids instead of memory. But mostly, we aim to support learning and memory.

Glacier View

You might have learned that people who take photographs remember less than people who do not. Several research studies showed this (see, for example, Henkel, 2014).

The internet buzzed with this information a few years ago:

  • The Telegraph — http://www.telegraph.co.uk/news/science/science-news/10507146/Taking-photographs-ruins-the-memory-research-finds.html
  • NPR — http://www.npr.org/2014/05/22/314592247/overexposed-camera-phones-could-be-washing-out-our-memories
  • Slate — http://www.slate.com/blogs/the_slatest/2013/12/09/a_new_study_finds_taking_photos_hurts_memory_of_the_thing_you_were_trying.html
  • CNN — http://www.cnn.com/2013/12/10/health/memory-photos-psychology/index.html
  • Fox News — http://www.foxnews.com/health/2013/12/11/taking-pictures-may-impair-memories-study-shows.html

Well, that was then. This is now.

Research Wisdom

There are CRITICAL LESSONS to be learned here — about using science intelligently… with wisdom.

Science is a self-correcting system that, with the arc of time, bends toward the truth. So, at any point in time, when we ask science for its conclusions, it tells us what it knows, while it apologizes for not knowing everything. Scientists can be wrong. Science can take wrong turns on the long road toward better understanding.

Does this mean we should reject scientific conclusions because they can't guarantee omniscience; they can't guarantee truth? I've written about this in more depth elsewhere, but I'll say it here briefly: recommendations from science are better than our own intuitions, especially in regard to learning, given all the ways we humans are blind to how learning works.

Memory With Photography

Earlier studies showed that people who photographed images were less able to remember them than people who simply examined the images. Researchers surmised that people who off-loaded their memories to an external memory aid — to the photographs — freed up memory for other things.

We can look back at this now and see that this was a time of innocence; that science had kept some confidences hidden. New research by Barasch, Diehl, Silverman, and Zauberman (2017) found that people "who could freely take photographs during an experience recognized more of what they saw" and that those "with a camera had better recognition of aspects of the scene that they photographed than of aspects they did not photograph."

Of course, this is just one set of studies… we must be patient with science. More research will be done, and you and I will benefit by knowing more than we know now, and with more confidence… but this will take time.

What is the difference between the earlier studies and this latest set of studies? As argued by Barasch, Diehl, Silverman, and Zauberman (2017), the older studies did not give people the choice of which objects to photograph. In the words of the researchers, people did not have volitional control of their photographing experience. They didn’t go through the normal process we might go through in our real-world situations, where we must decide what to photograph and determine how to photograph the objects we target (i.e., the angles, borders, focus, etc.).

In a series of four experiments, the new research showed that attention was at the center of the memory effect. Indeed, people taking photographs "recognized more of what they saw and less of what they heard, compared with those who could not take any photographs."

Interestingly, some of the same researchers had found, just the year before, that taking photographs actually improved people's enjoyment of their experiences (Diehl, Zauberman, & Barasch, 2016).

Practical Considerations for Learning Professionals

You might be asking yourself, “How should I handle the research-based recommendations I encounter?” Here is my advice:

  1. Be skeptical, but not too skeptical.
  2. Determine whether the research comes from a trusted source. Best sources are top-tier refereed scientific journals — especially where many studies find the same results. Worst sources are survey-based compilations of opinions. Beware of recommendations based on one scientific article. Beware of vendor-sponsored research. Beware of research that is not refereed; that is, not vetted by other researchers.
  3. Find yourself a trusted research translator. These people — and I count myself among them — have spent substantial time exploring the practical aspects of the research, so they are likely to have wisdom about what the research means — and what its boundary conditions might be.
  4. Pay your research translators — so they can continue doing their work.
  5. Be good and prosper. Use the research in your learning programs and test it. Do good evaluation so you can get valid feedback to make your learning initiatives maximally effective.

Inscribed in My High School Yearbook in 1976

Time it was, and what a time it was, it was
A time of innocence, A time of confidences
Long ago, it must be, I have a photograph
Preserve your memories; They’re all that’s left you

Written by Paul Simon

The Photograph Above

Taken in Glacier National Park, Montana, USA; July 1, 2017
And incidentally, the glaciers are shrinking permanently.

Research Cited

Barasch, A., Diehl, K., Silverman, J., & Zauberman, G. (2017). Photographic memory: The effects of volitional photo taking on memory for visual and auditory aspects of an experience. Psychological Science, early online publication.

Diehl, K., Zauberman, G., & Barasch, A. (2016). How taking photos increases enjoyment of experiences. Journal of Personality and Social Psychology, 111, 119–140.

Henkel, L. A. (2014). Point-and-shoot memories: The influence of taking photos on memory for a museum tour. Psychological Science, 25, 396–402.

What’s Wrong With This Picture?

I must be in a bad mood — or maybe I've been unlucky in clicking on links — but this graphic is horrifying. Indeed, it's so obviously flawed that I'm not even going to point out its most glaring problem. You decide!

One more editorial comment before the big reveal:  Why, why, why is the gloriously noble and important field of learning besieged by such crap!!!!

 

 

————

Why is the goal of a learning-focused game "fun"?

The Last Two Decades of Neuroscience Research (via fMRI) Called Into Question!


Updated July 11, 2016. An earlier version was more apocalyptic.

==============================

THIS IS HUGE. A large number of studies from the last 15 years of neuroscience research (via fMRI) could be INVALID!

A recent study in the journal PNAS looked at the three most commonly used software packages for analyzing fMRI data. Where the researchers expected a nominal familywise error rate of 5%, they found error rates of up to 70%.

Here's what the authors wrote:

“Using mass empirical analyses with task-free fMRI data, we have found that the parametric statistical methods used for group fMRI analysis with the packages SPM, FSL, and AFNI can produce FWE-corrected cluster P values that are erroneous, being spuriously low and inflating statistical significance. This calls into question the validity of countless published fMRI studies based on parametric clusterwise inference. It is important to stress that we have focused on inferences corrected for multiple comparisons in each group analysis, yet some 40% of a sample of 241 recent fMRI papers did not report correcting for multiple comparisons (26), meaning that many group results in the fMRI literature suffer even worse false-positive rates than found here (37).”

In a follow-up blog post, the authors estimated that up to 3,500 scientific studies may be affected, which is down from their initial published estimate of 40,000. The discrepancy results because only studies at the edge of statistical reliability are likely to have results that might be affected. For an easy-to-read review of their walk-back, Wired has a nice piece.

The authors also point out that there is more to worry about than those 3,500 studies. An additional 13,000 studies don’t use any statistical correction at all (so they’re not affected by the software glitch reported in the scientific paper). However, these 13,000 studies use an approach that “has familywise error rates well in excess of 50%.” (cited from the blog post)

Here’s what the authors say in their walk-back:

“So, are we saying 3,500 papers are “wrong”? It depends. Our results suggest CDT P=0.01 results have inflated P-values, but each study must be examined… if the effects are really strong, it likely doesn’t matter if the P-values are biased, and the scientific inference will remain unchanged. But if the effects are really weak, then the results might indeed be consistent with noise. And, what about those 13,000 papers with no correction, especially common in the earlier literature? No, they shouldn’t be discarded out of hand either, but a particularly jaded eye is needed for those works, especially when comparing them to new references with improved methodological standards.”

 

Some Perspective

Let's take a deep breath here. Science works slowly, and we need to see what other experts have to say in the coming months.

The authors reported that there were about 40,000 published studies in the last 15 years that might be affected. Of that total, at most 3,500 + 13,000 = 16,500 are potentially affected. That's about 41% of the published articles that could have invalid results.

But, of course, in the learning field, we don’t care about all these studies as most of them have very little to do with learning or memory. Indeed, a search of the whole history of PsycINFO (a social-science database) finds a total of 22,347 articles mentioning fMRI at all. Searching for articles that have a learning or memory aspect culls this number down to 7,056. This is a very rough accounting, but it does put the overall findings in some perspective.

As the authors warn, it's not appropriate to dismiss the validity of all the research articles, even if they're in one of the suspect groups of studies. Instead, each potentially invalidated article has to be examined individually to know whether it has problems.

Despite these comforting caveats, the findings by the scientists have implications for many neuroscience research studies over the past 15 years (when the bulk of neuroscience research has been done).

On the other hand, there truly haven’t been many neuroscience findings that have much practical relevance to the learning field as of yet. See my review for a critique of overblown claims about neuroscience and learning. Indeed, as I’ve argued elsewhere, neuroscience’s potential to aid learning professionals probably rests in the future. So, being optimistic, maybe these statistical glitches will end up being a good thing. First, perhaps they’ll propel greater scrutiny to research methodologies, improving future neuroscience research. Second, perhaps they’ll put the brakes on the myth-creating industrial complex around neuroscience until we have better data to report and utilize.

Still, a dark cloud of low credibility may settle over the whole neuroscience field itself, hampering researchers from getting funding, and making future research results difficult for practitioners to embrace. Time will tell.

 

Popular Press Articles Citing the Original Article (Published Before the Walk-Backs).

Here are some articles from the scientific press pointing out the potential danger:

  • http://arstechnica.com/science/2016/07/algorithms-used-to-study-brain-activity-may-be-exaggerating-results/
  • http://cacm.acm.org/news/204439-a-bug-in-fmri-software-could-invalidate-15-years-of-brain-research/fulltext
  • http://www.wired.co.uk/article/fmri-bug-brain-scans-results
  • http://www.zmescience.com/medicine/brain-imageflaw-57869/

==============================

Notes:

From Wikipedia (July 11, 2016): “In statistics, family-wise error rate (FWER) is the probability of making one or more false discoveries, or type I errors, among all the hypotheses when performing multiple hypotheses tests.”
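To get an intuition for why uncorrected multiple comparisons are so dangerous, here is a back-of-the-envelope calculation that assumes independent tests at alpha = 0.05. Real fMRI cluster statistics are far more complicated (and voxel-level tests are correlated), so treat this only as a rough illustration of how quickly the familywise error rate balloons.

```python
# Familywise error rate (FWER) for m independent tests at alpha = 0.05:
# FWER = 1 - (1 - alpha)^m. This is a simplification; actual fMRI analyses
# involve thousands of correlated tests and specialized corrections.
alpha = 0.05
for m in (1, 10, 100, 1000):
    fwer = 1 - (1 - alpha) ** m
    print(f"{m:4d} tests -> FWER = {fwer:.1%}")
# Output: 1 test -> 5.0%, 10 -> 40.1%, 100 -> 99.4%, 1000 -> ~100%
```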

Join The Debunker Club in Gathering Intel on Neuroscience-and-Learning


Neuroscience and Learning

The Debunker Club, formed to fight myths and misconceptions in the learning field, is currently seeking public comment on the possibility that so-called neuroscience-based recommendations for learning and education are premature, untenable, or invalid.

 

Click here to comment or review the public comments made so far…

 

Click here to join The Debunker Club…

 

People! Don’t Remember 10%, 20%, or 30% of this! Remember it all!

A year and a half ago, three esteemed researchers (and I) published a series of articles debunking the meme that people remember 10% of what they read, 20% of what they hear, 30% of what they see, etc…

Here was my review of those research articles.

Unfortunately, until now, the articles themselves were not available online. To subscribe to the originating journal, Educational Technology, click here.

Now, we the authors are able to share a copy with you.

 

Click here to get a copy of the four articles…

Brain Based Learning and Neuroscience – What the Research Says!


The world of learning and development is on the cusp of change. One of the most promising—and prominent—paradigms comes from neuroscience. Go to any conference today in the workplace learning field and there are numerous sessions on neuroscience and brain-based learning. Vendors sing praises to neuroscience. Articles abound. Blog posts proliferate.

But where are we on the science? Have we gone too far? Is this us, the field of workplace learning, once again speeding headlong into a field of fad and fantasy? Or are we spot-on to see incredible promise in bringing neuroscience wisdom to bear on learning practice? In this article, I will describe where we are with neuroscience and learning—answering that question as it relates to this point in time—in January of 2016.

What We Believe

I've started doing a session in conferences and in local trade-association meetings I call The Learning Research Quiz Show. It's a blast! I ask a series of questions and get audience members to vote on the answer choices. After each question, I briefly state the correct answer and cite research from top-tier scientific journals. Sometimes I hand out candy to those who are all alone in getting an answer correct, or all alone in being incorrect. It's a ton of fun! On the other hand, there's often discomfort in the room to go with the sweet morsels. Some people's eyes go wide and some people get troubled when their favorite learning approach gets deep-sixed.

The quiz show is a great way to convey a ton of important information, but audience responses are intriguing in and of themselves. The answers people give tell us about their thinking—and, by extension, when compiled over many audiences, people’s answers hint at the current thinking within the learning profession. Let me give you an example related to the topic of brain science.

Overwhelmingly, people in my audiences answer: “C. Research on brain-based learning and neuroscience.” In the workplace learning field, at this point in time, we are sold on neuroscience.

 

What do the Experts Say?

As you might expect, neuroscientists are generally optimistic about neuroscience. But when it comes to how neuroscience might help learning and education, scientists are more circumspect.

Noted author and neuroscientist John Medina, who happens to be a lovely gentleman as well, has said the following as recently as June 2015:

  • “I don’t think brain science has anything to say for business practice.”
  • “We still don’t really know how the brain works.”
  • “The state of our knowledge [of the brain] is childlike.”

Dan Willingham, noted research psychologist, has been writing for many years about the poor track record of bringing neuroscience findings to learning practice.

In 2012 he wrote an article entitled: “Neuroscience Applied to Education: Mostly Unimpressive.” On the other hand, in 2014 he wrote a blog post where he said, “I’ve often written that it’s hard to bring neuroscientific data to bear on issues in education… Hard, but not impossible.” He then went on to discuss how a reading-disability issue related to deficits in the brain’s magnocellular system was informed by neuroscience.

In a 2015 scientific article in the journal Learning, Media and Technology, Harvard researchers Daniel Busso and Courtney Pollack reviewed the research on neuroscience and education and came to these conclusions:

  • “There is little doubt that our knowledge of the developing brain is poised to make important contributions to the lives of parents, educators and policymakers…”
  • “Some have voiced concerns about the viability of educational neuroscience, suggesting that neuroscience can inform education only indirectly…”
  • “Others insist that neuroscience is only one small component of a multi-pronged research strategy to address educational challenges, rather than a panacea…”

Taken together, these conclusions are balanced between the promise of neuroscience and the healthy skepticism of scientists. Note, however, that when these researchers talk about the benefits of neuroscience for learning, they see neuroscience applications as happening in the future (perhaps the near future). They do NOT claim that neuroscience has already created a body of knowledge that is applicable to learning and education.

Stanford University researchers Dan Schwartz, Kristen Blair, and Jessica Tsang wrote in 2012 that the most common approach in educational neuroscience tends “to focus on the tails of the distribution; namely, children (and adults) with clinical problems or exceptional abilities.” This work is generally not relevant to workplace learning professionals—as we tend to be more interested in learners with normal cognitive functioning.

Researchers Pedro De Bruyckere, Paul A. Kirschner, and Casper D. Hulshof in their book, Urban Myths about Learning and Education, concluded the following:

“In practice, at the moment it is only the insights of cognitive psychology [not neuropsychology] that can be effectively used in education, but even here care needs to be taken. Neurology has the potential to add value to education, but in general there are only two real conclusions we can make at present:

– For the time being, we do not really understand all that much about the brain.
– More importantly, it is difficult to generalize what we do know into a set of concrete precepts of behavior, never mind devise methods for influencing that behavior.”

The bottom line is that neuroscience does NOT, as of yet, have much guidance to provide for learning design in the workplace learning field. This may change in the future, but as of today, we cannot and should not rely on neuroscience claims to guide our learning designs!

 

Are We Drinking the Snake Oil?

Yes, many of us in the workplace learning field have already swallowed the neuroscience elixir. Some of us have gone further, washing down the snake oil with brain-science Kool-Aid—having become gullible adherents to the cult of neuroscience.

My Learning Research Quiz Show is just one piece of evidence of the pied-piper proliferation of brain-science messages. Conferences in the workplace learning field often have keynotes on neuroscience. Many have education sessions that focus on brain science. Articles, blog posts, and infographics balloon with neuroscience recommendations.

Here are some claims that have been made in the workplace learning field within the past few years:

  • “If you want people to learn, retain, and ultimately transfer knowledge to the workplace, it is essential that you understand the ergonomics of the brain.”
  • “The brain is our primary tool for learning. It’s seat of thought, memory, consciousness and emotion. So it only makes sense to match your eLearning design with how the learner’s brain functions.”
  • “Neuroscience changes everything. Neuroscience is exposing more and more about how our brains work. I find it fascinating, and exciting, because most of the theories our industry follows are based on the softer behavioral sciences. We now have researchers in the hard sciences uncovering the wonders of our neuroanatomy.”
  • “Neuroscience Facts You Need to Know: Human attention span – 8.25 seconds. Goldfish attention span – 9 seconds… Based on these facts (and a few others)… you can see why 25% of L&D professionals are integrating neuroscience.”

All of these claims are from vendors trying to get your business—and all of these claims were found near the top of a Google search. Fortunately for you, you’re probably not one of those who is susceptible to such hysterics.

Or are you?

Interestingly, researchers have actually done research on whether people are susceptible to claims based on neuroscience. In 2008, two separate studies showed how neuroscience information could influence people’s perceptions and decision making. McCabe and Castel (2008) found that adding neuroscience images to articles prompted readers to rate the scientific reasoning in those articles more highly than if a bar chart was added or if there was no image added. Weisberg, Keil, Goodstein, Rawson, and Gray (2008) found that adding extraneous neuroscience information to poorly-constructed explanations prompted novices and college students (in a neuroscience class) to rate the explanations as more satisfying than if there was no neuroscience information.

Over the years, the finding that neuroscience images lend credibility to learning materials has been called into question numerous times (Farah & Hook, 2013; Hook & Farah, 2013; Michael, Newman, Vuorre, Cumming, & Garry, 2013; Schweitzer, Baker, & Risko, 2013).

On the other hand, the finding that neuroscience information—in a written form—lends credibility has been supported many times (e.g., Rhodes, Rodriguez, & Shah, 2014; Weisberg, Taylor, & Hopkins, 2015; Fernandez-Duque, Evans, Christian, & Hodges, 2015).

As Busso and Pollack (2015) have concluded:

“Several highly cited studies have shown that superfluous neuroscience information may bias the judgement of non-experts…. However, the idea that neuroscience is uniquely persuasive has been met with little empirical support….”

Based on the research to date, it would appear that we as learning professionals are not likely to be influenced by extraneous neuroscience images, but we are likely to be influenced by neuroscience information—or any information that appears to be scientific. When extraneous neuroscience info is added to written materials, we are more likely to find those materials credible than if no neuroscience information had been added.

 

If the Snake Oil Tastes Good, Does it Matter in Practice?

If we learning professionals are subject to the same human tendencies as our fellow citizens, we’re likely to be susceptible to neuroscience information embedded in persuasive messages. The question then becomes, does this matter in practice? If neuroscience claims influence us, is this beneficial, benign, or dangerous?

Here are some recent quotes from researchers:

  • “Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people’s abilities to critically consider the underlying logic of this explanation.” (Weisberg, Keil, Goodstein, Rawson, & Gray, 2008).
  • “Given the popularity of neuroimaging and the attention it receives in the press, it is important to understand how people are weighting this evidence and how it may or may not affect people’s decisions. While the effect of neuroscience is small in cases of subjective evaluations, its effect on the mechanistic understanding of a phenomenon is compelling.” (Rhodes, Rodriguez, & Shah, 2014)
  • “Since some individuals may use the presence of neuroscience information as a marker of a good explanation…it is imperative to find ways to increase general awareness of the proper role for neuroscience information in explanations of psychological phenomena.” (Weisberg, Taylor, & Hopkins, 2015)
  • “For several decades, myths about the brain — neuromyths — have persisted in schools and colleges, often being used to justify ineffective approaches to teaching. Many of these myths are biased distortions of scientific fact. Cultural conditions, such as differences in terminology and language, have contributed to a ‘gap’ between neuroscience and education that has shielded these distortions from scrutiny.” (Howard-Jones, P. A., 2014).
  • "Powerful, often self-interested, commercial forces serve as mediators between research and practice, and this raises some pressing questions for future work in the field: what does responsible [research-to practice] translation look like?" (Busso and Pollack, 2015).

As these quotations make clear, researchers are concerned that neuroscience claims may push us to make poor learning-design decisions. And, they’re worried that unscrupulous people and enterprises may take advantage—and push poor learning approaches on the unsuspecting.

But is this concern warranted? Is there evidence that neuroscience claims are false, misleading, or irrelevant?

Yes! Neuroscience and brain-science claims are almost always deceptive in one way or another. Here’s a short list of issues:

  • Selling neuroscience and brain science as a panacea.
  • Selling neuroscience and brain science as proven and effective for learning.
  • Portraying standard learning research as neuroscience.
  • Cognitive psychologists portraying themselves as neuroscientists.
  • Portraying neuroscience as having already developed a long list of learning recommendations.
  • Portraying one’s products and/or services as based on neuroscience or brain-science.
  • Portraying personality diagnostics as based on neuroscience.
  • Portraying questionnaire data as diagnostic of neurophysiological functioning.

These neuroscience-for-learning deceptions lead to substantial problems:

  1. They push us away from more potent methods for learning design—methods that are actually proven by substantial scientific research.
  2. They make us believe that we are being effective, lessening our efforts to improve our learning interventions. This is an especially harmful problem in the learning field since rarely are we getting good feedback on our actual successes and failures.
  3. They encourage us to follow the recommendations of charlatans, increasing the likelihood that we are getting bad advice.
  4. They drive us to utilize “neurosciencey” diagnostics that are ineffective and unreliable.
  5. They enable vendors to provide us with poor learning designs—partly due to their own blind spots and partly due to intentional deceptions.

Here is a real-life example:

Over the past several years, a person with a cognitive psychology background has portrayed himself as a neuroscientist (which he is NOT). He has become very popular as a conference speaker—and offers his company’s product as the embodiment of neuropsychology principles. Unfortunately, the principles embodied in his product are NOT from neuroscience, but are from standard learning research. More importantly, the learning designs actually implemented with his product (even when designed by his own company) are ineffective and harmful—because they don’t take into account several other findings from the learning research.

Here is an example of one of the interactions from his company’s product:

This is very poor instructional design. It focuses on trivial information that is NOT related to the main learning points. Anybody who knows the learning research—even a little bit—should know that focusing on trivial information is (a) a waste of our learners’ limited attention, (b) a distraction away from the main points, and (c) potentially harmful in encouraging learners to process future learning material in a manner that guides their attention to details and away from more important ideas.

This is just one example of many that I might have used. Unfortunately, we in the learning field are seeing more and more misapplications of neuroscience.

 

Falsely Calling Learning Research Neuroscience

The biggest misappropriation of neuroscience in workplace learning is found in how vendors are relabeling standard learning research as neuroscience. The following graphic is a perfect example.

 

I’ve grayed out the detailed verbiage in the image above to avoid implicating the company who put this forward. My goal is not to finger one vendor, but to elucidate the broader problem. Indeed, this is just one example of hundreds that are easily available in our field.

Note how the vendor talks about brain science but then points to two research findings that were elucidated NOT by neuroscience, but by standard learning research. Both the spacing effect and the retrieval-practice effect have long been known – certainly since before neuroscience became widely researched.

Here is another example, also claiming that the spacing effect is a neuroscience finding:

Again, I’m not here to skewer the purveyors of these examples, although I do shake my head in dismay when they are portrayed as neuroscience findings. In general, they are not based on neuroscience, they are based on behavioral and cognitive research.

Below is a timeline that demonstrates that neuroscience was NOT the source for the findings related to the spacing effect or retrieval practice.

You'll notice in the diagram that one of the key tools used by neuroscientists to study the intersection between learning and the brain wasn't even widely utilized until the early 2000s, whereas the research on retrieval practice and spacing was firmly established prior to 1990.

 

Conclusion

The field of workplace learning—and the wider education field—has fallen under the spell of neuroscience (aka brain-science) recommendations. Unfortunately, neuroscience has not yet created a body of proven recommendations. While it offers great promise for the future, as of this writing—in January 2016—most learning professionals would be better off relying on proven learning recommendations from sources like Brown, Roediger, and McDaniel's book Make It Stick; Benedict Carey's book How We Learn; and Julie Dirksen's book Design for How People Learn.

As learning professionals, we must be more skeptical of neuroscience claims. As research and real-world experience have shown, such claims can persuade us toward ineffective learning designs and unscrupulous vendors and consultants.

Our trade associations and industry thought leaders need to take a stand as well. Instead of promoting neuroscience claims, they ought to voice a healthy skepticism.

 

Post Script

This article took a substantial amount of time to research and write. It has been provided for free as a public service. If you’d like to support the author, please consider hiring him as a consultant or speaker. Dr. Will Thalheimer is available at info@worklearning.com and at 617-718-0767.

 


 

Research Citations

Bjork, R. A. (1988). Retrieval practice and the maintenance of knowledge. In M. M. Gruneberg, P. E. Morris, & R. N. Sykes (Eds.), Practical aspects of memory: Current research and issues, Vol. 1. Memory in everyday life (pp. 396-401). Oxford, England: John Wiley.

Bruce, D., & Bahrick, H. P. (1992). Perceptions of past research. American Psychologist, 47(2), 319-328.

Busso, D. S., & Pollack, C. (2015). No brain left behind: Consequences of neuroscience discourse for education. Learning, Media and Technology, 40(2), 168-186.

Farah, M. J., & Hook, C. J. (2013). The seductive allure of “seductive allure”. Perspectives on Psychological Science, 8(1), 88-90. http://dx.doi.org/10.1177/1745691612469035

Fernandez-Duque, D., Evans, J., Christian, C., & Hodges, S. D. (2015). Superfluous neuroscience information makes explanations of psychological phenomena more appealing. Journal of Cognitive Neuroscience, 27(5), 926-944. http://dx.doi.org/10.1162/jocn_a_00750

Gordon, K. (1925). Class results with spaced and unspaced memorizing. Journal of Experimental Psychology, 8, 337-343.

Gotz, A., & Jacoby, L. L. (1974). Encoding and retrieval processes in long-term retention. Journal of Experimental Psychology, 102(2), 291-297.

Hook, C. J., & Farah, M. J. (2013). Look again: Effects of brain images and mind–brain dualism on lay evaluations of research. Journal of Cognitive Neuroscience, 25(9), 1397-1405. http://dx.doi.org/10.1162/jocn_a_00407

Howard-Jones, P.A. (2014). Neuroscience and education: myths and messages. Nature Reviews Neuroscience, 15, 817-824. Available at: http://www.nature.com/nrn/journal/v15/n12/full/nrn3817.html.

Jones, H. E. (1923-1924). Experimental studies of college teaching: The effect of examination on permanence of learning. Archives of Psychology, 10, 1-70.

Michael, R. B., Newman, E. J., Vuorre, M., Cumming, G., & Garry, M. (2013). On the (non)persuasive power of a brain image. Psychonomic Bulletin & Review, 20(4), 720-725.

Rhodes, R. E., Rodriguez, F., & Shah, P. (2014). Explaining the alluring influence of neuroscience information on scientific reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(5), 1432-1440. http://dx.doi.org/10.1037/a0036844

Ruch, T. C. (1928). Factors influencing the relative economy of massed and distributed practice in learning. Psychological Review, 35, 19-45.

Schweitzer, N. J., Baker, D. A., & Risko, E. F. (2013). Fooled by the brain: Re-examining the influence of neuroimages. Cognition, 129(3), 501-511. http://dx.doi.org/10.1016/j.cognition.2013.08.009

Weisberg, D. S., Taylor, J. C. V., & Hopkins, E. J. (2015). Deconstructing the seductive allure of neuroscience explanations. Judgment and Decision Making, 10(5), 429-441.

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470-477.

Zhao, X., Wang, C., Liu, Q., Xiao, X., Jiang, T., Chen, C., & Xue, G. (2015). Neural mechanisms of the spacing effect in episodic memory: A parallel EEG and fMRI study. Cortex: A Journal Devoted to the Study of the Nervous System and Behavior, 69, 76-92. http://dx.doi.org/10.1016/j.cortex.2015.04.002

The Two-World Theory of Workplace Learning — Critiqued!


Today, industry luminary and social-media advocate Jane Hart wrote an incendiary blog post claiming that “the world of L&D [Learning and Development] is splitting in two.” According to Jane there are good guys and bad guys.

The bad guys are the “Traditionalists.” Here is some of what Jane says about them:

  • "They cling onto 20th century views of Training & Development."
  • "They believe they know what is best for their people."
  • "They disregard the fact that most people are bored to tears sitting in a classroom or studying an e-learning course at their desktop."
  • "They miss the big picture – the fact that learning is much more than courses, but involves continuously acquiring new knowledge and skills as part of everyday work."
  • "They don't understand that the world has changed."

Fighting words? Yes! Insulting words? Yes! Painting with too broad a brush? Yes! Maybe just to make a point? Probably!

Still, Jane’s message is clear. Traditionalists are incompetent fools who must be eradicated because of the evil they are doing.

Fortunately, galloping in on white horses we have “Modern Workplace Learning (MWL) practitioners.” These enlightened souls are doing the following, according to Jane:

  • “They are rejecting the creation of expensive, sophisticated e-learning content and preferring to build short, flexible, modern resources (where required) that people can access when they need them. AND they are also encouraging social content (or employee-generated content) – particularly social video – because they know that people know best what works for them.”
  • "They are ditching their LMS (or perhaps just hanging on to it to manage some regulatory training) – because they recognise it is a white elephant – and it doesn't help them understand the only valid indicator of learning success, how performance has changed and improved."
  • "They are moving to a performance-driven world – helping groups find their own solutions to problems – ones that they really need, will value, and actually use, and recognise that these solutions are often ones they organise and manage themselves."
  • "They are working with managers to help them develop their people on the ground – and see the success of these initiatives in terms of impact on job performance."
  • "They are helping individuals take responsibility for their own learning and personal development – so that they continuously grow and improve, and hence become valuable employees in the workplace."
  • "They are supporting teams as they work together using enterprise social platforms – in order to underpin the natural sharing within the group, and improve team learning."

Points of Agreement

I agree with Jane in a number of ways. Many of the practices we use in workplace learning are ineffective.

Here are some points of agreement:

  1. Too much of our training is ineffective!
  2. Too often training and/or elearning are seen as the only answer!
  3. Too often we don't think of how we, as learning professionals, can leverage on-the-job learning.
  4. Too often we default to solutions that try to support performance primarily by helping people learn — when performance assistance would be preferable.
  5. Too often we believe that we have to promote approved organizational knowledge, when we might be better off letting our fellow workers develop and share their own knowledge.
  6. Too often we don’t utilize new technologies in an effort to provide more effective learning experiences.
  7. Too often we don’t leverage managers to support on-the-job learning.
  8. Too often we don’t focus on how to improve performance.

Impassioned Disagreement

As someone who has shared the stage with Jane in the past, and who knows that she's an incredibly lovely person, I doubt that she means to cast aspersions on a whole cohort of dedicated learning-and-performance professionals.

Where I get knocked off my saddle is with the oversimplifications encouraged in the long-running debate between the traditionalist black hats and the informal-learning-through-social-media white hats! Pitting these groups against each other is beside the point!

I remember not too long ago when it was claimed that “training is dead,” that “training departments will disappear,” that “all learning is social,” that “social-media is the answer,” etc…

What is often forgotten is that the only thing that really matters is the human cognitive architecture. If our learning events and workplace situations don’t align with that architecture, learning will suffer.

Oversimplifications that Hurt the Learning Field

  1. Learners know how they learn best so we should let them figure it out.
    Learners, as research shows, often do NOT know how they learn best, so it may be counterproductive not to figure out ways to support them in learning.
  2. Learning can be shortened because all learners need to do is look it up.
    Sometimes learners have a known learning need that can be solved with a quick burst of information. BUT NOT ALL LEARNING is like this! Much of learning requires a deeper, longer experience. Much of learning requires more practice, more practical experience, etc. Because of these needs, much of learning requires support from honest-to-goodness learning professionals.
  3. All training and elearning is boring!
    Really? This is obviously NOT true, even if much of it could be lots better.
  4. That people can always be trusted to create their own content!
    This is sometimes true and sometimes not. Indeed, sometimes people get stuff wrong (sometimes dangerously wrong). Sometimes experts actually have expertise that we normal people don't have.
  5. That using some sort of enterprise social platform is always effective, or is always more effective, or is easy to use to create successful learning.
    Really? Haven't you heard more than one or two horror stories — or failed efforts? Wikis that weren't populated. Blogs that fizzled. SharePoint sites that were isolated from users who could use the information. Forums where less than 1% of folks are involved. Et cetera… And let's not forget, these social-learning platforms tend to be much better at just-in-time learning than at long-term, deeper learning (not totally, but usually).
  6. That on-the-job learning is easy to leverage.
    Let’s face it, formal training is MUCH EASIER to leverage than on-the-job learning. On-the-job learning is messy and hard to reach. It’s also hard to understand all the forces involved in on-the-job learning. And what’s ironic is that there is already a group that is in a position to influence on-the-job learning. The technical term is “managers.”
  7. Crowds of people always have more wisdom than single individuals.
    This may be one of the stupidest memes floating around our field right now. Sounds sexy. Sounds right. But not when you look into the world around us. I might suggest recent presidential candidate debates here in the United States as evidence. Clearly, the smartest ideas don’t always rise to prominence!
  8. Traditional learning professionals have nothing of value to offer.
    Since I’m on the front lines in stating that our field is under-professionalized, I probably am the last one who should be critiquing this critique, but it strikes me as a gross simplification — if not grossly unfair. Human learning is exponentially more complex than rocket science, so none of us have a monopoly on learning wisdom. I’m a big proponent of research-based and evidence-based practice, and yet neither research nor other forms of evidence are always omniscient. Almost every time I teach, talk to clients, read a book, read a research article, or read the newspaper, I learn more about learning. I’ve learned a ton from traditional learning professionals. I’ve also learned a ton from social-learning advocates.

 

Summary

In today’s world, there are simply too many echo-chambers — places which are comfortable, which reinforce our preconceptions, which encourage us to demonize and close off avenues to our own improvement.

We in the learning field need to leave echo-chambers to our political brethren where they will do less damage (Ha!). We have to test our assumptions, utilize the research, and develop effective evaluation tools to really test the success of our learning interventions. We have to be open, but not too easily hoodwinked by claims and shared perceptions.

Hail to the traditionalists and the social-learning evangelists!

 

Follow-up!

Clark Quinn wrote an excellent blog post to reconcile the visions promoted by Jane and Will.

 

Share!

If you want to share this discussion with others, here are the links:

  • Jane’s Provocative Blog Post:
    • http://www.c4lpt.co.uk/blog/2015/11/12/the-ld-world-is-splitting-in-two/
  • Will’s Spirited Critique:
    • http://www.willatworklearning.com/2015/11/the-two-world-theory-of-workplace-learning-critiqued.html
  • Clark’s Reconciliation:
    • http://blog.learnlets.com/?p=4655#comment-821615