Updated on March 29, 2018.
Originally posted on January 5, 2016.
=====================

The world of learning and development is on the cusp of change. One of the most promising—and prominent—paradigms comes from neuroscience. Go to any conference today in the workplace learning field and there are numerous sessions on neuroscience and brain-based learning. Vendors sing the praises of neuroscience. Articles abound. Blog posts proliferate.

But where are we on the science? Have we gone too far? Is this us, the field of workplace learning, once again speeding headlong into fad and fantasy? Or are we spot-on to see incredible promise in bringing neuroscience wisdom to bear on learning practice? In this article, I will describe where we are with neuroscience and learning—answering that question as it relates to this point in time—in March of 2018.

What We Believe

I’ve started doing a session at conferences and local trade-association meetings that I call The Learning Research Quiz Show. It’s a blast! I ask a series of questions and get audience members to vote on the answer choices. After each question, I briefly state the correct answer and cite research from top-tier scientific journals. Sometimes I hand out candy to those who are all alone in getting an answer correct, or all alone in being incorrect. It’s a ton of fun! On the other hand, there’s often discomfort in the room to go with the sweet morsels. Some people’s eyes go wide, and some get troubled when a favorite learning approach gets deep-sixed.

The quiz show is a great way to convey a ton of important information, but audience responses are intriguing in and of themselves. The answers people give tell us about their thinking—and, by extension, when compiled over many audiences, people’s answers hint at the current thinking within the learning profession. Let me give you an example related to the topic of brain science.

Overwhelmingly, people in my audiences answer: “C. Research on brain-based learning and neuroscience.” In the workplace learning field, at this point in time, we are sold on neuroscience.

 

What Do the Experts Say?

As you might expect, neuroscientists are generally optimistic about neuroscience. But when it comes to how neuroscience might help learning and education, scientists are more circumspect.

Noted author and neuroscientist John Medina, who happens to be a lovely gentleman as well, has said the following as recently as January 2018. I originally saw him say these things in June 2015:

  • “I don’t think brain science has anything to say for business practice.”
  • “We still don’t really know how the brain works.”
  • “The state of our knowledge [of the brain] is childlike.”

Dan Willingham, noted research psychologist, has been writing for many years about the poor track record of bringing neuroscience findings to learning practice.

In 2012 he wrote an article entitled “Neuroscience Applied to Education: Mostly Unimpressive.” On the other hand, in 2014 he wrote a blog post where he said, “I’ve often written that it’s hard to bring neuroscientific data to bear on issues in education… Hard, but not impossible.” He then went on to discuss how a reading-disability issue related to deficits in the brain’s magnocellular system was informed by neuroscience.

In a 2015 scientific article in the journal Learning, Media and Technology, Harvard researchers Daniel Busso and Courtney Pollack reviewed the research on neuroscience and education and came to these conclusions:

  • “There is little doubt that our knowledge of the developing brain is poised to make important contributions to the lives of parents, educators and policymakers…”
  • “Some have voiced concerns about the viability of educational neuroscience, suggesting that neuroscience can inform education only indirectly…”
  • “Others insist that neuroscience is only one small component of a multi-pronged research strategy to address educational challenges, rather than a panacea…”

In a 2016 article in the world-renowned journal Psychological Review, neuroscientist and cognitive psychologist Jeffrey Bowers concluded the following: “There are no examples of novel and useful suggestions for teaching based on neuroscience thus far.” Critiquing Bowers’s conclusions, neuroscientists Paul Howard-Jones, Sashank Varma, Daniel Ansari, Brian Butterworth, Bert De Smedt, Usha Goswami, Diana Laurillard, and Michael S. C. Thomas wrote that “Behavioral and neural data can inform our understanding of learning and so, in turn, [inform] choices in educational practice and the design of educational contexts…” and “Educational Neuroscience does not espouse a direct link from neural measurement to classroom practice.” Neuroscientist John Gabrieli added: “Educational neuroscience may be especially pertinent for the many children with brain differences that make educational progress difficult in the standard curriculum…” and “It is less clear at present how educational neuroscience would translate for more typical students, with perhaps a contribution toward individualized learning.” In 2017, Gabrieli gave a keynote on how neuroscience is not ready for education.

Taken together, these conclusions are balanced between the promise of neuroscience and the healthy skepticism of scientists. Note, however, that when these researchers talk about the benefits of neuroscience for learning, they see neuroscience applications as happening in the future (perhaps the near future), and augmenting traditional sources of research knowledge (those not based in neuroscience). They do NOT claim that neuroscience has already created a body of knowledge that is applicable to learning and education.

Stanford University researchers Dan Schwartz, Kristen Blair, and Jessica Tsang wrote in 2012 that the most common approach in educational neuroscience tends “to focus on the tails of the distribution; namely, children (and adults) with clinical problems or exceptional abilities.” This work is generally not relevant to workplace learning professionals—as we tend to be more interested in learners with normal cognitive functioning.

Researchers Pedro De Bruyckere, Paul A. Kirschner, and Casper D. Hulshof in their book, Urban Myths about Learning and Education, concluded the following:

“In practice, at the moment it is only the insights of cognitive psychology [not neuropsychology] that can be effectively used in education, but even here care needs to be taken. Neurology has the potential to add value to education, but in general there are only two real conclusions we can make at present:

– For the time being, we do not really understand all that much about the brain.
– More importantly, it is difficult to generalize what we do know into a set of concrete precepts of behavior, never mind devise methods for influencing that behavior.”

The bottom line is that neuroscience does NOT, as of yet, have much guidance to provide for learning design in the workplace learning field. This may change in the future, but as of today, we cannot and should not rely on neuroscience claims to guide our learning designs!

 

Neuroscience Research Flawed

In 2016, researchers found a significant flaw in the software used in a large percentage of neuroscience research, calling a large body of neuroscience findings into question (Eklund, Nichols, & Knutsson, 2016). Even as recently as February of 2018, it wasn’t clear whether neuroscience data was being properly processed (Han & Park, 2018).

Neuroscience research relies on imaging techniques like fMRI, PET, SPECT, and EEG. Functional Magnetic Resonance Imaging (fMRI) is by far the most common method. Basically, fMRI is like taking a series of photos of brain activity by looking at blood flow. Because there tends to be “noise” in these images—that is, false signals—software is used to ensure that brain activity is really in evidence where the signals say there is activity. Unfortunately, the software used before 2016 to differentiate between signal and noise was severely flawed, causing up to 70% false positives when 5% was expected (Eklund, Nichols, & Knutsson, 2016). As Wired Magazine wrote in a headline, “Bug in fMRI software calls 15 years of research into question.” Furthermore, it’s still not clear that corrective measures are being properly utilized (Han & Park, 2018).
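
To make the statistical problem concrete, here is a minimal sketch (my own illustration, not the Eklund, Nichols, and Knutsson analysis itself, which concerned cluster-wise inference in actual fMRI packages). It simulates many noise-only “voxel” tests and shows how often a study would report at least one spurious activation if no correction for multiple comparisons were applied:

```python
import numpy as np

# Illustrative sketch only: when many noise-only signals are each tested
# at the 5% level without correction, the chance that a study reports at
# least one false "activation" soars far above 5%.
rng = np.random.default_rng(42)

n_studies = 1000    # simulated studies
n_tests = 100       # independent noise-only tests per study (think: voxels)
n_subjects = 20
t_critical = 2.093  # two-sided critical value for alpha = .05, df = 19

studies_with_false_alarm = 0
for _ in range(n_studies):
    # Pure noise: no real brain activity anywhere.
    data = rng.normal(size=(n_tests, n_subjects))
    # One-sample t-statistic per test against a true mean of zero.
    t = data.mean(axis=1) / (data.std(axis=1, ddof=1) / np.sqrt(n_subjects))
    if np.any(np.abs(t) > t_critical):
        studies_with_false_alarm += 1

print(f"Studies reporting at least one false positive: "
      f"{100 * studies_with_false_alarm / n_studies:.0f}%")  # roughly 99%
```

With 100 independent tests each run at the 5% level, the chance of at least one false alarm is 1 minus 0.95 to the 100th power, or about 99%. The bug Eklund and colleagues uncovered was a subtler failure in the correction methods themselves, but the underlying lesson is the same: noisy images plus lenient thresholds yield spurious activations.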

The problems with neuroscience imaging were most provocatively illustrated in a 2010 article in the Journal of Serendipitous and Unexpected Results that showed fMRI brain activation in a dead salmon—where none would be expected (obviously). This article was reviewed in a 2012 post on Scientific American.

 

Are We Drinking the Snake Oil?

Yes, many of us in the workplace learning field have already swallowed the neuroscience elixir. Some of us have gone further, washing down the snake oil with brain-science Kool-Aid—having become gullible adherents to the cult of neuroscience.

My Learning Research Quiz Show is just one piece of evidence of the pied-piper proliferation of brain-science messages. Conferences in the workplace learning field often have keynotes on neuroscience. Many have education sessions that focus on brain science. Articles, blog posts, and infographics balloon with neuroscience recommendations.

Here are some claims that have been made in the workplace learning field within the past few years:

  • “If you want people to learn, retain, and ultimately transfer knowledge to the workplace, it is essential that you understand the ergonomics of the brain.”
  • “The brain is our primary tool for learning. It’s seat of thought, memory, consciousness and emotion. So it only makes sense to match your eLearning design with how the learner’s brain functions.”
  • “Neuroscience changes everything. Neuroscience is exposing more and more about how our brains work. I find it fascinating, and exciting, because most of the theories our industry follows are based on the softer behavioral sciences. We now have researchers in the hard sciences uncovering the wonders of our neuroanatomy.”
  • “Neuroscience Facts You Need to Know: Human attention span – 8.25 seconds. Goldfish attention span – 9 seconds… Based on these facts (and a few others)… you can see why 25% of L&D professionals are integrating neuroscience.”

All of these claims are from vendors trying to get your business—and all of these claims were found near the top of a Google search. Fortunately for you, you’re probably not one of those who is susceptible to such hysterics.

Or are you?

Interestingly, researchers have actually studied whether people are susceptible to claims based on neuroscience. In 2008, two separate studies showed how neuroscience information could influence people’s perceptions and decision making. McCabe and Castel (2008) found that adding neuroscience images to articles prompted readers to rate the scientific reasoning in those articles more highly than if a bar chart was added or if there was no image added. Weisberg, Keil, Goodstein, Rawson, and Gray (2008) found that adding extraneous neuroscience information to poorly constructed explanations prompted novices and college students (in a neuroscience class) to rate the explanations as more satisfying than if there was no neuroscience information.

Over the years, the finding that neuroscience images lend credibility to learning materials has been called into question numerous times (Farah & Hook, 2013; Hook & Farah, 2013; Michael, Newman, Vuorre, Cumming, & Garry, 2013; Schweitzer, Baker, & Risko, 2013).

On the other hand, the finding that neuroscience information—in a written form—lends credibility has been supported many times (e.g., Rhodes, Rodriguez, & Shah, 2014; Weisberg, Taylor, & Hopkins, 2015; Fernandez-Duque, Evans, Christian, & Hodges, 2015).

In 2017, a research study found that adding both irrelevant neuroscience information and irrelevant brain images pushed learners to rate learning material as having more credibility (Im, Varma, & Varma, 2017).

As Busso and Pollack (2015) have concluded:

“Several highly cited studies have shown that superfluous neuroscience information may bias the judgement of non-experts…. However, the idea that neuroscience is uniquely persuasive has been met with little empirical support….”

Based on the research to date, it would appear that we as learning professionals are not likely to be influenced by extraneous neuroscience images on their own, but we are likely to be influenced by neuroscience information—or any information that appears to be scientific. When extraneous neuroscience info is added to written materials, we are more likely to find those materials credible than if no neuroscience information had been added.

 

If the Snake Oil Tastes Good, Does It Matter in Practice?

If we learning professionals are subject to the same human tendencies as our fellow citizens, we’re likely to be susceptible to neuroscience information embedded in persuasive messages. The question then becomes, does this matter in practice? If neuroscience claims influence us, is this beneficial, benign, or dangerous?

Here are some recent quotes from researchers:

  • “Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people’s abilities to critically consider the underlying logic of this explanation.” (Weisberg, Keil, Goodstein, Rawson, & Gray, 2008).
  • “Given the popularity of neuroimaging and the attention it receives in the press, it is important to understand how people are weighting this evidence and how it may or may not affect people’s decisions. While the effect of neuroscience is small in cases of subjective evaluations, its effect on the mechanistic understanding of a phenomenon is compelling.” (Rhodes, Rodriguez, & Shah, 2014)
  • “Since some individuals may use the presence of neuroscience information as a marker of a good explanation…it is imperative to find ways to increase general awareness of the proper role for neuroscience information in explanations of psychological phenomena.” (Weisberg, Taylor, & Hopkins, 2015)
  • “For several decades, myths about the brain — neuromyths — have persisted in schools and colleges, often being used to justify ineffective approaches to teaching. Many of these myths are biased distortions of scientific fact. Cultural conditions, such as differences in terminology and language, have contributed to a ‘gap’ between neuroscience and education that has shielded these distortions from scrutiny.” (Howard-Jones, P. A., 2014).
  • “Powerful, often self-interested, commercial forces serve as mediators between research and practice, and this raises some pressing questions for future work in the field: what does responsible [research-to-practice] translation look like?” (Busso & Pollack, 2015).

As these quotations make clear, researchers are concerned that neuroscience claims may push us to make poor learning-design decisions. And, they’re worried that unscrupulous people and enterprises may take advantage—and push poor learning approaches on the unsuspecting.

But is this concern warranted? Is there evidence that neuroscience claims are false, misleading, or irrelevant?

Yes! Neuroscience and brain-science claims are almost always deceptive in one way or another. Here’s a short list of issues:

  • Selling neuroscience and brain science as a panacea.
  • Selling neuroscience and brain science as proven and effective for learning.
  • Portraying standard learning research as neuroscience.
  • Cognitive psychologists portraying themselves as neuroscientists.
  • Portraying neuroscience as having already developed a long list of learning recommendations.
  • Portraying one’s products and/or services as based on neuroscience or brain-science.
  • Portraying personality diagnostics as based on neuroscience.
  • Portraying questionnaire data as diagnostic of neurophysiological functioning.

These neuroscience-for-learning deceptions lead to substantial problems:

  1. They push us away from more potent methods for learning design—methods that are actually proven by substantial scientific research.
  2. They make us believe that we are being effective, lessening our efforts to improve our learning interventions. This is an especially harmful problem in the learning field since rarely are we getting good feedback on our actual successes and failures.
  3. They encourage us to follow the recommendations of charlatans, increasing the likelihood that we are getting bad advice.
  4. They drive us to utilize “neurosciencey” diagnostics that are ineffective and unreliable.
  5. They enable vendors to provide us with poor learning designs—partly due to their own blind spots and partly due to intentional deceptions.

Here is a real-life example:

Over the past several years, a person with a cognitive psychology background has portrayed himself as a neuroscientist (which he is NOT). He has become very popular as a conference speaker—and offers his company’s product as the embodiment of neuropsychology principles. Unfortunately, the principles embodied in his product are NOT from neuroscience, but are from standard learning research. More importantly, the learning designs actually implemented with his product (even when designed by his own company) are ineffective and harmful—because they don’t take into account several other findings from the learning research.

Here is an example of one of the interactions from his company’s product:

This is very poor instructional design. It focuses on trivial information that is NOT related to the main learning points. Anybody who knows the learning research—even a little bit—should know that focusing on trivial information is (a) a waste of our learners’ limited attention, (b) a distraction away from the main points, and (c) potentially harmful in encouraging learners to process future learning material in a manner that guides their attention to details and away from more important ideas.

This is just one example of many that I might have used. Unfortunately, we in the learning field are seeing more and more misapplications of neuroscience.

 

Falsely Calling Learning Research Neuroscience

The biggest misappropriation of neuroscience in workplace learning is found in how vendors are relabeling standard learning research as neuroscience. The following graphic is a perfect example.

 

I’ve grayed out the detailed verbiage in the image above to avoid implicating the company that put this forward. My goal is not to single out one vendor, but to elucidate the broader problem. Indeed, this is just one example of hundreds that are easily available in our field.

Note how the vendor talks about brain science but then points to two research findings that were elucidated NOT by neuroscience, but by standard learning research. Both the spacing effect and the retrieval-practice effect have been long known – certainly before neuroscience became widely researched.

Here is another example, also claiming that the spacing effect is a neuroscience finding:

Again, I’m not here to skewer the purveyors of these examples, although I do shake my head in dismay when such findings are portrayed as neuroscience findings. In general, they are not based on neuroscience; they are based on behavioral and cognitive research.

Below is a timeline that demonstrates that neuroscience was NOT the source for the findings related to the spacing effect or retrieval practice.

You’ll notice in the diagram that one of the key tools used by neuroscientists to study the intersection between learning and the brain wasn’t even utilized widely until the early 2000s, whereas the research on retrieval practice and spacing was firmly established prior to 1990.

 

Conclusion

The field of workplace learning—and the wider education field—have fallen under the spell of neuroscience (aka brain-science) recommendations. Unfortunately, neuroscience has not yet created a body of proven recommendations. While offering great promise for the future, as of this writing—in March of 2018—most learning professionals would be better off relying on proven learning recommendations from sources like Make It Stick by Brown, Roediger, and McDaniel; How We Learn by Benedict Carey; and Design for How People Learn by Julie Dirksen.

As learning professionals, we must be more skeptical of neuroscience claims. As research and real-world experience have shown, such claims can persuade us toward ineffective learning designs and unscrupulous vendors and consultants.

Our trade associations and industry thought leaders need to take a stand as well. Instead of promoting neuroscience claims, they ought to voice a healthy skepticism.

 

Post Script

This article took a substantial amount of time to research and write. It has been provided for free as a public service. If you’d like to support the author, please consider hiring him as a consultant or speaker. Dr. Will Thalheimer is available at info@worklearning.com and at 617-718-0767.

 


Research Citations

Bennett, C. M., Baird, A. A., Miller, M. B., & Wolford, G. L. (2010). Neural correlates of interspecies perspective taking in the post-mortem Atlantic salmon: An argument for multiple comparisons correction. Journal of Serendipitous and Unexpected Results, 1(1), 1-5.

Bjork, R. A. (1988). Retrieval practice and the maintenance of knowledge. In M. M. Gruneberg, P. E. Morris, & R. N. Sykes (Eds.), Practical aspects of memory: Current research and issues, Vol. 1. Memory in everyday life (pp. 396-401). Oxford, England: John Wiley.

Bowers, J. S. (2016). The practical and principled problems with educational neuroscience. Psychological Review, 123(5), 600-612.

Bruce, D., & Bahrick, H. P. (1992). Perceptions of past research. American Psychologist, 47(2), 319-328.

Busso, D. S., & Pollack, C. (2015). No brain left behind: Consequences of neuroscience discourse for education. Learning, Media and Technology, 40(2), 168-186.

Eklund, A., Nichols, T. E., & Knutsson, H. (2016). Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the National Academy of Sciences, 113, 7900-7905.

Farah, M. J., & Hook, C. J. (2013). The seductive allure of “seductive allure”. Perspectives on Psychological Science, 8(1), 88-90. http://dx.doi.org/10.1177/1745691612469035

Fernandez-Duque, D., Evans, J., Christian, C., & Hodges, S. D. (2015). Superfluous neuroscience information makes explanations of psychological phenomena more appealing. Journal of Cognitive Neuroscience, 27(5), 926-944. http://dx.doi.org/10.1162/jocn_a_00750

Gabrieli, J. D. E. (2016). The promise of educational neuroscience: Comment on Bowers (2016). Psychological Review, 123(5), 613-619.

Gordon, K. (1925). Class results with spaced and unspaced memorizing. Journal of Experimental Psychology, 8, 337-343.

Gotz, A., & Jacoby, L. L. (1974). Encoding and retrieval processes in long-term retention. Journal of Experimental Psychology, 102(2), 291-297.

Han, H., & Park, J. (2018). Using SPM 12’s second-level Bayesian inference procedure for fMRI analysis: Practical guidelines for end users. Frontiers in Neuroinformatics, 12, February 2.

Hook, C. J., & Farah, M. J. (2013). Look again: Effects of brain images and mind–brain dualism on lay evaluations of research. Journal of Cognitive Neuroscience, 25(9), 1397-1405. http://dx.doi.org/10.1162/jocn_a_00407

Howard-Jones, P.A. (2014). Neuroscience and education: myths and messages. Nature Reviews Neuroscience, 15, 817-824. Available at: http://www.nature.com/nrn/journal/v15/n12/full/nrn3817.html.

Howard-Jones, P. A., Varma, S., Ansari, D., Butterworth, B., De Smedt, B., Goswami, U., Laurillard, D., & Thomas, M. S. C. (2016). The principles and practices of educational neuroscience: Comment on Bowers (2016). Psychological Review, 123(5), 620-627.

Im, S.-H., Varma, K., & Varma, S. (2017). Extending the seductive allure of neuroscience explanations effect to popular articles about educational topics. British Journal of Educational Psychology, 87, 518-534.

Jones, H. E. (1923-1924). Experimental studies of college teaching: The effect of examination on permanence of learning. Archives of Psychology, 10, 1-70.

Michael, R. B., Newman, E. J., Vuorre, M., Cumming, G., & Garry, M. (2013). On the (non)persuasive power of a brain image. Psychonomic Bulletin & Review, 20(4), 720-725.

Rhodes, R. E., Rodriguez, F., & Shah, P. (2014). Explaining the alluring influence of neuroscience information on scientific reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(5), 1432-1440. http://dx.doi.org/10.1037/a0036844

Ruch, T. C. (1928). Factors influencing the relative economy of massed and distributed practice in learning. Psychological Review, 35, 19-45.

Schweitzer, N. J., Baker, D. A., & Risko, E. F. (2013). Fooled by the brain: Re-examining the influence of neuroimages. Cognition, 129(3), 501-511. http://dx.doi.org/10.1016/j.cognition.2013.08.009

Weisberg, D. S., Taylor, J. C. V., & Hopkins, E. J. (2015). Deconstructing the seductive allure of neuroscience explanations. Judgment and Decision Making, 10(5), 429-441.

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470-477.

Zhao, X., Wang, C., Liu, Q., Xiao, X., Jiang, T., Chen, C., & Xue, G. (2015). Neural mechanisms of the spacing effect in episodic memory: A parallel EEG and fMRI study. Cortex: A Journal Devoted to the Study of the Nervous System and Behavior, 69, 76-92. http://dx.doi.org/10.1016/j.cortex.2015.04.002

Today, industry luminary and social-media advocate Jane Hart wrote an incendiary blog post claiming that “the world of L&D [Learning and Development] is splitting in two.” According to Jane there are good guys and bad guys.

The bad guys are the “Traditionalists.” Here is some of what Jane says about them:

  • “They cling onto 20th century views of Training & Development.”
  • “They believe they know what is best for their people.”
  • “They disregard the fact that most people are bored to tears sitting in a classroom or studying an e-learning course at their desktop.”
  • “They miss the big picture – the fact that learning is much more than courses, but involves continuously acquiring new knowledge and skills as part of everyday work.”
  • “They don’t understand that the world has changed.”

Fighting words? Yes! Insulting words? Yes! Painting with too broad a brush? Yes! Maybe just to make a point? Probably!

Still, Jane’s message is clear. Traditionalists are incompetent fools who must be eradicated because of the evil they are doing.

Fortunately, galloping in on white horses we have “Modern Workplace Learning (MWL) practitioners.” These enlightened souls are doing the following, according to Jane:

  • “They are rejecting the creation of expensive, sophisticated e-learning content and preferring to build short, flexible, modern resources (where required) that people can access when they need them. AND they are also encouraging social content (or employee-generated content) – particularly social video – because they know that people know best what works for them.”
  • “They are ditching their LMS (or perhaps just hanging on to it to manage some regulatory training) – because they recognise it is a white elephant – and it doesn’t help them understand the only valid indicator of learning success, how performance has changed and improved.”
  • “They are moving to a performance-driven world – helping groups find their own solutions to problems – ones that they really need, will value, and actually use, and recognise that these solutions are often ones they organise and manage themselves.”
  • “They are working with managers to help them develop their people on the ground – and see the success of these initiatives in terms of impact on job performance.”
  • “They are helping individuals take responsibility for their own learning and personal development – so that they continuously grow and improve, and hence become valuable employees in the workplace.”
  • “They are supporting teams as they work together using enterprise social platforms – in order to underpin the natural sharing within the group, and improve team learning.”

Points of Agreement

I agree with Jane in a number of ways. Many of the practices we use in workplace learning are ineffective.

Here are some points of agreement:

  1. Too much of our training is ineffective!
  2. Too often training and/or elearning are seen as the only answer!
  3. Too often we don’t think of how we, as learning professionals, can leverage on-the-job learning.
  4. Too often we default to solutions that try to support performance primarily by helping people learn — when performance assistance would be preferable.
  5. Too often we believe that we have to promote approved organizational knowledge, when we might be better off letting our fellow workers develop and share their own knowledge.
  6. Too often we don’t utilize new technologies in an effort to provide more effective learning experiences.
  7. Too often we don’t leverage managers to support on-the-job learning.
  8. Too often we don’t focus on how to improve performance.

Impassioned Disagreement

As someone who has shared the stage with Jane in the past, and who knows that she’s an incredibly lovely person, I doubt that she means to cast aspersions on a whole cohort of dedicated learning-and-performance professionals.

Where I get knocked off my saddle is with the oversimplifications encouraged in the long-running debate between the traditionalist black hats and the informal-learning-through-social-media white hats! Pitting these groups against each other is beside the point!

I remember not too long ago when it was claimed that “training is dead,” that “training departments will disappear,” that “all learning is social,” that “social-media is the answer,” etc…

What is often forgotten is that the only thing that really matters is the human cognitive architecture. If our learning events and workplace situations don’t align with that architecture, learning will suffer.

Oversimplifications that Hurt the Learning Field

  1. Learners know how they learn best so we should let them figure it out.
    Learners, as research shows, often do NOT know how they learn best, so leaving them to figure it out on their own may be counterproductive; we should find ways to support their learning.
  2. Learning can be shortened because all learners need to do is look it up.
    Sometimes learners have a known learning need that can be solved with a quick burst of information. BUT NOT ALL LEARNING is like this! Much of learning requires a deeper, longer experience. Much of learning requires more practice, more practical experience, etc. Because of these needs, much of learning requires support from honest-to-goodness learning professionals.
  3. All training and elearning is boring!
    Really? This is obviously NOT true, even if much of it could be lots better.
  4. That people can always be trusted to create their own content!
    This is sometimes true and sometimes not. Indeed, sometimes people get stuff wrong (sometimes dangerously wrong). Sometimes experts actually have expertise that we normal people don’t have.
  5. That using some sort of enterprise social platform is always effective, or is always more effective, or is easy to use to create successful learning.
    Really? Haven’t you heard more than one or two horror stories — or failed efforts? Wikis that weren’t populated. Blogs that fizzled. SharePoint sites that were isolated from users who could use the information. Forums where less than 1% of folks are involved. Et cetera… And let’s not forget, these social-learning platforms tend to be much better at just-in-time learning than at long-term, deeper learning (not totally, but usually).
  6. That on-the-job learning is easy to leverage.
    Let’s face it, formal training is MUCH EASIER to leverage than on-the-job learning. On-the-job learning is messy and hard to reach. It’s also hard to understand all the forces involved in on-the-job learning. And what’s ironic is that there is already a group that is in a position to influence on-the-job learning. The technical term is “managers.”
  7. Crowds of people always have more wisdom than single individuals.
    This may be one of the stupidest memes floating around our field right now. Sounds sexy. Sounds right. But not when you look into the world around us. I might suggest recent presidential candidate debates here in the United States as evidence. Clearly, the smartest ideas don’t always rise to prominence!
  8. Traditional learning professionals have nothing of value to offer.
    Since I’m on the front lines in stating that our field is under-professionalized, I probably am the last one who should be critiquing this critique, but it strikes me as a gross simplification — if not grossly unfair. Human learning is exponentially more complex than rocket science, so none of us have a monopoly on learning wisdom. I’m a big proponent of research-based and evidence-based practice, and yet neither research nor any other form of evidence is omniscient. Almost every time I teach, talk to clients, read a book, read a research article, or read the newspaper, I learn more about learning. I’ve learned a ton from traditional learning professionals. I’ve also learned a ton from social-learning advocates.

 

Summary

In today’s world, there are simply too many echo chambers — places that are comfortable, that reinforce our preconceptions, and that encourage us to demonize others and close off avenues to our own improvement.

We in the learning field need to leave echo chambers to our political brethren, where they will do less damage (Ha!). We have to test our assumptions, utilize the research, and develop effective evaluation tools to really test the success of our learning interventions. We have to be open, but not too easily hoodwinked by claims and shared perceptions.

Hail to the traditionalists and the social-learning evangelists!

 

Follow-up!

Clark Quinn wrote an excellent blog post to reconcile the visions promoted by Jane and Will.

 

Share!

If you want to share this discussion with others, here are the links:

  • Jane’s Provocative Blog Post:
    • http://www.c4lpt.co.uk/blog/2015/11/12/the-ld-world-is-splitting-in-two/
  • Will’s Spirited Critique:
    • http://www.willatworklearning.com/2015/11/the-two-world-theory-of-workplace-learning-critiqued.html
  • Clark’s Reconciliation:
    • http://blog.learnlets.com/?p=4655#comment-821615

 

 

Just last month at the Debunker Club, we debunked the learning-styles approach to learning design based on our previous compilation of learning-styles debunking resources.

Now, there’s a new research review by Daniel Willingham, debunker extraordinaire, and colleagues.

Willingham, D. T., Hughes, E. M., & Dobolyi, D. G. (2015). The scientific status of learning styles theories. Teaching of Psychology, 42(3), 266-271. http://dx.doi.org/10.1177/0098628315589505

Here’s what they tried to do in the article, in their own words:

“The purpose of this article is to (a) clarify what learning styles theories claim and distinguish them from theories of ability, (b) summarize empirical research pertaining to learning styles, and (c) provide suggestions for practice and implications supported by empirical research.”

The distinction between abilities and styles is important to the authors:

“The two are often confused, but the distinction is important. It is relatively uncontroversial that cognitive ability is multifaceted (e.g., verbal ability and facility with space have distinct cognitive bases), and it is uncontroversial that individuals vary in these abilities. For “styles” to add any value to an account of human cognition and learning, it must mean something other than what ability means. While styles refer to how one does things, abilities concern how well one does them.”

Predictions from learning-styles theory:

“Learning styles theories make two straightforward predictions. First, a learning style is proposed to be a consistent attribute of an individual, thus, a person’s learning style should be constant across situations. Consequently, someone considered an auditory learner would learn best through auditory processes regardless of the subject matter (e.g., science, literature, or mathematics) or setting (e.g., school, sports practice, or work). Second, cognitive function should be more effective when it is consistent with a person’s preferred style; thus, the visual learner should remember better (or problem-solve better, or attend better) with visual materials than with other materials.”
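
In statistical terms, that second prediction is a crossover interaction: visual learners should do best with visual materials, and auditory learners with auditory materials. Here is a minimal sketch (hypothetical data, my own illustration, not from the Willingham article) of how such a matching study is scored; learning-styles theory predicts a large positive interaction contrast, while the reviews quoted next find nothing of the kind:

```python
import numpy as np

# Hypothetical 2x2 matching study: learner style (visual/auditory) crossed
# with presentation format (visual/auditory). Learning-styles theory predicts
# a crossover interaction: each group scores best in its matched format.
rng = np.random.default_rng(0)
n = 30  # learners per cell

# Simulated test scores. Because reviews find no matching benefit, all four
# cells are drawn from the same distribution here.
scores = {
    ("visual", "visual"):     rng.normal(70, 10, n),
    ("visual", "auditory"):   rng.normal(70, 10, n),
    ("auditory", "auditory"): rng.normal(70, 10, n),
    ("auditory", "visual"):   rng.normal(70, 10, n),
}
means = {cell: s.mean() for cell, s in scores.items()}

# Interaction contrast: (matched minus mismatched), summed over both styles.
# A true crossover would make this reliably and substantially positive.
interaction = ((means[("visual", "visual")] - means[("visual", "auditory")])
               + (means[("auditory", "auditory")] - means[("auditory", "visual")]))

print(f"Interaction contrast: {interaction:+.2f} points")  # hovers near zero
```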

Results: Are these learning-styles predictions validated by the research?:

“No. Several reviews that span decades have evaluated the literature on learning styles (e.g., Arter & Jenkins, 1979; Kampwirth & Bates, 1980; Kavale & Forness, 1987; Kavale, Hirshoren, & Forness, 1998; Pashler et al., 2009; Snider, 1992; Stahl, 1999; Tarver & Dawson, 1978), and each has drawn the conclusion that there is no viable evidence to support the theory. Even a recent review intended to be friendly to theories of learning styles (Kozhevnikov, Evans, & Kosslyn, 2014) failed to claim that this prediction of the theory has empirical support. The lack of supporting evidence is especially unsurprising in light of the unreliability of most instruments used to identify learners’ styles (for a review, see Coffield et al., 2004).”

 

 

John Medina, author of Brain Rules and Developmental Molecular Biologist at the University of Washington / Seattle Pacific University, was today’s keynote speaker at PCMA’s Education Conference in Fort Lauderdale, Florida.

He did a great job in the keynote, well organized and with oodles of humor, but what struck me was that even though the guy is a real neuroscientist, he is very clear in stating the limitations of our understanding of the brain. Here are some direct quotes from his keynote, as I recorded them in my notes:

“I don’t think brain science has anything to say for business practice.”

“We still don’t really know how the brain works.”

“The state of our knowledge [of the brain] is childlike.”

“The human brain was not built to learn. It was built to survive.”

Very refreshing! Especially in an era where conference sessions, white papers, and trade-industry publications are oozing with brain science bromides, neuroscience snake oil, and unrepentant con artists who, in the interest of taking money from fools, corral the sheep of the learning profession into all manner of poor purchasing decisions.

The Debunker Club is working on a resource page to combat the learning myth, “Neuroscience (Brain Science) Trumps Other Sources of Knowledge about Learning,” and John Medina gives us more ammunition against the silliness.

In addition to John’s keynote, I enjoyed eating lunch with him. He’s a fascinating man, wicked knowledgeable about a range of topics, funny, and kind to all (as I found out when he struck up a warm rapport with the guy who served our food). Thanks, John, for a great time at lunch!

One of the topics we talked about was the poor record researchers have in getting their wisdom shared with real citizens. John believes researchers, who often get research funding from taxpayer money, have a moral obligation to share what they’ve learned with the public.

I shared my belief that one of the problems is that there is no funding stream for research translators. The academy often frowns on professors who attempt to share their knowledge with lay audiences. Calls of “selling out” are rampant. You can read my full thoughts on the need for research translators at a blog post I wrote early this year.

Later in the day at the conference, John was interviewed in a session by Adrian Segar, an expert on conference and meeting design. Again, John shined as a deep and thoughtful thinker — and refreshingly, as a guy who is more than willing to admit when he doesn’t know and/or when the science is not clear.

To check out or buy the latest version of Brain Rules, click on the image below:

 

 

 

 

Like an arrow through the heart! Max Roser's tweet really kills. With his one tweet he spread false, misleading, and dangerous information to thousands — maybe tens of thousands — of people across the globe. 223 Retweets. 198 Favorites. Egads. What a disaster!

[Image: Max Roser’s tweet sharing the bogus learning pyramid]

More reason The Debunker Club is needed. More reason for you to help debunk these myths!

Please, someone, please send me a tranquilizer…

 

And apologies to Max. I'm sure he's a fine human being. And of course he's not the only one who sends this bad information around (we have evidence that 223 others followed his lead), so I shouldn't really be focused on the poor lad….

But cripes! How do people get through the education system not knowing not to pass information around without at least a hint of skepticism, without checking sources, without taking a breath of oxygenated air…

Here's the evidence that this information is bogus: http://www.willatworklearning.com/2015/01/mythical-retention-data-the-corrupted-cone.html

 

 

 

 

To honor David Letterman soon after his sign-off, I’ll use his inverted top-10 design.

The following represent the Top 10 Reasons to Write a Blog Post Debunking the Learning Styles Myth:

10. Several scientific review articles have been published showing that using learning styles to design learning produces no appreciable benefits. See The Debunker Club resource page on learning styles.

9. If you want to help your readers create the most effective learning interventions, you’d do better focusing on other design principles, for example those put forth in the Serious eLearning Manifesto, the Decisive Dozen Research, the Training Maximizers Model, or the books Make It Stick, How We Learn, or Design for How People Learn.

8. There are already great videos debunking the learning-styles myth (Tesia Marshik, Daniel Willingham), so you’re better off spreading the word through your own blog network; through Twitter, Hangouts, and LinkedIn; and with your colleagues at work.

7. The learning styles myth is so pervasive that the first 17 search results on Google (as of June 1, 2015) continue to encourage the learning styles idea — even though it is harmful to learners and wasteful as a learning method. Just imagine how many lives you would touch if your blog post jumped into the top searches.

6. It’s a total embarrassment to the learning fields (the K-12 education field, the workplace training field, higher education). We as members of those fields need to get off our asses and do something. Haven’t teachers suffered enough blows to their reputation without having to absorb a pummeling from articles like those in The New York Times and Wired Magazine? Haven’t instructional designers and trainers been buffeted enough by claims that they fail to maximize learning results?

5. Isn’t it about time that we professionals took back our field from vendors and those in the commercial industrial complex who only want to make a buck, who don’t care about the learners, who don’t care about the science, who don’t care about anything but their own special interests? Do what is right! Get off the mat and put a fist in the mouth of the learning-styles industrial complex!

4. Write a blog post on the learning-styles myth because you can have a blast with over-the-top calls to action, like the one I just wrote in #5 above. Boy, that was fun!

3. There’s some evidence that directly confronting advocates of strong ideas — like learning-styles true believers — will only make them more resistant in their unfounded beliefs. See the Debunking Handbook for details. Therefore, our best efforts may be better focused not on the true believers, but on the general population. In this, our goal should be to create a climate of skepticism about learning styles. You can directly help in this effort by writing a blog post, by taking to Twitter and LinkedIn, and by sharing with your colleagues and friends.

2. Because you’re a professional.

1. Because the learning-styles idea is a myth.

Insert uplifting music here…

June is Debunk Learning Styles Month in the learning field!

One of the most ubiquitous myths in the world today, the learning-styles idea has reached a crescendo within the workplace learning field and in education as well. The idea is that if you diagnose learners on their learning styles and then tailor learning methods to those different styles, learning results will improve.

It’s a widespread belief, but it’s actually false. Research evidence suggests that using learning styles to guide learning design does not improve learning results.

The good news is that there are several solid research reviews that demonstrate this. Indeed, The Debunker Club, which I organize, has compiled some excellent resources for folks who want to see the evidence.

To see The Debunker Club resource page on learning styles, click here.

To join The Debunker Club in debunking learning styles now (June 2015), click here.

To become a member of The Debunker Club, click here.

 

 

There’s too much crap floating around the learning and education fields; too many myths, misconceptions, and maladaptive learning designs. I started The Debunker Club and have been working to debunk myths for many years, so I’m passionate about the need for more debunking. The need is great and the danger to learning and learners is dire.

Fortunately, entering the world is a great new book by three researchers, Pedro De Bruyckere, Paul A. Kirschner, and Casper D. Hulshof. Their book is titled Urban Myths about Learning and Education, and it’s jam-packed with a list of 35 myths that plague our field.

You can buy it through Amazon by clicking the image below.
 



 
The book is sure to have a major impact in the education and training fields.

Partial List of Topics

Here is a list of SOME of the myths they debunk*:

  • Learning styles
  • The bastardized version of Dale’s Cone
  • 70-20-10
  • No need for knowledge if we can look everything up
  • Discovery learning is best
  • Problem-based learning
  • School kills creativity
  • 93% of communication is non-verbal
  • We use only 10% of our brains
  • You can train your brain with brain games
  • We think most clearly when we’re under pressure
  • Neuroscience provides helpful recommendations
  • New technology is causing a revolution in education
  • The internet belongs in the classroom
  • The internet makes us dumber
  • Class size doesn’t matter

*Note that “debunk” doesn’t necessarily mean to rule out completely! Often the authors find supporting evidence for some of the claims, or partial evidence, or they highlight boundary conditions.

Quotes from the Book

To give you a sense of the book, here are some quotes:

On Learning Styles:

“Though appealing, no solid evidence exists showing that there is any benefit in adapting and designing education and instruction to these so-called styles. It may even be the case that in doing so, administrators, teachers, parents and even learners are negatively influencing the learning process and the products of education and instruction.”

 

On the Too-Ready Belief in Neuroscience:

“In practice, at the moment it is only the insights of cognitive psychology [not neuropsychology] that can be effectively used in education, but even here care needs to be taken. Neurology has the potential to add value to education, but in general there are only two real conclusions we can make at present:

– For the time being, we do not really understand all that much about the brain.
– More importantly, it is difficult to generalize what we do know into a set of concrete precepts of behavior, never mind devise methods for influencing that behavior.”

 

On the 70-20-10 Rule

“Informal learning is certainly very important, but we could find no evidence in the scientific literature to support the ratio of 70% informal learning, 20% learning from others, and 10% formal learning.”

 

On Problem-Based Learning

“The use of problem-based learning to learn new content does not have a positive learning effect. But there is a positive learning effect if you use problem-based learning to further explore and remember something that the learner already knows.”

 

On Class Size

“Some studies show that smaller classes are not necessarily better, but that is just a part of the story. The quality of the teachers seems to be more important than class size, but other studies do suggest that smaller classes also seem to have performed better.”

Strengths of the Book

  • After each of the 35 myths, the authors write a short conclusion that very clearly and succinctly sums up their findings. This is very helpful.
  • The 35 myths are almost all very well known and important issues that need a research-based commentary.
  • The authors appear to have done their homework in researching the topics in the book. Certainly in the areas of research that I know best, their findings are consistent with my reading of the research.
  • The authors weigh complicated evidence in a manner that is fair and thoughtful.

Weaknesses of the Book

  • While the authors have designed the book specifically to reach practitioners (teachers, trainers, instructional designers, professors, and other learning professionals), they too often fall into the trap of using research jargon, which will make it difficult for some of their intended audience to fully comprehend some of the finer points of the book.

To Buy the Book, or Not?

Absolutely! Buy the book now! Occasionally, you might have some trouble with the jargon, but the most important messages will come through loud and clear.

This is a great book to peruse in short bursts. Each myth has its own chapter, which can be quickly read and deciphered. A great book to keep on your desk, in the bathroom, or on your cell phone. I’m loving it on my phone’s Kindle reader.

You can buy it on Amazon right now.

 

The Danger

Have you ever seen the following “research” presented to demonstrate some truth about human learning?

Unfortunately, all of the above diagrams are evangelizing misleading information. Worse, these fabrications have been rampant over the last two or three decades—and seem to have accelerated during the age of the internet. Indeed, a Google image search for “Dale’s Cone” produces about 80% misleading information, as you can see below from a recent search.

Search 2015:

 

Search 2017:

 

This proliferation is a truly dangerous and heinous result of incompetence, deceit, confirmation bias, greed, and other nefarious human tendencies.

It is also hurting learners throughout the world—and it must be stopped. Each of us has a responsibility in this regard.

 

New Research

Fortunately, a group of tireless researchers—who I’ve had the honor of collaborating with—has put a wooden stake through the dark heart of this demon. In the most recent issue of the scientific journal Educational Technology, Deepak Subramony, Michael Molenda, Anthony Betrus, and I (my contribution was small) produced four articles on the dangers of this misinformation and its genesis. After working separately over the years to debunk this bit of mythology, the four of us have come together in a joint effort to rally the troops—people like you, dedicated professionals who want to create the best outcomes for your learners.

Here are the citations for the four articles. Later, I will have a synopsis of each article.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Mythical Retention Chart and the Corruption of Dale’s Cone of Experience. Educational Technology, Nov/Dec 2014, 54(6), 6-16.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Previous Attempts to Debunk the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 17-21.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Good, the Bad, and the Ugly: A Bibliographic Essay on the Corrupted Cone. Educational Technology, Nov/Dec 2014, 54(6), 22-31.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Timeline of the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 31-34.

Many thanks to Lawrence Lipsitz, the editor of Educational Technology, for his support, encouragement, and efforts in making this possible!

To get a copy of the “Special Issue” or to subscribe to Educational Technology, go to this website. (Note, 2017: I don’t think the journal is being published anymore.)

 

The Background

There are two separate memes we are debunking, what we’ve labeled (1) the mythical retention chart and (2) the corruption of Dale’s Cone of Experience. As you will see—or might have noticed in the images I previously shared—the two have often been commingled.

Here is an example of the mythical retention chart:

 

Oftentimes though, this is presented in text:

“People Remember:

  • 10 percent of what they read;
  • 20 percent of what they hear;
  • 30 percent of what they see;
  • 50 percent of what they see and hear;
  • 70 percent of what they say; and
  • 90 percent of what they do and say.”

Note that the numbers proffered are not always the same, nor are the factors alleged to spur learning. So, for example, you can see that on the graphic, people are said to remember 30 percent of what they hear, but in the text, the percentage is 20 percent. In the graphic, people remember 80 percent when they are collaborating, but in the text they remember 70% of what they SAY. I’ve looked at hundreds of examples, and the variety is staggering.

Most importantly, the numbers do NOT provide good guidance for learning design, as I will detail later.

Here is a photocopied image of the original Dale’s Cone:

Edgar Dale (1900-1985) was an American educator who is best known for developing “Dale’s Cone of Experience” (the cone above) and for his work on how to incorporate audio-visual materials into the classroom learning experience. The image above was photocopied directly from his book, Audio-visual methods in teaching (from the 1969 edition).

 

You’ll note that Dale included no numbers in his cone. He also warned his readers not to take the cone too literally.

Unfortunately, someone somewhere decided to add the misleading numbers. Here are two more examples:

 

I include these two examples to make two points. First, note how one person clearly stole from the other. Second, note how sloppy these fabricators are. They include a Confucius quote that directly contradicts what the numbers say. On the left side of the visuals, Confucius is purported to say that hearing is better than seeing, while the numbers on the right of the visuals say that seeing is better than hearing. And, by the way, Confucius did not actually say what he is alleged to have said! What seems clear from looking at these and other examples is that people don’t do their due diligence—their ends seem to justify their means—and they are damn sloppy, suggesting that they don’t think their audiences will examine their arguments closely.

By the way, these deceptions are not restricted to the English-speaking world:

 

Intro to the Special Issue of Educational Technology

As Deepak Subramony and Michael Molenda say in the introduction to the Special Issue of Educational Technology, the four articles presented seek to provide a “comprehensive and complete analysis of the issues surrounding these tortured constructs.” They also provide “extensive supporting material necessary to present a comprehensive refutation of the aforementioned attempts to corrupt Dale’s original model.”

In the concluding notes to the introduction, Subramony and Molenda leave us with a somewhat dystopian view of information trajectory in the internet age. “In today’s Information Age it is immensely difficult, if not practically impossible, to contain the spread of bad ideas within cyberspace. As we speak, the corrupted cone and its attendant “data” are akin to a living organism—a virtual 21st century plague—that continues to spread and mutate all over the World Wide Web, most recently to China. It therefore seems logical—and responsible—on our part that we would ourselves endeavor to continue our efforts to combat this vexing misinformation on the Web as well.”

Later, I will provide a section on what we can all do to help debunk the myths and inaccuracies embedded in these fabrications.

Now, I provide a synopsis of each article in the Special Issue.


Synopsis of First Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Mythical Retention Chart and the Corruption of Dale’s Cone of Experience. Educational Technology, Nov/Dec 2014, 54(6), 6-16.

The authors point out that, “Learners—both face-to-face and distant—in classrooms, training centers, or homes are being subjected to lessons designed according to principles that are both unreliable and invalid. In any profession this would be called malpractice.” (p. 6).

The article makes four claims.

Claim 1: The Data in the Retention Chart is Not Credible

First, there is no body of research that supports the data presented in the many forms of the retention chart. That is, there is no scientific data—or any other data—that supports the claim that people remember some fixed percentage of what they read, hear, see, say, or do. Interestingly, although people have pointed to research citations from 1943, 1947, 1963, and 1967 as the defining research behind the numbers, the numbers—10%, 20%, 30%, and so on—actually appeared as early as 1914 and 1922, when they were presented as information long known. A few years ago, I compiled research on actual percentages of remembering. You can access it here.

Second, the fact that the numbers are all divisible by 5 or 10 makes it obvious to anyone who has done research that they were not derived from actual studies. Human variability precludes such round numbers. In addition, as Dwyer pointed out as early as 1978, there is the question of how the data could have been derived—what were learners actually asked to do? Note, for example, that the retention chart always claims to measure—among other things—how much people remember from reading, hearing, and seeing. How people could read without seeing is an obvious confusion. What are people doing when they only see, but don't read or listen? Also problematic is how you'd create a fair test to compare a situation where learners listened with one where they watched. Are they given different tests (one for seeing and one for listening), which invites bias, or are they given the same test, in which case one group is at a disadvantage because it isn't being tested in the same format in which it learned?
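To make the round-numbers point concrete, here is a minimal simulation sketch in Python. All of the retention rates, learner counts, and question counts below are hypothetical assumptions chosen purely for illustration, not data from any study:

```python
import random

# Minimal sketch: simulate a hypothetical retention experiment to show that
# averages of real, variable test scores essentially never land on the tidy
# multiples of 10 claimed by the mythical retention chart.
random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical "true" retention rates per condition (invented for this sketch)
conditions = {"read": 0.35, "hear": 0.40, "see": 0.45, "see and hear": 0.55}

for condition, true_rate in conditions.items():
    # 40 hypothetical learners, each answering 20 recall questions;
    # each question is answered correctly with probability true_rate
    scores = [
        sum(random.random() < true_rate for _ in range(20)) / 20
        for _ in range(40)
    ]
    mean_pct = 100 * sum(scores) / len(scores)
    print(f"{condition:>14}: mean retention = {mean_pct:.1f}%")
```

Run this a few times with different seeds and the group means come out as messy values like 34.6 or 41.3 percent; they virtually never hit exactly 10, 20, or 30. Real measurement produces exactly this kind of noise, which is why a chart of perfectly round numbers should immediately arouse suspicion.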

Third, the data portrayed don’t relate to any other research in the scientific literature on learning. As the authors write, “There is within educational psychology a voluminous literature on remembering and learning from various mediated experiences. Nowhere in this literature is there any summary of findings that remotely resembles the fictitious retention chart.” (p. 8)

Finally, as the authors say, “Making sense of the retention chart is made nearly impossible by the varying presentations of the data, the numbers in the chart being a moving target, altered by the users to fit their individual biases about desirable training methods.” (p. 9).

Claim 2: Dale’s Cone is Misused.

Dale’s Cone of Experience is a visual depiction that portrays more concrete learning experiences at the bottom of the cone and more abstract experiences at the top of the cone. As the authors write, “The cone shape was meant to convey the gradual loss of sensory information” (p. 9) in the learning experiences as one moved from lower to higher levels on the cone.

“The root of all the perversions of the Cone is the assumption that the Cone is meant to be a prescriptive guide. Dale definitely intended the Cone to be descriptive—a classification system, not a road map for lesson planning.” (p. 10)

Claim 3: Combining the Retention Chart Data with Dale’s Cone

“The mythical retention data and the concrete-to-abstract cone evolved separately throughout the 1900’s, as illustrated in [the fourth article] ‘Timeline of the Mythical Retention Chart and Corrupted Dale’s Cone.’ At some point, probably around 1970, some errant soul—or perhaps more than one person—had the regrettable idea of overlaying the dubious retention data on top of Dale’s Cone of Experience.” (p. 11). We call this concoction the corrupted cone.

“What we do know is that over the succeeding years [after the original corruption] the corrupted cone spread widely from one source to another, not in scholarly publications—where someone might have asked hard questions about sources—but in ephemeral materials, such as handouts and slides used in teaching or manuals used in military or corporate training.” (p. 11-12).

“With the growth of the Internet, the World Wide Web, after 1993 this attractive nuisance spread rapidly, even virally. Imagine the retention data as a rapidly mutating virus and Dale’s Cone as a host; then imagine the World Wide Web as a bathhouse. Imagine the variety of mutations and their resistance to antiviral treatment. A Google Search in 2014 revealed 11,000 hits for ‘Dale’s Cone,’ 14,500 for ‘Cone of Learning,’ and 176,000 for ‘Cone of Experience.’ And virtually all of them are corrupted or fallacious representations of the original Dale’s cone. It just might be the most widespread pedagogical myth in the history of Western civilization!” (p. 11).

Claim 4: Murky Provenance

People who present the fallacious retention data and/or the corrupted cone often cite other sources that might seem authoritative. Dozens of attributions have been made over the years, but several sources appear over and over, including the following:

  • Edgar Dale
  • Wiman & Meierhenry
  • Bruce Nyland
  • Various oil companies (Mobil, Standard Oil, Socony-Vacuum Oil, etc.)
  • NTL Institute
  • William Glasser
  • British Audio-Visual Society
  • Chi, Bassok, Lewis, Reimann, & Glaser (1989).

Unfortunately, none of these attributions holds up. They are false sources.

Conclusion:

“The retention chart cannot be supported in terms of scientific validity or logical interpretability. The Cone of Experience, created by Edgar Dale in 1946, makes no claim of scientific grounding, and its utility as a prescriptive theory is thoroughly unjustified.” (p. 15)

“No qualified scholar would endorse the use of this mish-mash as a guide to either research or design of learning environments. Nevertheless, [the corrupted cone] obviously has an allure that surpasses logical considerations. Clearly, it says something that many people want to hear. It reduces the complexity of media and method selection to a simple and easy to remember formula. It can thus be used to support a bias toward whatever learning methodology might be in vogue. Users seem to employ it as pseudo-scientific justification for their own preferences about media and methods.” (p. 15)


Synopsis of Second Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Previous Attempts to Debunk the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 17-21.

The authors point to earlier attempts to debunk the mythical retention data and the corrupted cone. “Critics have been attempting to debunk the mythical retention chart at least since 1971. The earliest critics, David Curl and Frank Dwyer, were addressing just the retention data. Beginning around 2002, a new generation of critics has taken on the illegitimate combination of the retention chart and Edgar Dale’s Cone of Experience—the corrupted cone.” (p. 17).

Interestingly, we found only two people who attempted to debunk the retention “data” before 2000. This could be because we failed to find other debunking attempts that existed, or it might simply be because, before the internet, there weren't that many examples of people sharing the bad information.

Starting in about 2002, we noticed many more refutations. I suspect this has to do with two things. First, in the internet age it is easier to search human activity quickly, which makes examples easier to find. Second, the internet also makes it easier for people to post erroneous information and share it with a universal audience.

The bottom line is that there have been a handful of people—in addition to the four authors—who have attempted to debunk the bogus information.


Synopsis of Third Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Good, the Bad, and the Ugly: A Bibliographic Essay on the Corrupted Cone. Educational Technology, Nov/Dec 2014, 54(6), 22-31.

The authors of the article provide a series of brief synopses of the major players who have been cited as sources of the bogus data and corrupted visualizations. The goal here is to give you—the reader—additional information so you can make your own assessment of the credibility of the research sources provided.

Most people—I suspect—will skim through this article with a modest twinge of voyeuristic pleasure. I did.


Synopsis of Fourth Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Timeline of the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 31-34.

The authors present a decade-by-decade outline of examples of the bogus information being reported, from 1900 to the 2000s. The outline represents great detective work by my co-authors, who have spent years searching databases, reading articles, and reaching out to individuals and institutions in search of the genesis and rebirth of the bogus information. I'm in continual awe of their exhaustive efforts!

The timeline includes scholarly work (such as the Journal of Education), numerous books, academic courses, corporate training, government publications, military guidelines, and more.

The breadth and depth of examples demonstrates clearly that no area of the learning profession has been immune to the disease of poor information.


Synopsis of the Exhibits:

The authors catalog 16 different examples of the visuals that have been used to convey the mythical retention data and/or the corrupted cone. They also present about 25 text examples.

The visual examples are canonical black-and-white versions and, given these limitations, can't convey the wild variety of examples available now on the internet. Still, their variety shows just how often people have modified Dale's Cone to support their own objectives.


My Conclusions, Warnings, and Recommendations

The four articles in the special issue of Educational Technology represent a watershed moment in the history of misinformation in the learning profession. Using two examples—the mythical retention data (“People remember 10%, 20%, 30%…”) and the numerical corruptions of Dale’s Cone—the articles demonstrate the following:

  1. There are definitively bogus data sources floating around the learning profession.
  2. These bogus information sources damage the effectiveness of learning and hurt learners.
  3. Authors of these bogus examples do not do their due diligence in confirming the validity of their research sources. They blithely reproduce sources or augment them before conveying them to others.
  4. Consumers of these bogus information sources do not do their due diligence in being skeptical, in expecting and demanding validated scientific information, in pushing back against those who convey weak information.
  5. Those who stand up publicly to debunk such misinformation—though nobly fighting the good fight—do not seem to be winning the war against this misinformation.
  6. More must be done if we are to limit the damage.

Some of you may chafe at my tone here, and if I had more time, I might have been able to be more careful in my wording. But still, this stuff matters! Moreover, these articles focus on only two examples of bogus memes in the learning field. There are many more! Learning styles, anyone?

Here is what you can do to help:

  1. Be skeptical.
  2. When conveying or consuming research-based information, check the actual source. Does it say what it is purported to say? Is it a scientifically-validated source? Are there corroborating sources?
  3. Gently—perhaps privately—let conveyors of bogus information know that they are conveying bogus information. Show them your sources so they can investigate for themselves.
  4. When you catch someone conveying bogus information, make note that they may be the kind of person who is lazy or corrupt in the information they convey or use in their decision making.
  5. Punish, sanction, or reprimand those in your sphere of influence who convey bogus information. Be fair and don’t be an ass about it.
  6. Make or take opportunities to convey warnings about the bogus information.
  7. Seek out scientifically-validated information and the people and institutions who tend to convey this information.
  8. Document more examples.

To this end, Anthony Betrus—on behalf of the four authors—has established www.coneofexperience.com. The purpose of this website is to provide a place for further exploration of the issues raised in the four articles. It provides the following:

  • A series of timelines
  • Links to other debunking attempts
  • A place for people to share stories about their experiences with the bogus data and visuals

The learning industry also has responsibilities.

  1. Educational institutions must ensure that validated information is more likely to be conveyed to their students, within the bounds of academic freedom, of course.
  2. Educational institutions must teach their students how to be good consumers of “research,” “data,” and information (more generally).
  3. Trade organizations must provide better introductory education for their members; more myth-busting articles, blog posts, videos, etc.; and push a stronger evidence-based-practice agenda.
  4. Researchers have to partner with research translators more often to get research-based information to real-world practitioners.


In an article in the New York Times, Farhad Manjoo reports on Google's efforts to improve diversity. This is a commendable effort.

I was struck that while Google was using scientists to devise the content of a diversity training program, it didn't seem to be drawing on research about the learning-to-performance process at all. It could be that Manjoo left this out of the article, or it could be that Google is missing the boat. Here's my commentary:

Dear Farhad,

Either this article is missing vital information, or Google, while perhaps using research on unconscious biases, is completely failing to utilize research-based best practices in learning-to-performance design. Ask almost any thought leader in the training-and-development field and they'll tell you that training by itself, without additional supports, is extremely unlikely to substantially change behavior.

By the way, the anecdotes cited for the success of Google's 90-minute training program are not persuasive. It's easy to find some anecdotes that support one's claims. Scientists call this "confirmation bias."

Believe it or not, there is a burgeoning science around what successful learning-to-performance solutions look like. This article, unfortunately, encourages the false notion that training programs alone will be successful in producing behavior change.