An article by Farhad Manjoo in the New York Times reports on Google's efforts to improve diversity. This is a commendable effort.

I was struck that while Google was utilizing scientists to devise the content of a diversity training program, it didn't seem to be utilizing research on the learning-to-performance process at all. It could be that Manjoo left it out of the article, or it could be that Google is missing the boat. Here's my commentary:

Dear Farhad,

Either this article is missing vital information–or Google, while perhaps using research on unconscious biases, is completely failing to utilize research-based best practices in learning-to-performance design. Ask almost any thought leader in the training-and-development field and they'll tell you that training by itself is extremely unlikely to substantially change behavior without additional supports.

By the way, the anecdotes cited for the success of Google's 90-minute training program are not persuasive. It's easy to find some anecdotes that support one's claims. Scientists call this "confirmation bias."

Believe it or not, there is a burgeoning science around what successful learning-to-performance solutions look like. This article, unfortunately, encourages the false notion that training programs alone will be successful in producing behavior change.

As of today, the Learning Styles Challenge payout is rising from $1000 to $5000! That is, if any person or group creates a real-world learning intervention that takes learning styles into account–and proves that such an intervention produces better learning results than a non-learning-styles intervention, they’ll be awarded $5,000!

Special thanks to the new set of underwriters, each willing to put $1000 in jeopardy to help get the word out to the field:

Learning Styles Challenge Rules

We’re still using the original rules, as established back in 2006. Read them here.


What is Implied in This Debunking

The basic finding in the research is that learning interventions that take into account learning styles do no better than learning interventions that do not take learning styles into account. This does not mean that people do not have differences in the way they learn. It just means that designing with learning styles in mind is unlikely to produce benefits–and thus the extra costs are not likely to be a good investment.

Interestingly, there are learning differences that do matter! For example, if we really want to get benefits from individual differences, we should consider the knowledge and skill level of our learners.


What Can You Do to Spread the Word

Thanks to multiple efforts by many people over the years to lessen the irrational exuberance of the learning-styles proliferators, fewer and fewer folks in the learning field are falling prey to the learning-styles myth. But the work is not done yet. This issue still needs your help!

Here are some ideas for how you can help:

  • Spread the word through social media! Blogs, Twitter, LinkedIn, Facebook!
  • Share this information with your work colleagues, fellow students, etc.
  • Gently challenge those who proselytize learning styles.
  • Share the research cited below.


History of the Learning Styles Challenge

It has been exactly eight years since I wrote in a blog post:

I will give $1000 (US dollars) to the first person or group who can prove that taking learning styles into account in designing instruction can produce meaningful learning benefits.

Eight years is a long time. Since that time, over one billion babies have been born, 72 billion metric tons of carbon pollution have been produced, and the U.S. Congress has completely stopped functioning.

However, not once in these past eight years has any person or group collected on the Learning Styles challenge. Not once!


Research on Learning Styles

However, since 2006, more and more people have discovered that learning styles are unlikely to be an effective way to design instruction.

First, there was the stunning research review in the top-tier scientific journal, Psychological Science in the Public Interest:

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.

The authors wrote the following:

We conclude therefore, that at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number. However, given the lack of methodologically sound studies of learning styles, it would be an error to conclude that all possible versions of learning styles have been tested and found wanting; many have simply not been tested at all. (p. 105)

To read more about what they wrote, click here.

Two years later, two of the authors reiterated their findings in a separate–and nicely written–article for the Association for the Study of Medical Education. You can access that article at: http://uweb.cas.usf.edu/~drohrer/pdfs/Rohrer&Pashler2012MedEd.pdf. Here’s the research citation:

Rohrer, D., & Pashler, H. (2012). Learning styles: Where’s the evidence? Medical Education, 46(7), 634-635.

A researcher who had once advocated for learning styles did an about-face after conducting additional research:

Cook, D. A. (2012). Revisiting cognitive and learning styles in computer-assisted instruction: Not so useful after all. Academic Medicine, 87(6), 778-784.

Of course, not everyone is willing to give up on learning styles. For example, Furnham (2012) wrote:

The application of, and research into, learning styles and approaches is clearly alive and well. (p. 77)

Furnham, A. (2012). Learning styles and approaches to learning. In K. R. Harris, S. Graham, T. Urdan, S. Graham, J. M. Royer, & M. Zeidner (Eds.), APA handbooks in psychology. APA educational psychology handbook, Vol. 2. Individual differences and cultural and contextual factors (pp. 59-81). doi:10.1037/13274-003

A cursory look–today–through the PsycINFO database shows that articles on learning styles are still being published in scientific journals.


Learning Styles in the Workplace Learning Field

Guy Wallace, performance analyst and instructional architect, has been doing a great job keeping the workplace learning field up on the learning-styles debate. Check out his article in eLearn Magazine and his blog post update.

You’ll note from Guy’s blog post that many prominent thought leaders in the field have been suspicious of learning styles for many years.

The Edgar Dale Myth recently resurfaced in a TED Talk, suggesting that TED Talks–though they do a great job preparing speakers to be engaging–are not fact-checked.

Here is the offending TED Talk, with the erroneous information starting at 5:48.

http://www.youtube.com/watch?v=u8ecQDX1XOw

As chronicled on this blog since 2006, and on a precursor website since 2002 (12 years ago), the Edgar Dale Myth is pernicious, dangerous, and seemingly immortal. See the following link for the original post:

http://is.gd/EdgarDaleMyth

Special thanks to JC Kinnamon, PhD for pointing out the offending video to me. JC has been a long-time compiler of the Edgar Dale Myth, and I thank him for his continued efforts!

Crazy Thought

Perhaps there is a silver lining in the continued resurfacing of these myths. For years, I've seen these myths as completely detrimental to our field–something to squash with crusader-like zealotry.

But the speaker in the TED video got me thinking. Perhaps these myths have social value. Perhaps when a myth surfaces, we are receiving an important signal that the conveyor of the myth is a lightweight–that it is very likely that he or she (or they) really don't understand learning at a deep level.

One of the things I hope everybody in the learning field understands is that, first, there is good research and there is bad research; and second, that people who cite research fall into two categories. Some people seek the truth in the research and report what they find. Other people seek research to prop up their previously-held beliefs, to "demonstrate" the benefits of their products or services, or to cast themselves in an aura of credibility.

Beware of People Citing Research

Yes, it's complicated! Everybody suffers somewhat from unintentional confirmation bias. But caveat emptor: we as professionals have to be careful in assessing research claims. Science is built on skepticism. Professional practice should utilize a healthy dose of skepticism as well.

The speaker at the TED Talk showed that he couldn't be trusted to know learning at a deep level by his use of this phony Edgar Dale "research." And then he double-confirmed it by asking his audience to play word bingo. He gave everybody a bingo card with words on it. Then he had people keep track of the words he was saying. When someone got all the words in a row, column, or diagonal, he had them stand up and say BINGO, with applause from the audience. Anybody who knows learning knows that his word-bingo game distracted the audience from the main points of his message, hurting learning.

The use of the Edgar Dale myth by the speaker was an accurate portrayal of his lack of deep knowledge. He may have had some valuable things to say otherwise, but how could we be sure? Indications were not good.

Is there a silver lining in the recurrence of these myths? I've tried to convince myself in writing this that there might be some benefit, but the benefit comes only to those who know that the myths are myths! And, for the benefits to accrue, those of us who are out there trying to squash the myths must continue to proselytize and educate. Indeed, for our field to maximize its positive influence, each and every one of us must be hungry hunters of good research, skeptical assessors, and eager communicators.

So, send this blog post to those who are open to continuous improvement. SMILE

 

Here is the comment I sent to the NY Times in response to their focus on a supposed research study that purported to show that gifted kids are being underserved.

I'm a little over the top in my comments, but still I think this is worth printing because it demonstrates the need for good research savvy and it shows that even the most respected news organizations can make really poor research-assessment mistakes.

Egads!! Why is the New York Times giving so much "above-the-fold" visibility to a poorly-conceived research study funded by a conservative think tank with obvious biases?

Why isn't at least one of your contributors a research-savvy person who could comment on the soundness of the research? Instead, your contributors assume the research is sound.

Did you notice that the references in the original research report were not from top-tier refereed scientific journals?

In the original article from the Thomas Fordham Institute (a conservatively funded enterprise), the authors try to wash away criticisms about regression to the mean and test variability, but their response to these obvious–and most damaging, most valid–criticisms is not good enough.

If you took the top 10% of players in the baseball draft, the football draft, any company's onboarding class, or any randomly selected group of maple trees, a large percentage of the top performers would not be top performers a year or two later. Damn, ask any baseball scout whether picking out the best prospects is a sure thing. It's not!

And, in the cases I mentioned above, the measures are more objective than an educational test, which has much higher variability–which would cause even more top performers to leak out of the top ranks.

NY Times–you should be embarrassed to have published these responses to this non-study. Seriously, don't you have any research-savvy people left on your staff?

We have scientific journals because the research is vetted by experts.
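The regression-to-the-mean point above is easy to demonstrate with a small simulation. This is a minimal sketch under assumed numbers (a stable "true ability" plus equal-variance year-to-year noise); it is not modeled on the Fordham study's actual data:

```python
import random

random.seed(42)

N = 10_000           # population size
TOP_FRACTION = 0.10  # "top 10%" cutoff

# Model: each observed score = stable true ability + independent year-to-year noise.
abilities = [random.gauss(0, 1) for _ in range(N)]
year1 = [a + random.gauss(0, 1) for a in abilities]
year2 = [a + random.gauss(0, 1) for a in abilities]

k = int(N * TOP_FRACTION)
top_year1 = set(sorted(range(N), key=lambda i: year1[i], reverse=True)[:k])
top_year2 = set(sorted(range(N), key=lambda i: year2[i], reverse=True)[:k])

# What fraction of year-1 "top performers" are still in the top 10% a year later?
still_top = len(top_year1 & top_year2) / k
print(f"Still in top 10% the next year: {still_top:.0%}")
```

Because each year's observed score is only partly driven by stable ability, well under half of the year-one top 10% remain in the top 10% the next year in this setup–even though nobody's underlying ability changed at all. The noisier the measure (as with educational tests), the worse this leakage gets.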

Check out this article, which claims:

"Science of mind is one of the most important intellectual developments in the last half century. It should not be obscured by neurobabble."

This might be a follow-up post to an earlier one I wrote that showed how easily we are fooled by scientific claims.

The Neon Elephant Award

The Neon Elephant Award is awarded to a person, team, or organization exemplifying enlightenment, integrity, and innovation in the field of workplace learning and performance. Announced in December—during the time of year when the northern hemisphere turns away from darkness toward the light and hope of warmer days to come—the Neon Elephant Award honors those who have truly changed the way we think about the practice of learning and performance improvement. Award winners are selected for demonstrated success in pushing the field forward in significant paradigm-altering ways while maintaining the highest standards of ethics and professionalism. 

 

Why “Neon Elephant?”

The elephant represents learning, power, strength, and the importance of nurturing the community. The glow of neon represents enlightenment, illumination, and a spark of something unique and alluring.

 

Selection Methodology

The award is based purely on merit and the criteria detailed above. Proposals are not accepted, nor are any entrance fees solicited or accepted. While advice on the selection is sought from industry thought leaders, Dr. Will Thalheimer of Work-Learning Research is the final arbiter. Awards will only be made in years when exceptional contributions to the workplace learning and performance field are apparent.

 

Winners

The 2020 Neon Elephant Award, given this year to Mirjam Neelen and Paul Kirschner for writing the book, Evidence-Informed Learning Design: Use Evidence to Create Training Which Improves Performance—and for their many years publishing their blog 3-Star Learning Experiences.

The 2019 Neon Elephant Award, given this year to David Epstein for writing the book Range: Why Generalists Triumph in a Specialized World, and for his many years as a journalist and science-inspired truth teller.

The 2018 Neon Elephant Award, given this year to Clark Quinn for writing a book debunking the learning myths, Millennials, Goldfish & Other Training Misconceptions: Debunking Learning Myths and Superstitions—and for his many years advocating for research-based practices in the workplace learning field.

The 2017 Neon Elephant Award, given this year to Patti Shank for writing and publishing two research-to-practice books this year, Write and Organize for Deeper Learning and Practice and Feedback for Deeper Learning—and for her many years advocating for research-based practices in the workplace learning field.

The 2016 Neon Elephant Award, given this year to Pedro De Bruyckere, Paul A. Kirschner, and Casper D. Hulshof for their book Urban Myths about Learning and Education—a book that provides a research-based reality check on the myths and misinformation that float around the learning field.

The 2015 Neon Elephant Award, given this year to Julie Dirksen for her book, Design for How People Learn—a book that wonderfully conveys practical, research-based wisdom through the authentic voice of an experienced instructional designer and strategist.

The 2014 Neon Elephant Award, given this year to Peter C. Brown, Henry L. Roediger III, and Mark A. McDaniel for their book, Make it Stick: The Science of Successful Learning—a book that brilliantly conveys scientific principles of learning in prose that is easy to digest, comprehensive and true in its recommendations, highly-credible, and impossible to ignore or forget.

The 2013 Neon Elephant Award, given this year to Gary Klein for his many years doing research and practice in naturalistic decision making, cognitive task analysis, and insight learning–and for reminding us that real-world explorations of human behavior are essential in enabling us to distill key insights.

The 2012 Neon Elephant Award, given this year to K. Anders Ericsson for his many years conducting research on expertise and creating a body of knowledge that has inspired many others to translate his research into recommendations for use by performance-improvement professionals.

The 2011 Neon Elephant Award, given this year to Jeroen van Merriënboer for his many years conducting research on learning and translating that research into practical models for use by learning professionals.

The 2010 Neon Elephant Award was awarded to Richard E. Clark for his many years in leading the workplace learning-and-performance field by bridging the gap between academic research and practical application.

The 2009 Neon Elephant Award was awarded to Ruth Clark for her many years in leading the workplace learning-and-performance field with research-based insights and recommendations, and—by so doing—helping to professionalize our field.

The 2008 Neon Elephant Award was awarded to Robert Brinkerhoff for developing the Success Case evaluation method and for advocating that learning professionals play a more “courageous” role in their organizations.

The 2007 Neon Elephant Award was awarded to Sharon Shrock and Bill Coscarelli for advocating against the use of memorization-level questions in learning measurement and for the use of authentic assessment items, including scenario-based questions, simulations, and real-world skills tests.

The 2006 Neon Elephant Award was awarded to Cal Wick of the Fort Hill Company for his work developing methodologies and software to support learning transfer.

Transfer of Training Estimates

Have you heard or seen any of the following statements? And do you believe them?

“Only 10% of training transfers to the job.”

“Only 10% of the investment in training
actually transfers to the job.”

“Although $100 billion is spent on
training each year, only 10%
of these expenditures
result in transfer to the job.”

Similar citations have been used in both the research and practitioner communities. But is any of this information accurate? Robert Fitzpatrick’s cautionary tale about how an innocent rhetorical question became twisted into established fact should make all of us skeptical enough to question the veracity of the claims that we encounter.

The following article is reprinted with permission from the TIP Newsletter of the Society for Industrial and Organizational Psychology (SIOP), a well-respected organization of Industrial/Organizational Psychologists. To learn more about SIOP, visit their website. To learn about the TIP Newsletter, a non-refereed publication of SIOP, visit http://my.siop.org/tipdefault.


The Strange Case of the Transfer of Training Estimate

Robert Fitzpatrick
Cranberry Township, Pennsylvania

Some time ago, a learning systems product development manager named David L. Georgenson set about to write an article on transfer of training, with emphasis on ways in which it might best be nurtured in organizations. To introduce his discussion, Georgenson hit upon the idea of asking a rhetorical question, thus: “How many times have you heard training directors say: ‘I…would estimate that only 10% of content which is presented in the classroom is reflected in behavioral change on the job’” (Georgenson, 1982, p.75). Georgenson had no need to, and did not, cite any evidence or authority for the 10% estimate; it is clear that he had used a rhetorical device to catch the reader’s attention. The estimate may or may not be accurate; it seems plausible but not compellingly so.

Georgenson’s article contains nothing about the dollar cost of training. There is no reason that Georgenson should have dealt with cost, and he did not.

In time, other authors wanted to write on transfer and to find some introductory way to convince the reader that transfer is indeed a problem worth writing about. And so were spawned a number of articles and books which used the estimate of Georgenson’s fictive training directors. Here are some examples in which Georgenson (1982) was specifically cited as the source:

“It is also estimated that only 10% of the dollars spent on training results in actual behavioral change back on trainees’ jobs” (Wexley & Baldwin, 1986, p. 503).

“It is estimated that while American industries annually spend up to $100 billion on training and development, not more than 10% of these expenditures actually result in transfer to the job” (Baldwin & Ford, 1988, p. 63).

“Less than 10% of [estimated expenditures on staff development] may produce behavioral changes on the job” (Alavi, 1994, p. 160).

“Georgenson (1981) [sic] estimated that not more than 10% of the $100 billion spent by industry actually made a difference to what happens in the workplace!” (Dickson & Bamford, 1995, p. 91)

“…given the finding that only 10% of training expenditures have been shown to result in behavioral changes back on the job” (Facteau, Dobbins, Russell, Ladd, & Kudisch, 1995, p. 2).

And there is more. Some writers on transfer did not cite Georgenson but did cite others who cited Georgenson. For instance:

“A recent comprehensive survey of research and literature by Timothy Baldwin and Kevin Ford found the following:…It is estimated that while American industries annually spend up to $100 billion on training and development, not more than 10% of these expenditures actually result in transfer to the job…(1988, p. 63).” (Broad & Newstrom, 1992, p. 7)

“Timothy Baldwin and Kevin Ford (1988, p. 63) report: ‘Not more than 10% of these expenditures [on training] actually result in transfer to the job.’” (Robinson & Robinson, 1995, p. 3) See also Fitzpatrick (1996).

In most of these examples, the 10% figure is accurately identified as an estimate, though words such as “finding” and “report” do appear. But almost all refer to $100 billion, though Georgenson’s imaginary training directors said nothing about expenditures. If they had, one supposes they would have made some adjustment for inflation over the years.

All the writings cited here so far come from 1996 or earlier. I found most of them through the Social Science Citation Index. Soon, the search process became burdensome and the returns seemed to be diminishing. I put the information aside, with the thought that, like the black plague of long ago, the epidemic of transfer estimates had run its course.

But recently I caught up with the April 2001 issue of the Journal of Applied Psychology. There, in the introductory paragraph of an otherwise enlightening article, it says: “U.S. businesses spend upwards of $100 billion annually on formal and informal training activities (Georgenson, 1982). However, it is estimated that only 10% of these training expenditures result in transfer of training to the job (Georgenson, 1982).” (Smith-Jentsch, Salas, & Brannick, 2001, p.279)

So the plague is back. Or perhaps it never went away. Some will say it doesn’t matter. It’s only introductory fluff, not centrally germane to the main thrust of the topic which it introduces.

But others may argue that it does matter. If we can’t trust the introductory citations, how can we then accept the more weighty citations and ideas which follow? And sometimes the introductory matter is important in itself; if Georgenson had said that only 90% of what is taught is transferred to the job, isn’t it less likely that we would have read his article (or funded his study of transfer) in the first place?

References

Alavi, M. (1994). Computer-mediated collaborative learning: An empirical evaluation. MIS Quarterly, 18, 159–174.

Baldwin, T. T., & Ford, J. K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41, 63–105.

Broad, M. L., & Newstrom, J. W. (1992). Transfer of training: Action-packed strategies to ensure high payoff from training investments. Reading, MA: Addison-Wesley.

Dickson, D., & Bamford, D. (1995). Improving the interpersonal skills of social work students: The problem of transfer of training and what to do about it. British Journal of Social Work, 25, 85–105.

Facteau, J. D., Dobbins, G. H., Russell, J. E. A., Ladd, R. T., & Kudisch, J. D. (1995). The influence of general perceptions of the training environment on pretraining motivation and perceived training transfer. Journal of Management, 21, 1–25.

Fitzpatrick, R. (1996). [Review of the book Performance consulting: Moving beyond training]. Personnel Psychology, 49, 188–191.

Georgenson, D. L. (1982). The problem of transfer calls for partnership. Training and Development Journal, 36(10), 75–78.

Robinson, D. G., & Robinson, J. C. (1995). Performance consulting: Moving beyond training. San Francisco: Berrett-Koehler.

Smith-Jentsch, K. A., Salas, E., & Brannick, M. T. (2001). To transfer or not to transfer? Investigating the combined effects of trainee characteristics, team leader support, and team climate. Journal of Applied Psychology, 86, 279–292.

Wexley, K. N., & Baldwin, T. T. (1986). Posttraining strategies for facilitating positive transfer: An empirical exploration. Academy of Management Journal, 29, 503–520.

End of article by Robert Fitzpatrick


To see the original article online, go to: http://www.siop.org/tip/backissues/TipOct01/pdf%20tip/392_018to019.pdf


…Will Thalheimer continues the discussion below…

Implications for the Learning-and-Performance World

Obviously, we should ignore the 10% number. More broadly, we need to be sure we’re not basing our instructional-design and performance-improvement decisions on faulty information. There’s lots of it out there. We need to find information we can trust and we need to develop ways to check the validity of the claims that we encounter.

 

The Association of Psychological Science commissioned a review of the evidence for the benefits of using learning styles, and the report is clear.

We conclude therefore, that at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number. However, given the lack of methodologically sound studies of learning styles, it would be an error to conclude that all possible versions of learning styles have been tested and found wanting; many have simply not been tested at all. (p. 105)

Research Citation:

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R.
(2008).
Learning styles: Concepts and evidence.
Psychological Science in the Public Interest, 9, 105-119.

You can access the article by clicking here.

You can access Richard Mayer’s nice intro to the article—which stresses the benefits of research—by clicking here.

My $1,000 Learning Styles Challenge

Three and a half years ago I offered $1,000 to any person or group who could demonstrate the benefits of learning styles in a real-world practical training program. No one has collected the money yet.

Here was the challenge:

Can an e-learning program that utilizes learning-style information
outperform an e-learning program that doesn’t utilize such information
by 10% or more on a realistic test of learning, even if it is allowed to
cost up to twice as much to build?

You can access my original Learning Styles Challenge by clicking here.

You can access my three-year update on the challenge by clicking here.

Final Nail in the Coffin of Learning Styles?

Is this excellent research review by some of the most highly-respected researchers in the learning-research field a final nail in the coffin of learning styles?

Well, as a researcher I must always maintain openness to new information. Perhaps someday more research will demonstrate some specific benefits to learning styles. As the authors of the review say themselves:

Although we have argued that the extant data do not provide support for the learning-styles hypothesis, it should be emphasized that we do not claim that the same kind of instruction is most useful in all contexts and with all learners. An obvious point is that the optimal instructional method is likely to vary across disciplines. For instance, the optimal curriculum for a writing course probably includes a heavy verbal emphasis, whereas the most efficient and effective method of teaching geometry obviously requires visual–spatial materials. Of course, identifying the optimal approach for each discipline is an empirical question, and we espouse research using strong research methods to identify the optimal approach for each kind of subject matter.

Furthermore, it is undoubtedly the case that a particular student will sometimes benefit from having a particular kind of course content presented in one way versus another. One suspects that educators’ attraction to the idea of learning styles partly reflects their (correctly) noticing how often one student may achieve enlightenment from an approach that seems useless for another student. There is, however, a great gap from such heterogeneous responses to instructional manipulations—whose reality we do not dispute—to the notion that presently available taxonomies of student types offer any valid help in deciding what kind of instruction to offer each individual. Perhaps future research may demonstrate such linkages, but at present, we find no evidence for it. (p. 116)

As a consultant in the workplace learning-and-performance field, I would likely do my clients harm if I advised the use of a learning-styles learning design. I will continue to advise clients against designing their learning based on learning styles. At the same time, I will encourage them to be watchful for specific learning needs of individual learners. For example, when a learner is confused, he or she probably needs feedback and guidance.

I recommend that you read the article and Mayer’s introduction. Both provide wisdom about how to think about research and how to avoid being fooled.

Article Note: The date in the article and in the PsycINFO database says the article is from 2008. However, the copyright is from 2009, the article includes citations from 2009, the article appears as the “current article” on the APS (Association for Psychological Science) website, and news reports just started surfacing in December 2009 and January 2010. The evidence suggests the article only recently came out.

UPDATE 2014: I’ve been joined by others. Reward is up to $5,000. Click here to see the latest challenge.

=======================

Original Post 2009:

It has been over three years since I offered $1,000 to anyone who could demonstrate that utilizing learning styles improved learning outcomes. Click here for the original challenge.

So far, no one has even come close.

Given all the talk about learning styles over the last 15 years, you might have expected that I was at risk of quickly losing my money.

Let me be clear: my argument is not that people don’t have different learning styles, learning preferences, or learning skills. My argument is that for real-world instructional-development situations, designing for learning styles is an ineffective and inefficient use of resources that is unlikely to produce meaningful results.

Let me leave you with the original challenge:

“Can an e-learning program that utilizes learning-style information
outperform an e-learning program that doesn’t utilize such information
by 10% or more on a realistic test of learning, even if it is allowed to
cost up to twice as much to build?”

The challenge is still on.

I've been gathering a list of Myths that the Business Side Has about Learning.

I reached out to my clients, to groups on LinkedIn, and to my Brown Bag Learning participants. I also reviewed some books, including Stolovitch and Keeps' "Telling Ain't Training," Doyle's "The Manager's Pocket Guide to Training," and Bell's "Managers as Mentors." And I drew on my own recollections from over a decade of work and research on learning.

I compiled a list of about 140 myths and then used a card-sort methodology to separate them into categories.

Here are the results:

Everybody Holds Myths

First, it became clear that the Business Side isn't the only group that holds myths. Learners and we as learning professionals have our own sets of myths. We can't demonize the Business Side. We have to go out of our way to understand and work with the business side to craft workable, effective solutions for our organizations and for all the people affected.

Let me say that sometimes I rather regret that a distinction has to be made between us as learning professionals and them as the business side. There's something wrong with that distinction (we are IN the business, aren't we?), yet the dichotomy makes some sense since we support others who do the actual work of the business.

The Most Popular Myths
(that the Business Side Has about Learning, according to Learning Professionals)

These are in order from my card-sorting categorization effort. The most-often cited are listed first.

  1. Bad Learning Designs are Thought to be Good Learning Designs (big list below).
  2. Training Alone Produces Improvements in On-the-job Performance.
  3. Information Presentation is Sufficient as a Training Design.
  4. Training & Instructional Design Require No Special Skills or Competencies.
  5. Learners Know How to Learn.
  6. Managers Think Learning & Development is a Low-Priority Part of their Role.

Other High-Importance Categories

  • On-the-Job Learning is Forgotten or NOT Utilized or NOT Supported.
  • It’s a Training Issue (a conclusion drawn before considering alternative causes).
  • Formal Training has Little Impact.
  • Experienced Workers Don’t Need Training.
  • Development of Learning Interventions is Easy and Can be Shortened or Short-Changed.
  • Measurement of Learning. Miscellaneous Issues thereof.
  • Technology is Key to Learning Success.

Will's and Other Additions

  • Learning Designs Don’t Need to specifically Minimize Forgetting (Enable Remembering).
  • Content Doesn’t Need Validation.
  • Particular Behaviors are Easy to Learn (e.g., It's easy to do customer service).
  • Learning is Always Beneficial. It is Never Disruptive or Distracting. It Never Misinforms.
  • Opportunity Costs of Learning Can be Ignored.
  • We Don’t Have to Measure Learning.
  • We Have to Measure ROI.
  • We can Avoid Measuring Retrieval.

Short List of the Bad Learning Designs that the Business Side (and others I might add) Think Are Good Learning Designs

It is good to have new employees take all their elearning
    courses right away before starting work.
  • Employees ONLY learn by doing.
  • Reading is always bad, boring, and ineffective.
  • Training can be just as effective if we make it as short as
    possible.
  • Training doesn’t need pre-work or post-work.
  • A large library of courses or books is the way to go.
  • Employees need to know everything.
  • We should and CAN cater to learning styles.
  • Latest management craze (provide book to everyone).
  • Six-hour online courses are fine.
  • Some learning media are inherently better than other
    learning media.
  • Best value in training is a 10 to 12 hour day.
  • More information = More learning.
  • People remember 10% of what they read, 20% of what they see…
Most communication is by body language (supposedly 55% body
    language and 38% tone of voice, with only a small fraction carried by the
    actual words; a misreading of Mehrabian's studies).
  • We need more exciting visual decorations to grab attention.
  • Immediate feedback is always best.
  • Etc.


The MOST IMPORTANT QUESTION:  What do we do?

The first thing to do is to demonize everyone and give ourselves kudos for our wisdom,  conscientiousness, and whimsical charm.

No.

The first thing to do is to take responsibility. Just as a speaker must take responsibility to ensure that his or her listeners are understanding the intended message (even though much is out of the speaker's control), we must take responsibility for ensuring that our business stakeholders (1) understand learning at a deep level, (2) understand how they can ensure that training is applied successfully on the job, and (3) understand how they can create a work-learning environment that supports employees in learning on their own, from each other, and from their managers.

I started gathering these myths to build a course for a client (a very large company) aimed at improving work-learning at their company from top to bottom, including formal and on-the-job learning.

Will this be easy? No. Someone at today's Brown Bag Webinosh asked, "Haven't we been trying to bust these myths for decades?" Great question, and it speaks to the difficulty of the task. Many of us have been trying for decades to make changes, but many of us are also just doing our little part as order takers: we build learning interventions when asked. So the bottom line is that I think we could try harder. That's the first thing.

We need to try smarter as well. I've learned over the years, when I've tried to communicate complicated research-based information, that it is critical to find just the right metaphor, just the right visual model, just the right explanation that is both simple and robust to get the job done.

Maybe human learning and performance is just too complicated to enable this, but I think it's worth a try to build some better metaphors, models, and explanations.

We also need to continue offering research, real-world examples, and valid evaluation results as evidence. We need to understand our business partners and their mental models, and build our case within their frameworks so they get what we're saying. And we need to build our stakeholder-education and stakeholder-understanding efforts into our training-development process.

Reaching Out

If your company has created a learning intervention to help your business managers better understand learning and their role in it, I'd love to learn more. Contact me.

If your company would like to utilize or co-develop such a learning intervention, feel free to contact me now.

Complete Lists of Myths That the Business Side Has About Learning
(according to Learning Professionals)
(Note that these are offered "as is" with typos, etc.)

  1. "learning" is the accountability of the Training
    or Development Department or staff, rather than a leadership responsibility
  2. 1 and done – one class and they'll know everything
  3. 1 or 2 day management training seminar can turn an
    ineffective manager in to a high performing one.
  4. A best practice is to "get all the PPLs out of the
    way"
  5. A business gives a metrics pass to the learning group
    because “that stuff can’t be measured” and is then puzzled.
  6. A learning buffet (large library of courses) is the way to
    go
  7. A learning group is not integrated with those responsible
    for performance support.
  8. a test is need to prove the learners
    ""know"" it
  9. Any set of questions will do. There is no need to check to
    see which ones are good measures and which are not.
  10. Anyone can train someone else therefore anyone can create a
    training course.
  11. Asking a performance leader (someone good at their job) to
    deliver on the job training should not diminish that performer's output
  12. Bad Learning Designs are Thought to be Good Learning Designs
    (big list below).
  13. best value in training is a day 10 or 12 hours long.
  14. Build it and it will run: it's vital to get IT involved
  15. Build it before or without any needs assessment.
  16. Butts in seats is all that matters
  17. Content Doesn’t Need Validation.
  18. Context doesn't matter; just teach everyone the right steps
    to a task
  19. Courses without organizational support are okay.
  20. customer service is easy to teach.
  21. Delivering or presenting instructional content (via ILT or
    online courseware) is sufficient to elicit improved performance in the
    workplace.
  22. Different media create different learning results.
  23. Don't bother with objectives; just present the content.
  24. E-learning development is fast
  25. e-Learning isn't learning.
  26. E-Learning takes 1/3 the time of classroom instruction, so
    it should only cost 1/3 as much to create
  27. electronic learning is just as effective as in person
    learning
  28. Employees can't manage their own learning successfully
  29. employees need to know everything.
  30. Employees only learn by doing.
  31. everyone learns the same way (often the way that the manager
    best learns
  32. Everyone learns the same way, so only one style of learning
    is required..
  33. Experienced Workers Don’t Need Training.
  34. Facilitators can develop great courses
  35. Formal (scheduled, structured, SME-created) learning
    interventions are the best means of conveying knowledge and skills to our
    workforce
  36. Formal Training has Little Impact.
  37. Getting certified by taking a training class alone
  38. Hands-on training is okay if it just enables
    situation-actions
  39. Help mgmt solve problem, not just do workshop
  40. I already know it so I don't need to go to training.
  41. I attended a training class so I don't need to practice it.
  42. I attended a training class so I must know how to do it.
  43. I don't have to take part.
  44. I don't need to go through training, I just need my people
    to
  45. I know everyone had different learning styles, but I learn
    hands on.
  46. I left them a to-do list–they should follow it. No follow
    up required.
  47. I need new folks to start immediately. No time for training.
  48. I should see immediate results on my bottom line the first
    day after training
  49. idea sharing is a good form of learning
  50. If ""they"" can do it,
    ""they"" can train it.
  51. If I tell all of my people what to do in a meeting, they'll
    do it and won't need reminders or additional training
  52. If someone doesn't know how to do something I will just do
    it myself because it's faster than teaching
  53. If someone is trained on something they will be able to
    easily figure out how to apply it to their current job without any guidance
  54. 'if we build it they will come
  55. I'll figure it out on my own so therefore I don't need to go
    to training.
  56. I'm a Director/VP so I don't need to go.
  57. I'm a visual learner – I can only understand it if I see it.
  58. in hard economic times it makes sense to cut training.
  59. Information makes for learning
  60. Information Presentation is Sufficient as a Training Design.
  61. Interactive eLearning is only for Gen X or younger. Older
    folks won't get it.
  62. It has to be interactive
  63. IT training still needs vaildation if the training is
    presented from a task point of view. Must ensure that the steps taught are the
    steps needed to complete the task.
  64. It’s a Training Issue.
  65. Its a training issue
  66. It's better if I just have someone show them how to do it.
  67. It's easy for people to change if you train them right
  68. It's okay for the training function to be order takers.
  69. I've been promoted so I don't have to go to training.
  70. Just send me the handouts/training materials and I'll figure
    it out.
  71. Lack of cultural sensitivity for global audiences
  72. Lack of performance results mostly from lack of skills or
    knowledge.
  73. latest management book or craze (providing book to everyone)
  74. Learners have misconception that they don't have
    responsibility to go beyond listening.
  75. Learners Know How to Learn.
  76. Learners know what they need
  77. Learning Designs Don’t Need to specifically Minimize
    Forgetting (Enable Remembering).
  78. Learning Development is Easy and Can be Shortened or
    Short-Changed.
  79. Learning does not happen outside the classroom
  80. learning is a luxury. 
    We hired smart people.  Just work.
  81. Learning is Always Beneficial. It is Never Disruptive or
    Distracting. It Never Misinforms.
  82. Learning P's. don't understand that learning happens on the
    job.
  83. Learning should not take a lot of time away from work.  And people should be able to do self-study
    for almost everything
  84. Learning/Training is the responsibility of other departments
    — NOT the responsibility of the managers.
  85. Let's give them a book or seminar on the topic and they'll
    be all better.
  86. Live virtual programs (LVC) are most effective when they are
    recorded without an audience and made available for playback
  87. Managers think it's more valuable to create multiple SMEs as
    opposed to structured learning.
  88. Managers Think Learning & Development is a Low-Priority
    Part of their Role.
  89. Measurement of Learning Misc. Issues.
  90. Money not available
  91. More information provided, more learning.
  92. more/better training will solve the problem
  93. Most communication is by body language (55%) and tone of
    voice (37%) rather than choice of words (7%). [This is a bastardization of
    Mehrabian's studies.]
  94. My reports went through e-learning. I don't need to do more.
  95. My time is valuable, I don't have time to take a training
    class.
  96. need a class [to practice the stuff]; I already read it
  97. Non-business people shouldn’t be involved in business
    decision making
  98. Not just test scores!
  99. On the job training happens without structure or reward or
    cost
  100. one size fits all" approach
  101. Only paper and pencil tests (i.e., multiple
    choice/true-false) are adequate for regulatory purposes to prove that the
    learner has mastered the content.
  102. On-the-Job Learning is Forgotten or NOT Utilized or NOT
    Supported.
  103. Opportunity Costs of Learning Can be Ignored.
  104. Other High-Importance Categories
  105. Particular Behaviors are Easy to Learn.
  106. People can learn how to use software from a cheat sheet.
  107. people can learn without being made self-aware about their
    own level of competence.
  108. people know "how" to learn
  109. People's overall learning doesn't matter, I just want them
    to do the task right
  110. Performers should be assessed immediately after they have
    received the content from an instructor or from a courseware program.
  111. PowerPoint with narration is good enough.
  112. PPL completion rate is the way to measure quality of
    training.
  113. PPLs and in-store activities are useless – we need to do
    hands-on training "instead".
  114. presentation = training
  115. Pyramid.
  116. Quantify and communicate the value
  117. Reading is always bad, boring, ineffective.
  118. Regulatory and credentialling agencies create good tests.
  119. Reports generated by a Learning Management System (LMS) are
    sufficient for monitoring the learning-to-assessment-to-performance continuum
    in our workplace.
  120. Role plays are a waste of my time.
  121. Seen IT buy ""learning"" w/o consultitng
    HR or Training dept
  122. Six-hour online courses are just fine. i.e. no
    acknowledgement of information overload erasing what is learned.
  123. SME's are the best trainers, and Trainers are always the SME's":
    Pulling an SME to deliver training just because they know the most isn't always
    the most effective approach.
  124. SME's or developers make the best (or even competent)
    trainers.
  125. So often what is perceived by mgmt as good training is
    attributed to the skills of a good presenter, not to training design.
  126. successful performance during training usually results in
    improved otj performance.
  127. Technology is Key to Learning Success.
  128. Technology is key to learning success.
  129. Tell me what I need to know and that's enough.
  130. tell once, people know it.
  131. tell them and they'll do it.
  132. Telling is all we need to do."
  133. Telling somebody once means they will remember it AND apply
    it to their work.
  134. That "presentation" = "training".
  135. That stakeholders will see imediate results (i.e. less than
    1 year).
  136. The best way to design is to use the "present and test
    method"
  137. The biggest myth is that training alone will change people's
    behaviors.
  138. The business believes that they can put an employee through
    training (be it live, web-based, etc.) and magically they will automatically
    put the skills into place
  139. the course alone will solve the problem
  140. the HR as a service provider model gives problems as your
    'client' is your customer – and the customer is always right
  141. The more slides, the better (death by PowerPoint)
  142. The only way to learn is on-the-job-training; spending money
    on training programs is a waste
  143. The skills of instructional designers and educators are
    pretty shallow and their key abilities are primarily related to instructional
    technology.
  144. the training department can't help – they don't know our
    side of the business
  145. the training is bad
  146. there are learning styles
  147. There are way too many PPLs… but we need a PPL on
    _____________.
  148. There is no special knowledge needed to teach, design, or
    organize training
  149. They can learn all they need to know in (pick arbitrary unit
    of time)
  150. they don't realize the importance of reinforcement, repeat
    sessions, follow up
  151. They have a college degree so they already know it.
  152. They need a course in order to learn
  153. Think in-person learning is more effective than online
  154. too busy
  155. Training & Instructional Design Require No Special
    Skills or Competencies.
  156. Training Alone Produces Improvements in On-the-job
    Performance.
  157. Training can be just as effective if we make it as short as
    possible (one day instead of three days)
  158. training course will solve the problem.
  159. Training determines job content and tasks, not the
    supervisor or work center.
  160. training doesn't need follow-up
  161. Training doesn't need pre-work or post-work
  162. Training done to replace what managers should be doing
  163. training fixes everything
  164. Training is a cheap-quick-easy solution to a problem with my
    people
  165. Training is common sense.
  166. Training is the responsibility of the organization that
    sponsors it and the trainer who delivers it.
  167. Training is time consuming and does not produce results
  168. Training isn't very important in my responsibilities.
  169. Training Just Happens
  170. Training takes too long.
  171. Training will automatically change behavior on the job
  172. Training willing workers creates willing and able workers.
  173. Training/teaching/telling = learning
  174. Trainings are luxury and sometimes seen as a cookie for the
    staff at a time no one really need it. Let them have some legal fun
  175. Try again, make sure you use the Access Code that is showing
    and follow by a # sign.
  176. verbal responses (for example to customers) are easy.
  177. We can Avoid Measuring Retrieval.
  178. we can send them an email.
  179. We can train people to do anything…
  180. we can train people to instantly recall anything.
  181. We can use common sense to guide training design.
  182. We can't bring in outside help – our industry is too
    specialized and our needs are too unique.
  183. We Don’t Have to Measure Learning.
  184. We don't have to look at the performance situation.
  185. We don't have to validate our content.
  186. We don't need to learn! We just need to prove we meet the
    regulation.
  187. We don't need to practice. Just tell them.
  188. We have no time allocated for training in our budget so it
    doesn't happen (mgr may not realize that a lot of training happens on the job –
    not only as a formal process where the employee sits at the computer).
  189. We Have to Measure ROI.
  190. We only hire people who know what they are doing, they don't
    need to learn anything, and if they do, they'll pick it up on the job
  191. we should automatically assume that an SME is ipso facto
    'the best trainer'.
  192. We should/can cater to learning styles.
  193. When things are not going well it is clearly a lack of
    skills and knowledge – so TRAIN them
  194. Why explain to all levels of employees how the organisation
    works, how the departments relate to one another, etc
  195. Why would I train my employees if they are already doing it?
  196. Why would I want to train my employees in specific
    sub-skills
  197. You can develop a perfect course without SMEs.
  198. you can fix anything with enough training.
  199. You can’t teach people relationship skills (either they have
    them or they don’t)
  200. You don't need objectives, any one can write training.
  201. You either have the ability to learn or not.
  202. You have competence or not, then you learn it on the job.
  203. You need to use a technology to train people properly

Ideas Participants in My Brown-Bag Learning Event Offered on What We as Learning Professionals Ought to Do about the Myth Problem
(Note that these are offered "as is" with typos, etc.)

  • Our responsibility – gently guide. Present the right solution when asked for the wrong one
  • Give examples of whether X type of intervention has been successful
  • Offer performance solutions: this is what we can do (beyond training)
  • Bring out the research to dispel the myths
  • Develop solid business acumen and work, plan, collaborate from there
  • to educate clients
  • We need to discuss the learning models and theories that we support when appropriate
  • We should be advocates for learners
  • We should questions their thinking, ask for evidence
  • Provide real evidence of success.
  • educate, communicate, inform
  • We have to walk a fine line between sticking to the ""truths"" we know, yet dealing tactfully with management's myths.
  • myth busters
  • Don't be an order taker
  • I have found that the RIGHT manager can make a difference. Sometimes change can come from within, by working to influence a middle manager.
  • SHOW OUR VALUE
  • Have proof/case studies of effects of good design and guidance.
  • Don't wait to be invited to clarify them. Anticipate the reality and invite yourself to the table.
  • Sell our clients on our skills and recommendations. It keeps coming down to convincing management about the value of what we have to offer.
  • As learning professionals we need to promote the effort to focus on what is needed to improve performance.
  • To have a clear focus and mission for learning in our organizations, and to be able to communicate clearly, with supporting information.
  • Dealing with these myths is our reality and part of scoping a project and defining target and objectives realistically… all the time…