
A recent research review (by Paul L. Morgan, George Farkas, and Steve Maczuga) finds that teacher-directed mathematics instruction in first grade is superior to other methods for students with “math difficulties.” Specifically, routine practice and drill was more effective than the use of manipulatives, calculators, music, or movement for students with math difficulties.

For students without math difficulties, teacher-directed and student-centered approaches performed about the same.

In the words of the researchers:

In sum, teacher-directed activities were associated with greater achievement by both MD and non-MD students, and student-centered activities were associated with greater achievement only by non-MD students. Activities emphasizing manipulatives/calculators or movement/music to learn mathematics had no observed positive association with mathematics achievement.

For students without MD, more frequent use of either teacher-directed or student-centered instructional practices was associated with achievement gains. In contrast, more frequent use of manipulatives/calculator or movement/music activities was not associated with significant gains for any of the groups.

Interestingly, classes with higher proportions of students with math difficulties were actually less likely to be taught with teacher-directed methods — the very methods that would be most helpful!

Will’s Reflection (for both Education and Training)

These findings fit in with a substantial body of research that shows that learners who are novices in a topic area will benefit most from highly-directed instructional activities. They will NOT benefit from discovery learning, problem-based learning, and similar non-directive learning events.

See for example:

  • Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75-86.
  • Mayer, R. E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? American Psychologist, 59(1), 14-19.

As a research translator, I look for ways to make complicated research findings usable for practitioners. One model that seems to be helpful is to divide learning activities into two phases:

  1. Early in Learning (When learners are new to a topic, or the topic is very complex)
    The goal here is to help the learners UNDERSTAND the content. Here we provide lots of learning support, including repetitions, useful metaphors, worked examples, and immediate feedback.
  2. Later in Learning (When learners are experienced with a topic, or when the topic is simple)
    The goal here is to help the learners REMEMBER the content or DEEPEN their learning. To support remembering, we provide lots of retrieval practice, preferably set in realistic situations the learners will likely encounter, where they can use what they learned. We provide delayed feedback. We space repetitions over time, varying the background context while keeping the learning nugget the same (see the sketch below this list). To deepen learning, we engage contingencies, we enable learners to explore the topic space on their own, and we add additional knowledge.
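
To make the spacing and retrieval-practice recommendation more concrete, here is a minimal sketch in Python of one way an expanding review schedule could be laid out. It is purely illustrative: the function name, the gap lengths, and the practice scenarios are assumptions chosen for the example, not something taken from the research or the model described above.

```python
# Illustrative sketch only: spaced retrieval practice with expanding gaps,
# varying the surrounding scenario while keeping the "learning nugget" constant.
from datetime import date, timedelta
from itertools import cycle

def spaced_retrieval_schedule(start, gaps_in_days, scenarios, nugget):
    """Return (date, scenario, nugget) tuples, one per retrieval-practice event."""
    schedule = []
    day = start
    scenario_cycle = cycle(scenarios)   # vary the background context each time
    for gap in gaps_in_days:
        day += timedelta(days=gap)      # expanding gaps spread practice over time
        schedule.append((day, next(scenario_cycle), nugget))
    return schedule

# Hypothetical example: the same learning nugget practiced in varied work contexts.
for when, scenario, nugget in spaced_retrieval_schedule(
        start=date(2024, 1, 8),
        gaps_in_days=[2, 5, 12, 30],
        scenarios=["customer call", "email escalation", "team meeting"],
        nugget="choose the right escalation path"):
    print(when, "|", scenario, "|", nugget)
```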

What Elementary Mathematics Teachers Should Stop Doing

Elementary-school teachers should stop assuming that drill-and-practice is counterproductive. They should create lesson plans that guide their learners in understanding the concepts to be learned. They should limit the use of manipulatives, calculators, music, and movement. Ideas about “arts integration” should be pushed to the back burner. This doesn’t mean that teachers should NEVER use these other methods, but they should be used to create occasional, short, and rare moments of variety. Spending hours using manipulatives, for example, is certainly harmful in comparison with more teacher-directed activities.

 

A few years ago, I created a simple model for training effectiveness based on the scientific research on learning in conjunction with some practical considerations (to make the model’s recommendations leverageable for learning professionals). People keep asking me about the model, so I’m going to briefly describe it here. If you want to look at my original YouTube video about the model — which goes into more depth — you can view that here. You can also see me in my bald phase.

The Training Maximizers Model includes 7 requirements for ensuring our training or teaching will achieve maximum results.

  • A. Valid Credible Content
  • B. Engaging Learning Events
  • C. Support for Basic Understanding
  • D. Support for Decision-Making Competence
  • E. Support for Long-Term Remembering
  • F. Support for Application of Learning
  • G. Support for Perseverance in Learning

Here’s a graphic depiction:

 

Most training today is pretty good at A, B, and C but fails to provide the other supports that learning requires. This is a MAJOR PROBLEM because learners who can’t make decisions (D), learners who can’t remember what they’ve learned (E), learners who can’t apply what they’ve learned (F), and learners who can’t persevere in their own learning (G) are learners who simply haven’t received leverageable benefits.

When we train or teach only to A, B, and C, we aren’t really helping our learners, we aren’t providing a return on the learning investment, and we haven’t done enough to support our learners’ future performance.

 

 

The Danger

Have you ever seen the following “research” presented to demonstrate some truth about human learning?

Unfortunately, all of the above diagrams are evangelizing misleading information. Worse, these fabrications have been rampant over the last two or three decades—and seem to have accelerated during the age of the internet. Indeed, a Google image search for “Dale’s Cone” produces about 80% misleading information, as you can see below from a recent search.

Search 2015:

 

Search 2017:

 

This proliferation is a truly dangerous and heinous result of incompetence, deceit, confirmation bias, greed, and other nefarious human tendencies.

It is also hurting learners throughout the world—and it must be stopped. Each of us has a responsibility in this regard.

 

New Research

Fortunately, a group of tireless researchers—with whom I’ve had the honor of collaborating—has put a wooden stake through the dark heart of this demon. In the most recent issue of the scientific journal Educational Technology, Deepak Subramony, Michael Molenda, Anthony Betrus, and I (my contribution was small) produced four articles on the dangers of this misinformation and its genesis. After working separately over the years to debunk this bit of mythology, the four of us have come together in a joint effort to rally the troops—people like you, dedicated professionals who want to create the best outcomes for your learners.

Here are the citations for the four articles. Later, I will have a synopsis of each article.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Mythical Retention Chart and the Corruption of Dale’s Cone of Experience. Educational Technology, Nov/Dec 2014, 54(6), 6-16.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Previous Attempts to Debunk the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 17-21.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Good, the Bad, and the Ugly: A Bibliographic Essay on the Corrupted Cone. Educational Technology, Nov/Dec 2014, 54(6), 22-31.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Timeline of the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 31-24.

Many thanks to Lawrence Lipsitz, the editor of Educational Technology, for his support, encouragement, and efforts in making this possible!

To get a copy of the “Special Issue” or to subscribe to Educational Technology, go to this website. (Note, 2017: I don’t think the journal is being published anymore.)

 

The Background

There are two separate memes we are debunking, what we’ve labeled (1) the mythical retention chart and (2) the corruption of Dale’s Cone of Experience. As you will see—or might have noticed in the images I previously shared—the two have often been commingled.

Here is an example of the mythical retention chart:

 

Oftentimes though, this is presented in text:

“People Remember:

  • 10 percent of what they read;
  • 20 percent of what they hear;
  • 30 percent of what they see;
  • 50 percent of what they see and hear;
  • 70 percent of what they say; and
  • 90 percent of what they do and say.”

Note that the numbers proffered are not always the same, nor are the factors alleged to spur learning. So, for example, you can see that on the graphic, people are said to remember 30 percent of what they hear, but in the text, the percentage is 20 percent. In the graphic, people remember 80 percent when they are collaborating, but in the text they remember 70% of what they SAY. I’ve looked at hundreds of examples, and the variety is staggering.

Most importantly, the numbers do NOT provide good guidance for learning design, as I will detail later.

Here is a photocopied image of the original Dale’s Cone:

Edgar Dale (1900-1985) was an American educator who is best known for developing “Dale’s Cone of Experience” (the cone above) and for his work on how to incorporate audio-visual materials into the classroom learning experience. The image above was photocopied directly from his book, Audio-visual methods in teaching (from the 1969 edition).

 

You’ll note that Dale included no numbers in his cone. He also warned his readers not to take the cone too literally.

Unfortunately, someone somewhere decided to add the misleading numbers. Here are two more examples:

 

I include these two examples to make two points. First, note how one person clearly stole from the other one. Second, note how sloppy these fabricators are. They include a Confucius quote that directly contradicts what the numbers say. On the left side of the visuals, Confucius is purported to say that hearing is better than seeing, while the numbers on the right of the visuals say that seeing is better than hearing. And, by the way, Confucius did not actually say what he is alleged to have said! What seems clear from looking at these and other examples is that people don’t do their due diligence—their ends seem to justify their means—and they are damn sloppy, suggesting that they don’t think their audiences will examine their arguments closely.

By the way, these deceptions are not restricted to the English-speaking world:

 

Intro to the Special Issue of Educational Technology

As Deepak Subramony and Michael Molenda say in the introduction to the Special Issue of Educational Technology, the four articles presented seek to provide a “comprehensive and complete analysis of the issues surrounding these tortured constructs.” They also provide “extensive supporting material necessary to present a comprehensive refutation of the aforementioned attempts to corrupt Dale’s original model.”

In the concluding notes to the introduction, Subramony and Molenda leave us with a somewhat dystopian view of information trajectory in the internet age. “In today’s Information Age it is immensely difficult, if not practically impossible, to contain the spread of bad ideas within cyberspace. As we speak, the corrupted cone and its attendant “data” are akin to a living organism—a virtual 21st century plague—that continues to spread and mutate all over the World Wide Web, most recently to China. It therefore seems logical—and responsible—on our part that we would ourselves endeavor to continue our efforts to combat this vexing misinformation on the Web as well.”

Later, I will provide a section on what we can all do to help debunk the myths and inaccuracies embedded in these fabrications.

Now, I provide a synopsis of each article in the Special Issue.


Synopsis of First Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Mythical Retention Chart and the Corruption of Dale’s Cone of Experience. Educational Technology, Nov/Dec 2014, 54(6), 6-16.

The authors point out that, “Learners—both face-to-face and distant—in classrooms, training centers, or homes are being subjected to lessons designed according to principles that are both unreliable and invalid. In any profession this would be called malpractice.” (p. 6).

The article makes four claims.

Claim 1: The Data in the Retention Chart is Not Credible

First, there is no body of research that supports the data presented in the many forms of the retention chart. That is, there is no scientific data—or other data—that supports the claim that people remember a fixed percentage of what they learned depending on how they encountered it. Interestingly, although people have relied on research citations from 1943, 1947, 1963, and 1967 as the defining research when citing the source of their data, the numbers (10%, 20%, 30%, and so on) actually appeared as early as 1914 and 1922, when they were presented as information long known. A few years ago, I compiled research on actual percentages of remembering. You can access it here.

Second, the fact that the numbers are all divisible by 5 or 10 makes it obvious to anyone who has done research that these are not numbers derived from actual research. Human variability precludes such round numbers. In addition, as pointed out as early as 1978 by Dwyer, there is the question of how the data were derived—what were learners actually asked to do? Note, for example, that the retention chart always claims to measure—among other things—how much people remember by reading, hearing, and seeing. How people could read without seeing is an obvious confusion. What are people doing when they only see and don’t read or listen? Also problematic is how you’d create a fair test to compare situations where learners listened to or watched something. Are they tested on different tests (one where they see and one where they listen), which seems to allow bias, or are they tested on the same test, in which case one group would be at a disadvantage because they aren’t taking the test in the same context in which they learned?

Third, the data portrayed don’t relate to any other research in the scientific literature on learning. As the authors write, “There is within educational psychology a voluminous literature on remembering and learning from various mediated experiences. Nowhere in this literature is there any summary of findings that remotely resembles the fictitious retention chart.” (p. 8)

Finally, as the authors say, “Making sense of the retention chart is made nearly impossible by the varying presentations of the data, the numbers in the chart being a moving target, altered by the users to fit their individual biases about desirable training methods.” (p. 9).

Claim 2: Dale’s Cone is Misused.

Dale’s Cone of Experience is a visual depiction that portrays more concrete learning experiences at the bottom of the cone and more abstract experiences at the top of the cone. As the authors write, “The cone shape was meant to convey the gradual loss of sensory information” (p. 9) in the learning experiences as one moved from lower to higher levels on the cone.

“The root of all the perversions of the Cone is the assumption that the Cone is meant to be a prescriptive guide. Dale definitely intended the Cone to be descriptive—a classification system, not a road map for lesson planning.” (p. 10)

Claim 3: Combining the Retention Chart Data with Dale’s Cone

“The mythical retention data and the concrete-to-abstract cone evolved separately throughout the 1900’s, as illustrated in [the fourth article] ‘Timeline of the Mythical Retention Chart and Corrupted Dale’s Cone.’ At some point, probably around 1970, some errant soul—or perhaps more than one person—had the regrettable idea of overlaying the dubious retention data on top of Dale’s Cone of Experience.” (p. 11). We call this concoction the corrupted cone.

“What we do know is that over the succeeding years [after the original corruption] the corrupted cone spread widely from one source to another, not in scholarly publications—where someone might have asked hard questions about sources—but in ephemeral materials, such as handouts and slides used in teaching or manuals used in military or corporate training.” (p. 11-12).

“With the growth of the Internet, the World Wide Web, after 1993 this attractive nuisance spread rapidly, even virally. Imagine the retention data as a rapidly mutating virus and Dale’s Cone as a host; then imagine the World Wide Web as a bathhouse. Imagine the variety of mutations and their resistance to antiviral treatment. A Google Search in 2014 revealed 11,000 hits for ‘Dale’s Cone,’ 14,500 for ‘Cone of Learning,’ and 176,000 for ‘Cone of Experience.’ And virtually all of them are corrupted or fallacious representations of the original Dale’s cone. It just might be the most widespread pedagogical myth in the history of Western civilization!” (p. 11).

Claim 4: Murky Provenance

People who present the fallacious retention data and/or the corrupted cone often cite other sources that might seem authoritative. Dozens of attributions have been made over the years, but several sources appear over and over, including the following:

  • Edgar Dale
  • Wiman & Meierhenry
  • Bruce Nyland
  • Various oil companies (Mobil, Standard Oil, Socony-Vacuum Oil, etc.)
  • NTL Institute
  • William Glasser
  • British Audio-Visual Society
  • Chi, Bassok, Lewis, Reimann, & Glaser (1989).

Unfortunately, none of these attributions holds up under scrutiny. The cited sources are not the actual origin of the data.

Conclusion:

“The retention chart cannot be supported in terms of scientific validity or logical interpretability. The Cone of Experience, created by Edgar Dale in 1946, makes no claim of scientific grounding, and its utility as a prescriptive theory is thoroughly unjustified.” (p. 15)

“No qualified scholar would endorse the use of this mish-mash as a guide to either research or design of learning environments. Nevertheless, [the corrupted cone] obviously has an allure that surpasses logical considerations. Clearly, it says something that many people want to hear. It reduces the complexity of media and method selection to a simple and easy to remember formula. It can thus be used to support a bias toward whatever learning methodology might be in vogue. Users seem to employ it as pseudo-scientific justification for their own preferences about media and methods.” (p. 15)


Synopsis of Second Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Previous Attempts to Debunk the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 17-21.

The authors point to earlier attempts to debunk the mythical retention data and the corrupted cone. “Critics have been attempting to debunk the mythical retention chart at least since 1971. The earliest critics, David Curl and Frank Dwyer, were addressing just the retention data.  Beginning around 2002, a new generation of critics has taken on the illegitimate combination of the retention chart and Edgar Dale’s Cone of Experience – the corrupted cone.” (p. 17).

Interestingly, we only found two people who attempted to debunk the retention “data” before 2000. This could be because we failed to find other examples that existed, or it might just be because there weren’t that many examples of people sharing the bad information.

Starting in about 2002, we noticed many sources of refutation. I suspect this has to do with two things. First, the internet age makes it easier to quickly search for examples of both the misinformation and attempts to debunk it. Second, the internet also makes it easier for people to post erroneous information and share it with a worldwide audience.

The bottom line is that there have been a handful of people—in addition to the four authors—who have attempted to debunk the bogus information.


Synopsis of Third Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Good, the Bad, and the Ugly: A Bibliographic Essay on the Corrupted Cone. Educational Technology, Nov/Dec 2014, 54(6), 22-31.

The authors of the article provide a series of brief synopses of the major players who have been cited as sources of the bogus data and corrupted visualizations. The goal here is to give you—the reader—additional information so you can make your own assessment of the credibility of the research sources provided.

Most people—I suspect—will skim through this article with a modest twinge of voyeuristic pleasure. I did.


Synopsis of Fourth Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Timeline of the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 31-24.

The authors present a decade-by-decade outline of examples of the reporting of the bogus information, from 1900 to the 2000s. The outline represents great detective work by my co-authors, who have spent years and years searching databases, reading articles, and reaching out to individuals and institutions in search of the genesis and rebirth of the bogus information. I’m in continual awe of their exhaustive efforts!

The timeline includes scholarly work such as the “Journal of Education,” numerous books, academic courses, corporate training, government publications, military guidelines, etc.

The breadth and depth of examples demonstrates clearly that no area of the learning profession has been immune to the disease of poor information.


Synopsis of the Exhibits:

The authors catalog 16 different examples of the visuals that have been used to convey the mythical retention data and/or the corrupted cone. They also present about 25 text examples.

The visual examples are canonical black-and-white versions and, given these limitations, can’t convey the wild variety of examples available now on the internet. Still, their variety shows just how often people have modified Dale’s Cone to support their own objectives.


My Conclusions, Warnings, and Recommendations

The four articles in the special issue of Educational Technology represent a watershed moment in the history of misinformation in the learning profession. The articles utilize two examples—the mythical retention data (“People remember 10%, 20%, 30%…”) and the numerical corruptions of Dale’s Cone—and demonstrate the following:

  1. There are definitively bogus data sources floating around the learning profession.
  2. These bogus information sources damage the effectiveness of learning and hurt learners.
  3. Authors of these bogus examples do not do their due diligence in confirming the validity of their research sources. They blithely reproduce sources or augment them before conveying them to others.
  4. Consumers of these bogus information sources do not do their due diligence in being skeptical, in expecting and demanding validated scientific information, in pushing back against those who convey weak information.
  5. Those who stand up publicly to debunk such misinformation—though nobly fighting a good fight—do not seem to be winning the war against this misinformation.
  6. More must be done if we are to limit the damage.

Some of you may chafe at my tone here, and if I had more time I might have been able to be more careful in my wording. But still, this stuff matters! Moreover, these articles focus on only two examples of bogus memes in the learning field. There are many more! Learning styles, anyone?

Here is what you can do to help:

  1. Be skeptical.
  2. When conveying or consuming research-based information, check the actual source. Does it say what it is purported to say? Is it a scientifically-validated source? Are there corroborating sources?
  3. Gently—perhaps privately—let conveyors of bogus information know that they are conveying bogus information. Show them your sources so they can investigate for themselves.
  4. When you catch someone conveying bogus information, make note that they may be the kind of person who is lazy or corrupt in the information they convey or use in their decision making.
  5. Punish, sanction, or reprimand those in your sphere of influence who convey bogus information. Be fair and don’t be an ass about it.
  6. Make or take opportunities to convey warnings about the bogus information.
  7. Seek out scientifically-validated information and the people and institutions who tend to convey this information.
  8. Document more examples.

To this end, Anthony Betrus—on behalf of the four authors—has established www.coneofexperience.com. The purpose of this website is to provide a place for further exploration of the issues raised in the four articles. It provides the following:

  • A series of timelines
  • Links to other debunking attempts
  • A place for people to share stories about their experiences with the bogus data and visuals

The learning industry also has responsibilities.

  1. Educational institutions must ensure that validated information is more likely to be conveyed to their students, within the bounds of academic freedom, of course.
  2. Educational institutions must teach their students how to be good consumers of “research,” “data,” and information (more generally).
  3. Trade organizations must provide better introductory education for their members; more myth-busting articles, blog posts, videos, etc.; and push a stronger evidence-based-practice agenda.
  4. Researchers have to partner with research translators more often to get research-based information to real-world practitioners.


About two years ago, four enterprising learning researchers reviewed the research on training and development and published their findings in a top-tier refereed scientific journal. They did a really nice job!

Unfortunately, a vast majority of professionals in the workplace learning-and-performance field have never read the research review, nor have they even heard about it.

As a guy whose consulting practice is premised on the idea that good learning research can be translated into practical wisdom for instructional designers, trainers, elearning developers, chief learning officers and other learning executives, I have been curious to see to what extent this seminal research review has been utilized by other learning professionals. So, for the last year and a half or so, I’ve been asking the audiences I encounter in my keynotes and other conference presentations whether they have encountered this research review.

Often I use the image below to ask the question:

Click here to see original research article…

 

What would be your guess as to the percentage of folks in our industry who have read this?

10%

30%

50%

70%

90%

Sadly, in almost all of the audiences I’ve encountered, less than 5% of the learning professionals have read this research review.

Indeed, usually more than 95% of workplace learning professionals have “never heard of it” even two years after it was published!!!

THIS IS DEEPLY TROUBLING!

And the slur this dumps on our industry’s most potent institutions should be self-evident. And I, too, must take blame for not being more successful in getting these issues heard.

A Review of the Review

People who subscribe to my email newsletter (you can sign up here) were privy to this review many months ago.

I hope the following review will be helpful. And remember, when you’re gathering knowledge to help you do your work, make sure you’re gathering it from sources who are mindful of the scientific research. There is a reason that civilization progresses through its scientific efforts: science provides a structured process of insight generation and testing, creating a self-improving knowledge-generation process that maximizes innovation while minimizing bias.

——————————-

Quotes from the Research Review:

“It has long been recognized that traditional,
stand-up lectures are an inefficient and
unengaging strategy for imparting
new knowledge and skills.” (p. 86)

 

“Training costs across organizations remain
relatively constant as training shifts from
face-to-face to technology-based methods.” (p. 87)

 

“Even when trainees master new knowledge and
skills in training, a number of contextual factors
determine whether that learning is applied
back on the job…” (p. 90)

 

“Transfer is directly related to opportunities
to practice—opportunities provided either by
the direct supervisor or the organization
as a whole.” (p. 90)

 

“The Kirkpatrick framework has a number of
theoretical and practical shortcomings…” (p. 91)

Introduction

I, Will Thalheimer, am a research translator. I study research from peer-reviewed scientific journals on learning, memory, and instruction and attempt to distill whatever practical wisdom might lurk in the dark cacophony of the research catacomb. It’s hard work—and I love it—and the best part is that it gives me some research-based wisdom to share with my consulting clients. It helps me not sound like a know-nothing. Working to bridge the research-practice gap also enables me to talk with trainers, instructional designers, elearning developers, chief learning officers, and other learning executives about their experiences using research-based concepts.

 

It is from this perspective that I have a sad, and perhaps horrifying, story to tell. In 2012, an excellent research review on training was published in a top-tier journal. Unbelievably, most training practitioners have never heard of this research review. I know because when I speak at conferences and chapters in our field, I often ask how many people have read the article. Typically, less than 5% of experienced training practitioners have! Less than 1 in 20 people in our field have read a very important review article.

 

What the hell are we doing wrong? Why does everyone know what a MOOC is, but hardly anyone has looked at a key research article?

 

You can access the article by clicking here. You can also read my review of some of the article’s key points as I lay them out below.

 

Is This Research Any Good?

Not all research is created equal. Some is better than others. Some is crap. Too much “research” in the learning-and-performance industry is crap, so it’s important to first acknowledge the quality of this research review.

The research review by Eduardo Salas, Scott Tannenbaum, Kurt Kraiger, and Kimberly Smith-Jentsch from November 2012 was published in the highly-regarded peer-reviewed scientific journal, Psychological Science in the Public Interest, published by the Association for Psychological Science, one of the most respected social-science professional organizations in the world. The research review not only reviews research, but also utilizes meta-analytic techniques to distill findings from multiple research studies. In short, it’s high-quality research.

 

The rest of this article will highlight key messages from the research review.

 

Training & Development Gets Results!

The research review by Salas, Tannenbaum, Kraiger, and Smith-Jentsch shows that training and development is positively associated with organizational effectiveness. This is especially important in today’s economy because the need for innovation is greater and more accelerated—and innovation comes from the knowledge and creativity of our human resources. As the researchers say, “At the organizational level, companies need employees who are both ready to perform today’s jobs and able to learn and adjust to changing demands. For employees, that involves developing both job-specific and more generalizable skills; for companies, it means taking actions to ensure that employees are motivated to learn.” (p. 77). Companies spend a ton of money every year on training—in the United States the estimate is $135 billion—so it’s first important to know whether this investment produces positive outcomes. The bottom line: Yes, training does produce benefits.

 

To Design Training, It Is Essential to Conduct a Training Needs Analysis

“The first step in any training development effort ought to be a training needs analysis (TNA)—conducting a proper diagnosis of what needs to be trained, for whom, and within what type of organizational system. The outcomes of this step are (a) expected learning outcomes, (b) guidance for training design and delivery, (c) ideas for training evaluation, and (d) information about the organizational factors that will likely facilitate or hinder training effectiveness. It is, however, important to recognize that training is not always the ideal solution to address performance deficiencies, and a well-conducted TNA can also help determine whether a non-training solution is a better alternative.” (p. 80-81) “In sum, TNA is a must. It is the first and probably the most important step toward the design and delivery of any training.” (p. 83) “The research shows that employees are often not able to articulate what training they really need” (p. 81) so just asking them what they need to learn is not usually an effective strategy.

 

Learning Isn’t Always Required—Some Information can be Looked Up When Needed

When doing a training-needs analysis and designing training, it is imperative to separate information that is “need-to-know” from that which is “need-to-access.” Since learners forget easily, it’s better to use training time to teach the need-to-know information and prepare people on how to access the need-to-access information.

 

Do NOT Offer Training if It is NOT Relevant to Trainees

In addition to being an obvious waste of time and resources, training courses that are not specifically relevant to trainees can hurt motivation for training in general. “Organizations are advised, when possible, to not only select employees who are likely to be motivated to learn when training is provided but to foster high motivation to learn by supporting training and offering valuable training programs.” (p. 79) This suggests that every one of the courses on our LMS should have relevance and value.

 

It’s about Training Transfer—Not Just about Learning!

“Transfer refers to the extent to which learning during training is subsequently applied on the job or affects later job performance.” (p. 77) “Transfer is critical because without it, an organization is less likely to receive any tangible benefits from its training investments.” (p. 77-78) To ensure transfer, we have to utilize proven scientific research-based principles in our instructional designs. Relying on our intuitions is not enough—because they may steer us wrong.

 

We must go Beyond Training!

“What happens in training is not the only thing that matters—a focus on what happens before and after training can be as important. Steps should be taken to ensure that trainees perceive support from the organization, are motivated to learn the material, and anticipate the opportunity to use their skills once on (or back on) the job.” (p. 79)

 

Training can be Designed for Individuals or for Teams

“Today, training is not limited to building individual skills—training can be used to improve teams as well.” (p. 79)

 

Management and Leadership Training Works

“Research evidence suggests that management and leadership development efforts work.” (p. 80) “Management and leadership development typically incorporate a variety of both formal and informal learning activities, including traditional training, one-on-one mentoring, coaching, action learning, and feedback.” (p. 80)

 

Forgetting Must Be Minimized, Remembering Must Be Supported

One meta-analysis found that one year after training, “trainees [had] lost over 90% of what they learned.” (p. 84) “It helps to schedule training close in time to when trainees will be able to apply what they have learned so that continued use of the trained skill will help avert skill atrophy. In other words, trainees need the chance to ‘use it before they lose it.’ Similarly, when skill decay is inevitable (e.g., for infrequently utilized skills or knowledge) it can help to schedule refresher training.” (p. 84)

 

Common Mistakes in Training Design Should Be Avoided

“Recent reports suggest that information and demonstrations (i.e., workbooks, lectures, and videos) remain the strategies of choice in industry. And this is a problem [because] we know from the body of research that learning occurs through the practice and feedback components.” (p. 86) “It has long been recognized that traditional, stand-up lectures are an inefficient and unengaging strategy for imparting new knowledge and skills.” (p. 86) Researchers have “noted that trainee errors are typically avoided in training, but because errors often occur on the job, there is value in training people to cope with errors both strategically and on an emotional level.” (p. 86) “Unfortunately, systematic training needs analysis, including task analysis, is often skipped or replaced by rudimentary questions.” (p. 81)

 

Effective Training Requires At Least Four Components

“We suggest incorporating four concepts into training: information, demonstration, practice, and feedback.” (p. 86) Information must be presented clearly and in a way that enables the learners to fully understand the concepts and skills being taught. Skill demonstrations should provide clarity to enable comprehension. Realistic practice should be provided to enable full comprehension and long-term remembering. Feedback after decision-making and skill practice should be provided to correct misconceptions and improve the potency of later practice efforts.

The bottom line is that more realistic practice is needed. Indeed, the most effective training utilizes relatively more practice and feedback than is typically provided. “The demonstration component is most effective when both positive and negative models are shown rather than positive models only.” (p. 87)

Will’s Note: While these four concepts are extremely valuable, personally I think they are insufficient. See my research review on the Decisive Dozen for my alternative.

 

E-Learning Can Be Effective, But It May Not Lower the Cost of Training

“Both traditional forms of training and technology-based training can work, but both can fail as well.” (p. 87) While the common wisdom argues that e-learning is less costly, recent “survey data suggest that training costs across organizations remain relatively constant as training shifts from face-to-face to technology-based methods.” (p. 87) This doesn’t mean that e-learning can’t offer cost savings, but it does mean that most organizations so far haven’t realized them. “Well-designed technology-based training can be quite effective, but not all training needs are best addressed with that approach. Thus, we advise that organizations use technology-based training wisely—choose the right media and incorporate effective instructional design principles.” (p. 87)

 

Well-Designed Simulations Provide Potent Learning and Practice

“When properly constructed, simulations and games enable exploration and experimentation in realistic scenarios. Properly constructed simulations also incorporate a number of other research-supported learning aids, in particular practice, scaffolding or context-sensitive support, and feedback. Well-designed simulation enhances learning, improves performance, and helps minimize errors; it is also particularly valuable when training dangerous tasks.” (p. 88)

 

To Get On-the-Job Improvement, Training Requires After-Training Support

“The extent to which trainees perceive the posttraining environment (including the supervisor) as supportive of the skills covered in training had a significant effect on whether those skills are practiced and maintained.” (p. 88) “Even when trainees master new knowledge and skills in training, a number of contextual factors determine whether that learning is applied back on the job: opportunities to perform; social, peer, and supervisory support; and organizational policies.” (p. 90) A trainee’s supervisor is particularly important in this regard. As repeated from above, researchers have “discovered that transfer is directly related to opportunities to practice—opportunities provided either by the direct supervisor or the organization as a whole.” (p. 90)

 

On-the-Job Learning can be Leveraged with Coaching and Support

“Learning on the job is more complex than just following someone or seeing what one does. The experience has to be guided. Researchers reported that team leaders are a key to learning on the job. These leaders can greatly influence performance and retention. In fact, we know that leaders can be trained to be better coaches…Organizations should therefore provide tools, training, and support to help team leaders to coach employees and use work assignments to reinforce training and to enable trainees to continue their development.” (p. 90)

 

Trainees’ Supervisors Can Make or Break Training Success

Researchers have “found that one misdirected comment by a team leader can wipe out the full effects of a training program.” (p. 83) “What organizations ought to do is provide leaders with information they need to (a) guide trainees to the right training, (b) clarify trainees’ expectations, (c) prepare trainees, and (d) reinforce learning…” (p. 83) Supervisors can increase trainees’ motivation to engage in the learning process. (p. 85) “After trainees have completed training, supervisors should be positive about training, remove obstacles, and ensure ample opportunity for trainees to apply what they have learned and receive feedback.” (p. 90) “Transfer is directly related to opportunities to practice—opportunities provided either by the direct supervisor or the organization.” (p. 90)

 

Will’s Note: I’m a big believer in the power of supervisors to enable learning. I’ll be speaking on this in an upcoming ASTD webinar.

 

Basing Our Evaluations on the Kirkpatrick 4 Levels is Insufficient!!!

“Historically, organizations and training researchers have relied on Kirkpatrick’s [4-Level] hierarchy as a framework for evaluating training programs…[Unfortunately,] The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders… Although the Kirkpatrick hierarchy has clear limitations, using it for training evaluation does allow organizations to compare their efforts to those of others in the same industry.” The authors’ recommendations for improving training evaluation fit into two categories. First, instead of only using the Kirkpatrick framework, “organizations should begin training evaluation efforts by clearly specifying one or more purposes for the evaluation and should then link all subsequent decisions of what and how to measure to the stated purposes.” (p. 91) Second, the authors recommend that training evaluations should “use precise affective, cognitive, and/or behavioral measures that reflect the intended learning outcomes.” (p. 91)

 

This is a devastating critique that should give us all pause. Of course, it is not the first such critique, nor, I’m afraid, will it be the last. The worst part about the Kirkpatrick model is that it controls the way we think about learning measurement. It doesn’t allow us to see alternatives.

 

Leadership is Needed for Successful Training and Development

“Human resources executives, learning officers, and business leaders can influence the effectiveness of training in their organizations and the extent to which their company’s investments in training produce desired results. Collectively, the decisions these leaders make and the signals they send about training can either facilitate or hinder training effectiveness…Training is best viewed as an investment in an organization’s human capital, rather than as a cost of doing business. Underinvesting can leave an organization at a competitive disadvantage. But the adjectives “informed” and “active” are the key to good investing. When we use the word “informed,” we mean being knowledgeable enough about training research and science to make educated decisions. Without such knowledge, it is easy to fall prey to what looks and sounds cool—the latest training fad or technology.”  (p. 92)

Thank you!

I’d like to thank all my clients over the years for hiring me as a consultant, learning auditor, workshop provider, and speaker–and thus enabling me to continue in the critical work of translating research into practical recommendations.

If you think I might be able to help your organization, please feel free to contact me directly by emailing me at “info at worklearning dot com” or calling me at 617-718-0767.

 

The spacing effect is one of the most potent learning factors there is, because it helps minimize forgetting.

Here’s a research-to-practice report on the subject, backed by over 100 research studies from scientific refereed journals, plus examples. Originally published in 2006, the recommendations are still valid today.

Click to download the research-to-practice report on spacing.   It’s a classic!

 

And here’s some more recent research and exploration.

Robert Slavin, Director of the Center for Research and Reform in Education at Johns Hopkins University, recently wrote the following:

"Sooner or later, schools throughout the U.S. and other countries will be making informed choices among proven programs and practices, implementing them with care and fidelity, and thereby improving outcomes for their children. Because of this, government, foundations, and for-profit organizations will be creating, evaluating, and disseminating proven programs to meet high standards of evidence required by schools and their funders. The consequences of this shift to evidence-based reform will be profound immediately and even more profound over time, as larger numbers of schools and districts come to embrace evidence-based reform and as more proven programs are created and disseminated."

To summarize, Slavin says that (1) schools and other education providers will be using research-based criteria to make decisions, (2) this change will have profound effects, significantly improving learning results, and (3) many stakeholders and institutions within the education field will be making radical changes, including holding themselves and others to account for these improvements.

In Workplace Learning and Performance

But what about us? What about we workplace learning-and-performance professionals? What about our institutions? Will we be left behind? Are we moving toward evidence-based practices ourselves?

My career over the last 16 years has been devoted to helping the field bridge the gap between research and practice, so you might imagine that I have a perspective on this. Here it is, in brief:

Some of our field is moving towards research-based practices. But we have lots of roadblocks and gatekeepers that are stalling the journey for the large majority of the industry. In working on the Serious eLearning Manifesto, I've been pleasantly surprised by the large number of people who are already using research-based practices; but as a whole, we are still stalled.

Of course, I'm still a believer. I think we'll get there eventually. In the meantime, I want to work with those who are marching ahead, using research wisely, creating better learning for their learners. There are research translators whom we can follow, folks like Ruth Clark, Rich Mayer, K. Anders Ericsson, Jeroen van Merriënboer, Richard E. Clark, Julie Dirksen, Clark Quinn, Gary Klein, and dozens more. There are practitioners whom we can emulate, because they are already aligning themselves with the research: Marty Rosenheck, Eric Blumthal, Michael Allen, Cal Wick, Roy Pollock, Andy Jefferson, JC Kinnamon, and thousands of others.

Here's the key question for you who are reading this: "How fast do you want to begin using research-based recommendations?"

And, do you really want to wait for our sister profession to perfect this before taking action?

NPR's Morning Edition produced a five-minute radio piece on the U.S. Air Force Academy's attempt at improving learning results by modifying the ability grouping of their cadets.

Shankar Vedantam

According to the piece, reported by Shankar Vedantam, based on research by Dartmouth researcher Bruce Sacerdote and colleagues:

  • Weaker students did better when in squadrons with stronger students (but note caveats below).
  • However, when researchers intentionally created squadrons with only the strongest and weakest students (that is, the middle students were removed), the weaker students did worse than they otherwise would have. The researchers argue that this was caused by the splintering of the squadron into groups of strong students and groups of weak students.
  • Middle students did better when they didn't have weaker and stronger students in their squadrons.
  • It appears that the middle students acted as a glue in the mixed-ability squadrons–and specifically, they helped the squadron to avoid splitting into groups.

Of course, one study should not be taken without some skepticism. Indeed, there is a long history of research on academic ability grouping. For example, see this review article:

Schofield, J. W. (2010). International evidence on ability grouping with curriculum differentiation and the achievement gap in secondary schools. Teachers College Record, 112(5), 1492-1528.

As Schofield reports:

International research supports the conclusion that having high-ability/high-achieving schoolmates/classmates is associated with increased achievement. It also suggests that ability grouping with curriculum differentiation increases the achievement gap. For example, attending a high-tier school in a tiered system is linked with increased achievement, whereas attending a low-tier school is linked with decreased achievement, controlling for initial achievement. Furthermore, there is a stronger link between students’ social backgrounds and their achievement in educational systems with more curriculum differentiation and in those with earlier placement in differentiated educational programs as compared with others.

But she also warns:

However, numerous methodological issues remain in this research, which suggests both the need for caution in interpreting such relationships and the value of additional research on mechanisms that may account for such relationships.

In addition, social effects are probably not the only effects in play. For example, the research tells us that learners do better when they are presented with information and given instructional supports targeted specifically to their cognitive needs. So for example, this could be why the middle-ability students did better when they were grouped together.

Also interesting is that neither the NPR piece nor Schofield's abstract reports specifically on how the mixed groupings affect the stronger learners.

Indeed, other researchers have argued that gifted students should not be ignored in this way. See, for example, the following review article:

Subotnik, R. F., Olszewski-Kubilius, P., & Worrell, F. C. (2012). A proposed direction forward for gifted education based on psychological science. Gifted Child Quarterly, 56(4), 176-188.

Here's what these authors recommend:

In spite of concerns for the future of innovation in the United States, the education research and policy communities have been generally resistant to addressing academic giftedness in research, policy, and practice. The resistance is derived from the assumption that academically gifted children will be successful no matter what educational environment they are placed in, and because their families are believed to be more highly educated and hold above-average access to human capital wealth. These arguments run counter to psychological science indicating the need for all students to be challenged in their schoolwork and that effort and appropriate educational programing, training and support are required to develop a student’s talents and abilities.

I just read the following research article, and found a great mini-review of some essential research.

  • Hagemans, M. G., van der Meij, H., & de Jong, T. (2013). The effects of a concept map-based support tool on simulation-based inquiry learning. Journal of Educational Psychology, 105(1), 1-24. doi:10.1037/a0029433

Experiment-Specific Findings:

The article shows that simulations—the kind that ask learners to navigate through the simulation on their own—are more beneficial when learners are supported in their simulation play. Specifically, the researchers found that learners given the optimal learning route did better than those supplied with a suboptimal learning route. They also found that concept maps helped the learners by supporting their comprehension, and that learners who got feedback on the correctness of their practice attempts were motivated to correct their errors and thus provided themselves with additional practice.

Researchers’ Review of Learners’ Poor Learning Strategies

The research Hagemans, van der Meij, and de Jong did is good, but what struck me as even more relevant for you as a learning professional is their mini-review of research showing that learners are NOT very good stewards of their own learning. Here is what their mini-review said (from Hagemans, van der Meij, and de Jong, 2013, p. 2):

  • Despite the importance of planning for learning, few students engage spontaneously in planning activities (Manlove & Lazonder, 2004).  
  • Novices are especially prone to failure to engage in planning prior to their efforts to learn (Zimmerman, 2002).  
  • When students do engage in planning their learning, they often experience difficulty in adequately performing the activities involved (de Jong & Van Joolingen, 1998; Quintana et al., 2004). For example, they do not thoroughly analyze the task or problem they need to solve (Chi, Feltovich, & Glaser, 1981; Veenman, Elshout, & Meijer, 1997) and tend to act immediately (Ge & Land, 2003; Veenman et al., 1997), even when a more thorough analysis would actually help them to build a detailed plan for learning (Veenman, Elshout, & Busato, 1994).  
  • The learning goals they set are often of low quality, tending to be nonspecific and distal (Zimmerman, 1998).
  • In addition, many students fail to set up a detailed plan for learning, whereas if they do create a plan, it is often poorly constructed (Manlove et al., 2007). That is, students often plan their learning in a nonsystematic way, which may cause them to start floundering (de Jong & Van Joolingen, 1998), or they plan on the basis of what they must do next as they proceed, which leads to the creation of ad hoc plans in which they respond to the realization of a current need (Manlove & Lazonder, 2004).  
  • The lack of proper planning for learning may cause students to miss out on experiencing critical moments of inquiry, and their investigations may lack systematicity.
  • Many students also have problems with monitoring their progress, in that they have difficulty in reflecting on what has already been done (de Jong & Van Joolingen, 1998).
  • Regarding monitoring of understanding, students often do not know when they have comprehended the subject matter material adequately (Ertmer & Newby, 1996; Thiede, Anderson, & Therriault, 2003) and have difficulty recognizing breakdowns in their understanding (Ertmer & Newby, 1996).
  • If students do recognize deficits in their understanding, they have difficulty in expressing explicitly what they do not understand (Manlove & Lazonder, 2004).
  • One consequence is that students tend to overestimate their level of success, which may result in “misplaced optimism, substantial understudying, and, ultimately, low test scores” (Zimmerman, 1998, p. 9).

The research article is available by clicking here.

Final Thoughts

This research, and other research I have studied over the years, shows that we CANNOT ALWAYS TRUST THAT OUR LEARNERS WILL KNOW HOW TO LEARN. We as instructional designers have to design learning environments that support learners in learning. We need to know the kinds of learning situations where our learners are likely to succeed and those where they are likely to fail without additional scaffolding.

The research also shows, more specifically, that inquiry-based simulation environments can be powerful learning tools, but ONLY if we provide the learners with guidance and/or scaffolding that enables them to be successful. Certainly, a few may succeed without support, but most will act suboptimally.

We have a responsibility to help our learners. We can't always put it on them…

This article first appeared in my Newsletter in the October 2013 issue. You can sign up for my newsletter by clicking here.

————————————————————————–

Onboarding is ubiquitous. Every organization does it. Some do it with great fanfare. Some make a substantial investment. Some just let supervisors get their new hires up to speed. Unfortunately, most organizations make critical mistakes in onboarding—mistakes that increase turnover, raise costs, weaken employee loyalty, and lower productivity.

Fortunately, recent research highlights onboarding best practices. If organizations would just use the wisdom from the research, they’d save themselves money, time, and resources—and employees in those companies would have to deal with many fewer headaches.

Key Outcomes

Recent reviews of the research suggest that there are four key outcomes that enable onboarding success:

  1. New hires have to quickly and effectively learn their new job role.
  2. New hires have to feel a sense of self-efficacy in doing their job.
  3. New hires have to learn the organizational culture.
  4. New hires have to gain acceptance and feel accepted by their coworkers.

Enabling Factors

Recent research suggests that the following factors are helpful in ensuring onboarding success:

 

What New Hires Can Do

  1. Be proactive in learning and networking
  2. Be open to new ways of thinking and acting
  3. Be active in seeking information and getting feedback
  4. Be active in building relationships

What the Organization Can Do

  1. Ensure that managers take a very active and effective role
  2. Provide formal orientations that go beyond information dissemination
  3. Provide realistic previews of the organization and the job
  4. Proactively enable new hires to connect with long-tenured employees

 

Five Biggest Mistakes
(In Reverse Order of Importance)

5—Providing an Information Dump during Orientation

The research shows that employee orientations can facilitate onboarding. However, too many organizations think their orientations should just cram tons of information down the throats of employees. Even worse are orientations that have employees sit and listen to presentation after presentation. Oh the horror. New employees are excited to get going. Putting them into the prison of listening—even to great content—is a rudeness that shouldn’t be tolerated. The best orientations help build relationships. They get employees involved. They prepare new hires for how to learn and grow and network on their own. They help new hires learn the organization culture—both the good and the bad. They share the organization’s vision, passions, and its strategic concerns.

4—Thinking that Training is Sufficient

Training can be essential to help get new employees competent in their new roles, but it is NEVER sufficient on its own. Training should be supported by prompting mechanisms (like job aids), structures and support for learning on the job, reinforcement and follow-through, and coaching to provide feedback, set goals, and lend emotional support.

3—Forgetting the Human Side of Onboarding

New hires are human beings, and, just like the rest of us, they too are influenced by the dynamics of social interaction. They don’t just learn to do a job. They also learn to love and trust a company, a work unit, or a group of coworkers—or they don’t. In return, new hires are either trusted and respected by their coworkers or they’re not.  The research is very clear about this. One of the keys to successful onboarding is the strength of the relationships that are built in the first year of a person’s tenure. The stronger the bonds, the more likely it is that a person will stay and bring value to the organization.

2—Considering Onboarding as Something that Can Be Done Quickly

Some companies offer a one week orientation and then cut loose their new hires to sink or swim. Enlightened companies, on the other hand, realize that onboarding is like relationship-building—it takes time. It takes time to really learn one’s job well. It takes time to integrate into the organizational culture. It takes time to connect with people. Realistic estimates suggest that onboarding can take 6 months, 12 months, or even 18 months to fully integrate a person into a new organization.

1—Not Preparing Supervisors

Supervisors are the single most important leverage point for onboarding success. You’ve probably heard it said that people don’t quit their companies, they quit their supervisors. Well, the flip side can also be said. People don’t join a company, they join a supervisor and his/her workgroup. Unfortunately, most supervisors just have no idea about the importance of onboarding and how to do it correctly. Where best practices give supervisors training and an onboarding checklist, too many supervisors just wing it. The real tragedy is that the investment in onboarding training and a checklist for supervisors is quite small in the greater scheme of things.

Final Thoughts on Onboarding

As a workplace learning-and-performance consultant, when I’ve been called in to advise companies on their onboarding programs, I often see incredibly dedicated professionals who are passionate about welcoming new people into their organizations. Unfortunately, too many times, I see organizations that have the wrong mental models about what makes onboarding successful. It’s a shame that our old mental models keep us from effectiveness—when the research on onboarding now gives us sound prescriptions for making onboarding successful.

Great article on How to Create Great Teachers. It's focused on K-12 education primarily, but there is wisdom in the discussion relevant to workplace learning.

Here are the major points I take away:

  1. Great teachers need deep content knowledge.
  2. Great teachers need good classroom-management verbalization skills.
  3. Great teachers need their content knowledge to be fluently available to them in the context of typical classroom situations. To get this fluency, they need to practice in such situations—and practice linking actions (especially their verbal utterances) to specific classroom situations.