Definition of MicroLearning


I’ve looked for a good definition of microlearning, but because I couldn’t find one, I’ve created my own.

Microlearning involves the use of:

“Relatively short engagements in learning-related activities—typically ranging from a few seconds up to 20 minutes (or up to an hour in some cases)—that may provide any combination of content presentation, review, practice, reflection, behavioral prompting, performance support, goal reminding, persuasive messaging, task assignments, social interaction, diagnosis, coaching, management interaction, or other learning-related methodologies.”

Microlearning has five utilization cases:

  1. Course Replacement
    Provides training content and learning support, often as a replacement for classroom training or long-form elearning.
  2. Course Augmentation
    Provides after-course or within-course streams of short learning interactions to reinforce, strengthen, or deepen learning.
  3. Retrieval Support
    Provides retrieval practice, spaced repetitions, and reminding to ensure knowledge and skills can be remembered when needed.
  4. Just-In-Time (Moment-of-Need) Learning
    Provides information when learners need it to perform a task they are working on.
  5. Behavioral Prompts
    Provides action nudges, task assignments, or performance support to directly prompt and support behavior.

As is probably obvious, these five use cases overlap, and a single microlearning thread may utilize more than one of them. For example, when using microlearning as a replacement for a standard elearning course, you might also build retrieval support and behavioral prompts into your full learning design.
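To make the overlap concrete, here is a minimal scheduling sketch in Python. It assumes a hypothetical thread that combines retrieval support with a single behavioral prompt after a course ends; the function name, intervals, and interaction labels are my own illustrative choices, not recommendations from any particular platform or study.

from datetime import date, timedelta

def schedule_thread(course_end, spacing_days=(2, 7, 21, 60), prompt_offset_days=3):
    """Return (send_date, interaction_type) pairs for one learner's thread."""
    events = []
    # Retrieval support: short quiz questions at expanding intervals.
    for gap in spacing_days:
        events.append((course_end + timedelta(days=gap), "retrieval_quiz"))
    # Behavioral prompt: one action nudge shortly after the course ends.
    events.append((course_end + timedelta(days=prompt_offset_days), "behavioral_prompt"))
    return sorted(events)

# Example: a course ending June 1 yields touches on June 3, 4, 8, 22, and July 31.
for when, kind in schedule_thread(date(2015, 6, 1)):
    print(when.isoformat(), kind)

In practice, each event would trigger a short interaction (a two-question quiz, a one-line nudge) rather than a full lesson, which is what keeps the thread "micro."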

Benchmarking Your Smile Sheets Against Other Companies May Be a Fool’s Game!


The original post appeared in 2011. I have updated it here.

Updated Article

When companies think of evaluation, they often first think of benchmarking their performance against other companies. There are important reasons to be skeptical of this type of approach, especially as a sole source of direction.

I often add this warning to my workshops on how to create more effective smile sheets: Watch out! There are vendors in the learning field who will attempt to convince you that you need to benchmark your smile sheets against your industry. You will spend (waste) a lot of money with these extra benchmarking efforts!

Two forms of benchmarking are common: (1) idea generation and (2) comparison. Idea generation involves looking at other companies’ methodologies and then assessing whether particular methods would work well at our own company. This is a reasonable procedure only to the extent that we can tell whether the other companies face situations similar to ours and whether the methodologies have really been successful at those other companies.

Comparison benchmarking for training and development looks further at a multitude of learning methods and results and specifically attempts to find a wide range of other companies to benchmark against. This approach requires stringent attempts to create valid comparisons. This type of benchmarking is valuable only to the extent that we can determine whether we are comparing our results to good companies or bad and whether the comparison metrics are important in the first place.

Both types of benchmarking require exhaustive efforts and suffer from validity problems. It is just too easy to latch on to other companies’ phantom results (i.e., results that seem impressive but evaporate upon close examination). Picking the right metrics is difficult (i.e., a business can be judged on its stock price, its revenues, profits, market share, etc.). Comparing companies between industries presents the proverbial apples-to-oranges problem. It’s not always clear why one business is better than another (e.g., it is hard to know what really drives Apple Computer’s current success: its brand image, its products, its positioning versus its competitors, its leaders, its financial savvy, its customer service, its manufacturing, its project management, its sourcing, its hiring, or something else). Finally, and most pertinent here, it is extremely difficult to determine which companies are really using best practices (e.g., see Phil Rosenzweig’s highly regarded book The Halo Effect) because companies’ overall results usually cloud and obscure the on-the-job realities of what’s happening.

The difficulty of assessing best practices in general pales in comparison to the difficulty of assessing a company’s training-and-development practices. The problem is that there just aren’t universally accepted, comparable metrics for training and development. Where baseball teams have wins and losses, runs scored, and such, and businesses have revenues, profits, and the like, training and development efforts produce fuzzier numbers—certainly ones that aren’t comparable from company to company. Reviews of the research literature on training evaluation have found very low levels of correlation (usually below .20) between different types of learning assessments (e.g., Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997; Sitzmann, Brown, Casper, Ely, & Zimmerman, 2008).

Of course, we shouldn’t dismiss all benchmarking efforts. Rigorous benchmarking efforts that are understood with a clear perspective can have value. Idea-generation brainstorming is probably more viable than a focus on comparison. By looking to other companies’ practices, we can gain insights and consider new ideas. Of course, we will want to be careful not to move toward the mediocre average instead of looking to excel.

The bottom line on benchmarking from other companies is: be careful, be willing to spend lots of time and money, and don’t rely on cross-company comparisons as your only indicator.

Finally, any results generated by brainstorming with other companies should be carefully considered and pilot-tested before too much investment is made.

 

Smile Sheet Issues

Both of the meta-analyses cited above found that smile sheets correlate with learning at only about r = 0.09, which is virtually no correlation at all. I have detailed smile-sheet design problems in my book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form. In short, most smile sheets focus on learner satisfaction and fail to focus on factors related to actual learning effectiveness. Most smile sheets utilize Likert-like scales or numeric scales that offer learners very little granularity between answer choices, leaving responses open to bias, fatigue, and disinterest. Finally, most learners have fundamental misunderstandings about their own learning (Brown, Roediger, & McDaniel, 2014; Kirschner & van Merriënboer, 2013), so asking them general questions about their perceptions is too often a dubious undertaking.
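If you want to check how weak this relationship is with your own data, the arithmetic is simple. Here is a minimal sketch in Python; the ratings and assessment scores below are hypothetical placeholders, and you would substitute your learners' actual smile-sheet ratings and learning-measure scores.

import numpy as np

# Hypothetical smile-sheet ratings (1-5 scale) and post-training assessment
# scores (percent correct) for eight learners. Replace with your own data.
smile_ratings = np.array([4.5, 4.8, 3.9, 4.2, 4.7, 4.1, 4.6, 3.8])
assessment_scores = np.array([62, 88, 74, 91, 55, 80, 67, 73])

# Pearson correlation between the two measures. A value near zero (the
# meta-analyses cited above report roughly r = 0.09) means the ratings
# tell you very little about whether learning actually occurred.
r = np.corrcoef(smile_ratings, assessment_scores)[0, 1]
print(f"Smile-sheet ratings vs. learning measure: r = {r:.2f}")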

The bottom line is that traditional smile sheets are providing almost everyone with meaningless data in terms of learning effectiveness. When we benchmark our smile sheets against other companies’ smile sheets we compound our problems.

 

Wisdom from Earlier Comments

Ryan Watkins, researcher and industry guru, wrote:

I would add to this argument that other companies are no more static than our own — thus if we implement in September 2011 what they are doing in March 2011 from our benchmarking study, then we are still behind the competition. They are continually changing and benchmarking will rarely help you get ahead. Just think of all the companies that tried to benchmark the iPod, only to later learn that Apple had moved on to the iPhone while the others were trying to “benchmark” what they were doing with the iPod. The competition may have made some money, but Apple continues to win the major market share.

Mike Kunkle, sales training and performance expert, wrote:

Having used benchmarking (carefully and prudently) with good success, I can’t agree with avoiding it, as your title suggests, but do agree with the majority of your cautions and your perspectives later in the post.

Nuance and context matter greatly, as do picking the right metrics to compare, and culture, which is harder to assess. 70/20/10 performance management somehow worked at GE under Welch’s leadership. I’ve seen it fail miserably at other companies and wouldn’t recommend it as a general approach to good people or performance management.

In the sales performance arena, at least, benchmarking against similar companies or competitors does provide real benefit, especially in decision-making about which solutions might yield the best improvement. Comparing your metrics to world-class competitors and calculating what it would mean to you to move in that direction, allows for focus and prioritization, in a sea of choices.

It becomes even more interesting when you can benchmark internally, though. I’ve always loved this series of examples by Sales Benchmark Index:
http://www.salesbenchmarkindex.com/Portals/23541/docs/why-should-a-sales-professional-care-about-sales-benchmarking.pdf

 

Citations

Alliger, G. M., Tannenbaum, S. I., Bennett, W., Jr., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.

Brown, P. C., Roediger, H. L., III, & McDaniel, M. A. (2014). Make It Stick: The Science of Successful Learning. Cambridge, MA: Belknap Press of Harvard University Press.

Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), 169–183.

Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.

Top 10 Reasons to Write a Blog Post Debunking the Learning Styles Myth


To honor David Letterman soon after his sign-off, I’ll use his inverted top-10 design.

The following represent the Top 10 Reasons to Write a Blog Post Debunking the Learning Styles Myth:

10. Several scientific review articles have been published showing that using learning styles to design learning produces no appreciable benefits. See The Debunker Club resource page on learning styles.

9. If you want to help your readers create the most effective learning interventions, you’d do better focusing on other design principles, for example those put forth in the Serious eLearning Manifesto, the Decisive Dozen Research, the Training Maximizers Model, or the books Make It Stick, How We Learn, or Design for How People Learn.

8. There are already great videos debunking the learning-styles myth (Tesia Marshik, Daniel Willingham), so you’re better off spreading the word through your own blog network; through Twitter, Hangouts, and LinkedIn; and with your colleagues at work.

7. The learning styles myth is so pervasive that the first 17 Google search results (as of June 1, 2015) continue to encourage the learning styles idea — even though it is harmful to learners and wasteful as a learning method. Just imagine how many lives you would touch if your blog post jumped into the top searches.

6. It’s a total embarrassment to the learning fields (the K-12 education field, the workplace training field, higher education). We as members of those fields need to get off our asses and do something. Haven’t teachers suffered enough blows to their reputation without having to absorb a pummeling from articles like those in The New York Times and Wired Magazine? Haven’t instructional designers and trainers been buffeted enough by claims that they fail to maximize learning results?

5. Isn’t it about time that we professionals took back our field from vendors and those in the commercial industrial complex who only want to make a buck, who don’t care about the learners, who don’t care about the science, who don’t care about anything but their own special interests? Do what is right! Get off the mat and put a fist in the mouth of the learning-styles industrial complex!

4. Write a blog post on the learning-styles myth because you can have a blast with over-the-top calls to action, like the one I just wrote in #5 above. Boy, that was fun!

3. There’s some evidence that directly confronting advocates of strong ideas — like learning-styles true believers — will only make them more entrenched in their unfounded beliefs. See the Debunking Handbook for details. Therefore, our best strategy may be to focus not on the true believers, but on the general population. In this, our goal should be to create a climate of skepticism about learning styles. You can directly help in this effort by writing a blog post, by taking to Twitter and LinkedIn, and by sharing with your colleagues and friends.

2. Because you’re a professional.

1. Because the learning-styles idea is a myth.

Insert uplifting music here…

Training Maximizers


A few years ago, I created a simple model for training effectiveness based on the scientific research on learning in conjunction with some practical considerations (to make the model’s recommendations leverageable for learning professionals). People keep asking me about the model, so I’m going to briefly describe it here. If you want to look at my original YouTube video about the model — which goes into more depth — you can view that here. You can also see me in my bald phase.

The Training Maximizers Model includes 7 requirements for ensuring our training or teaching will achieve maximum results.

  • A. Valid Credible Content
  • B. Engaging Learning Events
  • C. Support for Basic Understanding
  • D. Support for Decision-Making Competence
  • E. Support for Long-Term Remembering
  • F. Support for Application of Learning
  • G. Support for Perseverance in Learning

Here’s a graphic depiction:

 

Most training today is pretty good at A, B, and C but fails to provide the other supports that learning requires. This is a MAJOR PROBLEM because learners who can’t make decisions (D), learners who can’t remember what they’ve learned (E), learners who can’t apply what they’ve learned (F), and learners who can’t persevere in their own learning (G) are learners who simply haven’t received leverageable benefits.

When we train or teach only to A, B, and C, we aren’t really helping our learners, we aren’t providing a return on the learning investments, and we haven’t done enough to support our learners’ future performance.

 

 

Why the World Needs Research Translators


Research translators are people who read research articles from scientific refereed journals and distill the wisdom from those articles into practical recommendations for practitioners. Sometimes research translators translate one article at a time (or a few), compiling the main points from the article and transforming those main points into recommendations for practice.

More effectively, research translators read many articles about a particular topic and then — based on years of immersion in the research and years of experience with practitioners — make sense of the topic findings in relation to a wider body of research and the needs of practitioners. After developing a comprehensive and practical understanding of the research findings, research translators create simple elegant models and metaphors to help practitioners deeply understand the research findings, while ensuring that recommendations are clear, leverageable, and potent.

Research translators add value because they bridge the gap between the worlds of research and practice — between groups who speak different languages.

Some researchers are brilliant in translating research into practice. Most are not. We shouldn’t blame them for their deficiencies. The world they inhabit pushes against research translation in myriad ways. Researchers are not incentivized to do research translation. Indeed, those who write popular books are often scorned by other researchers. Researchers don’t have time to hang out with practitioners to learn their language, to deeply understand their needs, to see how research gets understood/misunderstood and applied, to see what obstacles are faced. Researchers’ language pool so controls their thinking and verbal output that they can’t help using jargon that overwhelms the working-memory capacity of their readers and listeners.

At a minimum, here is what research translation requires:

  1. Deep and current understanding of a wide body of research.
  2. Deep and current understanding of the practitioner ecosystem, language, motivations, incentive systems, body of knowledge, blind spots and misconceptions, organizational influences, etc.
  3. Ability to compile research into practical wisdom, utilize metaphor to support comprehension, create models that balance simplicity with precision, craft recommendations that propel appropriate applications of the research while avoiding misapplication, etc.
  4. Ability to reach a wide swath of practitioners to ensure that the research-based messages are heard.
  5. Ability to craft messaging that ensures that research-based messages are understood, remembered, and found compelling enough to generate actual attempts to be used.
  6. Ability to provide corrective feedback and encouragement as practitioners attempt to utilize research-based messages.

Researchers’ Biggest Blind Spot

In my experience, most researchers’ biggest blind spot is that they just can’t communicate without the use of jargon and big words that overwhelm the working-memory capacity of those they are attempting to reach. Even when they try to communicate plainly to practitioners, they just can’t do it.

Here is an example from a recent book that the authors claim is written to be accessible to practitioners. I won’t “out” the researchers here because I love their book and want it to do well.

My yellow highlights indicate jargon that is likely to overload working memory.

More than 50% of the paragraph is jargon, rendering the paragraph virtually indecipherable.

The Tragedy of the Uncommon

There are very few research translators in my field, the learning field. Ruth Clark recently retired, leaving a gaping hole. There’s simply no place for us — that is, there’s no place to earn a living as a research translator. The academy wants researchers, not research translators. Industry wants practitioners, not research translators. Those of us who try to carve out a niche as research translators find that research translation hardly pays a penny, that we must be consultants first. In some ways, this is great because it keeps us close to practitioners — and we get to see on a daily basis how research can be used to make learning more effective. In other ways, being a consultant doesn’t really give us enough time to do the research.

It is said that a successful consultant will allocate time as follows:

  • 3 days a week to paid work.
  • 1 day a week to marketing.
  • 1 day a week to administrative tasks.

For those translating research we can add:

  • 2 days a week compiling research.
  • 2 days a week crafting communications to share the research.

That adds up to nine days of work in a five-day week. Ruth Clark once told me that surviving as a research translator was “really hard.” And, of course, she is the most successful full-time research translator in the history of our field.

It doesn’t make sense to wish away the realities faced, to hope the academy would make room for research translators, to hope that industry would have at least a few positions open. It’s just not going to happen anytime soon.

There is a window, however. Perseverance, perhaps? But more importantly, innovating new business models that make research translation a sustainable option.

Summary

Research translation ain’t easy, but it’s a vital part of the research-to-practice ecosystem.


Kirkpatrick Model Good or Bad? The Epic Mega Battle!


Clark Quinn and I have started debating top-tier issues in the workplace learning field. In the first one, we debated who has the ultimate responsibility in our field. In the second one, we debated whether the tools in our field are up to the task.

In this third installment of the series, we’ve engaged in an epic battle about the worth of the 4-Level Kirkpatrick Model. Clark and I believe that these debates help elucidate critical issues in the field. I also think they help me learn. This debate still intrigues me, and I know I’ll come back to it in the future to gain wisdom.

And note, Clark and I certainly haven’t resolved all the issues raised. Indeed, we’d like to hear your wisdom and insights in the comments section.

————————–

Will:

I want to pick on the second-most renowned model in instructional design, the 4-Level Kirkpatrick Model. It produces some of the most damaging messaging in our industry. Here’s a short list of its treacherous triggers: (1) It completely ignores the importance of remembering to the instructional design process, (2) It pushes us learning folks away from a focus on learning—where we have the most leverage, (3) It suggests that Level 4 (organizational results) and Level 3 (behavior change) are more important than measuring learning—but this is an abdication of our responsibility for the learning results themselves, (4) It implies that Level 1 (learner opinions) is on the causal chain from training to performance, but two major meta-analyses show this to be false—smile sheets, as now utilized, are not correlated with learning results! If you force me, I’ll share a quote from a top-tier research review that damns the Kirkpatrick model with a roar. “Buy the ticket, take the ride.”

 

Clark:

I laud that you’re not mincing words! And I’ll agree and disagree. To address your concerns: 1) Kirkpatrick is essentially orthogonal to the remembering process. It’s not about learning, it’s about aligning learning to impact. 2) I also think that Kirkpatrick doesn’t push us away from learning, though it isn’t exclusive to learning (despite everyday usage). Learning isn’t the only tool, and we should be willing to use job aids (read: performance support) or any other mechanism that can impact the organizational outcome. We need to be performance consultants! 3) Learning in and of itself isn’t important; it’s what we’re doing with it that matters. You could ensure everyone could juggle chainsaws, but unless it’s Cirque du Soleil, I wouldn’t see the relevance.

So I fully agree with Kirkpatrick on working backwards from the org problem and figuring out what we can do to improve workplace behavior. Level 2 is about learning, which is where your concerns are, in my mind, addressed. But then you need to go back and see if what they’re able to do now is what is going to help the org! And I’d counter that the thing I worry about is the faith that if we do learning, it is good. No, we need to see if that learning is impacting the org. 4) Here’s where I agree, that Level 1 (and his numbering) led people down the garden path: people seem to think it’s ok to stop at level 1! Which is maniacal, because what learners think has essentially zero correlation with whether it’s working (as you aptly say). So it has led to some really bad behavior, serious enough to make me think it’s time for some recreational medication!

 

Will:

Actually, I’m flashing back to grad school. “Orthogonal” was one of the first words I remember learning in the august halls of my alma mater. But my digression is perpendicular to this discussion, so forget about it! Here’s the thing. A model that is supposed to align learning to impact ought to have some truth about learning baked into its DNA. It’s less than half-baked, in my not-so-humble opinion.

As they might say in the movies, the Kirkpatrick Model is not one of God’s own prototypes! We’re responsible people, so we ought to have a model that doesn’t distract us from our most important leverage points. Working backward is fine, but we’ve got to go all the way through the causal path to get to the genesis of the learning effects. Level 1 is a distraction, not a root. Yes, Level 2 is where the K-Model puts learning, but learning back in 1959 is not the same animal that it is today. We actually have a pretty good handle on how learning works now. Any model focused on learning evaluation that omits remembering is a model with a gaping hole.

 

Clark:

Ok, now I’m confused.  Why should a model of impact need to have learning in its genes?  I don’t care whether you move the needle with performance support, formal learning, or magic jelly beans; what K talks about is evaluating impact.  What you measure at Level 2 is whether they can do the task in a simulated environment.  Then you see if they’re applying it at the workplace, and whether it’s having an impact.

No argument that we have to use an approach to evaluate whether we’re having the impact at level 2 that we should, but to me that’s a separate issue.  Kirkpatrick just doesn’t care what tool we’re using, nor should it.  Kirkpatrick doesn’t care whether you’re using behavioral, cognitive, constructivist, or voodoo magic to make the impact, as long as you’re trying something.

We should be defining our metric for level 2, arguably, to be some demonstrable performance that we think is appropriate, but I think the model can safely be ignorant of the measure we choose at level 2 and 3 and 4.  It’s about making sure we have the chain.  I’d be worried, again, that talking about learning at level 2 might let folks off the hook about level 3 and 4 (which we see all too often) and make it a matter of faith. So I’m gonna argue that including the learning into the K model is less optimal than keeping it independent. Why make it more complex than need be?  So, now, what say you?

 

Will:

Clark! How can you say the Kirkpatrick model is agnostic to the means of obtaining outcomes? Level 2 is “LEARNING!” It’s not performance support, it’s not management intervention, it’s not methamphetamine. Indeed, the model was focused on training.

The Kirkpatricks (Don and Jim) have argued—I’ve heard them live and in the flesh—that the four levels represent a causal pathway from 1 to 4. In addition, the notion of working backward implies that there is a causal connection between the levels. The four-level model implies that a good learner experience is necessary for learning, that learning is necessary for on-the-job behavior, and that successful on-the-job behavior is necessary for positive organizational results. Furthermore, almost everybody interprets it this way.

The four levels imply impact at each level, but look at all the factors that they are missing! For example, learners need to be motivated to apply what they’ve learned. Where is that in the model? Motivation can be an impact too! We as learning professionals can influence motivation. There are other impacts we can make as well. We can make an impact on what learners remember, whether learners are supported back on the job, etc.

Here’s what a 2012 seminal research review from a top-tier scientific journal concluded: “The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders…” (p. 91). That’s pretty damning!

 

Clark:

I don’t see the Kirkpatrick model as an evaluation of the learning experience, but instead of the learning impact.   I see it as determining the effect of a programmatic intervention on an organization.  Sure, there are lots of other factors: motivation, org culture, effective leadership, but if you try to account for everything in one model you’re going to accomplish nothing.  You need some diagnostic tools, and Kirkpatrick’s model is one.

If they can’t perform appropriately at the end of the learning experience (level 2), that’s not a Kirkpatrick issue, the model just lets you know where the problem is. Once they can, and it’s not showing up in the workplace (level 3), then you get into the org factors. It is about creating a chain of impact on the organization, not evaluating the learning design.  I agree that people misuse the model, so when people only do 1 or 2, they’re wasting time and money. Kirkpatrick himself said he should’ve numbered it the other way around.

Now if you want to argue that that, in itself, is enough reason to chuck it, fine, but let’s replace it with another impact model with a different name, but the same intent of focusing on the org impact, workplace behavior changes, and then intervention. I hear a lot of venom directed at the Kirkpatrick model, but I don’t see it ‘antithetical to learning’.

And I worry the contrary; I see too many learning interventions done without any consideration of the impact on the organization.  Not just compliance, but ‘we need a course on X’ and they do it, without ever looking to see whether a course on X will remedy the biz problem. What I like about Kirkpatrick is that it does (properly used) put the focus on the org impact first.

 

Will:

Sounds like you’re holding on to Kirkpatrick because you like its emphasis on organizational performance. Let’s examine that for a moment. Certainly, we’d like to ensure that Intervention X produces Outcome Y. You and I agree. Hugs all around. Let’s move away from learning for a moment. Let’s go Mad Men and look at advertising. Today, advertising is very sophisticated, especially online advertising because companies can actually track click-rates, and sometimes can even track sales (for items sold online). So, in a best-case scenario, it works this way:

  • Level 1 – Web surfers say they like the advertisement.
  • Level 2 – Web surfers show comprehension by clicking on a link.
  • Level 3 – Web surfers spend time reading/watching on the splash page.
  • Level 4 – Web surfers buy the product offered on the splash page.

A business person’s dream! Except that only a very small portion of sales actually happen this way (although, I must admit, the rate is increasing). But let’s look at a more common example. When a car is advertised, it’s impossible to track advertising through all four levels. People who buy a car at a dealer can’t be definitively tracked to an advertisement.

So, would we damn our advertising team? Would we ask them to prove that their advertisement increased car sales? Certainly, they are likely to be asked to make the case…but it’s doubtful anybody takes those arguments seriously… and shame on folks who do!

In case I’m ignorant of how advertising works behind the scenes—which is a possibility; I’m a small “m” mad man—let me use some other organizational roles to make my case.

  • Is our legal team asked to prove that their performance in defending a lawsuit is beneficial to the company? No, everyone appreciates their worth.
  • Do our recruiters have to jump through hoops to prove that their efforts have organizational value? They certainly track their headcounts, but are they asked to prove that those hires actually do the company good? No!
  • Do our maintenance staff have to get out spreadsheets to show how their work saves on the cost of new machinery? No!
  • Do our office cleaning professionals have to utilize regression analyses to show how they’ve increased morale and productivity? No again!

There should be a certain disgust in feeling we have to defend our good work every time…when others don’t have to.

I use the Mad Men example to say that all this OVER-EMPHASIS on proving that our learning is producing organizational outcomes might be a little too much. A couple of drinks is fine, but drinking all day is likely to be disastrous.

Too many words is disastrous too…But I had to get that off my chest…

 

Clark:

I do see a real problem in communication here, because I see that the folks you cite *do* have to have an impact. They aren’t just being effective, but they have to meet some level of effectiveness. To use your examples: the legal team has to justify its activities in terms of the impact on the business. If they’re too tightened down about communications in the company, they might limit liability, but they can also stifle innovation. And if they don’t provide suitable prevention against legal action, they’re turfed out. Similarly, recruiters have to show that they’re not interviewing too many, or too few people, and getting the right ones. They’re held up against retention rates and other measures. The maintenance staff does have to justify headcount against the maintenance costs, and those costs against the alternative of replacement of equipment (or outsourcing the servicing). And the office cleaning folks have to ensure they’re meeting environmental standards at an efficient rate. There are standards of effectiveness everywhere in the organization except L&D. Why should we be special?

Let’s go on: sales has to estimate numbers for each quarter, and put that up against costs. They have to hit their numbers, or explain why (and if their initial estimates are low, they can be chastised for not being aggressive enough). They also worry about the costs of sales, hit rates, and time to a signature. Marketing, too, has to justify expenditure. To use your example, they do care about how many people come to the site, how long they stay, how many pages they hit, etc. And they try to improve these. At the end of the day, the marketing investment has to impact the sales. Eventually, they do track site activity to dollars. They have to. If we don’t, we get boondoggles. If you don’t rein in marketing initiatives, you get these shenanigans where existing customers are boozed up and given illegal gifts that eventually cause a backlash against the company. Shareholders get a wee bit stroppy when they find that investments aren’t paying off, and that the company is losing unnecessary money.

It’s not a case of ‘if you build it, it is good’! You and I both know that much of what is done in the name of formal learning (and org L&D activity in general) isn’t valuable. People take orders and develop courses where a course isn’t needed. Or create learning events that don’t achieve the outcomes. Kirkpatrick is the measure that tracks learning investments back to impact on the business. And that’s something we have to start paying attention to. As someone once said, if you’re not measuring, why bother? Show me the money! And if you’re just measuring your efficiency, that your learning is having the desired behavioral change, how do you know that behavior change is necessary to the organization? And until we get out of the mode where we do the things we do on faith, and start understanding whether we have a meaningful impact on the organization, we’re going to continue to be the last to have an influence on the organization, and the first to be cut when things are tough. Yet we have the opportunity to be as critical to the success of the organization as IT! I can’t stand by seeing us continue to do learning without knowing that it’s of use. Yes, we do need to measure our learning for effectiveness as learning, as you argue, but we have to also know that what we’re helping people be able to do is what’s necessary. Kirkpatrick isn’t without flaws (numbering, Level 1, etc.), but it’s a clear value chain that we need to pay attention to. I’m not saying in lieu of measuring our learning effectiveness, but in addition. I can’t see it any other way.

 

Will:

Okay, I think we’ve squeezed the juice out of this tobacco. I would have said “orange” but the Kirkpatrick Model has been so addictive for so long…and black is the new orange anyway…

I want to pick up on your great examples of individuals in an organization needing to have an impact. You noted, appropriately, that everyone must have an impact. The legal team has to prevent lawsuits, recruiters have to find acceptable applicants, maintenance has to justify their worth compared to outsourcing options, cleaning staff have to meet environmental standards, sales people have to sell, and so forth.

Here is the argument I’m making: Employees should be held to account within their circles of maximum influence, and NOT so much in their circles of minimum influence.

So for example, let’s look at the legal team.

Doesn’t it make sense that the legal team should be held to account for the number of lawsuits and amount paid in damages more than they should be held to account for the level of innovation and risk taking within the organization?

What about the cleaning professionals?

Shouldn’t we hold them more accountable for measures of perceived cleanliness and targeted environmental standards than for the productivity of the workforce?

What about us learning-and-performance professionals?

Shouldn’t we be held more accountable for whether our learners comprehend and remember what we’ve taught them than for whether they end up increasing revenue and lowering expenses?

I agree that we learning-and-performance professionals have NOT been properly held to account. As you say, “There are standards of effectiveness everywhere in the organization except L&D.” My argument is that we, as learning-and-performance professionals, should have better standards of effectiveness—but that we should have these largely within our maximum circles of influence.

Among other things, we should be held to account for the following impacts:

  • Whether our learning interventions create full comprehension of the learning concepts.
  • Whether they create decision-making competence.
  • Whether they create and sustain remembering.
  • Whether they promote a motivation and sense-of-efficacy to apply what was learned.
  • Whether they prompt actions directly, particularly when job aids and performance support are more effective.
  • Whether they enable successful on-the-job performance.
  • Et cetera.

Final word, Clark?

 

Clark:

First, I think you’re hoist by your own petard.  You’re comparing apples and your squeezed orange. Legal is measured by lawsuits, maintenance by cleanliness, and learning by learning. Ok that sounds good, except that legal is measured by lawsuits against the organization. And maintenance is measured by the cleanliness of the premises.  Where’s the learning equivalent?  It has to be: impact on decisions that affect organizational outcomes.  None of the classic learning evaluations evaluate whether the objectives are right, which is what Kirkpatrick does. They assume that, basically, and then evaluate whether they achieve the objective.

That said, Will, if you can throw around diagrams, I can too. Here’s my attempt to represent the dichotomy. Yes, you’re successfully addressing the impact of the learning on the learner. That is, can they do the task. But I’m going to argue that that’s not what Kirkpatrick is for. It’s to address the impact of the intervention on the organization. The big problem is, to me, whether the objectives we’ve developed the learning to achieve are objectives that are aligned with organizational need. There’s plenty of evidence it’s not.

 

So here I’m trying to show what I see K doing. You start with the needed business impact: more sales, lower compliance problems, what have you. Then you decide what has to happen in the workplace to move that needle. Say, shorter time to sales, so the behavior is decided to be timeliness in producing proposals. Let’s say the intervention is training on the proposal template software. You design a learning experience to address that objective, to develop ability to use the software. You use the type of evaluation you’re talking about to see if it’s actually developing their ability. Then you use K to see if it’s actually being used in the workplace (are people using the software to create proposals), and then to see if it’s affecting your metrics of quicker turnaround. (And, yes, you can see if they like the learning experience, and adjust that.)

And if any one element isn’t working: learning, uptake, impact, you debug that.  But K is evaluating the impact process, not the learning design. It should flag if the learning design isn’t working, but it’s not evaluating your pedagogical decisions, etc. It’s not focusing on what the Serious eLearning Manifesto cares about, for instance. That’s what your learning evaluations do, they check to see if the level 2 is working. But not whether level 2 is affecting level 4, which is what ultimately needs to happen. Yes, we need level 2 to work, but then the rest has to fall in line as well.

My point about orthogonality is that K is evaluating the horizontal, and you’re saying it should address the vertical. That, to me, is like saying we’re going to see if the car runs by ensuring the engine runs. Even if it does, but if the engine isn’t connected through the drivetrain to the wheels, it’s irrelevant. So we do want a working, well-tuned, engine, but we also want a clutch or torque converter, transmission, universal joint, driveshaft, differential, etc. Kirkpatrick looks at the drive train, learning evaluations look at the engine.

We don’t have to come to a shared understanding, but I hope this at least makes my point clear.

 

Will:

Okay readers! Clark and I have fought to a stalemate… He says that the Kirkpatrick model has value because it reminds us to work backward from organizational results. I say the model is fatally flawed because it doesn’t incorporate wisdom about learning. Now it’s your turn to comment. Can you add insights? Please do!

 

Video on Learning Objectives


There are so many confusions and mythologies on learning objectives that I thought I’d create a video to help disambiguate some of the worst misinformation.

Here is the video. Below the video, I have created a quiz so you can challenge and reinforce your knowledge. Watch the video first, then a day or more later–if you can manage it–take the quiz. Or, take the quiz first, then immediately watch the video–only later, after a few days, look at the quiz feedback.

——

[Embedded video]

——

Take the Quiz — Before or After Watching the Video

——
Mythical Retention Data & The Corrupted Cone


The Danger

Have you ever seen the following “research” presented to demonstrate some truth about human learning?

Unfortunately, all of the above diagrams are evangelizing misleading information. Worse, these fabrications have been rampant over the last two or three decades—and seem to have accelerated during the age of the internet. Indeed, a Google image search for “Dale’s Cone” produces about 80% misleading information, as you can see below from a recent search.

Search 2015: [image of search results]

Search 2017: [image of search results]

This proliferation is a truly dangerous and heinous result of incompetence, deceit, confirmatory bias, greed, and other nefarious human tendencies.

It is also hurting learners throughout the world—and it must be stopped. Each of us has a responsibility in this regard.

 

New Research

Fortunately, a group of tireless researchers—who I’ve had the honor of collaborating with—has put a wooden stake through the dark heart of this demon. In the most recent edition of the scientific journal Educational Technology, Deepak Subramony, Michael Molenda, Anthony Betrus, and I (my contribution was small) produced four articles on the dangers of this misinformation and its genesis. After working separately over the years to debunk this bit of mythology, the four of us have come together in a joint effort to rally the troops—people like you, dedicated professionals who want to create the best outcomes for your learners.

Here are the citations for the four articles. Later, I will have a synopsis of each article.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Mythical Retention Chart and the Corruption of Dale’s Cone of Experience. Educational Technology, Nov/Dec 2014, 54(6), 6-16.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Previous Attempts to Debunk the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 17-21.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Good, the Bad, and the Ugly: A Bibliographic Essay on the Corrupted Cone. Educational Technology, Nov/Dec 2014, 54(6), 22-31.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Timeline of the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 31-24.

Many thanks to Lawrence Lipsitz, the editor of Educational Technology, for his support, encouragement, and efforts in making this possible!

To get a copy of the “Special Issue” or to subscribe to Educational Technology, go to this website. (Note, 2017: I don’t think the journal is being published anymore.)

 

The Background

There are two separate memes we are debunking, what we’ve labeled (1) the mythical retention chart and (2) the corruption of Dale’s Cone of Experience. As you will see—or might have noticed in the images I previously shared—the two have often been commingled.

Here is an example of the mythical retention chart:

 

Oftentimes though, this is presented in text:

“People Remember:

  • 10 percent of what they read;
  • 20 percent of what they hear;
  • 30 percent of what they see;
  • 50 percent of what they see and hear;
  • 70 percent of what they say; and
  • 90 percent of what they do and say.”

Note that the numbers proffered are not always the same, nor are the factors alleged to spur learning. So, for example, you can see that on the graphic, people are said to remember 30 percent of what they hear, but in the text, the percentage is 20 percent. In the graphic, people remember 80 percent when they are collaborating, but in the text they remember 70 percent of what they SAY. I’ve looked at hundreds of examples, and the variety is staggering.

Most importantly, the numbers do NOT provide good guidance for learning design, as I will detail later.

Here is a photocopied image of the original Dale’s Cone:

Edgar Dale (1900-1985) was an American educator who is best known for developing “Dale’s Cone of Experience” (the cone above) and for his work on how to incorporate audio-visual materials into the classroom learning experience. The image above was photocopied directly from his book, Audio-visual methods in teaching (from the 1969 edition).

 

You’ll note that Dale included no numbers in his cone. He also warned his readers not to take the cone too literally.

Unfortunately, someone somewhere decided to add the misleading numbers. Here are two more examples:

 

I include these two examples to make two points. First, note how one person clearly stole from the other one. Second, note how sloppy these fabricators are. They include a Confucius quote that directly contradicts what the numbers say. On the left side of the visuals, Confucius is purported to say that hearing is better than seeing, while the numbers on the right of the visuals say that seeing is better than hearing. And, by the way, Confucius did not actually say what he is alleged to have said! What seems clear from looking at these and other examples is that people don’t do their due diligence—their ends seem to justify their means—and they are damn sloppy, suggesting that they don’t think their audiences will examine their arguments closely.

By the way, these deceptions are not restricted to the English-speaking world:

 

Intro to the Special Issue of Educational Technology

As Deepak Subramony and Michael Molenda say in the introduction to the Special Issue of Educational Technology, the four articles presented seek to provide a “comprehensive and complete analysis of the issues surrounding these tortured constructs.” They also provide “extensive supporting material necessary to present a comprehensive refutation of the aforementioned attempts to corrupt Dale’s original model.”

In the concluding notes to the introduction, Subramony and Molenda leave us with a somewhat dystopian view of information trajectory in the internet age. “In today’s Information Age it is immensely difficult, if not practically impossible, to contain the spread of bad ideas within cyberspace. As we speak, the corrupted cone and its attendant “data” are akin to a living organism—a virtual 21st century plague—that continues to spread and mutate all over the World Wide Web, most recently to China. It therefore seems logical—and responsible—on our part that we would ourselves endeavor to continue our efforts to combat this vexing misinformation on the Web as well.”

Later, I will provide a section on what we can all do to help debunk the myths and inaccuracies embedded in these fabrications.

Now, I provide a synopsis of each article in the Special Edition.


Synopsis of First Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Mythical Retention Chart and the Corruption of Dale’s Cone of Experience. Educational Technology, Nov/Dec 2014, 54(6), 6-16.

The authors point out that, “Learners—both face-to-face and distant—in classrooms, training centers, or homes are being subjected to lessons designed according to principles that are both unreliable and invalid. In any profession this would be called malpractice.” (p. 6).

The article makes four claims.

Claim 1: The Data in the Retention Chart is Not Credible

First, there is no body of research that supports the data presented in the many forms of the retention chart. That is, there is no scientific data—or other data—that supports the claim that People Remember some percentage of what they learned. Interestingly, although people have cited research from 1943, 1947, 1963, and 1967 as the defining source of the data, the numbers—10%, 20%, 30%, and so on—actually appeared as early as 1914 and 1922, when they were presented as information long known. A few years ago, I compiled research on actual percentages of remembering. You can access it here.

Second, the fact that the numbers are all divisible by 5 or 10 makes it obvious to anyone who has done research that these are not numbers derived from actual research. Human variability precludes such round numbers. In addition, as pointed out as early as 1978 by Dwyer, there is the question of how the data were derived—what were learners actually asked to do? Note, for example, that the retention chart data always measure—among other things—how much people remember by reading, hearing, and seeing. How people could read without seeing is an obvious confusion. What are people doing when they only see and don’t read or listen? Also problematic is how you’d create a fair test to compare situations where learners listened to or watched something. Are they tested on different tests (one where they see and one where they listen), which seems to allow bias, or are they tested on the same test, in which case one group would be at a disadvantage because they aren’t taking the test in the same context in which they learned?

Third, the data portrayed don’t relate to any other research in the scientific literature on learning. As the authors write, “There is within educational psychology a voluminous literature on remembering and learning from various mediated experiences. Nowhere in this literature is there any summary of findings that remotely resembles the fictitious retention chart.” (p. 8)

Finally, as the authors say, “Making sense of the retention chart is made nearly impossible by the varying presentations of the data, the numbers in the chart being a moving target, altered by the users to fit their individual biases about desirable training methods.” (p. 9).

Claim 2: Dale’s Cone is Misused.

Dale’s Cone of Experience is a visual depiction that portrays more concrete learning experiences at the bottom of the cone and more abstract experiences at the top of the cone. As the authors write, “The cone shape was meant to convey the gradual loss of sensory information” (p. 9) in the learning experiences as one moved from lower to higher levels on the cone.

“The root of all the perversions of the Cone is the assumption that the Cone is meant to be a prescriptive guide. Dale definitely intended the Cone to be descriptive—a classification system, not a road map for lesson planning.” (p. 10)

Claim 3: Combining the Retention Chart Data with Dale’s Cone

“The mythical retention data and the concrete-to-abstract cone evolved separately throughout the 1900’s, as illustrated in [the fourth article] ‘Timeline of the Mythical Retention Chart and Corrupted Dale’s Cone.’ At some point, probably around 1970, some errant soul—or perhaps more than one person—had the regrettable idea of overlaying the dubious retention data on top of Dale’s Cone of Experience.” (p. 11). We call this concoction the corrupted cone.

“What we do know is that over the succeeding years [after the original corruption] the corrupted cone spread widely from one source to another, not in scholarly publications—where someone might have asked hard questions about sources—but in ephemeral materials, such as handouts and slides used in teaching or manuals used in military or corporate training.” (p. 11-12).

“With the growth of the Internet, the World Wide Web, after 1993 this attractive nuisance spread rapidly, even virally. Imagine the retention data as a rapidly mutating virus and Dale’s Cone as a host; then imagine the World Wide Web as a bathhouse. Imagine the variety of mutations and their resistance to antiviral treatment. A Google Search in 2014 revealed 11,000 hits for ‘Dale’s Cone,’ 14,500 for ‘Cone of Learning,’ and 176,000 for ‘Cone of Experience.’ And virtually all of them are corrupted or fallacious representations of the original Dale’s cone. It just might be the most widespread pedagogical myth in the history of Western civilization!” (p. 11).

Claim 4: Murky Provenance

People who present the fallacious retention data and/or the corrupted cone often cite other sources—that might seem authoritative. Dozens of attributions have been made over the years, but several sources appear over and over, including the following:

  • Edgar Dale
  • Wiman & Meierhenry
  • Bruce Nyland
  • Various oil companies (Mobil, Standard Oil, Socony-Vacuum Oil, etc.)
  • NTL Institute
  • William Glasser
  • British Audio-Visual Society
  • Chi, Bassok, Lewis, Reimann, & Glaser (1989).

Unfortunately, none of these attributions hold up; the cited sources do not actually support the data.

Conclusion:

“The retention chart cannot be supported in terms of scientific validity or logical interpretability. The Cone of Experience, created by Edgar Dale in 1946, makes no claim of scientific grounding, and its utility as a prescriptive theory is thoroughly unjustified.” (p. 15)

“No qualified scholar would endorse the use of this mish-mash as a guide to either research or design of learning environments. Nevertheless, [the corrupted cone] obviously has an allure that surpasses logical considerations. Clearly, it says something that many people want to hear. It reduces the complexity of media and method selection to a simple and easy to remember formula. It can thus be used to support a bias toward whatever learning methodology might be in vogue. Users seem to employ it as pseudo-scientific justification for their own preferences about media and methods.” (p. 15)


Synopsis of Second Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Previous Attempts to Debunk the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 17-21.

The authors point to earlier attempts to debunk the mythical retention data and the corrupted cone. “Critics have been attempting to debunk the mythical retention chart at least since 1971. The earliest critics, David Curl and Frank Dwyer, were addressing just the retention data.  Beginning around 2002, a new generation of critics has taken on the illegitimate combination of the retention chart and Edgar Dale’s Cone of Experience – the corrupted cone.” (p. 17).

Interestingly, we only found two people who attempted to debunk the retention “data” before 2000. This could be because we failed to find other examples that existed, or it might just be because there weren’t that many examples of people sharing the bad information.

Starting in about 2002, we noticed many sources of refutation. I suspect this has to do with two things. First, it is easier to quickly search human activity in the internet age, giving an advantage in seeking examples. Second, the internet also makes it easier for people to post the erroneous information and share it to a universal audience.

The bottom line is that there have been a handful of people—in addition to the four authors—who have attempted to debunk the bogus information.


Synopsis of Third Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Good, the Bad, and the Ugly: A Bibliographic Essay on the Corrupted Cone. Educational Technology, Nov/Dec 2014, 54(6), 22-31.

The authors of the article provide a series of brief synopses of the major players who have been cited as sources of the bogus data and corrupted visualizations. The goal here is to give you—the reader—additional information so you can make your own assessment of the credibility of the research sources provided.

Most people—I suspect—will skim through this article with a modest twinge of voyeuristic pleasure. I did.


Synopsis of Fourth Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Timeline of the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 31-34.

The authors present a decade-by-decade outline of examples of the reporting of the bogus information, from 1900 to the 2000s. The outline represents great detective work by my co-authors, who have spent years searching databases, reading articles, and reaching out to individuals and institutions in search of the genesis and rebirth of the bogus information. I’m in continual awe of their exhaustive efforts!

The timeline includes scholarly work such as the “Journal of Education,” as well as numerous books, academic courses, corporate training, government publications, and military guidelines.

The breadth and depth of the examples demonstrate clearly that no area of the learning profession has been immune to the disease of poor information.


Synopsis of the Exhibits:

The authors catalog 16 different examples of the visuals that have been used to convey the mythical retention data and/or the corrupted cone. They also present about 25 text examples.

The visual examples are black-and-white canonical versions and, given these limitations, can’t convey the wild variety of examples available now on the internet. Still, their very variety shows just how often people have modified Dale’s Cone to support their own objectives.


My Conclusions, Warnings, and Recommendations

The four articles in the special edition of Educational Technology represent a watershed moment in the history of misinformation in the learning profession. The articles utilize two examples—the mythical retention data (“People remember 10%, 20%, 30%…”) and the numerical corruptions of Dale’s Cone—and demonstrate the following:

  1. There are definitively bogus data sources floating around the learning profession.
  2. These bogus information sources damage the effectiveness of learning and hurt learners.
  3. Authors of these bogus examples do not do their due diligence in confirming the validity of their research sources. They blithely reproduce sources or augment them before conveying them to others.
  4. Consumers of these bogus information sources do not do their due diligence in being skeptical, in expecting and demanding validated scientific information, or in pushing back against those who convey weak information.
  5. Those who stand up publicly to debunk such misinformation—though nobly fighting a good fight—do not seem to be winning the war against this misinformation.
  6. More must be done if we are to limit the damage.

Some of you may chafe at my tone here, and if I had more time I might have been more careful in my wording. But still, this stuff matters! Moreover, these articles focus on only two examples of bogus memes in the learning field. There are many more! Learning styles, anyone?

Here is what you can do to help:

  1. Be skeptical.
  2. When conveying or consuming research-based information, check the actual source. Does it say what it is purported to say? Is it a scientifically-validated source? Are there corroborating sources?
  3. Gently—perhaps privately—let conveyors of bogus information know that they are conveying bogus information. Show them your sources so they can investigate for themselves.
  4. When you catch someone conveying bogus information, make note that they may be the kind of person who is lazy or corrupt in the information they convey or use in their decision making.
  5. Punish, sanction, or reprimand those in your sphere of influence who convey bogus information. Be fair and don’t be an ass about it.
  6. Make or take opportunities to convey warnings about the bogus information.
  7. Seek out scientifically-validated information and the people and institutions who tend to convey this information.
  8. Document more examples.

To this end, Anthony Betrus—on behalf of the four authors—has established www.coneofexperience.com. The purpose of this website is to provide a place for further exploration of the issues raised in the four articles. It provides the following:

  • A series of timelines
  • Links to other debunking attempts
  • A place for people to share stories about their experiences with the bogus data and visuals

The learning industry also has responsibilities.

  1. Educational institutions must ensure that validated information is more likely to be conveyed to their students, within the bounds of academic freedom, of course.
  2. Educational institutions must teach their students how to be good consumers of “research,” “data,” and information (more generally).
  3. Trade organizations must provide better introductory education for their members; offer more myth-busting articles, blog posts, and videos; and push a stronger evidence-based-practice agenda.
  4. Researchers have to partner with research translators more often to get research-based information to real-world practitioners.


Triggered Action Planning


This article was originally published in Will’s Insight News, my monthly newsletter.

It has been updated and improved to include new information.

Click here if you want to sign up for my newsletter…

Radically Improved Action Planning

Using Cognitive Triggers to Support On-the-Job Performance

Most of us who have been trainers have tried one or more methods of action planning–hoping to get our learners to apply what they’ve learned back on the job. The most common form of action planning goes something like this (at the end of a training program):

“Okay, take a look at this action-planning handout. Think of 3 things from the course you’d like to take away and apply back on the job. This is critically important. If you feel you’ve learned something you’d like to use, you won’t get the results you want if you forget what your goals are. On the handout, you’ll see space to write down your 3 action-planning goals. I’m going to give you 20 minutes to do this because it’s so important!”

Unfortunately, that method is likely to get less than half the follow-through that another–research-based–method may get you!

When we as trainers do action planning, we are recognizing that learning is not enough. We want to make sure that all of our passionate, exhaustive efforts at training are not wasted. If we’re honest with ourselves, we know that if our learners forget everything they’ve learned, then we really haven’t been effective. This goes for e-learning as well. There’s a lot of effort that goes into creating an e-learning course–and, if we can maximize the benefits through effective action planning, then we ought to do it.

 

Before sharing my radically improved action-planning method, it’s critical that I motivate it. Look at the diagram above. It shows that the human mind is subject to both conscious and sub-conscious messages. It also shows that the sub-conscious channel uses a broader bandwidth–and that when humans process messages consciously, they often filter those messages in ways that limit their effectiveness.

One of the most important findings from psychological research in the past 10 years–I hate to call it "brain science" because that’s an inaccurate tease–is that much of what controls human thinking comes from, or is influenced by, sub-conscious primes. Speed-limit signs (conscious messages to slow down) are not as effective as narrowing streets, planting trees near streets, and other sub-conscious influencers. Committing to a diet may not be as effective as using smaller dishes, removing snacks from sight, and shopping at farmers markets instead of in the processed-food aisles of grocery stores.

We workplace professionals tend to use the conscious communication channel almost exclusively–we think it’s our job to compile content, make the best arguments for its usefulness, and share information so that our learners acknowledge its value and plan to use it. But if a large part of human cognition is sub-conscious, shouldn’t we use that too? Don’t we have a professional responsibility to be as effective as we can?

My action-planning method does just that. It sets triggers that later create spontaneous sub-conscious prompts to action. I’m calling this “Triggered Action Planning”–a reminder that we are TAP-ping into our learners’ sub-conscious processing to help them remember what they’ve learned. SMILE.

The basic concept is this:  We want learners, when they are back on the job, to be reminded of what they’ve learned. We should do this by aligning context–one of the Decisive Dozen research-based learning factors–in our training designs. We can do this by using more hands-on exercises, more real work, more simulations–but we can extend this to action planning as well.

The key is to set SITUATION-ACTION triggers. We want contextual situations to trigger certain actions. So for example, if we teach supervisors to bring their direct reports into decision-making, we want them to think about this when they are having team meetings, when they are discussing a decision with one of their direct reports, etc. The SITUATION could be a team meeting. The ACTION could be delegating a decision, asking for input, etc., as appropriate.

In action planning, it’s even simpler. Instead of just asking our learners what their goals are for implementing what they’ve learned, we also ask them to select situations when they will begin to carry out those goals. So for example:

  • GOAL: I will work with my team to identify a change initiative.
  • SITUATION-ACTION: At our first staff meeting in October,
    I will work with my team to identify a change initiative.

Remarkably, this kind of intervention–what researchers call "implementation intentions"–has been found to produce impressively large effects, often doubling the rate at which people actually follow through on their goals!

I think this research finding is so important to workplace learning that I’ve devoted a whole section of my unpublished tome to considering how to use it. Instead of using the term “implementation intentions”–it’s such a mouthful–I just call this trigger-setting.

The bottom line here is that we may be able to double the likelihood that our learners actually apply what they’ve learned simply by having our learners link situations and actions in their action planning.

New Job Aid for Triggered Action Planning

You can easily create your own triggered-action planning worksheets or e-learning interactions, but I’ve got one ready to go that you can use as is–FREE OF CHARGE BECAUSE I LOVE TO SHARE–or you can just use it as a starting point for your own triggered-action-planning exercises.

Click here to download the triggered-action-planning job aid (as a PDF)

Click here for a Word version (so you can modify)

Research:

Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement: A meta-analysis of effects and processes. Advances in Experimental Social Psychology, 38, 69-119.

Bjork, R. A., & Richardson-Klavehn, A. (1989). On the puzzling relationship between environmental context and human memory. In C. Izawa (Ed.) Current Issues in Cognitive Processes: The Tulane Floweree Symposium on Cognition (pp. 313-344). Hillsdale, NJ: Erlbaum.

Roediger, H. L., III, & Guynn, M. J. (1996). Retrieval processes. In E. L. Bjork & R. A. Bjork (Eds.), Memory (pp. 197-236). San Diego, CA: Academic Press.

Smith, S. M., & Vela, E. (2001). Environmental context-dependent memory: A review and meta-analysis. Psychonomic Bulletin & Review, 8, 203-220.

Thalheimer, W. (2013). The decisive dozen: Research review abridged. Available at the Work-Learning Research catalog.

Recent Research Review — Reviewed (and Lamented)


About two years ago, four enterprising learning researchers reviewed the research on training and development and published their findings in a top-tier refereed scientific journal. They did a really nice job!

Unfortunately, a vast majority of professionals in the workplace learning-and-performance field have never read the research review, nor have they even heard about it.

As a guy whose consulting practice is premised on the idea that good learning research can be translated into practical wisdom for instructional designers, trainers, elearning developers, chief learning officers and other learning executives, I have been curious to see to what extent this seminal research review has been utilized by other learning professionals. So, for the last year and a half or so, I’ve been asking the audiences I encounter in my keynotes and other conference presentations whether they have encountered this research review.

Often I use the image below to ask the question:

Click here to see original research article…

 

What would be your guess as to the percentage of folks in our industry who have read this?

10%

30%

50%

70%

90%

Sadly, in almost all of the audiences I’ve encountered, less than 5% of the learning professionals have read this research review.

Indeed, usually more than 95% of workplace learning professionals have “never heard of it” even two years after it was published!!!

THIS IS DEEPLY TROUBLING!

And what this says about our industry’s most potent institutions should be self-evident. I, too, must take blame for not being more successful in getting these issues heard.

A Review of the Review

People who subscribe to my email newsletter (you can sign up here) were already privy to this review many months ago.

I hope the following review will be helpful, and remember, when you’re gathering knowledge to help you do your work, make sure you’re gathering it from sources who are mindful of the scientific research. There is a reason that civilization progresses through its scientific efforts–science provides a structured process of insight generation and testing, creating a self-improving knowledge-generation process that maximizes innovation while minimizing bias.

——————————-

Quotes from the Research Review:

“It has long been recognized that traditional,
stand-up lectures are an inefficient and
unengaging strategy for imparting
new knowledge and skills.” (p. 86)

 

“Training costs across organizations remain
relatively constant as training shifts from
face-to-face to technology-based methods.” (p. 87)

 

“Even when trainees master new knowledge and
skills in training, a number of contextual factors
determine whether that learning is applied
back on the job…” (p. 90)

 

“Transfer is directly related to opportunities
to practice—opportunities provided either by
the direct supervisor or the organization
as a whole.” (p. 90)

 

“The Kirkpatrick framework has a number of
theoretical and practical shortcomings…” (p. 91)

Introduction

I, Will Thalheimer, am a research translator. I study research from peer-reviewed scientific journals on learning, memory, and instruction and attempt to distill whatever practical wisdom might lurk in the dark cacophony of the research catacomb. It’s hard work—and I love it—and the best part is that it gives me some research-based wisdom to share with my consulting clients. It helps me not sound like a know-nothing. Working to bridge the research-practice gap also enables me to talk with trainers, instructional designers, elearning developers, chief learning officers, and other learning executives about their experiences using research-based concepts.

 

It is from this perspective that I have a sad, and perhaps horrifying, story to tell. In 2012, an excellent research review on training was published in a top-tier journal. Unbelievably, most training practitioners have never heard of this research review. I know because, when I speak at conferences and chapter events in our field, I often ask how many people have read the article. Typically, less than 5% of experienced training practitioners have! Fewer than 1 in 20 people in our field have read this very important review article.

 

What the hell are we doing wrong? Why does everyone know what a MOOC is, but hardly anyone has looked at a key research article?

 

You can access the article by clicking here. You can also read my review of some of the article’s key points as I lay them out below.

 

Is This Research Any Good?

Not all research is created equal. Some is better than others. Some is crap. Too much "research" in the learning-and-performance industry is crap, so it’s important first to acknowledge the quality of this research review.

The research review by Eduardo Salas, Scott Tannenbaum, Kurt Kraiger, and Kimberly Smith-Jentsch, published in November 2012, appeared in the highly regarded, peer-reviewed scientific journal Psychological Science in the Public Interest, which is published by the Association for Psychological Science, one of the most respected social-science professional organizations in the world. The review not only surveys the research but also draws on meta-analytic findings that distill results from multiple research studies. In short, it’s high-quality research.

 

The rest of this article will highlight key messages from the research review.

 

Training & Development Gets Results!

The research review by Salas, Tannenbaum, Kraiger, and Smith-Jentsch shows that training and development is positively associated with organizational effectiveness. This is especially important in today’s economy because the need for innovation is greater and more accelerated—and innovation comes from the knowledge and creativity of our human resources. As the researchers say, “At the organizational level, companies need employees who are both ready to perform today’s jobs and able to learn and adjust to changing demands. For employees, that involves developing both job-specific and more generalizable skills; for companies, it means taking actions to ensure that employees are motivated to learn.” (p. 77). Companies spend a ton of money every year on training—in the United States the estimate is $135 billion—so it’s first important to know whether this investment produces positive outcomes. The bottom line: Yes, training does produce benefits.

 

To Design Training, It Is Essential to Conduct a Training Needs Analysis

“The first step in any training development effort ought to be a training needs analysis (TNA)—conducting a proper diagnosis of what needs to be trained, for whom, and within what type of organizational system. The outcomes of this step are (a) expected learning outcomes, (b) guidance for training design and delivery, (c) ideas for training evaluation, and (d) information about the organizational factors that will likely facilitate or hinder training effectiveness. It is, however, important to recognize that training is not always the ideal solution to address performance deficiencies, and a well-conducted TNA can also help determine whether a non-training solution is a better alternative.” (p. 80-81) “In sum, TNA is a must. It is the first and probably the most important step toward the design and delivery of any training.” (p. 83) “The research shows that employees are often not able to articulate what training they really need” (p. 81) so just asking them what they need to learn is not usually an effective strategy.

 

Learning Isn’t Always Required—Some Information can be Looked Up When Needed

When doing a training-needs analysis and designing training, it is imperative to separate information that is “need-to-know” from that which is “need-to-access.” Since learners forget easily, it’s better to use training time to teach the need-to-know information and prepare people on how to access the need-to-access information.

 

Do NOT Offer Training if It is NOT Relevant to Trainees

In addition to being an obvious waste of time and resources, training courses that are not specifically relevant to trainees can hurt motivation for training in general. “Organizations are advised, when possible, to not only select employees who are likely to be motivated to learn when training is provided but to foster high motivation to learn by supporting training and offering valuable training programs.” (p. 79) This suggests that every one of the courses on our LMS should have relevance and value.

 

It’s about Training Transfer—Not Just about Learning!

“Transfer refers to the extent to which learning during training is subsequently applied on the job or affects later job performance.” (p. 77) “Transfer is critical because without it, an organization is less likely to receive any tangible benefits from its training investments.” (p. 77-78) To ensure transfer, we have to utilize proven scientific research-based principles in our instructional designs. Relying on our intuitions is not enough—because they may steer us wrong.

 

We must go Beyond Training!

“What happens in training is not the only thing that matters—a focus on what happens before and after training can be as important. Steps should be taken to ensure that trainees perceive support from the organization, are motivated to learn the material, and anticipate the opportunity to use their skills once on (or back on) the job.” (p. 79)

 

Training can be Designed for Individuals or for Teams

“Today, training is not limited to building individual skills—training can be used to improve teams as well.” (p. 79)

 

Management and Leadership Training Works

“Research evidence suggests that management and leadership development efforts work.” (p. 80) “Management and leadership development typically incorporate a variety of both formal and informal learning activities, including traditional training, one-on-one mentoring, coaching, action learning, and feedback.” (p. 80)

 

Forgetting Must Be Minimized, Remembering Must Be Supported

One meta-analysis found that one year after training, “trainees [had] lost over 90% of what they learned.” (p. 84) “It helps to schedule training close in time to when trainees will be able to apply what they have learned so that continued use of the trained skill will help avert skill atrophy. In other words, trainees need the chance to ‘use it before they lose it.’ Similarly, when skill decay is inevitable (e.g., for infrequently utilized skills or knowledge) it can help to schedule refresher training.” (p. 84)

 

Common Mistakes in Training Design Should Be Avoided

“Recent reports suggest that information and demonstrations (i.e., workbooks, lectures, and videos) remain the strategies of choice in industry. And this is a problem [because] we know from the body of research that learning occurs through the practice and feedback components.” (p. 86) “It has long been recognized that traditional, stand-up lectures are an inefficient and unengaging strategy for imparting new knowledge and skills.” (p. 86) Researchers have “noted that trainee errors are typically avoided in training, but because errors often occur on the job, there is value in training people to cope with errors both strategically and on an emotional level.” (p. 86) “Unfortunately, systematic training needs analysis, including task analysis, is often skipped or replaced by rudimentary questions.” (p. 81)

 

Effective Training Requires At Least Four Components

“We suggest incorporating four concepts into training: information, demonstration, practice, and feedback.” (p. 86) Information must be presented clearly and in a way that enables the learners to fully understand the concepts and skills being taught. Skill demonstrations should provide clarity to enable comprehension. Realistic practice should be provided to enable full comprehension and long-term remembering. Feedback should be provided after decision-making and skill practice to correct misconceptions and improve the potency of later practice efforts.

The bottom line is that more realistic practice is needed. Indeed, the most effective training utilizes relatively more practice and feedback than is typically provided. “The demonstration component is most effective when both positive and negative models are shown rather than positive models only.” (p. 87)

Will’s Note: While these four concepts are extremely valuable, personally I think they are insufficient. See my research review on the Decisive Dozen for my alternative.

 

E-Learning Can Be Effective, But It May Not Lower the Cost of Training

“Both traditional forms of training and technology-based training can work, but both can fail as well.” (p. 87) While the common wisdom argues that e-learning is less costly, recent “survey data suggest that training costs across organizations remain relatively constant as training shifts from face-to-face to technology-based methods.” (p. 87) This doesn’t mean that e-learning can’t offer cost savings, but it does mean that most organizations so far haven’t realized those savings. “Well-designed technology-based training can be quite effective, but not all training needs are best addressed with that approach. Thus, we advise that organizations use technology-based training wisely—choose the right media and incorporate effective instructional design principles.” (p. 87)

 

Well-Designed Simulations Provide Potent Learning and Practice

“When properly constructed, simulations and games enable exploration and experimentation in realistic scenarios. Properly constructed simulations also incorporate a number of other research-supported learning aids, in particular practice, scaffolding or context-sensitive support, and feedback. Well-designed simulation enhances learning, improves performance, and helps minimize errors; it is also particularly valuable when training dangerous tasks.” (p. 88)

 

To Get On-the-Job Improvement, Training Requires After-Training Support

“The extent to which trainees perceive the posttraining environment (including the supervisor) as supportive of the skills covered in training had a significant effect on whether those skills are practiced and maintained.” (p. 88) “Even when trainees master new knowledge and skills in training, a number of contextual factors determine whether that learning is applied back on the job: opportunities to perform; social, peer, and supervisory support; and organizational policies.” (p. 90) A trainee’s supervisor is particularly important in this regard. As repeated from above, researchers have “discovered that transfer is directly related to opportunities to practice—opportunities provided either by the direct supervisor or the organization as a whole.” (p. 90)

 

On-the-Job Learning can be Leveraged with Coaching and Support

“Learning on the job is more complex than just following someone or seeing what one does. The experience has to be guided. Researchers reported that team leaders are a key to learning on the job. These leaders can greatly influence performance and retention. In fact, we know that leaders can be trained to be better coaches…Organizations should therefore provide tools, training, and support to help team leaders to coach employees and use work assignments to reinforce training and to enable trainees to continue their development.” (p. 90)

 

Trainees’ Supervisors Can Make or Break Training Success

Researchers have “found that one misdirected comment by a team leader can wipe out the full effects of a training program.” (p. 83) “What organizations ought to do is provide leaders with information they need to (a) guide trainees to the right training, (b) clarify trainees’ expectations, (c) prepare trainees, and (d) reinforce learning…” (p. 83) Supervisors can increase trainees’ motivation to engage in the learning process. (p. 85) “After trainees have completed training, supervisors should be positive about training, remove obstacles, and ensure ample opportunity for trainees to apply what they have learned and receive feedback.” (p. 90) “Transfer is directly related to opportunities to practice—opportunities provided either by the direct supervisor or the organization.” (p. 90)

 

Will’s Note: I’m a big believer in the power of supervisors to enable learning. I’ll be speaking on this in an upcoming ASTD webinar.

 

Basing Our Evaluations on the Kirkpatrick 4 Levels is Insufficient!!!

“Historically, organizations and training researchers have relied on Kirkpatrick’s [4-Level] hierarchy as a framework for evaluating training programs…[Unfortunately,] The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders… Although the Kirkpatrick hierarchy has clear limitations, using it for training evaluation does allow organizations to compare their efforts to those of others in the same industry.” The authors’ recommendations for improving training evaluation fit into two categories. First, instead of only using the Kirkpatrick framework, “organizations should begin training evaluation efforts by clearly specifying one or more purposes for the evaluation and should then link all subsequent decisions of what and how to measure to the stated purposes.” (p. 91) Second, the authors recommend that training evaluations should “use precise affective, cognitive, and/or behavioral measures that reflect the intended learning outcomes.” (p. 91)

 

This is a devastating critique that should give us all pause. Of course, it is not the first such critique, nor, I’m afraid, will it be the last. The worst part about the Kirkpatrick model is that it controls the way we think about learning measurement. It doesn’t allow us to see alternatives.

 

Leadership is Needed for Successful Training and Development

“Human resources executives, learning officers, and business leaders can influence the effectiveness of training in their organizations and the extent to which their company’s investments in training produce desired results. Collectively, the decisions these leaders make and the signals they send about training can either facilitate or hinder training effectiveness…Training is best viewed as an investment in an organization’s human capital, rather than as a cost of doing business. Underinvesting can leave an organization at a competitive disadvantage. But the adjectives “informed” and “active” are the key to good investing. When we use the word “informed,” we mean being knowledgeable enough about training research and science to make educated decisions. Without such knowledge, it is easy to fall prey to what looks and sounds cool—the latest training fad or technology.”  (p. 92)

Thank you!

I’d like to thank all my clients over the years for hiring me as a consultant, learning auditor, workshop provider, and speaker–and thus enabling me to continue in the critical work of translating research into practical recommendations.

If you think I might be able to help your organization, please feel free to contact me directly by emailing me at “info at worklearning dot com” or calling me at 617-718-0767.