For over two years I’ve been compiling and analyzing the research on learning transfer as it relates to workplace learning and development. Today I am releasing my findings to the public.

Here is the Overview from the Research-to-Practice Report:

Learning transfer—or “training transfer,” as it is sometimes called—occurs when people learn concepts and/or skills and later use those concepts/skills in work situations. Because we invest time, effort, and resources to create learning interventions, we hope to get a return on those investments in the form of some tangible benefit—usually an improved work outcome. Transfer, then, is our paramount goal. When we transfer, we are successful. When we don’t transfer, we fail.

To be practical about this, it is not enough to help our learners comprehend concepts or understand skills. It is not enough to get them to remember concepts/skills. It is not enough to inspire our learners to be motivated to use what they’ve learned. These results may be necessary, but they are not sufficient. We learning professionals hold transfer sacrosanct because it is the ultimate standard for success and failure.

This research review was conducted to determine factors that can be leveraged by workplace learning professionals to increase transfer success. This effort was not intended to be an exhaustive scientific review, but rather a quick analysis of recent research reviews, meta-analyses, and selected articles from scientific refereed journals. The goal of this review was to distill validated transfer factors—learning design and learning support elements that increase the likelihood that learning will transfer—and make these insights practical for trainers, learning architects, instructional designers, elearning developers, and learning professionals in general. In targeting this goal, this review aligns with transfer researchers’ recent admonition to ensure the scientific research on learning transfer gets packaged in a format that is usable by those who design and develop learning (Baldwin, Ford, & Blume, 2017).

Unfortunately, after reviewing the scientific articles referenced in this report as well as others not cited here, my conclusion is that many of the most common transfer approaches have not yet been researched with sufficient rigor or intensity to enable us to have full certainty about how to engineer transfer success. At the end of this report, I make recommendations on how we can have a stronger research base.

Despite the limitations of the research, this quick review did uncover many testable hypotheses about the factors that may support transfer. Factors are presented here in two categories—those with strong support in the research, and those the research identifies as having possible benefits. I begin by highlighting the overall strength of the research.

Special Thanks for Early Sponsorship

Translating scientific research involves a huge investment in time, and to be honest, I am finding it more and more difficult to carve out time to do translational research. So it is with special gratitude that I want to thank Emma Weber of Lever Transfer of Learning for sponsoring me back in 2017 on some of the early research-translation efforts that got me started in compiling the research for this report. Without Lever’s support, this research would not have been started!

Tidbits from the Report

There are 17 research-supported recommended transfer factors and an additional six possible transfer factors. Here is a subset of the supported transfer factors:

  • Transfer occurs most potently to the extent that our learning designs strengthen knowledge and skills.
  • Far transfer hardly ever happens. Near transfer—transfer to contexts similar to those practiced during training or other learning efforts—can happen.
  • Learners who set goals are more likely to transfer.
  • Learners who also utilize triggered action planning will be even more likely to transfer, compared with those who set goals alone.
  • Learners with supervisors who encourage, support, and monitor learning transfer are more likely to successfully transfer.
  • The longer the time between training and transfer, the less likely it is that training-generated knowledge will create benefits for transfer.
  • The more success learners have in their first attempts to transfer what they’ve learned, the more likely they are to persevere in more transfer-supporting behaviors.

The remaining recommendations can be viewed in the report (available below).

Recommendations to Researchers

While transfer researchers have done a great deal of work in uncovering how transfer works, the research base is not as solid as it should be. For example, much of the transfer research uses learners’ subjective estimates of transfer—rather than actual transfer—as the dependent measure. Transfer researchers themselves recognize the limitations of the research base, but they could be doing more. In the report, I offer several recommendations in addition to the improvements they’ve already suggested.

The Research-to-Practice Report

 

Access the report by clicking here…

 

Sign Up for Additional Research-Inspired Practical Recommendations

 

Sign up for Will Thalheimer’s Newsletter here…

 

 

12th December 2019

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, Inc., announces the winner of the 2019 Neon Elephant Award, given to David Epstein for writing the book Range: Why Generalists Triumph in a Specialized World, and for his many years as a journalist and science-inspired truth teller.

Click here to learn more about the Neon Elephant Award…

 

2019 Award Winner – David Epstein

David Epstein is an award-winning writer and journalist, having won awards for his writing from such esteemed bodies as the National Academies of Sciences, Engineering, and Medicine; the Society of Professional Journalists; and the National Center on Disability and Journalism—and he has been included in the Best American Science and Nature Writing anthology. David has been a science writer for ProPublica and a senior writer at Sports Illustrated, where he helped break the story on baseball legend Alex Rodriguez’s steroid use. David speaks internationally on performance science and the uses (and misuses) of data, and his TED talk on human athletic performance has been viewed over eight million times.

Mr. Epstein is the author of two books: The Sports Gene and Range: Why Generalists Triumph in a Specialized World.

David is honored this year for his new book on human learning and development, Range: Why Generalists Triumph in a Specialized World. The book lays out a very strong case for why most people will become better performers if they focus broadly on their development rather than focusing tenaciously and exclusively on one domain. If we want to raise our children to be great soccer players (aka “football” in most places), we’d be better off having them play multiple sports rather than just soccer. If we want to develop the most innovative cancer researchers, we shouldn’t just train them in cancer-related biology and medicine, we should give them a wealth of information and experiences from a wide range of fields.

Range is a phenomenal piece of art and science. Epstein is truly brilliant in compiling and comprehending the science he reviews, while at the same time telling stories and organizing the book in ways that engage and make complex concepts understandable. In writing the book, David debunks the common wisdom that performance is improved most rapidly and effectively by focusing practice and learning on a narrow area. Where others have only hinted at the power of a broad developmental pathway, Epstein’s Range builds a towering landmark of evidence that will remain visible on the horizon of the learning field for decades if not millennia.

We in the workplace learning-and-development field should immerse ourselves in Range—not just in thinking about how to design learning and architect learning contexts, but also in thinking about how to evaluate prospects for recruitment and hiring. It’s likely that we currently undervalue people with broad backgrounds and artificially overvalue people with extreme and narrow talents.

Here is a nice article where Epstein wrestles with a question that elucidates an issue we have in our field—what happens when many people in a field are not following research-based guidelines. The article is set in the medical profession, but there are definite parallels to what we face every day in the learning field.

Epstein is the kind of person we should honor and emulate in the workplace learning field. He is unafraid in seeking the truth, relentless and seemingly inexhaustible in his research efforts, and clear and engaging as a conveyor of information. It is an honor to recognize him as this year’s winner of the Neon Elephant Award.

 

Click here to learn more about the Neon Elephant Award…

Christian Unkelbach and Fabia Högden, researchers at the Universität zu Köln, reviewed research on how pairing celebrities—or other stimuli—with objects can imbue those objects with characteristics that might be beneficial. Their article in Current Directions in Psychological Science (2019, 28(6), 540–546), titled Why Does George Clooney Make Coffee Sexy? The Case for Attribute Conditioning, described earlier research showing how REPEATED PAIRINGS of George Clooney with the Nespresso brand in advertisements imbued the coffee brand with attributes such as cosmopolitan, sophisticated, and seductive. Research on persuasion (see Cialdini, 2009, and here’s a nice blog-post review) has also demonstrated the power of celebrities to gain attention and be persuasive.

 

Can we use the power of celebrity to support our training?

Yes! And first, realize that you don’t need access to worldwide celebrities. There are always people in our organizations who are celebrities as well: our CEOs, our best and brightest, our most beloved. You don’t even really need celebrities to get some kind of transference.

What could celebrity do for us? It could make employees more interested in our training, more likely to pay attention, more likely to apply what they’ve learned, etc.

The only catch I see is that this kind of attribute transference may require multiple pairings, so we’d have to figure out ways to do that without it feeling repetitive.

I, Will Thalheimer, am Available!

George Clooney shouldn’t have all the fun. If you’d like to imbue your learning product or service with a sense of sexy research-inspired sophistication, my services are available. I’m so good, I can even sell overhead transparencies to trainers!

 

I’m joking! Please don’t call!

Will’s Note: ONE DAY after publishing this first draft, I’ve decided that I mucked this up, mashing up what researchers, research translators, and learning professionals should focus on. Within the next week, I will update this to a second draft. You can still read the original below (for now):

 

Some evidence is better than other evidence. We naturally trust ten well-designed research studies more than one. We trust a well-controlled scientific study more than a poorly controlled one. We trust scientific research more than opinion research, unless all we care about is people’s opinions.

Scientific journal editors have to decide which research articles to accept for publication and which to reject. Practitioners have to decide which research to trust and which to ignore. Politicians have to know which lies to tell and which to withhold (kidding, sort of).

To help themselves make decisions, journal editors regularly rank each article on a continuum from strong research methodology to weak. The medical field similarly uses a level-of-evidence approach when making medical recommendations.

There are many taxonomies for “levels of evidence” or “hierarchy of evidence” as it is commonly called. Wikipedia offers a nice review of the hierarchy-of-evidence concept, including some important criticisms.

Hierarchy of Evidence for Learning Practitioners

The suggested models for levels of evidence were created by and for researchers, so they are not directly applicable to learning professionals. Still, it’s helpful for us to have our own hierarchy of evidence, one that we might actually be able to use. For that reason, I’ve created one, adding the practical evidence that is missing from the research-focused taxonomies. As in the research versions, Level 1 is the strongest.

  • Level 1 — Evidence from systematic research reviews and/or meta-analyses of all relevant randomized controlled trials (RCTs) that have ALSO been utilized by practitioners and found both beneficial and practical from a cost-time-effort perspective.
  • Level 2 — Same evidence as Level 1, but NOT systematically or sufficiently utilized by practitioners to confirm benefits and practicality.
  • Level 3 — Consistent evidence from a number of RCTs using different contexts and situations and learners; and conducted by different researchers.
  • Level 4 — Evidence from one or more RCTs that utilize the same research context.
  • Level 5 — Evidence from one or more well-designed controlled trials without randomization of learners to different learning factors.
  • Level 6 — Evidence from well-designed cohort or case-control studies.
  • Level 7 — Evidence from descriptive and/or qualitative studies.
  • Level 8 — Evidence from research-to-practice experts.
  • Level 9 — Evidence from the opinion of other authorities, expert committees, etc.
  • Level 10 — Evidence from the opinion of practitioners surveyed, interviewed, focus-grouped, etc.
  • Level 11 — Evidence from the opinion of learners surveyed, interviewed, focus-grouped, etc.
  • Level 12 — Evidence curated from the internet.

Let me consider this Version 1 until I get feedback from you and others!
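
For readers who like to keep such taxonomies machine-readable (for example, to tag sources in a research log), here is a minimal Python sketch of the hierarchy above. The enum names and the comparison helper are my own shorthand for illustration, not part of the taxonomy itself.

```python
from enum import IntEnum

class EvidenceLevel(IntEnum):
    """Encoding of the practitioner hierarchy above; lower number = stronger evidence."""
    META_ANALYSIS_PLUS_PRACTICE = 1   # systematic reviews/meta-analyses of RCTs, confirmed in practice
    META_ANALYSIS_ONLY = 2            # same research base, not yet confirmed by practitioners
    MULTIPLE_DIVERSE_RCTS = 3
    RCTS_SAME_CONTEXT = 4
    CONTROLLED_NO_RANDOMIZATION = 5
    COHORT_OR_CASE_CONTROL = 6
    DESCRIPTIVE_OR_QUALITATIVE = 7
    RESEARCH_TO_PRACTICE_EXPERTS = 8
    OTHER_AUTHORITIES = 9
    PRACTITIONER_OPINION = 10
    LEARNER_OPINION = 11
    INTERNET_CURATION = 12

def stronger(a: EvidenceLevel, b: EvidenceLevel) -> EvidenceLevel:
    """Return the stronger of two evidence levels (lower number wins)."""
    return a if a < b else b

# Example: several diverse RCTs (Level 3) outrank learner opinion (Level 11).
print(stronger(EvidenceLevel.LEARNER_OPINION, EvidenceLevel.MULTIPLE_DIVERSE_RCTS).name)
```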

Critical Considerations

  1. Some evidence is better than other evidence.
  2. If you're not an expert in evaluating evidence, get insights from those who are. Particularly valuable are research-to-practice experts (those who have considerable experience in translating research into practical recommendations).
  3. Opinion research in the learning field is especially problematic because the field comprises both strong and poor conceptions of what works.
  4. Learner opinions are problematic as well because learners often have poor intuitions about what works for them in supporting their learning.
  5. Curating information from the internet is especially problematic because it’s difficult to distinguish between good and poor sources.

Trusted Research to Practice Experts

(in no particular order, they’re all great!)

  • (Me) Will Thalheimer
  • Patti Shank
  • Julie Dirksen
  • Clark Quinn
  • Mirjam Neelen
  • Ruth Clark
  • Donald Clark
  • Karl Kapp
  • Jane Bozarth
  • Ulrich Boser

CEOs are calling for their companies to be more innovative in the ever-accelerating competitive landscape! Creativity is the key leverage point for innovation. Research I’ve compiled (from the science on creativity) shows that unique and valuable ideas are generated when people and teams look beyond their inner circle to those in their peripheral networks. GIVEN THIS, a smart company will seed itself with outside influencers who are working with new ideas.

But what are the vast majority of big companies doing that kills their own creativity? They are making it difficult or virtually impossible for their front-line departments to hire small businesses and consultants. It’s allowed, but massive walls are being built! And these walls have multiplied over the last five to ten years:

  1. Only fully vetted companies can be hired, requiring small, lean companies to waste time on compliance—or turn away in frustration. This also causes large-company managers to favor already-vetted companies, even if a small business or consultant would provide better value or more pertinent products or services.
  2. Master Service Agreements are required (pushing small companies away due to time and legal fees).
  3. Astronomical amounts of insurance are required. Why the hell do consultants need $2 million in insurance, even when they are consulting on non-safety-related issues? Why do they need any insurance at all if they are not impacting critical safety factors?
  4. Companies can’t be hired unless they’ve been in business for 5 or 10 or 15 years, completely eliminating the most unique and innovative small businesses or consultants—those who recently set up shop.
  5. Minimum company revenues are required, often in the millions of dollars.

These barriers, of course, aren’t the only ones pushing large organizations away from small businesses or consultants. Small companies often can’t afford sales forces or marketing budgets, so they are less likely to win a share of large companies’ attention. Small companies aren’t seen as safe bets because they don’t have a name, or their website is not as beautiful, or they haven’t yet worked with other big-name companies, or they don’t speak the corporate language. Given these surface characteristics, only the bravest, most visionary frontline managers will take the risk to make the creative hire. And even then, their companies are making it increasingly hard for them to follow through.

Don’t be fooled by the high-visibility anecdotes that show a CEO hiring a book author or someone featured in Wired, HBR, or on some podcast. Yes, CEOs and senior managers can easily find ways to hire innovators, and the resulting top-down creativity infusion can be helpful. But it can be harmful as well! Too often, senior managers are too far removed to know what works and what’s needed on the front lines. They push things innocently, not knowing that they are distracting the troops from what’s most important, or worse, pushing the frontline teams to do stupid stuff against their best judgment.

Even more troublesome, these anecdotes of top-down innovation are too few and far between. There may be ten senior managers who can hire innovation seeds, but there are dozens or hundreds or thousands of folks who might be doing so but can’t.

A little digression: It’s the frontline managers who know what’s needed—or perhaps more importantly the “leveraging managers,” if I can coin a term. These are the managers who are deeply experienced and wise in the work that is getting done, but high enough in the organization to see the business-case big picture. I specifically exclude “bottle-cap managers,” who have little or no experience in a work area but were placed there because they have business experience. Research shows these kinds of hires are particularly counterproductive for innovation.

Let me summarize.

I’m not selling anything here. I’m in the training, talent-development, and learning-evaluation business as a consultant—I’m not an innovation consultant! I’m just sharing this out of my own frustration with these stupid, counterproductive barriers that I and my friends in small businesses and consultancies have experienced. I’m also venting to issue a call to action for large organizations: wake the hell up to the harm you are inflicting on yourselves and on the economy in general. By not supporting the most innovative small companies and consultants, you are dumbing down the workforce for years to come!

Alright! I suppose I should offer to help instead of just gripe! I have done extensive research on creativity. But I don’t have a workshop developed, the research is not yet in publishable form, and it’s not really what I’m focused on right now. I’m focused on innovating in learning evaluation (see my new learning-evaluation model and my new method for capturing valid and meaningful data from learners). These are two of the most important innovations in learning evaluation in the past few years!

However, a good friend of mine did, just last month, suggest that the world should see the research on creativity that I’ve compiled (thanks Mirjam!). Given the right organization, situation, and requirements—and the right amount of money—I might be willing to take a break from my learning-evaluation work and bring this research to your organization. Contact me to try and twist my arm!

I’m serious, I really don’t want to do this right now, but if I can capture funds to reinvest in my learning-evaluation innovations, I just might be persuaded. On the contact-me link, you can set up an appointment with me. I’d love to talk with you if you want to talk innovation or learning evaluation.

For years, we have used the Kirkpatrick-Katzell Four-Level Model to evaluate workplace learning. With this taxonomy as our guide, we have concluded that the most common form of learning evaluation is learner surveys, followed by measures of learning, then on-the-job behavior, then organizational results.

The truth is more complicated.

In some recent research I led with the eLearning Guild and Jane Bozarth, we used the LTEM model to look for further differentiation. We found it.

Here are some of the insights from the graphic above (tabulated in the short sketch after this list):

  • Learner surveys are NOT the most common form of learning evaluation. Program completion and attendance are more common, being done on most training programs in about 83% of organizations.
  • Learner surveys are still very popular, with 72% of respondents saying that they are used in more than one-third of their learning programs.
  • When we measure learning, we go beyond simple quizzes and knowledge checks.
    • Tier 5 assessments, measuring the ability to make realistic decisions, were reported by 24% of respondents to be used in more than one-third of their learning programs.
    • Tier 6 assessments, measuring realistic task performance (during learning), were reported by about 32% of respondents to be used in more than one-third of their learning programs.
    • Unfortunately, we messed up and forgot to include an option for Tier-4 knowledge questions. However, previous eLearning Guild research in 2007, 2008, and 2010 found that the percentage of respondents who reported measuring memory recall of critical information was 60%, 60%, and 63%, respectively.
  • Only about 20% of respondents said their organizations are measuring work performance.
  • Only about 16% of respondents said their organizations are measuring the organizational results from learning.
  • Interestingly, where the Four-Level Model puts all types of Results into one bucket, the LTEM framework encourages us to look at other results besides business results.
    • About 12% said their organizations were looking at the effect of the learning on the learner’s success and well-being.
    • Only about 3% said they were measuring the effects of learning on coworkers/family/friends.
    • Only about 3% said they were measuring the effects of learning on the community or society (as has been recommended by Roger Kaufman for years).
    • Only about 1% reported measuring the effects of learning on the environs.
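
To make the spread easier to scan, here is a minimal Python sketch that lines up the percentages quoted above against their LTEM tiers. The tier labels are my own mapping onto LTEM, and the survey items were framed slightly differently from one another, so treat this as a rough overview rather than a strict comparison.

```python
# A rough side-by-side of the percentages quoted above. Tier labels are my reading of
# LTEM; the figures come from slightly different question framings in the survey
# (e.g., "% of organizations" vs. "% of respondents using it on more than one-third
# of their programs"), so this is orientation, not a strict comparison.
findings = [
    ("Tier 1: Completion/attendance", 83),
    ("Tier 3: Learner surveys", 72),
    ("Tier 4: Knowledge recall (earlier Guild studies)", 60),  # 60%, 60%, 63% in 2007, 2008, 2010
    ("Tier 5: Realistic decision-making", 24),
    ("Tier 6: Realistic task performance", 32),
    ("Tier 7: Work performance", 20),
    ("Tier 8: Organizational results", 16),
    ("Tier 8: Learner success and well-being", 12),
    ("Tier 8: Effects on coworkers/family/friends", 3),
    ("Tier 8: Effects on community/society", 3),
    ("Tier 8: Effects on the environment", 1),
]
for target, pct in findings:
    print(f"{target:<50} {pct:>3}%")
```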

 

Opportunities

The biggest opportunity—or the juiciest low-hanging fruit—is that we can stop just using Tier-1 attendance and Tier-3 learner-perception measures.

We can also begin to go beyond our 60% rate in measuring Tier-4 knowledge and do more Tier-5 and Tier-6 assessments. As I’ve advocated for years, Tier-5 assessments using well-constructed scenario-based questions strike the perfect balance of power and cost. They are aligned with the research on learning, they have moderate resource costs, and learners see them as challenging and interesting rather than punitive and unhelpful, as they often see knowledge checks.

We can also begin to emphasize Tier-7 evaluations. Shouldn’t we know whether our learning interventions are actually transferring to the workplace? The same is true for Tier-8 measures. We should look for strategic opportunities here—being mindful of the considerable costs of doing good Tier-8 evaluations. We should also consider looking beyond business results—as these are not the only effects our learning interventions are having.

Finally, we can use LTEM to help guide our learning-development efforts and our learning evaluations. By using LTEM, we are prompted to see things that have been hidden from us for decades.

 

The Original eLearning Guild Report

To get the original eLearning Guild report, click here.

 

The LTEM Model

To get the LTEM Model and the 34-page report that goes with it, click here.

Released Today: Research Report on Learning Evaluation Conducted with The eLearning Guild.

Report Title: Evaluating Learning: Insights from Learning Professionals.

I am delighted to announce that a research effort that I led in conjunction with Dr. Jane Bozarth and the eLearning Guild has been released today. I’ll be blogging about our findings over the next couple of months.

This is a major report — packed into 39 pages — and should be read by everyone in the workplace learning field interested in learning evaluation!

Just a teaser here:

We asked folks to consider the last three learning programs their units developed and to reflect on the learning-evaluation approaches they used.

While a majority were generally happy with their evaluation methods on these recent learning programs, about 40% were dissatisfied. Later, in a more general question about whether learning professionals are able to do the learning measurement they want to do, fully 52% said they were NOT able to do the kind of evaluation they thought was right to do.

In the full report, available only to Guild members, we dig down and explore the practices and perspectives that drive our learning-evaluation efforts. I encourage you to get the full report, as it touches on the methods we use, how we communicate with senior business leaders, what we’d like to do differently, and what we think we’re good at. Also, the report concludes with 12 powerful action strategies for getting the most out of our learning-evaluation efforts.

You can get the full report by clicking here.

 

 

Respondents

Over 200 learning professionals responded to Work-Learning Research’s 2017-2018 survey on current practices in gathering learner feedback, and today I will reveal the results. The survey ran from November 29th, 2017 to September 16th, 2018. The sample of respondents was drawn from Work-Learning Research’s mailing list and through extensive calls for participation in a variety of social media. Because of this sampling methodology, the survey results are likely skewed toward professionals who care about and pay attention to research-based practice recommendations more than the workplace learning field as a whole. They are also likely more interested and experienced in learning evaluation than the field as a whole.

Feel free to share this link with others.

Goal of the Research

The goal of the research was to determine what people are doing in the way of evaluating their learning interventions through the practice of asking learners for their perspectives.

Questions the Research Hoped to Answer

  1. Are smile sheets (learner-feedback questions) still the most common method of doing learning evaluation?
  2. How does their use compare with other methods? Are other methods growing in prominence/use?
  3. How satisfied are learning professionals with their organizations’ learner-feedback methods?
  4. To what extent are organizations looking for alternatives to their current learner-feedback methods?
  5. What kinds of questions are used on smile sheets? Has Thalheimer’s new approach, performance-focused questioning, gained any traction?
  6. What do learning professionals think their current smile sheets are good at measuring (Satisfaction, Reputation, Effectiveness, Nothing)?
  7. What tools are organizations using to gather learner feedback?
  8. How useful are current learner-feedback questions in helping guide improvements in learning design and delivery?
  9. How widely are the target metrics of LTEM (The Learning-Transfer Evaluation Model) currently being measured?

A summary of the findings indexed to these questions can be found at the end of this post.

Situating the Practice of Gathering Learner Feedback

When we gather feedback from learners, we are using a Tier 3 methodology on the LTEM (Learning-Transfer Evaluation Model) or Level 1 on the Kirkpatrick-Katzell Four-Level Model of Training Evaluation.

Demographic Background of Respondents

Respondents came from a wide range of organizations, including small, midsize, and large organizations.

Respondents play a wide range of roles in the learning field.

Most respondents live in the United States and Canada, but there was also significant representation from other predominantly English-speaking countries.

Learner-Feedback Findings

About 67% of respondents report that learners are asked about their perceptions on more than half of their organization’s learning programs, including elearning. Only about 22% report that they survey learners on less than half of their learning programs. This finding is consistent with past findings—surveying learners is the most common form of learning evaluation and is widely practiced.

The two most common question types in use are Likert-like questions and numeric-scale questions. I have argued against their use,* and I am pleased that Performance-Focused Smile Sheet questions have been adopted by so many so quickly. Of course, this sample of respondents is composed of folks on my mailing list, so this result surely doesn’t represent current practice in the field as a whole. Not yet! LOL.

*Likert-like questions and numeric-scale questions are problematic for several reasons. First, because they offer fuzzy response choices, learners have a difficult time deciding between them, which likely makes their responses less precise. Second, such fuzziness may inflate bias, as there are no concrete anchors to minimize the biasing effects of the question stems. Third, Likert-like options and numeric scales likely dampen learner responding, because learners are habituated to such scales and because they may be skeptical that data from such scales will actually be useful. Finally, Likert-like options and numeric scales produce indistinct results—averages all in the same range. Such results are difficult to assess, failing to support decision-making—the whole purpose of evaluation in the first place. To learn more, check out Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form (book website here).
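
To make the contrast concrete, here is a small illustrative sketch: a fuzzy numeric-scale item next to an item with concrete, behaviorally anchored response options. Both items are my own hypothetical wording for illustration, not questions taken from the book.

```python
# Hypothetical illustration of the contrast described in the footnote above:
# a fuzzy numeric scale versus concrete, behaviorally anchored response options.
# Both items below are my own example wording, not items from the book.
numeric_scale_item = {
    "stem": "How effective was this course?",
    "options": ["1", "2", "3", "4", "5"],  # fuzzy: what does a "4" actually mean?
}

performance_focused_item = {
    "stem": "How able are you to apply what you learned this week?",
    "options": [  # each option describes a concrete level of ability or support needed
        "I'm not clear on the concepts yet.",
        "I understand the concepts but haven't practiced applying them.",
        "I can apply them, but only with job aids or help from others.",
        "I can apply them on my own in typical work situations.",
        "I could coach others on applying them.",
    ],
}

for item in (numeric_scale_item, performance_focused_item):
    print(item["stem"])
    for option in item["options"]:
        print("  -", option)
```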

The most common tools used to gather feedback from learners were paper surveys and SurveyMonkey. Questions delivered from within an LMS were the next highest. High-end evaluation systems like Metrics that Matter were not highly represented among our respondents.

Our respondents did not rate their learner-feedback efforts as very effective. Their learner surveys were seen as most effective in gauging learner satisfaction. Only about 33% of respondents thought their learner surveys gave them insights on the effectiveness of the learning.

Only about 15% of respondents found their data very useful in providing them feedback about how to improve their learning interventions.

Respondents report that their organizations are somewhat open to alternatives to their current learner-feedback approaches, but overall they are not actively looking for alternatives.

Most respondents report that their organizations are at least “modestly happy” with their learner-feedback assessments. Yet only 22% reported being “generally happy” with them. Combining this finding with the one above showing that lots of organizations are open to alternatives, it seems that organizational satisfaction with current learner-feedback approaches is soft.

We asked respondents about their organizations’ attempts to measure the following:

  • Learner Attendance
  • Whether Learner is Paying Attention
  • Learner Perceptions of the Learning (e.g., Smile Sheets, Learner Feedback)
  • Amount or Quality of Learner Participation
  • Learner Knowledge of the Content
  • Learner Ability to Make Realistic Decisions
  • Learner Ability to Complete Realistic Tasks
  • Learner Performance on the Job (or in another future performance situation)
  • Impact of Learning on the Learner
  • Impact of Learning on the Organization
  • Impact of Learning on Coworkers, Family, Friends of the Learner
  • Impact of Learning on the Community or Society
  • Impact of Learning on the Environment

These evaluation targets are encouraged in LTEM (The Learning-Transfer Evaluation Model).

Results are difficult to show—because our question was very complicated (admittedly too complicated)—but I will summarize the findings below.

As you can see, learner attendance and learner perceptions (smile sheets) were the most commonly measured factors, with learner knowledge a distant third. The least common measures involved the impact of the learning on the environment, community/society, and the learner’s coworkers/family/friends.

The flip side—methods rarely utilized in respondents’ organizations—shows pretty much the same thing.

Note that the question above, because it was too complicated, probably produced some spurious results, even if the trends at the extremes are probably indicative of the whole range. In other words, it’s likely that attendance and smile sheets are the most utilized and measures of impact on the environment, community/society, and learners’ coworkers/family/friends are the least utilized.

Questions Answered Based on Our Sample

  1. Are smile sheets (learner-feedback questions) still the most common method of doing learning evaluation?

    Yes! Smile sheets are clearly the most popular evaluation method, along with measuring attendance (if we include that as a metric).

  2. How does their use compare with other methods? Are other methods growing in prominence/use?

    Except for Attendance, nothing else comes close. The next most common method is measuring knowledge. Remarkably, given the known importance of decision-making (Tier 5 in LTEM) and task competence (Tier 6 in LTEM), these are used in evaluation at a relatively low level. Similar low levels are found in measuring work performance (Tier 7 in LTEM) and organizational results (part of Tier 8 in LTEM). We’ve known about these relatively low levels from many previous research surveys.

    Hardly any measurement is being done on the impact of learning on the learner or their coworkers/family/friends, the impact of the learning on the community/society/environment, or on learner participation/attention.

  3. How satisfied are learning professionals with their organizations’ learner-feedback methods?

    Learning professionals are moderately satisfied.

  4. To what extent are organizations looking for alternatives to their current learner-feedback methods?

    Organizations are open to alternatives, with some actively seeking alternatives and some not looking.

  5. What kinds of questions are used on smile sheets? Has Thalheimer’s new approach, performance-focused questioning, gained any traction?

    Likert-like options and numeric scales are the most commonly used. Thalheimer’s performance-focused smile-sheet method has gained traction in this sample of respondents—people likely more in the know about Thalheimer’s approach than the industry at large.

  6. What do learning professionals think their current smile sheets are good at measuring (Satisfaction, Reputation, Effectiveness, Nothing)?

    Learning professionals think their current smile sheets are fairly good at measuring the satisfaction of learners. A full one-third of respondents feel that their current approaches are not valid enough to provide them with meaningful insights about the learning interventions.

  7. What tools are organizations using to gather learner feedback?

    The two most common methods for collecting learner feedback are paper surveys and SurveyMonkey. Questions from LMSs are the next most widely used. Sophisticated evaluation tools are not much in use in our respondent sample.

  8. How useful are current learner-feedback questions in helping guide improvements in learning design and delivery?

    This may be the most important question we can ask, given that evaluation is supposed to aid us in maintaining our successes and improving on our deficiencies. Only 15% of respondents found learner feedback “very helpful” in improving their learning. Many found the feedback “somewhat helpful,” but a full one-third found the feedback “not very useful” in enabling them to improve learning.

  9. How widely are the target metrics of LTEM (The Learning-Transfer Evaluation Model) currently being measured?

    As described in Question 2 above, many of the targets of LTEM are not being adequately measured at this point in time (November 2017 to September 2018, during the time immediately before and after LTEM was introduced). This indicates that LTEM is poised to help organizations uncover evaluation targets that can be helpful in setting goals for learning improvements.

Lessons to be Drawn

The results of this survey reinforce what we’ve known for years. In the workplace learning industry, we default to learner-feedback questions (smile sheets) as our most common learning-evaluation method. This is a big freakin’ problem for two reasons. First, our learner-feedback methods are inadequate. We often use poor survey methodologies and ones particularly unsuited to learner feedback, including the use of fuzzy Likert-like options and numeric scales. Second, even if we used the most advanced learner-feedback methods, we still would not be doing enough to gain insights into the strengths and weaknesses of our learning interventions.

Evaluation is meant to provide us with data we can use to make our most critical decisions. We need to know, for example, whether our learning designs are supporting learner comprehension, learner motivation to apply what they’ve learned, learner ability to remember what they’ve learned, and the supports available to help learners transfer their learning to their work. We typically don’t know these things. As a result, we don’t make the design decisions we ought to. We don’t make improvements in the learning methods we use or in the way we deploy learning. The research captured here should be seen as a wake-up call.

The good news from this research is that learning professionals are often aware of and sensitized to the deficiencies of their learning-evaluation methods. This seems like a good omen. When improved methods are introduced, they will likely seek to encourage their use.

LTEM, the new learning-evaluation model (which I developed with the help of some of the smartest folks in the workplace learning field) is targeting some of the most critical learning metrics—metrics that have too often been ignored. It is too new to be certain of its impact, but it seems like a promising tool.

Why I Have Turned My Attention to Evaluation (and Why You Should Too!)

For 20 years, I’ve focused on compiling scientific research on learning in the belief that research-based information—when combined with a deep knowledge of practice—can drastically improve learning results. I still believe that wholeheartedly! What I’ve also come to understand is that we as learning professionals must get valid feedback on our everyday efforts. It’s simply our responsibility to do so.

We have to create learning interventions based on the best blend of practical wisdom and research-based guidance. We have to measure key indices that tell us how our learning interventions are doing. We have to find out what their strengths are and what their weaknesses are. Then we have to analyze and assess and make decisions about what to keep and what to improve. Then we have to make improvements and again measure our results and continue the cycle—working always toward continuous improvement.

Here’s a quick-and-dirty outline of the recommended cycle for using learning to improve work performance. “Quick-and-dirty” means I might be missing something!

  1. Learn about and/or work to uncover performance-improvement needs.
  2. If you determine that learning can help, continue. Otherwise, build or suggest alternative methods to get to improved work performance.
  3. Deeply understand the work-performance context.
  4. Sketch out a very rough draft for your learning intervention.
  5. Specify your evaluation goals—the metrics you will use to measure your intervention’s strengths and weaknesses.
  6. Sketch out a rough draft for your learning intervention.
  7. Specify your learning objectives (notice that evaluation goals come first!).
  8. Review the learning research and consider your practical constraints (two separate efforts subsequently brought together).
  9. Sketch out a reasonably good draft for your learning intervention.
  10. Build your learning intervention and your learning evaluation instruments (Iteratively testing and improving).
  11. Deploy your “ready-to-go” learning intervention.
  12. Measure your results using the previously determined evaluation instruments, which were based on your previously determined evaluation objectives.
  13. Analyze your results.
  14. Determine what to keep and what to improve.
  15. Make improvements.
  16. Repeat (maybe not every step, but at least from Step 6 onward)

And here is a shorter version:

  1. Know the learning research.
  2. Understand your project needs.
  3. Outline your evaluation objectives—the metrics you will use.
  4. Design your learning.
  5. Deploy your learning and your measurement.
  6. Analyze your results.
  7. Make improvements.
  8. Repeat.

More Later Maybe

The results shared here come from all respondents combined. If I get the time, I’d like to look at subsets of respondents. For example, I’d like to look at how learning executives and managers might differ from learning practitioners. Let me know how interested you would be in these results.

Also, I will be conducting other surveys on learning-evaluation practices, so stay tuned. We have been frustrated with our evaluation practices for too long, and more work needs to be done to understand the forces that keep us from doing what we want to do. We could also use more and better learning-evaluation tools, because the truth is that learning evaluation is still a nascent field.

Finally, because I learn a ton by working with clients who challenge themselves to do more effective interventions, please get in touch with me if you’d like a partner in thinking things through and trying new methods to build more effective evaluation practices. Also, please let me know how you’ve used LTEM (The Learning-Transfer Evaluation Model).


Appreciations

As always, I am grateful to all the people I learn from, including clients, researchers, thought leaders, conference attendees, and more… Thanks also to all who acknowledge and share my work! It means a lot!

Back in 2008, I began discussing the scientific research on “implementation intentions.” I did this first at an eLearning Guild conference in March 2008. I also spoke about it that year in a talk at Salem State University, in a Chicago workshop entitled Creating and Measuring Learning Transfer, and in one of my Brown Bag Lunch sessions delivered online.

In 2014, I wrote about implementation intentions specifically as a way to increase after-training follow-through. Thinking the term “Implementation Intentions” was too opaque and too general, I coined the term “Triggered Action Planning,” and argued that goal-setting at the end of training—what was often called action planning—would not be as effective as triggered action planning. Indeed, in recounting the scientific research on implementation intentions, I often talked about how researchers were finding that setting situation-action triggers could create results that were twice as good as goal-setting alone. Doubling the benefits of goal setting! These kinds of results are huge!

I just came across a scientific study that supports the benefits of triggered action planning.

 

Shlomit Friedman and Simcha Ronen conducted two experiments and found similar results in each. I’m going to focus on their second one because it involved a real training class with real employees. The class taught retail sales managers how to improve interactions with customers. All the participants got exactly the same training and were then randomly assigned to one of two experimental groups:

  • Triggered Action Planning—Participants were asked to visualize situations with customers and how they would respond to seven typical customer objections.
  • Goal-Reminding Action Planning—Participants were asked to write down the goals of the training program and the aspects of the training program that they felt were most important.

Four weeks after the training, secret shoppers were used. They interacted with the supervisors using the key phrases and rated each supervisor on dichotomously anchored rating scales from 1 to 10, with ten being best. The secret shoppers were blind to condition—that is, they did not know which supervisors had received triggered action planning and which had received the goal-reminding instructions. The findings showed that triggered action planning produced improvements of 76% over the goal-reminding condition, almost doubling the results.
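
To put that in perspective with hypothetical numbers (the study’s raw group means aren’t quoted here), a minimal arithmetic sketch:

```python
# Rough arithmetic illustration of a 76% relative improvement. The group mean below is
# hypothetical; the actual means from Friedman & Ronen's study are not quoted in this post.
goal_reminding_mean = 4.0                    # hypothetical mean rating on the 1-to-10 scale
triggered_mean = goal_reminding_mean * 1.76  # 76% higher than the comparison group
print(round(triggered_mean, 2))              # -> 7.04; a full doubling would be a 100% improvement (8.0)
```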

It should be pointed out that the experiment would have been better designed if the control group had selected its own goals. There may be some benefit to actual goal-setting compared with being reminded about the goals of the course. The experiment had its strengths too, most notably (1) the use of observers to record real-world performance four weeks after the training, and (2) the fact that all the supervisors had gone through the exact same training and were randomly assigned to either triggered action planning or the goal-reminding condition.

Triggered Action Planning

Triggered Action Planning has great potential to radically improve the likelihood that your learners will actually use what you’ve taught them. The reason it works so well is that it is based on a fundamental characteristic of human cognition: we are triggered to think and act by cues in our environment (a brief sketch of this if-then structure follows the list below). As learning professionals, we should do whatever we can to:

  • Figure out what cues our learners will face in their work situations.
  • Teach them what to do when they encounter these cues.
  • Give them a rich array of spaced, repeated practice in handling these situations.
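
Here is a minimal sketch of the if-then structure behind triggered action planning (implementation intentions). The cues and actions are hypothetical examples for a customer-objection scenario, not items taken from the study cited below.

```python
# A minimal sketch of the cue -> action ("if-then") structure behind triggered action
# planning / implementation intentions. The cues and actions below are hypothetical
# examples, not items from the Friedman & Ronen study cited at the end of this post.
action_plan = [
    ("a customer says the price is too high",
     "acknowledge the concern, then ask what outcome they most need from the product"),
    ("a customer asks how we compare to a competitor",
     "walk through the two or three differences most relevant to their stated needs"),
]

# Render each pair as the kind of statement a learner would rehearse at the end of training.
for cue, action in action_plan:
    print(f"When {cue}, I will {action}.")
```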

To learn more about how to implement triggered action planning, see my original blog post.

Research Cited

Friedman, S., & Ronen, S. (2015). The effect of implementation intentions on transfer of training. European Journal of Social Psychology, 45(4), 409-416.

This blog post took three hours to write.