The LEARNNOVATORS team (specifically Santhosh Kumar) asked if I would join them in their Crystal Balling with Learnnovators interview series, and I accepted! They have some really great people on the series; I recommend that you check it out!

The most impressive thing was that they must have studied my whole career history and read my publication list and watched my videos because they came up with a whole set of very pertinent and important questions. I was BLOWN AWAY—completely IMPRESSED! And, given their dedication, I spent a ton of time preparing and answering their questions.

It’s a two-part series, and here are the links:

Here are some of the quotes they pulled out and/or I’d like to highlight:

Learning is one of the most wondrous, complex, and important areas of human functioning.

The explosion of different learning technologies beyond authoring tools and LMSs is likely to create a wave of innovations in learning.

Data can be good, but also very very bad.

Learning Analytics is poised to cause problems as well. People are measuring all the wrong things. They are measuring what is easy to measure in learning, but not what is important.

We will be bamboozled by vendors who say they are using AI, but are not, or who are using just 1% AI and claiming that their product is AI-based.

Our senior managers don’t understand learning; they think it is easy, so they don’t support L&D like they should.

Because our L&D leaders live in a world where they are not understood, they do stupid stuff like pretending to align learning with business terminology and business-school vibes—forgetting to align first with learning.

We lie to our senior leaders when we show them our learning data—our smile sheets and our attendance data. We then manage toward these superstitious targets, causing a gross loss of effectiveness.

Learning is hard and learning that is focused on work is even harder because our learners have other priorities—so we shouldn’t beat ourselves up too much.

We know from the science of human cognition that when people encounter visual stimuli, their eyes move rapidly from one object to another and back again trying to comprehend what they see. I call this the “eye-path phenomenon.” So, because of this inherent human tendency, we as presenters—as learning designers too!—have to design our presentation slides to align with these eye-path movements.

Organizations now—and even more so in the near future—will use many tools in a Learning-Technology Stack. These will include (1) platforms that offer asynchronous cloud-based learning environments that enable and encourage better learning designs, (2) tools that enable realistic practice in decision-making, (3) tools that reinforce and remind learners, (4) spaced-learning tools, (5) habit-support tools, (6) insight-learning tools (those that enable creative ideation and innovation), and so on.

Learnnovators asked me what I hoped for the learning and development field. Here’s what I said:

Nobody is good at predicting the future, so I will share the vision I hope for. I hope we in learning and development continue to be passionate about helping other people learn and perform at their best. I hope we recognize that we have a responsibility not just to our organizations but, beyond business results, to our learners, their coworkers/families/friends, the community, society, and the environs. I hope we become brilliantly professionalized, with rigorous standards, a well-researched body of knowledge, higher salaries, and career paths beyond L&D. I hope we measure better, using our results to improve what we do. I hope we, more and more, take a small-s scientific approach to our practices, doing more A-B testing, compiling a database of meaningful results, and building virtuous cycles of continuous improvement. I hope we develop better tools to make building better learning—and better performance—easier and more effective. And I hope we continue to feel good about our contributions to learning. Learning is at the heart of our humanity!

Mirjam Neelen and Paul Kirschner have written a truly beautiful book—one that everyone in the workplace learning field should read, study, and keep close at hand. It’s a book of transformational value because it teaches us how to think about our jobs as practitioners in utilizing research-informed ideas to build maximally effective learning architectures.

Their book is titled Evidence-Informed Learning Design: Use Evidence to Create Training Which Improves Performance. The book warns us of learning myths and misconceptions—but it goes deeper, bringing us insights into how these myths arise and how we can disarm them in our work.

Here’s a picture of me and my copy! The book officially goes on sale today in the United States.

 

Click to get your copy of the book from Amazon (US).

The book covers the most powerful research-informed learning factors known to science. Those who follow my work will hear familiar terms like Feedback, Retrieval Practice, and Spacing, but also terms like double-barreled learning, direct instruction, nuanced design, and more. I will keep this book handy in my own work as a research-inspired consultant, author, and provocateur—but this book is not designed for people like me. Evidence-Informed Learning Design is perfect for everyone with more than a year of experience in the workplace learning field.

The book so rightly laments that “the learning field is cracked at its foundation.” It implores us to open our eyes to what works and what doesn’t, and fundamentally to rethink how we as practitioners work in our teams to bring about effective learning.

The book intrigues, as can be seen in sections like “Why myths are like zombies,” “No knowledge, no nothing,” and “Pigeonholing galore.”

One of my favorite parts of the book is the set of interviews with researchers that delve into the practical ramifications of their work. There are interviews with an AI expert, a neuroscientist, and an expert on complex learning, among others. These interviews will wake up more than a few of us.

What makes this book so powerful is that it combines the work of a practitioner and a researcher. Mirjam is one of our field’s most dedicated practitioners in bringing research inspirations to bear on learning practice. Paul is one of the great academic researchers in doing usable research and bringing that research to bear on educational practice. Together, for many years, they’ve published one of the most important blogs in the workplace learning field, the Three-Star Learning blog (https://3starlearningexperiences.wordpress.com/).

Here are some things you will learn in the book:

Big Picture Concepts:

  • What learning myths to avoid.
  • What learning factors to focus on in your learning designs.
  • How to evaluate research claims.

Specific Concepts:

  • Whether Google searches can supplant training.
  • What neuroscience says about learning, if anything.
  • How to train for complex skills.
  • How AI might help learning, now and in the future.
  • Types of research to be highly skeptical of.
  • Whether you need to read scientific research yourself.
  • Whether you should use learning objectives, or not, or when.
  • Whether learning should be fun.
  • The telltale signs of bad research.

This book is so good that it should be required reading for everyone graduating at the university level in learning-and-development.


Click on the book image to see it on Amazon (US).

 

I’m thrilled and delighted to share the news that Jane Bozarth, research-to-practice advocate, author of Show Your Work, and Director of Research for the eLearning Guild, is pledging $1,000 to the Learning Styles Challenge!!


Jane has been a vigorous debunker of the Learning-Styles Myth for many, many years! For those of you who don’t know, the Learning-Styles Notion is the idea that different people have different styles of learning and that, by designing our learning programs to meet each style—that is, by actually providing different learning content or activities to different learners—we will improve learning. Sounds great, but unfortunately, dozens and dozens of research studies and many major research reviews have found the Learning-Styles Notion to be untrue!

 

“Decades of research suggest that learning styles, or the belief that people learn better when they receive instruction in their dominant way of learning, may be one of the most pervasive myths about cognition.”

Nancekivell, S. E., Shah, P., & Gelman, S. A. (2020). Maybe they’re born with it, or maybe it’s experience: Toward a deeper understanding of the learning style myth. Journal of Educational Psychology, 112(2), 221–235.


“Several reviews that span decades have evaluated the literature on learning styles (e.g., Arter & Jenkins, 1979; Kampwirth & Bates, 1980; Kavale & Forness, 1987; Kavale, Hirshoren, & Forness, 1998; Pashler et al., 2009; Snider, 1992; Stahl, 1999; Tarver & Dawson, 1978), and each has drawn the conclusion that there is no viable evidence to support the theory.”

Willingham, D. T., Hughes, E. M., & Dobolyi, D. G. (2015). The scientific status of learning styles theories. Teaching of Psychology, 42(3), 266–271.

 

With Jane’s contribution, the Learning Styles Challenge is up to $6,000! That is, if someone can demonstrate a beneficial effect from using learning styles to design learning, the underwriters will pay that person or group $6,000.

The Learning Styles Challenge began on August 4, 2006, when I offered $1,000 for the first challenge. In 2014, it expanded to $5,000 when additional pledges were made by Guy Wallace, Sivasailam “Thiagi” Thiagarajan, Bob Carleton, and Bob’s company, Vector Group.

Thank you to Jane Bozarth for her generous contribution to the cause! And check out her excellent research review of the learning-styles literature. Jane’s report is filled with tons of research, but also many very practical recommendations for learning professionals.

For over two years I’ve been compiling and analyzing the research on learning transfer as it relates to workplace learning and development. Today I am releasing my findings to the public.

Here is the Overview from the Research-to-Practice Report:

Learning transfer—or “training transfer” as it is sometimes called—occurs when people learn concepts and/or skills and later utilize those concepts/skills in work situations. Because we invest time, effort, and resources to create learning interventions, we hope to get a return on those investments in the form of some tangible benefit—usually some form of improved work outcome. Transfer, then, is our paramount goal. When we transfer, we are successful. When we don’t transfer, we fail.

To be practical about this, it is not enough to help our learners comprehend concepts or understand skills. It is not enough to get them to remember concepts/skills. It is not enough to inspire our learners to be motivated to use what they’ve learned. These results may be necessary, but they are not sufficient. We learning professionals hold transfer sacrosanct because it is the ultimate standard for success and failure.

This research review was conducted to determine factors that can be leveraged by workplace learning professionals to increase transfer success. This effort was not intended to be an exhaustive scientific review, but rather a quick analysis of recent research reviews, meta-analyses, and selected articles from scientific refereed journals. The goal of this review was to distill validated transfer factors—learning design and learning support elements that increase the likelihood that learning will transfer—and make these insights practical for trainers, learning architects, instructional designers, elearning developers, and learning professionals in general. In targeting this goal, this review aligns with transfer researchers’ recent admonition to ensure the scientific research on learning transfer gets packaged in a format that is usable by those who design and develop learning (Baldwin, Ford, & Blume, 2017).

Unfortunately, after reviewing the scientific articles referenced in this report as well as others not cited here, my conclusion is that many of the most common transfer approaches have not yet been researched with sufficient rigor or intensity to enable us to have full certainty about how to engineer transfer success. At the end of this report, I make recommendations on how we can have a stronger research base.

Despite the limitations of the research, this quick review did uncover many testable hypotheses about the factors that may support transfer. Factors are presented here in two categories—those with strong support in the research, and those the research identifies as having possible benefits. I begin by highlighting the overall strength of the research.

Special Thanks for Early Sponsorship

Translating scientific research involves a huge investment in time, and to be honest, I am finding it more and more difficult to carve out time to do translational research. So it is with special gratitude that I want to thank Emma Weber of Lever Transfer of Learning for sponsoring me back in 2017 on some of the early research-translation efforts that got me started in compiling the research for this report. Without Lever’s support, this research would not have been started!

Tidbits from the Report

There are 17 research-supported recommended transfer factors and an additional six possible transfer factors. Here is a subset of the supported transfer factors:

  • Transfer occurs most potently to the extent that our learning designs strengthen knowledge and skills.
  • Far transfer hardly ever happens. Near transfer—transfer to contexts similar to those practiced during training or other learning efforts—can happen.
  • Learners who set goals are more likely to transfer.
  • Learners who also utilize triggered action planning are even more likely to transfer than those who set goals alone.
  • Learners with supervisors who encourage, support, and monitor learning transfer are more likely to successfully transfer.
  • The longer the time between training and transfer, the less likely it is that training-generated knowledge will create benefits for transfer.
  • The more success learners have in their first attempts to transfer what they’ve learned, the more likely they are to persevere in more transfer-supporting behaviors.

The remaining recommendations can be viewed in the report (available below).

Recommendations to Researchers

While transfer researchers have done a great deal of work in uncovering how transfer works, the research base is not as solid as it should be. For example, much of the transfer research uses learners’ subjective estimates of transfer—rather than actual transfer—as the dependent measure. Transfer researchers themselves recognize the limitations of the research base, but they could be doing more. In the report, I offer several recommendations in addition to the improvements they’ve already suggested.

The Research-to-Practice Report

 

Access the report by clicking here…

 

Sign Up for Additional Research-Inspired Practical Recommendations

 

Sign up for Will Thalheimer’s Newsletter here…


12th December 2019

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, Inc., announces the winner of the 2019 Neon Elephant Award, given to David Epstein for writing the book Range: Why Generalists Triumph in a Specialized World, and for his many years as a journalist and science-inspired truth teller.

Click here to learn more about the Neon Elephant Award…

 

2019 Award Winner – David Epstein

David Epstein is an award-winning writer and journalist. He has won awards for his writing from such esteemed bodies as the National Academies of Sciences, Engineering, and Medicine; the Society of Professional Journalists; and the National Center on Disability and Journalism, and he has been included in the Best American Science and Nature Writing anthology. David has been a science writer for ProPublica and a senior writer at Sports Illustrated, where he helped break the story on baseball legend Alex Rodriguez’s steroid use. David speaks internationally on performance science and the uses (and misuses) of data, and his TED talk on human athletic performance has been viewed over eight million times.

Mr. Epstein is the author of two books:

David is honored this year for his new book on human learning and development, Range: Why Generalists Triumph in a Specialized World. The book lays out a very strong case for why most people will become better performers if they focus broadly on their development rather than focusing tenaciously and exclusively on one domain. If we want to raise our children to be great soccer players (aka “football” in most places), we’d be better off having them play multiple sports rather than just soccer. If we want to develop the most innovative cancer researchers, we shouldn’t just train them in cancer-related biology and medicine, we should give them a wealth of information and experiences from a wide range of fields.

Range is a phenomenal piece of art and science. Epstein is truly brilliant in compiling and comprehending the science he reviews, while at the same time telling stories and organizing the book in ways that engage and make complex concepts understandable. In writing the book, David is debunking the common wisdom that performance is improved most rapidly and effectively by focusing practice and learning on a narrow domain. Where others have only hinted at the power of a broad developmental pathway, Epstein’s Range builds up a towering landmark of evidence that will remain visible on the horizon of the learning field for decades if not millennia.

We in the workplace learning-and-development field should immerse ourselves in Range—not just in thinking about how to design learning and architect learning contexts, but also in thinking about how to evaluate prospects for recruitment and hiring. It’s likely that we currently undervalue people with broad backgrounds and artificially overvalue people with extreme and narrow talents.

Here is a nice article where Epstein wrestles with a question that elucidates an issue we have in our field—what happens when many people in a field are not following research-based guidelines. The article is set in the medical profession, but there are definite parallels to what we face every day in the learning field.

Epstein is the kind of person we should honor and emulate in the workplace learning field. He is unafraid in seeking the truth, relentless and seemingly inexhaustible in his research efforts, and clear and engaging as a conveyor of information. It is an honor to recognize him as this year’s winner of the Neon Elephant Award.

 

Click here to learn more about the Neon Elephant Award…

Christian Unkelbach and Fabia Högden, researchers at the Universität zu Köln, reviewed research on how pairing celebrities—or other stimuli—can imbue objects with characteristics that might be beneficial. Their article in Current Directions in Psychological Science (2019, 28(6), 540–546), titled Why Does George Clooney Make Coffee Sexy? The Case for Attribute Conditioning, described earlier research that showed how REPEATED PAIRINGS of George Clooney and the Nespresso brand, in advertisements, imbued the coffee brand with attributes such as cosmopolitan, sophisticated, and seductive. Research on persuasion (see Cialdini, 2009 and here’s a nice blog-post review), also has demonstrated the power of celebrities to gain attention and be persuasive.

 

Can we use the power of celebrity to support our training?

Yes! And first realize that you don’t need access to worldwide celebrities. There are always people in our organizations who are celebrities as well: people like our CEOs, our best and brightest, our most beloved. You don’t even really need celebrities to get some kind of transference.

What could celebrity do for us? It could make employees more interested in our training, more likely to pay attention, more likely to apply what they’ve learned, etc.

The only catch I see is that this kind of attribute transference may require multiple pairings, so we’d have to figure out ways to do that without it feeling repetitive.

I, Will Thalheimer, am Available!

George Clooney shouldn’t have all the fun. If you’d like to imbue your learning product or service with a sense of sexy research-inspired sophistication, my services are available. I’m so good, I can even sell overhead transparencies to trainers!

 

I’m joking! Please don’t call! SMILE

Will’s Note: ONE DAY after publishing this first draft, I’ve decided that I mucked this up, mashing up what researchers, research translators, and learning professionals should focus on. Within the next week, I will update this to a second draft. You can still read the original below (for now):

 

Some evidence is better than other evidence. We naturally trust ten well-designed research studies more than one. We trust a well-controlled scientific study more than a poorly controlled study. We trust scientific research more than opinion research, unless all we care about is people’s opinions.

Scientific journal editors have to decide which research articles to accept for publication and which to reject. Practitioners have to decide which research to trust and which to ignore. Politicians have to know which lies to tell and which to withhold (kidding, sort of).

To help themselves make decisions, journal editors regularly rank each article on a continuum from strong research methodology to weak. The medical field likewise uses a level-of-evidence approach to making medical recommendations.

There are many taxonomies for “levels of evidence” or “hierarchy of evidence” as it is commonly called. Wikipedia offers a nice review of the hierarchy-of-evidence concept, including some important criticisms.

Hierarchy of Evidence for Learning Practitioners

The suggested models for levels of evidence were created by and for researchers, so they are not directly applicable to learning professionals. Still, it’s helpful for us to have our own hierarchy of evidence, one that we might actually be able to use. For that reason, I’ve created one, adding in the importance of practical evidence that is missing from the research-focused taxonomies. As in the research versions, Level 1 is the strongest. (I also sketch the taxonomy in code after the list.)

  • Level 1 — Evidence from systematic research reviews and/or meta-analyses of all relevant randomized controlled trials (RCTs) that have ALSO been utilized by practitioners and found both beneficial and practical from a cost-time-effort perspective.
  • Level 2 — Same evidence as Level 1, but NOT systematically or sufficiently utilized by practitioners to confirm benefits and practicality.
  • Level 3 — Consistent evidence from a number of RCTs using different contexts and situations and learners; and conducted by different researchers.
  • Level 4 — Evidence from one or more RCTs that utilize the same research context.
  • Level 5 — Evidence from one or more well-designed controlled trials without randomization of learners to different learning factors.
  • Level 6 — Evidence from well-designed cohort or case-control studies.
  • Level 7 — Evidence from descriptive and/or qualitative studies.
  • Level 8 — Evidence from research-to-practice experts.
  • Level 9 — Evidence from the opinion of other authorities, expert committees, etc.
  • Level 10 — Evidence from the opinion of practitioners surveyed, interviewed, focus-grouped, etc.
  • Level 11 — Evidence from the opinion of learners surveyed, interviewed, focus-grouped, etc.
  • Level 12 — Evidence curated from the internet.
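
For readers who like to make such taxonomies concrete, here is a minimal sketch of the hierarchy as a Python data structure. The EvidenceLevel class, its field names, and the condensed wording are my own illustrative choices, not part of any published standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceLevel:
    rank: int     # 1 is strongest, 12 is weakest
    summary: str  # condensed from the list above

HIERARCHY = [
    EvidenceLevel(1, "Systematic reviews/meta-analyses of RCTs, confirmed as practical by practitioners"),
    EvidenceLevel(2, "Same research evidence, not yet sufficiently confirmed by practitioners"),
    EvidenceLevel(3, "Consistent RCTs across contexts, situations, learners, and researchers"),
    EvidenceLevel(4, "One or more RCTs within a single research context"),
    EvidenceLevel(5, "Well-designed controlled trials without randomization"),
    EvidenceLevel(6, "Well-designed cohort or case-control studies"),
    EvidenceLevel(7, "Descriptive and/or qualitative studies"),
    EvidenceLevel(8, "Research-to-practice experts"),
    EvidenceLevel(9, "Opinions of other authorities, expert committees, etc."),
    EvidenceLevel(10, "Opinions of practitioners (surveys, interviews, focus groups)"),
    EvidenceLevel(11, "Opinions of learners (surveys, interviews, focus groups)"),
    EvidenceLevel(12, "Evidence curated from the internet"),
]

def stronger(a: EvidenceLevel, b: EvidenceLevel) -> EvidenceLevel:
    """Return whichever of two levels represents the stronger evidence."""
    return a if a.rank < b.rank else b
```

A team could, for example, tag each design decision in a project log with the level of evidence behind it and see at a glance where it is leaning on Levels 10 through 12.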

Let me consider this Version 1 until I get feedback from you and others!

Critical Considerations

  1. Some evidence is better than other evidence.
  2. If you’re not an expert in evaluating evidence, get insights from those who are. Research-to-practice experts (those with considerable experience in translating research into practical recommendations) are particularly valuable.
  3. Opinion research in the learning field is especially problematic, because the learning field holds a mix of strong and poor conceptions of what works.
  4. Learner opinions are problematic as well because learners often have poor intuitions about what works for them in supporting their learning.
  5. Curating information from the internet is especially problematic because it’s difficult to distinguish between good and poor sources.

Trusted Research-to-Practice Experts

(in no particular order, they’re all great!)

  • (Me) Will Thalheimer
  • Patti Shank
  • Julie Dirksen
  • Clark Quinn
  • Mirjam Neelen
  • Ruth Clark
  • Donald Clark
  • Karl Kapp
  • Jane Bozarth
  • Ulrich Boser

CEOs are calling for their companies to be more innovative in the ever-accelerating competitive landscape! Creativity is the key leverage point for innovation. Research I’ve compiled (from the science on creativity) shows that unique and valuable ideas are generated when people and teams look beyond their inner circle to those in their peripheral networks. GIVEN THIS, a smart company will seed itself with outside influencers who are working with new ideas.

But what is the vast majority of big companies doing that kills their own creativity? They are making it difficult or virtually impossible for their front-line departments to hire small businesses and consultants. Hiring them is allowed, but massive walls are being built! And these walls have multiplied over the last five to ten years:

  1. Only fully vetted companies can be hired, requiring small, lean companies to waste time on compliance—or turn away in frustration. This also causes large-company managers to favor the vetted companies, even when a small business or consultant would provide better value or more-pertinent products or services.
  2. Master Service Agreements are required (pushing small companies away due to time and legal fees).
  3. Astronomical amounts of insurance are required. Why the hell do consultants need $2 million in insurance, even when they are consulting on non-safety-related issues? Why do they need any insurance at all if they are not impacting critical safety factors?
  4. Companies can’t be hired unless they’ve been in business for 5 or 10 or 15 years, completely eliminating the most unique and innovative small businesses or consultants—those who recently set up shop.
  5. Minimum company revenues are required, often in the millions of dollars.

These barriers, of course, aren’t the only ones pushing large organizations away from small businesses or consultants. Small companies often can’t afford sales forces or marketing budgets, so they are less likely to gain large companies’ share of attention. Small companies aren’t seen as safe bets because they don’t have a name, or their website is not as beautiful, or they haven’t yet worked with other big-name companies, or they don’t speak the corporate language. Given these surface characteristics, only the bravest, most visionary frontline managers will take the risk to make the creative hire. And even then, their companies are making it increasingly hard for them to follow through.

Don’t be fooled by the high-visibility anecdotes that show a CEO hiring a book author or someone featured in Wired, HBR, or on some podcast. Yes, CEOs and senior managers can easily find ways to hire innovators, and the resulting top-down creativity infusion can be helpful. But it can be harmful as well! Too many times, senior managers are too far removed from knowing what works and what’s needed on the front lines. They push things innocently, not knowing that they are distracting the troops from what’s most important, or worse, pushing the frontline teams to do stupid stuff against their best judgment.

Even more troublesome with these anecdotes of top-down innovation is that they are too few and far between. There may be ten senior managers who can hire innovation seeds, but there are dozens or hundreds or thousands of folks who might be doing so but can’t.

A little digression: It’s the frontline managers who know what’s needed—or perhaps more importantly the “leveraging managers,” if I can coin a term. These are the managers who are deeply experienced and wise in the work that is getting done, but high enough in the organization to see the business-case big picture. I will specifically exclude “bottle-cap managers” who have little or no experience in a work area but were placed there because they have business experience. Research shows these kinds of hires are particularly counterproductive for innovation.

Let me summarize.

I’m not selling anything here. I’m in the training, talent development, and learning-evaluation business as a consultant—I’m not an innovation consultant! I’m just sharing this out of my own frustration with the stupid, counterproductive barriers that I and my friends in small businesses and consultancies have experienced. I’m also venting here to provide a call to action for large organizations: wake the hell up to the harm you are inflicting on yourselves and on the economy in general. By not supporting the most innovative small companies and consultants, you are dumbing down the workforce for years to come!

Alright! I suppose I should offer to help instead of just gripe! I have done extensive research on creativity. But I don’t have a workshop developed, the research is not yet in publishable form, and it’s not really what I’m focused on right now. I’m focused on innovating in learning evaluation (see my new learning-evaluation model and my new method for capturing valid and meaningful data from learners). These are two of the most important innovations in learning evaluation in the past few years!

However, a good friend of mine did, just last month, suggest that the world should see the research on creativity that I’ve compiled (thanks Mirjam!). Given the right organization, situation, and requirements—and the right amount of money—I might be willing to take a break from my learning-evaluation work and bring this research to your organization. Contact me to try and twist my arm!

I’m serious, I really don’t want to do this right now, but if I can capture funds to reinvest in my learning-evaluation innovations, I just might be persuaded. On the contact-me link, you can set up an appointment with me. I’d love to talk with you if you want to talk innovation or learning evaluation.

For years, we have used the Kirkpatrick-Katzell Four-Level Model to evaluate workplace learning. With this taxonomy as our guide, we have concluded that the most common form of learning evaluation is learner surveys, that the next most common evaluation is learning, then on-the-job behavior, then organizational results.

The truth is more complicated.

In some recent research I led with the eLearning Guild and Jane Bozarth, we used the LTEM model to look for further differentiation. We found it.

Here are some of the insights from the graphic above (re-tabulated in the code sketch after the list):

  • Learner surveys are NOT the most common form of learning evaluation. Program completion and attendance are more common, being done on most training programs in about 83% of organizations.
  • Learner surveys are still very popular, with 72% of respondents saying that they are used in more than one-third of their learning programs.
  • When we measure learning, we go beyond simple quizzes and knowledge checks.
    • Tier 5 assessments, measuring the ability to make realistic decisions, were reported by 24% of respondents to be used in more than one-third of their learning programs.
    • Tier 6 assessments, measuring realistic task performance (during learning), were reported by about 32% of respondents to be used in more than one-third of their learning programs.
    • Unfortunately, we messed up and forgot to include an option on Tier 4 knowledge questions. However, previous eLearning Guild research in 2007, 2008, and 2010 found that the percentage of respondents who reported measuring memory recall of critical information was 60%, 60%, and 63%, respectively.
  • Only about 20% of respondents said their organizations are measuring work performance.
  • Only about 16% of respondents said their organizations are measuring the organizational results from learning.
  • Interestingly, where the Four-Level Model puts all types of Results into one bucket, the LTEM framework encourages us to look at other results besides business results.
    • About 12% said their organizations were looking at the effect of the learning on the learner’s success and well-being.
    • Only about 3% said they were measuring the effects of learning on coworkers/family/friends.
    • Only about 3% said they were measuring the effects of learning on the community or society (as has been recommended by Roger Kaufman for years).
    • Only about 1% reported measuring the effects of learning on the environs.
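
To make the pattern easier to scan, here is the same set of findings re-tabulated as a small Python sketch. The percentages are the approximate figures reported above; the tier labels follow my reading of LTEM where the post doesn’t name them explicitly, and the dictionary itself is just an illustration, not an artifact of the Guild research:

```python
# Approximate percentages of respondents reporting each measure,
# re-tabulated from the findings above. (The completion/attendance
# figure is per-organization; most others reflect respondents reporting
# use in more than one-third of their learning programs.)
ltem_usage = {
    "Tier 1: program completion/attendance": 83,
    "Tier 3: learner surveys": 72,
    "Tier 6: realistic task performance": 32,
    "Tier 5: realistic decision-making": 24,
    "Tier 7: work performance (transfer)": 20,
    "Tier 8: organizational results": 16,
    "Tier 8: learner success and well-being": 12,
    "Tier 8: effects on coworkers/family/friends": 3,
    "Tier 8: effects on community/society": 3,
    "Tier 8: effects on the environs": 1,
}

# Print a quick leaderboard, most-used measures first.
for measure, pct in sorted(ltem_usage.items(), key=lambda kv: -kv[1]):
    print(f"{pct:>3}%  {measure}")
```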

 

Opportunities

The biggest opportunity—or the juiciest low-hanging fruit—is that we can stop relying solely on Tier-1 attendance and Tier-3 learner-perception measures.

We can also begin to go beyond our 60% rate in measuring Tier-4 knowledge and do more Tier-5 and Tier-6 assessments. As I’ve advocated for years, Tier-5 assessments using well-constructed scenario-based questions are the perfect balance of power and cost. They are aligned with the research on learning, they have moderate resource costs, and learners see them as challenging and interesting rather than punitive and unhelpful, as they often see knowledge checks.
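
To show what I mean by a scenario-based question, here is a minimal, hypothetical sketch in Python. The situation, options, and feedback text are invented for illustration; the structure (a realistic situation, decision options, and consequence-style feedback) is the point:

```python
# A hypothetical Tier-5 scenario-based item, expressed as data.
scenario_item = {
    "situation": ("A long-time customer emails, angry that a promised fix "
                  "has slipped twice. Your manager is out until tomorrow."),
    "prompt": "What would you do first?",
    "options": [
        {"text": "Reply now, own the misses, and commit to a dated plan",
         "best": True,
         "feedback": "Acknowledging the slips and giving a date rebuilds trust."},
        {"text": "Wait for your manager so the answer is authoritative",
         "best": False,
         "feedback": "Another day of silence will likely read as more neglect."},
        {"text": "Forward the email to engineering and await their estimate",
         "best": False,
         "feedback": "Escalate, yes, but the customer still needs a reply today."},
    ],
}

def respond(choice_index: int) -> str:
    """Return consequence-style feedback for the chosen option."""
    option = scenario_item["options"][choice_index]
    prefix = "Good choice. " if option["best"] else "Not the best choice. "
    return prefix + option["feedback"]

print(respond(0))  # e.g., the learner picks the first option
```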

We can also begin to emphasize more Tier-7 evaluations. Shouldn’t we know whether our learning interventions are actually transferring to the workplace? The same is true for Tier-8 measures. We should look for strategic opportunities here—being mindful of the incredible costs of doing good Tier-8 evaluations. We should also consider looking beyond business results, as these are not the only effects our learning interventions are having.

Finally, we can use LTEM to help guide our learning-development efforts and our learning evaluations. By using LTEM, we are prompted to see things that have been hidden from us for decades.

 

The Original eLearning Guild Report

To get the original eLearning Guild report, click here.

 

The LTEM Model

To get the LTEM Model and the 34-page report that goes with it, click here.