
For those of you who don’t know Matt Richter, President of the Thiagi Group, he’s one of the most innovative thinkers when it comes to creating training that both sizzles and supports work performance. Recently, Matt and I began partnering on a new podcast, Truth In Learning, which I’ll have more to say about later once I figure out where the escape hatch is.

NOW, I want to share with you a brilliant new article, which Matt surprised me with, on his efforts to brainstorm innovative ways to use LTEM (The Learning-Transfer Evaluation Model).

You should read his article, but just to whet your appetite, here is his list of seven uses for LTEM:

  1. Learning Evaluation—The primary intent of the LTEM framework.
  2. Instructional Design—To negotiate with stakeholders the outcomes desired.
  3. Training Game Design—To ensure games/activities have an instructional purpose.
  4. Coaching—Helping to build a development plan for those who are coached.
  5. Performance Consulting—To focus on performances that matter along the journey.
  6. Keynoting/Presenting—To ensure a focus on meaningful outcomes, not just infotainment.
  7. Sales/Business Development—To keep sales conversations focused on meaningful outcomes.

We Are All in This Together

One of the great benefits of publishing LTEM is that, since its release last year, I’m regularly contacted by people whose organizations are finding new and innovative ways to use it—not just for learning evaluation but as a central element of their learning strategy and practice.

I’m especially pleased with those who have taken LTEM really deep, and I’d like to give a shout-out to Elham Arabi, who is doing her doctoral dissertation using LTEM to support a hospital’s effort to maximize the benefits of its learning interventions. Congrats to her for being accepted as a speaker at the upcoming eLearning Guild Learning Solutions Conference, March 31 to April 2, 2020, in Orlando. The title of her talk is: Using Evaluation Data to Enhance Your Training Programs.

Share Your Examples and Innovations

Please share your innovations and ideas about using LTEM in your workplace, on social media, or by contacting me at https://www.worklearning.com/contact/. I would really love to hear how it’s going, including any obstacles you’ve faced, your success stories, etc.

And, of course, if you’d like me to help your organization utilize LTEM, or just be the face of LTEM to your organization, please contact me so we can set up a time to talk, and consider my LTEM workshop to introduce LTEM to your team.


Dani Johnson at RedThread Research has just released a wonderful synopsis of Learning Evaluation Models. Comprehensive, Thoughtful, Well-Researched! It also suggests articles to read!

This work is part of an ongoing effort to research the learning-evaluation space. With research sponsored by the folks at the adroit learning-evaluation company forMetris, RedThread is looking to uncover new insights about the way we do workplace learning evaluation.

Here’s what Dani says in her summary:

“What we hoped to see in the literature were new ideas – different ways of defining impact for the different conditions we find ourselves in. And while we did see some, the majority of what we read can be described as same. Same trends and themes based on the same models with little variation.”


“While we do not disparage any of the great work that has been done in the area of learning measurement and evaluation, many of the models and constructs are over 50 years old, and many of the ideas are equally as old.

“On the whole, the literature on learning measurement and evaluation failed to take into account that the world has shifted – from the attitudes of our employees to the tools available to develop them to the opportunities we have to measure. Many articles focused on shoe-horning many of the new challenges L&D functions face into old constructs and models.”


“Of the literature we reviewed, several pieces stood out to us. Each of the following authors [detailed in the summary] and their work contained information that we found useful and mind-changing. We learned from their perspectives and encourage you to do the same.”


I also encourage you to look at this great review! You can see the summary here.


LTEM, the Learning-Transfer Evaluation Model, was designed as an alternative to the Kirkpatrick-Katzell Four-Level Model of learning evaluation—specifically, to better align learning evaluation with the science of human learning. One way in which LTEM improves on the Four-Level Model is in how it highlights gradations of learning outcomes. Where the Four-Level Model crammed all “Learning” outcomes into one box (that is, “Level 2”), LTEM separates learning outcomes into Tier-4 Knowledge, Tier-5 Decision-Making Competence, and Tier-6 Task Competence. This simple yet incredibly powerful categorization changes everything in terms of learning evaluation. First and foremost, it pushes us to go beyond inconsequential knowledge checks in our learning evaluations (and in our learning designs as well). To learn more about how LTEM creates additional benefits, you can click on this link, where you can access the model and a 34-page report for free, compliments of me, Will Thalheimer, and Work-Learning Research, Inc.

Using LTEM in Credentialing

LTEM can also be used in credentialing—or, less formally, in specifying the rigor of our learning experiences. For example, if our training course only asks questions about terminology or facts in its assessments, then we can say that the course provides a Tier-4 credential. If our course asks learners to successfully complete a series of scenario-based decisions, we can say that the course provides a Tier-5 credential.
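To make that mapping concrete, here is a minimal sketch, in Python, of how credential tiers might be encoded, assuming a simple “most rigorous assessment wins” rule. The tier names come from LTEM as described in this post; the function, its input labels, and the rule itself are my hypothetical illustrations, not part of the model:

```python
from enum import IntEnum

class LtemTier(IntEnum):
    """LTEM tiers relevant to credentialing, as described in this post."""
    ATTENDANCE = 1           # Tier 1: attendance or completion
    LEARNER_PERCEPTIONS = 3  # Tier 3: learner surveys (smile sheets)
    KNOWLEDGE = 4            # Tier 4: knowledge (ideally delayed recall)
    DECISION_MAKING = 5      # Tier 5: realistic decision-making scenarios
    TASK_COMPETENCE = 6      # Tier 6: realistic task performance (during learning)
    TRANSFER = 7             # Tier 7: transfer to work performance
    TRANSFER_EFFECTS = 8     # Tier 8: effects of that transfer

def credential_tier(assessments):
    """Hypothetical helper: credential a course at the tier of its most
    rigorous assessment. The string labels are illustrative only."""
    if "realistic_task_performance" in assessments:
        return LtemTier.TASK_COMPETENCE
    if "scenario_based_decisions" in assessments:
        return LtemTier.DECISION_MAKING
    if "terminology_or_fact_questions" in assessments:
        return LtemTier.KNOWLEDGE
    if "learner_survey" in assessments:
        return LtemTier.LEARNER_PERCEPTIONS
    return LtemTier.ATTENDANCE  # completion alone earns only a Tier-1 credential
```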

Wow! Think of the power of naming the credential level of our learning experiences. Not only will it give us—and our business stakeholders—a clear sense of the strength of our learning initiatives, but it will drive our instructional designs to meet high standards of effectiveness. It will also begin to set the bar higher. Let’s admit a dirty truth. Too many of our training programs are just warmed-over presentations that do very little to help our learners make critical decisions or improve their actual skills. By focusing on credentialing, we focus on effectiveness!


Using LTEM Credentialing at Work-Learning Research

For the last several months, I’ve been developing an online course to teach learning professionals how to transform their learner surveys into Performance-Focused Smile Sheets. As part of this development process, I realized that I needed more than one learning experience—at least one to introduce the topic and one to give people extensive practice. I also wanted to provide people with a credential each time they successfully completed a learning experience. Finally, I wanted to make the credential meaningful. As the LTEM model suggests, attendance is NOT a meaningful benchmark. Neither is learner satisfaction. Nor is knowledge regurgitation.

Suddenly, it struck me. LTEM already provided a perfect delineation for meaningful credentialing. Tier-5 Decision-Making Competence would provide credentialing for the first learning experience. For people to earn their credential they would have to perform successfully in responding to realistic decision-making scenarios. Tier-6 Task Competence would provide credentialing for the second, application-focused learning experience. Additional credentials would only be earned if people could show results at Tier-7 and/or Tier-8 (Transfer to Work Performance and associated Transfer Effects).


The Gold-Certification Workshop is now ready for enrollment. The Master-Certification Workshop is coming soon! You can keep up to date or enroll now by going to the Work-Learning Academy page.


How You Can Use LTEM Credentialing to Assess Learning Experiences that Don’t Use LTEM

LTEM is practically brand new, having only been released to the public a year ago. So, while many organizations are gaining a competitive advantage by exploring its use, most of our learning infrastructure has yet to be transformed. In this transitional period, each of us has to use our wisdom to assess what’s already out there. How about you give it a try?

Two-Day Classroom Workshop — What Tier Credential?

What about a two-day workshop that gives people credit for completing the experience? Where would that be on the LTEM framework?

Here’s a graphic to help. Or you can access the full model by clicking here.

The two-day workshop would be credentialed at a Tier-1 level, signifying that the experience credentials learners by measuring their attendance or completion.

Two-Day Classroom Workshop with Posttest — What Tier Credential?

What if the same two-day workshop also added a test focused on whether the learners understood the content—and provided that test a week after the program? Note that the LTEM model encourages credentialing at Tiers 4, 5, and 6 to include assessments that show learners are able to remember, not just comprehend in the short term.

If the workshop added this posttest, we’d credential it at Tier-4, Knowledge Retention.

Half-Day Online Program with Performance-Focused Smile Sheet — What Tier Credential?

What if there were a half-day workshop that used one of my Performance-Focused Smile Sheets to evaluate success? At what Tier would it be credentialed?

It would be credentialed at Tier-3, or Tier-3A if we wanted to delineate between learner surveys that assess learning effectiveness and those that don’t.

Three-Session Online Program with Traditional Smile Sheet — What Tier Credential?

This format—using three 90-minute sessions with a traditional smile sheet—is the most common form of credentialing in the workplace learning industry right now. Go look around at those who are providing credentials. They are providing credentials using relatively short presentations and a smile sheet at the end. If this is what they provide, what credentialing Tier do they deserve? Tier-3, or Tier-3B! That’s right! That’s it. These credentials only tell us that learners are satisfied with the learning experience. They don’t tell us whether learners can make important decisions or whether they can utilize new skills.
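Continuing the hypothetical sketch from the credentialing section above (noting that the sketch doesn’t distinguish Tier-3A from Tier-3B), the examples in this walkthrough would classify like this:

```python
# Two-day workshop credited on completion alone -> Tier 1
assert credential_tier(set()) is LtemTier.ATTENDANCE
# Same workshop plus a delayed understanding/retention test -> Tier 4
assert credential_tier({"terminology_or_fact_questions"}) is LtemTier.KNOWLEDGE
# Half-day or three-session program evaluated only with a smile sheet -> Tier 3
assert credential_tier({"learner_survey"}) is LtemTier.LEARNER_PERCEPTIONS
```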

What is this credential really worth?

You can decide for yourself, but I think it could be worth much more, if only the providers earning money from credentials offered credentialing at Tier-5, Tier-6, and beyond.

With LTEM we can begin to demand more!


Work-Learning Research and Will Thalheimer can Help!

People tell me I need to stop giving stuff away for free, or at least that I ought to be more proactive in seeking customers. So, this is a reminder that I am available to help you improve your learning and learning-evaluation strategies and tactics. Please reach out through my nifty contact form by clicking here.

For years, we have used the Kirkpatrick-Katzell Four-Level Model to evaluate workplace learning. With this taxonomy as our guide, we have concluded that the most common form of learning evaluation is learner surveys, followed by measures of learning, then on-the-job behavior, then organizational results.

The truth is more complicated.

In some recent research I led with the eLearning Guild and Jane Bozarth, we used the LTEM model to look for further differentiation. We found it.

Here are some of the insights from this research:

  • Learner surveys are NOT the most common form of learning evaluation. Program completion and attendance are more common, used on most training programs in about 83% of organizations.
  • Learner surveys are still very popular, with 72% of respondents saying that they are used in more than one-third of their learning programs.
  • When we measure learning, we go beyond simple quizzes and knowledge checks.
    • Tier 5 assessments, measuring the ability to make realistic decisions, were reported by 24% of respondents to be used in more than one-third of their learning programs.
    • Tier 6 assessments, measuring realistic task performance (during learning), were reported by about 32% of respondents to be used in more than one-third of their learning programs.
    • Unfortunately, we messed up and forgot to include an option for Tier 4 knowledge questions. However, previous eLearning Guild research in 2007, 2008, and 2010 found that 60%, 60%, and 63% of respondents, respectively, reported measuring memory recall of critical information.
  • Only about 20% of respondents said their organizations are measuring work performance.
  • Only about 16% of respondents said their organizations are measuring the organizational results from learning.
  • Interestingly, where the Four-Level Model puts all types of Results into one bucket, the LTEM framework encourages us to look at other results besides business results.
    • About 12% said their organizations were looking at the effect of the learning on the learner’s success and well-being.
    • Only about 3% said they were measuring the effects of learning on coworkers/family/friends.
    • Only about 3% said they were measuring the effects of learning on the community or society (as has been recommended by Roger Kaufman for years).
    • Only about 1% reported measuring the effects of learning on the environment.


Opportunities

The biggest opportunity—or the juiciest low-hanging fruit—is that we can stop just using Tier-1 attendance and Tier-3 learner-perception measures.

We can also begin to go beyond our 60% rate of measuring Tier-4 knowledge and do more Tier-5 and Tier-6 assessments. As I’ve advocated for years, Tier-5 assessments using well-constructed scenario-based questions offer the perfect balance of power and cost. They are aligned with the research on learning, they have moderate resource costs, and learners see them as challenging and interesting rather than punitive and unhelpful, as knowledge checks often seem.

We can also begin to emphasize more Tier-7 evaluations. Shouldn’t we know whether our learning interventions are actually transferring to the workplace? The same is true for Tier-8 measures. We should look for strategic opportunities here—being mindful of the incredible costs of doing good Tier-8 evaluations. We should also consider looking beyond business results, as these are not the only effects our learning interventions are having.

Finally, we can use LTEM to help guide our learning-development efforts and our learning evaluations. By using LTEM, we are prompted to see things that have been hidden from us for decades.


The Original eLearning Guild Report

To get the original eLearning Guild report, click here.


The LTEM Model

To get the LTEM Model and the 34-page report that goes with it, click here.

My Year In Review 2018—Engineering the Future of Learning Evaluation

In 2018, I shattered my collarbone and lay wasting away for several months, but still, I think I had one of my best years in terms of the contributions I was able to make. This will certainly sound like hubris, and surely it is, but I can’t help but think that 2018 may go down as one of the most important years in learning evaluation’s long history. At the end of this post, I will get to my failures and regrets, but first I’d like to share just how consequential this year was in my thinking and work in learning evaluation.

It started in January when I published a decisive piece of investigative journalism showing that Donald Kirkpatrick was NOT the originator of the four-level model; that another man, Raymond Katzell, deserved that honor all along. In February, I published a new evaluation model, LTEM (The Learning-Transfer Evaluation Model)—intended to replace the weak and harmful Kirkpatrick-Katzell Four-Level Model. Already, doctoral students are studying LTEM, and organizations around the world are using LTEM to build more effective learning-evaluation strategies.

Publishing these two groundbreaking efforts would have made a great year, but because I still have so much to learn about evaluation, I was very active in exploring our practices—looking for their strengths and weaknesses. I led two research efforts (one with the eLearning Guild and one with my own organization, Work-Learning Research). The Guild research surveyed people like you and your learning-professional colleagues on their general evaluation practices. The Work-Learning Research effort focused specifically on our experiences as practitioners in surveying our learners for their feedback.

Also in 2018, I compiled and published a list of 54 common mistakes that get made in learning evaluation. I wrote an article on how to think about our business stakeholders in learning evaluation. I wrote a post on one of the biggest lies in learning evaluation—how we fool ourselves into thinking that learner feedback gives us definitive data on learning transfer and organizational results. It does not! I created a replacement for the problematic Net Promoter Score. I shared my updated smile-sheet questions, improving those originally put forth in my award-winning book, Performance-Focused Smile Sheets. You can access all these publications below.

In my 2018 keynotes, conference sessions, and workshops, I recounted our decades-long frustrations in learning evaluation. We are clearly not happy with what we’ve been able to do in terms of learning evaluation. There are two reasons for this. First, learning evaluation is very complex and difficult to accomplish—doubly so given our severe resource constraints in terms of both budget and time. Second, our learning-evaluation tools are mostly substandard—enabling us to create vanity metrics but not enabling us to capture data in ways that help us, as learning professionals, make our most important decisions.

In 2019, I will continue my work in learning evaluation. I still have so much to unravel. If you see a bit of wisdom related to learning evaluation, please let me know.

Will’s Top Fifteen Publications for 2018

Let me provide a quick review of the top things I wrote this year:

  1. LTEM (The Learning-Transfer Evaluation Model)
    Although published by me in 2018, the model and accompanying 34-page report originated in work begun in 2016 and through the generous and brilliant feedback I received from Julie Dirksen, Clark Quinn, Roy Pollock, Adam Neaman, Yvon Dalat, Emma Weber, Scott Weersing, Mark Jenkins, Ingrid Guerra-Lopez, Rob Brinkerhoff, Trudy Mandeville, and Mike Rustici—as well as from attendees in the 2017 ISPI Design-Thinking conference and the 2018 Learning Technologies conference in London. LTEM is designed to replace the Kirkpatrick-Katzell Four-Level Model originally formulated in the 1950s. You can learn about the new model by clicking here.
  2. Raymond Katzell NOT Donald Kirkpatrick
    Raymond Katzell originated the Four-Level Model. Although Donald Kirkpatrick embraced accolades for the Four-Level Model, it turns out that Raymond Katzell was the true originator. I did an exhaustive investigation and offered a balanced interpretation of the facts. You can read the original piece by clicking here. Interestingly, none of our trade associations have reported on this finding. Why is that? LOL
  3. When Training Pollutes. Our Responsibility to Lessen the Environmental Damage of Training
    I wrote an article and placed it on LinkedIn and as far as I can tell, very few of us really want to think about this. But you can get started by reading the article (by clicking here).
  4. Fifty-Four Mistakes in Learning Evaluation
    Of course we as an industry make mistakes in learning evaluation, but who knew we made so many? I began compiling the list because I’d seen a good number of poor practices and false narratives about what is important in learning evaluation, but by the time I’d completed the full list I was a bit dumbstruck by the magnitude of the problem. I’ve come to believe that we are still in the dark ages of learning evaluation and we need a renaissance. This article will give you some targets for improvement. Click here to read it.
  5. New Research on Learning Evaluation — Conducted with The eLearning Guild
    The eLearning Guild and Dr. Jane Bozarth (the Guild’s Director of Research) asked me to lead a research effort to determine what practitioners in the learning/elearning field are thinking and doing in terms of learning evaluation. In a major report released about a month ago, we reveal findings on how people feel about the learning measurement they are able to do, the support they get from their organizations, and their feelings about their current level of evaluation competence. You can read a blog post I wrote highlighting one result from the report—that a full 40% of us are unhappy with what we are able to do in terms of learning evaluation. You can access the full report (if you’re a Guild member) and an executive summary here. Also, stay tuned to my blog or sign up for my newsletter to see future posts about our findings.
  6. Current Practices in Gathering Learner Feedback
    We at Work-Learning Research, Inc. conducted a survey focused on gathering learner feedback (i.e., smile sheets, reaction forms, learner surveys) that spanned 2017 and 2018. Since the publication of my book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form, I’ve spent a ton of time helping organizations build more effective learner surveys and gauging common practices in the workplace learning field. This research survey continued that work. To read my exhaustive report, click here.
  7. One of the Biggest Lies in Learning Evaluation — Asking Learners about Level 3 and 4 (LTEM Tiers 7 and 8)
    This is big! One of the biggest lies in learning evaluation. It’s a lie we like to tell ourselves and a lie our learning-evaluation vendors like to tell us. If we ask our learners questions that relate to their job performance or the organizational impact of our learning programs, we are NOT measuring at Kirkpatrick-Katzell Levels 3 or 4 (or at LTEM Tiers 7 and 8); we are measuring at Level 1 and LTEM Tier 3. You can read this refutation here.
  8. Who Will Rule Our Conferences? Truth or Bad-Faith Vendors?
    What do you want from the trade organizations in the learning field? Probably “accurate information” is high on your list. But what happens when the information you get is biased and untrustworthy? Could. Never. Happen. Right? Read this article to see how bias might creep in.
  9. Snake Oil. The Story of Clark Stanley as Preface to Clark Quinn’s Excellent Book
    This was one of my favorite pieces of writing in 2018. Did I ever mention that I love writing and would consider giving this all up for a career as a writer? You’ve all heard of “snake oil” but if you don’t know where the term originated, you really ought to read this piece.
  10. Dealing with the Emotional Readiness of Our Learners — My Ski Accident Reflections
    I had a bad accident on the ski slopes in February this year, and it got me thinking about how our learners might not always be emotionally ready to learn. I don’t have answers in this piece, just reflections, which you can read about here.
  11. The Backfire Effect. Not the Big Worry We Thought It Was (for Those Who Would Debunk Learning Myths)
    This article is for those interested in debunking and persuasion. The Backfire Effect was the finding that trying to persuade someone to stop believing a falsehood might actually make them more inclined to believe the falsehood. The good news is that new research showed this worry might be overblown. You can read more about this here (if you dare to be persuaded).
  12. Updated Smile-Sheet Questions for 2018
    I published a set of learner-survey questions in my 2016 book, and have been working with clients to use these questions and variations on these questions for over two years since then. I’ve learned a thing or two and so I published some improvements early last year. You can see those improvements here. And note, for 2019, I’ll be making additional improvements—so stay tuned! Remember, you can sign up to be notified of my news here.
  13. Replacement for NPS (The Net Promoter Score)
    NPS is all the rage. Still! Unfortunately, it’s a terribly bad question to include on a learner survey. The good news is that now there is an alternative, which you can see here.
  14. Neon Elephant Award for 2018 to Clark Quinn
    Every year, I give an award for a great research-to-practice contribution in the workplace learning field. This year’s winner is Clark Quinn. See why he won and check out his excellent resources here.
  15. New Debunker Club Website
    The Debunker Club is a group of people who have committed to debunking myths in the learning field and/or sharing research-based information. In 2018, working with a great team of volunteers, we revamped the Debunker Club website to help build a community of debunkers. We now have over 800 members from around the world. You can learn more about why The Debunker Club exists by clicking here. Also, feel free to join us!


My Final Reflections on 2018

I’m blessed to be supported by smart, passionate clients and by some of the smartest friends and colleagues in the learning field. My Work-Learning Research practice turned 20 years old in 2018. Being a consultant—especially one who focuses on research-to-practice in the workplace learning field—is still a challenging yet emotionally rewarding endeavor. In 2018, I turned my attention almost fully to learning evaluation. You can read about my two-path evaluation approach here.

One of my research surveys totally flopped this year. It was focused on the interface between us (as learning professionals) and our organizations’ senior leadership. I wanted to know whether what we thought senior leadership wanted was what they actually wanted. Unfortunately, neither I nor any of the respondents could entice a senior leader to comment. Not one! If you or your organization has access to senior managers, I’d love to partner with you on this! Let me know. Indeed, this doesn’t even have to be research. If your CEO would be willing to trade his/her time letting me ask a few questions in exchange for my time answering questions about learning, elearning, learning evaluation, etc., I’d be freakin’ delighted!

I failed this year in working out a deal with another evaluation-focused organization to merge our efforts. I was bummed about this, as the synergies would have been great. I also failed in 2018 to cure myself of the tendency to miss important emails. If you ever can’t get in touch with me, try, try again! Thanks and apologies!

I had a blast in 2018 speaking and keynoting at conferences both big and small: from doing variations on the Learning-Research Quiz Show (a rollicking good time) to talking about innovations in learning evaluation to presenting workshops on my learning-evaluation methods and the LTEM model. Good stuff, if a ton of work. Oh! I did fail again in 2018 at turning my workshops into online workshops. I hope to do better in 2019. I also failed in 2018 to finish a research review of the training-transfer research. I’m like 95% done, but still haven’t had a chance to finish.

2018 broke my body, made me unavailable for a couple of months, but overall, it turned out to be a pretty damn good year. 2019 looks promising too as I have plans to continue working on learning evaluation. It’s kind of interesting that we are still in the dark ages of learning evaluation. We as an industry, and me as a person, have a ton more to learn about learning evaluation. I plan to continue the journey. Please feel free to reach out and let me know what I can learn from you and your organization. And of course, because I need to pay the rent, let me say that I’d be delighted if you wanted me to help you or your organization. You can reach me through the Work-Learning Research contact form.

Thanks for reading and being interested in my work!!!

At a recent online discussion held by the Minnesota Chapter of ISPI, where they were discussing the Serious eLearning Manifesto, Michael Allen offered a brilliant idea for learning professionals.

Michael’s company, Allen Interactions, talks regularly with prospective clients. It is in this capacity that Michael often asks this question (or one with this gist):

What is the last thing you want your learners to be doing in training before they go back to their work?

Michael knows the answer—he is using Socratic questioning here—and the answer should be obvious to those skilled in developing learning. We want people to be practicing what they’ve learned, and hopefully practicing in as realistic a way as possible. Of course!

Of course, but too often we don’t think like this. We have our instructional objectives and we plow through to cover content, hoping against hope that the knowledge seeds we plant will magically turn into performance on the job—as if knowledge can be harvested without any further nurturance.

We must remember the wisdom behind Michael’s question, that it is our job as learning professionals to ensure our learners are not only gaining knowledge, but that they are getting practice in making decisions and practicing realistic tasks.

One way to encourage yourself to engage in these good practices is to utilize the LTEM model, a learning-evaluation model designed to support us as learning professionals in measuring what’s truly important in learning. LTEM’s Tiers 5 and 6 encourage us to evaluate learners’ proficiency in making work-relevant decisions (Tier 5) and performing work-relevant tasks (Tier 6).

Whatever method you use to encourage your organization and team to engage in this research-based best practice, remember that we are harming our learners when we just teach content. Without practice, very little learning will transfer to the workplace.

The Learning-Transfer Evaluation Model (LTEM) and accompanying Report were updated today with two major changes:

  • The model has been inverted to put the better evaluation methods at the top instead of at the bottom.
  • The model now uses the word “Tier” to refer to the different levels within the model—to distinguish these from the levels of the Kirkpatrick-Katzell model.

This will be the last update to LTEM for the foreseeable future.

You can find the latest version of LTEM and the accompanying report by clicking here.


This blog post introduces a new learning-evaluation model, the Learning-Transfer Evaluation Model (LTEM).


Why We Need a New Evaluation Model

It is well past time for a new learning-evaluation model for the workplace learning field. The Kirkpatrick-Katzell Model is over 60 years old. It was born in a time before computers, before cognitive psychology revolutionized the learning field, before the training field was transformed from one that focused on the classroom learning experience to one focused on work performance.

The Kirkpatrick-Katzell model—created by Raymond Katzell and popularized by Donald Kirkpatrick—is the dominant standard in our field. It has also done a tremendous amount of harm, pushing us to rely on inadequate evaluation practices and poor learning designs.

I am not the only critic of the Kirkpatrick-Katzell model. There are legions of us. If you begin a Google search with the letters “Criticisms of the Ki,” Google suggests “Criticisms of the Kirkpatrick Model” as one of the most popular searches.

Here’s what a seminal research review said about the Kirkpatrick-Katzell model (before the model’s name change):

The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders…

The New Model

For the past year or so I’ve been working to develop a new learning-evaluation model. The current version is the eleventh iteration, improved after reflection, after asking some of the smartest people in our industry to provide feedback, after sharing earlier versions with conference attendees at the 2017 ISPI innovation and design-thinking conference and the 2018 Learning Technologies conference in London.

Special thanks to the following people who provided significant feedback that improved the model and/or the accompanying article:

Julie Dirksen, Clark Quinn, Roy Pollock, Adam Neaman, Yvon Dalat, Emma Weber, Scott Weersing, Mark Jenkins, Ingrid Guerra-Lopez, Rob Brinkerhoff, Trudy Mandeville, Mike Rustici

The model, which I’ve named the Learning-Transfer Evaluation Model (LTEM, pronounced L-tem), is a one-page, eight-tier model, augmented with color coding and descriptive explanations. In addition to the model itself, I’ve prepared a 34-page report describing the need for the model, the rationale for its design, and recommendations on how to use it.

You can access the model and the report by clicking on the following links:


Release Notes

The LTEM model and report were researched, conceived, and written by Dr. Will Thalheimer of Work-Learning Research, Inc., with significant and indispensable input from others. No one sponsored or funded this work. It was a labor of love and is provided as a valentine for the workplace learning field on February 14th, 2018 (Version 11). Version 12 was released on May 17th, 2018 based on feedback from its use. The model and report are copyrighted by Will Thalheimer, but you are free to share them as is, as long as you don’t sell them.

If you would like to contact me (Will Thalheimer), you can do that at this link: https://www.worklearning.com/contact/

If you would like to sign up for my list, you can do that here: https://www.worklearning.com/sign-up/