Tag Archive for: evaluation

In 2016 I published a book on how to radically transform learner surveys into something useful. The book won an award from ISPI and helped thousands of companies update their smile sheets. Now, I’m updating the book with the knowledge I’ve gained in consulting with companies on their learning-evaluation efforts. The second edition will be titled: Performance-Focused Learner Surveys: A Radical Rethinking of a Dangerous Art Form (Second Edition).

In the first edition, I listed nine benefits of learner surveys, but I had only scratched the surface. In the coming book, I will offer 20 benefits. Here’s the current list:

Supporting Learning Design Effectiveness

  1. Red-flagging training programs that are not sufficiently effective.
  2. Gathering ideas for ongoing updates and revisions of learning programs.
  3. Judging the strengths, weaknesses, and viability of program updates and pilots.
  4. Providing learning architects and trainers with feedback to aid their development.
  5. Judging the competence of learning architects and trainers.
  6. Judging the contributions to learning made by people outside of the learning team.
  7. Assessing the contributions of learning supports and organizational practices.

Supporting Learners in Learning and Application

  1. Helping learners reflect on and reinforce what they learned.
  2. Helping learners determine what (if anything) they plan to do with their learning.
  3. Nudging learners to greater learning and application efforts.

Nudging Action Through Stealth Messaging

  1. Guiding learning architects to create more effective learning by sharing survey questions before learning designs are finalized and sharing survey results after data is gathered.
  2. Guiding trainers to utilize more effective learning methods by sharing survey questions before learning designs are finalized and sharing survey results after data is gathered.
  3. Guiding organizational stakeholders to support learning efforts more effectively by sharing survey questions and survey results.
  4. Guiding organizational decision makers to better appreciate the complexity and depth of learning and development—helping the learning team to gain credibility and autonomy.

Supporting Relationships with Learners and Other Key Stakeholders

  1. Capturing learner satisfaction data to understand—and make decisions that relate to—the reputation of the learning intervention and/or the instructors.
  2. Upholding the spirit of common courtesy by giving learners a chance for feedback.
  3. Enabling learner frustrations to be vented—to limit damage from negative back-channel communications.

Maintaining Organizational Credibility

  1. Engaging in visibly credible efforts to assess learning effectiveness.
  2. Engaging in visibly credible efforts to utilize data to improve effectiveness.
  3. Reporting out data to demonstrate learning effectiveness.

If you want to learn when the new edition is available, sign up for my list: https://www.worklearning.com/sign-up/.

The second edition will include new and improved question wording, additional questions, additional chapters, etc.

Matt Richter and I, in our Truth-in-Learning Podcast, will be discussing learner surveys in our next episode. Matt doesn’t believe in smile sheets and I’m going to convince him of the amazing power of well-crafted learner surveys. This blog post is my first shot across the bow. To join us, subscribe to our podcast in your podcast app.

Dani Johnson at RedThread Research has just released a wonderful synopsis of Learning Evaluation Models. Comprehensive, thoughtful, well-researched! It also includes suggestions for articles to read.

This work is part of an ongoing effort to research the learning-evaluation space. With research sponsored by the folks at the adroit learning-evaluation company forMetris, RedThread is looking to uncover new insights about the way we do workplace learning evaluation.

Here’s what Dani says in her summary:

“What we hoped to see in the literature were new ideas – different ways of defining impact for the different conditions we find ourselves in. And while we did see some, the majority of what we read can be described as same. Same trends and themes based on the same models with little variation.”


“While we do not disparage any of the great work that has been done in the area of learning measurement and evaluation, many of the models and constructs are over 50 years old, and many of the ideas are equally as old.

On the whole, the literature on learning measurement and evaluation failed to take into account that the world has shifted – from the attitudes of our employees to the tools available to develop them to the opportunities we have to measure. Many articles focused on shoe-horning many of the new challenges L&D functions face into old constructs and models.”


“Of the literature we reviewed, several pieces stood out to us. Each of the following authors [detailed in the summary] and their work contained information that we found useful and mind-changing. We learned from their perspectives and encourage you to do the same.”


I also encourage you to look at this great review! You can see the summary here.


For years, we have used the Kirkpatrick-Katzell Four-Level Model to evaluate workplace learning. With this taxonomy as our guide, we have concluded that learner surveys are the most common form of learning evaluation, followed by measures of learning, then on-the-job behavior, then organizational results.

The truth is more complicated.

In some recent research I led with the eLearning Guild and Jane Bozarth, we used the LTEM model to look for further differentiation. We found it.

Here are some of the insights from the graphic above:

  • Learner surveys are NOT the most common form of learning evaluation. Program completion and attendance are more common, being done on most training programs in about 83% of organizations.
  • Learner surveys are still very popular, with 72% of respondents saying that they are used in more than one-third of their learning programs.
  • When we measure learning, we go beyond simple quizzes and knowledge checks.
    • Tier 5 assessments, measuring the ability to make realistic decisions, were reported by 24% of respondents to be used in more than one-third of their learning programs.
    • Tier 6 assessments, measuring realistic task performance (during learning), were reported by about 32% of respondents to be used in more than one-third of their learning programs.
    • Unfortunately, we messed up and forgot to include an option for Tier-4 knowledge questions. However, previous eLearning Guild research in 2007, 2008, and 2010 found that the percentage of respondents who reported measuring memory recall of critical information was 60%, 60%, and 63% respectively.
  • Only about 20% of respondents said their organizations are measuring work performance.
  • Only about 16% of respondents said their organizations are measuring the organizational results from learning.
  • Interestingly, where the Four-Level Model puts all types of Results into one bucket, the LTEM framework encourages us to look at other results besides business results.
    • About 12% said their organizations were looking at the effect of the learning on the learner’s success and well-being.
    • Only about 3% said they were measuring the effects of learning on coworkers/family/friends.
    • Only about 3% said they were measuring the effects of learning on the community or society (as has been recommended by Roger Kaufman for years).
    • Only about 1% reported measuring the effects of learning on the environment.


Opportunities

The biggest opportunity—or the juiciest low-hanging fruit—is that we can stop just using Tier-1 attendance and Tier-3 learner-perception measures.

We can also begin to go beyond our 60% rate in measuring Tier-4 knowledge and do more Tier-5 and Tier-6 assessments. As I’ve advocated for years, Tier-5 assessments using well-constructed scenario-based questions offer the perfect balance of power and cost. They are aligned with the research on learning, they have moderate resource costs, and learners see them as challenging and interesting rather than punitive and unhelpful, as they often see knowledge checks.

We can also begin to emphasize more Tier-7 evaluations. Shouldn’t we know whether our learning interventions are actually transferring to the workplace? The same is true for Tier-8 measures. We should look for strategic opportunities here—being mindful of the incredible costs of doing good Tier-8 evaluations. We should also consider looking beyond business results—as these are not the only effects our learning interventions are having.

Finally, we can use LTEM to help guide our learning-development efforts and our learning evaluations. By using LTEM, we are prompted to see things that have been hidden from us for decades.


The Original eLearning Guild Report

To get the original eLearning Guild report, click here.


The LTEM Model

To get the LTEM Model and the 34-page report that goes with it, click here.

My Year In Review 2018—Engineering the Future of Learning Evaluation

In 2018, I shattered my collarbone and lay wasting for several months, but still, I think I had one of my best years in terms of the contributions I was able to make. This will certainly sound like hubris, and surely it is, but I can’t help but think that 2018 may go down as one of the most important years in learning evaluation’s long history. At the end of this post, I will get to my failures and regrets, but first I’d like to share just how consequential this year was in my thinking and work in learning evaluation.

It started in January when I published a decisive piece of investigative journalism showing that Donald Kirkpatrick was NOT the originator of the four-level model; that another man, Raymond Katzell, has deserved that honor all along. In February, I published a new evaluation model, LTEM (The Learning-Transfer Evaluation Model)—intended to replace the weak and harmful Kirkpatrick-Katzell Four-Level Model. Already, doctoral students are studying LTEM and organizations around the world are using LTEM to build more effective learning-evaluation strategies.

Publishing these two groundbreaking efforts would have made a great year, but because I still have so much to learn about evaluation, I was very active in exploring our practices—looking for their strengths and weaknesses. I led two research efforts (one with the eLearning Guild and one with my own organization, Work-Learning Research). The Guild research surveyed people like you and your learning-professional colleagues on their general evaluation practices. The Work-Learning Research effort focused specifically on our experiences as practitioners in surveying our learners for their feedback.

Also in 2018, I compiled and published a list of 54 common mistakes that get made in learning evaluation. I wrote an article on how to think about our business stakeholders in learning evaluation. I wrote a post on one of the biggest lies in learning evaluation—how we fool ourselves into thinking that learner feedback gives us definitive data on learning transfer and organizational results. It does not! I created a replacement for the problematic Net Promoter Score. I shared my updated smile-sheet questions, improving those originally put forth in my award-winning book, Performance-Focused Smile Sheets. You can access all these publications below.

In my 2018 keynotes, conference sessions, and workshops, I recounted our decades-long frustrations in learning evaluation. We are clearly not happy with what we’ve been able to do in terms of learning evaluation. There are two reasons for this. First, learning evaluation is very complex and difficult to accomplish—doubly so given our severe resource constraints in terms of both budget and time. Second, our learning-evaluation tools are mostly substandard—enabling us to create vanity metrics but not enabling us to capture data in ways that help us, as learning professionals, make our most important decisions.

In 2019, I will continue my work in learning evaluation. I still have so much to unravel. If you see a bit of wisdom related to learning evaluation, please let me know.

Will’s Top Fifteen Publications for 2018

Let me provide a quick review of the top things I wrote this year:

  1. LTEM (The Learning-Transfer Evaluation Model)
    Although published by me in 2018, the model and accompanying 34-page report originated in work begun in 2016 and through the generous and brilliant feedback I received from Julie Dirksen, Clark Quinn, Roy Pollock, Adam Neaman, Yvon Dalat, Emma Weber, Scott Weersing, Mark Jenkins, Ingrid Guerra-Lopez, Rob Brinkerhoff, Trudy Mandeville, and Mike Rustici—as well as from attendees in the 2017 ISPI Design-Thinking conference and the 2018 Learning Technologies conference in London. LTEM is designed to replace the Kirkpatrick-Katzell Four-Level Model originally formulated in the 1950s. You can learn about the new model by clicking here.
  2. Raymond Katzell NOT Donald Kirkpatrick
    Raymond Katzell originated the Four-Level Model. Although Donald Kirkpatrick embraced accolades for the Four-Level Model, it turns out that Raymond Katzell was the true originator. I did an exhaustive investigation and offered a balanced interpretation of the facts. You can read the original piece by clicking here. Interestingly, none of our trade associations have reported on this finding. Why is that? LOL
  3. When Training Pollutes. Our Responsibility to Lessen the Environmental Damage of Training
    I wrote an article and placed it on LinkedIn and as far as I can tell, very few of us really want to think about this. But you can get started by reading the article (by clicking here).
  4. Fifty-Four Mistakes in Learning Evaluation
    Of course we as an industry make mistakes in learning evaluation, but who knew we made so many? I began compiling the list because I’d seen a good number of poor practices and false narratives about what is important in learning evaluation, but by the time I’d completed my full list I was a bit dumbstruck by the magnitude of the problem. I’ve come to believe that we are still in the dark ages of learning evaluation and we need a renaissance. This article will give you some targets for improvements. Click here to read it.
  5. New Research on Learning Evaluation — Conducted with The eLearning Guild
    The eLearning Guild and Dr. Jane Bozarth (the Guild’s Director of Research) asked me to lead a research effort to determine what practitioners in the learning/elearning field are thinking and doing in terms of learning evaluation. In a major report released about a month ago, we reveal findings on how people feel about the learning measurement they are able to do, the support they get from their organizations, and their feelings about their current level of evaluation competence. You can read a blog post I wrote highlighting one result from the report—that a full 40% of us are unhappy with what we are able to do in terms of learning evaluation. You can access the full report (if you’re a Guild member) and an executive summary here. Also, stay tuned to my blog or sign up for my newsletter to see future posts about our findings.
  6. Current Practices in Gathering Learner Feedback
    We at Work-Learning Research, Inc. conducted a survey focused on gathering learner feedback (i.e., smile sheets, reaction forms, learner surveys) that spanned 2017 and 2018. Since the publication of my book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form, I’ve spent a ton of time helping organizations build more effective learner surveys and gauging common practices in the workplace learning field. This research survey continued that work. To read my exhaustive report, click here.
  7. One of the Biggest Lies in Learning Evaluation — Asking Learners about Level 3 and 4 (LTEM Tiers 7 and 8)
    This is big! One of the biggest lies in learning evaluation. It’s a lie we like to tell ourselves and a lie our learning-evaluation vendors like to tell us. If we ask our learners questions that relate to their job performance or the organizational impact of our learning programs, we are NOT measuring at Kirkpatrick-Katzell Level 3 or 4 (or at LTEM Tiers 7 and 8); we are measuring at Level 1 and LTEM Tier 3. You can read this refutation here.
  8. Who Will Rule Our Conferences? Truth or Bad-Faith Vendors?
    What do you want from the trade organizations in the learning field? Probably “accurate information” is high on your list. But what happens when the information you get is biased and untrustworthy? Could. Never. Happen. Right? Read this article to see how bias might creep in.
  9. Snake Oil. The Story of Clark Stanley as Preface to Clark Quinn’s Excellent Book
    This was one of my favorite pieces of writing in 2018. Did I ever mention that I love writing and would consider giving this all up for a career as a writer? You’ve all heard of “snake oil” but if you don’t know where the term originated, you really ought to read this piece.
  10. Dealing with the Emotional Readiness of Our Learners — My Ski Accident Reflections
    I had a bad accident on the ski slopes in February this year and I got thinking about how our learners might not always be emotionally ready to learn. I don’t have answers in this piece, just reflections, which you can read about here.
  11. The Backfire Effect. Not the Big Worry We Thought it was (for Those Who Would Debunk Learning Myths)
    This article is for those interested in debunking and persuasion. The Backfire Effect was the finding that trying to persuade someone to stop believing a falsehood might actually make them more inclined to believe the falsehood. The good news is that new research showed that this worry might be overblown. You can read more about this here (if you dare to be persuaded).
  12. Updated Smile-Sheet Questions for 2018
    I published a set of learner-survey questions in my 2016 book, and have been working with clients to use these questions and variations on these questions for over two years since then. I’ve learned a thing or two and so I published some improvements early last year. You can see those improvements here. And note, for 2019, I’ll be making additional improvements—so stay tuned! Remember, you can sign up to be notified of my news here.
  13. Replacement for NPS (The Net Promoter Score)
    NPS is all the rage. Still! Unfortunately, it’s a terribly bad question to include on a learner survey. The good news is that now there is an alternative, which you can see here.
  14. Neon Elephant Award for 2018 to Clark Quinn
    Every year, I give an award for a great research-to-practice contribution in the workplace learning field. This year’s winner is Clark Quinn. See why he won and check out his excellent resources here.
  15. New Debunker Club Website
    The Debunker Club is a group of people who have committed to debunking myths in the learning field and/or sharing research-based information. In 2018, working with a great team of volunteers, we revamped the Debunker Club website to help build a community of debunkers. We now have over 800 members from around the world. You can learn more about why The Debunker Club exists by clicking here. Also, feel free to join us!


My Final Reflections on 2018

I’m blessed to be supported by smart, passionate clients and by some of the smartest friends and colleagues in the learning field. My Work-Learning Research practice turned 20 years old in 2018. Being a consultant—especially one who focuses on research-to-practice in the workplace learning field—is still a challenging yet emotionally rewarding endeavor. In 2018, I turned my attention almost fully to learning evaluation. You can read about my two-path evaluation approach here.

One of my research surveys totally flopped this year. It was focused on the interface between us (as learning professionals) and our organizations’ senior leadership. I wanted to know whether what we thought senior leadership wanted was what they actually wanted. Unfortunately, neither I nor any of the respondents could entice a senior leader to comment. Not one! If you or your organization has access to senior managers, I’d love to partner with you on this! Let me know. Indeed, this doesn’t even have to be research. If your CEO would be willing to trade his/her time letting me ask a few questions in exchange for my time answering questions about learning, elearning, learning evaluation, etc., I’d be freakin’ delighted!

I failed this year in working out a deal with another evaluation-focused organization to merge our efforts. I was bummed about this failure, as the synergies would have been great. I also failed in 2018 to cure myself of the tendency to miss important emails. If you ever can’t get in touch with me, try, try again! Thanks and apologies!

I had a blast in 2018 speaking and keynoting at conferences both big and small, from doing variations on the Learning-Research Quiz Show (a rollicking good time) to talking about innovations in learning evaluation to presenting workshops on my learning-evaluation methods and the LTEM model. Good stuff, if a ton of work. Oh! I did fail again in 2018 to turn my workshops into online workshops. I hope to do better in 2019.

I also failed in 2018 to finish a research review of the training-transfer literature. I’m like 95% done, but still haven’t had a chance to finish.

2018 broke my body, made me unavailable for a couple of months, but overall, it turned out to be a pretty damn good year. 2019 looks promising too as I have plans to continue working on learning evaluation. It’s kind of interesting that we are still in the dark ages of learning evaluation. We as an industry, and me as a person, have a ton more to learn about learning evaluation. I plan to continue the journey. Please feel free to reach out and let me know what I can learn from you and your organization. And of course, because I need to pay the rent, let me say that I’d be delighted if you wanted me to help you or your organization. You can reach me through the Work-Learning Research contact form.

Thanks for reading and being interested in my work!!!

Released Today: Research Report on Learning Evaluation Conducted with The eLearning Guild.

Report Title: Evaluating Learning: Insights from Learning Professionals.

I am delighted to announce that a research effort that I led in conjunction with Dr. Jane Bozarth and the eLearning Guild has been released today. I’ll be blogging about our findings over the next couple of months.

This is a major report — packed into 39 pages — and should be read by everyone in the workplace learning field interested in learning evaluation!

Just a teaser here:

We asked folks to consider the last three learning programs their units developed and to reflect on the learning-evaluation approaches they used.

While a majority were generally happy with their evaluation methods on these recent learning programs, about 40% were dissatisfied. Later, in a more general question about whether learning professionals are able to do the learning measurement they want to do, fully 52% said they were NOT able to do the kind of evaluation they thought was right to do.

In the full report, available only to Guild members, we dig down and explore the practices and perspectives that drive our learning-evaluation efforts. I encourage you to get the full report, as it touches on the methods we use, how we communicate with senior business leaders, what we’d like to do differently, and what we think we’re good at. Also, the report concludes with 12 powerful action strategies for getting the most out of our learning-evaluation efforts.

You can get the full report by clicking here.


Respondents

Over 200 learning professionals responded to Work-Learning Research’s 2017-2018 survey on current practices in gathering learner feedback, and today I will reveal the results. The survey ran from November 29th, 2017 to September 16th, 2018. The sample of respondents was drawn from Work-Learning Research’s mailing list and through extensive calls for participation on a variety of social media channels. Because of this sampling methodology, the survey results are likely skewed toward professionals who care about and/or pay attention to research-based practice recommendations more than the workplace learning field as a whole. They are also likely more interested and experienced in learning evaluation.

Feel free to share this link with others.

Goal of the Research

The goal of the research was to determine what people are doing in the way of evaluating their learning interventions through the practice of asking learners for their perspectives.

Questions the Research Hoped to Answer

  1. Are smile sheets (learner-feedback questions) still the most common method of doing learning evaluation?
  2. How does their use compare with other methods? Are other methods growing in prominence/use?
  3. How satisfied are learning professionals with their organizations’ learner-feedback methods?
  4. To what extent are organizations looking for alternatives to their current learner-feedback methods?
  5. What kinds of questions are used on smile sheets? Has Thalheimer’s new approach, performance-focused questioning, gained any traction?
  6. What do learning professionals think their current smile sheets are good at measuring (Satisfaction, Reputation, Effectiveness, Nothing)?
  7. What tools are organizations using to gather learner feedback?
  8. How useful are current learner-feedback questions in helping guide improvements in learning design and delivery?
  9. How widely are the target metrics of LTEM (The Learning-Transfer Evaluation Model) currently being measured?

A summary of the findings indexed to these questions can be found at the end of this post.

Situating the Practice of Gathering Learner Feedback

When we gather feedback from learners, we are using a Tier 3 methodology on the LTEM (Learning-Transfer Evaluation Model) or Level 1 on the Kirkpatrick-Katzell Four-Level Model of Training Evaluation.

Demographic Background of Respondents

Respondents came from a wide range of organizations, including small, midsize, and large organizations.

Respondents play a wide range of roles in the learning field.

Most respondents live in the United States and Canada, but there was also significant representation from other predominantly English-speaking countries.

Learner-Feedback Findings

About 67% of respondents report that learners are asked about their perceptions on more than half of their organization’s learning programs, including elearning. Only about 22% report that they survey learners on less than half of their learning programs. This finding is consistent with past findings—surveying learners is the most common form of learning evaluation and is widely practiced.

The two most common question types in use are Likert-like questions and numeric-scale questions. I have argued against their use* and I am pleased that Performance-Focused Smile Sheet questions have been utilized by so many so quickly. Of course, this sample of respondents is comprised of folks on my mailing list so this result surely doesn’t represent current practice in the field as a whole. Not yet! LOL.

*Likert-like questions and numeric-scale questions are problematic for several reasons. First, because they offer fuzzy response choices, learners have a difficult time deciding between them, which likely makes their responses less precise. Second, such fuzziness may inflate bias, as there are no concrete anchors to minimize the biasing effects of the question stems. Third, Likert-like options and numeric scales likely deflate learner responding because learners are habituated to such scales and may be skeptical that data from such scales will actually be useful. Finally, Likert-like options and numeric scales produce indistinct results—averages all in the same range. Such results are difficult to assess, failing to support decision-making—the whole purpose for evaluation in the first place. To learn more, check out Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form (book website here).
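The decision-support point above can be sketched in a few lines of code. Instead of averaging a fuzzy 1-to-5 scale, a performance-focused approach tallies the percentage of learners choosing each concrete answer option and compares the result against a predetermined standard. The option wording and the acceptability threshold below are hypothetical, invented only to illustrate the tallying; they are not taken from the book.

```python
from collections import Counter

# Hypothetical answer options for one performance-focused question.
# The wording and the "acceptable" standard are illustrative inventions.
OPTIONS = [
    "I am NOT yet able to use these skills",
    "I need more guidance before using these skills",
    "I can use these skills with effort",
    "I can use these skills fluently in my work",
]
ACCEPTABLE = OPTIONS[2:]  # hypothetical standard: the top two options

def summarize(responses):
    """Report the percentage choosing each concrete option, plus the share
    meeting the acceptability standard, rather than a single average."""
    counts = Counter(responses)
    total = len(responses)
    breakdown = {opt: round(100 * counts[opt] / total) for opt in OPTIONS}
    meets_standard = round(100 * sum(counts[o] for o in ACCEPTABLE) / total)
    return breakdown, meets_standard
```

Reporting results as “X% chose each option; Y% met the standard” supports a concrete decision (revise the program or leave it alone) in a way that a 4.2-out-of-5 average cannot.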

The most common tools used to gather feedback from learners were paper surveys and SurveyMonkey. Questions delivered from within an LMS were the next most common. High-end evaluation systems like Metrics that Matter were not highly represented among our respondents.

Our respondents did not rate their learner-feedback efforts as very effective. Their learner surveys were seen as most effective in gauging learner satisfaction. Only about 33% of respondents thought their learner surveys gave them insights on the effectiveness of the learning.

Only about 15% of respondents found their data very useful in providing them feedback about how to improve their learning interventions.

Respondents report that their organizations are somewhat open to alternatives to their current learner-feedback approaches, but overall they are not actively looking for alternatives.

Most respondents report that their organizations are at least “modestly happy” with their learner-feedback assessments. Yet only 22% reported being “generally happy” with them. Combining this finding with the one above showing that lots of organizations are open to alternatives, it seems that organizational satisfaction with current learner-feedback approaches is soft.

We asked respondents about their organizations’ attempts to measure the following:

  • Learner Attendance
  • Whether Learner is Paying Attention
  • Learner Perceptions of the Learning (e.g., Smile Sheets, Learner Feedback)
  • Amount or Quality of Learner Participation
  • Learner Knowledge of the Content
  • Learner Ability to Make Realistic Decisions
  • Learner Ability to Complete Realistic Tasks
  • Learner Performance on the Job (or in another future performance situation)
  • Impact of Learning on the Learner
  • Impact of Learning on the Organization
  • Impact of Learning on Coworkers, Family, Friends of the Learner
  • Impact of Learning on the Community or Society
  • Impact of Learning on the Environment

These evaluation targets are encouraged in LTEM (The Learning-Transfer Evaluation Model).

Results are difficult to show—because our question was very complicated (admittedly too complicated)—but I will summarize the findings below.

As you can see, learner attendance and learner perceptions (smile sheets) were the most commonly measured factors, with learner knowledge a distant third. The least common measures involved the impact of the learning on the environment, community/society, and the learner’s coworkers/family/friends.

The flip side—methods rarely utilized in respondents’ organizations—shows pretty much the same thing.

Note that the question above, because it was too complicated, probably produced some spurious results, though the trends at the extremes are likely indicative of the whole range. In other words, it’s likely that attendance and smile sheets are the most utilized, and that measures of impact on the environment, community/society, and learners’ coworkers/family/friends are the least utilized.

Questions Answered Based on Our Sample

  1. Are smile sheets (learner-feedback questions) still the most common method of doing learning evaluation?

    Yes! Smile sheets are clearly the most popular evaluation method, along with measuring attendance (if we include that as a metric).

  2. How does their use compare with other methods? Are other methods growing in prominence/use?

    Except for Attendance, nothing else comes close. The next most common method is measuring knowledge. Remarkably, given the known importance of decision-making (Tier 5 in LTEM) and task competence (Tier 6 in LTEM), these are used in evaluation at a relatively low level. Similar low levels are found in measuring work performance (Tier 7 in LTEM) and organizational results (part of Tier 8 in LTEM). We’ve known about these relatively low levels from many previous research surveys.

    Hardly any measurement is being done on the impact of learning on the learner’s coworkers, family, and friends; on the community, society, or environment; or on learner participation and attention.

  3. How satisfied are learning professionals with their organizations’ learner-feedback methods?

    Learning professionals are moderately satisfied.

  4. To what extent are organizations looking for alternatives to their current learner-feedback methods?

    Organizations are open to alternatives, with some actively seeking alternatives and some not looking.

  5. What kinds of questions are used on smile sheets? Has Thalheimer’s new approach, performance-focused questioning, gained any traction?

    Likert-like options and numeric scales are the most commonly used. Thalheimer’s performance-focused smile-sheet method has gained traction in this sample of respondents—people likely more in the know about Thalheimer’s approach than the industry at large.

  6. What do learning professionals think their current smile sheets are good at measuring (Satisfaction, Reputation, Effectiveness, Nothing)?

    Learning professionals think their current smile sheets are fairly good at measuring learner satisfaction. However, a full one-third of respondents feel that their current approaches are not valid enough to provide meaningful insights about their learning interventions.

  7. What tools are organizations using to gather learner feedback?

    The two most common methods for collecting learner feedback are paper surveys and SurveyMonkey. Questions from LMSs are the next most widely used. Sophisticated evaluation tools are not much in use in our respondent sample.

  8. How useful are current learner-feedback questions in helping guide improvements in learning design and delivery?

    This may be the most important question we can ask, given that evaluation is supposed to help us maintain our successes and improve on our deficiencies. Only 15% of respondents found learner feedback “very helpful” in improving their learning. Many found the feedback “somewhat helpful,” but a full one-third found it “not very useful” in enabling them to improve learning.

  9. How widely are the target metrics of LTEM (The Learning-Transfer Evaluation Model) currently being measured?

    As described in Question 2 above, many of the targets of LTEM are not being adequately measured at this point in time (November 2017 to September 2018, during the time immediately before and after LTEM was introduced). This indicates that LTEM is poised to help organizations uncover evaluation targets that can be helpful in setting goals for learning improvements.

Lessons to be Drawn

The results of this survey reinforce what we’ve known for years. In the workplace learning industry, we default to learner-feedback questions (smile sheets) as our most common learning-evaluation method. This is a big freakin’ problem for two reasons. First, our learner-feedback methods are inadequate. We often use poor survey methodologies and ones particularly unsuited to learner feedback, including the use of fuzzy Likert-like options and numeric scales. Second, even if we used the most advanced learner-feedback methods, we still would not be doing enough to gain insights into the strengths and weaknesses of our learning interventions.

Evaluation is meant to provide us with data we can use to make our most critical decisions. We need to know, for example, whether our learning designs are supporting learner comprehension, learner motivation to apply what they’ve learned, learner ability to remember what they’ve learned, and the supports available to help learners transfer their learning to their work. We typically don’t know these things. As a result, we don’t make the design decisions we ought to make. We don’t improve the learning methods we use or the way we deploy learning. The research captured here should be seen as a wake-up call.

The good news from this research is that learning professionals are often aware of and sensitized to the deficiencies of their learning-evaluation methods. This seems like a good omen. When improved methods are introduced, these professionals will likely encourage their use.

LTEM, the new learning-evaluation model (which I developed with the help of some of the smartest folks in the workplace learning field) is targeting some of the most critical learning metrics—metrics that have too often been ignored. It is too new to be certain of its impact, but it seems like a promising tool.

Why I Have Turned My Attention to Evaluation (and Why You Should Too!)

For 20 years, I’ve focused on compiling scientific research on learning in the belief that research-based information—when combined with a deep knowledge of practice—can drastically improve learning results. I still believe that wholeheartedly! What I’ve also come to understand is that we as learning professionals must get valid feedback on our everyday efforts. It’s simply our responsibility to do so.

We have to create learning interventions based on the best blend of practical wisdom and research-based guidance. We have to measure key indices that tell us how our learning interventions are doing. We have to find out what their strengths are and what their weaknesses are. Then we have to analyze and assess and make decisions about what to keep and what to improve. Then we have to make improvements and again measure our results and continue the cycle—working always toward continuous improvement.

Here’s a quick-and-dirty outline of the recommended cycle for using learning to improve work performance. “Quick-and-dirty” means I might be missing something!

  1. Learn about and/or work to uncover performance-improvement needs.
  2. If you determine that learning can help, continue. Otherwise, build or suggest alternative methods to get to improved work performance.
  3. Deeply understand the work-performance context.
  4. Sketch out a very rough draft for your learning intervention.
  5. Specify your evaluation goals—the metrics you will use to measure your intervention’s strengths and weaknesses.
  6. Sketch out a rough draft for your learning intervention.
  7. Specify your learning objectives (notice that evaluation goals come first!).
  8. Review the learning research and consider your practical constraints (two separate efforts subsequently brought together).
  9. Sketch out a reasonably good draft for your learning intervention.
  10. Build your learning intervention and your learning evaluation instruments (Iteratively testing and improving).
  11. Deploy your “ready-to-go” learning intervention.
  12. Measure your results using the previously determined evaluation instruments, which were based on your previously determined evaluation objectives.
  13. Analyze your results.
  14. Determine what to keep and what to improve.
  15. Make improvements.
  16. Repeat (maybe not every step, but at least from Step 6 onward).

And here is a shorter version:

  1. Know the learning research.
  2. Understand your project needs.
  3. Outline your evaluation objectives—the metrics you will use.
  4. Design your learning.
  5. Deploy your learning and your measurement.
  6. Analyze your results.
  7. Make improvements.
  8. Repeat.
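For readers who think in code, the loop running through both outlines can be sketched as a simple iteration. This is a purely illustrative sketch; none of these function names come from the post or from any actual tool:

```python
def improvement_cycle(design, evaluate, improve, good_enough, max_rounds=5):
    """Run the design -> measure -> analyze -> improve loop.

    evaluate(design)       -> results (deploy and measure)
    good_enough(results)   -> bool    (analyze and decide)
    improve(design, results) -> design (make improvements)
    """
    results = evaluate(design)
    for _ in range(max_rounds):
        if good_enough(results):
            break  # keep what works; stop iterating
        design = improve(design, results)
        results = evaluate(design)
    return design, results

# Toy usage: "design quality" is just a number we nudge upward each round.
final_design, final_results = improvement_cycle(
    design=1,
    evaluate=lambda d: d,
    improve=lambda d, r: d + 1,
    good_enough=lambda r: r >= 3,
)
```

The key design point is that measurement (`evaluate`) runs inside the loop, not once at the end, which is exactly what both outlines above prescribe.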

More Later Maybe

The results shared here come from all respondents. If I get the time, I’d like to look at subsets of respondents. For example, I’d like to see how learning executives and managers might differ from learning practitioners. Let me know how interested you would be in these results.

Also, I will be conducting other surveys on learning-evaluation practices, so stay tuned. We have been frustrated with our evaluation practices for too long, and more work needs to be done to understand the forces that keep us from doing what we want to do. We could also use more and better learning-evaluation tools, because the truth is that learning evaluation is still a nascent field.

Finally, because I learn a ton by working with clients who challenge themselves to do more effective interventions, please get in touch with me if you’d like a partner in thinking things through and trying new methods to build more effective evaluation practices. Also, please let me know how you’ve used LTEM (The Learning-Transfer Evaluation Model).


Appreciations

As always, I am grateful to all the people I learn from, including clients, researchers, thought leaders, conference attendees, and more… Thanks also to all who acknowledge and share my work! It means a lot!

NOTICE OF UPDATE (17 May 2018):

The LTEM Model and accompanying Report were updated today and can be found below.

Two major changes were included:

  • The model has been inverted to put the better evaluation methods at the top instead of at the bottom.
  • The model now uses the word “Tier” to refer to the different levels within the model—to distinguish these from the levels of the Kirkpatrick-Katzell model.

This will be the last update to LTEM for the foreseeable future.


This blog post introduces a new learning-evaluation model, the Learning-Transfer Evaluation Model (LTEM).


Why We Need a New Evaluation Model

It is well past time for a new learning-evaluation model for the workplace learning field. The Kirkpatrick-Katzell Model is over 60 years old. It was born in a time before computers, before cognitive psychology revolutionized the learning field, before the training field was transformed from one that focused on the classroom learning experience to one focused on work performance.

The Kirkpatrick-Katzell model—created by Raymond Katzell and popularized by Donald Kirkpatrick—is the dominant standard in our field. It has also done a tremendous amount of harm, pushing us to rely on inadequate evaluation practices and poor learning designs.

I am not the only critic of the Kirkpatrick-Katzell model. There are legions of us. If you type “Criticisms of the Ki” into Google, one of the most popular suggested completions is “Criticisms of the Kirkpatrick Model.”

Here’s what a seminal research review said about the Kirkpatrick-Katzell model (before the model’s name change):

The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders…

The New Model

For the past year or so I’ve been working to develop a new learning-evaluation model. The current version is the eleventh iteration, improved after reflection, after asking some of the smartest people in our industry to provide feedback, after sharing earlier versions with conference attendees at the 2017 ISPI innovation and design-thinking conference and the 2018 Learning Technologies conference in London.

Special thanks to the following people who provided significant feedback that improved the model and/or the accompanying article:

Julie Dirksen, Clark Quinn, Roy Pollock, Adam Neaman, Yvon Dalat, Emma Weber, Scott Weersing, Mark Jenkins, Ingrid Guerra-Lopez, Rob Brinkerhoff, Trudy Mandeville, Mike Rustici

The model, which I’ve named the Learning-Transfer Evaluation Model (LTEM, pronounced L-tem), is a one-page, eight-level model, augmented with color coding and descriptive explanations. In addition to the model itself, I’ve prepared a 34-page report describing the need for the model, the rationale for its design, and recommendations on how to use it.

You can access the model and the report by clicking on the following links:


Release Notes

The LTEM model and report were researched, conceived, and written by Dr. Will Thalheimer of Work-Learning Research, Inc., with significant and indispensable input from others. No one sponsored or funded this work. It was a labor of love and is provided as a valentine for the workplace learning field on February 14th, 2018 (Version 11). Version 12 was released on May 17th, 2018 based on feedback from its use. The model and report are copyrighted by Will Thalheimer, but you are free to share them as is, as long as you don’t sell them.

If you would like to contact me (Will Thalheimer), you can do that at this link: https://www.worklearning.com/contact/

If you would like to sign up for my list, you can do that here: https://www.worklearning.com/sign-up/


Let’s find out by asking them!

And, let’s ask ourselves (workplace learning professionals) what we think senior leaders will tell us.

NOTE: This may take some effort on our part. Please complete the survey yourself and ask senior leaders at your organization (if your organization is 1000 people or more) to complete the survey.


The Survey Below is for both Senior Organizational Leaders AND for Workplace Learning Professionals.

We will branch you to a separate set of questions!

Answer the survey questions below, or, if you need it, here is a link to the survey.




Send me an email if you want to talk more about learning evaluation...

Dear Readers,

Many of you now follow me and my social-media presence because you’re interested in LEARNING MEASUREMENT, probably because of my recent book on Performance-Focused Smile Sheets (which you can learn about at the book’s website, SmileSheets.com).

More and more, I’m meeting people who have jobs that focus on learning measurement. For some, that’s their primary focus. For most, it’s just a part of their job.

Today, I got an email from a guy looking for a job in learning measurement and analytics. He’s a good guy, smart and passionate, and he ought to be able to find a good job where he can really help. So here’s what I’m thinking. You, my readers, are some of the best and brightest in the industry — you care about our work and you look to the scientific research as a source of guidance. Many of you are also enlightened employers, looking to recruit and hire the best and brightest. So it seems obvious that I should try to connect you…

So here’s what we’ll try. If you’ve got a job in learning measurement, let me know about it. I’ll post it here on my blog. This will be an experiment to see what happens. Maybe nothing… but it’s worth a try.

Now, I know many of you are also loyal readers because of things BESIDES learning measurement, for example, learning research briefs, research-based insights, elearning, subscription learning, learning audits, and great jokes… but let’s keep this experiment to LEARNING MEASUREMENT JOBS at first.

BOTTOM LINE: If you know of a learning-measurement job, let me know. Email me here…

I was asked today the following question from a learning professional in a large company:

It will come as no surprise that we create a great deal of mandatory/regulatory required eLearning here. All of these eLearning interventions have a final assessment that the learner must pass at 80% to be marked as complete, in addition to viewing all the course content. The question is around feedback for those assessment questions.

  • One faction says no feedback at all, just a score at the end and the opportunity to revisit any section of the course before retaking the assessment.
  • Another faction says to tell them correct or incorrect after they submit their answer for each question.
  • And a third faction argues that we should give them detailed feedback beyond just correct/incorrect for each question. 

Which approach do you recommend? 


Here is what I wrote in response:

It all depends on what you’re trying to accomplish…

If this is a high-stakes assessment you may want to protect the integrity of your questions. In such a case, you’d have a large pool of questions and you’d protect the answer choices by not divulging them. You may even have proctored assessments, for example, having the respondent turn on their web camera and submit their video image along with the test results. Also, you wouldn’t give feedback because you’d be concerned that students would share the questions and answers.

If this is largely a test to give feedback to the learners—and to support them in remembering and performance—you’d not only give them detailed feedback, but you’d retest them after a few days or more to reinforce their learning. You might even follow up to see how well they’ve been able to apply what they’ve learned on the job.

We can imagine a continuum between these two points where you might seek a balance between a focus on learning and a focus on assessment.
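To make the continuum concrete, here is a hypothetical sketch of how a platform might encode the three factions’ positions as selectable feedback policies. None of these names come from the post; they are illustrative only:

```python
from enum import Enum

class FeedbackPolicy(Enum):
    SCORE_ONLY = 1    # faction 1: no feedback; protects the question pool
    RIGHT_WRONG = 2   # faction 2: correct/incorrect per question
    DETAILED = 3      # faction 3: full explanations; learning-focused

def choose_feedback(high_stakes: bool, learning_focused: bool) -> FeedbackPolicy:
    """Pick a feedback policy along the assessment-vs-learning continuum."""
    if high_stakes and not learning_focused:
        return FeedbackPolicy.SCORE_ONLY   # defensible assessment
    if learning_focused and not high_stakes:
        return FeedbackPolicy.DETAILED     # maximize learning support
    return FeedbackPolicy.RIGHT_WRONG      # a middle-ground balance
```

The point of the sketch is that feedback is not a single right answer but a parameter whose setting should follow from the goal of the assessment.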

This may be a question for the lawyers, not just for us as learning professionals. If these courses are being provided to meet certain legal requirements, it may be most important to consider what might happen in the legal domain. Personally, I think the law may be behind learning science. Based on talking with clients over many years, it seems that lawyers and regulators often recommend learning designs and assessments that do NOT make sense from a learning standpoint. For example, lawyers tell companies that teaching a compliance topic once a year will be sufficient — when we know that people forget and may need to be reminded.

In the learning-assessment domain, lawyers and regulators may say that it is acceptable to provide a quiz with no feedback. They are focused on having a defensible assessment. This may be the advice you should follow given current laws and regulations. However, it seems ultimately indefensible from a learning standpoint. Couldn’t a litigant argue that the organization did NOT do everything it could to support the employee in learning if it didn’t provide feedback on quiz questions? That seems a pretty straightforward argument, and one that I would testify to in a court of law (if I were asked).

By the way, how do you know 80% is the right cutoff point? Most people use an arbitrary cutoff point, but then you don’t really know what it means.
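One way to see why an arbitrary cutoff is hard to interpret: the chance of passing by pure guessing depends only on the number of questions and answer options, not on competence. A small illustrative sketch (hypothetical, not from the post):

```python
from math import ceil, comb

def pass_prob_by_guessing(n_questions, n_options, cutoff):
    """Probability of reaching the cutoff by guessing on multiple-choice items.

    Sums the binomial probability of getting at least ceil(cutoff * n) items
    right when each guess succeeds with probability 1 / n_options.
    """
    p = 1 / n_options
    k_needed = ceil(cutoff * n_questions)
    return sum(
        comb(n_questions, k) * p**k * (1 - p) ** (n_questions - k)
        for k in range(k_needed, n_questions + 1)
    )
```

For ten four-option questions at an 80% cutoff, guessing alone passes well under 0.1% of the time, so the cutoff mostly reflects how hard the questions happen to be rather than any validated competence threshold.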

Also, are your questions good questions? Do they ask people to make decisions set in realistic scenarios? Do they provide plausible answer choices (even for incorrect choices)? Are they focused on high-priority information?

Do the questions and the cutoff point truly differentiate between competence and lack of competence?

Are the questions asked after a substantial delay — so that you know you are measuring the learners’ ability to remember?

Bottom line: Decision-making around learning assessments is more complicated than it looks.

Note: I am available to help organizations sort this out… yet, as one may ascertain from my answer here, there are no clear recipes. It comes down to judgment and goals.

If your goal is learning, you probably should provide feedback and provide a delayed follow-up test. You should also use realistic scenario-based questions, not low-level knowledge questions.

If your goal is assessment, you probably should create a large pool of questions, proctor the testing, and withhold feedback.