Dani Johnson at RedThread Research has just released a wonderful synopsis of Learning Evaluation Models. Comprehensive, thoughtful, well-researched! It also suggests articles for further reading!

This work is part of an ongoing effort to research the learning-evaluation space. With research sponsored by the folks at the adroit learning-evaluation company forMetris, RedThread is looking to uncover new insights about the way we do workplace learning evaluation.

Here’s what Dani says in her summary:

“What we hoped to see in the literature were new ideas – different ways of defining impact for the different conditions we find ourselves in. And while we did see some, the majority of what we read can be described as same. Same trends and themes based on the same models with little variation.”

 

“While we do not disparage any of the great work that has been done in the area of learning measurement and evaluation, many of the models and constructs are over 50 years old, and many of the ideas are equally as old.

On the whole, the literature on learning measurement and evaluation failed to take into account that the world has shifted – from the attitudes of our employees to the tools available to develop them to the opportunities we have to measure. Many articles focused on shoe-horning many of the new challenges L&D functions face into old constructs and models.”

 

“Of the literature we reviewed, several pieces stood out to us. Each of the following authors [detailed in the summary] and their work contained information that we found useful and mind-changing. We learned from their perspectives and encourage you to do the same.”

 

I also encourage you to look at this great review! You can see the summary here.

 

 

As I preach in my workshops on how to create better learner-survey questions (for example, my Gold-Certification workshop on Performance-Focused Smile Sheets), open-ended comment questions are very powerful. Indeed, they are critical in our attempts to truly understand our learners’ perspectives.

Unfortunately, to get the most benefit from comment questions, we have to take time to read every response and reflect on the meaning of all the comments taken together. Someday AI may be able to help us parse comment-question data, but currently the technology is not ready to give us a full understanding. Nor are word clouds or other basic text-processing algorithms useful enough to provide valid insights into our data.

It’s good to take the time to analyze our comment-question data, but if there were a way to quickly get a sense of that data, wouldn’t we consider using it? Of course!

As most of you know, I’ve been focusing a lot of my attention on learning evaluation over the last few years. While I’ve learned a lot, have been lauded by others as an evaluation thought leader, and have even created some useful innovations like LTEM, I’m still learning. Today, by filling out a survey after going to a CVS MinuteClinic to get a vaccine shot, I learned something pretty cool. Take a look.

This is a question from their survey, delivered to me right after I’d answered a comment question. It gives the survey’s analyzers a way to quickly categorize the comments. It DOES NOT REPLACE, or at least should not replace, a deeper look at the comments themselves (my comment, for example, was very specific and, I hope, useful), but it does enable us to ascribe some overall meaning to the results.

Note that this is similar to what I’ve been calling a hybrid question, where we first give people a forced-choice question and then give them a comment question. The forced-choice question drives clarity, whereas the follow-up comment question enables more specificity and richness.

One warning! Adding a forced-choice question after a comment question should be seen as one tool in our toolbox. Let’s not overuse it. More pointedly, let’s use it only when it is particularly appropriate.

If we’ve asked two open-ended comment questions—one asking for positive feedback and one asking for constructive criticism—we might not need a follow-up forced-choice question, because we’ve already prompted respondents to give us the good and the bad.

The bottom line is that we now have two types of hybrid questions to add to our toolbox:

  1. Forced-choice question followed by clarifying comment question.
  2. Comment question followed by categorizing forced-choice question.

Freakin’ Awesome!
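Since we’re on the subject of tools, here’s a minimal sketch (in Python) of why question type 2 pays off at analysis time. It assumes a hypothetical survey export in which each response is a (category, comment) pair; the category labels and comments below are made up for illustration.

```python
from collections import defaultdict

# Hypothetical export from a type-2 hybrid question: each respondent wrote
# a free-text comment, then categorized their own feedback with a
# forced-choice question.
responses = [
    ("Mostly positive", "The nurse explained the vaccine clearly."),
    ("Mostly negative", "The wait time was far too long."),
    ("Mostly positive", "Easy scheduling and friendly staff."),
]

# Quick pass: tally the self-categorizations to get an overall read...
tally = defaultdict(int)
comments_by_category = defaultdict(list)
for category, comment in responses:
    tally[category] += 1
    comments_by_category[category].append(comment)

print(dict(tally))  # {'Mostly positive': 2, 'Mostly negative': 1}

# ...then the deeper look we still owe our learners: read every comment,
# grouped by the respondents' own categorizations.
for category, comments in comments_by_category.items():
    print(f"\n{category}:")
    for comment in comments:
        print(f"  - {comment}")
```

The tally gives us the quick overall read; the grouped read-through preserves the deeper comment analysis that the categorization should never replace.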

 

Donald Taylor, learning-industry visionary, has just come out with his annual Global Sentiment Survey, which asks practitioners in the field which topics are most important right now. What struck me is that the results show data becoming more and more important to people, especially as represented in adaptive learning through personalization, artificial intelligence, and learning analytics.

Learning analytics was the most important category for the opinion leaders represented on social media. This seems right to me as someone who will be focused mostly on learning evaluation in 2019.

As Don said in the GoodPractice podcast with Ross Dickie and Owen Ferguson, “We don’t have to prove. We have to improve through learning analytics.”

What I love about Don Taylor’s work here is that he’s clear as sunshine about the strengths and limitations of this survey and, most importantly, that he takes the time to explain what things mean without over-hyping or sleight of hand. It’s a really simple survey, but the results are fascinating—not necessarily about what we should be doing, but about what people in our field think we should be paying attention to. This kind of information is critical to all of us who might need to persuade our teams and stakeholders about how we can be most effective in our learning interventions.

Other findings:

  • Businessy stuff fell in rated importance, for example, “consulting more deeply in the business,” “showing value,” and “developing the L&D function.”
  • Neuroscience/cognitive science fell in importance (most likely, I think, because some folks have been debunking the neuroscience-and-learning connections). And note: these really should not be one category, especially given that people in the know know that cognitive science (or, more generally, learning research) has proven value. Neuroscience, not so much.
  • Mobile delivery and artificial intelligence were the two biggest gainers in terms of popularity.
  • It’s very intriguing that people active on social media (perhaps thought leaders, perhaps the opinionated mob) have different views than the more general population of workplace learning professionals. There is an interesting analysis in the book and a nice discussion in the podcast mentioned above.

For those interested in Don Taylor’s work, check out his website.

 

I’d like to announce that the first certification workshop for my new Work-Learning Academy is almost ready to launch. The first course? Naturally, it’s a course on how to create effective learner surveys—on Performance-Focused Smile Sheets.

I’m thrilled—ecstatic really—because I’ve wanted to do something like this for years and years, but the elements weren’t quite available. I’ve always wanted to provide an online workshop, but the tools tended to push toward just making presentations. As a learning expert, I knew mere presentations—even if they include discussions and some minimal interactions like polling questions—just weren’t good enough to create real learning benefits. I’ve also always wanted a way to provide a meaningful credential—one that was actually worth something, one that went beyond giving people credit for attendance and completion. Finally, I figured out how to bring this all together.

And note that LTEM (the Learning-Transfer Evaluation Model) helped me clarify my credentialing strategy. You can read about using LTEM for credentialing here, but, in short, our entry-level certification—our Gold Certification—requires learners to pass a rigorous LTEM Tier-5 assessment, demonstrating competence through realistic decision-making. Those interested in the next-level credential—our Master Certification—will have to prove their competence at an LTEM Tier-6 designation. Further certification levels—our Artisan Certification and Research Certification—will require competence demonstrated at Tier-7 and/or Tier-8.

 

For over 20 years, I’ve been plying my research-to-practice craft through Work-Learning Research, Inc. I’m thrilled to announce that I’ll be certifying our first set of Gold Credential professionals within a few months. If you’d like to sign up to be notified when the credential workshop is available—or just learn more—follow this link:

Click here to go to our
Work-Learning Academy information page

I happened to notice these two statements printed in vendor literature at a recent conference. I’ve obscured the vendors’ names just enough so that I’m not obviously picking on them, but they will know who they are.

Statement #1: from a vendor named “C*g*i*o”

  • “We all know that up to 80% of what learners are taught in training will be lost in 30 days if there is no practice or reinforcement.”

Statement #2: from a vendor named “A*ea*”

  • “We have known for more than 150 years that humans forget up to 70% of what they learn within 24 hours!”

These statements are false and misleading. To get a more accurate view of human forgetting, check out this well-researched document.

The Sad Reality of Faux or Misleading Research Citations in Vendor Literature

Buyer beware! Vendors are now sprinkling their verbal and visual communications with research-sounding sound bites, a persuasion technique that plays on our confirmation bias. Because we are human, it is likely to snare us.

We may even buy a product or service that doesn’t work.

My recommendation: Spend $500 on a research-to-practice expert to save yourself tens or hundreds of thousands of dollars, euros, pounds, etc.

 

 

LTEM, the Learning-Transfer Evaluation Model, was designed as an alternative to the Kirkpatrick-Katzell Four-Level Model of learning evaluation. It was designed specifically to better align learning evaluation with the science of human learning. One way in which LTEM is superior to the Four-Level Model is in the way it highlights gradations of learning outcomes. Where the Four-Level Model crammed all “Learning” outcomes into one box (that is, “Level 2”), LTEM separates learning outcomes into Tier-4 Knowledge, Tier-5 Decision-Making Competence, and Tier-6 Task Competence. This simple yet incredibly powerful categorization changes everything in terms of learning evaluation. First and foremost, it pushes us to go beyond inconsequential knowledge checks in our learning evaluations (and in our learning designs as well). To learn more about how LTEM creates additional benefits, you can click on this link, where you can access the model and a 34-page report for free, compliments of me, Will Thalheimer, and Work-Learning Research, Inc.

Using LTEM in Credentialing

LTEM can also be used in credentialing—or, less formally, in specifying the rigor of our learning experiences. So, for example, if our training course only asks questions about terminology or facts in its assessments, then we can say that the course provides a Tier-4 credential. If our course asks learners to successfully complete a series of scenario-based decisions, we can say that the course provides a Tier-5 credential.

Wow! Think of the power of naming the credential level of our learning experiences. Not only will it give us—and our business stakeholders—a clear sense of the strength of our learning initiatives, but it will drive our instructional designs to meet high standards of effectiveness. It will also begin to set the bar higher. Let’s admit a dirty truth. Too many of our training programs are just warmed-over presentations that do very little to help our learners make critical decisions or improve their actual skills. By focusing on credentialing, we focus on effectiveness!
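To make the tier ladder concrete, here is a minimal sketch (in Python) of LTEM’s eight tiers and the credentialing rule just described. The tier names follow the published model (Tier 2, Activity, isn’t discussed in this post); the assessment labels in the mapping are my own hypothetical shorthand.

```python
# The eight LTEM tiers, from weakest to strongest evidence of learning
# (names per the published LTEM model).
LTEM_TIERS = {
    1: "Attendance",
    2: "Activity",
    3: "Learner Perceptions",
    4: "Knowledge",
    5: "Decision-Making Competence",
    6: "Task Competence",
    7: "Transfer",
    8: "Effects of Transfer",
}

# The credentialing rule: the most rigorous assessment a course actually
# uses determines the credential tier it can claim. (Hypothetical labels.)
CREDENTIAL_RULES = {
    "terminology or fact questions": 4,   # knowledge checks only
    "scenario-based decisions": 5,        # realistic decision-making
    "realistic task performance": 6,      # demonstrated skill
}

for assessment, tier in CREDENTIAL_RULES.items():
    print(f"{assessment} -> Tier-{tier} ({LTEM_TIERS[tier]}) credential")
```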

 

Using LTEM Credentialing at Work-Learning Research

For the last several months, I’ve been developing an online course to teach learning professionals how to transform their learner surveys into Performance-Focused Smile Sheets. As part of this development process, I realized that I needed more than one learning experience—at least one to introduce the topic and one to give people extensive practice. I also wanted to provide people with a credential each time they successfully completed a learning experience. Finally, I wanted to make the credential meaningful. As the LTEM model suggests, attendance is NOT a meaningful benchmark. Neither is learner satisfaction. Nor is knowledge regurgitation.

Suddenly, it struck me. LTEM already provided a perfect delineation for meaningful credentialing. Tier-5 Decision-Making Competence would provide credentialing for the first learning experience. For people to earn their credential they would have to perform successfully in responding to realistic decision-making scenarios. Tier-6 Task Competence would provide credentialing for the second, application-focused learning experience. Additional credentials would only be earned if people could show results at Tier-7 and/or Tier-8 (Transfer to Work Performance and associated Transfer Effects).

 

 

The Gold-Certification Workshop is now ready for enrollment. The Master-Certification Workshop is coming soon! You can keep up to date or enroll now by going to the Work-Learning Academy page.

 

How You Can Use LTEM Credentialing to Assess Learning Experiences that Don’t Use LTEM

LTEM is practically brand new, having only been released to the public a year ago. So, while many organizations are gaining a competitive advantage by exploring its use, most of our learning infrastructure has yet to be transformed. In this transitional period, each of us has to use our wisdom to assess what’s already out there. How about you give it a try?

Two-Day Classroom Workshop — What Tier Credential?

What about a two-day workshop that gives people credit for completing the experience? Where would that be on the LTEM framework?

Here’s a graphic to help. Or you can access the full model by clicking here.

The two-day workshop would be credentialed at a Tier-1 level, signifying that the experience credentials learners by measuring their attendance or completion.

Two-Day Classroom Workshop with Posttest — What Tier Credential?

What if the same two-day workshop also added a test focused on whether the learners understood the content, administering that test a week after the program? Note that the LTEM model encourages credentialing at Tiers 4, 5, and 6 to include assessments that show learners are able to remember, not just comprehend in the short term.

If the workshop added this posttest, we’d credential it at Tier-4, Knowledge Retention.

Half-Day Online Program with Performance-Focused Smile Sheet — What Tier Credential?

What if a half-day online program used one of my Performance-Focused Smile Sheets to evaluate success? At what Tier would it be credentialed?

It would be credentialed at Tier-3, or Tier-3A if we wanted to delineate between learner surveys that assess learning effectiveness and those that don’t.

Three-Session Online Program with Traditional Smile Sheet — What Tier Credential?

This format—three 90-minute sessions with a traditional smile sheet—is the most common form of credentialing in the workplace learning industry right now. Go look around at those who are providing credentials. They are providing credentials using relatively short presentations and a smile sheet at the end. If this is what they provide, what credentialing Tier do they deserve? Tier-3, or Tier-3B! That’s right! That’s it. Such credentials tell us only that learners are satisfied with the learning experience. They don’t tell us whether learners can make important decisions or whether they can use new skills.

What is this credential really worth?

You can decide for yourself, but I think it could be worth much more if only those making the money provided credentialing at Tier-5, Tier-6, and beyond.

With LTEM we can begin to demand more!
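For those who like to see the walkthrough’s logic in one place, here is a minimal sketch (in Python) that encodes the examples above as a single rule: a learning experience earns the highest credential tier its assessment evidence supports. The evidence labels are my own hypothetical shorthand, not part of LTEM itself.

```python
def credential_tier(evidence: set) -> str:
    """Return the highest credential tier the assessment evidence supports."""
    if "realistic task performance" in evidence:
        return "Tier-6"
    if "scenario-based decisions" in evidence:
        return "Tier-5"
    if "delayed knowledge test" in evidence:
        return "Tier-4"
    if "performance-focused smile sheet" in evidence:
        return "Tier-3A"
    if "traditional smile sheet" in evidence:
        return "Tier-3B"
    return "Tier-1"  # attendance or completion only

# The walkthrough's examples:
print(credential_tier({"completion credit"}))                            # Tier-1
print(credential_tier({"completion credit", "delayed knowledge test"}))  # Tier-4
print(credential_tier({"performance-focused smile sheet"}))              # Tier-3A
print(credential_tier({"traditional smile sheet"}))                      # Tier-3B
```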

 

Work-Learning Research and Will Thalheimer can Help!

People tell me I need to stop giving stuff away for free, or at least that I ought to be more proactive in seeking customers. So, this is a reminder that I am available to help you improve your learning and learning-evaluation strategies and tactics. Please reach out through my nifty contact form by clicking here.