Tag Archive for: learning measurement

For those of you who don’t know Matt Richter, President of the Thiagi Group, he’s one of the most innovative thinkers when it comes to creating training that both sizzles and supports work performance. Recently, Matt and I began partnering on a new podcast, Truth In Learning, which I’ll have more to say about later once I figure out where the escape hatch is.

NOW, I want to share with you a brilliant new article that Matt surprised me with, on his efforts to brainstorm innovative ways to use LTEM (the Learning-Transfer Evaluation Model).

You should read his article, but just to give you a preview, here is his list of seven uses for LTEM:

  1. Learning Evaluation—The primary intent of the LTEM framework.
  2. Instructional Design—To negotiate desired outcomes with stakeholders.
  3. Training Game Design—To ensure games/activities have an instructional purpose.
  4. Coaching—Helping to build a development plan for those who are coached.
  5. Performance Consulting—To focus on performances that matter along the journey.
  6. Keynoting/Presenting—To ensure a focus on meaningful outcomes, not just infotainment.
  7. Sales/Business Development—To keep sales conversations focused on meaningful outcomes.

We Are All in This Together

One of the great benefits of publishing LTEM is that, since its publication last year, I’m regularly contacted by people whose organizations are finding new and innovative ways to utilize LTEM—and not just for learning evaluation but as a central element of their learning strategy and practice.

I’m especially pleased with those who have taken LTEM really deep, and I’d like to give a shout-out to Elham Arabi, who is doing her doctoral dissertation using LTEM as a spur to support a hospital’s effort to maximize the benefits of their learning interventions. Congrats to her for being accepted as a speaker at the upcoming eLearning Guild Learning Solutions Conference, March 31 to April 2 (2020) in Orlando. The title of her talk is: Using Evaluation Data to Enhance Your Training Programs.

Share Your Examples and Innovations

Please share your innovations and ideas about using LTEM in your workplace, whether on social media or by contacting me at https://www.worklearning.com/contact/. I would really love to hear how it’s going, including any obstacles you’ve faced, your success stories, etc.

And, of course, if you’d like me to help your organization utilize LTEM, or just be the face of LTEM to your organization, please contact me so we can set up a time to talk, and consider my LTEM workshop as a way to introduce it to your team.

 

 

People keep asking me for references to the claim that learner surveys are not correlated—or are virtually uncorrelated—with learning results. In this post, I include them, with commentary.

 

 

Major Meta-Analyses

Here are the major meta-analyses (studies that compile the results of many other scientific studies using statistical means to ensure fair and valid comparisons):

For Workplace Training

Alliger, G. M., Tannenbaum, S. I., Bennett, W., Jr., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.

Hughes, A. M., Gregory, M. E., Joseph, D. L., Sonesh, S. C., Marlow, S. L., Lacerenza, C. N., Benishek, L. E., King, H. B., & Salas, E. (2016). Saving lives: A meta-analysis of team training in healthcare. Journal of Applied Psychology, 101(9), 1266-1304.

Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.

For University Teaching

Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42.

What These Results Say

These four meta-analyses, covering over 200 scientific studies, find that correlations between smile-sheet ratings and learning average about r = .10, which is virtually no correlation at all. Statisticians generally consider correlations below r = .30 to be weak, so a correlation of .10 is very weak indeed.

What These Results Mean

These results suggest that typical learner surveys are virtually uncorrelated with learning results.

From a practical standpoint:

If you get HIGH MARKS on your smile sheets, you are almost equally likely to have (1) an effective course or (2) an ineffective course.

If you get LOW MARKS on your smile sheets, you are almost equally likely to have (1) a poorly-designed course or (2) a well-designed course.
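
To make "almost equally likely" concrete, here is a minimal sketch (my own illustration, not an analysis from the studies above) that simulates smile-sheet and learning scores correlated at roughly r = .10 and then checks how often a course with above-median smile-sheet ratings also shows above-median learning:

    import numpy as np

    # Illustrative assumption: smile-sheet scores and learning results are
    # jointly normal and correlated at about r = .10, the average reported
    # by the meta-analyses above.
    rng = np.random.default_rng(seed=1)
    r = 0.10
    n = 1_000_000

    smile = rng.standard_normal(n)
    learning = r * smile + np.sqrt(1 - r**2) * rng.standard_normal(n)

    high_smile = smile > np.median(smile)
    high_learning = learning > np.median(learning)

    # Share of above-median smile-sheet courses that also show
    # above-median learning; with r = .10 this lands near 0.53.
    print(np.mean(high_learning[high_smile]))

Under these assumptions, a glowing smile sheet raises the odds of above-median learning to only about 53 percent, barely better than a coin flip.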

Caveats

It is very likely that the traditional smile sheets that have been used in these scientific studies, while capturing data on learner satisfaction, have been inadequately designed to capture data on learning effectiveness.

I have developed a new approach to learner surveys to capture data on learning effectiveness: the Performance-Focused Smile Sheet approach, originally conveyed in my 2016 award-winning book. As of yet, no scientific studies have been conducted to correlate these new smile sheets with measures of learning. However, many, many organizations are reporting substantial benefits. Researchers or learning professionals who want my updated list of recommended questions can access them here.

Reflections

  1. Although I have written a book on learner surveys, in the new learning evaluation model, LTEM (Learning-Transfer Evaluation Model), I place these smile sheets at Tier 3, out of eight tiers, less valuable than measures of knowledge, decision-making, task performance, transfer, and transfer effects. Yes, learner surveys are worth doing, if done right, but they should not be the only tool we use when we evaluate learning.
  2. The earlier belief—and one notably advocated by Donald, Jim, and Wendy Kirkpatrick—that there was a causal chain from learner reactions to learning, behavior, and results has been shown to be false.
  3. There are three types of questions we can utilize on our smile sheets: (1) Questions that focus on learner satisfaction and the reputation of the learning, (2) Questions that support learning, and (3) Questions that capture information about learning effectiveness.
  4. It is my belief that we focus too much on learner satisfaction, which has been shown to be uncorrelated with learning results—and we also focus too little on questions that gauge learning effectiveness (the main impetus for the creation of Performance-Focused Smile Sheets).
  5. I do believe that learner satisfaction is important, but it is not the most important thing.

Learning Opportunities regarding Learner Surveys

While I was in London a few months ago, where I spoke about learning evaluation, I was interviewed by LearningNews on that very topic.

Some of what I said:

  • “Most of us have been doing the same damn thing we’ve always done [in learning evaluation]. On the other hand, there is a breaking of the logjam.”
  • “A lot of us are defaulting to happy sheets, and happy sheets that aren’t effective.”
  • “Do we in L&D have the skills to be able to do evaluation in the first place?… My short answer is NO WAY!”
  • “We can’t upskill ourselves fast enough [in terms of learning evaluation].”

It was a fun interview and LearningNews did a nice job in editing it. Special thanks to Rob Clarke for the interview, organizing, and video work (along with his great team)!!

Click here to see the interview.

At a recent industry conference, a speaker, offering their expertise on learning evaluation, said this:

“As a discipline, we must look at the metrics that really matter… not to us but to the business we serve.”

Unfortunately, this is one of the most counterproductive memes in learning evaluation. It is counterproductive because it throws our profession under the bus. In this telling, we have no professional principles, no standards, no foundational ethics. We are servants, cleaning the floors the way we are instructed to clean them, even if we know a better way.

Year after year we hear from so-called industry thought leaders that our primary responsibility is to the organizations that pay us. This is a dangerous half truth. Of course we owe our organizations some fealty and of course we want to keep our jobs, but we also have professional obligations that go beyond this simple “tell-me-what-to-do” calculus.

This monomaniacal focus on measuring learning in terms of business outcomes reminds me of the management meme from the 1980s and 90s that suggested the goal of a business organization is to increase shareholder value. This single-bottom-line focus has come under blistering attack for its tendency to skew business operations toward short-term results while ignoring long-term business results, and for producing outcomes that harm employees, hurt customers, and destroy the environment.

If we give our business stakeholders the metrics they say matter to them, but fail to capture the metrics that matter to our success as learning professionals in creating effective learning, then we not only fail ourselves and our learners but we fail our organization as well.

Evaluation: What For?

To truly understand learning evaluation, we have to ask ourselves why we’re evaluating learning in the first place! We have to work backwards from the answer to this question.

Why does anyone evaluate? We evaluate to help us make better decisions and take better actions. It’s really that simple! So as learning professionals, we need information to help us make our most important decisions. We should evaluate to support these decisions!

What are our most important decisions? Here are a few:

  • Which parts of the content taught, if any, are relevant and helpful in supporting employees in doing their work? Which parts should be modified or discarded?
  • Which aspects of our learning designs are helpful in supporting comprehension, remembering, and motivation to learn? Which aspects should be modified or discarded?
  • Which after-training supports are helpful in enabling learning to be transferred and utilized by employees in their work? Which supports should be kept? Which need to be modified or discarded?

What are our organizational stakeholders’ most important decisions about learning? Here are a few:

  • Are our learning and development efforts creating optimal learning results? What additional support and resources should the organization supply that might improve learning results? What savings can be found in terms of support and resources—and are these savings worth the lost benefits?
  • Is the leadership of the learning and development function producing a cycle of continuous improvement, generating improved learning outcomes or generating learning outcomes optimized given their resource constraints? If not, can they be influenced to be better or should they be replaced?
  • Is the leadership of the learning and development function creating and utilizing evaluation metrics that enable the learning and development team to get valid feedback about the design factors that are most important in creating our learning results? If not, can they be influenced to use better metrics or should they be replaced?

Two Goals for Learning Evaluation

When we think of learning evaluation, we should have two goals. First, we should create learning-evaluation metrics that enable us to make our most important decisions regarding content, design components (at a minimum, those affecting comprehension, remembering, and motivation to apply learning), and after-training support. Second, we should do enough in our learning evaluations to gain sufficient credibility with our business stakeholders to continue our good work. Focusing only on the second of these is a recipe for disaster.

Vanity Metrics

In the business start-up world there is a notion called “vanity metrics”; see, for example, the warnings of Eric Ries, the originator of the lean-startup movement. Vanity metrics are metrics that seem important but are not; they often make us look good even when the underlying data is not really meaningful.

Most calls to provide our business stakeholders with the metrics that matter to them result in beautiful visualizations and data dashboards focused on vanity metrics. Ubiquitous vanity metrics in learning include the number of trainees trained, the cost per training, learners’ estimates of the value of the learning, complicated benefit/cost analyses that utilize phantom measures of benefits, etc. By focusing only or primarily on these metrics, we don’t have data to improve our learning designs, we don’t have data that enables us to create cycles of improvement, and we don’t have data that enables us to hold ourselves accountable.

Are Your Smile Sheets Giving You Good Data?

In honor of April as “Smile-Sheet Awareness Month,” I am releasing a brand new smile-sheet diagnostic.

Available by clicking here:
http://smilesheets.com/smile-sheet-diagnostic-survey/

This diagnostic is based on wisdom from my award-winning book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form, plus the experience I’ve gained helping top companies implement new measurement practices.

The diagnostic is free and asks you 20 questions about your organization’s current practices. It then provides instant feedback.

Here's a great article from NPR, entitled Higher Ed's Moneyball, about higher ed's attempts to provide instructors with real-time data they can use to support their students.

The examples cited seem to mostly target usage data, not more rigorous learning data, but I can't be sure. Certainly, over time, someone will figure out how to capture data that is more meaningful.

Still, even the usage data seems helpful. For example, students who are inactive for a while can get extra attention, etc.

Something to keep an eye on for the future…

 

 

Social media is hot, but it is not clear how well we are measuring it.

A couple of years ago I wrote an article for the eLearning Guild about measuring social media. But it's not clear that we've got this nailed yet.

With this worry in mind, I've created a research survey to begin exploring how social media (of the kind we might use to bolster workplace learning and performance) can best be measured.

Here's the survey link. Please take the survey yourself. You don't have to be an expert to take it.

Here's my thinking so far on this. Please send wisdom if I've missed something.

  1. We can think about measuring social media the same way we measure any learning intervention.
  2. We can also create a list of all the proposed benefits of social media, its proposed costs, and its proposed harms, and then see how people are measuring these now. The survey will help us with this second approach.

Note: Survey results will be made available for free. If you take the survey, you'll get early releases of the survey results and recommendations.

Also, this is not the kind of survey that needs good representative sampling, so feel free to share this far and wide.

Here is the direct link to the survey:   http://tinyurl.com/4tlslol

Here is the direct link to this blog post:   http://tinyurl.com/465ekpa

Today, Roy Pollock (CLO of the Fort Hill Company) and I release our job aid, "Building Measurement Into Your Training-Development Plan."

It's not rocket science, but it is our attempt to provide some guidance for how you might better utilize learning measurement.

Good learning measurement enables us to:

  1. Boost Learning Results
  2. Improve Our Learning Designs
  3. Prove Learning's Benefits

Unfortunately, in general we aren't very good at measuring learning. This is not only an embarrassment, but a big missed opportunity to improve our practices and our profession–and to grab a competitive advantage for our organizations.

Roy and I wanted to develop a job aid that would help (1) remind us to plan for measurement, (2) see where and how measurement should be integrated into our training-development plans, and (3) provide the reasoning behind the key steps.

There are two ways to use the job aid. You can use it "as is" to guide your training development. Or, you can utilize the wisdom from the job aid and add the key measurement steps to your own training-development process.

Roy and I will be teaching our learning-measurement workshop at the upcoming eLearning Guild conference in March. We'd be delighted if you would join us. Click to learn more…

The eLearning Guild is offering a $400 early-bird discount if you register for their March Annual Gathering by December 19th. Check it out.

Note: I'll be presenting a workshop (with Roy Pollock) on Learning Measurement, and speaking several other times, so this conference is well worth your while. AND, by saving $400, you can easily afford our symposium.