
Michael Allen’s Question about the Last Thing in Training


At a recent online discussion of the Serious eLearning Manifesto hosted by the Minnesota Chapter of ISPI, Michael Allen offered a brilliant idea for learning professionals.

Michael’s company, Allen Interactions, talks regularly with prospective clients. It is in this capacity that Michael often asks this question (or one to this effect):

What is the last thing you want your learners to be doing in training before they go back to their work?

Michael knows the answer—he is using Socratic questioning here—and the answer should be obvious to those skilled in developing learning. We want people to be practicing what they’ve learned, and hopefully practicing in as realistic a way as possible. Of course!

Of course, but too often we don’t think like this. We have our instructional objectives and we plow through to cover content, hoping against hope that the knowledge seeds we plant will magically turn into performance on the job—as if knowledge can be harvested without any further nurturance.

We must remember the wisdom behind Michael’s question: it is our job as learning professionals to ensure that our learners are not only gaining knowledge but also getting practice in making decisions and performing realistic tasks.

One way to encourage yourself to engage in these good practices is to use LTEM, a learning-evaluation model designed to support us as learning professionals in measuring what’s truly important in learning. LTEM’s Tiers 5 and 6 encourage us to evaluate learners’ proficiency in making work-relevant decisions (Tier 5) and performing work-relevant tasks (Tier 6).

Whatever method you use to encourage your organization and team to engage in this research-based best practice, remember that we are harming our learners when we just teach content. Without practice, very little learning will transfer to the workplace.

Article on Training and Climate Change


Just today I wrote an article on Training and Climate Change and what, if anything, we workplace learning professionals can do about it.

See and comment on LinkedIn, where I published the article. Click to go there now.

Announcements of Public Events for Will Thalheimer

Here is a list of my public events for the next few months, including conference events, workshops, webinars, and research surveys.

 

OPEN NOW—Research Survey with the eLearning Guild

I’m partnering with the eLearning Guild to conduct a survey on learning-evaluation practices. I would greatly appreciate your time on this—and we estimate it will take you maybe five minutes.

https://www.surveygizmo.com/s3/4568935/Learning-Evaluation-Your-Goals-and-Concerns

 

VIDEO JUST RELEASED—Nigel Paine Interviews Will Thalheimer on LearningNowTV

LearningNowTV is a great resource broadcasting from the UK, and they’ve asked me to join them as a regular guest. I’m delighted.

Our first effort came out pretty well. See if you can spot the minor hiccups, including me never saying hello, goodbye, or thank you. LOL

Good content though! I talk about LTEM, the Kirkpatrick-Katzell model, and more. Check it out:

https://learningnow.tv/watch/programme-46—30-august-2018/will-thalheimer-on-course-evaluation-kirkpatrick-and-the-ltem-model.html

 

ONLY A FEW DAYS LEFT TO REGISTER—Kansas City, ATD Conference, Thursday Sept 27

I’ll be delivering the closing keynote, but there is so much more to learn at this regional gem of a conference!

Title of my Keynote: Training Quiz Show: Secrets from the Learning Research (and the earth-shaking LTEM too!)

Last day to register is just two days away! Register here: https://tdkc.org/event-3031307

 

ATLANTA ISPI CHAPTER MEETING—Atlanta, Georgia, Thursday Night, November 8th

The ISPI Atlanta chapter invited me to join them for two whirlwind sessions: this one on Thursday night and a morning workshop the following day (see next item).

Title: Dr. Thalheimer’s Learning Medicine Show and Research Palooza

Registration Information Coming Soon!

 

ATLANTA ISPI WORKSHOP ON LTEM—Atlanta, Georgia, Friday Morning, November 9th

This will be my first public workshop on LTEM, the Learning-Transfer Evaluation Model. I’ve spoken about LTEM several times over the last year, but this will be the first in-depth workshop. A half day NOT to be missed!

Registration Information Coming Soon!

 

ELEARNING GUILD COMPLIANCE CONFERENCE (ONLINE)—November 14

The eLearning Guild is hosting a fascinating conference on Compliance Training, and I’ll be talking about how to get valid data.

Title: Getting Valid Data on Your Compliance Training (Not as Easy As it Looks)! My session information.

Register Now: https://www.elearningguild.com/content/5358/compliance-training-summit-2018-home

There’s a whole host of great presenters for this two-day summit. Check it out!

 

ISPI BABS CHAPTER (ONLINE WEBINAR)—November 15

The ISPI BABS chapter, which includes the geographically close-but-not-contiguous Bay Area and Boise State, always hosts interesting programs, including something called the Oyster-Barrel.

I’ll be presenting a webinar on LTEM (The Learning-Transfer Evaluation Model), which, unlike oysters, is guaranteed to act as an aphrodisiac for learning professionals.

Title of Webinar: The Learning-Transfer Evaluation Model (LTEM): A Research-Inspired Alternative to the Four Levels.

Here’s a link to register: https://www.ispibabs.org/event-2934366

 

BERLIN GERMANY OEB CONFERENCE WORKSHOP—December 5 Afternoon

Half-day workshop title: Getting Radically Improved Data from Learner Feedback

Sign up for Workshops at: https://oeb.global/programme#workshops. Mine is Workshop A9.

 

BERLIN GERMANY OEB CONFERENCE—December 6 Spotlight Session

A quick 30-minute version of The Learning Research Quiz Show.

 

 

 

Research Findings: Current Practices in Gathering Learner Feedback


Respondents

Over 200 learning professionals responded to Work-Learning Research’s 2017-2018 survey on current practices in gathering learner feedback, and today I will reveal the results. The survey ran from November 29th, 2017 to September 16th, 2018. The sample of respondents was drawn from Work-Learning Research’s mailing list and through extensive calls for participation on a variety of social media. Because of this sampling methodology, the survey results are likely skewed toward professionals who care about and/or pay attention to research-based practice recommendations more than the workplace learning field as a whole. Respondents are also likely more interested and experienced in learning evaluation.

Feel free to share this link with others.

Goal of the Research

The goal of the research was to determine how people are evaluating their learning interventions through the practice of asking learners for their perspectives.

Questions the Research Hoped to Answer

  1. Are smile sheets (learner-feedback questions) still the most common method of doing learning evaluation?
  2. How does their use compare with other methods? Are other methods growing in prominence/use?
  3. How satisfied are learning professionals with their organizations’ learner-feedback methods?
  4. To what extent are organizations looking for alternatives to their current learner-feedback methods?
  5. What kinds of questions are used on smile sheets? Has Thalheimer’s new approach, performance-focused questioning, gained any traction?
  6. What do learning professionals think their current smile sheets are good at measuring (Satisfaction, Reputation, Effectiveness, Nothing)?
  7. What tools are organizations using to gather learner feedback?
  8. How useful are current learner-feedback questions in helping guide improvements in learning design and delivery?
  9. How widely are the target metrics of LTEM (The Learning-Transfer Evaluation Model) currently being measured?

A summary of the findings indexed to these questions can be found at the end of this post.

Situating the Practice of Gathering Learner Feedback

When we gather feedback from learners, we are using a Tier 3 methodology on the LTEM (Learning-Transfer Evaluation Model) or Level 1 on the Kirkpatrick-Katzell Four-Level Model of Training Evaluation.

Demographic Background of Respondents

Respondents came from a wide range of organizations: small, midsize, and large.

Respondents play a wide range of roles in the learning field.

Most respondents live in the United States and Canada, but there was also significant representation from other predominantly English-speaking countries.

Learner-Feedback Findings

About 67% of respondents report that learners are asked about their perceptions on more than half of their organization’s learning programs, including elearning. Only about 22% report that they survey learners on less than half of their learning programs. This finding is consistent with past findings—surveying learners is the most common form of learning evaluation and is widely practiced.

The two most common question types in use are Likert-like questions and numeric-scale questions. I have argued against their use* and I am pleased that Performance-Focused Smile Sheet questions have been adopted by so many so quickly. Of course, this sample of respondents comprises folks on my mailing list, so this result surely doesn’t represent current practice in the field as a whole. Not yet! LOL.

*Likert-like questions and numeric-scale questions are problematic for several reasons. First, because they offer fuzzy response choices, learners have a difficult time deciding between them, and this likely makes their responses less precise. Second, such fuzziness may inflate bias, as there are no concrete anchors to minimize the biasing effects of the question stems. Third, Likert-like options and numeric scales likely dampen learner responding, because learners are habituated to such scales and because they may be skeptical that data from such scales will actually be useful. Finally, Likert-like options and numeric scales produce indistinct results—averages all in the same range. Such results are difficult to assess, failing to support decision-making—the whole purpose of evaluation in the first place. To learn more, check out Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form (book website here).

The most common tools used to gather feedback from learners were paper surveys and SurveyMonkey. Questions delivered from within an LMS were the next most common. High-end evaluation systems like Metrics That Matter were not widely used among our respondents.

Our respondents did not rate their learner-feedback efforts as very effective. Their learner surveys were seen as most effective in gauging learner satisfaction. Only about 33% of respondents thought their learner surveys gave them insights into the effectiveness of the learning.

Only about 15% of respondents found their data very useful in providing them feedback about how to improve their learning interventions.

Respondents report that their organizations are somewhat open to alternatives to their current learner-feedback approaches, but overall they are not actively looking for alternatives.

Most respondents report that their organizations are at least “modestly happy” with their learner-feedback assessments. Yet only 22% reported being “generally happy” with them. Combining this finding with the one above showing that lots of organizations are open to alternatives, it seems that organizational satisfaction with current learner-feedback approaches is soft.

We asked respondents about their organizations’ attempts to measure the following:

  • Learner Attendance
  • Whether Learner is Paying Attention
  • Learner Perceptions of the Learning (e.g., Smile Sheets, Learner Feedback)
  • Amount or Quality of Learner Participation
  • Learner Knowledge of the Content
  • Learner Ability to Make Realistic Decisions
  • Learner Ability to Complete Realistic Tasks
  • Learner Performance on the Job (or in another future performance situation)
  • Impact of Learning on the Learner
  • Impact of Learning on the Organization
  • Impact of Learning on Coworkers, Family, Friends of the Learner
  • Impact of Learning on the Community or Society
  • Impact of Learning on the Environment

These evaluation targets are encouraged in LTEM (The Learning-Transfer Evaluation Model).

Results are difficult to show—because our question was very complicated (admittedly too complicated)—but I will summarize the findings below.

As you can see, learner attendance and learner perceptions (smile sheets) were the most commonly measured factors, with learner knowledge a distant third. The least common measures involved the impact of the learning on the environment, community/society, and the learner’s coworkers/family/friends.

The flip side—methods rarely utilized in respondents’ organizations—shows pretty much the same thing.

Note that the question above, because it was too complicated, probably produced some spurious results, even if the trends at the extremes are likely indicative of the whole range. In other words, it’s likely that attendance and smile sheets are the most utilized measures and that impact on the environment, community/society, and learners’ coworkers/family/friends are the least utilized.

Questions Answered Based on Our Sample

  1. Are smile sheets (learner-feedback questions) still the most common method of doing learning evaluation?

    Yes! Smile sheets are clearly the most popular evaluation method, along with measuring attendance (if we include that as a metric).

  2. How does their use compare with other methods? Are other methods growing in prominence/use?

    Except for Attendance, nothing else comes close. The next most common method is measuring knowledge. Remarkably, given the known importance of decision-making (Tier 5 in LTEM) and task competence (Tier 6 in LTEM), these are used in evaluation at a relatively low level. Similar low levels are found in measuring work performance (Tier 7 in LTEM) and organizational results (part of Tier 8 in LTEM). We’ve known about these relatively low levels from many previous research surveys.

    Hardly any measurement is being done on the impact of learning on the learner or his/her coworkers/family/friends, on the impact of the learning on the community/society/environment, or on learner participation/attention.

  3. How satisfied are learning professionals with their organizations’ learner-feedback methods?

    Learning professionals are moderately satisfied.

  4. To what extent are organizations looking for alternatives to their current learner-feedback methods?

    Organizations are open to alternatives, with some actively seeking alternatives and some not looking.

  5. What kinds of questions are used on smile sheets? Has Thalheimer’s new approach, performance-focused questioning, gained any traction?

    Likert-like options and numeric scales are the most commonly used. Thalheimer’s performance-focused smile-sheet method has gained traction in this sample of respondents—people likely more in the know about Thalheimer’s approach than the industry at large.

  6. What do learning professionals think their current smile sheets are good at measuring (Satisfaction, Reputation, Effectiveness, Nothing)?

    Learning professionals think their current smile sheets are fairly good at measuring the satisfaction of learners. A full one-third of respondents feel that their current approaches are not valid enough to provide them with meaningful insights about the learning interventions.

  7. What tools are organizations using to gather learner feedback?

    The two most common methods for collecting learner feedback are paper surveys and SurveyMonkey. Questions from LMSs are the next most widely used. Sophisticated evaluation tools are not much in use in our respondent sample.

  8. How useful are current learner-feedback questions in helping guide improvements in learning design and delivery?

    This may be the most important question we can ask, given that evaluation is supposed to aid us in maintaining our successes and improving on our deficiencies. Only 15% of respondents found learner feedback “very helpful” in guiding improvements to their learning. Many found the feedback “somewhat helpful,” but a full one-third found the feedback “not very useful” in enabling them to improve learning.

  9. How widely are the target metrics of LTEM (The Learning-Transfer Evaluation Model) currently being measured?

    As described in Question 2 above, many of the targets of LTEM were not being adequately measured during the survey period (November 2017 to September 2018, immediately before and after LTEM was introduced). This indicates that LTEM is poised to help organizations uncover evaluation targets that can be helpful in setting goals for learning improvements.

Lessons to be Drawn

The results of this survey reinforce what we’ve known for years. In the workplace learning industry, we default to learner-feedback questions (smile sheets) as our most common learning-evaluation method. This is a big freakin’ problem for two reasons. First, our learner-feedback methods are inadequate. We often use poor survey methodologies and ones particularly unsuited to learner feedback, including the use of fuzzy Likert-like options and numeric scales. Second, even if we used the most advanced learner-feedback methods, we still would not be doing enough to gain insights into the strengths and weaknesses of our learning interventions.

Evaluation is meant to provide us with data we can use to make our most critical decisions. We need to know, for example, whether our learning designs are supporting learner comprehension, learner motivation to apply what they’ve learned, learner ability to remember what they’ve learned, and the supports available to help learners transfer their learning to their work. We typically don’t know these things. As a result, we don’t make the design decisions we ought to make. We don’t make improvements in the learning methods we use or in the way we deploy learning. The research captured here should be seen as a wake-up call.

The good news from this research is that learning professionals are often aware of and sensitized to the deficiencies of their learning-evaluation methods. This seems like a good omen. When improved methods are introduced, these professionals will likely encourage their use.

LTEM, the new learning-evaluation model (which I developed with the help of some of the smartest folks in the workplace learning field), targets some of the most critical learning metrics—metrics that have too often been ignored. It is too new to be certain of its impact, but it seems like a promising tool.

Why I have turned my Attention to Evaluation (and why you should too!)

For 20 years, I’ve focused on compiling scientific research on learning in the belief that research-based information—when combined with a deep knowledge of practice—can drastically improve learning results. I still believe that wholeheartedly! What I’ve also come to understand is that we as learning professionals must get valid feedback on our everyday efforts. It’s simply our responsibility to do so.

We have to create learning interventions based on the best blend of practical wisdom and research-based guidance. We have to measure key indices that tell us how our learning interventions are doing. We have to find out what their strengths are and what their weaknesses are. Then we have to analyze and assess and make decisions about what to keep and what to improve. Then we have to make improvements and again measure our results and continue the cycle—working always toward continuous improvement.

Here’s a quick-and-dirty outline of the recommended cycle for using learning to improve work performance. “Quick-and-dirty” means I might be missing something!

  1. Learn about and/or work to uncover performance-improvement needs.
  2. If you determine that learning can help, continue. Otherwise, build or suggest alternative methods to get to improved work performance.
  3. Deeply understand the work-performance context.
  4. Sketch out a very rough draft for your learning intervention.
  5. Specify your evaluation goals—the metrics you will use to measure your intervention’s strengths and weaknesses.
  6. Sketch out a rough draft for your learning intervention.
  7. Specify your learning objectives (notice that evaluation goals come first!).
  8. Review the learning research and consider your practical constraints (two separate efforts subsequently brought together).
  9. Sketch out a reasonably good draft for your learning intervention.
  10. Build your learning intervention and your learning evaluation instruments (Iteratively testing and improving).
  11. Deploy your “ready-to-go” learning intervention.
  12. Measure your results using the previously determined evaluation instruments, which were based on your previously determined evaluation objectives.
  13. Analyze your results.
  14. Determine what to keep and what to improve.
  15. Make improvements.
  16. Repeat (maybe not every step, but at least from Step 6 onward).

And here is a shorter version:

  1. Know the learning research.
  2. Understand your project needs.
  3. Outline your evaluation objectives—the metrics you will use.
  4. Design your learning.
  5. Deploy your learning and your measurement.
  6. Analyze your results.
  7. Make improvements.
  8. Repeat.

More Later Maybe

The results shared here come from all respondents combined. If I get the time, I’d like to look at subsets of respondents. For example, I’d like to look at how learning executives and managers might differ from learning practitioners. Let me know how interested you would be in these results.

Also, I will be conducting other surveys on learning-evaluation practices, so stay tuned. We have long been frustrated with our evaluation practices, and more work needs to be done to understand the forces that keep us from doing what we want to do. We could also use more and better learning-evaluation tools, because the truth is that learning evaluation is still a nascent field.

Finally, because I learn a ton by working with clients who challenge themselves to do more effective interventions, please get in touch with me if you’d like a partner in thinking things through and trying new methods to build more effective evaluation practices. Also, please let me know how you’ve used LTEM (The Learning-Transfer Evaluation Model).


Appreciations

As always, I am grateful to all the people I learn from, including clients, researchers, thought leaders, conference attendees, and more… Thanks also to all who acknowledge and share my work! It means a lot!

Kansas City, Kansas City Here I Come! ATD KC Chapter’s Annual Conference.

Famous Song Lyrics:

Kansas City, Kansas City Here I Come!
They have a crazy way of learning there, you know you gotta get you some!

I’ll be keynoting the ATD Kansas City Chapter’s Annual Conference in a few weeks. If you’re in the area, or want to visit, please come join me and the local Kansas City area learning professionals.



Click here to sign up for the conference…

 

Thursday, September 27, 2018
  • 8:30 AM – 4:30 PM
  • Johnson County Community College
  • 12345 College Blvd, Overland Park, KS 66210

 

Reflections for Labor Day, Inspired by Long-Time Efforts Channeling Brilliant Researchers

My research-and-consulting practice, Work-Learning Research, turned 20 years old last Saturday. This has given me pause to reflect on where I’ve been and how learning research has evolved over the past two decades.

Today, as I’m preparing a conference proposal for next year’s ISPI conference, I found an early proposal I put together for the Great Valley chapter of ISPI to speak at one of their monthly meetings back in 2002. I don’t remember whether they actually accepted my proposal, but here is an excerpt:

 

 

Interesting that even way back then, I had found and compiled research on retrieval practice, spacing, feedback, etc. from the scientific journals and the exhaustive labor of hundreds of academic researchers. I am still talking about these foundational learning principles even today—because they are fundamental and because research and practice continue to demonstrate their power. You can look at recent books and websites that are now celebrating these foundational learning factors (Make it Stick, Design for How People Learn, The Ingredients for Great Teaching, Learning Scientists website, etc.).

Feeling blessed today, as we here in the United States move into a weekend where we honor our workers, that I have been able to use my labor to advance these proven principles, uncovered first by brilliant academic researchers such as Bjork, Bahrick, Mayer, Ebbinghaus, Crowder, Sweller, van Merriënboer, Rothkopf, Runquist, Izawa, Smith, Roediger, Melton, Hintzman, Glenberg, Dempster, Estes, Eich, Ericsson, Davies, Garner, Chi, Godden, Baddeley, Hall, Herz, Karpicke, Butler, Kirschner, Clark, Kulhavy, Moreno, Pashler, Cepeda, and many others.

From these early beginnings, I created a listing of twelve foundational learning factors—factors that I have argued should be our first priority in creating great learning—reviewed here in this document.

Happy Labor Day everyone and special thanks to the researchers who continue to make my work possible—and enable learning professionals of all stripes to build increasingly effective learning!

If you’d like to leave a remembrance in regard to Work-Learning Research’s 20th anniversary, or just read my personal reflections about it, you can do that here.

 

Blogging Since October 2005

This is a personal reminiscence; there’s no valuable content here.

I’ve been blogging since October 10, 2005, when I started using Typepad, an early blogging platform. Today, August 8, 2018, I stopped my Typepad account. Funny, they kept track of every payment I made; my first was on November 25, 2005, the day after the Thanksgiving holiday here in the United States. I wonder if they had a Black Friday sale. More likely I had extra time to upgrade my blog.

I did a lot of blogging and built a lot of websites over the years. These were my Typepad websites:

  • Work-Learning.com
  • WillAtWorkLearning.com (my blog)
  • SubscriptionLearning.com
  • LearningAudit.com and LearningAudit.net
  • Willsbook.net
  • AudienceResponseLearning.com
  • And two or three others (more personal projects)

I’m not done, but I outgrew the Typepad infrastructure. Now I’m a proud WordPress user. And I still maintain a few websites.

I want to thank the folks at Typepad for many happy years! I want to thank my readers too!

Later this month I’ll be celebrating Work-Learning Research’s 20th anniversary! I’ll have some more reminiscing to do and more thanks to give!

For today, I’m enjoying turning the page…

Here’s a picture from the Wayback Machine, which captured exactly one day of my blog in 2005, November 5th:

 

One more thing. The very first sentence I ever blogged was this one:

What is the median age when children are potty trained?

You can read that first post here (all my posts have been moved to this website).

And maybe that first sentence was my destiny… I use questions a lot and I’m always trying to get the crap flushed down the toilet!

BIG SMILE

 

 

Who Will Rule Our Conferences? Truth or Bad-Faith Vendors?


You won’t believe what a vendor said about a speaker at a conference—when that speaker spoke the truth.

 

Conferences are big business in the workplace learning field.

Conferences make organizers a ton of money. That’s great, because pulling off a good conference is not as easy as it looks. In addition to finding a venue and attracting people to come to your event, you also have to find speakers. Some speakers are known quantities, but others are unknown.

In the learning field, where we are inundated with fads, myths, and misconceptions, finding speakers who will convey the most helpful messages, and avoid harmful ones, is particularly difficult. Ideally, as attendees, we’d like to hear truth from our speakers rather than fluff and falsehoods.

On the other hand, vendors pay big money to exhibit their products and services at a conference. Their goal is connecting with attendees who are buyers or who can influence buyers. Even conferences that don’t have exhibit halls usually get money from vendors in one way or another.

So, conference owners have two groups of customers to keep happy: attendees and vendors. In an ideal world, both groups would want the most helpful messages to be conveyed. Truth would be a common goal. So for example, let’s say new research is done that shows that freep learning is better than traditional elearning. A speaker at a conference shares the news that freep learning is great. Vendors in the audience hear the news. What will they do?

  • Vendor A hires a handsome and brilliant research practitioner to verify the power of freep learning with the idea of moving forward quickly and providing this powerful new tool to their customers.
  • Vendor B jumps right in and starts building freep learning to ensure their customers get the benefits of this powerful new learning method.
  • Vendor C pulls the conference organizers aside and tells them, “If you ever use that speaker again, we will not be back; you will not get our money any more.”

Impossible, you say!

It would never happen, you think!

You’re right. Not enough vendors are hiring fadingly-good-looking, brilliant research-to-practice experts!

Here’s a true story from a conference that took place within the last year or so.

Clark Quinn spoke about learning myths and misconceptions during his session, describing the findings from his wonderful book. Later, when he read his conference evaluations, he found the following comment among the more admiring testimonials:

“Not cool to debunk some tools that exhibitors pay a lot of money to sell at [this conference] only to hear from a presenter at the conference that in his opinion should be debunked. Why would I want to be an exhibitor at a conference that debunks my products? I will not exhibit again if this speaker speaks at [conference name]” (emphasis added).

This story was recounted by Clark and captured by Jane Bozarth in an article on the myth of learning styles she wrote as the head of research for the eLearning Guild. Note that the conference in question was NOT an eLearning Guild conference.

What can we do?

Corruption is everywhere. Buyer beware! As adults, we know this! We know politicians lie (some more than others!!). We know that we have to take steps not to be ripped off. We get three estimates when we need a new roof. We ask for personal references. We look at the video replay. We read TripAdvisor reviews. We look for iron-clad guarantees that we can return products we purchased.

We don’t get flustered or worried; we take precautions. In the learning field, you can do the following:

  • Look for conference organizers who regularly include research-based sessions (scientific research NOT opinion research).
  • Look for the conferences that host the great research-to-practice gurus. People like Patti Shank, Julie Dirksen, Clark Quinn, Mirjam Neelen, Ruth Clark, Karl Kapp, Jane Bozarth, Dick Clark, Paul Kirschner, and others.
  • Look for conferences that do NOT have sessions—or have fewer sessions—that propagate common myths and misinformation (learning styles, the learning pyramid, MBTI, DISC, millennials learn differently, people only use 10% of their brains, only 10% of learning transfers, neuroscience as a panacea, people have the attention span of a goldfish, etc.).
  • If you want to look into Will’s Forbidden Future, you might look for the following:
    • conferences and/or trade organizations that have hired a content trustee, someone with a research background to promote valid information and cull bad information.
    • conferences that point speakers to a list of learning myths to avoid.
    • conferences that evaluate sessions based on the quality of the content.

Being exposed to false information isn’t just bad for us as professionals. It’s also bad for our organizations. Think of all the wasted effort—the toil, the time, the money—that was flushed down the toilet trying to redesign all our learning to meet the so-called learning-styles approach. Egads! If you need to persuade your management about the danger of learning myths, you might try this.

In a previous blog post, I talked about what we can do as attendees of conferences to avoid learning bad information. That’s good reading as well. Check it out here.

Who Will Rule Our Conferences? Truth or Bad-Faith Vendors?

That’s a damn good question!

 

 

Brinkerhoff Case Method — A Better Name for a Great Learning-Evaluation Innovation


Updated July 3rd, 2018—a week after the original post. See end of post for the update, featuring Rob Brinkerhoff’s response.

Rob Brinkerhoff’s “Success Case Method” needs a subtle name change. I think a more accurate name would be the “Brinkerhoff Case Method.”

I’m one of Rob’s biggest fans, having selected him in 2008 as the Neon Elephant Award Winner for his evaluation work.

Thirty-five years ago, in 1983, Rob published an article where he introduced the “Success Case Method.” Here is a picture of the first page of that article:

In that article, the Success-Case Method was introduced as a way to find the value of training when it works. Rob wrote, “The success-case method does not purport to produce a balanced assessment of the total results of training. It does, however, attempt to answer the question: When training works, how well does it work?” (page 58, which is visible above).

The Success-Case Method didn’t stand still. It evolved and improved as Rob refined it based on his research and his work with clients. In his landmark 2006 book detailing the methodology, Telling Training’s Story: Evaluation Made Simple, Credible, and Effective, Rob describes how to first survey learners and then select some of them for interviews based on their level of success in applying the training. “Once the sorting is complete, the next step is to select the interviewees from among the high and low success candidates, and perhaps from the middle categories.” (page 102).

To call this the success-case method seems more aligned with the original naming than with the actual recommended practice. For that reason, I recommend that we simply call it the Brinkerhoff Case Method. This gives Rob the credit he deserves, and it more accurately reflects the rigor and balance of the method itself.

As soon as I posted the original post, I reached out to Rob Brinkerhoff to let him know. After some reflection, Rob wrote this and asked me to post it:

“Thank you for raising the issue of the currency of the name Success Case Method (SCM). It is kind of you to also think about identifying it more closely with my name. Your thoughts are not unlike others and on occasion even myself. 

It is true the SCM collects data from extreme portions of the respondent distribution including likely successes, non-successes, and ‘middling’ users of training. Digging into these different groups yields rich and useful information. 

Interestingly the original name I gave to the method some 40 years ago when I first started forging it was the “Pioneer” method since when we studied the impact of a new technology or innovation we felt we learned the most from the early adopters – those out ahead of the pack that tried out new things and blazed a trail for others to follow. I refined that name to a more familiar term but the concept and goal remained identical: accelerate the pace of change and learning by studying and documenting the work of those who are using it the most and the best. Their experience is where the gold is buried. 

Given that, I choose to stick with the “success” name. It expresses our overall intent: to nurture and learn from and drive more success. In a nutshell, this name expresses best not how we do it, but why we do it. 

Thanks again for your thoughtful reflections. We’re on the same page.”

Rob’s response is thoughtful, as usual. Yet my feelings on this remain steady. As I’ve written in my report on the new Learning-Transfer Evaluation Model (LTEM), our models should nudge appropriate actions. The same is true for the names we give things. Mining for success stories is good, but it has to be balanced. After all, if evaluation doesn’t look for the full truth—without putting a thumb on the scale—then we are not evaluating; we are doing something else.

I know Rob’s work. I know that he is not advocating for, nor does he engage in, unbalanced evaluations. I do fear that the name Success Case Method may give lesser practitioners permission, or unconsciously nudge them, to find more success and less failure than the facts warrant.

Of course, the term “Success Case Method” has one brilliant advantage. Where people are hesitant to evaluate for fear of uncovering unpleasant results, the name “Success Case Method” may lessen the worry of moving forward and engaging in evaluation—and so it may actually enable the balanced evaluation that is necessary to uncover the truth of learning’s level of success.

Whatever we call it, the Success Case Method or the Brinkerhoff Case Method—and this is the most important point—it is one of the best learning-evaluation innovations in the past half century.

I also agree that since Rob is the creator, his voice should have the most influence in terms of what to call his invention.

I will end with one of my all-time favorite quotations from the workplace learning field, from Tim Mooney and Robert Brinkerhoff’s excellent book, Courageous Training:

“The goal of training evaluation is not to prove the value of training; the goal of evaluation is to improve the value of training.” (pp. 94-95)

On this we should all agree!

Triggered Action Planning Confirmed with Scientific Research, Producing Huge Benefits

Back in 2008, I began discussing the scientific research on “implementation intentions.” I did this first at an eLearning Guild conference in March of 2008. I also spoke about it that year in a talk at Salem State University, in a Chicago workshop entitled Creating and Measuring Learning Transfer, and in one of my Brown Bag Lunch sessions delivered online.

In 2014, I wrote about implementation intentions specifically as a way to increase after-training follow-through. Thinking the term “Implementation Intentions” was too opaque and too general, I coined the term “Triggered Action Planning,” and argued that goal-setting at the end of training—what was often called action planning—would not be as effective as triggered action planning. Indeed, in recounting the scientific research on implementation intentions, I often talked about how researchers were finding that setting situation-action triggers could create results that were twice as good as goal-setting alone. Doubling the benefits of goal setting! These kinds of results are huge!

I just came across a scientific study that supports the benefits of triggered action planning.

 

Shlomit Friedman and Simcha Ronen conducted two experiments and found similar results in each. I’m going to focus on their second experiment because it involved a real training class with real employees. The class taught retail sales managers how to improve their interactions with customers. All the participants got the exact same training and were then randomly assigned to one of two experimental groups:

  • Triggered Action Planning—Participants were asked to visualize situations with customers and how they would respond to seven typical customer objections.
  • Goal-Reminding Action Planning—Participants were asked to write down the goals of the training program and the aspects of the training program that they felt were most important.

Four weeks after the training, secret shoppers interacted with the supervisors using the key phrases and rated each supervisor on dichotomously-anchored rating scales from 1 to 10, with ten being best. The secret shoppers were blind to condition—that is, they did not know which supervisors had received triggered action planning and which had received the goal-reminding instructions. The findings showed that triggered action planning produced results 76% better than those in the goal-reminding condition, almost doubling them.
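To make the size of that difference concrete, here is the simple arithmetic behind the “almost doubling” reading, using a hypothetical control-group average (the symbols and the example value of 4.0 are illustrative only, not figures from the study):

\[
m_{\text{triggered}} = (1 + 0.76)\, m_{\text{goal}} = 1.76\, m_{\text{goal}},
\qquad \text{e.g., } m_{\text{goal}} = 4.0 \;\Rightarrow\; m_{\text{triggered}} \approx 7.0 \text{ on the 1-to-10 scale.}
\]

Since 1.76 is close to 2, a 76% improvement in the ratings is nearly, though not quite, a doubling.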

It should be pointed out that this experiment would have been better designed if the control group had selected its own goals. There may be some benefit to actual goal-setting compared with simply being reminded of the goals of the course. The experiment had its strengths too, most notably (1) the use of observers to record real-world performance four weeks after the training, and (2) the fact that all the supervisors had gone through the exact same training and were randomly assigned to either triggered action planning or the goal-reminding condition.

Triggered Action Planning

Triggered Action Planning has great potential to radically improve the likelihood that your learners will actually use what you’ve taught them. The reason it works so well is that it is based on a fundamental characteristic of human cognition: we are triggered to think and act based on cues in our environment. As learning professionals, we should do whatever we can to:

  • Figure out what cues our learners will face in their work situations.
  • Teach them what to do when they encounter these cues.
  • Give them a rich array of spaced, repeated practice in handling these situations.

To learn more about how to implement triggered action planning, see my original blog post.

Research Cited

Friedman, S., & Ronen, S. (2015). The effect of implementation intentions on transfer of training. European Journal of Social Psychology, 45(4), 409-416.

This blog post took three hours to write.