Research Benchmarking

Research Benchmarking is the process by which your learning interventions are benchmarked against research-based best practices.

While random-assignment between-group research is likely to be too expensive and time-consuming for most of us, and benchmarking our work against other industry players is likely to push us toward mediocrity, Research Benchmarking offers a potent alternative.

Learning programs are examined to determine how well they (a) create understanding, (b) support long-term remembering (minimizing forgetting), and (c) motivate on-the-job performance. Each program is research-benchmarked against the 12 most decisive factors in learning design.

If you’d like to discuss research benchmarking further, contact me, Dr. Will Thalheimer, at 1-888-579-9814 or email me.

 

Learning professionals (like me) can often gain insights about our industry from people in the field who have different vantage points than our own. I recently talked with Eric Shepherd, CEO of Questionmark, to get a sense of our industry and how it has been affected by the bad economy. Eric has been a good friend and long-time supporter of my research over the years and I’ve come to value his counsel.

Questionmark is the leading provider of assessment software according to a recent eLearning Guild study. I thought that, from his perch overseeing all things assessment, Eric might be able to give us some unique insight into the learning-and-performance field in general.

Check out my interview with him at the recent Guild conference. I divided it into two parts to make viewing easier.

Part 1: What trends do you see that we may be missing? 

Part 2: How is the bad economy affecting the learning assessment marketplace? 

Last year I wrote at length about my efforts to improve my own smile sheets. It turns out that this is an evolving effort as I continue to learn from my learners and my experience.

Check out my new 2009 version.

You may remember that one of the major improvements in my smile sheet was to ask learners about the value and newness of EACH CONCEPT TAUGHT (or at least each MAJOR concept). This is beneficial because people respond more accurately to specifics than to generalities; they respond better to concrete learning points than to the vague semblance of a full learning experience.

What I forgot in my previous version was the importance of getting specific feedback on how well I taught each concept. Doh!

My latest version adds a column for how well each concept is taught. There is absolutely no more room to add any columns (I didn't think I could fit this latest one in), so I suspect any further improvements will bring diminishing returns.

Check it out and let me know what you think.

Update May 2014

Sarah Boehle wrote an article that included Neil Rackham's famous story on the dangers of measuring training only with smile sheets. The story used to be available from Training Magazine directly, but after some earlier disruptions and recoveries at Training Magazine, their digital archive was reconstituted and currently only goes back to 2007.

Fortunately, you can read the article here.

FREE Brown Bag Learning Webinoshes

My Brown Bag Learning webinoshes are short, intimate webinars covering one essential topic in human learning and performance. I add questions, learning myths, and question-and-answer sessions (where you can ask me anything) to the mix to keep things interesting. These Brown Bag Learning experiences are provided using a “Subscription Learning” methodology, so that themes will be repeated over time for deeper, more impactful learning.

Upcoming Schedule:

Friday, November 7th, Noon U.S. East Coast Time
Can We Improve Our Smile Sheets?
Link to Register: https://www1.gotomeeting.com/register/345686876

Friday, November 21st, Noon U.S. East Coast Time
Does Context Matter?
Link to Register: https://www1.gotomeeting.com/register/796752726

New: Now available through both the phone and VOIP, so folks from around the world can attend.

I’ve been busy again thinking about the nexus between LEARNING and LEARNING MEASUREMENT.

You can peruse some of my previous thoughts on learning measurement by clicking here.

Here is a brand new article that I wrote for the eLearning Guild on how to evaluate Learning 2.0 stuff. Note: Learning 2.0 is defined (by the eLearning Guild) as: The idea of learning through digital connections and peer collaboration, enhanced by technologies driving Web 2.0. Users/Learners are empowered to search, create, and collaborate, in order to fulfill intrinsic needs to learn new information.

Evaluating Learning 2.0 differs from evaluating traditional Learning 1.0 training for many reasons, one of which is that Learning 2.0 enables (encourages) learners to create their own content.

Steve Wexler, Director of Research and Emerging Technologies at the eLearning Guild, and I are leading a Webinar on Thursday September 4th on the current state of eLearning Measurement. We’ve got some new data that we’re hot to share.

Finally, Roy Pollock, one of the authors of the classic book, Six Disciplines of Breakthrough Learning, and I are leading a one-day symposium on measuring learning at the eLearning Guild’s DevLearn 2008 conference in November. It’s a great chance to go to one of the best eLearning conferences around while working with Roy and me in a fairly intimate workshop, wrangling with the newest thinking in how to measure learning. Choose Symposium S-4. Note that it may not show Roy’s information there yet–the Guild is still working on the webpage–but let me assure you that Roy and I are equal partners in this one.

Will’s 2016 Update: My latest thinking on smile sheets can be found in my book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form. See www.SmileSheets.com.

Some of what I wrote here, I still believe. Some I don’t. I’m keeping this up for historical purposes only.

 

Original from 2008

Smile sheets (the feedback forms we give learners after learning events) are an almost inevitable practice for training programs throughout the workplace learning industry. Residing at Donald Kirkpatrick’s 1st level—the Reaction level—smile sheets offer some benefits and some difficulties.

On the plus side, smile sheets (a) show the learners that we respect their thoughts and concerns, (b) provide us with customer satisfaction ratings (with the learners as customers), (c) hint at potential bright spots and trouble spots, and (d) enable us to make changes to improve later programs.

On the minus side, smile sheets (a) do not seem to correlate with learning or behavior (see the meta-analysis by Alliger, Tannenbaum, Bennett, Traver, and Shotland, 1997, which found very weak correlations), (b) are often biased by being administered to learners in the learning context immediately after learning, and (c) are often analyzed in a manner that treats the data as more meaningful than it really is.

Based on these benefits and difficulties, I recently developed a new smile sheet (one that shows its teeth, so to speak. Smile!) for my workshop Measuring and Creating Learning Transfer.

It has several advantages over traditional smile sheets:

1. Instead of asking learners to respond globally (which they are not very good at), it asks learners to respond to specific learning points covered in the learning intervention. This not only enables the learners to better calibrate their responses, it also gives the learners a spaced repetition (improving later memory retrieval on key learning points).

2. The new smile sheet enables me to capture data about the value of the individual key concepts so that changes can be made in future learning interventions (a simple tabulation of these per-concept ratings is sketched after this list).

3. The smile sheet has only a few overall ratings (when lots of separate ratings are used in traditional smile sheets, most of the time we don’t even analyze or use the data that is collected). There is space for comments on specifics, which obviates the need for many separate ratings and actually yields better data. The average value is highlighted, which helps the learners compare the current learning intervention to previous learning interventions they have experienced. (You should be able to click on the image to see a bigger version.)

4. The smile sheet asks two critical questions related to how likely the information learned will be utilized on the job and how likely the information will be shared with others. In some sense, this is where the rubber hits the road because it asks whether the training is likely to have an impact where it was intended to have an impact.

5. The smile sheet includes some personal touches that reassure the learners that the learning facilitator (trainer, professor, etc., or me in this case) will take the information seriously.

6. Finally, the smile sheet is just a starting point for getting feedback from learners. They are also sent a follow-up survey 2 weeks later, asking them to respond to a few short questions. Here are a few of those questions. Again, you might need to click the image to see a bigger version.

The learners get the following question only if they answer a previous question suggesting that they had not yet shared what they learned with others.
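
As a small illustration of point 2 above, here is a minimal sketch of how the per-concept ratings might be tabulated once the forms are collected. It is illustrative only: the concept names, the 1-to-5 scale, and the field names are assumptions, not my actual workshop data or tooling.

    # Illustrative sketch: averaging per-concept smile sheet ratings.
    # Concept names, the 1-5 scale, and field names are assumed for illustration.
    from statistics import mean

    responses = [
        {"concept": "Spaced repetitions", "value": 5, "newness": 4, "taught_well": 5},
        {"concept": "Spaced repetitions", "value": 4, "newness": 3, "taught_well": 4},
        {"concept": "Aligning context",   "value": 5, "newness": 5, "taught_well": 3},
        {"concept": "Aligning context",   "value": 4, "newness": 4, "taught_well": 4},
    ]

    # Group the ratings by concept, then average each dimension.
    by_concept = {}
    for r in responses:
        by_concept.setdefault(r["concept"], []).append(r)

    for concept, rows in by_concept.items():
        print(
            f"{concept}: value={mean(x['value'] for x in rows):.1f}, "
            f"newness={mean(x['newness'] for x in rows):.1f}, "
            f"taught={mean(x['taught_well'] for x in rows):.1f}"
        )

A low value score suggests a concept to cut or reframe; a low taught-well score suggests a concept worth keeping but teaching better.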

Why I Like My New Smile Sheet

I’m not going to pretend that I’ve created the perfect assessment for my one-day workshop. As I’ve said many times before, I don’t believe in “perfect assessments.” There are simply too many tradeoffs between precision and workability. Also, my new smile sheet and the follow-up survey are really only an improvement on the traditional smile sheet. So much more can be done, as I will detail below.

I like my new evaluation sheet and follow-up survey because they give me actionable information.

  • If my learners tell me that a concept provides little value, I can look for ways to make it valuable and relevant to them, or I can discard it.
  • If my learners find a concept particularly new and valuable, I can reinforce that concept and encourage implementation, or I can highlight this concept in other work that I do (providing value to others).
  • If my learners rate my workshop high at the end of the day, but low after two weeks, I can figure out why and attempt to overcome the obstacles.
  • If my learners think they are likely to implement what they learned (or teach others) at the end of the day, but don’t follow through after two weeks, I can provide more reminders, encourage more management support, provide more practice to boost long-term retrieval, or provide a follow-up learning experience (maybe a working-learning experience). See the sketch after this list.
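
Here is a minimal sketch of the kind of comparison that last bullet implies: flagging learners whose end-of-day intention to apply the training did not survive the two-week follow-up. The learner labels, the 1-to-5 scale, and the gap threshold are all invented for illustration, not taken from my actual surveys.

    # Illustrative sketch: comparing end-of-day intent with two-week follow-through.
    # Learner labels, the 1-5 scale, and the threshold are assumptions.
    end_of_day = {"Learner A": 5, "Learner B": 4, "Learner C": 5}   # "How likely are you to use this?"
    two_weeks  = {"Learner A": 2, "Learner B": 4}                   # "How much have you used this?"

    GAP_THRESHOLD = 2  # assumed size of a drop worth acting on

    for learner, intent in end_of_day.items():
        actual = two_weeks.get(learner)
        if actual is None:
            print(f"{learner}: no follow-up response (survey not completed)")
        elif intent - actual >= GAP_THRESHOLD:
            print(f"{learner}: intent {intent} vs. follow-up {actual} -- "
                  "add reminders, management support, or extra practice")

Anyone flagged this way becomes a candidate for the remedies listed above: reminders, management support, more practice, or a follow-up learning experience.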

I also like the evaluation practice because it supports learning and performance.

  • It provides a spaced repetition of the key learning concepts at the end of the learning event.
  • It provides a further spaced repetition of the key learning concepts at the beginning of the two-week survey.
  • It reminds learners at the end of the learning intervention that they are expected to put their learning into practice and share what they’ve learned with others.
  • It reminds them after 2 weeks back on the job that they are expected to put their learning into practice and share what they’ve learned with others.
  • It provides them with follow-up support 2 weeks out if they feel they need it.

Limitations

  • My new smile sheet and follow-up survey don’t tell me much about how people are actually using what I’ve taught them in their work. They could be implementing things perfectly or completely screwing things up. They might perfectly understand the learning points I was making or they may utterly misunderstand them.
  • The workshop is an open-enrollment workshop, so I don’t really have access to people on the job. When I run the workshop at a client’s site (as opposed to an open-enrollment format), there can be opportunities to actually put things into practice, give feedback, and provide additional information and support. This, by the way, not only improves my learners’ remembering and performance (and my clients’ benefits), it gives me even richer evaluation information than any smile sheet or survey could.
  • While the smile sheet and follow-up survey include the key learning points, they don’t assess retrieval of those learning points or even understanding.
  • Not everyone will complete the follow-up survey.
  • The design I mentioned not only doesn’t track learning, understanding, or retrieval; it also doesn’t compare results to anything except learners’ subjective expectations. If I were going to measure learning or performance or even organizational results, I would consider control groups, pretests, etc. (see the sketch after this list).
  • There is no benchmarking data with other similar learning programs. I don’t know whether my learners are doing better than if they read a book, took a workshop with Ruth Clark, or went and got their master’s degree in learning design from Boise State.
  • Bottom line is that my smile sheet and follow-up survey are an improvement over most traditional smile sheets, but they certainly aren’t a complete solution.
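
To make the control-group idea in that list concrete, here is a minimal sketch of a pretest/posttest comparison between a trained group and an untrained control group. The scores are invented for illustration; a real evaluation would need random assignment, adequate sample sizes, and far more care.

    # Illustrative sketch: estimating a training effect from pre/post scores
    # in a trained group versus an untrained control group. All numbers are invented.
    from statistics import mean

    trained = [(55, 80), (60, 85), (50, 78)]   # (pretest, posttest) pairs
    control = [(58, 62), (52, 55), (61, 66)]   # (pretest, posttest) pairs

    trained_gain = mean(post - pre for pre, post in trained)
    control_gain = mean(post - pre for pre, post in control)

    # The difference in gains is a rough estimate of the training's effect,
    # over and above whatever improvement would have happened anyway.
    print(f"Trained gain: {trained_gain:.1f}")
    print(f"Control gain: {control_gain:.1f}")
    print(f"Estimated training effect: {trained_gain - control_gain:.1f}")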

Learning Measurement is a Critical Leverage Point

Learning Measurement provides us with a critical leverage point in the work that we do. If we don’t do good measurement, we’re not getting good feedback. If we don’t get good feedback, we’re not able to improve what we’re doing.

My workshop smile sheet and follow-up survey attempt to balance workability and information-gathering. If you find value in this approach, great, and feel free to use the links below to download my smile sheet so you can use it as a template for your evaluation needs. If you have suggestions for improvement, send me an email or leave a comment.

References

Alliger, G. M., Tannenbaum, S. I., Bennett, W. Jr., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-358.

I’ve developed three separate job aids on learning measurement over the last few years, and I recently decided that one was enough, so I’ve integrated the wisdom from all three into one job aid, which I now make available to you for free.


This job aid has several advantages:

  1. It’s inspired by honest-to-goodness learning research.
  2. It fits onto one page.
  3. It provides a brief rationale for each point.
  4. It prompts users to audit their current practices.
  5. It prompts users to take action for improvement.
  6. It includes contact information for further inquiries.
  7. It covers critical measurement-design issues.
  8. It’s free.

Click here to download the job aid now.