
In a webinar this month (December 2006), I asked a group of about 100 e-learning professionals about the highest level of assessment (based on Kirkpatrick’s Four Levels) they used on their most recent learning-intervention development project:

  • 11% said they did NO evaluation
  • 26% said they did Level 1 smile sheets
  • 48% said they measured Level 2 learning
  • 15% said they measured Level 3 on-the-job performance
  • 0% said they measured Level 4 business results (or ROI)

Unfortunately, smile sheets are very poor predictors of meaningful learning outcomes: they correlate with learning and performance at less than r = .2, which means they explain less than 4% of the variance in those outcomes. See Alliger, Tannenbaum, Bennett, Traver, & Shotland (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.

Stunning: Even after all the hot air expelled, ink spilled, and electrons excited over the last 10 years about how we ought to be measuring business results, nobody is doing it!

——————————

When I asked about their most recent assessment in terms of WHEN it was delivered, that is, immediately at the end of learning or after a delay:

  • 77% said they did the assessment, "At the end of training."
  • 7% said they did the assessment, "After a delay."
  • 14% said they did the assessment, "At end—and after a delay."
  • 2% said, "Never done / Can’t remember."

Unfortunately, the 77% are biasing their results in a positive direction. They are measuring learning while it is top-of-mind and easily accessible from long-term memory. In other words, they are measuring the learning intervention’s ability to create short-term learning effects, not its ability to support long-term remembering or to minimize forgetting.

In the graphic depiction below, the top of the left axis (the y axis) represents more remembering and the bottom less. Consider what happens if we assess learning at the end (the peak) of the first, leftmost "learning" curve. If learners go on to use what they’ve learned on the job, such an assessment actually has a negative bias, because continued practice strengthens memory beyond the measured level. What typically happens over time, however, is more like the forgetting curve (depicted at the lower right): unless learners regularly use what they’ve learned, an assessment at the end of the first learning curve is likely to be a poor predictor of future remembering, and to show a definite positive bias.
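To make that positive bias concrete, here is a minimal sketch assuming an Ebbinghaus-style exponential forgetting curve, where retention after t days equals e^(-t/s). The retention function and the 10-day strength parameter are illustrative assumptions on my part, not measurements from the webinar group.

```python
import math

def retention(days_since_training: float, strength_days: float = 10.0) -> float:
    """Proportion of learned material still retrievable after a delay,
    under a simple exponential forgetting model (illustrative only)."""
    return math.exp(-days_since_training / strength_days)

immediate = retention(0)   # assessed at the end of training
delayed = retention(28)    # assessed four weeks later, with no on-the-job use

print(f"End-of-training assessment: {immediate:.0%} retained")  # 100%
print(f"Four-week assessment:       {delayed:.0%} retained")    # roughly 6%
```

The exact numbers depend entirely on the assumed strength parameter; the point is only that an end-of-training score tells us very little about the delayed score unless learners keep using the material.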

[Figure: a learning curve followed by a forgetting curve, with remembering on the y axis and time on the x axis]

——————————

When I asked about their most recent assessment in terms of WHERE the learners were when they completed it:

  • 70% said they did the assessment, "In the training room/context."
  • 26% said they did the assessment, "In a different room/context."
  • 5% said, "Never done / Can’t remember."

Unfortunately, the 70% are biasing their assessment results in a positive direction. When learners are in the same context during retrieval as during learning, they tend to recall more, because the background context cues improved retrieval. So delivering our training assessments in the training room (or against the same background stimuli in an e-learning course) is not a fair way to get feedback on our performance as instructional designers.

For one example of this research paradigm, see Smith, S. M., Glenberg, A., & Bjork, R. A. (1978). Environmental context and human memory. Memory & Cognition, 6, 342-353.
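As a toy illustration of the direction of this bias (the numbers below are hypothetical, not data from the 1978 study), suppose that context reinstatement adds a fixed boost to recall:

```python
# Hypothetical numbers for illustration; not data from Smith, Glenberg, & Bjork.
base_recall = 0.55       # assumed recall when tested in a new context
context_boost = 0.15     # assumed advantage from matching the learning context

in_context = base_recall + context_boost    # tested in the training room
out_of_context = base_recall                # tested somewhere else (e.g., the job)

print(f"Tested in the training room:  {in_context:.0%}")
print(f"Tested in a new context:      {out_of_context:.0%}")
print(f"Inflation from context match: {in_context - out_of_context:+.0%}")
```

Since the job is almost never the training room, the out-of-context number is the one that matters.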

[Figure: assessment performance when tested in the learning context versus out of it]

The Bottom Line

First, I’m not blaming these particular folks. This is a common reality. I have regularly failed—and continue to fail often—in validly assessing my own instructional-development efforts. I’m much better when people pay me to evaluate their learning interventions (see LearningAudit.com).

Almost all of us, as far as I can tell, are just not getting valid feedback on our instructional-development efforts. Note that although 48% said they did a Level 2 learning evaluation on their most recent project, most of them probably delivered the assessment in a way that biased the results. That leaves very few of us getting valid feedback on our designs.

We’re in a dark fog about how we’re doing, which leaves us with massively impoverished information for making improvements.

Basically, we live in a shameful, self-imposed fog.

Bring on the fog lights!!