People keep asking me for references to support the claim that learner surveys are not correlated, or are virtually uncorrelated, with learning results. In this post, I share those references, with commentary.

Major Meta-Analyses

Here are the major meta-analyses (studies that compile the results of many other scientific studies using statistical means to ensure fair and valid comparisons):

For Workplace Training

Alliger, G. M., Tannenbaum, S. I., Bennett, W., Jr., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.

Hughes, A. M., Gregory, M. E., Joseph, D. L., Sonesh, S. C., Marlow, S. L., Lacerenza, C. N., Benishek, L. E., King, H. B., & Salas, E. (2016). Saving lives: A meta-analysis of team training in healthcare. Journal of Applied Psychology, 101(9), 1266-1304.

Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.

For University Teaching

Uttl, B., White, C. A., & Wong Gonzalez, D. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42.

What these Results Say

These four meta-analyses, covering more than 200 scientific studies, find that correlations between smile-sheet ratings and learning average about .10 (often expressed as 10%), which is virtually no correlation at all. Statisticians generally consider correlations below .30 to be weak, and a correlation of .10 is very weak indeed: squaring it shows that smile-sheet ratings explain only about 1% of the variance in learning results.
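To make that concrete, here is a minimal simulation sketch (my own illustration in Python, not data from the studies themselves). It generates smile-sheet ratings and learning scores that share a true correlation of .10 and shows how little the first tells us about the second.

```python
# A minimal sketch (my illustration, not data from the meta-analyses):
# simulate smile-sheet ratings and learning scores with a true
# correlation of r = .10 and see how little one tells us about the other.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000
r = 0.10  # the approximate average correlation across the meta-analyses

# Standardized smile-sheet scores, plus learning scores constructed so
# that corr(smile, learning) = r.
smile = rng.standard_normal(n)
learning = r * smile + np.sqrt(1 - r**2) * rng.standard_normal(n)

print(f"observed correlation: {np.corrcoef(smile, learning)[0, 1]:.2f}")
print(f"variance explained (r squared): {r**2:.1%}")  # about 1%

# Given HIGH MARKS (above-average smile-sheet ratings), how often is
# learning also above average? A coin flip would give 50%.
high_marks = smile > 0
p = (learning[high_marks] > 0).mean()
print(f"P(above-average learning | high marks): {p:.1%}")  # roughly 53%
```

In other words, even when a course gets high marks, the odds that it produced above-average learning are barely better than a coin flip, which is exactly the practical point below.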

What these Results Mean

These results suggest that typical learner surveys are virtually uncorrelated with learning results.

From a practical standpoint:

If you get HIGH MARKS on your smile sheets, you are almost equally likely to have (1) an effective course or (2) an ineffective course.

If you get LOW MARKS on your smile sheets, you are almost equally likely to have (1) a poorly designed course or (2) a well-designed course.

Caveats

It is very likely that the traditional smile sheets used in these scientific studies, while capturing data on learner satisfaction, were inadequately designed to capture data on learning effectiveness.

I have developed a new approach to learner surveys to capture data on learning effectiveness: the Performance-Focused Smile Sheet, originally described in my 2016 award-winning book. As yet, no scientific studies have been conducted to correlate these new smile sheets with measures of learning. However, many organizations are reporting substantial benefits. Researchers or learning professionals who want my updated list of recommended questions can access them here.

Reflections

  1. Although I have written a book on learner surveys, in my new learning-evaluation model, LTEM (the Learning-Transfer Evaluation Model), I place smile sheets at Tier 3 of eight tiers, less valuable than measures of knowledge, decision-making, task performance, transfer, and transfer effects. Yes, learner surveys are worth doing, if done right, but they should not be the only tool we use when we evaluate learning.
  2. The earlier belief—and one notably advocated by Donald, Jim, and Wendy Kirkpatrick—that there was a causal chain from learner reactions to learning, behavior, and results has been shown to be false.
  3. There are three types of questions we can utilize on our smile sheets: (1) Questions that focus on learner satisfaction and the reputation of the learning, (2) Questions that support learning, and (3) Questions that capture information about learning effectiveness.
  4. It is my belief that we focus too much on learner satisfaction, which has been shown to be virtually uncorrelated with learning results, and too little on questions that gauge learning effectiveness (the main impetus for the creation of Performance-Focused Smile Sheets).
  5. I do believe that learner satisfaction is important, but it is not the most important thing.

Learning Opportunities regarding Learner Surveys

Series of Four Interviews

I was recently interviewed by Jeffrey Dalto of Convergence Training. Jeffrey is a big fan of research-based practice. He did a great job compiling the interviews.

Click on the title of each one to read the interview:

Anders Ericsson and Robert Pool (Ericsson being the world's leading expert on how expertise develops) have critiqued Malcolm Gladwell's popularization of the 10,000 Hours Rule in a Salon article, adapted from their new book, Peak: Secrets from the New Science of Expertise.

Here are the main points from their article:

  1. "Gladwell did get one thing right, and it is worth repeating because it’s crucial: becoming accomplished in any field in which there is a well-established history of people working to become experts requires a tremendous amount of effort exerted over many years. It may not require exactly ten thousand hours, but it will take a lot."
  2. "There is nothing special or magical about ten thousand hours." It can be more or less. Indeed, in some fields it may take twice as long to reach world-class status.
  3. The number of hours to become an expert varies from field to field.
  4. It's not just about practice (or time spent in an activity). Rather, it is about a very specific form of practice — "deliberate practice, which involves constantly pushing oneself beyond one’s comfort zone, following training activities designed by an expert to develop specific abilities, and using feedback to identify weaknesses and work on them."
  5. There are zero research studies showing that anyone who puts in some requisite number of hours (be it 10,000, or less, or more) will achieve preeminent expertise. And let me add my own conclusion: there are likely to be other factors that influence the development of expertise, including such things as innate abilities, health, environmental stressors, related experiences, nurturance, et cetera. As the authors stress, not everyone can become an expert in a particular field.
  6. Almost always, people can radically improve their performance in a skill through deliberate practice.

Anders Ericsson is amazing; he has been doing great research for decades, putting in well more than 10,000 hours himself, I must think. I've just ordered the book, and I recommend that you order it too!

And here is a nice audio clip with Ericsson.

 

Next Thursday, March 10th, I'll be speaking on Performance-Focused Smile Sheets at the Charlotte, North Carolina chapter of ISPI.

Click here for the details…

Wow! What a week! I published my first book on Tuesday and have been hearing from well-wishers ever since.

Here are some related links you might find interesting:

And here are some random, but maybe related, visuals.

  • In the rush of purchases, Amazon briefly ran out of my book, telling folks they'd have to wait 1-3 weeks (their signal that they have no stock). Later, they found some copies…but the genie was out of the bottle.

[Image: Amazon listing showing the book sold out]

 

 

  • The animal kingdom seems to be behind the book:

[Image: Bozarth's corgi]

[Image: Olah's cat]

 

 

  • Even one of the United States' presidential candidates has spoken up:

[Image: Bernie Sanders]

 

What a week!

 

Thank you all!!

 

-- Will Thalheimer

 

There appears to be more and more momentum for CPAs to be able to earn credentials through micro-learning.

Click to read more…

Researchers at MIT have coined the term "Wait-Learning": learning at a time when a person would otherwise be waiting, and hence wasting time. Their research involves foreign-language learning.

They surmised that instant messaging provided an excellent application for testing whether a program could enable wait-learning of language vocabulary. Often while chatting, conversations feel asynchronous; the person who just sent a message waits for a reply.

They built a program, called WaitChatter, that works in Google Chat. It's an experimental program, only able to teach Spanish and French vocabulary to English speakers. They experimented with WaitChatter and got positive results, which they published online in an ACM publication.

Here's what the authors said about the amount of learning:

"In just two weeks of casual usage, participants were on average able to recall 57 new words, equivalent to approximately four words per day."

TechCrunch has a nice article explaining how WaitChatter works.

WaitChatter is not ready for prime time. It's an experimental program: it works only in Chrome, and only if you disable Google Hangouts and go back to Google Chat. Still, WaitChatter and the concept of wait-learning raise several intriguing ideas:

  1. Wait-Learning, though not an original concept, is a good one…We learning professionals ought to figure out how to maximize efficiencies in this way. Of course, we'll want to make sure that the additional learning doesn't compromise the main task. We know that effective multitasking is largely illusory, often hurting one task or the other, so we'll need to be careful.
  2. Embedding learning opportunities in other applications may enable such efficiencies, if we do it carefully.
  3. Part of the vocabulary was learned based on words that came up in the chat. So, for example, if the word "dog" came up in the chat, WaitChatter might focus on the Spanish equivalent, "el perro." We know from the general research on learning that alignment between the learning context and the performance context produces benefits for learning and remembering, and the authors cite research that such contextual learning benefits language learners as well. (A toy sketch of this idea follows the list.)
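To make the wait-learning mechanism concrete, here is a toy sketch (my own hypothetical illustration in Python; WaitChatter's actual implementation runs in Chrome with Google Chat, and none of these function names come from it). When a sent message goes unanswered, the sketch surfaces a flashcard, preferring vocabulary that appeared in the conversation.

```python
# A hypothetical sketch of wait-learning (not WaitChatter's actual code):
# while the user waits for a chat reply, show a vocabulary flashcard,
# preferring words that appeared in the conversation itself.

# Toy English-to-Spanish lexicon; a real system would use a full dictionary.
VOCAB = {"dog": "el perro", "house": "la casa", "book": "el libro"}

def pick_flashcard(recent_messages):
    """Prefer a word from the recent chat; fall back to any word we know."""
    for message in recent_messages:
        for word in message.lower().split():
            if word in VOCAB:
                return word, VOCAB[word]  # contextual match, e.g. "dog"
    return next(iter(VOCAB.items()))  # no contextual match: pick anything

def on_waiting_for_reply(recent_messages):
    """Hypothetical hook, called once a sent message has gone unanswered
    for a few seconds (the 'wait' that would otherwise be wasted)."""
    english, spanish = pick_flashcard(recent_messages)
    print(f"While you wait: '{english}' in Spanish is '{spanish}'")

on_waiting_for_reply(["I just walked the dog"])
# -> While you wait: 'dog' in Spanish is 'el perro'
```

The contextual preference in pick_flashcard mirrors point 3 above: words that just appeared in the chat get taught first.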

 

The swell of interest in short learning nuggets has produced another article touting their benefits, this one from Training Industry Magazine, written by Manjit Sekhon (of Intrepid Learning).

The article makes no mention of the spacing effect, but it does talk of threading nuggets in an intentional sequence. It emphasizes learners' shorter attention spans and the benefit of keeping people on the job.

More evidence that subscription-learning, and shorter-form learning in general, is on the rise.
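For those curious what threading nuggets with spacing might look like in practice, here is a minimal scheduling sketch (my own illustration, using an assumed expanding-interval rule; the article proposes no such algorithm). Each nugget is introduced in sequence and then re-presented at growing delays, a simple way to exploit the spacing effect.

```python
# A minimal sketch (my illustration; the article proposes no algorithm):
# schedule a threaded sequence of learning nuggets with expanding
# intervals, a simple way to exploit the spacing effect.
from datetime import date, timedelta

def schedule_nuggets(nuggets, start, gaps_in_days=(1, 3, 7, 14)):
    """Return sorted (date, nugget) pairs: one new nugget per day,
    each repeated at expanding intervals after its introduction."""
    events = []
    for i, nugget in enumerate(nuggets):
        day = start + timedelta(days=i)  # introduce one new nugget per day
        events.append((day, f"{nugget} (new)"))
        for gap in gaps_in_days:
            day += timedelta(days=gap)
            events.append((day, f"{nugget} (review)"))
    return sorted(events)

for day, item in schedule_nuggets(["Nugget A", "Nugget B"], date(2015, 6, 1)):
    print(day, item)
```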

 

Clark Quinn and I have been grappling with FUN-da-mental issues in the learning space over the years, and we finally decided to publish some of our dialogue.

In the latest conversation, Clark and I discuss how the tools in the learning field often don't send the right messages about how to design learning: they unintentionally push us toward poor instructional designs.

You can read the discussion on Clark's world-renowned blog by CLICKING HERE.


Or, read an earlier discussion on how professionalized we are by clicking here.