
In 2016 I published a book on how to radically transform learner surveys into something useful. The book won an award from ISPI and helped thousands of companies update their smile sheets. Now I’m updating the book with the knowledge I’ve gained consulting with companies on their learning-evaluation efforts. The second edition will be titled Performance-Focused Learner Surveys: A Radical Rethinking of a Dangerous Art Form (Second Edition).

In the first edition, I listed nine benefits of learner surveys, but I had only scratched the surface. In the second edition, I will offer 20 benefits. Here’s the current list:

Supporting Learning Design Effectiveness

  1. Red-flagging training programs that are not sufficiently effective.
  2. Gathering ideas for ongoing updates and revisions of learning programs.
  3. Judging the strengths, weaknesses, and viability of program updates and pilots.
  4. Providing learning architects and trainers with feedback to aid their development.
  5. Judging the competence of learning architects and trainers.
  6. Judging the contributions to learning made by people outside of the learning team.
  7. Assessing the contributions of learning supports and organizational practices.

Supporting Learners in Learning and Application

  1. Helping learners reflect on and reinforce what they learned.
  2. Helping learners determine what (if anything) they plan to do with their learning.
  3. Nudging learners to greater learning and application efforts.

Nudging Action Through Stealth Messaging

  1. Guiding learning architects to create more effective learning by sharing survey questions before learning designs are finalized and sharing survey results after data is gathered.
  2. Guiding trainers to utilize more effective learning methods by sharing survey questions before learning designs are finalized and sharing survey results after data is gathered.
  3. Guiding organizational stakeholders to support learning efforts more effectively by sharing survey questions and survey results.
  4. Guiding organizational decision makers to better appreciate the complexity and depth of learning and development—helping the learning team to gain credibility and autonomy.

Supporting Relationships with Learners and Other Key Stakeholders

  1. Capturing learner satisfaction data to understand—and make decisions that relate to—the reputation of the learning intervention and/or the instructors.
  2. Upholding the spirit of common courtesy by giving learners a chance for feedback.
  3. Enabling learner frustrations to be vented—to limit damage from negative back-channel communications.

Maintaining Organizational Credibility

  1. Engaging in visibly credible efforts to assess learning effectiveness.
  2. Engaging in visibly credible efforts to utilize data to improve effectiveness.
  3. Reporting out data to demonstrate learning effectiveness.

If you want to learn when the new edition is available, sign up for my list: https://www.worklearning.com/sign-up/.

The second edition will include new and improved question wording, additional questions, additional chapters, etc.

Matt Richter and I, in our Truth-in-Learning Podcast, will be discussing learner surveys in our next episode. Matt doesn’t believe in smile sheets and I’m going to convince him of the amazing power of well-crafted learner surveys. This blog post is my first shot across the bow. To join us, subscribe to our podcast in your podcast app.

People keep asking me for references to the claim that learner surveys are not correlated—or are virtually uncorrelated—with learning results. In this post, I include them, with commentary.


Major Meta-Analyses

Here are the major meta-analyses (studies that compile the results of many other scientific studies using statistical means to ensure fair and valid comparisons):

For Workplace Training

Alliger, G. M., Tannenbaum, S. I., Bennett, W., Jr., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.

Hughes, A. M., Gregory, M. E., Joseph, D. L., Sonesh, S. C., Marlow, S. L., Lacerenza, C. N., Benishek, L. E., King, H. B., & Salas, E. (2016). Saving lives: A meta-analysis of team training in healthcare. Journal of Applied Psychology, 101(9), 1266-1304.

Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.

For University Teaching

Uttl, B., White, C. A., & Wong Gonzalez, D. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42.

What these Results Say

These four meta-analyses, covering over 200 scientific studies, find that correlations between smile-sheet ratings and learning average about r = .10, which is virtually no correlation at all. Statisticians generally consider correlations below .30 to be weak, so a correlation of .10 is very weak indeed.
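For those curious about the mechanics, here is a minimal sketch of how a meta-analysis pools correlations across studies using a sample-size-weighted average (a bare-bones version of the Hunter-Schmidt approach; the per-study numbers below are made up for illustration, not taken from the studies cited above):

```python
# Hypothetical (correlation, sample size) pairs for five studies.
# These values are illustrative only, not the actual study results.
studies = [(0.08, 120), (0.12, 340), (0.05, 90), (0.15, 210), (0.09, 450)]

# Sample-size-weighted mean correlation: bigger studies count for more.
total_n = sum(n for _, n in studies)
r_bar = sum(r * n for r, n in studies) / total_n
print(f"weighted mean r = {r_bar:.2f}")  # ~0.10 for these illustrative values
```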

What these Results Mean

These results suggest that typical learner surveys are not correlated with learning results.

From a practical standpoint:

If you get HIGH MARKS on your smile sheets, you are almost equally likely to have (1) an effective course or (2) an ineffective course.

If you get LOW MARKS on your smile sheets, you are almost equally likely to have (1) a poorly designed course or (2) a well-designed course.
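To see why a correlation of about .10 plays out this way, here is a minimal simulation sketch. It assumes, purely for illustration, that smile-sheet scores and learning results are standardized and jointly normal with r = .10:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.10       # approximate average correlation from the meta-analyses
n = 100_000    # number of simulated courses

# Draw smile-sheet scores and learning results as standardized variables
# correlated at r (a simplifying assumption for illustration).
cov = [[1.0, r], [r, 1.0]]
smile, learning = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

high_marks = smile > np.median(smile)       # courses with high smile-sheet scores
effective = learning > np.median(learning)  # courses with above-average learning

# Among the high-marks courses, what share are actually effective?
print(f"P(effective | high marks) = {effective[high_marks].mean():.2f}")
# Prints roughly 0.53: barely better than a coin flip.
```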

Caveats

It is very likely that the traditional smile sheets that have been used in these scientific studies, while capturing data on learner satisfaction, have been inadequately designed to capture data on learning effectiveness.

I have developed a new approach to learner surveys designed to capture data on learning effectiveness: the Performance-Focused Smile Sheet approach, as originally conveyed in my 2016 award-winning book. As of yet, no scientific studies have been conducted to correlate the new smile sheets with measures of learning. However, many, many organizations are reporting substantial benefits. Researchers or learning professionals who want my updated list of recommended questions can access them here.

Reflections

  1. Although I have written a book on learner surveys, in the new learning evaluation model, LTEM (Learning-Transfer Evaluation Model), I place these smile sheets at Tier 3, out of eight tiers, less valuable than measures of knowledge, decision-making, task performance, transfer, and transfer effects. Yes, learner surveys are worth doing, if done right, but they should not be the only tool we use when we evaluate learning.
  2. The earlier belief—and one notably advocated by Donald, Jim, and Wendy Kirkpatrick—that there was a causal chain from learner reactions to learning, behavior, and results has been shown to be false.
  3. There are three types of questions we can utilize on our smile sheets: (1) Questions that focus on learner satisfaction and the reputation of the learning, (2) Questions that support learning, and (3) Questions that capture information about learning effectiveness.
  4. It is my belief that we focus too much on learner satisfaction, which has been shown to be uncorrelated with learning results—and we also focus too little on questions that gauge learning effectiveness (the main impetus for the creation of Performance-Focused Smile Sheets).
  5. I do believe that learner satisfaction is important, but it is not the most important thing.


As I preach in my workshops on how to create better learner-survey questions (for example, my Gold-Certification workshop on Performance-Focused Smile Sheets), open-ended comment questions are very powerful. Indeed, they are critical in our attempts to truly understand our learners’ perspectives.

Unfortunately, to get the most benefit from comment questions, we have to take time to read every response and reflect on the meaning of all the comments taken together. Someday AI may be able to help us parse comment-question data, but currently the technology is not ready to give us a full understanding. Nor are word clouds or other basic text-processing algorithms useful enough to provide valid insights into our data.

It’s good to take the time to analyze our comment-question data, but if there were a way to quickly get a sense of comment data, wouldn’t we consider using it? Of course!

As most of you know, I’ve been focusing a lot of my attention on learning evaluation over the last few years. While I’ve learned a lot, have been lauded by others as an evaluation thought leader, and have even created some useful innovations like LTEM, I’m still learning. Today, by filling out a survey after going to a CVS MinuteClinic to get a vaccine shot, I learned something pretty cool. Take a look.

This is a question on their survey, delivered to me right after I’d answered a comment question. It gives the survey analyzers a way to quickly categorize the comments. It DOES NOT REPLACE, or should not replace, a deeper look at the comments (my comment, for example, was very specific and, I hope, useful), but it does enable us to ascribe some overall meaning to the results.

Note that this is similar to what I’ve been calling a hybrid question, where we first give people a forced-choice question and then a comment question. The forced-choice question drives clarity, whereas the follow-up comment question enables more specificity and richness.

One warning! Adding a forced-choice question after a comment question should be seen as a tool in our toolbox. Let’s not overuse it. More pointedly, let’s use it only when it is particularly appropriate.

If we’ve asked two open-ended comment questions—one asking for positive feedback and one asking for constructive criticism—we might not need a follow-up forced choice question, because we’ve already prompted respondents to give us the good and the bad.

The bottom line is that we now have two types of hybrid questions to add to our toolbox (see the sketch after this list):

  1. Forced-choice question followed by clarifying comment question.
  2. Comment question followed by categorizing forced-choice question.
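For concreteness, here is a minimal sketch of the two patterns expressed as survey-item data. The structure, field names, and question wording are hypothetical, invented for illustration; they are not from my book or from any particular survey tool:

```python
# Illustrative only: field names and wording are invented for this sketch.

# Hybrid type 1: forced-choice first, clarifying comment second.
hybrid_type_1 = [
    {"type": "forced_choice",
     "prompt": "How able will you be to put this training into practice?",
     "options": ["Fully able", "Mostly able", "Somewhat able", "Not yet able"]},
    {"type": "comment",
     "prompt": "Please explain your answer above."},
]

# Hybrid type 2: comment first, categorizing forced-choice second.
hybrid_type_2 = [
    {"type": "comment",
     "prompt": "What feedback do you have about this program?"},
    {"type": "forced_choice",
     "prompt": "Overall, was your comment above mostly...",
     "options": ["Positive", "Negative", "Mixed", "Neutral"]},
]
```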

Freakin’ Awesome!