In 2016 I published a book on how to radically transform learner surveys into something useful. The book won an award from ISPI and helped thousands of companies update their smile sheets. Now I’m updating the book with the knowledge I’ve gained consulting with companies on their learning-evaluation efforts. The second edition will be titled Performance-Focused Learner Surveys: A Radical Rethinking of a Dangerous Art Form (Second Edition).

In the first edition, I listed nine benefits of learner surveys, but I had only scratched the surface. In the new edition, I will offer 20 benefits. Here’s the current list:

Supporting Learning Design Effectiveness

  1. Red-flagging training programs that are not sufficiently effective.
  2. Gathering ideas for ongoing updates and revisions of learning programs.
  3. Judging the strengths, weaknesses, and viability of program updates and pilots.
  4. Providing learning architects and trainers with feedback to aid their development.
  5. Judging the competence of learning architects and trainers.
  6. Judging the contributions to learning made by people outside of the learning team.
  7. Assessing the contributions of learning supports and organizational practices.

Supporting Learners in Learning and Application

  1. Helping learners reflect on and reinforce what they learned.
  2. Helping learners determine what (if anything) they plan to do with their learning.
  3. Nudging learners to greater learning and application efforts.

Nudging Action Through Stealth Messaging

  1. Guiding learning architects to create more effective learning by sharing survey questions before learning designs are finalized and sharing survey results after data is gathered.
  2. Guiding trainers to utilize more effective learning methods by sharing survey questions before learning designs are finalized and sharing survey results after data is gathered.
  3. Guiding organizational stakeholders to support learning efforts more effectively by sharing survey questions and survey results.
  4. Guiding organizational decision makers to better appreciate the complexity and depth of learning and development—helping the learning team to gain credibility and autonomy.

Supporting Relationships with Learners and Other Key Stakeholders

  1. Capturing learner satisfaction data to understand—and make decisions that relate to—the reputation of the learning intervention and/or the instructors.
  2. Upholding the spirit of common courtesy by giving learners a chance for feedback.
  3. Enabling learner frustrations to be vented—to limit damage from negative back-channel communications.

Maintaining Organizational Credibility

  1. Engaging in visibly credible efforts to assess learning effectiveness.
  2. Engaging in visibly credible efforts to utilize data to improve effectiveness.
  3. Reporting out data to demonstrate learning effectiveness.

If you want to learn when the new edition is available, sign up for my list: https://www.worklearning.com/sign-up/.

The second edition will include new and improved question wording, additional questions, additional chapters, etc.

In the next episode of our Truth-in-Learning Podcast, Matt Richter and I will be discussing learner surveys. Matt doesn’t believe in smile sheets, and I’m going to convince him of the amazing power of well-crafted learner surveys. This blog post is my first shot across the bow. To join us, subscribe to our podcast in your podcast app.

The LEARNNOVATORS team (specifically Santhosh Kumar) asked if I would join them in their Crystal Balling with Learnnovators interview series, and I accepted! They’ve had some really great people on the series; I recommend that you check it out!

The most impressive thing was that they must have studied my whole career history, read my publication list, and watched my videos, because they came up with a whole set of very pertinent and important questions. I was BLOWN AWAY—completely IMPRESSED! And, given their dedication, I spent a ton of time preparing and answering their questions.

It’s a two-part series, and here are the links:

Here are some of the quotes they pulled out and/or I’d like to highlight:

Learning is one of the most wondrous, complex, and important areas of human functioning.

The explosion of different learning technologies beyond authoring tools and LMSs is likely to create a wave of innovations in learning.

Data can be good, but also very very bad.

Learning Analytics is poised to cause problems as well. People are measuring all the wrong things. They are measuring what is easy to measure in learning, but not what is important.

We will be bamboozled by vendors who say they are using AI, but are not, or who are using just 1% AI and claiming that their product is AI-based.

Our senior managers don’t understand learning; they think it is easy, so they don’t support L&D like they should.

Because our L&D leaders live in a world where they are not understood, they do stupid stuff like pretending to align learning with business terminology and business-school vibes—forgetting to align first with learning.

We lie to our senior leaders when we show them our learning data—our smile sheets and our attendance data. We then manage toward these superstitious targets, causing a gross loss of effectiveness.

Learning is hard and learning that is focused on work is even harder because our learners have other priorities—so we shouldn’t beat ourselves up too much.

We know from the science of human cognition that when people encounter visual stimuli, their eyes move rapidly from one object to another and back again trying to comprehend what they see. I call this the “eye-path phenomenon.” So, because of this inherent human tendency, we as presenters—as learning designers too!—have to design our presentation slides to align with these eye-path movements.

Organizations now—and even more so in the near future—will use many tools in a Learning-Technology Stack. These will include (1) platforms that offer asynchronous cloud-based learning environments that enable and encourage better learning designs, (2) tools that enable realistic practice in decision-making, (3) tools that reinforce and remind learners, (4) spaced-learning tools, (5) habit-support tools, (6) insight-learning tools (those that enable creative ideation and innovation), et cetera.

Learnnovators asked me what I hoped for the learning and development field. Here’s what I said:

Nobody is good at predicting the future, so I will share the vision I hope for. I hope we in learning and development continue to be passionate about helping other people learn and perform at their best. I hope we recognize that we have a responsibility not just to our organizations, but beyond business results to our learners, their coworkers, families, and friends, and to the community, society, and the environment. I hope we become brilliantly professionalized, having rigorous standards, a well-researched body of knowledge, higher salaries, and career paths beyond L&D. I hope we measure better, using our results to improve what we do. I hope we, more and more, take a small-s scientific approach to our practices, doing more A-B testing, compiling a database of meaningful results, and building virtuous cycles of continuous improvement. I hope we develop better tools to make building better learning—and better performance—easier and more effective. And I hope we continue to feel good about our contributions to learning. Learning is at the heart of our humanity!

For those of you who don’t know Matt Richter, President of the Thiagi Group, he’s one of the most innovative thinkers when it comes to creating training that both sizzles and supports work performance. Recently, Matt and I began partnering in a new podcast, Truth In Learning, which I’ll have more to say about later once I figure out where the escape hatch is.

NOW, I want to share with you a brilliant new article that Matt surprised me with, on his efforts to brainstorm innovative ways to use LTEM (the Learning-Transfer Evaluation Model).

You should read his article in full, but to give you a preview, here is his list of seven uses for LTEM:

  1. Learning Evaluation—The primary intent of the LTEM framework.
  2. Instructional Design—To negotiate with stakeholders the outcomes desired.
  3. Training Game Design—To ensure games/activities have an instructional purpose.
  4. Coaching—Helping to build a development plan for those who are coached.
  5. Performance Consulting—To focus on performances that matter along the journey.
  6. Keynoting/Presenting—To ensure a focus on meaningful outcomes, not just infotainment.
  7. Sales/Business Development—To keep sales conversations focused on meaningful outcomes.

We Are All in This Together

One of the great benefits of publishing LTEM is that, since its release last year, I’m regularly contacted by people whose organizations are finding new and innovative ways to utilize it—not just for learning evaluation, but as a central element of their learning strategy and practice.

I’m especially pleased with those who have taken LTEM really deep, and I’d like to give a shout-out to Elham Arabi, who is using LTEM in her doctoral dissertation to support a hospital’s effort to maximize the benefits of its learning interventions. Congrats to her for being accepted as a speaker at the upcoming eLearning Guild Learning Solutions Conference, March 31 to April 2, 2020, in Orlando. The title of her talk is “Using Evaluation Data to Enhance Your Training Programs.”

Share Your Examples and Innovations

Please share your innovations and ideas about using LTEM in your workplace, either on social media or by contacting me at https://www.worklearning.com/contact/. I would really love to hear how it’s going, including any obstacles you’ve faced, your success stories, etc.

And, of course, if you’d like me to help your organization utilize LTEM, or simply be the face of LTEM to your organization, please contact me so we can set up a time to talk, and consider my LTEM workshop as a way to introduce LTEM to your team.

People keep asking me for references to the claim that learner surveys are not correlated—or are virtually uncorrelated—with learning results. In this post, I include them, with commentary.

Major Meta-Analyses

Here are the major meta-analyses (studies that compile the results of many other scientific studies using statistical means to ensure fair and valid comparisons):

For Workplace Training

Alliger, G. M., Tannenbaum, S. I., Bennett, W., Jr., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-358.

Hughes, A. M., Gregory, M. E., Joseph, D. L., Sonesh, S. C., Marlow, S. L., Lacerenza, C. N., Benishek, L. E., King, H. B., & Salas, E. (2016). Saving lives: A meta-analysis of team training in healthcare. Journal of Applied Psychology, 101(9), 1266-1304.

Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.

For University Teaching

Uttl, B., White, C. A., & Wong Gonzalez, D. (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42.

What these Results Say

These four meta-analyses, covering over 200 scientific studies, find that the correlation between smile-sheet ratings and learning averages about r = .10, which is virtually no correlation at all. Statisticians typically consider correlations below r = .30 to be weak, so r = .10 is very weak indeed.

What these Results Mean

These results suggest that typical learner surveys are virtually uncorrelated with learning results.

From a practical standpoint:

If you get HIGH MARKS on your smile sheets, you are almost equally likely to have (1) an effective course or (2) an ineffective course.

If you get LOW MARKS on your smile sheets, you are almost equally likely to have (1) a poorly designed course or (2) a well-designed course.
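To make that concrete, here is a minimal simulation, assuming numpy is available. This is my own illustrative sketch, not data from the meta-analyses: it draws pairs of smile-sheet ratings and learning scores that correlate at r = .10, then asks how often high smile-sheet marks go along with above-average learning.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000
r = 0.10  # approximate average correlation reported in the meta-analyses

# Draw correlated standard-normal pairs: (smile-sheet rating, learning score)
cov = [[1.0, r], [r, 1.0]]
smile, learning = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Given high smile-sheet marks, how often is learning also above average?
high_smile = smile > np.median(smile)
above_avg_learning = learning > np.median(learning)

p = above_avg_learning[high_smile].mean()
print(f"P(above-average learning | high smile-sheet marks) = {p:.2f}")
# Prints roughly 0.53: barely better than a coin flip.
```

With r = .10, that conditional probability lands near 53 percent, which is why high marks leave you almost as likely to be looking at an ineffective course as an effective one.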

Caveats

It is very likely that the traditional smile sheets used in these scientific studies, while capturing data on learner satisfaction, were inadequately designed to capture data on learning effectiveness.

I have developed a new approach to learner surveys designed to capture data on learning effectiveness: the Performance-Focused Smile Sheet approach, as originally conveyed in my 2016 award-winning book. As yet, no scientific studies have been conducted to correlate these new smile sheets with measures of learning. However, many, many organizations are reporting substantial benefits. Researchers or learning professionals who want my updated list of recommended questions can access them here.

Reflections

  1. Although I have written a book on learner surveys, my new learning-evaluation model, LTEM (the Learning-Transfer Evaluation Model), places smile sheets at Tier 3 of eight tiers, below measures of knowledge, decision-making, task performance, transfer, and transfer effects. Yes, learner surveys are worth doing, if done right, but they should not be the only tool we use when we evaluate learning.
  2. The earlier belief—and one notably advocated by Donald, Jim, and Wendy Kirkpatrick—that there was a causal chain from learner reactions to learning, behavior, and results has been shown to be false.
  3. There are three types of questions we can utilize on our smile sheets: (1) Questions that focus on learner satisfaction and the reputation of the learning, (2) Questions that support learning, and (3) Questions that capture information about learning effectiveness.
  4. It is my belief that we focus too much on learner satisfaction, which has been shown to be uncorrelated with learning results—and we also focus too little on questions that gauge learning effectiveness (the main impetus for the creation of Performance-Focused Smile Sheets).
  5. I do believe that learner satisfaction is important, but it is not the most important thing we measure.

Learning Opportunities regarding Learner Surveys

Every year or so, based on work with clients and new analysis, I provide the public with an updated set of recommended smile-sheet questions, free of charge.

You can access the New Questions by clicking here.

Related Resources

CEOs are calling for their companies to be more innovative in an ever-accelerating competitive landscape! Creativity is the key leverage point for innovation. Research I’ve compiled (from the science of creativity) shows that unique and valuable ideas are generated when people and teams look beyond their inner circle to those in their peripheral networks. GIVEN THIS, a smart company will seed itself with outside influencers who are working with new ideas.

But what is the vast majority of big companies doing that kills their own creativity? They are making it difficult or virtually impossible for their front-line departments to hire small businesses and consultants. Hiring them is allowed, but massive walls are being built! And these walls have grown higher over the last five to ten years:

  1. Only fully vetted companies can be hired, requiring small, lean companies to waste time on compliance—or turn away in frustration. This also causes large-company managers to favor the vetted companies, even when a small business or consultant would provide better value or more pertinent products or services.
  2. Master Service Agreements are required (pushing small companies away due to time and legal fees).
  3. Astronomical amounts of insurance are required. Why the hell do consultants need $2 million in insurance, even when they are consulting on non-safety-related issues? Why do they need any insurance at all if they are not impacting critical safety factors?
  4. Companies can’t be hired unless they’ve been in business for 5 or 10 or 15 years, completely eliminating the most unique and innovative small businesses or consultants—those who recently set up shop.
  5. Minimum company revenues are required, often in the millions of dollars.

These barriers, of course, aren’t the only ones pushing large organizations away from small businesses or consultants. Small companies often can’t afford sales forces or marketing budgets, so they are less likely to win a share of large companies’ attention. Small companies aren’t seen as safe bets because they don’t have a name, or their website is not as beautiful, or they haven’t yet worked with other big-name companies, or they don’t speak the corporate language. Given these surface characteristics, only the bravest, most visionary frontline managers will take the risk of making the creative hire. And even then, their companies are making it increasingly hard for them to follow through.

Don’t be fooled by the high-visibility anecdotes that show a CEO hiring a book author or someone featured in Wired, HBR, or on some podcast. Yes, CEOs and senior managers can easily find ways to hire innovators, and the resulting top-down creativity infusion can be helpful. But it can be harmful as well! Too often, senior managers are too far removed from knowing what works and what’s needed on the front lines. They push things innocently, not knowing that they are distracting the troops from what’s most important, or worse, pushing frontline teams to do stupid stuff against their best judgment.

Even more troublesome, these anecdotes of top-down innovation are too few and far between. There may be ten senior managers who can hire innovation seeds, but there are dozens or hundreds or thousands of folks who might be doing so but can’t.

A little digression: it’s the frontline managers who know what’s needed—or perhaps more importantly, the “leveraging managers,” if I can coin a term. These are the managers who are deeply experienced and wise in the work that is getting done, but high enough in the organization to see the business-case big picture. I specifically exclude “bottle-cap managers,” who have little or no experience in a work area but were placed there because they have business experience. Research shows these kinds of hires are particularly counterproductive for innovation.

Let me summarize.

I’m not selling anything here. I’m in the training, talent development, and learning-evaluation business as a consultant—I’m not an innovation consultant! I’m just sharing this out of my own frustration with these stupid, counterproductive barriers that I and my friends in small businesses and consultancies have experienced. I’m also venting here as a call to action for large organizations: wake the hell up to the harm you are inflicting on yourselves and on the economy in general. By not supporting the most innovative small companies and consultants, you are dumbing down the workforce for years to come!

Alright! I suppose I should offer to help instead of just gripe! I have done extensive research on creativity. But I don’t have a workshop developed, the research is not yet in publishable form, and it’s not really what I’m focused on right now. I’m focused on innovating in learning evaluation (see my new learning-evaluation model and my new method for capturing valid and meaningful data from learners). These are two of the most important innovations in learning evaluation in the past few years!

However, a good friend of mine did, just last month, suggest that the world should see the research on creativity that I’ve compiled (thanks Mirjam!). Given the right organization, situation, and requirements—and the right amount of money—I might be willing to take a break from my learning-evaluation work and bring this research to your organization. Contact me to try and twist my arm!

I’m serious, I really don’t want to do this right now, but if I can capture funds to reinvest in my learning-evaluation innovations, I just might be persuaded. On the contact-me link, you can set up an appointment with me. I’d love to talk with you if you want to talk innovation or learning evaluation.

This is a guest post by Annette Wisniewski, Learning Strategist at Judge Learning Solutions. In this post she shares an experience building a better smile sheet for a client.

She also does a nice job showing how to improve questions by getting rid of Likert-like scales and replacing them with more concrete answer choices.

______________________________

Using a “Performance-Focused Smile Sheets” Approach for Evaluating a Training Program

Recently, one of our clients experienced an alarming drop in customer confidence, so they hired us, Judge Learning Solutions, to evaluate the effectiveness of their customer support training program. I was the learning strategist assigned to the project. Since training never works in isolation, I convinced the client to let me evaluate both the training program and the work environment.

I wanted to create the best survey possible to gauge the effectiveness of the training program as well as evaluate the learners’ work environment, including relevant tools, processes, feedback, support, and incentives. I also wanted to create a report that included actionable recommendations on how to improve both the training program and workforce performance.

I had recently finished reading Will’s book, Performance-Focused Smile Sheets, so I knew that traditional Likert-based questions are problematic: they are highly subjective, don’t clearly distinguish between answer choices, and limit respondents to one, sometimes insufficient, option.

For example, most smile sheets ask learners to evaluate their instructor. A traditional smile-sheet question might ask learners to rate the instructor using a Likert scale:

   How would you rate your course instructor?

  1. Very ineffective
  2. Somewhat ineffective
  3. Somewhat effective
  4. Very effective

But the question leaves too much open to interpretation. What does “ineffective” mean? What does “effective” mean? One learner might have completely different criteria for an “effective” instructor than another. What is the difference between “somewhat ineffective” and “somewhat effective”? Could it be the snacks the instructor brought in mid-afternoon? It’s hard to tell. Also, how can the instructor use this feedback to improve next time? There’s just not enough information in this question to make it very useful.

For my evaluation project, I wrote the survey question using Will’s guidelines to provide distinct, meaningful options, and then allowed learners to select as many responses as they wanted.

   What statements are true about your course instructor? Select all that apply.

  1. Was OFTEN UNCLEAR or DISORGANIZED.
  2. Was OFTEN SOCIALLY AWKWARD OR INAPPROPRIATE.
  3. Exhibited UNACCEPTABLE LACK OF KNOWLEDGE.
  4. Exhibited LACK OF REAL-WORLD EXPERIENCE.
  5. Generally PERFORMED COMPETENTLY AS A TRAINER.
  6. Showed DEEP SUBJECT-MATTER KNOWLEDGE.
  7. Demonstrated HIGH LEVELS OF REAL-WORLD EXPERIENCE.
  8. MOTIVATED ME to ENGAGE DEEPLY IN LEARNING the concepts.
  9. Is a PERSON I CAME TO TRUST.

It’s still just one question, but in this case, the learner was able to provide more useful feedback to both the instructor and the course sponsors. As Will recommended, I added proposed standards and then tracked the percentage of learners selecting each response to include in my report.
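For readers curious about the tallying mechanics, here is a minimal sketch, assuming pandas and using invented response data with shortened option names (not the client’s actual results), of how the percentage selecting each option can be computed for a select-all-that-apply question:

```python
import pandas as pd

# Hypothetical respondent data for the "select all that apply" instructor
# question above; each column is an answer choice, True if selected.
responses = pd.DataFrame({
    "often_unclear_or_disorganized": [False, False, True,  False],
    "performed_competently":         [True,  True,  False, True],
    "deep_subject_matter_knowledge": [True,  False, False, True],
    "motivated_me_to_engage":        [False, True,  False, True],
})

# Percentage of respondents who selected each option
pct_selected = responses.mean().mul(100).round(1)
print(pct_selected.to_string())
```

Comparing each option’s percentage against the proposed standards then makes it easy to flag results that fall short.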

I used this same approach when asking learners about the course learning objectives.

Instead of asking a question using a typical Likert scale:

   After taking the course, I am now able to navigate the system.

  1. Strongly agree
  2. Agree
  3. Neither agree nor disagree
  4. Disagree
  5. Strongly disagree

I created a more robust question that provided better information about how well the learner was able to navigate the system and what the learner felt he or she needed to become more proficient. I formatted the question as a matrix, so I could ask about all of the learning objectives at once. The learner perceived this to be one question, but I gleaned nine questions’ worth of data from it. Here’s a redacted excerpt of that question as it appeared in my report, shortened to the first four learning objectives.
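To show how a single matrix question yields several questions’ worth of data, here is a small sketch, again assuming pandas, with hypothetical objective names and answer choices standing in for the redacted ones:

```python
import pandas as pd

# One column per learning objective; values are the competence levels
# learners selected. All names and data are invented for illustration.
wide = pd.DataFrame({
    "respondent":      [1, 2, 3],
    "navigate_system": ["can do unassisted", "need more practice", "can do unassisted"],
    "create_report":   ["need more guidance", "can do unassisted", "need more practice"],
})

# Reshape to long form: one row per respondent per learning objective
long_form = wide.melt(id_vars="respondent", var_name="objective", value_name="competence")

# Tally how many learners chose each competence level for each objective
summary = long_form.groupby(["objective", "competence"]).size().unstack(fill_value=0)
print(summary)
```

Each learning objective can then be reported on separately, just as if it had been asked as its own question.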

The questions took a little more time to write, but the same amount of time for respondents to answer. At first, the client was hesitant to use this new approach to survey questions, but it didn’t take them long to see how I would be able to gather much more valuable data.

The descriptive answer choices of the survey, combined with interviews and extant data reviews, allowed me to provide my client with a very thorough evaluation report. The report not only included a clear picture of the current training program, but also provided detailed and prioritized recommendations on how to improve both the training program and the work environment.

The client was thrilled. I had given them not only actionable recommendations but also the evidence they needed to procure funding to make the improvements. When my colleague checked back with them several months later, they had already implemented several of my recommendations and were in the process of implementing more.

I was amazed at how easy it was to improve the quality of the data I gathered, and it certainly impressed my client. I will never write evaluation questions any other way again.

If you plan on conducting a survey, try using Will’s approach to writing performance-focused questions. Whether you are evaluating a training program or looking for insights on improving workforce performance, you will be happy you did!

I’m thrilled to announce that my Gold-Certification Workshop on Performance-Focused Smile Sheets is now open for registration, with access available in about a week, on Tuesday, May 14, 2019.

This certification workshop is the culmination of years of work and practice. First there was my work with clients on evaluation. Then there was the book. Then I gained extensive experience building and piloting smile sheets with a variety of organizations. I taught classroom and webinar workshops. I spoke at conferences and gave keynotes. And of course, I developed and launched LTEM (The Learning-Transfer Evaluation Model), which is revolutionizing the practice of workplace learning—and providing the first serious alternative to the Kirkpatrick-Katzell Four-Level Model.

Over the last year, I’ve been building an online, asynchronous workshop that is rigorous, comprehensive, and challenging enough to warrant a certification. It’s now ready to go!

I’d love it if you would enroll and join me and others in learning!

You can learn more about this Gold-Certification Workshop by clicking here.

Congratulations to Steve Semler who has become Work-Learning Research’s first certification earner by successfully completing the Work-Learning Academy course on Performance-Focused Smile Sheets!

The certification workshop is not yet available to the public, but Steve generously agreed to take the program before its release. His certification verification can be viewed here.

Those who want to be notified of the upcoming release date can do that here.

Links of Interest: