Posts

As I preach in my workshops on how to create better learner-survey questions (for example, my Gold-Certification workshop on Performance-Focused Smile Sheets), open-ended comment questions are very powerful. Indeed, they are critical in our attempts to truly understand our learners’ perspectives.

Unfortunately, to get the most benefit from comment questions, we have to take time to read every response and reflect on the meaning of all the comments taken together. Someday AI may be able to help us parse comment-question data, but currently the technology is not ready to give us a full understanding. Nor are word clouds or other basic text-processing algorithms useful enough to provide valid insights into our data.

It’s good to take time analyzing our comment-question data, but if there were a way to quickly get a sense of comment data, wouldn’t we consider using it? Of course!

As most of you know, I’ve been focusing a lot of my attention on learning evaluation over the last few years. While I’ve learned a lot, have been lauded by others as an evaluation thought leader, and have even created some useful innovations like LTEM, I’m still learning. Today, by filling out a survey after going to a CVS MinuteClinic to get a vaccine shot, I learned something pretty cool. Take a look.

This is a question on their survey, delivered to me right after I’d answered a comment question. It gives the survey analyzers a way to quickly categorize the comments. It does not replace (and should not replace) a deeper look at the comments (for example, my comment was very specific and useful, I hope), but it does enable us to ascribe some overall meaning to the results.

Note that this is similar to what I’ve been calling a hybrid question, where we first give people a forced-choice question and then give them a comment question. The forced-choice question drives clarity, whereas the follow-up comment question enables more specificity and richness.

One warning! Adding a forced-choice question after a comment question should be seen as a tool in our toolbox. Let’s not overuse it. More pointedly, let’s use it when it is particularly appropriate.

If we’ve asked two open-ended comment questions—one asking for positive feedback and one asking for constructive criticism—we might not need a follow-up forced-choice question, because we’ve already prompted respondents to give us the good and the bad.

The bottom line is that we now have two types of hybrid questions to add to our toolbox:

  1. Forced-choice question followed by clarifying comment question.
  2. Comment question followed by categorizing forced-choice question.
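To see why the categorizing forced-choice question is so handy at analysis time, here is a minimal Python sketch. The comments and category labels are entirely invented for illustration; the point is that respondents’ self-categorizations can be tallied instantly, while the raw comments remain available for the deeper read that the tally never replaces:

```python
from collections import Counter

# Hypothetical responses: each pairs a free-text comment with the
# respondent's own categorization from the follow-up forced-choice question.
responses = [
    {"comment": "The nurse explained aftercare clearly.", "category": "Praise"},
    {"comment": "Waited 40 minutes past my appointment.", "category": "Complaint"},
    {"comment": "Please add evening hours.",              "category": "Suggestion"},
    {"comment": "Check-in kiosk was confusing.",          "category": "Complaint"},
]

# Quick overview: how do respondents themselves characterize their comments?
counts = Counter(r["category"] for r in responses)
for category, n in counts.most_common():
    print(f"{category}: {n} ({n / len(responses):.0%})")

# The deeper read is still required: pull the full comments per category.
complaints = [r["comment"] for r in responses if r["category"] == "Complaint"]
```

The tally gives the quick overall sense; the per-category comment lists support the reflective reading that makes comment questions valuable in the first place.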

Freakin’ Awesome!

 

This week, Brett Christensen published an article on how he’s used a Performance-Focused Smile Sheet to support him in teaching one of ISPI’s flagship workshops.

What I found particularly striking is how Brett used the smile-sheet results to make sense of learning effectiveness. His goal was to help his learners be able to take what they’ve learned and use it back on the job.

One smile-sheet question he used pointed to results suggesting that learners felt they had gained awareness of the concepts but might not be fully able to put what they learned into practice. This raised a red flag, so Brett examined results from another question, on the amount of practice in the workshop. The learners told him that practice made up only a little more than 50% of the workshop, and Brett used this information to consider changes that would add more practice.

He also used a question to get a sense of whether the spacing effect was utilized to support long-term remembering, a key research-based learning approach. He got good news there: even in a one-day workshop, many learners felt repetitions were delivered after a delay of an hour or more. Good instructional design!

For a century or more, our learner-feedback questions have focused on satisfaction, course reputation, and other factors that are NOT directly related to learning effectiveness. Now we have a new methodology, first described in the award-winning book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form. We ought to use this to get feedback about what we can do better.

Brett offers a wonderful case study from his work teaching a course offered through ISPI (developed by Dr. Roger Chevalier). We are no longer hogtied with evaluations that provide us with bogus information. We can look for ways to get better feedback, improve our learning interventions, and get better results.

To read Brett’s full article, click here…

One of the most common questions I get when I speak about the Performance-Focused Smile-Sheet approach (see the book’s website at SmileSheets.com) is “What can be done to get higher response rates from my smile sheets?”

Of course, people also refer to smile sheets as evals, Level 1s, happy sheets, hot or warm evaluations, response forms, reaction forms, etc. The term covers both paper-and-pencil forms and online surveys. Indeed, as smile sheets go online, more and more people are finding that online surveys get a much lower response rate than in-classroom paper surveys.

Before I give you my list for how to get a higher response rate, let me blow this up a bit. The thing is, while we want high response rates, there’s something much more important than response rates. We also want response relevance and precision. We want the questions to relate to learning effectiveness, not just learning reputation and learner satisfaction. We also want the learners to be able to answer the questions knowledgeably and give our questions their full attention.

If we have bad questions (ones that use Likert-like or numeric scales, for example), it won’t matter that we have high response rates. In this post, I’m NOT going to focus on how to write better questions. Instead, I’m tackling how we can motivate our learners to give our questions more of their full attention, thus increasing the precision of their responses while also increasing our response rates.

How to get Better Responses and Higher Response Rates

  1. Ask with enthusiasm, while also explaining the benefits.
  2. Have a trusted person make the request (often an instructor who our learners have bonded with).
  3. Mention the coming smile sheet early in the learning (and more than once) so that learners know it is an integral part of the learning, not just an add-on.
  4. While mentioning the smile sheet, let folks know what you’ve learned from previous smile sheets and what you’ve changed based on the feedback.
  5. Tell learners what you’ll do with the data, and how you’ll let them know the results of their feedback.
  6. Highlight the benefits to the instructor, to the instructional designers, and to the organization. Those who ask can mention how they’ve benefited in the past from smile sheet results.
  7. Acknowledge the effort that they — your learners — will be making, maybe even commiserating with them that you know how hard it can be to give their full attention when it’s the end of the day or when they are back to work.
  8. Put the time devoted to the survey in perspective, for example, “We spent 7 hours today in learning, that’s 420 minutes, and now we’re asking you for 10 more minutes.”
  9. Assure your learners that the data will be confidential and that responses are aggregated so that an individual’s answers are never shared.
  10. Let your learners know the percentage of people like them who typically complete the survey (caveat: if it’s relatively high).
  11. Use more distinctive answer choices. Avoid Likert-like answer choices and numerical scales — because learners instinctively know they aren’t that useful.
  12. Ask more meaningful questions. Use questions that learners can answer with confidence. Ask questions that focus on meaningful information. Avoid obviously biased questions — as these may alienate your learners.

How to get Better Responses and Higher Response Rates on DELAYED SMILE SHEETS

Sometimes, we’ll want to survey our learners well after a learning event, for example three to five weeks later. Delayed smile sheets are perfectly positioned to find out more about how the learning is relevant to the actual work or to our learners’ post-learning application efforts. Unfortunately, prompting action — that is getting learners to engage our delayed smile sheets — can be particularly difficult when asking for this favor well after learning. Still, there are some things we can do — in addition to the list above — that can make a difference.

  1. Tell learners what you learned from the end-of-learning smile sheet they previously completed.
  2. Ask the instructor who bonded with them to send the request (instead of an unknown person from the learning unit).
  3. Send multiple requests, preferably using a mechanism that only sends these requests to those who still need to complete the survey.
  4. Have the course officially end sometime AFTER the delayed smile sheet is completed, even if that is largely just a perception. Note that multiple-event learning experiences lend themselves to this approach, whereas single-event learning experiences do not.
  5. Share with your learners a small portion of the preliminary data from the delayed smile sheet. “Already, 46% of your fellow learners have completed the survey, with some intriguing tentative results. Indeed, it looks like the most relevant topic as rated by your fellow learners is… and the least relevant is…”
  6. Share with them the names or job titles of some of the people who have completed the survey already.
  7. Share with them the percentage of folks from their unit who have responded already, or share a comparison across units.

What about INCENTIVES?

When I ask audiences for their ideas for improving responses and increasing response rates, they often mention some sort of incentive, usually based on some sort of lottery or raffle. “If you complete the survey, your name will be submitted for a chance to win the latest tech gadget, a book, time off, lunch with an executive, etc.”

I’m a skeptic. I’m open to being wrong, but I’m still skeptical about the cost/benefit calculation. Certainly, for some audiences, an incentive will increase rates of completion. And for some audiences, the benefits may even outweigh the harms that come with incentives.

What harms you might ask? When we provide an external incentive, we might be sending a message to some learners that we know the task has no redeeming value or is tedious or difficult. People who see their own motivation as caused by the external incentive are potentially less likely to seriously engage our questions, producing bad data. We’re also not just having an effect on the current smile sheet. When we incentivize people today, they may be less willing next time to engage in answering our questions. They may also be pushed into believing that smile sheets are difficult, worthless, or worse.

Ideally, we’d like our learners to want to provide us with data, to see answering our questions as a worthy and helpful exercise, one that is valuable to them, to us, and to our organization. Incentives push against this vision.

 

Are Your Smile Sheets Giving You Good Data?

In honor of April as “Smile-Sheet Awareness Month,” I am releasing a brand new smile-sheet diagnostic.

Available by clicking here:
http://smilesheets.com/smile-sheet-diagnostic-survey/

This diagnostic is based on wisdom from my award-winning book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form, plus the experience I’ve gained helping top companies implement new measurement practices.

The diagnostic is free and asks you 20 questions about your organization’s current practices. It then provides instant feedback.

Is my book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form, award worthy?

I think so, but I'm hugely biased! SMILE.


Here's what I wrote today on an award-submission application:

Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form is a book, published in February 2016, written by Will Thalheimer, PhD, President of Work-Learning Research, Inc.

The book reviews research on smile sheets (learner feedback forms), demonstrates the limitations of traditional smile sheets, and provides a completely new formulation on how to design and deploy smile sheets.

The ideas in the book — and the example questions provided — help learning professionals focus on "learning effectiveness" in supporting post-learning performance. Where traditional smile sheets focus on learner satisfaction and the credibility of training, Performance-Focused Smile Sheets can also focus on science-of-learning factors that matter. Smile sheets can be transformed by focusing on learner comprehension, factors that influence long-term remembering, learner motivation to apply what they've learned, and after-learning supports for learning transfer and application of learning to real-world job tasks.

Smile sheets can also be transformed by looking beyond Likert-like responses and numerical averages that dumb down our metrics and lead to bias and paralysis. We can go beyond meaningless averages ("My course is a 4.1!") and provide substantive information to ourselves and our stakeholders.
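As a sketch of what going beyond averages can look like in practice, here is a minimal Python example. The answer data are invented, and the letter labels simply echo the answer-choice style used in the example questions later in this post; the point is that a full distribution tells stakeholders something a single "4.1" cannot:

```python
from collections import Counter

# Hypothetical responses to a question with concrete, labeled answer choices.
answers = ["B", "C", "C", "D", "B", "C", "E", "C", "D", "B"]

labels = {
    "A": "Not at all able to put the concepts into practice",
    "B": "General awareness; needs more training/guidance",
    "C": "Needs more hands-on experience",
    "D": "Fully competent",
    "E": "Expert level",
}

# Report the full distribution rather than collapsing it to a single average.
counts = Counter(answers)
for choice in "ABCDE":
    share = counts.get(choice, 0) / len(answers)
    print(f"{choice}. {labels[choice]}: {share:.0%}")
```

A report like this lets us say "40% of learners still need hands-on experience" instead of "the course scored a 4.1," which is exactly the kind of actionable granularity the book argues for.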

The book reviews research showing that so-called "learner-centric" formulations are filled with dangers, as research shows that learners don't always know how they learn best. Smile-sheet questions must support learners in making good decisions, not introduce biases that warp the data.

For decades our industry has been mired in the dishonest and disempowering practice of traditional smile sheets. Thankfully, a new approach is available to us.

Sure! I'd love to see my work honored. More importantly, I'd love to see the ideas from my book applied wisely, improved, and adopted for training evaluation, student evaluations, conference evaluations, etc. 

You can help by sharing, by piloting, by persuading, by critiquing and improving! That will be my greatest award!

Update January 2018: To see my latest recommendations for smile-sheet question design, go to this web page.

===================

The Original Post:

The response to the book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form, has been tremendous! Since February, when it was published, I’ve received hundreds of thank-you’s from folks the world over who are thrilled to have a new tool — and to finally have a way to get meaningful data from learner surveys. At the ATD conference where I spoke recently, the book was so popular it sold out! If you want to buy the book, the best place is still SmileSheets.com, the book’s website.

Since publication, I’ve begun a research effort to learn how companies are utilizing the new smile-sheet approach — and to learn what’s working, what the roadblocks are, and what new questions they’ve developed. As I said in the book in the chapter that offers 26 candidate questions, I hope that folks tailor questions, improve them, and develop new questions. This is happening, and I couldn’t be more thrilled. If your company is interested in being part of my research efforts, please contact me by clicking here. Likewise, if you’ve got new questions to offer, let me know as well.

Avoiding Issues of Traditional Smile Sheets

Traditional smile sheets tend to focus on learners’ satisfaction and learners’ assessments of the value of the learning experience. Scientific research shows us that such learner surveys are not likely to be correlated with learning results. Performance-focused smile sheets offer several process improvements:

  1. Avoid Likert-like scales and numerical scales, which create a garbage-in, garbage-out problem: they don’t offer clear delineations between answer choices, don’t support respondent decision-making, and open responding to bias.
  2. Instead, utilize concrete answer choices, giving respondents more granularity, and enabling much more meaningful results.
  3. In addition to, or instead of, focusing on factors related to satisfaction and perceived value, target factors that are related to learning effectiveness.

New Example Questions

As new questions come to my attention, I’ll share them here on my blog and elsewhere. You can sign up for my email newsletter if you want to increase the likelihood that you’ll see new smile-sheet questions (and for other learning-research related information as well).

Please keep in mind that there are no perfect assessment items, no perfect learning metrics, and no perfect smile-sheet questions. I’ve been making improvements to my own workshop smile sheets for years, and every time I update them, I find improvements to make. If you see something you don’t like in the questions below, that’s wonderful! When evaluating an assessment item, it’s useful to ask whether the item (1) targets something important and (2) is better than other items that we could use or that we’ve used in the past.

Question Example — A Question for Learners’ Managers

My first example comes from a world-renowned data and media company. They decided to take one of the book’s candidate questions, which was designed for learners to answer, and modify the question to ask learners’ managers to answer. Their reasoning: The training is strategically important to their business and they wanted to go beyond self-report data. Also, they wanted to send “stealth messages” to learners’ managers that they as managers had a role to play in ensuring application of the training to the job.

Here’s the question (aimed at learners’ managers):

In regard to the course topics taught, HOW EFFECTIVELY WAS YOUR DIRECT REPORT ABLE to put what he/she learned into practice in order to PERFORM MORE EFFECTIVELY ON THE JOB?

A. He/she was NOT AT ALL ABLE to put the concepts into practice.

B. He/she has GENERAL AWARENESS of the concepts taught, but WILL NEED MORE TRAINING / GUIDANCE to put the concepts into practice.

C. He/she WILL NEED MORE HANDS-ON EXPERIENCE to be fully competent in using the concepts taught.

D. He/she is at a FULLY COMPETENT LEVEL in using the concepts taught.

E. He/she is at an EXPERT LEVEL in using the concepts taught.

Question Example — Tailoring a Question to the Topic

In writing smile-sheet questions, there’s a tradeoff between generalization and precision. Sometimes we need a question to be relevant to multiple courses. We want to compare courses to one another. Personally, I think we overvalue this type of comparison, even when we might be comparing apples to oranges. For example, do we really want to compare scores on courses that teach such disparate topics as sexual harassment, word processing, leadership, and advanced statistical techniques? Still, there are times when such comparisons make sense.

The downside of generalizability is that we lose precision. Learners are less able to calibrate their answers. Analyzing the results becomes less meaningful. Also, learners see the learner-survey process as less valuable when questions are generic, so they give less energy and thought to answering the questions, and our data become less valuable and more biased.

Here is a question I developed for my own workshop (on how to create better smile sheets, by the way SMILE):

How READY are you TO WRITE QUESTIONS for a Performance-Focused Smile Sheet?

CIRCLE ONE OR MORE ANSWERS

AND/OR WRITE YOUR REASONING BELOW

A. I’m STILL NOT SURE WHERE TO BEGIN.

B. I KNOW ENOUGH TO GET STARTED.

C. I CAN TELL A GOOD QUESTION FROM A BAD ONE.

D. I CAN WRITE MY OWN QUESTIONS, but I’d LIKE to get SOME FEEDBACK before using them.

E. I CAN WRITE MY OWN QUESTIONS, and I’m CONFIDENT they will be reasonably WELL DESIGNED.

More Thoughts?

 

 

Note several things about this question. First, to restate: it is infinitely more tailored than a generic question could be. It encourages more thoughtful responding and creates more meaningful feedback.

Second, you might wonder why all the CAPS! I advocate CAPS because (1) CAPS have been shown in research to slow reading speed. Too often, our learners burn through our smile-sheet questions; anything we can do to make them attend more fully is worth trying. Also, (2) respondents often read the full question and then skim back over it when deciding how to respond. I want them to have an easy way to parse the options. Full disclosure: to my knowledge, all CAPS has not yet been studied for smile sheets. At this point, my advocacy for all CAPS is based on my intuition about how people process smile-sheet questions. If you’d like to work with me to test this in a scientifically rigorous fashion, please contact me.

Third, notice the opportunity for learners to write clarifying comments. Open-ended questions, though not easily quantifiable, can be the most important questions on smile sheets. They can provide intimate granularity — a real sense of the respondents’ perceptions. In these questions, we’re using a hybrid format: a forced-choice question followed by an open-ended opportunity for clarification. This not only enables the benefits of open-ended responding, but it also enables us to get clarifying meaning. In addition, it provides something of a reality check on our question design. If we notice folks responding in ways that aren’t afforded in the answer choices given, we can improve our question for later versions.

 

Question Example — Simplifying The Wording

In writing smile-sheet questions, there’s another tradeoff to consider. More words add more precision, but fewer words add readability and motivation to engage the question fully. In the book, I talk about what I once called “The World’s Best Smile Sheet Question.” I liked it partly because the answer choices were more precise than a Likert-like scale. It did have one drawback: it used a lot of words. For some audiences this might be fine, but for others it might be entirely inappropriate.

Recently, in working with a company to improve their smile sheet, a first draft included the so-called World’s Best Smile Sheet Question. But they were thinking of piloting the new smile sheet for a course to teach basic electronics to facilities professionals. Given the topic and audience, I recommended a simpler version:

How able will you be to put what you’ve learned into practice on the job?  Choose one.

A. I am NOT AT ALL ready to use the skills taught.
B. I need MORE GUIDANCE to be GOOD at using these skills.
C. I need MORE EXPERIENCE to be GOOD at using these skills.
D. I am FULLY COMPETENT in using these skills.
E. I am CAPABLE at an EXPERT LEVEL in using these skills.

This version nicely balances precision with word count.

 

Question Example — Dealing with the Sticky Problem of “Motivation”

In the book, I advocate a fairly straightforward question asking learners about their motivation to apply what they’ve learned. In many organizations — in many organizational cultures — this will work fine. However, in others, our trainers may be put off by this. They’ll say, “Hey, I can’t control people’s motivations.” They’re right, of course. They can’t control learners’ motivations, but they can influence them. Still, it’s critical to realize that motivation is a multidimensional concept. When we speak of motivation, we could be talking simply about a tendency to take action. We could be talking about how inspired learners are, or how much they believe in the value of the concepts, or how much self-efficacy they might have. It’s okay to ask about motivation in general, but you might generate clearer data if you ask about one of the sub-factors that make up motivation.

Here is a question I developed recently for my Smile-Sheet Workshop:

How motivated are you to IMPLEMENT PERFORMANCE-FOCUSED SMILE SHEETS in your organization?

CIRCLE ONLY ONE ANSWER. ONLY ONE!

A. I’m NOT INTERESTED IN WORKING TOWARD IMPLEMENTING this.

B. I will confer with my colleagues to SEE IF THERE IS INTEREST.

C. I WILL ADVOCATE FOR performance-focused smile sheet questions.

D. I WILL VIGOROUSLY CHAMPION performance-focused smile sheet questions.

E. Because I HAVE AUTHORITY, I WILL MAKE THIS HAPPEN.

More Thoughts?

In this question, I’m focusing on people’s predilection to act. Here I’ve circumnavigated any issues with asking learners to divulge their internal motivational state, and instead I’ve focused the question on the likelihood that they will use their newly learned knowledge in developing, deploying, and championing performance-focused smile sheets.

 

Final Word

It’s been humbling to work on smile-sheet improvements over many years. My earlier mistakes are still visible in the digital files on my hard drive. I take solace in making incremental improvements — and in knowing that the old way of creating smile-sheet questions is simply no good at all, as it provides us with perversely irrelevant information.

As an industry — and the learning industry is critically important to the world — we really need to work on our learning evaluations. Smile sheets are just one tool in this. Unfortunately, poorly constructed smile sheets have become our go-to tool, and they have led us astray for decades.

I hope you find value in my book (SmileSheets.com). More importantly, I hope you’ll participate along with some of the world’s best-run companies and organizations in developing improved smile-sheet questions. Again, please email me with your questions, your question improvements, and even examples of poorly crafted questions.

The original post appeared in 2011. I’ve updated it here.

Updated Article

When companies think of evaluation, they often first think of benchmarking their performance against other companies. There are important reasons to be skeptical of this type of approach, especially as a sole source of direction.

I often add this warning to my workshops on how to create more effective smile sheets: Watch out! There are vendors in the learning field who will attempt to convince you that you need to benchmark your smile sheets against your industry. You will spend (waste) a lot of money with these extra benchmarking efforts!

Two forms of benchmarking are common: (1) idea generation and (2) comparison. Idea generation involves looking at other companies’ methodologies and then assessing whether particular methods would work well at our company. This is a reasonable procedure only to the extent that we can tell whether the other companies face situations similar to ours and whether the methodologies have really been successful at those other companies.

Comparison benchmarking for training and development looks further at a multitude of learning methods and results and specifically attempts to find a wide range of other companies to benchmark against. This approach requires stringent attempts to create valid comparisons. This type of benchmarking is valuable only to the extent that we can determine whether we are comparing our results to good companies or bad and whether the comparison metrics are important in the first place.

Both types of benchmarking require exhaustive efforts and suffer from validity problems. It is just too easy to latch on to other companies’ phantom results (i.e., results that seem impressive but evaporate upon close examination). Picking the right metrics is difficult (i.e., a business can be judged on its stock price, its revenues, profits, market share, etc.). Comparing companies between industries presents the proverbial apples-to-oranges problem. It’s not always clear why one business is better than another (e.g., it is hard to know what really drives Apple Computer’s current success: its brand image, its products, its positioning versus its competitors, its leaders, its financial savvy, its customer service, its manufacturing, its project management, its sourcing, its hiring, or something else). Finally, and most pertinent here, it is extremely difficult to determine which companies are really using best practices (e.g., see Phil Rosenzweig’s highly regarded book The Halo Effect), because companies’ overall results usually cloud and obscure the on-the-job realities of what’s happening.

The difficulty of assessing best practices in general pales in comparison to the difficulty of assessing a company’s training-and-development practices. The problem is that there just aren’t universally accepted, comparable metrics for training and development. Where baseball teams have wins and losses, runs scored, and such, and businesses have revenues and profits and the like, training and development efforts produce fuzzier numbers, and certainly ones that aren’t comparable from company to company. Reviews of the research literature on training evaluation have found very low levels of correlation (usually below .20) between different types of learning assessments (e.g., Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997; Sitzmann, Brown, Casper, Ely, & Zimmerman, 2008).

Of course, we shouldn’t dismiss all benchmarking efforts. Rigorous benchmarking efforts that are understood with a clear perspective can have value. Idea-generation brainstorming is probably more viable than a focus on comparison. By looking to other companies’ practices, we can gain insights and consider new ideas. Of course, we will want to be careful not to move toward the mediocre average instead of looking to excel.

The bottom line on benchmarking from other companies is: be careful, be willing to spend lots of time and money, and don’t rely on cross-company comparisons as your only indicator.

Finally, any results generated by brainstorming with other companies should be carefully considered and pilot-tested before too much investment is made.

 

Smile Sheet Issues

Both of the meta-analyses cited above found that smile-sheet ratings correlated with learning results at about r = 0.09, which is virtually no correlation at all. I have detailed smile-sheet design problems in my book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form. In short, most smile sheets focus on learner satisfaction and fail to focus on factors related to actual learning effectiveness. Most utilize Likert-like or numeric scales that offer learners very little granularity between answer choices, opening responding up to bias, fatigue, and disinterest. Finally, most learners have fundamental misunderstandings about their own learning (Brown, Roediger & McDaniel, 2014; Kirschner & van Merriënboer, 2013), so asking general questions about their perceptions is too often a dubious undertaking.

The bottom line is that traditional smile sheets are providing almost everyone with meaningless data in terms of learning effectiveness. When we benchmark our smile sheets against other companies’ smile sheets, we compound our problems.

 

Wisdom from Earlier Comments

Ryan Watkins, researcher and industry guru, wrote:

I would add to this argument that other companies are no more static than our own — thus if we implement in September 2011 what they are doing in March 2011 from our benchmarking study, then we are still behind the competition. They are continually changing and benchmarking will rarely help you get ahead. Just think of all the companies that tried to benchmark the iPod, only to later learn that Apple had moved on to the iPhone while the others were trying to “benchmark” what they were doing with the iPod. The competition may have made some money, but Apple continues to win the major market share.

Mike Kunkle, sales training and performance expert, wrote:

Having used benchmarking (carefully and prudently) with good success, I can’t agree with avoiding it, as your title suggests, but do agree with the majority of your cautions and your perspectives later in the post.

Nuance and context matter greatly, as do picking the right metrics to compare, and culture, which is harder to assess. 70/20/10 performance management somehow worked at GE under Welch’s leadership. I’ve seen it fail miserably at other companies and wouldn’t recommend it as a general approach to good people or performance management.

In the sales performance arena, at least, benchmarking against similar companies or competitors does provide real benefit, especially in decision-making about which solutions might yield the best improvement. Comparing your metrics to world-class competitors, and calculating what it would mean to you to move in that direction, allows for focus and prioritization in a sea of choices.

It becomes even more interesting when you can benchmark internally, though. I’ve always loved this series of examples by Sales Benchmark Index:
http://www.salesbenchmarkindex.com/Portals/23541/docs/why-should-a-sales-professional-care-about-sales-benchmarking.pdf

 

Citations

Alliger, G. M., Tannenbaum, S. I., Bennett, W., Jr., Traver, H., & Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.

Brown, P. C., Roediger, H. L., III, & McDaniel, M. A. (2014). Make It Stick: The Science of Successful Learning. Cambridge, MA: Belknap Press of Harvard University Press.

Kirschner, P. A., & van Merriënboer, J. J. G. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48(3), 169–183.

Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.

OMG! The best deal ever for a full-day workshop on how to radically improve your smile-sheet designs! Sponsored by the Hampton Roads Chapter of ISPI. Free book and subscription-learning thread too!

 

Friday, June 10, 2016

Reed Integration

7007 Harbour View Blvd #117

Suffolk, VA

 

Click here to register now…

 

Performance Objectives:

By completing this workshop and the after-course subscription-learning thread, you will know how to:

  1. Avoid the three most troublesome biases in measuring learning.

  2. Persuade your stakeholders to improve your organization’s smile sheets.

  3. Create more effective smile sheet questions.

  4. Create evaluation standards for each question to avoid bias.

  5. Envision learning measurement as a bulwark for improved learning design.

 

Recommended Audience:

The content of this workshop will be suitable for those who have at least some background and experience in the training field. It will be especially valuable to those who are responsible for learning evaluation or who manage the learning function.

 

Format:

This is a full-day workshop. Participants are encouraged to bring laptops if they prefer to use a computer to write their questions.  

 

Bonus Take-Away:

Each participant will receive a copy of Dr. Thalheimer's book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form.

Wow! What a week! I published my first book on Tuesday and have been hearing from well wishers ever since.

Here are some related links you might find interesting:

And here are some random, but maybe related, visuals.

  • In the rush of purchases, Amazon briefly ran out of my book, telling folks they'd have to wait 1-3 weeks — their signal that they have no stock. Later, they found some copies…but the genie was out of the bottle.

Amazon showing sold out

 

 

  • The animal kingdom seems to be behind the book:

Bozarth's Corgi

Olah's Cat

 

 

  • Even one of the United States' presidential candidates has spoken up:

Bernie2

 

What a week!

 

Thank you all!!

 

= Will Thalheimer

 

Wow!!

I almost can't believe it. After 17 years of research and writing, I'm finally a published author.

Today is the day!

It's kind of funny really.

When I began this journey back in 1997 I had a well-paying job running a leadership-development product line, building multimedia simulations, and managing and working with a bunch of great folks.

As I looked around the training-and-development field — that's what we called it back then — I saw that we jumped from one fad to another and held on sanctimoniously to learning methods that didn't work that well. I concluded that what was needed was someone to play a role in bridging the gap between the research side and the practice side.

I had a very naive idea about how I might help. I thought the field needed a book that would specify the fundamental learning factors that should be baked into every learning design. I thought I could write such a book in two or three years, that I'd get it published, that consulting gigs would roll in, that I'd make good money, that I'd make a difference.

Hah! The blind optimism of youth and entrepreneurship!

I've now written over 700 pages on THAT book…without an end in sight.

 

How The Smile-Sheet Book Got its Start

Back in 2007, as I was mucking around in the learning research, I began to see biases in how we were measuring learning. I noticed, for instance, that we always measured at the top of the learning curve, before the forgetting curve had even begun. We measured with trivial multiple-choice questions on definitions and terminology — when these clearly had very little relevance for on-the-job performance. I wrote a research-to-practice report on these learning measurement biases and suddenly I was getting invited to give keynotes…

In my BIG book, I wrote hundreds of paragraphs on learning measurement. I talked about our learning-measurement blind spots to clients, at conferences, and on my blog.

While feedback is the lifeblood of improvement, we as learning professionals were getting very little of it. We were practicing in the dark.

I'd also come to ruminate on the meta-analytic research findings that showed that traditional smile sheets were virtually uncorrelated with learning results. If smile sheets were feeding us bad information, maybe we should just stop using them.

It was about three or four years ago that I saw a big client get terrible advice about their smile sheets from a well-known learning-measurement vendor. And, of course, because the vendor had an industry-wide reputation, the client almost couldn't help buying into their poor smile-sheet designs.

I concluded that smile sheets were NOT going away. They were too entrenched, and there were some good reasons to use them.

I also concluded that smile sheets could be designed to be more effective: more aligned with the research on learning and better able to support learners in making smile-sheet decisions.

I decided to write a shorter book than the aforementioned BIG book. That was about 2.5 years ago.

I wrote a draft of the book and I knew I had something. I got feedback from learning-measurement luminaries like Rob Brinkerhoff, Jack Phillips, and Bill Coscarelli. I got feedback from learning gurus Julie Dirksen, Clark Quinn, and Adam Neaman. I made major improvements based on the feedback from these wonderful folks. The book then went through several rounds of top-tier editing, making it a much better read.

As the publication process unfolded, I realized that I didn't have enough money on hand to fund the printing of the book. I turned to Kickstarter, and 227 people raised their hands to help, reserving over 300 books in return for their generous contributions. I will be forever indebted to them.

Others reached out to help as well, from people on my newsletter list, to my beloved clients, to folks in trade organizations and publications, to people I've met through the years, to people I haven't met, to followers on Twitter, to the industry luminaries who agreed to write testimonials after getting advanced drafts of the book, to family members, to friends.

Today, all the hard work, all the research, all the client work, all the love and support comes together for me in gratitude.

Thank you!

 

= Will Thalheimer

 

P.S. To learn more about the book, or buy it:  SmileSheets.com