
21st December 2013

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, announces the winner of the 2013 Neon Elephant Award, given this year to Gary Klein for his many years doing research and practice in naturalistic decision making, cognitive task analysis, and insight learning–and for reminding us that real-world explorations of human behavior are essential in enabling us to distill key insights.

Click here to learn more about the Neon Elephant Award…

2013 Award Winner – Gary Klein

Gary Klein is a research psychologist who specializes in how people make decisions and gain insights in real-world situations. His research on how firefighters made decisions in their work showed that laboratory models of decision making were not fully accurate. His work in developing cognitive task analysis and his co-authorship of the book Working Minds have provided the training-and-development field with a seminal guide. His recent work on how people develop real-world insights is reorienting the field of creativity research and practice. In 1969 Klein received his Ph.D. in Experimental Psychology from the University of Pittsburgh. In the 1990s he founded his own R&D company, Klein Associates, which he sold in 2005. He was one of the leaders in redesigning the White House’s Situation Room. He continues to be a leading research-to-practice professional.

Klein is honored this year for his lifetime of work straddling the research and practice sides of the learning-and-performance field. By doing great research and great practice, and by using each to augment the other, he has advanced both science and practice to new levels.

One of Klein’s most important contributions is the insight that real-world human behavior cannot always be distilled from laboratory experiments. He has brought this wisdom to firefighting, military decision-making, and most recently to everyday insight-creation.

While we in the learning-and-performance field may see Klein’s work on cognitive task analysis as his most important contribution to our field, his focus on naturalistic decision-making and insight-development is also likely to prove seminal as we look to discover ways to support employees in on-the-job learning–what some have called informal learning.

For deeply exploring naturalistic decision-making and real-world insight, the workplace learning-and-performance field owes a debt of gratitude to Gary Klein.

 

Some Key Publications:

  • Klein, G., & Jarosz, A. (2011). A naturalistic study of insight. Journal of Cognitive Engineering and Decision Making, 5, 335-351.
  • Klein, G. (2008). Naturalistic decision making. Human Factors, 50(3), 456-460.
  • Klein, G. A. (1993). A recognition-primed decision (RPD) model of rapid decision making. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 138–147). Norwood, NJ: Ablex.
  • Klein, G., & Hoffman, R. (2008). Macrocognition, mental models, and cognitive task analysis methodology. In J. M. Schraagen, L. Militello, T. Ormerod, & R. Lipshitz (Eds.), Naturalistic decision making and macrocognition (pp. 3-25). Hampshire, UK: Ashgate.
  • Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64, 515-526.

 


A subscription-learning program wins Apple's App of the Year contest!! Read more.

I'm telling you. The time for subscription learning is now!!! Indeed, if you attended my keynote address on subscription learning earlier this year, you learned all about Duolingo and other subscription-learning applications. Stay tuned for more. For example, I've been invited to give a Featured Session at the eLearning Guild's Learning Solutions conference coming up in March 2014.

My daughter and I use Duolingo to learn Spanish. It is super impressive. It nicely blends many of the most important learning factors (as outlined in the Decisive Dozen)–for example, retrieval practice, repetition, spacing, context-alignment, and feedback–into a package that also includes what I might call gentle gamification and social-learning strategies. When my daughter's friend Emma passed me one day, Duolingo notified me. And let me tell you how motivating that was! I was NOT to be OUTDONE by a fifth grader, so I immediately got online and learned some more Spanish…

Anybody interested in elearning should check out Duolingo now…

 

 

More and more training departments are considering the use of the Net Promoter Score as a question–or the central question–on their smile sheets.

This is one of the stupidest ideas yet for smile sheets, but I understand the impetus–traditional smile sheets provide poor information. In this blog post I am going to try to put a finely honed dagger through the heart of this idea.

Note that I have written a replacement question for the Net Promoter Score (for getting learner responses).

What is the Net Promoter Score?

Here’s what the folks who wrote the book on the Net Promoter Score say it is:

The Net Promoter Score, or NPS®, is based on the fundamental perspective that every company’s customers can be divided into three categories: Promoters, Passives, and Detractors.

By asking one simple question — How likely is it that you would recommend [your company] to a friend or colleague? — you can track these groups and get a clear measure of your company’s performance through your customers’ eyes. Customers respond on a 0-to-10 point rating scale and are categorized as follows:

  • Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, fueling growth.
  • Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
  • Detractors (score 0-6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.

To calculate your company’s NPS, take the percentage of customers who are Promoters and subtract the percentage who are Detractors.
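The scoring rule above is simple arithmetic, and a few lines of Python make it concrete (a minimal illustration of the calculation as described, not code from any NPS tool):

```python
def net_promoter_score(ratings):
    """Compute NPS from a list of 0-to-10 integer ratings.

    Promoters rate 9-10, Detractors rate 0-6; Passives (7-8) are
    counted in the total but cancel out of the numerator.
    Returns %Promoters minus %Detractors, so the range is -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
scores = [10, 9, 9, 10, 9, 8, 7, 7, 4, 6]
print(net_promoter_score(scores))  # 50% - 20% = 30.0
```

Note that because Passives drop out of the numerator, very different response distributions can yield the same score–one reason the metric compresses a lot of information into a single number.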

So, the NPS is about Customer Perceptions, Right?

Yes, its intended purpose is to measure customer loyalty. It was designed as a marketing tool. It was specifically NOT designed to measure training outcomes. Therefore, we might want to be skeptical before using it.

It kind of makes sense for marketing, right? Marketing is all about customer perceptions of a given product, brand, or company. Also, there is evidence–yes, actual evidence–that customers are influenced by others in their purchasing decisions. So again, asking whether someone might recommend a company or product to another person seems like a reasonable thing to ask.

Of course, just because something seems reasonable doesn’t mean it is. Even for its intended purpose, the Net Promoter Score has a substantial number of critics. See Wikipedia for details.

But Why Not for Training?

To measure training with a Net-Promoter approach, we would ask a question like, “How likely is it that you would recommend this training course to a friend or colleague?” 

Some reasonable arguments for why the NPS is stupid as a training metric:

  1. First, we should ask: what is the causal pathway that would explain how the Net Promoter Score is a good measure of training effectiveness? We shouldn’t willy-nilly take a construct from another field and apply it to our field without having some “theory-of-causality” that supports its likely effectiveness. Specifically, we should ask whether it is reasonable to assume that a learner’s recommendation of a training program tells us SOMETHING important about the effectiveness of that training program. And, for those using the NPS as the central measure of training effectiveness–which sends shivers down my spine–the question then becomes: is it reasonable to assume that a learner’s recommendation tells us EVERYTHING important about the effectiveness of that training program? Those who would use the Net Promoter Score for training must hold one of the following beliefs:
    • Learners know whether or not training has been effective.
    • Learners know whether their friends/colleagues are likely to have the same beliefs about the effectiveness of training as they themselves have.

    The second belief is not worth much, but it is probably what really happens. It is the first belief that is critical, so we should examine that belief in more depth. Are learners likely to be good judges of training effectiveness?

  2. Scientific evidence demonstrates that learners are not very good at judging their own learning. They have been shown to have many difficulties adequately judging how much they know and how much they’ll be able to remember. For example, learners fail to utilize retrieval practice to support long-term remembering, even though we know this is one of the most powerful learning methods (e.g., Karpicke, Butler, & Roediger, 2009). Learners don’t always overcome their incorrect prior knowledge when reading (Kendeou & van den Broek, 2005). Learners often fail to utilize examples in ways that would foster deeper learning (Renkl, 1997). These are just a few examples of many.
  3. Similarly, two meta-analyses on the potency of traditional smile sheets, which tend to measure the same kind of beliefs as NPS measures, have shown almost no correlation between learner responses and actual learning results (Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997; Sitzmann, Brown, Casper, Ely, & Zimmerman, 2008).
  4. Similarly, when we assess learning in the training context at the end of learning, several cognitive biases creep in to make learners perform much better than they would perform if they were in a more realistic situation back on the job at a later time (Thalheimer, 2007).
  5. Even if we did somehow prove that NPS was a good measure for training, is there evidence that it is the best measure? Obviously not!
  6. Should it be used as the most important measure? No! As stated in the Science of Training review article from last year: “The researchers [in talking about learning measurement] noted that researchers, authors, and practitioners are increasingly cognizant of the need to adopt a multidimensional perspective on learning [when designing learning measurement approaches]” (Salas, Tannenbaum, Kraiger, & Smith-Jentsch, 2012).
  7. Finally, we might ask: are there better types of questions to ask on our smile sheets? The answer is an emphatic YES! Performance-Focused Smile Sheets provide a whole new approach to smile-sheet questions. You can learn more by attending my workshop on how to create and deploy these more powerful questions.

The Bottom Line

The Net Promoter Score was designed to measure customer loyalty and is not relevant for training. Indeed, it is likely to give us dangerously misguided information.

When we design courses solely so that learners like the courses, we create learning that doesn’t stick, that fails to create long-term remembering, that fails to push for on-the-job application, etc.

Seriously, this is one of the stupidest ideas to come along for learning measurement in a long time. Buyers beware!! Please!