Posts

Reflections This Morning On Brushing My Teeth


I use a toothbrush whose design, research suggests, maximizes the benefits of brushing. It spins, and spinning works better than oscillating. It also has a timer that tells me when I’ve brushed for two minutes. Ever since a hockey stick broke up my mouth when I was twenty, I’ve been sensitive about the health of my teeth.

But what the heck does this have to do with learning and development? Well, let’s see.

Maybe my toothbrush is a performance-support exemplar. Maybe no training is needed. I didn’t read any instructions. I just used it. The design is intuitive. There’s an obvious button that turns it on, an obvious place to put toothpaste (on the bristles), and it’s obvious that the bristles should be placed against the teeth. So, the tool itself seems like it needs no training.

But I’m not so sure. Let’s do a thought experiment. If I give a spinning toothbrush to a person who’s never brushed their teeth, would they use it correctly? Would they use it at all? Doubtful!

What is needed to encourage or enable good tooth-brushing?

  • People probably need something to compel them to brush, perhaps knowledge that brushing prevents dental calamities like tooth decay, gum disease, bad breath—and may even prevent cognitive decline as in Alzheimer’s. Training may help motivate action.
  • People will probably be more likely to brush if they know other people are brushing. Tons of behavioral economics studies have shown that people are very attuned to social comparisons. Again, training may help motivate action. Interestingly, people may be more likely to brush with a spinning toothbrush if others around them are also brushing with spinning toothbrushes. Training coworkers (or in this case other family members) may also help motivate action.
  • People will probably brush more effectively if they know to brush all their teeth, and to brush near their gums as well—not just the biting surfaces of their teeth. Training may provide this critical knowledge.
  • People will probably brush more effectively if they are set up—probably if they set themselves up—to be triggered by environmental cues. For example, tooth-brushing is often most effectively triggered when people brush right after breakfast and right before they go to bed. Training people to set up situation-action triggering may increase later follow through.
  • People will probably brush more effectively if they know that they should brush for two minutes or so rather than just brushing quickly. Training may provide this critical knowledge. Note, of course, that the toothbrush’s two-minute timer may act to support this behavior. Training and performance support can work together to enable effective behavior.
  • People will be more likely to use an effective toothbrush if the cost of the toothbrush is reasonable given the benefits. The costs of people’s tools will affect their use.
  • People will be more likely to use a toothbrush if the design is intuitive and easy to use. The design of tools will affect their use.

I’m probably missing some things in the list above, but it should suffice to show the complex interplay between our workplace tools/practices/solutions and training and prompting mechanisms (i.e., performance support and the like).

But what insights, or dare we say wisdom, can we glean from these reflections? How about these for starters:

  • We could provide excellent training, but if our tools/practices/solutions are poorly designed they won’t get used.
  • We could provide excellent training, but if our tools/practices/solutions are too expensive they won’t get used.
  • Let’s not forget the importance of prior knowledge. Most of us know the basics of tooth brushing. It would waste time, and be boring, to repeat that in a training. The key is to know, to really know, not just guess, what our learners know—and compare that to what they really need to know.
  • Even when we seem to have a perfectly intuitive, well-designed tool/practice/solution let’s not assume that no training is needed. There might be knowledge or motivational gaps that need to be bridged (yes, the pun was intended! SMILE). There might be situation-action triggering sets that can be set up. There might be reminders that would be useful to maintain motivation and compel correct technique.
  • Learning should not be separated from the design of tools/practices/solutions. We can support better designs by reminding the designers and developers of these objects/procedures that training can’t fix a bad design. Better yet, we can work hand in hand with those designers, prototyping the tool/training bundle so that the most pertinent feedback arrives during the design process itself.
  • Training isn’t just about knowledge, it’s also about motivation.
  • Motivation isn’t just the responsibility of training. Motivation is an affordance of the tools/practices/solutions themselves; it is born of the social environment; and it is subject to organizational influence, particularly through managers and peers.
  • Training shouldn’t be thought of as a one-time event. Reminders may be valuable as well, particularly around the motivational aspects (for simple tasks), and to support remembering (for tasks that are easily forgotten or misunderstood).

One final note. We might also train people to use the time when they are engaged in automated tasks—tooth-brushing for example—to reflect on important aspects of their lives, gaining from the learning that might occur or the thoughts that may enable future learning. And adding a little fun into mundane tasks. Smile for the tiny nooks and crannies of our lives that may illuminate our thinking!

 

Dealing with Emotional Readiness — What Should We be Doing?


I included this piece in my newsletter this morning (which you can sign up for here) and it seemed to really resonate with people, so I’m including it here.

I’ve always had a high tolerance for pain, but breaking my collarbone at the end of February really sent me crashing down a mountain. Lying in bed, I got thinking about the emotional side of workplace performance. I don’t have brilliant insights here, just maybe some thoughts that will get you thinking.

Skiing with my family in Vermont, it had been a very good week. My wife and I, skiing together on our next-to-last day on the mountain, went to look for the kids who told us they’d be skiing in the terrain park (where the jumps are). My wife skied down first, then I went. There was a little jump, about a foot high, of the kind I’d jumped many times. But this time would be different.

As I sailed over the jump — slowly because I’m wary of going too fast and flying too far — I looked down and saw, NOT snow, but stairs. WTF? Every other time I took a small jump there was snow on the other side. Not metal stairs. Not dry metal stairs. In mid-air my thought was, “okay, just stay calm, you’ll ski over the stairs back to snow.” Alas, what happened was that I came crashing down on my left shoulder, collarbone splintering into five or six pieces, and lay 20 feet down the hill. I knew right away that things were bad. I knew that my life would be upended for weeks or months. I knew that miserable times lay ahead.

I got up quickly. I was in shock and knew it. I looked up the mountain back at the jump. Freakin’ stairs!! What the hell were they doing there? I was rip-roaring mad! One of my skis was still on the stairs. The dry surface must have grabbed it, preventing me from skiing further down the slope. I retrieved my ski. A few people skied by me. My wife was long gone down the mountain. I was in shock and I was mad as hell and I couldn’t think straight, but I knew I shouldn’t sit down so I just stood there for five or ten minutes in a daze. Finally someone asked if I was okay, and I yelled crazy loud for the whole damn mountain to hear, “NO!” He was nice, said he’d contact the ski patrol.

I’ll spare you the details of the long road to recovery — a recovery not yet complete — but the notable events are that I had badly broken my collarbone, badly sprained my right thumb and mildly sprained my left thumb, couldn’t button my shirts or pants for a while, had to lie in bed in one position or the pain would be too great, watched a ton of Netflix (I highly recommend Seven Seconds!), couldn’t do my work, couldn’t help around the house, got surgery on my collarbone, got pneumonia, went to physical therapy, etc… Enough!

Feeling completely useless, I couldn’t help reflecting on the emotional side of learning, development, and workplace performance in general. In L&D, we tend to assume we are helping people who are able to learn and take action — but maybe not all the people we touch are emotionally present and able. Some are certainly dealing with family crises, personal insecurities, previous job setbacks, and the like. Are we doing enough for them?

I’m not a person prone to depression, but I was clearly down for the count. My ability to do meaningful work was nil. At first it was the pain and the opiates. Later it was the knowledge that I just couldn’t get much work done, that I was unable to keep up with promises I’d made, that I was falling behind. I knew, intellectually, that I just had to wait it out — and this was a great comfort. But still, my inability to think and to work reminded me that as a learning professional I ought to be more empathetic with learners who may be suffering as well.

Usually, dealing with emotional issues of an employee falls to the employee and his or her manager. I used to be a leadership trainer and I don’t remember preparing my learners for how to deal with direct reports who might be emotionally unready to fully engage with work. Fortunately today we are willing to talk about individual differences, but I think we might be forgetting the roller-coaster ride of being human, that we may differ in our emotional readiness on any given day. Managers/supervisors rightly are the best resource for dealing with such issues, but we in L&D might have a role to play as well.

I don’t have answers here. I wish I did. Probably it begins with empathy. We can also help more when we know our learners better — and when we can look them in the eyes. This is tricky business though. We’re not qualified to be therapists, and simple solutions like being nice and kind and keeping things positive are not always the answer. We know from the research that challenging people with realistic decision-making challenges is very beneficial. Giving honest feedback on poor performance is beneficial.

We should probably avoid scolding, punishment, and reprimands. Competition has been shown to be harmful in at least some learning situations. Leaderboards may make emotional issues worse, and the limited research generally suggests they aren’t very useful anyway. But these negative actions are rarely invoked, so we have to look deeper.

I wish I had more wisdom about this. I wish there was research-based evidence I could draw on. I wish I could say more than just be human, empathetic, understanding.

Now that I’m aware of this, I’m going to keep my eyes and ears open to learning more about how we as learning professionals can design learning interventions to be more sensitive to the ups and downs of our fellow travelers.

If you’ve got good ideas, please send them my way or use the LinkedIn Post generated from this to join the discussion.

Preparing for Attending a Learning Conference in 2018 and Beyond


Conferences can be beautiful things—helping us learn, building relationships that help us grow and bring us joy, prompting us to see patterns in our industry we might miss otherwise, helping us set our agenda for what we need to learn more fully.

 

Conferences can be ugly things—teaching us myths, reinforcing our misconceptions, connecting us to people who steer us toward misinformation, echo chambers of bad thinking, a vendor-infested shark tank that can lead us to buy stuff that’s not that helpful or is actually harmful, pushing us to set our learning agenda on topics that distract us from what’s really important.

Given this dual reality, your job as a conference attendee is to be smart and skeptical, and work to validate your learning. In the Training Maximizers model, the first goal is ensuring our learning interventions are built from a base of “valid, credible content.” In conferences, where we curate our own learning, we have to be sure we are imbibing the good stuff and avoiding the poison. Here, I’ll highlight a few things to keep in mind as you attend a conference. I’ll aim to make this especially relevant for this year, 2018, when you are likely to encounter certain memes and themes.

Drinking the Good Stuff

  • Look for speakers who have a background doing two things, (1) studying the scientific research (not opinion research), and (2) working with real-world learning professionals in implementing research-based practices.
  • If speakers make statements without evidence, ask for the evidence or the research—or be highly skeptical.
  • If things seem almost too good to be true, warn yourself that learning is complicated and there are no magic solutions.
  • Be careful not to get sucked into group-think. Just because others seem to like something, doesn’t necessarily make it good. Think for yourself.
  • Remember that correlation does not mean causation. Just because some factors seem to move in the same direction doesn’t mean that one caused the other. It could be the other way around. Or some third factor may have caused both to move in the same direction.

Prepare Yourself for This Year’s Shiny Objects

  • Learning Styles — Learning styles is a bogus notion, but it keeps coming up every year. Don’t buy into it. Learn about it first. The Debunker.Club has a nice post on why we should avoid learning styles. Read it. And don’t let people tell you that learning styles are bad but learning preferences are good. They’re pulling the wool over your eyes.
  • Dale’s Cone with Percentages — People do NOT remember 10% of what they read, 20% of what they hear, 30% of what they see (or anything similar). Here’s the Internet’s #1 URL debunking this silly myth.
  • Neuroscience and Learning — It’s a very hot topic with vendors touting neuroscience to entice you to be impressed. But neuroscience at this time has nothing to say about learning.
  • Microlearning — Because it’s a hot topic, vendors and consultants are yapping about microlearning endlessly. But microlearning is not one thing. It’s many things. Here’s the definitive definition of microlearning, if I do say so myself.
  • AI, Machine Learning, and Big Data — Sexy stuff certainly, but it’s not clear whether these things can be applied to learning, or whether they can be applied now (given the state of our knowledge). Beware of taking these claims too seriously. Be open, but skeptical.
  • Gamification — We are almost over this fad, thankfully. Still, keep in mind that gamification, like microlearning, comprises multiple learning methods. Gamification is NOT a thing.
  • Personalization — Personalization is a great idea, if carried out properly. Be careful: what someone calls personalization may be just another way of saying learning styles. Also, don’t buy into the idea that personalization is new. It’s quite old. See Skinner and Keller back in the mid-1900s.
  • Learning Analytics — There is a lot of movement in learning evaluation, but much of it is a wrong-headed focus on pretty dashboards and on business impact alone. Look for folks who are talking about how to get better feedback to make learning better. I’ll tout my own effort to develop a new approach to gathering learner feedback. But beware, and do NOT just do smile sheets (said by the guy who wrote a book on smile sheets)! Beware of vendors telling you to focus only on measuring behavior and business results. Read why here.
  • Kirkpatrick-Katzell Four-Level Model of Evaluation — A constant in the workplace learning field for the past 60 years. But even with recent changes it still has too many problems to be worthwhile. See the new Learning-Transfer Evaluation Model (LTEM), a worthy replacement.

Wow! So much to be worried about.

Well, sorry to say, I’m surely missing some stuff. It’s up to you to be smart and skeptical while staying open to new ideas.

You might consider joining the Debunker Club, folks who have agreed on the importance of debunking myths in the learning field.

The Learning-Transfer Evaluation Model (LTEM)

NOTICE OF UPDATE (17 May 2018):

The LTEM Model and accompanying Report were updated today and can be found below.

Two major changes were included:

  • The model has been inverted to put the better evaluation methods at the top instead of at the bottom.
  • The model now uses the word “Tier” to refer to the different levels within the model—to distinguish these from the levels of the Kirkpatrick-Katzell model.

This will be the last update to LTEM for the foreseeable future.

 

This blog post introduces a new learning-evaluation model, the Learning-Transfer Evaluation Model (LTEM).

 

Why We Need a New Evaluation Model

It is well past time for a new learning-evaluation model for the workplace learning field. The Kirkpatrick-Katzell Model is over 60 years old. It was born in a time before computers, before cognitive psychology revolutionized the learning field, before the training field was transformed from one that focused on the classroom learning experience to one focused on work performance.

The Kirkpatrick-Katzell model—created by Raymond Katzell and popularized by Donald Kirkpatrick—is the dominant standard in our field. It has also done a tremendous amount of harm, pushing us to rely on inadequate evaluation practices and poor learning designs.

I am not the only critic of the Kirkpatrick-Katzell model. There are legions of us. If you do a Google search starting with these letters, “Criticisms of the Ki,” Google anticipates the following: “Criticisms of the Kirkpatrick Model” as one of the most popular searches.

Here’s what a seminal research review said about the Kirkpatrick-Katzell model (before the model’s name change):

The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders…

The New Model

For the past year or so I’ve been working to develop a new learning-evaluation model. The current version is the eleventh iteration, improved after reflection, after asking some of the smartest people in our industry to provide feedback, after sharing earlier versions with conference attendees at the 2017 ISPI innovation and design-thinking conference and the 2018 Learning Technologies conference in London.

Special thanks to the following people who provided significant feedback that improved the model and/or the accompanying article:

Julie Dirksen, Clark Quinn, Roy Pollock, Adam Neaman, Yvon Dalat, Emma Weber, Scott Weersing, Mark Jenkins, Ingrid Guerra-Lopez, Rob Brinkerhoff, Trudy Mandeville, Mike Rustici

The model, which I’ve named the Learning-Transfer Evaluation Model (LTEM, pronounced L-tem) is a one page, eight-level model, augmented with color coding and descriptive explanations. In addition to the model itself, I’ve prepared a 34-page report to describe the need for the model, the rationale for its design, and recommendations on how to use it.

You can access the model and the report by clicking on the following links:

 

 

Release Notes

The LTEM model and report were researched, conceived, and written by Dr. Will Thalheimer of Work-Learning Research, Inc., with significant and indispensable input from others. No one sponsored or funded this work. It was a labor of love and is provided as a valentine for the workplace learning field on February 14th, 2018 (Version 11). Version 12 was released on May 17th, 2018 based on feedback from its use. The model and report are copyrighted by Will Thalheimer, but you are free to share them as is, as long as you don’t sell them.

If you would like to contact me (Will Thalheimer), you can do that at this link: https://www.worklearning.com/contact/

If you would like to sign up for my list, you can do that here: https://www.worklearning.com/sign-up/

 

 

The Backfire Effect is NOT Prevalent: Good News for Debunkers, Humans, and Learning Professionals!


An exhaustive new research study reveals that the backfire effect is not as prevalent as previous research once suggested. This is good news for debunkers, those who attempt to correct misconceptions. This may be good news for humanity as well. If we cannot reason from truth, if we cannot reliably correct our misconceptions, we as a species will certainly be diminished—weakened by realities we have not prepared ourselves to overcome. For those of us in the learning field, the removal of the backfire effect as an unbeatable Goliath is good news too. Perhaps we can correct the misconceptions about learning that every day wreak havoc on our learning designs, hurt our learners, push ineffective practices, and cause an untold waste of time and money spent chasing mythological learning memes.

 

 

The Backfire Effect

The backfire effect is a fascinating phenomenon. It occurs when a person is confronted with information that contradicts an incorrect belief they hold. The surprising finding: attempting to persuade someone with truthful information may actually make them believe the untruth even more strongly than if they hadn’t been confronted in the first place.

The term “backfire effect” was coined by Brendan Nyhan and Jason Reifler in a 2010 scientific article on political misperceptions. Their article caused an international sensation, both in the scientific community and in the popular press. At a time when dishonesty in politics seems to be at historically high levels, this is no surprise.

In their article, Nyhan and Reifler concluded:

“The experiments reported in this paper help us understand why factual misperceptions about politics are so persistent. We find that responses to corrections in mock news articles differ significantly according to subjects’ ideological views. As a result, the corrections fail to reduce misperceptions for the most committed participants. Even worse, they actually strengthen misperceptions among ideological subgroups in several cases.”

Subsequently, other researchers found similar backfire effects, and notable researchers working in the area (e.g., Lewandowsky) have expressed the rather fatalistic view that attempts at correcting misinformation were unlikely to work—that believers would not change their minds even in the face of compelling evidence.

 

Debunking the Myths in the Learning Field

As I have communicated many times, there are dozens of dangerously harmful myths in the learning field, including learning styles, neuroscience as fundamental to learning design, and the myth that “people remember 10% of what they read, 20% of what they hear, 30% of what they see…etc.” I even formed a group to confront these myths (The Debunker Club), although, and I must apologize, I have not had the time to devote to enabling our group to be more active.

The “backfire effect” was a direct assault on attempts to debunk myths in the learning field. Why bother if we would make no difference? If believers of untruths would continue to believe? If our actions to persuade would have a boomerang effect, causing false beliefs to be believed even more strongly? It was a leg-breaking, breath-taking finding. I wrote a set of recommendations to debunkers in the learning field on how best to be successful in debunking, but admittedly many of us, me included, were left feeling somewhat paralyzed by the backfire finding.

Ironically perhaps, I was not fully convinced. Indeed, some may think I suffered from my own backfire effect. In reviewing a scientific research review in 2017 on how to debunk, I implored that more research be done so we could learn more about how to debunk successfully, but I also argued that misinformation simply couldn’t be a permanent condition, that there was ample evidence to show that people could change their minds even on issues that they once believed strongly. Racist bigots have become voices for diversity. Homophobes have embraced the rainbow. Religious zealots have become agnostic. Lovers of technology have become anti-technology. Vegans have become paleo meat lovers. Devotees of Coke have switched to Pepsi.

The bottom line is that organizations waste millions of dollars every year when they use faulty information to guide their learning designs. As professionals in the learning field, we have a responsibility to avoid the danger of misinformation! But is this even possible?

 

The Latest Research Findings

There is good news in the latest research! Thomas Wood and Ethan Porter just published an article (2018) that could not find any evidence for a backfire effect. They replicated the Nyhan and Reifler research, expanded tenfold the number of misinformation instances studied, modified the wording of their materials, utilized over 10,000 participants, and varied their methods for obtaining those participants. Again and again, they found no evidence of a backfire effect.

“We find that backfire is stubbornly difficult to induce, and is thus unlikely to be a characteristic of the public’s relationship to factual information. Overwhelmingly, when presented with factual information that corrects politicians—even when the politician is an ally—the average subject accedes to the correction and distances himself from the inaccurate claim.”

There is additional research to show that people can change their minds, that fact-checking can work, that feedback can correct misconceptions. Rich and Zaragoza (2016) found that misinformation can be fixed with corrections. Rich, Van Loon, Dunlosky, and Zaragoza (2017) found that corrective feedback could work, if it was designed to be believed. More directly, Nyhan and Reifler (2016), in work cited by the American Press Institute Accountability Project, found that fact checking can work to debunk misinformation.

 

Some Perspective

First of all, let’s acknowledge that science sometimes works slowly. We don’t yet know all we will know about these persuasion and information-correction effects.

Also, let’s please be careful to note that backfire effects, when they are actually evoked, are typically found where people are ideologically committed to a belief system with which they strongly identify. Backfire effects have been studied most often in situations where someone identifies as a conservative or liberal, when this identity is singularly or strongly important to their sense of self. Are folks in the learning field so strongly invested in a belief system and self-identity that they would easily suffer from the backfire effect? Maybe sometimes, but perhaps less likely than in the arena of political belief, which seems to consume so many of us.

Here are some learning-industry beliefs that may be so deeply held that the light of truth may not penetrate easily:

  • Belief that learners know what is best for their learning.
  • Belief that learning is about conveying information.
  • Belief that we as learning professionals must kowtow to our organizational stakeholders, that we have no grounds to stand by our own principles.
  • Belief that our primary responsibility is to our organizations not our learners.
  • Belief that learner feedback is sufficient in revealing learning effectiveness.

These beliefs seem to undergird other beliefs, and in my work I’ve seen them make it difficult to convey important truths. Let me be clear, though: it is a conjecture on my part that these beliefs have substantial influence. Note also that, given that the research on the “backfire effect” has now been shown to be tenuous, I’m not claiming that challenging such foundational beliefs will cause damage. On the contrary, it seems like it might be worth doing.

 

Knowledge May Be Modifiable, But Attitudes and Belief Systems May Be Harder to Change

The original backfire-effect research suggested that people believed untruths more strongly after being confronted with correct information, but this misses an important distinction. There are facts, and then there are attitudes, belief systems, and policy preferences.

A fascinating thing happened when Wood and Porter looked for—but didn’t find—the backfire effect. They talked with the original researchers, Nyhan and Reifler, and they began working together to solve the mystery. Why did the backfire effect happen sometimes but not regularly?

In a recent episode (January 28, 2018) of the “You Are Not So Smart” podcast, Wood, Porter, and Nyhan were interviewed by David McRaney, and they nicely clarified the distinction between factual backfire and attitudinal backfire.

Nyhan:

“People often focus on changing factual beliefs with the assumption that it will have consequences for the opinions people hold, or the policy preferences that they have, but we know from lots of social science research…that people can change their factual beliefs and it may not have an effect on their opinions at all.”

“The fundamental misconception here is that people use facts to form opinions and in practice that’s not how we tend to do it as human beings. Often we are marshaling facts to defend a particular opinion that we hold and we may be willing to discard a particular factual belief without actually revising the opinion that we’re using it to justify.”

Porter:

“Factual backfire, if it exists, would be especially worrisome, right? I don’t really believe we are going to find it anytime soon… Attitudinal backfire is less worrisome, because in some ways attitudinal backfire is just another description for failed persuasion attempts… that doesn’t mean that it’s impossible to change your attitude. It may very well just mean that what I’ve done to change your attitude has been a failure. It’s not that everyone is immune to persuasion, it’s just that persuasion is really, really hard.”

McRaney (Podcast Host):

“And so the facts suggest that the facts do work, and you absolutely should keep correcting people’s misinformation, because people do update their beliefs and that’s important. But when we try to change people’s minds by only changing their [factual] beliefs, you can expect to end up engaging in belief whack-a-mole, correcting bad beliefs left and right as the person on the other side generates new ones to support, justify, and protect the deeper psychological foundations of the self.”

Nyhan:

“True backfire effects, when people are moving overwhelmingly in the opposite direction, are probably very rare, they are probably on issues where people have very strong fixed beliefs….”

 

Rise Up! Debunk!

Here’s the takeaway for us in the learning field who want to be helpful in moving practice to more effective approaches.

  • While there may be some underlying beliefs that influence thinking in the learning field, they are unlikely to be as strongly held as the political beliefs that researchers have studied.
  • The research seems fairly clear that factual backfire effects are extremely unlikely to occur, so we should not be afraid to debunk factual inaccuracies.
  • Persuasion is difficult but not impossible, so it is worth making attempts to debunk. Such attempts are likely to be more effective if we take a change-management approach, look to the science of persuasion, and persevere respectfully and persistently over time.

Here is the message that one of the researchers, Tom Wood, wants to convey:

“I want to affirm people. Keep going out and trying to provide facts in your daily lives and know that the facts definitely make some difference…”

Here are some methods of persuasion from a recent article by Flynn, Nyhan, and Reifler (2017) that have worked even with people’s strongly-held beliefs:

  • When the persuader is seen to be ideologically sympathetic with those who might be persuaded.
  • When the correct information is presented in a graphical form rather than a textual form.
  • When an alternative causal account of the original belief is offered.
  • When credible or professional fact-checkers are utilized.
  • When multiple “related stories” are also encountered.

The stakes are high! Bad information permeates the learning field and makes our learning interventions less effective, harming our learners and our organizations while wasting untold resources.

We owe it to our organizations, our colleagues, and our fellow citizens to debunk bad information when we encounter it!

Let’s not be assholes about it! Let’s do it with respect, with openness to being wrong, and with all our persuasive wisdom. But let’s do it. It’s really important that we do!

 

Research Cited

Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38(S1), 127–150.

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.

Nyhan, B., & Reifler, J. (2016). Do people actually learn from fact-checking? Evidence from a longitudinal study during the 2014 campaign. Available at: www.dartmouth.edu/~nyhan/fact-checking-effects.pdf

Rich, P. R., Van Loon, M. H., Dunlosky, J., & Zaragoza, M. S. (2017). Belief in corrective feedback for common misconceptions: Implications for knowledge revision. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(3), 492–501.

Rich, P. R., & Zaragoza, M. S. (2016). The continued influence of implied and explicitly stated misinformation in news reports. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(1), 62–74. http://dx.doi.org/10.1037/xlm0000155

Wood, T., & Porter, E. (2018). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior, advance online publication.

 

What Do Senior Business Leaders Want in Terms of Learning Evaluation?


 

Let’s find out by asking them!

And, let’s ask ourselves (workplace learning professionals) what we think senior leaders will tell us.

NOTE: This may take some effort on our part. Please complete the survey yourself and ask senior leaders at your organization (if your organization is 1000 people or more) to complete the survey.

 

The Survey Below is for both Senior Organizational Leaders AND for Workplace Learning Professionals.

We will branch you to a separate set of questions!

Answer the survey questions below, or if you need it, here is a link to the survey.

 



Send me an email if you want to talk more about learning evaluation...

Replacement for the Net Promoter Score—For Learning Assessments

The Net Promoter Score is one of the most popular smile-sheet questions in use. Unfortunately, it is fatally flawed for learning. I’ve written about NPS’s problems before. Essentially, NPS was designed for marketing purposes to get people’s feelings about the products they were using. NPS was NOT designed for learning. Also, the wording and choices of the question are too fuzzy to be meaningful. Finally, and most damning, NPS follows traditional smile sheets in focusing on learner satisfaction and course reputation—even though research has shown that traditional smile sheets are uncorrelated with learning!!

Despite these problems, organizations continue their blind allegiance to NPS.

Oftentimes, we are forced into doing stupid things by our organizational stakeholders, mostly because there seems to be no alternative. Let me provide one.

Can we gauge learner satisfaction in a way that focuses the question toward learning effectiveness and less on entertainment, enjoyment, ease of attendance, etc.? Yes. We. Can!

 

Net Effectiveness Score (NES)

Here’s the question:

If someone asked you about the effectiveness of the learning experience, would you recommend the learning to them? CHOOSE ONE.

  • The learning was TOO INEFFECTIVE to recommend.
  • The learning was INEFFECTIVE ENOUGH THAT I WOULD BE HESITANT to recommend it.
  • The learning was NOT FULLY EFFECTIVE, BUT I would recommend it IF IMPROVEMENTS WERE MADE to the learning.
  • The learning was NOT FULLY EFFECTIVE, BUT I would still recommend it EVEN IF NO CHANGES WERE MADE to the learning.
  • The learning was EFFECTIVE, SO I WOULD RECOMMEND IT.
  • The learning was VERY EFFECTIVE, SO I WOULD HIGHLY RECOMMEND IT.

This question has several benefits over the NPS question.

  1. It focuses on learning.
  2. It prompts learners to think about learning effectiveness.
  3. It has concrete answer choices, not fuzzy numeric ones.
  4. It will create meaningful results.
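Because the NES question uses labeled answer choices rather than a numeric scale, the natural way to report results is the distribution of choices rather than a single averaged number. Here is a minimal sketch in Python; the abbreviated choice labels and the tallying function (`nes_distribution`) are illustrative assumptions, not part of any official NES definition:

```python
from collections import Counter

# The six NES answer choices, ordered from least to most favorable
# (labels abbreviated from the full question wording above).
NES_CHOICES = [
    "too ineffective to recommend",
    "hesitant to recommend",
    "recommend only if improved",
    "recommend even without changes",
    "effective, would recommend",
    "very effective, would highly recommend",
]

def nes_distribution(responses):
    """Return the percentage of respondents selecting each answer choice."""
    counts = Counter(responses)
    total = len(responses)
    return {choice: round(100 * counts[choice] / total, 1) for choice in NES_CHOICES}

# Hypothetical example: 10 responses collected from a pilot course.
responses = (
    ["recommend even without changes"] * 3
    + ["effective, would recommend"] * 5
    + ["very effective, would highly recommend"] * 2
)
print(nes_distribution(responses))
```

Reporting the full distribution preserves the concreteness of the answer choices, which is exactly what the numeric NPS scale throws away.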

By the way, this question should be delivered after other smile-sheet questions that nudge learners to think about learning factors that really matter.

To learn more about performance-focused learner-feedback questions, either get in touch with me or check out my book.

 

Big Data and Learning — A Wild Goose Chase?


Geese are everywhere these days, crapping all over everything. Where we might have nourishment, we get poop on our shoes.

Big data is everywhere these days…

Even flocking into the learning field.

For big-data practitioners to NOT crap up the learning field, they’ll need to find good sources of data (good luck with that!), use intelligence about learning to know what it means (blind arrogance will prevent this, at least at first), and then find where the data is actually useful in practice (will there be patience and practice or just shiny new objects for sale?).

Beware of the wild goose chase! It’s already here.

Seek Research-to-Practice Experts as Your Trusted Advisors


I added these words to the sidebar of my blog, and I like them so much that I’m sharing them as a blog post itself.

Please seek wisdom from research-to-practice experts — the dedicated professionals who spend time in two worlds to bring the learning field insights based on science. These folks are my heroes, given their often quixotic efforts to navigate through an incomprehensible jungle of business and research obstacles.

These research-to-practice professionals should be your heroes as well. Not mythological heroes, not heroes etched into the walls of faraway mountains. These heroes should be sought out as our partners, our fellow travelers in learning, as people we hire as trusted advisors to bring us fresh research-based insights.

The business case is clear. Research-to-practice experts not only enlighten and challenge us with ideas we might not have considered — ideas that make our learning efforts more effective in producing business results — research-to-practice professionals also prevent us from engaging in wasted efforts, saving our organizations time and money, all the while enabling us to focus more productively on learning factors that actually matter.

Getting Better Responses on Your Smile Sheets


One of the most common questions I get when I speak about the Performance-Focused Smile-Sheet approach (see the book’s website at SmileSheets.com) is “What can be done to get higher response rates from my smile sheets?”

Of course, people also refer to smile sheets as evals, level 1s, happy sheets, hot or warm evaluations, response forms, reaction forms, etc. The term covers both paper-and-pencil forms and online surveys. Indeed, as smile sheets go online, more and more people are finding that online surveys get a much lower response rate than in-classroom paper surveys.

Before I give you my list for how to get a higher response rate, let me blow this up a bit. The thing is, while we want high response rates, there’s something much more important than response rates. We also want response relevance and precision. We want the questions to relate to learning effectiveness, not just learning reputation and learner satisfaction. We also want the learners to be able to answer the questions knowledgeably and give our questions their full attention.

If we have bad questions — ones that use Likert-like or numeric scales, for example — it won’t matter that we have high response rates. In this post, I’m NOT going to focus on how to write better questions. Instead, I’m just tackling how we can motivate our learners to give our questions more of their full attention, thus increasing both the precision of their responses and our response rates.

How to get Better Responses and Higher Response Rates

  1. Ask with enthusiasm, while also explaining the benefits.
  2. Have a trusted person make the request (often an instructor who our learners have bonded with).
  3. Mention the coming smile sheet early in the learning (and more than once) so that learners know it is an integral part of the learning, not just an add-on.
  4. While mentioning the smile sheet, let folks know what you’ve learned from previous smile sheets and what you’ve changed based on the feedback.
  5. Tell learners what you’ll do with the data, and how you’ll let them know the results of their feedback.
  6. Highlight the benefits to the instructor, to the instructional designers, and to the organization. Those who ask can mention how they’ve benefited in the past from smile sheet results.
  7. Acknowledge the effort that they — your learners — will be making, maybe even commiserating with them that you know how hard it can be to give their full attention when it’s the end of the day or when they are back to work.
  8. Put the time devoted to the survey in perspective, for example, “We spent 7 hours today in learning, that’s 420 minutes, and now we’re asking you for 10 more minutes.”
  9. Assure your learners that the data will be confidential and that it is aggregated so that an individual’s responses are never shared.
  10. Let your learners know the percentage of people like them who typically complete the survey (caveat: if it’s relatively high).
  11. Use more distinctive answer choices. Avoid Likert-like answer choices and numerical scales — because learners instinctively know they aren’t that useful.
  12. Ask more meaningful questions. Use questions that learners can answer with confidence. Ask questions that focus on meaningful information. Avoid obviously biased questions — as these may alienate your learners.

How to get Better Responses and Higher Response Rates on DELAYED SMILE SHEETS

Sometimes, we’ll want to survey our learners well after a learning event, for example three to five weeks later. Delayed smile sheets are perfectly positioned to find out more about how the learning is relevant to the actual work or to our learners’ post-learning application efforts. Unfortunately, prompting action — that is, getting learners to engage with our delayed smile sheets — can be particularly difficult when asking for this favor well after learning. Still, there are some things we can do — in addition to the list above — that can make a difference.

  1. Tell learners what you learned from the end-of-learning smile sheet they previously completed.
  2. Ask the instructor who bonded with them to send the request (instead of an unknown person from the learning unit).
  3. Send multiple requests, preferably using a mechanism that only sends these requests to those who still need to complete the survey.
  4. Have the course officially end sometime AFTER the delayed smile sheet is completed, even if that is largely just a perception. Note that multiple-event learning experiences lend themselves to this approach, whereas single-event learning experiences do not.
  5. Share with your learners a small portion of the preliminary data from the delayed smile sheet. “Already, 46% of your fellow learners have completed the survey, with some intriguing tentative results. Indeed, it looks like the most relevant topic as rated by your fellow learners is… and the least relevant is…”
  6. Share with them the names or job titles of some of the people who have completed the survey already.
  7. Share with them the percentage of folks from their unit who have responded already, or share a comparison across units.

What about INCENTIVES?

When I ask audiences for their ideas for improving responses and increasing response rates, they often mention some sort of incentive, usually based on some sort of lottery or raffle. “If you complete the survey, your name will be submitted to have a chance to win the latest tech gadget, a book, time off, lunch with an executive, etc.”

I’m a skeptic. I’m open to being wrong, but I’m still skeptical about the cost/benefit calculation. Certainly for some audiences an incentive will increase rates of completion. And for some audiences, the benefits may even outweigh the harms that come with incentives.

What harms you might ask? When we provide an external incentive, we might be sending a message to some learners that we know the task has no redeeming value or is tedious or difficult. People who see their own motivation as caused by the external incentive are potentially less likely to seriously engage our questions, producing bad data. We’re also not just having an effect on the current smile sheet. When we incentivize people today, they may be less willing next time to engage in answering our questions. They may also be pushed into believing that smile sheets are difficult, worthless, or worse.

Ideally, we’d like our learners to want to provide us with data, to see answering our questions as a worthy and helpful exercise, one that is valuable to them, to us, and to our organization. Incentives push against this vision.