
Brinkerhoff Case Method — A Better Name for a Great Learning-Evaluation Innovation


Updated July 3rd, 2018—a week after the original post. See end of post for the update, featuring Rob Brinkerhoff’s response.

Rob Brinkerhoff’s “Success Case Method” needs a subtle name change. I think a more accurate name would be the “Brinkerhoff Case Method.”

I’m one of Rob’s biggest fans, having selected him in 2008 as the Neon Elephant Award Winner for his evaluation work.

Thirty-five years ago, in 1983, Rob published the article in which he introduced the “Success Case Method.” Here is a picture of the first page of that article:

In that article, the Success-Case Method was introduced as a way to find the value of training when it works. Rob wrote, “The success-case method does not purport to produce a balanced assessment of the total results of training. It does, however, attempt to answer the question: When training works, how well does it work?” (page 58, which is visible above).

The Success Case Method didn’t stand still. It evolved and improved as Rob refined it based on his research and his work with clients. In his landmark 2006 book detailing the methodology, Telling Training’s Story: Evaluation Made Simple, Credible, and Effective, Rob describes how to first survey learners and then select a sample of them for interviews based on their level of success in applying the training: “Once the sorting is complete, the next step is to select the interviewees from among the high and low success candidates, and perhaps from the middle categories” (page 102).
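
To make the survey-then-sample logic concrete, here is a minimal sketch in Python. It is my own illustration, not code from Rob’s book: the field names, the even three-way split, and the sample sizes are all hypothetical assumptions.

    # A minimal sketch of the survey-then-sample step described above.
    # Field names, the even three-way split, and sample sizes are hypothetical
    # illustrations, not specifics from Telling Training's Story.
    # Assumes at least three survey respondents.
    import random

    def select_interviewees(respondents, n_extreme=5, n_middle=2, seed=1):
        """Sort survey respondents by a numeric success score, then sample
        interviewees from the high and low extremes, and perhaps the middle."""
        rng = random.Random(seed)
        ranked = sorted(respondents, key=lambda r: r["success_score"])
        third = len(ranked) // 3
        low, middle, high = ranked[:third], ranked[third:-third], ranked[-third:]
        return {
            "high_success": rng.sample(high, min(n_extreme, len(high))),
            "low_success": rng.sample(low, min(n_extreme, len(low))),
            "middle": rng.sample(middle, min(n_middle, len(middle))),
        }

    # Example: 30 fake survey respondents with success scores from 1 to 10.
    survey = [{"id": i, "success_score": (i * 7) % 10 + 1} for i in range(30)]
    picked = select_interviewees(survey)
    print({group: [r["id"] for r in members] for group, members in picked.items()})

The point of the sketch is simply that interviewees are drawn deliberately from both ends of the success distribution (and perhaps the middle), which is what gives the method its balance.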

To call this the success-case method seems more aligned with the original naming than with the actual recommended practice. For that reason, I recommend that we simply call it the Brinkerhoff Case Method. This gives Rob the credit he deserves, and it more accurately reflects the rigor and balance of the method itself.

As soon as I posted the original post, I reached out to Rob Brinkerhoff to let him know. After some reflection, Rob wrote this and asked me to post it:

“Thank you for raising the issue of the currency of the name Success Case Method (SCM). It is kind of you to also think about identifying it more closely with my name. Your thoughts are not unlike others and on occasion even myself. 

It is true the SCM collects data from extreme portions of the respondent distribution including likely successes, non-successes, and ‘middling’ users of training. Digging into these different groups yields rich and useful information. 

Interestingly the original name I gave to the method some 40 years ago when I first started forging it was the “Pioneer” method since when we studied the impact of a new technology or innovation we felt we learned the most from the early adopters – those out ahead of the pack that tried out new things and blazed a trail for others to follow. I refined that name to a more familiar term but the concept and goal remained identical: accelerate the pace of change and learning by studying and documenting the work of those who are using it the most and the best. Their experience is where the gold is buried. 

Given that, I choose to stick with the “success” name. It expresses our overall intent: to nurture and learn from and drive more success. In a nutshell, this name expresses best not how we do it, but why we do it. 

Thanks again for your thoughtful reflections. We’re on the same page.”

Rob’s response is thoughtful, as usual. Yet my feelings on this remain steady. As I’ve written in my report on the new Learning-Transfer Evaluation Model (LTEM), our models should nudge appropriate actions. The same is true for the names we give things. Mining for success stories is good, but it has to be balanced. After all, if evaluation doesn’t look for the full truth—without putting a thumb on the scale—then we are not evaluating; we are doing something else.

I know Rob’s work. I know that he is not advocating for, nor does he engage in, unbalanced evaluations. I do fear that the name Success Case Method may give permission or unconsciously nudge lesser practitioners to find more success and less failure than is warranted by the facts.

Of course, the term “Success Case Method” has one brilliant advantage. Where people are hesitant to evaluate for fear of uncovering unpleasant results, the name “Success Case Method” may lessen the worry of moving forward and engaging in evaluation—and so it may actually enable the balanced evaluation that is necessary to uncover the truth of learning’s level of success.

Whatever we call it, the Success Case Method or the Brinkerhoff Case Method—and this is the most important point—it is one of the best learning-evaluation innovations in the past half century.

I also agree that since Rob is the creator, his voice should have the most influence in terms of what to call his invention.

I will end with one of my all-time favorite quotations from the workplace learning field, from Tim Mooney and Robert Brinkerhoff’s excellent book, Courageous Training:

“The goal of training evaluation is not to prove the value of training; the goal of evaluation is to improve the value of training.” (pp. 94-95)

On this we should all agree!

Triggered Action Planning Confirmed with Scientific Research, Producing Huge Benefits

Back in 2008, I began discussing the scientific research on “implementation intentions.” I did this first at an eLearning Guild conference in March of 2008. I also spoke about it that year in a talk at Salem State University, in a Chicago workshop entitled Creating and Measuring Learning Transfer, and in one of my Brown Bag Lunch sessions delivered online.

In 2014, I wrote about implementation intentions specifically as a way to increase after-training follow-through. Thinking the term “implementation intentions” was too opaque and too general, I coined the term “Triggered Action Planning,” and argued that goal-setting at the end of training—what was often called action planning—would not be as effective as triggered action planning. Indeed, in recounting the scientific research on implementation intentions, I often talked about how researchers were finding that setting situation-action triggers could create results twice as good as goal-setting alone. Doubling the benefits of goal setting! These kinds of results are huge!

I just came across a scientific study that supports the benefits of triggered action planning.


Shlomit Friedman and Simcha Ronen conducted two experiments and found similar results in each. I’m going to focus on their second experiment because it involved a real training class with real employees. They used a class that taught retail sales managers how to improve interactions with customers. All the participants got exactly the same training and were then randomly assigned to two different experimental groups:

  • Triggered Action Planning—Participants were asked to visualize situations with customers and how they would respond to seven typical customer objections.
  • Goal-Reminding Action Planning—Participants were asked to write down the goals of the training program and the aspects of the training program that they felt were most important.

Four weeks after the training, secret shoppers were used. They interacted with the supervisors using the key phrases and rated each supervisor on dichotomously-anchored rating scales from 1 to 10, with ten being best. The secret shoppers were blind to condition—that is, they did not know which supervisors had received triggered action planning and which had received the goal-reminding instructions. The findings showed that triggered action planning produced results 76% better than the goal-reminding condition, almost doubling the results.
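
To spell out the arithmetic: a 76% improvement means the triggered-action-planning group scored about 1.76 times what the goal-reminding group scored, which is why I describe it as almost doubling. The tiny sketch below makes the calculation explicit; the baseline mean is a made-up number, and only the 76% figure comes from the study as summarized above.

    # Sketch of the arithmetic behind "76% better, almost doubling."
    # The baseline mean is hypothetical; only the 76% relative improvement
    # comes from the summary of Friedman & Ronen (2015) above.
    goal_reminding_mean = 4.0                    # hypothetical mean rating (1-10 scale)
    triggered_mean = goal_reminding_mean * 1.76  # 76% better than the comparison group

    relative_gain = (triggered_mean - goal_reminding_mean) / goal_reminding_mean
    print(f"Goal-reminding mean rating: {goal_reminding_mean:.2f}")   # 4.00
    print(f"Triggered-planning mean:    {triggered_mean:.2f}")        # 7.04
    print(f"Relative improvement: {relative_gain:.0%} (1.76x, close to 2x)")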

It should be pointed out that this experiment could have been designed better, for example by having the control group select their own goals. There may be some benefit to actual goal-setting compared with being reminded about the goals of the course. The experiment had its strengths too, most notably (1) the use of observers to record real-world performance four weeks after the training, and (2) the fact that all the supervisors had gone through exactly the same training and were randomly assigned to either triggered action planning or the goal-reminding condition.

Triggered Action Planning

Triggered Action Planning has great potential to radically improve the likelihood that your learners will actually use what you’ve taught them. The reason it works so well is that it is based on a fundamental characteristic of human cognition. We are triggered to think and act based on cues in our environment. As learning professionals we should do whatever we can to:

  • Figure out what cues our learners will face in their work situations.
  • Teach them what to do when they encounter these cues.
  • Give them a rich array of spaced, repeated practice in handling these situations, as illustrated in the sketch below.
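
To illustrate the underlying idea in code, here is a small sketch that represents situation-action triggers as simple if-then records. The cue and action strings are invented examples of my own, not content from any actual training program.

    # A small sketch of situation-action triggers ("implementation intentions")
    # represented as if-then records. The cue/action pairs are invented examples.
    from dataclasses import dataclass

    @dataclass
    class Trigger:
        cue: str     # the workplace situation the learner will encounter
        action: str  # the trained response to perform when the cue occurs

    plan = [
        Trigger(cue="customer says the price is too high",
                action="acknowledge the concern, then restate the value in their terms"),
        Trigger(cue="customer asks for time to think it over",
                action="offer to schedule a follow-up conversation before they leave"),
    ]

    def rehearse(plan):
        """Spaced, repeated practice could cycle through these cue-action
        pairs, strengthening the link between situation and response."""
        for trigger in plan:
            print(f"WHEN {trigger.cue} THEN {trigger.action}")

    rehearse(plan)

The design point is that each entry pairs a concrete environmental cue with a specific action, rather than stating a general goal such as “improve customer interactions.”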

To learn more about how to implement triggered action planning, see my original blog post.

Research Cited

Friedman, S., & Ronen, S. (2015). The effect of implementation intentions on transfer of training. European Journal of Social Psychology, 45(4), 409-416.

This blog post took three hours to write.

Thankful for So Much!! Paying Off My Student Loans at 60 Years of Age


Today, after turning 60 a few months ago, I finally paid off my student loans—the loans that made it possible for me to get my doctorate from Columbia University. I was in school for eight years from 1988 to 1996, studying with some of the brightest minds in learning, development, and psychology (Rothkopf, Black, Peverly, Kuhn, Higgins, Dweck, Mischel, Darling-Hammond, not to mention my student cohort). If my math is right, that’s 22 years to pay off my student-loan debt. A ton of interest paid too!

I’m eternally grateful! Without the federal government funding my education, my life would have been so much different. I would never have learned how to understand the research on learning. My work at Work-Learning Research, Inc.—attempting to bridge the gap between research and practice—would not have been possible. Thank you to my country—the United States of America—and fellow citizens for giving me the opportunity of a lifetime!! Thanks also must go to my wife for marrying into the forever-string of monthly payments. Without her tolerance and support I certainly would be lost in a different life.

I’ve often reflected on my good fortune in being able to pursue my interests, and wondered why we as a society don’t do more to give our young people an easier road to pursue their dreams. Even when I hear about the brilliant people winning MacArthur fellowships, I wonder why only those who have proven their genius are being boosted. They are deserving of course, but where is our commitment to those who might be teetering on a knife edge of opportunity and economic desperation? I was even lucky as an undergrad back in the late 1970’s, paying relatively little for a good education at a state school and having parents who funded my tuition and living expenses. College today is wicked expensive, cutting out even more of our promising youth from realizing their potential.

Economic mobility is not as attainable as we might like. The World Bank just released a report showing that, worldwide, only 12% of young adults have been able to obtain more education than their parents. The United States is no longer the land of opportunity we once liked to imagine.

This is crazy short-sighted, and combined with our tendency to underfund our public schools, it has the smell of societal suicide.

That’s depressing! Today I’m celebrating my ability to get student loans two-and-a-half decades ago and pay them off over the last twenty-some years! Hooray!

Seems not so important when put into perspective. It’s something though.


Reflections This Morning On Brushing My Teeth


I use a toothbrush with a design that research shows maximizes the benefits of brushing. It spins, and spinning is better than oscillating. It also has a timer, telling me when I’ve brushed for two minutes. Ever since a hockey stick broke up my mouth when I was twenty, I’ve been sensitive about the health of my teeth.

But what the heck does this have to do with learning and development? Well, let’s see.

Maybe my toothbrush is a performance-support exemplar. Maybe no training is needed. I didn’t read any instructions. I just used it. The design is intuitive. There’s an obvious button that turns it on, an obvious place to put toothpaste (on the bristles), and it’s obvious that the bristles should be placed against the teeth. So, the tool itself seems like it needs no training.

But I’m not so sure. Let’s do a thought experiment. If I give a spinning toothbrush to a person who’s never brushed their teeth, would they use it correctly? Would they use it at all? Doubtful!

What is needed to encourage or enable good tooth-brushing?

  • People probably need something to compel them to brush, perhaps knowledge that brushing prevents dental calamities like tooth decay, gum disease, and bad breath—and may even help prevent cognitive decline such as that seen in Alzheimer’s. Training may help motivate action.
  • People will probably be more likely to brush if they know other people are brushing. Tons of behavioral economics studies have shown that people are very attuned to social comparisons. Again, training may help motivate action. Interestingly, people may be more likely to brush with a spinning toothbrush if others around them are also brushing with spinning toothbrushes. Training coworkers (or in this case other family members) may also help motivate action.
  • People will probably brush more effectively if they know to brush all their teeth, and to brush near their gums as well—not just the biting surfaces of their teeth. Training may provide this critical knowledge.
  • People will probably brush more effectively if they are set up—probably if they set themselves up—to be triggered by environmental cues. For example, tooth-brushing is often most effectively triggered when people brush right after breakfast and right before they go to bed. Training people to set up situation-action triggering may increase later follow through.
  • People will probably brush more effectively if they know that they should brush for two minutes or so rather than just brushing quickly. Training may provide this critical knowledge. Note, of course, that the toothbrush’s two-minute timer may act to support this behavior. Training and performance support can work together to enable effective behavior.
  • People will be more likely to use an effective toothbrush if the cost of the toothbrush is reasonable given the benefits. The costs of people’s tools will affect their use.
  • People will be more likely to use a toothbrush if the design is intuitive and easy to use. The design of tools will affect their use.

I’m probably missing some things in the list above, but it should suffice to show the complex interplay between our workplace tools/practices/solutions and training and prompting mechanisms (i.e., performance support and the like).

But what insights, or dare we say wisdom, can we glean from these reflections? How about these for starters:

  • We could provide excellent training, but if our tools/practices/solutions are poorly designed they won’t get used.
  • We could provide excellent training, but if our tools/practices/solutions are too expensive they won’t get used.
  • Let’s not forget the importance of prior knowledge. Most of us know the basics of tooth brushing. It would waste time, and be boring, to repeat that in a training. The key is to know, to really know, not just guess, what our learners know—and compare that to what they really need to know.
  • Even when we seem to have a perfectly intuitive, well-designed tool/practice/solution let’s not assume that no training is needed. There might be knowledge or motivational gaps that need to be bridged (yes, the pun was intended! SMILE). There might be situation-action triggering sets that can be set up. There might be reminders that would be useful to maintain motivation and compel correct technique.
  • Learning should not be separated from the design of tools/practices/solutions. We can support better designs by reminding the designers and developers of these objects/procedures that training can’t fix a bad design. Better yet, we can work hand in hand with them in prototyping the tool/training bundle to enable the most pertinent feedback during the design process itself.
  • Training isn’t just about knowledge, it’s also about motivation.
  • Motivation isn’t just the responsibility of training. Motivation is an affordance of the tools/practices/solutions themselves; it is borne in the social environment; it is subject to organizational influence, particularly through managers and peers.
  • Training shouldn’t be thought of as a one-time event. Reminders may be valuable as well, particularly around the motivational aspects (for simple tasks), and to support remembering (for tasks that are easily forgotten or misunderstood).

One final note. We might also train people to use the time when they are engaged in automated tasks—tooth-brushing, for example—to reflect on important aspects of their lives, gaining from the learning that might occur or the thoughts that may enable future learning. And we might add a little fun to mundane tasks. Smile for the tiny nooks and crannies of our lives that may illuminate our thinking!


The Snake Oil Story—Preface to Clark Quinn’s Book on Debunking


This is my preface to Clark Quinn’s book on debunking the myths in the learning field, Millennials, Goldfish & Other Training Misconceptions: Debunking Learning Myths and Superstitions (available from Amazon here).

Clark Stanley worked as a cowboy and later as a very successful entrepreneur, selling medicine in the United States that he claimed to have made based on secrets learned from an Arizona Hopi Indian medicine man. His elixir was made from rattlesnake oil and was marketed in the 1890’s through public events in which Stanley killed live rattlesnakes and squeezed out their oil in front of admiring crowds. After his medicine gained wide popularity, Stanley was able to set up production facilities in Massachusetts and Rhode Island with the help of a pharmacist. Stanley made himself a rich man.

You may not know his name, but you’ve certainly heard of his time and place. It was the era of patent medicines—false and sometimes dangerous elixirs sold to men and women of all stripes. Dr. Kilmer’s Swamp Root. Oxien. Kickapoo Indian Sagwa. Dr. Morse’s Indian Root Pills. Enzyte. Bonnore’s Electro Magnetic Bathing Fluid. Radithor. Liquozone. And of course, Clark Stanley’s Snake Oil Liniment.

These medicines were bought by the millions. Fortunes were made. Millions of people were bamboozled, made sick, killed or murdered depending on how you see it. It turns out that, upon being tested, Stanley’s elixir was found to be made mostly from mineral oil, a worthless potion sold by a charlatan. His story of the medicine man and the rattlesnake juice was a more potent concoction than his famous elixir, which when tested was found to have no snake oil anyway.

What causes men and women to miss the truth, to fail to see, to continue happily in harming themselves and those around them? This, unfortunately, is not a question just for the era of patent medicines. It is eternal. It goes back to the dawn of humanity and continues today as well. I have no answer except to assume that our credulity is part of our humanity—and should guide us to be on guard at all times.

What stopped the patent-medicine pandemic of poison, persuasion, and placebo? Did we the people rise up on our own and throw out the scoundrels, the money-grubbers, the snake-oil salesmen? Did we see that we were deceived, or too hopeful, or too blind? Did we as a community heed our senses and find a way to overcome the dangers hidden from us?

No! We did not!

It was not a mass movement back to rationality and truth that saved us. It was the work of a few intrepid agitators who made all the difference. Journalists began reporting on deaths, sicknesses, and addictions resulting from the use of patent medicines. In 1905, Collier’s Weekly published a cover story that exploded the industry. Written by Samuel Hopkins Adams, a former crime reporter, under the title “The Great American Fraud: The Patent Medicine Evil,” the long exposé contained sections with headings like “Medicine or Liquor?”, “The Men Who Back the Fake,” “Absolutely False Claims,” “Drugs that Deprave,” “Prescribing Without Authority,” and “Where the Money Goes.”

The article—or series of articles that today we would call investigative journalism—opened the floodgates and led directly to the Pure Food and Drug Act in 1906, followed later by additional regulations and requirements that continue to this day, with some success, protecting our health and safety.

The ugly truth is that we need help in seeing what we don’t see. This is true too in the learning industry and has been true since at least the early 1900’s when thought leaders in our industry floated bogus claims that people remember 10% of what they read, 20% of what they hear, 30% of what they see, et cetera. Indeed, it was partly the bogus claims floating around the learning industry in the late 1990’s that made me optimistic that starting a research-based consulting practice would find an audience, that perhaps the learning field could be protected from snake oil charlatans.

Bogus claims are not merely inert flotsam to be navigated around. At a minimum, they take attention away from learning practices that are more fundamental and effective, pushing us to waste time and resources. More insidious is that they proactively cause harm, hurting learners and weakening our learning outcomes.

I wish I could report that starting Work-Learning Research twenty years ago has had the influence that Samuel Hopkins Adams had in his journalism. Alas, I am a faint voice in the howling wind of our industry. Fortunately, there are many muckraking research-to-practice practitioners today, including folks like Paul Kirschner, Patti Shank, Guy Wallace, Pedro De Bruyckere, Julie Dirksen, Donald Clark, Ruth Clark, Mirjam Neelen, Jane Bozarth, and more. There are also legions of academic researchers who do the science necessary to enable research-to-practice wisdom to be compiled and conveyed to trainers, instructional designers, elearning developers and learning executives.

I am especially optimistic now that Clark Quinn has compiled, for the first time, the myths, misconceptions, and confusions that imbue the workplace learning field with faulty decision making and ineffective learning practices. As Clark rightly advises, don’t read the book in one sitting. You will find it too much—too many misconceptions and malingering falsehoods, and too much heartache to think that our field could tolerate so much snake oil.

Here’s what we don’t realize. Today’s workplace-learning snake oil is costing us billions of dollars in wasted effort, misspent resources, ill-advised decisions, and distraction from the science-of-learning fundamentals that have proven to be effective! Every time a trainer reads an article on learning styles and adjusts his or her training to make it suitable for visual, auditory, kinesthetic, and olfactory learners, time is wasted, money is spent, and learning is hurt. Every time an instructional designer goes to a conference and hears that neuroscience should guide learning design, he or she takes this faulty meme back to colleagues and infects them with false hope and ineffective learning strategies. Every time a Chief Learning Officer hears that learning events should be shrunk to 4-minute microlearning videos, that storytelling is everything, that all learning is social, that virtual reality is the future of learning—every time our learning executives jump on a bandwagon and send us to training or conferences or hire experts in these multitudinous fascinations—we are diverted from the veritable essence of learning. We waste our own developmental budgets on snake-oil nostrums. We waste time organizing ourselves around wrong-headed initiatives. We ignore what really works, all the while costing our organizations billions of dollars in waste and ineffective learning practices.

Let us start anew today. We can begin with Clark’s book. It is a veritable treasure chest of wisdom. But let’s keep going. Let’s stay skeptical. Let’s look to the scientific research for knowledge. Let’s become more demanding and knowledgeable ourselves, knowing that we all have more to learn. Let’s look to the research translators who know the work that we do as instructional designers, trainers, and developers. Let’s do our own testing. Let’s improve our evaluation systems so that we get better feedback day by day. Let’s pilot, rework, improve, and continue to learn!

As the history of patent medicine shows, we must be forever vigilant against our own blindness and against those who will sell us the miraculous hope of snake-oil cure-alls.

The Learning-Transfer Evaluation Model (LTEM) Updated to Version 12

The Learning-Transfer Evaluation Model (LTEM) and accompanying Report were updated today with two major changes:

  • The model has been inverted to put the better evaluation methods at the top instead of at the bottom.
  • The model now uses the word “Tier” to refer to the different levels within the model—to distinguish these from the levels of the Kirkpatrick-Katzell model.

This will be the last update to LTEM for the foreseeable future.

You can find the latest version of LTEM and the accompanying report by clicking here.

Dealing with Emotional Readiness — What Should We be Doing?


I included this piece in my newsletter this morning (which you can sign up for here) and it seemed to really resonate with people, so I’m including it here.

I’ve always had a high tolerance for pain, but breaking my collarbone at the end of February really sent me crashing down a mountain. Lying in bed, I got thinking about the emotional side of workplace performance. I don’t have brilliant insights here, just maybe some thoughts that will get you thinking.

It had been a very good week skiing with my family in Vermont. On our next-to-last day on the mountain, my wife and I, skiing together, went to look for the kids, who had told us they’d be skiing in the terrain park (where the jumps are). My wife skied down first, then I went. There was a little jump, about a foot high, of the kind I’d jumped many times. But this time would be different.

As I sailed over the jump — slowly because I’m wary of going too fast and flying too far — I looked down and saw, NOT snow, but stairs. WTF? Every other time I took a small jump there was snow on the other side. Not metal stairs. Not dry metal stairs. In mid-air my thought was, “okay, just stay calm, you’ll ski over the stairs back to snow.” Alas, what happened was that I came crashing down on my left shoulder, collarbone splintering into five or six pieces, and lay 20 feet down the hill. I knew right away that things were bad. I knew that my life would be upended for weeks or months. I knew that miserable times lay ahead.

I got up quickly. I was in shock and knew it. I looked up the mountain back at the jump. Freakin’ stairs!! What the hell were they doing there? I was rip-roaring mad! One of my skis was still on the stairs. The dry surface must have grabbed it, preventing me from skiing further down the slope. I retrieved my ski. A few people skied by me. My wife was long gone down the mountain. I was in shock and I was mad as hell and I couldn’t think straight, but I knew I shouldn’t sit down, so I just stood there for five or ten minutes in a daze. Finally someone asked if I was okay, and I yelled crazy loud for the whole damn mountain to hear, “NO!” He was nice, said he’d contact the ski patrol.

I’ll spare you the details of the long road to recovery — a recovery not yet complete — but the notable events are that I had badly broken my collarbone, badly sprained my right thumb and mildly sprained my left thumb, couldn’t button my shirts or pants for a while, had to lie in bed in one position or the pain would be too great, watched a ton of Netflix (I highly recommend Seven Seconds!), couldn’t do my work, couldn’t help around the house, got surgery on my collarbone, got pneumonia, went to physical therapy, etc… Enough!

Feeling completely useless, I couldn’t help reflecting on the emotional side of learning, development, and workplace performance in general. In L&D, we tend to be helping people who are able to learn and take action—but maybe not all the people we touch are emotionally present and able. Some are certainly dealing with family crises, personal insecurities, previous job setbacks, and the like. Are we doing enough for them?

I’m not a person prone to depression, but I was clearly down for the count. My ability to do meaningful work was nil. At first it was the pain and the opiates. Later it was the knowledge that I just couldn’t get much work done, that I was unable to keep up with promises I’d made, that I was falling behind. I knew, intellectually, that I just had to wait it out — and this was a great comfort. But still, my inability to think and to work reminded me that as a learning professional I ought to be more empathetic with learners who may be suffering as well.

Usually, dealing with emotional issues of an employee falls to the employee and his or her manager. I used to be a leadership trainer and I don’t remember preparing my learners for how to deal with direct reports who might be emotionally unready to fully engage with work. Fortunately today we are willing to talk about individual differences, but I think we might be forgetting the roller-coaster ride of being human, that we may differ in our emotional readiness on any given day. Managers/supervisors rightly are the best resource for dealing with such issues, but we in L&D might have a role to play as well.

I don’t have answers here. I wish I did. Probably it begins with empathy. We can also help more when we know our learners better—and when we can look them in the eyes. This is tricky business though. We’re not qualified to be therapists, and simple solutions like being nice and kind and keeping things positive are not always the answer. We know from the research that challenging people with realistic decision-making challenges is very beneficial. Giving honest feedback on poor performance is beneficial.

We should probably avoid scolding, punishment, and reprimands. Competition has been shown to be harmful in at least some learning situations. Leaderboards may make emotional issues worse, and the limited research generally suggests they aren’t very useful anyway. But these negative actions are rarely invoked, so we have to look deeper.

I wish I had more wisdom about this. I wish there was research-based evidence I could draw on. I wish I could say more than just be human, empathetic, understanding.

Now that I’m aware of this, I’m going to keep my eyes and ears open to learning more about how we as learning professionals can design learning interventions to be more sensitive to the ups and downs of our fellow travelers.

If you’ve got good ideas, please send them my way or use the LinkedIn Post generated from this to join the discussion.

Will Thalheimer Interviewed by Jeffrey Dalto


Series of Four Interviews

I was recently interviewed by Jeffrey Dalto of Convergence Training. Jeffrey is a big fan of research-based practice. He did a great job compiling the interviews.

Click on the title of each one to read the interview:

Preparing for Attending a Learning Conference in 2018 and Beyond


Conferences can be beautiful things—helping us learn, building relationships that help us grow and bring us joy, prompting us to see patterns in our industry we might miss otherwise, helping us set our agenda for what we need to learn more fully.


Conferences can be ugly things—teaching us myths, reinforcing our misconceptions, connecting us to people who steer us toward misinformation, trapping us in echo chambers of bad thinking, dropping us into a vendor-infested shark tank where we buy stuff that’s not that helpful or is actually harmful, and pushing us to set our learning agenda on topics that distract us from what’s really important.

Given this dual reality, your job as a conference attendee is to be smart and skeptical, and work to validate your learning. In the Training Maximizers model, the first goal is ensuring our learning interventions are built from a base of “valid, credible content.” In conferences, where we curate our own learning, we have to be sure we are imbibing the good stuff and avoiding the poison. Here, I’ll highlight a few things to keep in mind as you attend a conference. I’ll aim to make this especially relevant for this year, 2018, when you are likely to encounter certain memes and themes.

Drinking the Good Stuff

  • Look for speakers who have a background doing two things: (1) studying the scientific research (not opinion research), and (2) working with real-world learning professionals in implementing research-based practices.
  • If speakers make statements without evidence, ask for the evidence or the research—or be highly skeptical.
  • If things seem almost too good to be true, warn yourself that learning is complicated and there are no magic solutions.
  • Be careful not to get sucked into group-think. Just because others seem to like something doesn’t necessarily make it good. Think for yourself.
  • Remember that correlation does not mean causation. Just because some factors seem to move in the same direction doesn’t mean that one caused the other. It could be the other way around. Or some third factor may have caused both to move in the same direction.

Prepare Yourself for This Year’s Shiny Objects

  • Learning Styles — Learning styles is bogus, but it keeps coming up every year. Don’t buy into it. Learn about it first. The Debunker.Club has a nice post on why we should avoid learning styles. Read it. And don’t let people tell you that learning styles is bad but learning preferences is good. They’re pulling the wool over your eyes.
  • Dale’s Cone with Percentages — People do NOT remember 10% of what they read, 20% of what they hear, 30% of what they see (or anything similar). Here’s the Internet’s #1 URL debunking this silly myth.
  • Neuroscience and Learning — It’s a very hot topic, with vendors touting neuroscience to impress you. But neuroscience at this time has nothing to say about learning.
  • Microlearning — Because it’s a hot topic, vendors and consultants are yapping about microlearning endlessly. But microlearning is not a thing. It’s many things. Here’s the definitive definition of microlearning, if I do say so myself.
  • AI, Machine Learning, and Big Data — Sexy stuff certainly, but it’s not clear whether these things can be applied to learning, or whether they can be applied now (given the state of our knowledge). Beware of taking these claims too seriously. Be open, but skeptical.
  • Gamification — We are almost over this fad, thankfully. Still, keep in mind that gamification, like microlearning, comprises multiple learning methods. Gamification is NOT a thing.
  • Personalization — Personalization is a great idea, if carried out properly. Be careful if what someone calls personalization is just another way of saying learning styles. Also, don’t buy into the idea that personalization is new. It’s quite old. See Skinner and Keller back in the mid-1900’s.
  • Learning Analytics — There is a lot of movement in learning evaluation, but much of it is a wrong-headed focus on pretty dashboards and on business impact alone. Look for folks who are talking about how to get better feedback to make learning better. I’ll tout my own effort to develop a new approach to gathering learner feedback. But beware and do NOT just do smile sheets (said by the guy who wrote a book on smile sheets)! Beware of vendors telling you to focus only on measuring behavior and business results. Read why here.
  • Kirkpatrick-Katzell Four-Level Model of Evaluation — A constant in the workplace learning field for the past 60 years. But even with recent changes, it still has too many problems to be worthwhile. See the new Learning-Transfer Evaluation Model (LTEM), a worthy replacement.

Wow! So much to be worried about.

Well, sorry to say, I’m surely missing some stuff. It’s up to you to be smart and skeptical while staying open to new ideas.

You might consider joining the Debunker Club, folks who have agreed on the importance of debunking myths in the learning field.

Guest Post by Brett Christensen: How I Was Fooled by Dale’s Cone


This is a guest post by Brett Christensen of Workplace Performance Consulting (www.workplaceperformance.ca/)

In this post, Brett tells us a story he recounted at a gathering of Debunker Club members at the 2018 ISPI conference in Seattle. It was such a telling story that I asked him if he would write a blog post sharing his lessons learned with you. It’s a cautionary tale about how easy it is to be fooled by information about learning that is too good to be true.

One thing to know before you read Brett’s post. He’s Canadian, which explains two things about what you will read, one of which is that he uses Canadian spellings. I’ll let you figure out the other thing.

______________________________

How I Was Fooled by Dale’s Cone

Why do we debunk?

A handful of members of the Debunker Club had the rare opportunity to meet in person on the morning of 09 April 2018 at the Starbucks Reserve Roastery in sunny (sic) Seattle prior to the second day of the International Society of Performance Improvement’s (ISPI) annual conference.

After introducing ourselves and learning that we had a “newbie” in our midst who had learned about the meeting from a friend’s re-tweet (see Networking Power on my blog), Will asked, “Why do you debunk?” I somewhat sheepishly admitted that the root cause of my debunking desires could be traced back to a presentation I had done with a couple of colleagues in 2006, very early in my training and performance career. This was before I had discovered ISPI and before I understood and embraced the principles of evidence-based practice and scientific rigour.

We were working as e-Learning Instructional Designers (evangelists?) at the time, and we were trying hard to communicate the benefits of e-Learning when it was designed correctly, which as we all know includes the design of activities that assist in the transfer of learning. When we discovered Dale’s Cone – with the bad, bad, bad numbers – it made total sense to us. Insert foreboding music here.

The following image is an example of what we had seen (a problematic version of Dale’s Cone):

One of many bogus versions of Dale’s Cone

Our aim was to show our training development colleagues that Dale’s Cone (with the numbers) was valid and that we should all endeavour to design activity into our training. We developed three different scenarios, one for each group. One group would read silently, one would read to each other out loud, and the last group would have an activity included. Everyone would then do a short assessment to measure transfer. The hope (Hypothesis? Pipe dream?) was to show that the farther down the cone you went, the higher the transfer would be.

Well! That was not the outcome at all. In fact, if I remember correctly, everyone had similar scores on the exercise, and the result was the exact opposite of what we were looking for. Rather than dig deeper into that when we got back home, we were on to the next big thing, and Dale’s Cone faded in my memory. Before I go on, I’d like to point out that we weren’t total “hacks!” Our ISD process was based on valid models and we applied Mayer and Clark’s (2007) principles in all our work. We even received a Gold e-Learning Award from the Canadian Society for Training and Development, now the Institute for Performance and Learning (I4PL).

It wasn’t until much later, after being in ISPI for a number of years, that I got to know Will, our head debunker, and read his research on Dale’s Cone! I was enlightened and a bit embarrassed that I had contributed to spreading bad “ju-ju” in the field. But hey – you don’t know what you don’t know. A couple of years after I found Will and finished my MSc, he started The Debunker Club. I knew I had to right my wrongs of the past and help spread the word to raise awareness of the myths and fads that continue to permeate our profession.

That’s why I am a debunker. Thank you, Will, for making me smarter in the work I do.

______________________________

Will’s Note: Brett is being much too kind. There are many people who take debunking very seriously these days. There are folks like De Bruyckere, Kirschner, and Hulshof, who wrote a book on learning myths. There is Clark Quinn, whose new debunking book is being released this month. There are Guy Wallace, Patti Shank, Julie Dirksen, Mirjam Neelen, Ruth Clark, Jane Bozarth, and many, many, many others (sorry if I’m forgetting you!). Now, there is also Brett Christensen, who has been very active on social media over the last few years, debunking myths and more. The Debunker Club has over 600 members, and over 50 people have applied for membership in the last month alone. And note, you are all invited to join.

Of course, debunking works most effectively if everybody jumps in and takes a stand. We must all stay current with the learning research and speak up gently and respectfully when we see bogus information being passed around.

Thanks Brett for sharing your story!! Most of us must admit that we have been taken in by bogus learning myths at some point in our careers. I know I have, and it’s a great reminder to stay humble and skeptical.

And let me point out a feature of Brett’s story that is easy to miss. Did you notice that Brett and his team actually did rigorous evaluation of their learning intervention? It was this evaluation that enabled Brett and his colleagues to learn how well things had gone. Now imagine if Brett and his team hadn’t done a good evaluation. They would never have learned that the methods they tried were not helpful in maximizing learning outcomes! Indeed, who knows what would have happened when they learned years later that the Dale’s Cone numbers were bogus. They might not have believed the truth of it!

Finally, let me say that Dale’s Cone itself, although not really research-based, is not the myth we’re talking about. It’s when Dale’s Cone is bastardized with the bogus numbers that it becomes truly problematic. See the link above entitled “research on Dale’s Cone” to see many other examples of bastardized cones.

Thanks again Brett for reminding us about what’s at stake. When myths are shared, the learning field loses trust, we learning professionals waste time, and our organizations bear the costs of many misspent funds. Our learners are also subjected to willy-nilly experimentation that hurts their learning.