
Thankful for So Much!! Paying Off My Student Loans at 60 Years of Age


Today, after turning 60 a few months ago, I finally paid off my student loans—the loans that made it possible for me to get my doctorate from Columbia University. I was in school for eight years from 1988 to 1996, studying with some of the brightest minds in learning, development, and psychology (Rothkopf, Black, Peverly, Kuhn, Higgins, Dweck, Mischel, Darling-Hammond, not to mention my student cohort). If my math is right, that’s 22 years to pay off my student-loan debt. A ton of interest paid too!

I’m eternally grateful! Without the federal government funding my education, my life would have been so much different. I would never have learned how to understand the research on learning. My work at Work-Learning Research, Inc.—attempting to bridge the gap between research and practice—would not have been possible. Thank you to my country—the United States of America—and fellow citizens for giving me the opportunity of a lifetime!! Thanks also must go to my wife for marrying into the forever-string of monthly payments. Without her tolerance and support I certainly would be lost in a different life.

I’ve often reflected on my good fortune in being able to pursue my interests, and wondered why we as a society don’t do more to give our young people an easier road to pursue their dreams. Even when I hear about the brilliant people winning MacArthur fellowships, I wonder why only those who have already proven their genius are being boosted. They are deserving, of course, but where is our commitment to those teetering on the knife edge between opportunity and economic desperation? I was lucky even as an undergrad back in the late 1970s, paying relatively little for a good education at a state school and having parents who funded my tuition and living expenses. College today is wicked expensive, shutting even more of our promising youth out of realizing their potential.

Economic mobility is not as attainable as we might like. The World Bank just released a report showing that, worldwide, only 12% of young adults have been able to obtain more education than their parents. The United States is no longer the land of opportunity we once liked to imagine.

This is crazy short-sighted. Combine it with our tendency to underfund our public schools, and it has the smell of societal suicide.

That’s depressing! Today I’m celebrating my ability to get student loans two-and-a-half decades ago and pay them off over the last twenty-some years! Hooray!

Seems not so important when put into perspective. It’s something though.


Reflections This Morning On Brushing My Teeth


I use a toothbrush whose design, research shows, maximizes the benefits of brushing. It spins, and spinning is better than oscillating. It also has a timer, telling me when I’ve brushed for two minutes. Ever since a hockey stick broke up my mouth when I was twenty, I’ve been sensitive about the health of my teeth.

But what the heck does this have to do with learning and development? Well, let’s see.

Maybe my toothbrush is a performance-support exemplar. Maybe no training is needed. I didn’t read any instructions. I just used it. The design is intuitive. There’s an obvious button that turns it on, an obvious place to put toothpaste (on the bristles), and it’s obvious that the bristles should be placed against the teeth. So, the tool itself seems like it needs no training.

But I’m not so sure. Let’s do a thought experiment. If I give a spinning toothbrush to a person who’s never brushed their teeth, would they use it correctly? Would they use it at all? Doubtful!

What is needed to encourage or enable good tooth-brushing?

  • People probably need something to compel them to brush, perhaps knowledge that brushing prevents dental calamities like tooth decay, gum disease, and bad breath—and may even help prevent cognitive decline, as in Alzheimer’s disease. Training may help motivate action.
  • People will probably be more likely to brush if they know other people are brushing. Tons of behavioral economics studies have shown that people are very attuned to social comparisons. Again, training may help motivate action. Interestingly, people may be more likely to brush with a spinning toothbrush if others around them are also brushing with spinning toothbrushes. Training coworkers (or in this case other family members) may also help motivate action.
  • People will probably brush more effectively if they know to brush all their teeth, and to brush near their gums as well—not just the biting surfaces of their teeth. Training may provide this critical knowledge.
  • People will probably brush more effectively if they are set up—probably if they set themselves up—to be triggered by environmental cues. For example, tooth-brushing is often most effectively triggered when people brush right after breakfast and right before they go to bed. Training people to set up situation-action triggering may increase later follow through.
  • People will probably brush more effectively if they know that they should brush for two minutes or so rather than just brushing quickly. Training may provide this critical knowledge. Note, of course, that the toothbrush’s two-minute timer may act to support this behavior. Training and performance support can work together to enable effective behavior.
  • People will be more likely to use an effective toothbrush if the cost of the toothbrush is reasonable given the benefits. The costs of people’s tools will affect their use.
  • People will be more likely to use a toothbrush if the design is intuitive and easy to use. The design of tools will affect their use.

I’m probably missing some things in the list above, but it should suffice to show the complex interplay between our workplace tools/practices/solutions and training and prompting mechanisms (i.e., performance support and the like).

But what insights, or dare we say wisdom, can we glean from these reflections? How about these for starters:

  • We could provide excellent training, but if our tools/practices/solutions are poorly designed they won’t get used.
  • We could provide excellent training, but if our tools/practices/solutions are too expensive they won’t get used.
  • Let’s not forget the importance of prior knowledge. Most of us know the basics of tooth brushing. It would waste time, and be boring, to repeat that in a training. The key is to know, to really know, not just guess, what our learners know—and compare that to what they really need to know.
  • Even when we seem to have a perfectly intuitive, well-designed tool/practice/solution let’s not assume that no training is needed. There might be knowledge or motivational gaps that need to be bridged (yes, the pun was intended! SMILE). There might be situation-action triggering sets that can be set up. There might be reminders that would be useful to maintain motivation and compel correct technique.
  • Learning should not be separated from the design of tools/practices/solutions. We can support better designs by reminding the designers and developers of these objects/procedures that training can’t fix a bad design. Better yet, we can work hand in hand with them, prototyping the tool/training bundle so that the most pertinent feedback arrives during the design process itself.
  • Training isn’t just about knowledge, it’s also about motivation.
  • Motivation isn’t just the responsibility of training. Motivation is an affordance of the tools/practices/solutions themselves; it is born of the social environment; it is subject to organizational influence, particularly through managers and peers.
  • Training shouldn’t be thought of as a one-time event. Reminders may be valuable as well, particularly around the motivational aspects (for simple tasks), and to support remembering (for tasks that are easily forgotten or misunderstood).

One final note. We might also train people to use the time when they are engaged in automated tasks—tooth-brushing, for example—to reflect on important aspects of their lives, gaining from the learning that might occur or the thoughts that may enable future learning. And to add a little fun to mundane tasks. Smile for the tiny nooks and crannies of our lives that may illuminate our thinking!


The Snake Oil Story—Preface to Clark Quinn’s Book on Debunking


This is my preface to Clark Quinn’s book on debunking the myths in the learning field, Millennials, Goldfish & Other Training Misconceptions: Debunking Learning Myths and Superstitions (available from Amazon here).

Clark Stanley worked as a cowboy and later as a very successful entrepreneur, selling a medicine in the United States that he said was based on secrets he learned from a Hopi medicine man in Arizona. His elixir was made from rattlesnake oil and was marketed in the 1890s through public events in which Stanley killed live rattlesnakes and squeezed out their oil in front of admiring crowds. After his medicine gained wide popularity, Stanley was able to set up production facilities in Massachusetts and Rhode Island with the help of a pharmacist. Stanley made himself a rich man.

You may not know his name, but you’ve certainly heard of his time and place. It was the era of patent medicines—false and sometimes dangerous elixirs sold to men and women of all stripes. Dr. Kilmer’s Swamp Root. Oxien. Kickapoo Indian Sagwa. Dr. Morse’s Indian Root Pills. Enzyte. Bonnore’s Electro Magnetic Bathing Fluid. Radithor. Liquozone. And of course, Clark Stanley’s Snake Oil Liniment.

These medicines were bought by the millions. Fortunes were made. Millions of people were bamboozled, made sick, killed, or murdered, depending on how you see it. It turns out that, upon being tested, Stanley’s elixir was found to be made mostly from mineral oil, a worthless potion sold by a charlatan. His story of the medicine man and the rattlesnake juice was a more potent concoction than his famous elixir, which contained no snake oil at all.

What causes men and women to miss the truth, to fail to see, to continue happily in harming themselves and those around them? This, unfortunately, is not a question just for the era of patent medicines. It is eternal. It goes back to the dawn of humanity and continues today as well. I have no answer except to assume that our credulity is part of our humanity—and should guide us to be on guard at all times.

What stopped the patent-medicine pandemic of poison, persuasion, and placebo? Did we the people rise up on our own and throw out the scoundrels, the money-grubbers, the snake-oil salesmen? Did we see that we were deceived, or too hopeful, or too blind? Did we as a community heed our senses and find a way to overcome the dangers hidden from us?

No! We did not!

It was not a mass movement back to rationality and truth that saved us. It was the work of a few intrepid agitators who made all the difference. Journalists began reporting on deaths, sicknesses, and addictions resulting from the use of patent medicines. In 1905, Collier’s Weekly published a cover story that exploded the industry. Written by Samuel Hopkins Adams, a former crime reporter, under the title “The Great American Fraud: The Patent Medicine Evil,” the long exposé contained sections with headings like “Medicine or Liquor?”, “The Men Who Back the Fake,” “Absolutely False Claims,” “Drugs that Deprave,” “Prescribing Without Authority,” and “Where the Money Goes.”

The article—or series of articles that today we would call investigative journalism—opened the floodgates and led directly to the Pure Food and Drug Act in 1906, followed later by additional regulations and requirements that continue to this day, with some success, protecting our health and safety.

The ugly truth is that we need help in seeing what we don’t see. This is true in the learning industry too, and has been true since at least the early 1900s, when thought leaders in our industry floated bogus claims that people remember 10% of what they read, 20% of what they hear, 30% of what they see, et cetera. Indeed, it was partly the bogus claims floating around the learning industry in the late 1990s that made me optimistic that a research-based consulting practice would find an audience, that perhaps the learning field could be protected from snake-oil charlatans.

Bogus claims are not merely inert flotsam to be navigated around. At a minimum, they take attention away from learning practices that are more fundamental and effective, pushing us to waste time and resources. More insidious is that they proactively cause harm, hurting learners and weakening our learning outcomes.

I wish I could report that Work-Learning Research, which I started twenty years ago, has had the influence that Samuel Hopkins Adams had with his journalism. Alas, I am a faint voice in the howling wind of our industry. Fortunately, there are many muckraking research-to-practice practitioners today, including folks like Paul Kirschner, Patti Shank, Guy Wallace, Pedro De Bruyckere, Julie Dirksen, Donald Clark, Ruth Clark, Mirjam Neelen, Jane Bozarth, and more. There are also legions of academic researchers who do the science necessary to enable research-to-practice wisdom to be compiled and conveyed to trainers, instructional designers, elearning developers, and learning executives.

I am especially optimistic now that Clark Quinn has compiled, for the first time, the myths, misconceptions, and confusions that imbue the workplace learning field with faulty decision making and ineffective learning practices. As Clark rightly advises, don’t read the book in one sitting. You will find it too much—too many misconceptions and lingering falsehoods, and too much heartache to think that our field could tolerate so much snake oil.

Here’s what we don’t realize. Today’s workplace-learning snake oil is costing us billions of dollars in wasted effort, misspent resources, ill-advised decisions, and distraction from the science-of-learning fundamentals that have proven to be effective! Every time a trainer reads an article on learning styles and adjusts his or her training to make it suitable for visual, auditory, kinesthetic, and olfactory learners, time is wasted, money is spent, and learning is hurt. Every time an instructional designer goes to a conference and hears that neuroscience should guide learning design, he or she takes this faulty meme back to colleagues and infects them with false hope and ineffective learning strategies. Every time a Chief Learning Officer hears that learning events should be shrunk to 4-minute microlearning videos, that storytelling is everything, that all learning is social, that virtual reality is the future of learning—every time our learning executives jump on a bandwagon and send us to training or conferences or hire experts in these multitudinous fascinations—we are diverted from the veritable essence of learning. We waste our own development budgets on snake-oil nostrums. We waste time organizing ourselves around wrong-headed initiatives. We ignore what really works, all the while costing our organizations billions of dollars in waste and ineffective learning practices.

Let us start anew today. We can begin with Clark’s book. It is a veritable treasure chest of wisdom. But let’s keep going. Let’s stay skeptical. Let’s look to the scientific research for knowledge. Let’s become more demanding and knowledgeable ourselves, knowing that we all have more to learn. Let’s look to the research translators who know the work that we do as instructional designers, trainers, and developers. Let’s do our own testing. Let’s improve our evaluation systems so that we get better feedback day by day. Let’s pilot, rework, improve, and continue to learn!

As the history of patent medicine shows, we must be forever vigilant against our own blindness and against those who will sell us the miraculous hope of snake-oil cure-alls.

The Learning-Transfer Evaluation Model (LTEM) Updated to Version 12

The Learning-Transfer Evaluation Model (LTEM) and accompanying Report were updated today with two major changes:

  • The model has been inverted to put the better evaluation methods at the top instead of at the bottom.
  • The model now uses the word “Tier” to refer to the different levels within the model—to distinguish these from the levels of the Kirkpatrick-Katzell model.

This will be the last update to LTEM for the foreseeable future.

You can find the latest version of LTEM and the accompanying report by clicking here.

Dealing with Emotional Readiness — What Should We Be Doing?


I included this piece in my newsletter this morning (which you can sign up for here) and it seemed to really resonate with people, so I’m including it here.

I’ve always had a high tolerance for pain, but breaking my collarbone at the end of February really sent me crashing down a mountain. Lying in bed, I got to thinking about the emotional side of workplace performance. I don’t have brilliant insights here, just maybe some thoughts that will get you thinking.

Skiing with my family in Vermont, it had been a very good week. My wife and I, skiing together on our next-to-last day on the mountain, went to look for the kids who told us they’d be skiing in the terrain park (where the jumps are). My wife skied down first, then I went. There was a little jump, about a foot high, of the kind I’d jumped many times. But this time would be different.

As I sailed over the jump — slowly, because I’m wary of going too fast and flying too far — I looked down and saw, NOT snow, but stairs. WTF? Every other time I took a small jump there was snow on the other side. Not metal stairs. Not dry metal stairs. In mid-air my thought was, “okay, just stay calm, you’ll ski over the stairs back to snow.” Alas, I came crashing down on my left shoulder, my collarbone splintering into five or six pieces, and ended up 20 feet down the hill. I knew right away that things were bad. I knew that my life would be upended for weeks or months. I knew that miserable times lay ahead.

I got up quickly. I was in shock and knew it. I looked up the mountain back at the jump. Freakin’ stairs!! What the hell were they doing there? I was rip-roaring mad! One of my skis was still on the stairs. The dry surface must have grabbed it, preventing me from skiing further down the slope. I retrieved my ski. A few people skied by me. My wife was long gone down the mountain. I was in shock and I was mad as hell and I couldn’t think straight, but I knew I shouldn’t sit down, so I just stood there for five or ten minutes in a daze. Finally someone asked if I was okay, and I yelled crazy loud for the whole damn mountain to hear, “NO!” He was nice and said he’d contact the ski patrol.

I’ll spare you the details of the long road to recovery — a recovery not yet complete — but the notable events are that I had badly broken my collarbone, badly sprained my right thumb and mildly sprained my left thumb, couldn’t button my shirts or pants for a while, had to lie in bed in one position or the pain would be too great, watched a ton of Netflix (I highly recommend Seven Seconds!), couldn’t do my work, couldn’t help around the house, got surgery on my collarbone, got pneumonia, went to physical therapy, etc… Enough!

Feeling completely useless, I couldn’t help reflecting on the emotional side of learning, development, and workplace performance in general. In L&D, we tend to be helping people who are able to learn and take action — but maybe not all the people we touch are emotionally present and able. Some are certainly dealing with family crises, personal insecurities, previous job setbacks, and the like. Are we doing enough for them?

I’m not a person prone to depression, but I was clearly down for the count. My ability to do meaningful work was nil. At first it was the pain and the opiates. Later it was the knowledge that I just couldn’t get much work done, that I was unable to keep up with promises I’d made, that I was falling behind. I knew, intellectually, that I just had to wait it out — and this was a great comfort. But still, my inability to think and to work reminded me that as a learning professional I ought to be more empathetic with learners who may be suffering as well.

Usually, dealing with emotional issues of an employee falls to the employee and his or her manager. I used to be a leadership trainer and I don’t remember preparing my learners for how to deal with direct reports who might be emotionally unready to fully engage with work. Fortunately today we are willing to talk about individual differences, but I think we might be forgetting the roller-coaster ride of being human, that we may differ in our emotional readiness on any given day. Managers/supervisors rightly are the best resource for dealing with such issues, but we in L&D might have a role to play as well.

I don’t have answers here. I wish I did. Probably it begins with empathy. We also can help more when we know our learners better — and when we can look them in the eyes. This is tricky business though. We’re not qualified to be therapists, and simple solutions like being nice and kind and keeping things positive are not always the answer. We know from the research that challenging people with realistic decision-making challenges is very beneficial. Giving honest feedback on poor performance is beneficial.

We should probably avoid scolding, punishment, and reprimands. Competition has been shown to be harmful in at least some learning situations. Leaderboards may make emotional issues worse, and the limited research suggests they generally aren’t very useful anyway. But these negative actions are rarely invoked, so we have to look deeper.

I wish I had more wisdom about this. I wish there was research-based evidence I could draw on. I wish I could say more than just be human, empathetic, understanding.

Now that I’m aware of this, I’m going to keep my eyes and ears open to learning more about how we as learning professionals can design learning interventions to be more sensitive to the ups and downs of our fellow travelers.

If you’ve got good ideas, please send them my way or use the LinkedIn Post generated from this to join the discussion.

Will Thalheimer Interviewed by Jeffrey Dalto


Series of Four Interviews

I was recently interviewed by Jeffrey Dalto of Convergence Training. Jeffrey is a big fan of research-based practice. He did a great job compiling the interviews.

Click on the title of each one to read the interview:

Preparing to Attend a Learning Conference in 2018 and Beyond


Conferences can be beautiful things—helping us learn, building relationships that help us grow and bring us joy, prompting us to see patterns in our industry we might miss otherwise, helping us set our agenda for what we need to learn more fully.


Conferences can be ugly things—teaching us myths, reinforcing our misconceptions, connecting us to people who steer us toward misinformation, trapping us in echo chambers of bad thinking, dropping us into a vendor-infested shark tank that can lead us to buy stuff that’s not that helpful or is actually harmful, and pushing us to set our learning agenda on topics that distract us from what’s really important.

Given this dual reality, your job as a conference attendee is to be smart and skeptical, and to work to validate your learning. In the Training Maximizers model, the first goal is ensuring our learning interventions are built from a base of “valid, credible content.” At conferences, where we curate our own learning, we have to be sure we are imbibing the good stuff and avoiding the poison. Here, I’ll highlight a few things to keep in mind as you attend a conference. I’ll aim to make this especially relevant for this year, 2018, when you are likely to encounter certain memes and themes.

Drinking the Good Stuff

  • Look for speakers who have a background doing two things: (1) studying the scientific research (not opinion research), and (2) working with real-world learning professionals to implement research-based practices.
  • If speakers make statements without evidence, ask for the evidence or the research—or be highly skeptical.
  • If things seem almost too good to be true, warn yourself that learning is complicated and there are no magic solutions.
  • Be careful not to get sucked into groupthink. Just because others seem to like something doesn’t necessarily make it good. Think for yourself.
  • Remember that correlation does not mean causation. Just because some factors seem to move in the same direction doesn’t mean that one caused the other. It could be the other way around. Or some third factor may have caused both to move in the same direction.

Prepare Yourself for This Year’s Shiny Objects

  • Learning Styles — Learning Styles is bogus, but it keeps coming up every year. Don’t buy into it. Learn about it first. The Debunker.Club has a nice post on why we should avoid learning styles. Read it. And don’t let people tell you that learning styles is bad but learning preferences is good. They’re pulling the wool over your eyes.
  • Dale’s Cone with Percentages — People do NOT remember 10% of what they read, 20% of what they hear, 30% of what they see (or anything similar). Here’s the Internet’s #1 URL debunking this silly myth.
  • Neuroscience and Learning — It’s a very hot topic, with vendors touting neuroscience to impress you. But neuroscience at this time has nothing to say about learning.
  • Microlearning — Because it’s a hot topic, vendors and consultants are yapping about microlearning endlessly. But microlearning is not a thing. It’s many things. Here’s the definitive definition of microlearning, if I do say so myself.
  • AI, Machine Learning, and Big Data — Sexy stuff certainly, but it’s not clear whether these things can be applied to learning, or whether they can be applied now (given the state of our knowledge). Beware of taking these claims too seriously. Be open, but skeptical.
  • Gamification — We are almost over this fad, thankfully. Still, keep in mind that gamification, like microlearning, comprises multiple learning methods. Gamification is NOT a thing.
  • Personalization — Personalization is a great idea, if carried out properly. Be careful if what someone calls personalization is just another way of saying learning styles. Also, don’t buy into the idea that personalization is new. It’s quite old. See Skinner and Keller back in the mid-1900s.
  • Learning Analytics — There is a lot of movement in learning evaluation, but much of it is a wrong-headed focus on pretty dashboards or an exclusive focus on business impact. Look for folks who are talking about how to get better feedback to make learning better. I’ll tout my own effort to develop a new approach to gathering learner feedback. But beware and do NOT just do smile sheets (said by the guy who wrote a book on smile sheets)! Beware of vendors telling you to focus only on measuring behavior and business results. Read why here.
  • Kirkpatrick-Katzell Four-Level Model of Evaluation — A constant in the workplace learning field for the past 60 years. But even with recent changes it still has too many problems to be worthwhile. See the new Learning-Transfer Evaluation Model (LTEM), a worthy replacement.

Wow! So much to be worried about.

Well, sorry to say, I’m surely missing some stuff. It’s up to you to be smart and skeptical while staying open to new ideas.

You might consider joining the Debunker Club, folks who have agreed on the importance of debunking myths in the learning field.

Guest Post by Brett Christensen: How I Was Fooled by Dale’s Cone


This is a guest post by Brett Christensen of Workplace Performance Consulting (www.workplaceperformance.ca/)

In this post, Brett tells us a story he recounted at a gathering of Debunker Club members at the 2018 ISPI conference in Seattle. It was such a telling story that I asked him if he would write a blog post sharing his lessons learned with you. It’s a cautionary tale about how easy it is to be fooled by information about learning that is too good to be true.

One thing to know before you read Brett’s post. He’s Canadian, which explains two things about what you will read, one of which is that he uses Canadian spellings. I’ll let you figure out the other thing.

______________________________

How I Was Fooled by Dale’s Cone

Why do we debunk?

A handful of members of the Debunker Club had the rare opportunity to meet in person on the morning of 09 April 2018 at the Starbucks Reserve Roastery in sunny (sic) Seattle prior to the second day of the International Society of Performance Improvement’s (ISPI) annual conference.

After introducing ourselves and learning that we had a “newbie” in our midst who had learned about the meeting from a friend’s re-tweet (see Networking Power on my blog), Will asked “Why do you debunk?” I somewhat sheepishly admitted that the root cause of my debunking desires could be traced back to a presentation I had done with a couple of colleagues in 2006 which was very early in my training and performance career. This was before I had discovered ISPI and before I understood and embraced the principles of evidence-based practice and scientific rigour.

We were working as e-Learning Instructional Designers (evangelists?) at the time, and we were trying hard to communicate the benefits of e-Learning when it was designed correctly, which as we all know includes the design of activities that assist in transfer of learning. When we discovered Dale’s Cone – with the bad, bad, bad numbers – it made total sense to us. Insert foreboding music here.

The following image is an example of what we had seen (a problematic version of Dale’s Cone):

One of many bogus versions of Dale’s Cone

Our aim was to show our training-development colleagues that Dale’s Cone (with the numbers) was valid and that we should all endeavour to design activity into our training. We developed three different scenarios, one for each group. One group would read silently, one would read to each other out loud, and the last group would have an activity included. Everyone would then complete a short assessment to measure transfer. The hope (Hypothesis? Pipe dream?) was to show that the farther down the cone you went, the higher the transfer would be.

Well! That was not the outcome at all. In fact, if I remember correctly, everyone had similar scores on the exercise, and the result was the exact opposite of what we were looking for. Rather than dig deeper into that when we got back home, we were on to the next big thing, and Dale’s Cone faded in my memory. Before I go on, I’d like to point out that we weren’t total “hacks!” Our ISD process was based on valid models, and we applied Clark and Mayer’s (2007) principles in all our work. We even received a Gold e-Learning Award from the Canadian Society for Training and Development, now the Institute for Performance and Learning (I4PL).

It wasn’t until much later, after being in ISPI for a number of years, that I got to know Will, our head debunker, and read his research on Dale’s Cone! I was enlightened and a bit embarrassed that I had contributed to spreading bad “ju-ju” in the field. But hey – you don’t know what you don’t know. A couple of years after I found Will and finished my MSc, he started The Debunker Club. I knew I had to right my wrongs of the past and help spread the word to raise awareness of the myths and fads that continue to permeate our profession.

That’s why I am a debunker. Thank you, Will, for making me smarter in the work I do.

______________________________

Will’s Note: Brett is being much too kind. There are many people who take debunking very seriously these days. There are folks like De Bruyckere, Kirschner, and Hulshof, who wrote a book on learning myths. There is Clark Quinn, whose new debunking book is being released this month. There are Guy Wallace, Patti Shank, Julie Dirksen, Mirjam Neelen, Ruth Clark, Jane Bozarth, and many, many, many others (sorry if I’m forgetting you!). Now, there is also Brett Christensen, who has been very active on social media over the last few years, debunking myths and more. The Debunker Club has over 600 members, and over 50 people have applied for membership in the last month alone. And note, you are all invited to join.

Of course, debunking works most effectively if everybody jumps in and takes a stand. We must all stay current with the learning research and speak up gently and respectfully when we see bogus information being passed around.

Thanks Brett for sharing your story!! Most of us must admit that we have been taken in by bogus learning myths at some point in our careers. I know I have, and it’s a great reminder to stay humble and skeptical.

And let me point out a feature of Brett’s story that is easy to miss. Did you notice that Brett and his team actually did rigorous evaluation of their learning intervention? It was this evaluation that enabled Brett and his colleagues to learn how well things had gone. Now imagine if Brett and his team hadn’t done a good evaluation. They would never have learned that the methods they tried were not helpful in maximizing learning outcomes! Indeed, who knows what would have happened when they learned years later that the Dale’s Cone numbers were bogus. They might not have believed the truth of it!

Finally, let me say that Dale’s Cone itself, although not really research-based, is not the myth we’re talking about. It’s when Dale’s Cone is bastardized with the bogus numbers that it becomes truly problematic. See the link above entitled “research on Dale’s Cone” for many other examples of bastardized cones.

Thanks again, Brett, for reminding us about what’s at stake. When myths are shared, the learning field loses trust, we learning professionals waste time, and our organizations bear the cost of misspent funds. Our learners are also subjected to willy-nilly experimentation that hurts their learning.


Vendors Seeking Confirmatory Research in the Learning Field


I’ve been at the helm of Work-Learning Research, Inc. for almost 20 years. Ever since I began to have a following as a research-to-practice consultant, I’ve been approached by vendors to “research” their products. The great majority who approach me are basically asking me to tell the industry that their products are good. I tell these vendors that I don’t do that kind of “research,” but if they want a fair, honest, and research-based evaluation of their product for their own benefit—advice not for public consumption but for their own feedback and deliberations—I can do that for them. Some take me up on this, but most don’t.

I recently got another request and I thought I’d share what this looks like (I’ve removed identifying information):

Vendor:

I’m reaching out as the co-founder of [GreatNewCompany], a [high-tech blankety-bling] platform. We’re trying to create a product that [does incredibly wonderful things to change the world of learning].

I wanted to ask if you’d consider reviewing our product? I know you’ve spoken to [this industry luminary about such-and-such] and wondered if this was an area of research you’d planned to do more work in?

A free account has access to almost all features but is just limited to [25] unique recipients [https URL generously offered]. If you need more access to perform a comprehensive review or have any questions then please let me know.

I understand that this isn’t a small ask as it’d take a decent amount of your time but thought I’d see if you found us interesting.

Gentleman Researcher/Consultant:

I do review products, but not for public consumption. I do it to provide feedback to developers, not for marketing purposes.

My cost is [such-and-such] per hour.

Let me know if you’re interested.

Vendor:

Thanks for letting me know – it’s appreciated.

We’d be interested in some consultancy on helping raise awareness of our product and to better reach more customers. We’re not sure if we’re just failing at marketing or whether our product just doesn’t have the broad appeal. Do you think you’d be a good fit helping us with that?

Thanks.

Gentleman Researcher/Consultant:

It’s a crazy market now, with lots of new entries. Very hard to gain visibility and traction.

I don’t schlep for others. I run a high-integrity consultancy here. SMILE.

One recommendation I make is to actually do good research on your product. This helps you to learn more and it gives you something to talk about in your content marketing efforts. A way to stand above the screaming crowd.

I can help you with high-integrity research, but this usually costs a ton…

Vendor:

Hi Will,

Thanks again for the thoughts, sounds like we’re a bad fit for the kind of consultancy that we need so I appreciate you being open about that.

Cheers!

THE END

A happy ending?

================

Conclusions:

  • Be careful when you hear about product endorsements. They may be paid for.
  • Remember, not all communications that are called “research” are created equal.
  • Look for consultants who can’t be bought. You want valid advice, not advice tilted toward those who pay the consultants.
  • Look for vendors who tell true stories, who honestly research their products, who learn from their experience.
  • Be skeptical of communications coming out of trade associations when those messages are paid for directly or indirectly (through long commercial association between the vendor and the association).
  • Be even more skeptical of best-in-industry lists where those listed pay to be listed. Yes! These exist!
  • In general, be skeptical and look to work with those who have integrity. They exist too!


The Learning-Transfer Evaluation Model (LTEM)

NOTICE OF UPDATE (17 May 2018):

The LTEM Model and accompanying Report were updated today and can be found below.

Two major changes were included:

  • The model has been inverted to put the better evaluation methods at the top instead of at the bottom.
  • The model now uses the word “Tier” to refer to the different levels within the model—to distinguish these from the levels of the Kirkpatrick-Katzell model.

This will be the last update to LTEM for the foreseeable future.


This blog post introduces a new learning-evaluation model, the Learning-Transfer Evaluation Model (LTEM).


Why We Need a New Evaluation Model

It is well past time for a new learning-evaluation model for the workplace learning field. The Kirkpatrick-Katzell Model is over 60 years old. It was born in a time before computers, before cognitive psychology revolutionized the learning field, before the training field was transformed from one that focused on the classroom learning experience to one focused on work performance.

The Kirkpatrick-Katzell model—created by Raymond Katzell and popularized by Donald Kirkpatrick—is the dominant standard in our field. It has also done a tremendous amount of harm, pushing us to rely on inadequate evaluation practices and poor learning designs.

I am not the only critic of the Kirkpatrick-Katzell model. There are legions of us. If you start typing “Criticisms of the Ki” into Google, it anticipates “Criticisms of the Kirkpatrick Model” as one of the most popular searches.

Here’s what a seminal research review said about the Kirkpatrick-Katzell model (before the model’s name change):

The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders…

The New Model

For the past year or so I’ve been working to develop a new learning-evaluation model. The current version is the twelfth iteration, improved after reflection, after asking some of the smartest people in our industry to provide feedback, and after sharing earlier versions with conference attendees at the 2017 ISPI innovation and design-thinking conference and the 2018 Learning Technologies conference in London.

Special thanks to the following people who provided significant feedback that improved the model and/or the accompanying article:

Julie Dirksen, Clark Quinn, Roy Pollock, Adam Neaman, Yvon Dalat, Emma Weber, Scott Weersing, Mark Jenkins, Ingrid Guerra-Lopez, Rob Brinkerhoff, Trudy Mandeville, Mike Rustici

The model, which I’ve named the Learning-Transfer Evaluation Model (LTEM, pronounced L-tem), is a one-page, eight-tier model, augmented with color coding and descriptive explanations. In addition to the model itself, I’ve prepared a 34-page report describing the need for the model, the rationale for its design, and recommendations on how to use it.

You can access the model and the report by clicking on the following links:


Release Notes

The LTEM model and report were researched, conceived, and written by Dr. Will Thalheimer of Work-Learning Research, Inc., with significant and indispensable input from others. No one sponsored or funded this work. It was a labor of love and is provided as a valentine for the workplace learning field on February 14th, 2018 (Version 11). Version 12 was released on May 17th, 2018 based on feedback from its use. The model and report are copyrighted by Will Thalheimer, but you are free to share them as is, as long as you don’t sell them.

If you would like to contact me (Will Thalheimer), you can do that at this link: https://www.worklearning.com/contact/

If you would like to sign up for my list, you can do that here: https://www.worklearning.com/sign-up/