Posts

The LEARNNOVATORS team (specifically Santhosh Kumar) asked if I would join them in their Crystal Balling with Learnnovators interview series, and I accepted! They have some really great people on the series, so I recommend that you check it out!

The most impressive thing was that they must have studied my whole career history and read my publication list and watched my videos because they came up with a whole set of very pertinent and important questions. I was BLOWN AWAY—completely IMPRESSED! And, given their dedication, I spent a ton of time preparing and answering their questions.

It’s a two-part series and here are the links:

Here are some of the quotes they pulled out and/or I’d like to highlight:

Learning is one of the most wondrous, complex, and important areas of human functioning.

The explosion of different learning technologies beyond authoring tools and LMSs is likely to create a wave of innovations in learning.

Data can be good, but also very very bad.

Learning Analytics is poised to cause problems as well. People are measuring all the wrong things. They are measuring what is easy to measure in learning, but not what is important.

We will be bamboozled by vendors who say they are using AI, but are not, or who are using just 1% AI and claiming that their product is AI-based.

Our senior managers don’t understand learning; they think it is easy, so they don’t support L&D as they should.

Because our L&D leaders live in a world where they are not understood, they do stupid stuff like pretending to align learning with business terminology and business-school vibes—forgetting to align first with learning.

We lie to our senior leaders when we show them our learning data—our smile sheets and our attendance data. We then manage toward these superstitious targets, causing a gross loss of effectiveness.

Learning is hard and learning that is focused on work is even harder because our learners have other priorities—so we shouldn’t beat ourselves up too much.

We know from the science of human cognition that when people encounter visual stimuli, their eyes move rapidly from one object to another and back again trying to comprehend what they see. I call this the “eye-path phenomenon.” So, because of this inherent human tendency, we as presenters—as learning designers too!—have to design our presentation slides to align with these eye-path movements.

Organizations now—and even more so in the near future—will use many tools in a Learning-Technology Stack. These will include (1) platforms that offer asynchronous cloud-based learning environments that enable and encourage better learning designs, (2) tools that enable realistic practice in decision-making, (3) tools that reinforce and remind learners, (4) spaced-learning tools, (5) habit-support tools, (6) insight-learning tools (those that enable creative ideation and innovation), et cetera.

Learnnovators asked me what I hoped for the learning and development field. Here’s what I said:

Nobody is good at predicting the future, so I will share the vision I hope for. I hope we in learning and development continue to be passionate about helping other people learn and perform at their best. I hope we recognize that we have a responsibility not just to our organizations, but beyond business results to our learners, their coworkers/families/friends, to the community, society, and the environs. I hope we become brilliantly professionalized, having rigorous standards, a well-researched body of knowledge, higher salaries, and career paths beyond L&D. I hope we measure better, using our results to improve what we do. I hope we, more and more, take a small-s scientific approach to our practices, doing more A-B testing, compiling a database of meaningful results, building virtuous cycles of continuous improvement. I hope we develop better tools to make building better learning—and better performance—easier and more effective. And I hope we continue to feel good about our contributions to learning. Learning is at the heart of our humanity!

Given the challenges TEACHERS and PROFESSORS are facing with the Coronavirus Pandemic, I’ve decided to make the Presentation Science Online Workshop available to Teachers and Professors for FREE (now through April 30th).

The workshop provides a strong science-of-learning foundation that will help educators make informed decisions as they move their courses online, create video recordings, or use any free time to update their classroom learning designs.

PLEASE share this with educators you know.

https://academy.worklearning.com/library/presentation-science/90041/about/

 

About the Presentation Science Workshop

Presentation Science is a self-paced online workshop designed specifically for trainers, teachers, professors, speakers, CEOs, Executive Directors, managers, military leaders, salespeople, team leads, and anybody else who uses presentation software, to help them help their audiences learn.

Inspired by learning science, this workshop will help speakers and educators to (1) keep their audiences’ attention, (2) support comprehension, (3) motivate audience members to take action, and (4) support them in remembering what’s been taught.

The workshop is also an excellent TRAIN-THE-TRAINER experience, and organizations wanting to engage in a private cohort can make arrangements with Will Thalheimer (workshop creator and host) to do that. You can see the specific pricing options here: https://www.presentationscience.net/pricing.

And for more information about the workshop, see PresentationScience.Net.

Mirjam Neelen and Paul Kirschner have written a truly beautiful book—one that everyone in the workplace learning field should read, study, and keep close at hand. It’s a book of transformational value because it teaches us how to think about our jobs as practitioners in utilizing research-informed ideas to build maximally effective learning architectures.

Their book is titled Evidence-Informed Learning Design: Use Evidence to Create Training Which Improves Performance. The book warns us of learning myths and misconceptions, but it goes deeper, bringing us insights into how these myths arise and how we can disarm them in our work.

Here’s a picture of me and my copy! The book officially goes on sale today in the United States.

 

Click to get your copy of the book from Amazon (US).

The book covers the most powerful research-informed learning factors known to science. Those who follow my work will hear familiar terms like Feedback, Retrieval Practice, and Spacing, but also terms like double-barreled learning, direct instruction, nuanced design, and more. I will keep this book handy in my own work as a research-inspired consultant, author, and provocateur—but this book is not designed for people like me. Evidence-Informed Learning Design is perfect for everyone with more than a year of experience in the workplace learning field.

The book so rightly laments that “the learning field is cracked at its foundation.” It implores us to open our eyes to what works and what doesn’t, and fundamentally to rethink how we as practitioners work in our teams to bring about effective learning.

The book intrigues, with section titles like “Why myths are like zombies,” “No knowledge, no nothing,” and “Pigeonholing galore.”

One of my favorite parts of the book is the interviews with researchers, which delve into the practical ramifications of their work. There are interviews with an AI expert, a neuroscientist, and an expert on complex learning, among others. These interviews will wake up more than a few of us.

What makes this book so powerful is that it combines the work of a practitioner and a researcher. Mirjam is one of our field’s most dedicated practitioners in bringing research inspirations to bear on learning practice. Paul is one of the great academic researchers in doing usable research and bringing that research to bear on educational practice. Together, for many years, they’ve published one of the most important blogs in the workplace learning field, the Three-Star Learning blog (https://3starlearningexperiences.wordpress.com/).

Here are some things you will learn in the book:

Big Picture Concepts:

  • What learning myths to avoid.
  • What learning factors to focus on in your learning designs.
  • How to evaluate research claims.

Specific Concepts:

  • Whether Google searches can supplant training.
  • What neuroscience says about learning, if anything.
  • How to train for complex skills.
  • How AI might help learning, now and in the future.
  • Types of research to be highly skeptical of.
  • Whether you need to read scientific research yourself.
  • Whether you should use learning objectives, or not, or when.
  • Whether learning should be fun.
  • The telltale signs of bad research.

This book is so good that it should be required reading for everyone graduating from a university program in learning and development.

 

 

Click on the book image to see it on Amazon (US).

 

I’m thrilled and delighted to share the news that Jane Bozarth, research-to-practice advocate, author of Show Your Work, and Director of Research for the eLearning Guild, is pledging $1,000 to the Learning Styles Challenge!!

 

 

Jane has been a vigorous debunker of the Learning-Styles Myth for many, many years! For those of you who don’t know, the Learning-Styles Notion is the idea that different people have different styles of learning and that by designing our learning programs to meet each style—that is, to actually provide different learning content or activities to different learners—learning will be improved. Sounds great, but unfortunately, dozens and dozens of research studies and many major research reviews have found the Learning-Styles Notion to be untrue!

 

“Decades of research suggest that learning styles, or the belief that people learn better when they receive instruction in their dominant way of learning, may be one of the most pervasive myths about cognition.”

Nancekivell, S. E., Shah, P., & Gelman, S. A. (2020).
Maybe they’re born with it, or maybe it’s experience:
Toward a deeper understanding of the learning style myth.
Journal of Educational Psychology, 112(2), 221–235.

 

 

“Several reviews that span decades have evaluated the literature on learning styles (e.g., Arter & Jenkins, 1979; Kampwirth & Bates, 1980; Kavale & Forness, 1987; Kavale, Hirshoren, & Forness, 1998; Pashler et al., 2009; Snider, 1992; Stahl, 1999; Tarver & Dawson, 1978), and each has drawn the conclusion that there is no viable evidence to support the theory.”

Willingham, D. T., Hughes, E. M., & Dobolyi, D. G. (2015).
The scientific status of learning styles theories.
Teaching of Psychology, 42(3), 266–271.

 

With Jane’s contribution, the Learning Styles Challenge is up to $6,000! That is, if someone can demonstrate a beneficial effect from using learning styles to design learning, the underwriters will pay that person or group $6,000.

The Learning Styles Challenge began on August 4th, 2006, when I offered $1,000 for the first challenge. In 2014, it expanded to $5,000 when additional pledges were made by Guy Wallace, Sivasailam “Thiagi” Thiagarajan, Bob Carleton, and Bob’s company, Vector Group.

Thank you to Jane Bozarth for her generous contribution to the cause! And check out her excellent research review of the learning-styles literature. Jane’s report is filled with tons of research, but also many very practical recommendations for learning professionals.

In my online-anytime workshop, Presentation Science, I make over one hundred recommendations for giving more effective presentations, based on the science of learning. You can learn more about the workshop by clicking here.

Below is Tip 4 in my workshop marketing effort. Please share it with others if you think they’ll find it useful. This Tip 4 video is a bit longer than Tips 1, 2, and 3, because it takes a bit more time to explain. It’s still only four and a half minutes, but this content is really critical.

Bullet points bore our audiences and cause them pain. We need to get rid of them. In the video, I share one of the most powerful ways to do that!


Tip 4 — Disguising Our Bullet Points

 

Embedded here are the first three tips in my marketing campaign to let people know about my Online-Anytime Workshop, Presentation Science, which you can learn more about by clicking here. I would be grateful if you shared this with those who might be interested.

The Presentation Science online-anytime workshop is designed for anybody who gives presentations, especially those who want their audience members to walk away remembering and acting on the ideas in their presentations. It is also suitable as a train-the-trainer introduction, providing a science-of-learning approach to presenting content.


Tip 1 — ELRA!

 

Tip 2 — The Microphone

 

Tip 3 — The Podium/Lectern


For over two years I’ve been compiling and analyzing the research on learning transfer as it relates to workplace learning and development. Today I am releasing my findings to the public.

Here is the Overview from the Research-to-Practice Report:

Learning transfer—or “training transfer” as it is sometimes called—occurs when people learn concepts and/or skills and later utilize those concepts/skills in work situations. Because we invest time, effort, and resources to create learning interventions, we hope to get a return on those investments in the form of some tangible benefit—usually some form of improved work outcome. Transfer, then, is our paramount goal. When we transfer, we are successful. When we don’t transfer, we fail.

To be practical about this, it is not enough to help our learners comprehend concepts or understand skills. It is not enough to get them to remember concepts/skills. It is not enough to inspire our learners to be motivated to use what they’ve learned. These results may be necessary, but they are not sufficient. We learning professionals hold transfer sacrosanct because it is the ultimate standard for success and failure.

This research review was conducted to determine factors that can be leveraged by workplace learning professionals to increase transfer success. This effort was not intended to be an exhaustive scientific review, but rather a quick analysis of recent research reviews, meta-analyses, and selected articles from scientific refereed journals. The goal of this review was to distill validated transfer factors—learning design and learning support elements that increase the likelihood that learning will transfer—and make these insights practical for trainers, learning architects, instructional designers, elearning developers, and learning professionals in general. In targeting this goal, this review aligns with transfer researchers’ recent admonition to ensure the scientific research on learning transfer gets packaged in a format that is usable by those who design and develop learning (Baldwin, Ford, Blume, 2017).

Unfortunately, after reviewing the scientific articles referenced in this report as well as others not cited here, my conclusion is that many of the most common transfer approaches have not yet been researched with sufficient rigor or intensity to enable us to have full certainty about how to engineer transfer success. At the end of this report, I make recommendations on how we can have a stronger research base.

Despite the limitations of the research, this quick review did uncover many testable hypotheses about the factors that may support transfer. Factors are presented here in two categories—those with strong support in the research, and those the research identifies as having possible benefits. I begin by highlighting the overall strength of the research.

Special Thanks for Early Sponsorship

Translating scientific research involves a huge investment in time, and to be honest, I am finding it more and more difficult to carve out time to do translational research. So it is with special gratitude that I want to thank Emma Weber of Lever Transfer of Learning for sponsoring me back in 2017 on some of the early research-translation efforts that got me started in compiling the research for this report. Without Lever’s support, this research would not have been started!

Tidbits from the Report

There are 17 research-supported recommended transfer factors and an additional six possible transfer factors. Here is a subset of the supported transfer factors:

  • Transfer occurs most potently to the extent that our learning designs strengthen knowledge and skills.
  • Far transfer hardly ever happens. Near transfer—transfer to contexts similar to those practiced during training or other learning efforts—can happen.
  • Learners who set goals are more likely to transfer.
  • Learners who also utilize triggered action planning will be even more likely to transfer than those who set goals alone.
  • Learners with supervisors who encourage, support, and monitor learning transfer are more likely to successfully transfer.
  • The longer the time between training and transfer, the less likely it is that training-generated knowledge will create benefits for transfer.
  • The more success learners have in their first attempts to transfer what they’ve learned, the more likely they are to persevere with further transfer-supporting behaviors.

The remaining recommendations can be viewed in the report (available below).

Recommendations to Researchers

While transfer researchers have done a great deal of work in uncovering how transfer works, the research base is not as solid as it should be. For example, much of the transfer research uses learners’ subjective estimates of transfer—rather than actual transfer—as the dependent measure. Transfer researchers themselves recognize the limitations of the research base, but they could be doing more. In the report, I offer several recommendations in addition to the improvements they’ve already suggested.

The Research-to-Practice Report

 

Access the report by clicking here…

 

Sign Up for Additional Research-Inspired Practical Recommendations

 

Sign up for Will Thalheimer’s Newsletter here…

Industry awards are hugely prominent in the workplace learning field, sending ripples of positive and negative effects through individuals and organizations. Awards affect vendor and consultant revenues and viability; learning-department reputations and autonomy; and individuals’ promotion, salary, and recruitment opportunities. Because of their outsized influence, we should examine industry award processes to determine their strengths and weaknesses, ascertain how helpful or harmful they currently are, and suggest improvements where we can.

The Promise of Learning Industry Awards

Industry awards seem to hold so much promise, with these potential benefits:

Application Effects

  • Learning and Development
    Those who apply for awards have an opportunity to reflect on their own practices and thus to learn and improve, based both on that reflection and on any feedback they might get from those who judge their applications.
  • Nudging Improvement
    Those who apply (and even those who just review an awards application) may be nudged toward better practices based on the questions or requirements outlined.

Publicity of Winners Effect

  • Role Modeling
    Selected winners and the description of their work can set aspirational benchmarks for other organizations.
  • Rewarding of Good Effort
    Selected winners can be acknowledged and rewarded for their hard work, innovation, and results.
  • Promotion and Recruitment Effects
    Individuals selected for awards can be deservedly promoted or recruited to new opportunities.
  • Resourcing and Autonomy Effects
    Learning departments can earn reputation credits within their organizations that can be cashed in for resources and permission to act autonomously and avoid micromanagement.
  • Vendor Marketing
    Vendors who win can publicize and support their credibility and brand.
  • Purchasing Support
    Organizations that need products or services can be directed to vendors who have been vetted as excellent.

Benefits of Judging

  • Market Intelligence
    Judges who participate can learn about best practices, innovations, and trends that they can use in their work.

NOTE: At the very end of this article, I will come back to each and every one of these promised benefits and assess how well our industry awards are helping or hurting.

The Overarching Requirements of Awards

Awards can be said to be useful if they produce valid, credible, fair, and ethical results. Ideally, we expect our awards to represent all players within the industry or subsegment—and to select from this group the objectively best exemplars based on valid, relevant, critical criteria.

The Awards Funnel

To make this happen, we can imagine a funnel, where people and/or organizations have an equal opportunity to be selected for an award. They enter the funnel at the top and then elements of the awards process winnow the field until only the best remain at the bottom of the funnel.

How Are We Doing?

How well do our awards processes meet the best practices suggested in the Awards Funnel?

Application Process Design

Award Eligibility

At the top of the funnel, everybody in the target group should be considered for an award. Particularly if we are claiming that we are choosing “The Best,” everybody should be able to enter the award application process. Ideally, we would not exclude people because they can’t afford the time or cost of the application process. We would not exclude people just because they didn’t know about the contest. Now obviously, these criteria are too stringent for the real world, but they do illustrate how an unrepresentative applicant pool can make the results less meaningful than we might like.

In a recent “Top” list on learning evaluation, none of the following people were included, despite being leaders in learning evaluation: the Kirkpatricks, the Phillipses, Brinkerhoff, and Thalheimer. They did not end up at the bottom of the funnel as winners because they did not apply for the award.

Criteria

The criteria baked into the application process are fundamental to the meaningfulness of the results. If the criteria are not the most important ones, then the results can’t reflect a valid ranking. Unfortunately, too many awards in the workplace learning field give credit for such things as “numbers of trainers,” “hours of training provided,” “company revenues,” “average training hours per person,” “average class size,” “learner-survey ratings,” etc. These data are not related to learning effectiveness, so they should not affect applicant ratings. Yet they are taken into account in more than a few of our award contests. Indeed, in one such awards program, these types of data were worth over 20% of the final scoring of applicants.

Application

Application questions should prompt respondents to answer with information and data relevant to assessing critical outcomes. Unfortunately, too many applications have generally worded questions that don’t nudge respondents to specificity, for example: “Describe how your learning-technology innovation improved your organization’s business results.” Similarly, many applications don’t specifically ask people to show the actual learning event. Even for elearning programs, applicants are sometimes asked to include videos instead of the actual programs.

Data Quality

Applicant Responses

To select the best applicants, each applicant response has to be honest and substantial enough to allow judges to make considered judgments. If applicants stretch the truth, the results will be biased. Similarly, if some applicants employ awards writers (people skilled in helping companies win awards), then fair comparisons are not possible.

Information Verification

Ideally, application information would be verified to ensure accuracy. This never happens (as far as I can tell)—casting further doubt on the validity of the results.

Judge Performance

Judge Quality

Judges must be highly knowledgeable about learning and all the subsidiary areas involved in the workplace learning field, including the science of learning, memory, and instruction. Ideally, judges would also be up to date on learning technologies, learning innovations, organization dynamics, statistics, leadership, coaching, learning evaluation, data science, and perhaps even the topic area being taught. It is difficult to see how judges can meet all the desired criteria. One awards organizer allows unvetted conference-goers to cast votes for their favorite elearning program. The judges are presumably somewhat interested and experienced in elearning, but as a whole they are clearly not all experts.

Judge Impartiality

Judges should be impartial, unbiased, blind to applicant identities, and free of conflicts of interest. This is made more difficult because screenshots and videos often include the branding of the end users and learning vendors. And, in fact, many award applications ask for the names of the companies involved. In one contest, many of the judges listed were from companies that won awards. One judge I talked with told me that when he got together with his fellow judges and the sponsor contact, he told the team that none of the applicants’ solutions were any good. He was first told to follow through with the process and give them a fair hearing. He said he had already done that. After some more back and forth, he was told to review the applicants by trying to be appreciative. In this case, there was a clear bias toward providing positive judgments, and toward awarding more winners.

Judge Time and Attention

Judges need to give sufficient time or their judgments won’t be accurate. Judges are largely volunteers, and they have other commitments. We should assume, I think, that these volunteer judges are working in good faith and want to provide accurate ratings, but where they are squeezed for time, or where the applications are confusing, off-target, or stuffed with large amounts of data, poor decision making may result. For one awards contest, the organizer claimed there were nearly 500 winners representing about 20% of all applicants. This would mean that there were roughly 2,500 applicants. They said they had about 100 judges. If this were true, that would be 25 applications for each judge to review, and note that this assumes only one judge per application (which isn’t a good practice anyway, as more are needed). This seems like a recipe for judges to do as little as possible per application they review. In another award event, the judges went from table to table in a very loud room, having to judge 50-plus entries in about 90 minutes. It is impossible to judge fully in that kind of atmosphere.
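To make that back-of-envelope arithmetic explicit, here is a minimal sketch in Python using the organizer’s claimed figures (roughly 500 winners, a 20% win rate, and about 100 judges). These numbers are their claims rather than verified data, and the per-judge figure assumes only one judge per application.

    # Back-of-envelope check of the judging workload described above.
    # Figures are the organizer's claims, not verified data.
    winners = 500        # approximate number of winners announced
    win_rate = 0.20      # winners said to represent about 20% of all applicants
    judges = 100         # approximate number of volunteer judges

    applicants = winners / win_rate       # implied applicant pool (about 2,500)
    apps_per_judge = applicants / judges  # assumes only ONE judge per application

    print(f"Implied applicants: {applicants:.0f}")
    print(f"Applications per judge: {apps_per_judge:.0f}")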

Judging Rubric

Bias can occur when evaluating open-ended responses like the essay questions typical of these award applications. One way to reduce bias is to give each judge a rubric with very specific options to guide the judges’ decision making, or to ask questions that are themselves in the form of rubrics (see Performance-Focused Smile-Sheet questions as examples). In the award applications I reviewed, such rubrics were not a common occurrence.

Judge Reliability

Given that judging these applications is a subjective exercise—one made more chaotic by the lack of specific questions and rubrics—bias and variability can enter the judging process. It’s helpful to have a set of judges review each application to add some reliability to the judging. This seems to be a common practice, but it may not be a universal one.

Non-Interference

Sponsor Non-Interference

The organizations that sponsor these events could conceivably change or modify the results. This seems a real possibility since the award organizations are not disinterested parties. They often earn consulting, advertising, conference, and/or awards-ceremony revenues from the same organizations that are applying for these awards. They could benefit by having low standards or relaxed judging to increase the number of award winners. Indeed, one awards program last year had 26 award categories and gave out 196 gold awards!

Awards organizations might also benefit if well-known companies are among the award winners. If company identities are not hidden, judges may subconsciously give better ratings to a well-respected tech company than to some unknown manufacturing company. Worse, sponsors may be enticed to put their thumbs on the scale to ensure the star companies rise to the top. When applications ask for number of employees, company revenues, and even seemingly relevant data points such as number of hours trained, it’s easy to see how the books could be cooked to make the biggest, sexiest companies rise to the top of the rankings.

Except for the evidence described above, where a sponsor encouraged a judge to be “appreciative,” I can’t document any cases of direct sponsor interference, but the conditions are ripe for those who might want to exploit the process. One award-sponsoring organization recognized the perception problem and uses a third-party organization to vet the applicants. They also award only one winner in each of the gold, silver, and bronze categories, so the third-party organization has no incentive to be lenient in judging. These are good practices!

Implications

There is so much here, and I’m afraid I am only scratching the surface. Despite the dirt and treasure left to be dug and discovered, I am convinced of one thing: I cannot trust the results of most of the learning industry awards. More importantly, these awards don’t give us the benefits we might hope to get from them. Let’s revisit those promised benefits from the very beginning of this article and see how things stack up.

Application Effects

  • Learning and Development
    We had hoped that applicants could learn from their involvement. However, if the wrong criteria are highlighted, they may actually learn to focus on the wrong target outcomes!
  • Nudging Improvement
    We had hoped the awards criteria would nudge applicants and other members of the community to focus on valuable design criteria and outcome measures. Unfortunately, we’ve seen that the criteria are often substandard, possibly even tangential or counter to effective learning-to-performance design.

Publicity of Winners Effect

  • Role Modeling
    We had hoped that winners would be deserving and worthy of being models, but we’ve seen that the many flaws of the various awards processes may result in winners not really being exemplars of excellence.
  • Rewarding of Good Effort
    We had hoped that those doing good work would be acknowledged and rewarded, but now we can see that we might be acknowledging mediocre efforts instead.
  • Promotion and Recruitment Effects
    We had hoped that our best and brightest might get promotions, be recruited, and be rewarded, but now it seems that people might be advantaged willy-nilly.
  • Resourcing and Autonomy Effects
    We had hoped that learning departments that do the best work would gain resources, respect, and reputational advantages; but now we see that learning departments could win an award without really deserving it. Moreover, the best-resourced organizations may be able to hire award writers, allocate graphic-design help, etc., to push their mediocre efforts to award-winning status.
  • Vendor Marketing
    We had hoped that the best vendors would be rewarded, but we can now see that vendors with better marketing skills or resources—rather than the best learning solutions—might be rewarded instead.
  • Purchasing Support
    We had hoped that these industry awards might create market signals to help organizations procure the most effective learning solutions. We can see now that the award signals are extremely unreliable as indicators of effectiveness. If ONE awards organization can manufacture 196 gold medalists and 512 overall in a single year, how esteemed is such an award?

Benefits of Judging

  • Market Intelligence
    We had hoped that judges who participated would learn best practices and innovations, but it seems that the poor criteria involved might nudge judges to focus on information and particulars that are less relevant to effective learning design.

What Should We Do Now?

You should draw your own conclusions, but here are my recommendations:

  1. Don’t assume that award winners are deserving or that non-award winners are undeserving.
  2. When evaluating vendors or consultants, ignore the awards they claim to have won—or investigate their solutions yourself.
  3. If you are a senior manager (whether on the learning team or in the broader organization), do not allow your learning teams to apply for these awards, unless you first fully vet the award process. Better to hire research-to-practice experts and evaluation experts to support your learning team’s personal development.
  4. Don’t participate as a judge in these contests unless you first vet their applications, criteria, and the way they handle judging.
  5. If your organization runs an awards contest, reevaluate your process and improve it, where needed. You can use the contents of this article as a guide for improvement.

Mea Culpa

I give an award every year, and I certainly don’t live up to all the standards in this article.

My award, the Neon Elephant Award, is designed to highlight the work of a person or group who utilizes or advocates for practical research-based wisdom. Winners include people like Ruth Clark, Paul Kirschner, K. Anders Ericsson, and Julie Dirksen (among a bunch of other great people; check out the link).

Interestingly, I created the award in 2006 because of my dissatisfaction with the awards typical in our industry at that time—awards that measured butts in seats, etc.

It Ain’t Easy — And It Will Never Be Easy!

Organizing an awards process or vetting content is not easy. A few of you may remember the excellent work of Bill Ellet, starting over two decades ago, and his company Training Media Review. It was a monumental effort to evaluate training programs. So monumental, in fact, that it was unsustainable. When Bill or one of his associates reviewed a training program, they spent hours and hours doing so. They spent more time than our awards judges, and they didn’t review applications; they reviewed the actual learning programs.

Is a good awards process even possible?

Honestly, I don’t know. There are so many things to get right.

Can they be better?

Yes!

Are they good enough now?

Not most of them!

Christian Unkelbach and Fabia Högden, researchers at the Universität zu Köln, reviewed research on how pairing celebrities (or other stimuli) with products can imbue those products with characteristics that might be beneficial. Their article in Current Directions in Psychological Science (2019, 28(6), 540–546), titled Why Does George Clooney Make Coffee Sexy? The Case for Attribute Conditioning, described earlier research showing how REPEATED PAIRINGS of George Clooney and the Nespresso brand, in advertisements, imbued the coffee brand with attributes such as cosmopolitan, sophisticated, and seductive. Research on persuasion (see Cialdini, 2009, and here’s a nice blog-post review) has also demonstrated the power of celebrities to gain attention and be persuasive.

 

Can we use the power of celebrity to support our training?

Yes! And first realize that you don’t have to have access to worldwide celebrities. There are always people in our organizations who are celebrities as well: people like our CEOs, our best and brightest, our most beloved. You don’t even really need celebrities to get some kind of transference.

What could celebrity do for us? It could make employees more interested in our training, more likely to pay attention, more likely to apply what they’ve learned, etc.

The only catch I see is that this kind of attribute transference may require multiple pairings, so we’d have to figure out ways to do that without it feeling repetitive.

I, Will Thalheimer, am Available!

George Clooney shouldn’t have all the fun. If you’d like to imbue your learning product or service with a sense of sexy research-inspired sophistication, my services are available. I’m so good, I can even sell overhead transparencies to trainers!

 

I’m joking! Please don’t call! SMILE

Will’s Note: ONE DAY after publishing this first draft, I’ve decided that I mucked this up, mashing up what researchers, research translators, and learning professionals should focus on. Within the next week, I will update this to a second draft. You can still read the original below (for now):

 

Some evidence is better than other evidence. We naturally trust ten well-designed research studies more than one. We trust a well-controlled scientific study more than a poorly controlled one. We trust scientific research more than opinion research, unless all we care about is people’s opinions.

Scientific journal editors have to decide which research articles to accept for publication and which to reject. Practitioners have to decide which research to trust and which to ignore. Politicians have to know which lies to tell and which to withhold (kidding, sort of).

To help themselves make decisions, journal editors regularly rank each article on a continuum from strong research methodology to weak. The medical field routinely uses a level-of-evidence approach to making medical recommendations.

There are many taxonomies of “levels of evidence,” or the “hierarchy of evidence” as it is commonly called. Wikipedia offers a nice review of the hierarchy-of-evidence concept, including some important criticisms.

Hierarchy of Evidence for Learning Practitioners

The suggested models for level of evidence were created by and for researchers, so they are not directly applicable to learning professionals. Still, it’s helpful for us to have our own hierarchy of evidence, one that we might actually be able to use. For that reason, I’ve created one, adding in the importance of practical evidence that is missing from the research-focused taxonomies. As in the research versions, Level 1 is the best. (A small sketch of how these levels might be encoded for tagging your sources appears after the list.)

  • Level 1 — Evidence from systematic research reviews and/or meta-analyses of all relevant randomized controlled trials (RCTs) that have ALSO been utilized by practitioners and found both beneficial and practical from a cost-time-effort perspective.
  • Level 2 — Same evidence as Level 1, but NOT systematically or sufficiently utilized by practitioners to confirm benefits and practicality.
  • Level 3 — Consistent evidence from a number of RCTs using different contexts and situations and learners; and conducted by different researchers.
  • Level 4 — Evidence from one or more RCTs that utilize the same research context.
  • Level 5 — Evidence from one or more well-designed controlled trials without randomization of learners to different learning factors.
  • Level 6 — Evidence from well-designed cohort or case-control studies.
  • Level 7 — Evidence from descriptive and/or qualitative studies.
  • Level 8 — Evidence from research-to-practice experts.
  • Level 9 — Evidence from the opinion of other authorities, expert committees, etc.
  • Level 10 — Evidence from the opinion of practitioners surveyed, interviewed, focus-grouped, etc.
  • Level 11 — Evidence from the opinion of learners surveyed, interviewed, focus-grouped, etc.
  • Level 12 — Evidence curated from the internet.
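
For practitioners who want to tag the sources they collect against this hierarchy, here is a minimal sketch (in Python) of how the levels might be encoded. The level numbers and short labels mirror the list above, but the data structure, the helper function, and the example source are purely illustrative assumptions, not part of any formal standard.

    # Illustrative only: one way to encode the hierarchy above for tagging sources.
    # Lower numbers indicate stronger evidence (Level 1 is the best).
    EVIDENCE_LEVELS = {
        1: "Systematic reviews/meta-analyses of RCTs, also confirmed beneficial and practical by practitioners",
        2: "Same as Level 1, but not sufficiently utilized by practitioners to confirm benefits and practicality",
        3: "Consistent evidence from multiple RCTs across contexts, situations, learners, and researchers",
        4: "One or more RCTs that utilize the same research context",
        5: "Well-designed controlled trials without randomization",
        6: "Well-designed cohort or case-control studies",
        7: "Descriptive and/or qualitative studies",
        8: "Evidence from research-to-practice experts",
        9: "Opinions of other authorities, expert committees, etc.",
        10: "Opinions of practitioners (surveys, interviews, focus groups)",
        11: "Opinions of learners (surveys, interviews, focus groups)",
        12: "Information curated from the internet",
    }

    def stronger(level_a: int, level_b: int) -> int:
        """Return the stronger (lower-numbered) of two evidence levels."""
        return min(level_a, level_b)

    # Hypothetical usage: tag a source, then compare it with another source's level.
    source = {"title": "A meta-analysis of spaced practice", "evidence_level": 3}
    print(EVIDENCE_LEVELS[source["evidence_level"]])
    print("Stronger of Levels 3 and 10:", stronger(3, 10))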

Let me consider this Version 1 until I get feedback from you and others!

Critical Considerations

  1. Some evidence is better than other evidence.
  2. If you’re not an expert in evaluating evidence, get insights from those who are; particularly valuable are research-to-practice experts (those who have considerable experience in translating research into practical recommendations).
  3. Opinion research in the learning field is especially problematic, because the learning field holds a mix of strong and poor conceptions of what works.
  4. Learner opinions are problematic as well because learners often have poor intuitions about what works for them in supporting their learning.
  5. Curating information from the internet is especially problematic because it’s difficult to distinguish between good and poor sources.

Trusted Research to Practice Experts

(in no particular order, they’re all great!)

  • (Me) Will Thalheimer
  • Patti Shank
  • Julie Dirksen
  • Clark Quinn
  • Mirjam Neelen
  • Ruth Clark
  • Donald Clark
  • Karl Kapp
  • Jane Bozarth
  • Ulrich Boser