Posts

For over two years I’ve been compiling and analyzing the research on learning transfer as it relates to workplace learning and development. Today I am releasing my findings to the public.

Here is the Overview from the Research-to-Practice Report:

Learning transfer—or “training transfer” as it is sometimes called—occurs when people learn concepts and/or skills and later utilize those concepts/skills in work situations.1 Because we invest time, effort, and resources to create learning interventions, we hope to get a return on those investments in the form of some tangible benefit—usually some form of improved work outcome. Transfer, then, is our paramount goal. When we transfer, we are successful. When we don’t transfer, we fail.

To be practical about this, it is not enough to help our learners comprehend concepts or understand skills. It is not enough to get them to remember concepts/skills. It is not enough to inspire our learners to be motivated to use what they’ve learned. These results may be necessary, but they are not sufficient. We learning professionals hold transfer sacrosanct because it is the ultimate standard for success and failure.

This research review was conducted to determine factors that can be leveraged by workplace learning professionals to increase transfer success. This effort was not intended to be an exhaustive scientific review, but rather a quick analysis of recent research reviews, meta-analyses, and selected articles from scientific refereed journals. The goal of this review was to distill validated transfer factors—learning design and learning support elements that increase the likelihood that learning will transfer—and make these insights practical for trainers, learning architects, instructional designers, elearning developers, and learning professionals in general. In targeting this goal, this review aligns with transfer researchers’ recent admonition to ensure the scientific research on learning transfer gets packaged in a format that is usable by those who design and develop learning (Baldwin, Ford, Blume, 2017).

Unfortunately, after reviewing the scientific articles referenced in this report as well as others not cited here, my conclusion is that many of the most common transfer approaches have not yet been researched with sufficient rigor or intensity to enable us to have full certainty about how to engineer transfer success. At the end of this report, I make recommendations on how we can have a stronger research base.

Despite the limitations of the research, this quick review did uncover many testable hypotheses about the factors that may support transfer. Factors are presented here in two categories—those with strong support in the research, and those the research identifies as having possible benefits. I begin by highlighting the overall strength of the research.

Special Thanks for Early Sponsorship

Translating scientific research involves a huge investment in time, and to be honest, I am finding it more and more difficult to carve out time to do translational research. So it is with special gratitude that I want to thank Emma Weber of Lever Transfer of Learning for sponsoring me back in 2017 on some of the early research-translation efforts that got me started in compiling the research for this report. Without Lever’s support, this research would not have been started!

Tidbits from the Report

There are 17 research-supported recommended transfer factors and an additional six possible transfer factors. Here is a subset of the supported transfer factors:

  • Transfer occurs most potently to the extent that our learning designs strengthen knowledge and skills.
  • Far transfer hardly ever happens. Near transfer—transfer to contexts similar to those practiced during training or other learning efforts—can happen.
  • Learners who set goals are more likely to transfer.
  • Learners who also utilize triggered action planning will be even more likely to transfer than those who set goals alone.
  • Learners with supervisors who encourage, support, and monitor learning transfer are more likely to successfully transfer.
  • The longer the time between training and transfer, the less likely it is that training-generated knowledge will create benefits for transfer.
  • The more success learners have in their first attempts to transfer what they’ve learned, the more likely they are to persevere with further transfer-supporting behaviors.

The remaining recommendations can be viewed in the report (available below).

Recommendations to Researchers

While transfer researchers have done a great deal of work in uncovering how transfer works, the research base is not as solid as it should be. For example, much of the transfer research uses learners’ subjective estimates of transfer—rather than actual transfer—as the dependent measure. Transfer researchers themselves recognize the limitations of the research base, but they could be doing more. In the report, I offer several recommendations in addition to the improvements they’ve already suggested.

The Research-to-Practice Report

 

Access the report by clicking here…

 

Sign Up for Additional Research-Inspired Practical Recommendations

 

Sign up for Will Thalheimer’s Newsletter here…

Industry awards are hugely prominent in the workplace learning field, and they ripple positive and negative effects through individuals and organizations. Awards affect vendor and consultant revenues and viability; learning departments’ reputations and autonomy; and individuals’ promotion, salary, and recruitment opportunities. Because of their outsized influence, we should examine industry award processes to determine their strengths and weaknesses, ascertain how helpful or harmful they currently are, and suggest improvements where they can be recommended.

The Promise of Learning Industry Awards

Industry awards seem to hold so much promise, with these potential benefits:

Application Effects

  • Learning and Development
    Those who apply for awards have an opportunity to reflect on their own practices and thus to learn and improve based on that reflection and any feedback they might get from those who judge their applications.
  • Nudging Improvement
    Those who apply (and even those who just review an awards application) may be nudged toward better practices based on the questions or requirements outlined.

Publicity of Winners Effect

  • Role Modeling
    Selected winners and the description of their work can set aspirational benchmarks for other organizations.
  • Rewarding of Good Effort
    Selected winners can be acknowledged and rewarded for their hard work, innovation, and results.
  • Promotion and Recruitment Effects
    Individuals selected for awards can be deservedly promoted or recruited to new opportunities.
  • Resourcing and Autonomy Effects
    Learning departments can earn reputation credits within their organizations that can be cashed in for resources and permission to act autonomously and avoid micromanagement.
  • Vendor Marketing
    Vendors who win can publicize and support their credibility and brand.
  • Purchasing Support
    Organizations who need products or services can be directed to vendors who have been vetted as excellent.

Benefits of Judging

  • Market Intelligence
    Judges who participate can learn about best practices, innovations, and trends that they can use in their work.

NOTE: At the very end of this article, I will come back to each and every one of these promised benefits and assess how well our industry awards are helping or hurting.

The Overarching Requirements of Awards

Awards can be said to be useful if they produce valid, credible, fair, and ethical results. Ideally, we expect our awards to represent all players within the industry or subsegment—and to select from this group the objectively best exemplars based on valid, relevant, critical criteria.

The Awards Funnel

To make this happen, we can imagine a funnel, where people and/or organizations have an equal opportunity to be selected for an award. They enter the funnel at the top and then elements of the awards process winnow the field until only the best remain at the bottom of the funnel.

How Are We Doing?

How well do our awards processes meet the best practices suggested in the Awards Funnel?

Application Process Design

Award Eligibility

At the top of the funnel, everybody in the target group should be considered for an award. Particularly if we are claiming that we are choosing “The Best,” everybody should be able to enter the award application process. Ideally, we would not exclude people because they can’t afford the time or cost of the application process. We would not exclude people just because they didn’t know about the contest. Now obviously, these criteria are too stringent for the real world, but they do illustrate how an unrepresentative applicant pool can make the results less meaningful than we might like.

In a recent “Top” list on learning evaluation, none of the following leaders in learning evaluation were included: the Kirkpatricks, the Phillipses, Brinkerhoff, and Thalheimer. They did not come out of the bottom of the funnel as winners because they did not apply for the award.

Criteria

The criteria baked into the application process are fundamental to the meaningfulness of the results. If the criteria are not the most important, then the results can’t reflect a valid ranking. Unfortunately, too many awards in the workplace learning field give credit for such things as “numbers of trainers,” “hours of training provided,” “company revenues,” “average training hours per person,” “average class size,” “learner-survey ratings,” etc. These data are not related to learning effectiveness, so they should not impact applicant ratings. Unfortunately, these are taken into account in more than a few of our award contests. Indeed, in one such awards program, these types of data were worth over 20% toward the final scoring of applicants.

Application

Application questions should prompt respondents to answer with information and data that are relevant to assessing critical outcomes. Unfortunately, too many applications have generally worded questions that don’t nudge respondents toward specificity, for example: “Describe how your learning-technology innovation improved your organization’s business results.” Similarly, many applications don’t specifically ask applicants to show the actual learning event. Even for elearning programs, applicants are sometimes asked to include videos instead of the actual programs.

Data Quality

Applicant Responses

To select the best applicants, each of the applicant responses has to be honest and substantial enough to allow judges to make considered judgments. If applicants stretch the truth, then the results will be biased. Similarly, if some applicants employ awards writers—people skilled in helping companies win awards—then fair comparisons are not possible.

Information Verification

Ideally, application information would be verified to ensure accuracy. This never happens (as far as I can tell)—casting further doubt on the validity of the results.

Judge Performance

Judge Quality

Judges must be highly knowledgeable about learning and all the subsidiary areas involved in the workplace learning field, including the science of learning, memory, and instruction. Ideally, judges would also be up-to-date on learning technologies, learning innovations, organizational dynamics, statistics, leadership, coaching, learning evaluation, data science, and perhaps even the topic area being taught. It is difficult to see how judges can meet all the desired criteria. One awards organizer allows unvetted conference goers to cast votes for their favorite elearning program. The judges are presumably somewhat interested and experienced in elearning, but as a whole they are clearly not all experts.

Judge Impartiality

Judges should be impartial, unbiased, blind to applicant identities, and free of conflicts of interest. This is made more difficult because screenshots and videos often include the branding of the end users and learning vendors. And in fact, many award applications ask for the names of the companies involved. In one contest, many of the judges listed were from companies that won awards. One judge I talked with told me that when he got together with his fellow judges and the sponsor contact, he told the team that none of the applicants’ solutions were any good. He was first told to follow through with the process and give them a fair hearing. He said he had already done that. After some more back and forth, he was told to review the applicants by trying to be appreciative. In this case there was a clear bias toward providing positive judgments—and awarding more winners.

Judge Time and Attention

Judges need to give sufficient time or their judgments won’t be accurate. Judges are largely volunteers, and they have other commitments. We should assume, I think, that these volunteer judges are working in good faith and want to provide accurate ratings, but where they are squeezed for time—or where the applications are confusing, off-target, or padded with large amounts of data—there may be poor decision making. For one awards contest, the organizer claimed there were nearly 500 winners representing about 20% of all applicants. This would mean that there were roughly 2,500 applicants. They said they had about 100 judges. If this were true, that would be 25 applications for each judge to review—and note that this assumes only one judge per application (which isn’t a good practice anyway, as more are needed). This seems like a recipe for judges to do as little as possible per application they review. In another award event, the judges went from table to table in a very loud room, having to judge 50-plus entries in about 90 minutes. It is impossible to judge fully in that kind of atmosphere.
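As a rough sanity check on those claimed numbers, here is a tiny back-of-envelope calculation. The winner count, win rate, and judge count come from the organizer’s claims above; the multi-judge figures are my own assumption about better practice:

```python
# Back-of-envelope check of the judging workload implied by the organizer's claims.
winners = 500        # "nearly 500 winners" (claimed)
win_rate = 0.20      # winners were said to be ~20% of applicants (claimed)
judges = 100         # "about 100 judges" (claimed)

applicants = winners / win_rate            # -> 2,500.0 applications
load_one_judge = applicants / judges       # -> 25.0 reviews each, with only ONE judge per entry
load_two_judges = 2 * load_one_judge       # -> 50.0 reviews each, with the better practice of two judges per entry

print(applicants, load_one_judge, load_two_judges)   # 2500.0 25.0 50.0
```

Even under the most generous assumption of a single judge per application, each volunteer faces a stack of 25 reviews; with two or three judges per entry, the load doubles or triples.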

Judging Rubric

Bias can occur when evaluating open-ended responses like the essay questions typical on these award applications. One way to reduce bias is to give each judge a rubric with very specific options to guide judges’ decision making, or to ask questions that are themselves in the form of rubrics (see Performance-Focused Smile-Sheet questions as examples). For the award applications I reviewed, such rubrics were not common.

Judge Reliability

Given that judging these applications is a subjective exercise—one made more chaotic by the lack of specific questions and rubrics—bias and variability can enter the judging process. It’s helpful to have a set of judges review each application to add some reliability to the judging. This seems to be a common practice, but it may not be a universal one.

Non-Interference

Sponsor Non-Interference

The organizations that sponsor these events could conceivably change or modify the results. This seems a possibility since the award organizations are not disinterested parties. They often earn consulting, advertising, conference, and/or awards-ceremony revenues from the same organizations that apply for these awards. They could benefit by having low standards or relaxed judging to increase the number of award winners. Indeed, one awards program last year had 26 award categories and gave out 196 gold awards!

Awards organizations might also benefit if well-known companies are among the award winners. If company identities are not hidden, judges may subconsciously give better ratings to a well-respected tech company than to some unknown manufacturing company. Worse, sponsors may be enticed to put their thumbs on the scale to ensure the star companies rise to the top. When applications ask for number of employees, company revenues, and even seemingly relevant data points such as number of hours trained, it’s easy to see how the books could be cooked to make the biggest, sexiest companies rise to the top of the rankings.

Except for the evidence described above, where a sponsor encouraged a judge to be “appreciative,” I can’t document any cases of direct sponsor interference, but the conditions are ripe for those who might want to exploit the process. One award-sponsoring organization recognized the perception problem and uses a third-party organization to vet the applicants. It also bestows only one winner in each of its gold, silver, and bronze categories, so the third-party organization has no incentive to be lenient in judging. These are good practices!

Implications

There is so much here—and I’m afraid I am only scratching the surface. Despite the dirt and treasure left to be dug and discovered, I am convinced of one thing. I cannot trust the results of most of the learning industry awards. More importantly, these awards don’t give us the benefits we might hope to get from them. Let’s revisit those promised benefits from the very beginning of this article and see how things stack up.

Application Effects

  • Learning and Development
    We had hoped that applicants could learn from their involvement. However, if the wrong criteria are highlighted, they may actually learn to focus on the wrong target outcomes!
  • Nudging Improvement
    We had hoped the awards criteria would nudge applicants and other members of the community to focus on valuable design criteria and outcome measures. Unfortunately, we’ve seen that the criteria are often substandard, possibly even tangential or counter to effective learning-to-performance design.

Publicity of Winners Effect

  • Role Modeling
    We had hoped that winners would be deserving and worthy of being models, but we’ve seen that the many flaws of the various awards processes may result in winners not really being exemplars of excellence.
  • Rewarding of Good Effort
    We had hoped that those doing good work would be acknowledged and rewarded, but now we can see that we might be acknowledging mediocre efforts instead.
  • Promotion and Recruitment Effects
    We had hoped that our best and brightest might get promotions, be recruited, and be rewarded, but now it seems that people might be advantaged willy-nilly.
  • Resourcing and Autonomy Effects
    We had hoped that learning departments that do the best work would gain resources, respect, and reputational advantages; but now we see that learning departments could win an award without really deserving it. Moreover, the best resourced organizations may be able to hire award writers, allocate graphic design help, etc., to push their mediocre effort to award-winning status.
  • Vendor Marketing
    We had hoped that the best vendors would be rewarded, but we can now see that vendors with better marketing skills or resources—rather than the best learning solutions—might be rewarded instead.
  • Purchasing Support
    We had hoped that these industry awards might create market signals to help organizations procure the most effective learning solutions. We can see now that the award signals are extremely unreliable as indicators of effectiveness. If ONE awards organization can manufacture 196 gold medalists and 512 overall in a single year, how esteemed is such an award?

Benefits of Judging

  • Market Intelligence
    We had hoped that judges who participated would learn best practices and innovations, but it seems that the poor criteria involved might nudge judges to focus on information and particulars not as relevant to effective learning design.

What Should We Do Now?

You should draw your own conclusions, but here are my recommendations:

  1. Don’t assume that award winners are deserving or that non-award winners are undeserving.
  2. When evaluating vendors or consultants, ignore the awards they claim to have won—or investigate their solutions yourself.
  3. If you are a senior manager (whether on the learning team or in the broader organization), do not allow your learning teams to apply for these awards, unless you first fully vet the award process. Better to hire research-to-practice experts and evaluation experts to support your learning team’s personal development.
  4. Don’t participate as a judge in these contests unless you first vet their applications, criteria, and the way they handle judging.
  5. If your organization runs an awards contest, reevaluate your process and improve it, where needed. You can use the contents of this article as a guide for improvement.

Mea Culpa

I give an award every year, and I certainly don’t live up to all the standards in this article.

My award, the Neon Elephant Award, is designed to highlight the work of a person or group who utilizes or advocates for practical research-based wisdom. Winners include Ruth Clark, Paul Kirschner, K. Anders Ericsson, and Julie Dirksen, among a bunch of other great people (check out the link).

Interestingly, I created the award in 2006 because of my dissatisfaction with the awards typical in our industry at that time—awards that measured butts in seats, etc.

It Ain’t Easy — And It Will Never Be Easy!

Organizing an awards process or vetting content is not easy. A few of you may remember the excellent work of Bill Ellet, starting over two decades ago, and his company Training Media Review. It was a monumental effort to evaluate training programs. So monumental in fact that it was unsustainable. When Bill or one of his associates reviewed a training program, they spent hours and hours doing so. They spent more time than our awards judges, and they didn’t review applications; they reviewed the actual learning program.

Is a good awards process even possible?

Honestly, I don’t know. There are so many things to get right.

Can they be better?

Yes!

Are they good enough now?

Not most of them!

Christian Unkelbach and Fabia Högden, researchers at the Universität zu Köln, reviewed research on how pairings with celebrities—or other stimuli—can imbue objects with characteristics that might be beneficial. Their article in Current Directions in Psychological Science (2019, 28(6), 540–546), titled Why Does George Clooney Make Coffee Sexy? The Case for Attribute Conditioning, described earlier research showing how REPEATED PAIRINGS of George Clooney and the Nespresso brand in advertisements imbued the coffee brand with attributes such as cosmopolitan, sophisticated, and seductive. Research on persuasion (see Cialdini, 2009, and a nice blog-post review here) has also demonstrated the power of celebrities to gain attention and persuade.

 

Can we use the power of celebrity to support our training?

Yes! And first realize that you don’t have to have access to worldwide celebrities. There are always people in our organizations who are celebrities as well: our CEOs, our best and brightest, our most beloved. You don’t even really need celebrities to get some kind of transference.

What could celebrity do for us? It could make employees more interested in our training, more likely to pay attention, more likely to apply what they’ve learned, etc.

The only catch I see is that this kind of attribute transference may require multiple pairings, so we’d have to figure out ways to do that without it feeling repetitive.

I, Will Thalheimer, am Available!

George Clooney shouldn’t have all the fun. If you’d like to imbue your learning product or service with a sense of sexy research-inspired sophistication, my services are available. I’m so good, I can even sell overhead transparencies to trainers!

 

I’m joking! Please don’t call! SMILE

Will’s Note: ONE DAY after publishing this first draft, I’ve decided that I mucked this up, mashing up what researchers, research translators, and learning professionals should focus on. Within the next week, I will update this to a second draft. You can still read the original below (for now):

 

Some evidence is better than other evidence. We naturally trust ten well-designed research studies more than one. We trust a well-controlled scientific study more than a poorly controlled one. We trust scientific research more than opinion research, unless all we care about is people’s opinions.

Scientific journal editors have to decide which research articles to accept for publication and which to reject. Practitioners have to decide which research to trust and which to ignore. Politicians have to know which lies to tell and which to withhold (kidding, sort of).

To help themselves make decisions, journal editors regularly rank each article on a continuum from strong research methodology to weak. The medical field regularly uses a level-of-evidence approach to making medical recommendations.

There are many taxonomies for “levels of evidence” or “hierarchy of evidence” as it is commonly called. Wikipedia offers a nice review of the hierarchy-of-evidence concept, including some important criticisms.

Hierarchy of Evidence for Learning Practitioners

The suggested models for levels of evidence were created by and for researchers, so they are not directly applicable to learning professionals. Still, it’s helpful for us to have our own hierarchy of evidence, one that we might actually be able to use. For that reason, I’ve created one, adding in the importance of practical evidence that is missing from the research-focused taxonomies. Following the research versions, Level 1 represents the strongest evidence.

  • Level 1 — Evidence from systematic research reviews and/or meta-analyses of all relevant randomized controlled trials (RCTs) that have ALSO been utilized by practitioners and found both beneficial and practical from a cost-time-effort perspective.
  • Level 2 — Same evidence as Level 1, but NOT systematically or sufficiently utilized by practitioners to confirm benefits and practicality.
  • Level 3 — Consistent evidence from a number of RCTs using different contexts and situations and learners; and conducted by different researchers.
  • Level 4 — Evidence from one or more RCTs that utilize the same research context.
  • Level 5 — Evidence from one or more well-designed controlled trials without randomization of learners to different learning factors.
  • Level 6 — Evidence from well-designed cohort or case-control studies.
  • Level 7 — Evidence from descriptive and/or qualitative studies.
  • Level 8 — Evidence from research-to-practice experts.
  • Level 9 — Evidence from the opinion of other authorities, expert committees, etc.
  • Level 10 — Evidence from the opinion of practitioners surveyed, interviewed, focus-grouped, etc.
  • Level 11 — Evidence from the opinion of learners surveyed, interviewed, focus-grouped, etc.
  • Level 12 — Evidence curated from the internet.

Let me consider this Version 1 until I get feedback from you and others!

Critical Considerations

  1. Some evidence is better than other evidence.
  2. If you’re not an expert in evaluating evidence, get insights from those who are. Particularly valuable are research-to-practice experts (those who have considerable experience in translating research into practical recommendations).
  3. Opinion research in the learning field is especially problematic, because the learning field comprises both strong and poor conceptions of what works.
  4. Learner opinions are problematic as well because learners often have poor intuitions about what works for them in supporting their learning.
  5. Curating information from the internet is especially problematic because it’s difficult to distinguish between good and poor sources.

Trusted Research to Practice Experts

(in no particular order, they’re all great!)

  • (Me) Will Thalheimer
  • Patti Shank
  • Julie Dirksen
  • Clark Quinn
  • Mirjam Neelen
  • Ruth Clark
  • Donald Clark
  • Karl Kapp
  • Jane Bozarth
  • Ulrich Boser

Last month I released a new online, self-paced workshop called Presentation Science: How to Help Your Audience to Engage, Learn, Remember, and Act. The workshop is comparable to a two-day workshop and comprises about 12.5 hours of work, including videos, scenario questions, reflection questions, discussions, and a final assessment.

 

People are beginning to “graduate” from the workshop. Here’s what the first two graduates had to say:

Powerful content here! I love this course. It’s the best online course I’ve taken–ever! I only see one problem with the course. You’ve set the price too low based on the actual value of the course! It’s worth much more than what you are charging, considering the quality. I’d set the price at $1,000 minimum personally! In my opinion, it is worth $10,000 in the first 6 months once a person has successfully applied this to build new trainings!

— Gale Stafford, executive coach and learning architect at County of San Mateo

Will Thalheimer’s Presentation Science Workshop provides a TON of strategies, tactics, and tools backed up by learning science that will help you transform your bullet-riddled, mind-numbing PowerPoint presentations into meaningful, memorable, motivating, and (yes!) magnificent learning events.

— Holly H., senior instructional designer at global energy technology company

 

 

As you can imagine, I’m thrilled with this response. At the same time, I feel a responsibility to continue making the workshop better and better. At a later date, when I’ve gathered more data, I will write about how I think the new online-learning technologies are now poised to enable great learning designs. I’ll also talk about how to utilize these tools to follow research-inspired recommendations. For now, I’m just going to brag a bit! SMILE

And encourage you to consider taking this course for yourself, or recommending it to your organization, your subject-matter experts, trainers, teachers, professors, managers, salespeople, executives—anybody who gives presentations that need to be maximally effective.

 

The Presentation Science Workshop:
Learn More by Clicking Here!

 

 

For those of you who don’t know Matt Richter, President of the Thiagi Group, he’s one of the most innovative thinkers when it comes to creating training that both sizzles and supports work performance. Recently, Matt and I began partnering in a new podcast, Truth In Learning, which I’ll have more to say about later once I figure out where the escape hatch is.

NOW, I want to share with you a brilliant new article that Matt surprised me with, on his efforts to brainstorm innovative ways to use LTEM (the Learning-Transfer Evaluation Model).

You should read his article, but here is his list of seven uses for LTEM:

  1. Learning Evaluation—The primary intent of the LTEM framework.
  2. Instructional Design—To negotiate with stakeholders the outcomes desired.
  3. Training Game Design—To ensure games/activities have an instructional purpose.
  4. Coaching—Helping to build a development plan for those who are coached.
  5. Performance Consulting—To focus on performances that matter along the journey.
  6. Keynoting/Presenting—To ensure a focus on meaningful outcomes, not just infotainment.
  7. Sales/Business Development—To keep sales conversations focused on meaningful outcomes.

We are All in this Together

One of the great benefits of publishing LTEM is that, since its release last year, I’m regularly being contacted by people whose organizations are finding new and innovative ways to utilize it—not just for learning evaluation but as a central element of their learning strategy and practice.

I’m especially pleased with those who have taken LTEM really deep, and I’d like to give a shout out to Elham Arabi, who is doing her doctoral dissertation using LTEM as a spur to support a hospital’s effort to maximize the benefits of their learning interventions. Congrats to her for being accepted as a speaker at the upcoming eLearning Guild Learning Solutions Conference, March 31 to April 2 (2020) in Orlando. The title of her talk is: Using Evaluation Data to Enhance Your Training Programs.

Share Your Examples and Innovations

Please share your innovations and ideas about using LTEM in your workplace, on social media, or by contacting me at https://www.worklearning.com/contact/. I would really love to hear how it’s going, including any obstacles you’ve faced, your success stories, etc.

And, of course, if you’d like me to help your organization utilize LTEM, or just be the face of LTEM to your organization, please contact me so we can set up a time to talk, and consider my LTEM workshop to introduce LTEM to your team.

 

 

People keep asking me for references to the claim that learner surveys are not correlated—or are virtually uncorrelated—with learning results. In this post, I include them, with commentary.

 

 

Major Meta-Analyses

Here are the major meta-analyses (studies that compile the results of many other scientific studies using statistical means to ensure fair and valid comparisons):

For Workplace Training

Alliger, Tannenbaum, Bennett, Traver, & Shotland (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.

Hughes, A. M., Gregory, M. E., Joseph, D. L., Sonesh, S. C., Marlow, S. L., Lacerenza, C. N., Benishek, L. E., King, H. B., & Salas, E. (2016). Saving lives: A meta-analysis of team training in healthcare. Journal of Applied Psychology, 101(9), 1266-1304.

Sitzmann, T., Brown, K. G., Casper, W. J., Ely, K., & Zimmerman, R. D. (2008). A review and meta-analysis of the nomological network of trainee reactions. Journal of Applied Psychology, 93, 280-295.

For University Teaching

Uttl, B., White, C. A., & Gonzalez (2017). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42.

What these Results Say

These four meta-analyses, covering over 200 scientific studies, find that the correlation between smile-sheet ratings and learning averages about .10, which is virtually no correlation at all. Statisticians generally consider correlations below .30 to be weak, so a correlation of .10 is very weak indeed.

What these Results Mean

These results suggest that typical learner surveys are not correlated with learning results.

From a practical standpoint:

 

If you get HIGH MARKS on your smile sheets:

You are almost equally likely to have

(1) An Effective Course

(2) An Ineffective Course

 

If you get LOW MARKS on your smile sheets:

You are almost equally likely to have

(1) A Poorly-Designed Course

(2) A Well-Designed Course
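
To make those odds concrete, here is a minimal simulation—my own illustration, not something from the cited studies—of what a correlation of roughly .10 between smile-sheet scores and learning results implies. It assumes, for simplicity, that both measures are normally distributed:

```python
# Minimal sketch: how predictive are smile sheets if r is about .10?
# Assumes bivariate-normal scores; numbers are illustrative, not from the meta-analyses.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000                      # simulated courses
r = 0.10                         # approximate smile-sheet/learning correlation
cov = [[1.0, r], [r, 1.0]]

scores = rng.multivariate_normal([0.0, 0.0], cov, size=n)
smile, learning = scores[:, 0], scores[:, 1]

high_marks = smile > np.median(smile)          # courses with above-average smile sheets
effective = learning > np.median(learning)     # courses with above-average learning

p = effective[high_marks].mean()
print(f"P(above-average learning | high smile-sheet marks) = {p:.2f}")
# Prints roughly 0.53 -- barely better than the 0.50 of a coin flip,
# which is why high (or low) marks tell you almost nothing about effectiveness.
```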

 

Caveats

It is very likely that the traditional smile sheets that have been used in these scientific studies, while capturing data on learner satisfaction, have been inadequately designed to capture data on learning effectiveness.

I have developed a new approach to learner surveys to capture data on learning effectiveness. This approach is the Performance-Focused Smile Sheet approach as originally conveyed in my 2016 award-winning book. As yet, no scientific studies have been conducted to correlate the new smile sheets with measures of learning. However, many, many organizations are reporting substantial benefits. Researchers or learning professionals who want my updated list of recommended questions can access them here.

Reflections

  1. Although I have written a book on learner surveys, in the new learning evaluation model, LTEM (Learning-Transfer Evaluation Model), I place these smile sheets at Tier 3, out of eight tiers, less valuable than measures of knowledge, decision-making, task performance, transfer, and transfer effects. Yes, learner surveys are worth doing, if done right, but they should not be the only tool we use when we evaluate learning.
  2. The earlier belief—and one notably advocated by Donald, Jim, and Wendy Kirkpatrick—that there was a causal chain from learner reactions to learning, behavior, and results has been shown to be false.
  3. There are three types of questions we can utilize on our smile sheets: (1) Questions that focus on learner satisfaction and the reputation of the learning, (2) Questions that support learning, and (3) Questions that capture information about learning effectiveness.
  4. It is my belief that we focus too much on learner satisfaction, which has been shown to be uncorrelated with learning results—and we also focus too little on questions that gauge learning effectiveness (the main impetus for the creation of Performance-Focused Smile Sheets).
  5. I do believe that learner satisfaction is important, but it is not most important.


CEOs are calling for their companies to be more innovative in the ever-accelerating competitive landscape! Creativity is the key leverage point for innovation. Research I’ve compiled (from the science on creativity) shows that unique and valuable ideas are generated when people and teams look beyond their inner circle to those in their peripheral networks. GIVEN THIS, a smart company will seed itself with outside influencers who are working with new ideas.

But what are the vast majority of big companies doing that kills their own creativity? They are making it difficult or virtually impossible for their front-line departments to hire small businesses and consultants. It’s allowed, but massive walls are being built! And these walls have grown higher over the last five to ten years:

  1. Only fully vetted companies can be hired, requiring small, lean companies to waste time on compliance—or to turn away in frustration. This also causes large-company managers to favor the already-vetted companies, even if a small business or consultant would provide better value or more pertinent products or services.
  2. Master Service Agreements are required (pushing small companies away due to time and legal fees).
  3. Astronomical amounts of insurance are required. Why the hell do consultants need $2 million in insurance, even when they are consulting on non-safety-related issues? Why do they need any insurance at all if they are not impacting critical safety factors?
  4. Companies can’t be hired unless they’ve been in business for 5 or 10 or 15 years, completely eliminating the most unique and innovative small businesses or consultants—those who recently set up shop.
  5. Minimum company revenues are required, often in the millions of dollars.

These barriers, of course, aren’t the only ones pushing large organizations away from small businesses or consultants. Small companies often can’t afford sales forces or marketing budgets, so they are less likely to gain a share of large companies’ attention. Small companies aren’t seen as safe bets because they don’t have a name, or their website is not as beautiful, or they haven’t yet worked with other big-name companies, or they don’t speak the corporate language. Given these surface characteristics, only the bravest, most visionary frontline managers will take the risk to make the creative hire. And even then, their companies are making it increasingly hard for them to follow through.

Don’t be fooled by the high-visibility anecdotes that show a CEO hiring a book author or someone featured in Wired, HBR, or on some podcast. Yes, CEOs and senior managers can easily find ways to hire innovators, and the resulting top-down creativity infusion can be helpful. But it can be harmful as well!!!! Too many times senior managers are too far away from knowing what works and what’s needed on the front lines. They push things innocently, not knowing that they are distracting the troops from what’s most important, or worse, pushing the frontline teams to do stupid stuff against their best judgment.

Even more troublesome, these anecdotes of top-down innovation are too few and far between. There may be ten senior managers who can hire innovation seeds, but there are dozens or hundreds or thousands of folks who might be doing so but can’t.

A little digression: It’s the frontline managers who know what’s needed—or perhaps more importantly the “leveraging managers,” if I can coin a term. These are the managers who are deeply experienced and wise in the work that is getting done, but high enough in the organization to see the business-case big picture. I will specifically exclude “bottle-cap managers,” who have little or no experience in a work area but were placed there because they have business experience. Research shows these kinds of hires are particularly counterproductive for innovation.

Let me summarize.

I’m not selling anything here. I’m in the training, talent development, and learning evaluation business as a consultant—I’m not an innovation consultant! I’m just sharing this out of my own frustration with these stupid, counterproductive barriers that I and my friends in small businesses and consultancies have experienced. I am also venting here to provide a call to action for large organizations: wake the hell up to the harm you are inflicting on yourselves and on the economy in general. By not supporting the most innovative small companies and consultants, you are dumbing down the workforce for years to come!

Alright! I suppose I should offer to help instead of just gripe! I have done extensive research on creativity. But I don’t have a workshop developed, the research is not yet in publishable form, and it’s not really what I’m focused on right now. I’m focused on innovating in learning evaluation (see my new learning-evaluation model and my new method for capturing valid and meaningful data from learners). These are two of the most important innovations in learning evaluation in the past few years!

However, a good friend of mine did, just last month, suggest that the world should see the research on creativity that I’ve compiled (thanks Mirjam!). Given the right organization, situation, and requirements—and the right amount of money—I might be willing to take a break from my learning-evaluation work and bring this research to your organization. Contact me to try and twist my arm!

I’m serious, I really don’t want to do this right now, but if I can capture funds to reinvest in my learning-evaluation innovations, I just might be persuaded. On the contact-me link, you can set up an appointment with me. I’d love to talk with you if you want to talk innovation or learning evaluation.

A huge fiery debate rages in the learning field.

 

What do we call ourselves? Are we instructional designers, learning designers, learning experience designers, learning engineers, etc.? This is an important question, of course, because words matter. But it is also a big freakin’ waste of time, so today, I’m going to end the debate! From now on we will call ourselves by one name. We will never debate this again. We will spend our valuable time on more important matters. You will thank me later! Probably after I am dead.

How do I know the name I propose is the best name? I just know. And you will know it too when you hear the simple brilliance of it.

How do I know the name I propose is the best name? Because Jim Kirkpatrick and I are in almost complete agreement on this, and, well, we have a rocky history.

How do I know the name I propose is the best name? Because it’s NOT the new stylish name everybody’s now printing on their business cards and sharing on LinkedIn. That name is a disaster, as I will explain.

The Most Popular Contenders

I will now list each of the major contenders for what we should call ourselves and then thoroughly eviscerate each one.

Instructional Designer

This is the traditional moniker—used for decades. I have called myself an instructional designer and felt good about it. The term has the benefit of being widely known in our field but it has severe deficiencies. First, if you’re at a party and you tell people you’re an instructional designer, they’re likely to hear “structural designer” or “something-something designer” and think you’re an engineer or a new-age guru who has inhaled too much incense. Second, our job is NOT to create instruction, but to help people learn. Third, our job is NOT ONLY to create instruction to help people learn, but to also create, nurture, or enable contexts that help people learn. Instructional designer is traditional, but not precise. It sends the wrong message. We should discard it.

Learning Designer

This is not bad. It’s my second choice. But it suffers from being too vanilla, too plain, too much lacking in energy. More problematic is that it conveys the notion that we can control learning. We cannot design learning! We can only create or influence situations and materials and messages that enable learning and mathemagenic processes—that is, cognitive processes that give rise to learning. We must discard this label too.

Learning Engineer

This seems reasonable at first glance. We might think our job is to engineer learning—to take the science and technology of learning and use it to blueprint learning interventions. But this is NOT our job. Again, we don’t control learning. We can’t control learning. We can just enable it. Yes! The same argument against “designing learning” can be used against “engineering learning.” We must also reject the learning engineering label because there are a bunch of crazed technology evangelists running around advocating for learning engineering who think that big data and artificial intelligence are going to solve all the problems of the learning profession. While it is true that data will help support learning efforts, we are more likely to make a mess of this by focusing on what is easy to measure and not on what is important and difficult to measure. We must reject this label too!

Learning Experience Designer

This new label is the HOT new label in our field, but it’s a disastrous turn backward! Is that who we are—designers of experiences? Look, I get it. It seems good on the surface. It overcomes the problem of control. If we design experiences, we rightly admit that we are not able to control learning but can only enable it through learning experiences. That’s good as far as it goes. But is that all there is? NO DAMMIT! It’s a freakin’ cop-out, probably generated and supported by learning-technology platform vendors to help sell their wares! What the hell are we thinking? Isn’t it our responsibility to do more than design experiences? We’re supposed to do everything we can to use learning as a tool to create benefits. We want to help people perform better! We want to help organizations get better results! We want to create benefits that ripple through our learners’ lives and through networks of humanity. Is it okay to just create experiences and be happy with that? If you think so, I wish to hell you’d get out of the learning profession and cast your lack of passion and your incompetence into a field that doesn’t matter as much as learning! Yes! This is that serious!

As learning professionals we need to create experiences, but we also need to influence or create the conditions where our learners are motivated and resourced and supported in applying their learning. We need to utilize learning factors that enable remembering. We need to create knowledge repositories and prompting mechanisms like job aids and performance support. We need to work to create organizational cultures and habits of work that enable learning. We need to support creative thinking so people have insights that they otherwise wouldn’t have. We also must create learning-evaluation systems that give us feedback so we can create cycles of continuous improvement. If we’re just creating experiences, we are in the darkest and most dangerous depths of denial. We must reject this label and immediately erase the term “Learning Experience Designer” from our email signatures, business cards, and LinkedIn profiles!

The Best Moniker for us as Learning Professionals

First, let me say that there are many roles for us learning professionals. I’ve been talking about the overarching design/development role, but there are also trainers, instructors, teachers, professors, lecturers, facilitators, graphic designers, elearning developers, evaluators, database managers, technologists, programmers, LMS technicians, supervisors, team leaders, et cetera, et cetera, et cetera. Acknowledged!!! Now let me continue. Thanks!

A month ago, Mirjam Neelen reached out to me because she is writing a book on how to use the science of learning in our role as learning professionals. She’s doing this with another brilliant research-to-practice advocate, the learning researcher Paul Kirschner, following from their blog, 3-Star Learning. Anyway, Mirjam asked me what recommendation I might have for what we call ourselves. It was a good question, and I gave her my answer.

I gave her THE answer. I’m not sure she agreed and she and Paul and their publisher probably have to negotiate a bit, but regardless, I came away from my discussions with Mirjam convinced that the learning god had spoken to me and asked me to share the good word with you. I will now end this debate. The label we should use instead of the others is Learning Architect. This is who we are! This is who we should be!

Let’s think about what architects do—architects in the traditional sense. They study human nature and human needs, as well as the science and technology of construction, and use that knowledge/wisdom to create buildings that enable us human beings to live well. Architects blueprint the plans—practical plans—for how to build the building and then they support the people who actually construct the buildings to ensure that the building’s features will work as well as possible. After the building is finished, the people in the buildings lead their lives under the influence of the building’s design features. The best architects then assess the outcomes of those design features and suggest modifications and improvements to meet the goals and needs of the inhabitants.

We aspire to be like architects. We don’t control learning, but we’d like to influence it. We’d like to motivate our learners to engage in learning and to apply what they’ve learned. We’d like to support our learners in remembering. We’d like to help them overcome obstacles. We’d like to put structures in place to enable a culture of learning, to give learners support and resources, to keep learners focused on applying what they’ve learned. We’d like to support teams and supervisors in their roles of enabling learning. We’d like to measure learning to get feedback on learning so that we can improve learning and troubleshoot if our learners are having problems using what we’ve created or applying what they’ve learned.

We are learning architects so let’s start calling ourselves by that name!

But Isn’t “Architect” a Protected Name?

Christy Tucker (thanks Christy!) raised an important concern in the comments below, and her concern was echoed by Sean Rea and Brett Christensen. The term “architect” is a protected term, which you can read about on Wikipedia. Architects rightly want to protect their professional reputation and keep their fees high, protected from competition from people with less education, experience, and competence.

But, to my non-legal mind, this is completely irrelevant to our discussion. When we add an adjective, the name is a different name. It’s not legal to call yourself a doctor if you’re not a doctor, but it’s okay to call yourself the computer doctor, the window doctor, the cakemix doctor, the toilet doctor, or the LMS doctor.

While the term “architect” is protected, putting an adjective in front of the name changes everything. A search of LinkedIn for “data architects” lists 57,624 of them. A search of “software architect” finds 172,998. There are 3,110 “performance architects,” 24 “justice architects,” and 178 “sustainability architects.”

Already on LinkedIn, 2,396 people call themselves “learning architects.”

When I searched DuckDuckGo, some of the top results were consultants from the UK, New Zealand, and Australia calling themselves learning architects. LinkedIn says there are almost 10,000 learning architecture jobs in the United States.

This is a non-issue. First, adding the adjective changes the name legally. Second, even if it didn’t, there is no way that architect credentialing bodies are going to take legal action against the hundreds of thousands of people using the word “architect” with an adjective. I say this, of course, not as a lawyer—and you should not rely on my advice as legal advice.

But still, this has every appearance of being a non-issue and we learning professionals should not be so meek as to shy away from using the term learning architect.

I was listening to a podcast last week that interviewed Jim Kirkpatrick. I like to listen to what Jim and Wendy have to say because many people I speak with in my learning-evaluation work are influenced by what they say and write. As you probably know, I think the Kirkpatrick-Katzell Four-Level Model causes more harm than good, but I like to listen and learn things from the Kirkpatricks, even though I never hear them sharing ideas that are critical of their models and teachings. Yes! I’m offering constructive criticism! Anyway, I was listening to the podcast and agreeing with most of what Jim was saying when he mentioned that what we ought to call ourselves is, wait for it, wait for it, wait for it: “Learning-and-Performance Architects!” Did I mention that I just love Jim Kirkpatrick! Jim and I are in complete agreement on this. I’ll quibble in that the name Learning-and-Performance Architect is too long, but I agree with the sentiment that we ought to see performance as part of our responsibility.

So I did some internet searching this week for the term “Learning Architect.” I found a job at IBM with that title, estimated by Glassdoor to pay between $104,000 and $146,000, and I think I’m going to apply for that job as this consulting thing is kind of difficult these days, especially having to write incisive witty profound historic blog posts for no money and no fame.

I also found an episode of Connie Malamed’s excellent eLearning Coach podcast in which she reviews a book by the brilliant and provocative Clive Shepherd titled The New Learning Architect. It was published in 2011 and now has an updated 2016 edition. Interestingly, in a post from just this year (2019), Clive is much less demonstrative about advocating for the term Learning Architect, and casually mentions that Learning Solutions Designer is a possibility before rejecting it because of the acronym LSD. I will reject it because designing solutions may give some the idea that we are designing things, when we need to design more than tangible objects.

In searching the internet, I also found three consultants or groups of consultants calling themselves learning architects. I also searched LinkedIn and found that the amazing Tom Kuhlmann has been Vice President of Community at Articulate for 12 years but added the title of Chief Learning Architect four years and eight months ago. I know Tom’s great because of our personal conversations in London and because he’s always sharing news of my good works with the Articulate community (you are, right? Tom?), but most importantly because on Tom’s LinkedIn page one of the world’s top entrepreneurs offered a testimonial that Tom improved his visual presentations by 12.9472%. You can’t make this stuff up, not even if you’re a learning experience designer high on LSD!

Clearly, this Learning Architect idea is not a new thing! But I have it on good authority that now here today, May 24, 2019, we are all learning architects!

Here are two visual representations I sent to Mirjam to help convey the breadth and depth of what a Learning Architect should do:

 

I offer these to encourage reflection and discussion. They were admittedly a rather quick creation, so certainly, they must have blind spots.

Feel free to discuss below or elsewhere the ideas discussed in this article.

And go out and be the best learning architect you can be!

I have it on good authority that you will be…