The LEARNNOVATORS team (specifically Santhosh Kumar) asked if I would join them in their Crystal Balling with Learnnovators interview series, and I accepted! They have some really great people on the series, I recommend that you check it out!

The most impressive thing was that they must have studied my whole career history and read my publication list and watched my videos because they came up with a whole set of very pertinent and important questions. I was BLOWN AWAY—completely IMPRESSED! And, given their dedication, I spent a ton of time preparing and answering their questions.

It’s a two-part series and here are the links:

Here are some of the quotes they pulled out and/or I’d like to highlight:

Learning is one of the most wondrous, complex, and important areas of human functioning.

The explosion of different learning technologies beyond authoring tools and LMSs is likely to create a wave of innovations in learning.

Data can be good, but also very very bad.

Learning Analytics is poised to cause problems as well. People are measuring all the wrong things. They are measuring what is easy to measure in learning, but not what is important.

We will be bamboozled by vendors who say they are using AI, but are not, or who are using just 1% AI and claiming that their product is AI-based.

Our senior managers don’t understand learning; they think it is easy, so they don’t support L&D as they should.

Because our L&D leaders live in a world where they are not understood, they do stupid stuff like pretending to align learning with business terminology and business-school vibes—forgetting to align first with learning.

We lie to our senior leaders when we show them our learning data—our smile sheets and our attendance data. We then manage toward these superstitious targets, causing a gross loss of effectiveness.

Learning is hard and learning that is focused on work is even harder because our learners have other priorities—so we shouldn’t beat ourselves up too much.

We know from the science of human cognition that when people encounter visual stimuli, their eyes move rapidly from one object to another and back again trying to comprehend what they see. I call this the “eye-path phenomenon.” So, because of this inherent human tendency, we as presenters—as learning designers too!—have to design our presentation slides to align with these eye-path movements.

Organizations now—and even more so in the near future—will use many tools in a Learning-Technology Stack. These will include (1) platforms that offer asynchronous cloud-based learning environments that enable and encourage better learning designs, (2) tools that enable realistic practice in decision-making, (3) tools that reinforce and remind learners, (4) spaced-learning tools, (5) habit-support tools, (6) insight-learning tools (those that enable creative ideation and innovation), et cetera.

Learnnovators asked me what I hoped for the learning and development field. Here’s what I said:

Nobody is good at predicting the future, so I will share the vision I hope for. I hope we in learning and development continue to be passionate about helping other people learn and perform at their best. I hope we recognize that we have a responsibility not just to our organizations, but beyond business results to our learners, their coworkers/families/friends, to the community, society, and the environment. I hope we become brilliantly professionalized, having rigorous standards, a well-researched body of knowledge, higher salaries, and career paths beyond L&D. I hope we measure better, using our results to improve what we do. I hope we, more and more, take a small-s scientific approach to our practices, doing more A-B testing, compiling a database of meaningful results, building virtuous cycles of continuous improvement. I hope we develop better tools to make building better learning—and better performance—easier and more effective. And I hope we continue to feel good about our contributions to learning. Learning is at the heart of our humanity!

Industry awards are hugely prominent in the workplace learning field and send ripples of positive and negative effects through individuals and organizations. Awards affect vendor and consultant revenues and viability; learning-department reputations and autonomy; and individual promotion, salary, and recruitment opportunities. Because of their outsized influence, we should examine industry award processes to determine their strengths and weaknesses, to ascertain how helpful or harmful they currently are, and to suggest improvements where we can.

The Promise of Learning Industry Awards

Industry awards seem to hold so much promise, with these potential benefits:

Application Effects

  • Learning and Development
    Those who apply for awards have an opportunity to reflect on their own practices and thus to learn and improve, based both on that reflection and on any feedback they receive from the judges of their applications.
  • Nudging Improvement
    Those who apply (and even those who just review an awards application) may be nudged toward better practices based on the questions or requirements outlined.

Publicity of Winners Effect

  • Role Modeling
    Selected winners and the description of their work can set aspirational benchmarks for other organizations.
  • Rewarding of Good Effort
    Selected winners can be acknowledged and rewarded for their hard work, innovation, and results.
  • Promotion and Recruitment Effects
    Individuals selected for awards can be deservedly promoted or recruited to new opportunities.
  • Resourcing and Autonomy Effects
    Learning departments can earn reputation credits within their organizations that can be cashed in for resources and permission to act autonomously and avoid micromanagement.
  • Vendor Marketing
    Vendors who win can publicize and support their credibility and brand.
  • Purchasing Support
    Organizations who need products or services can be directed to vendors who have been vetted as excellent.

Benefits of Judging

  • Market Intelligence
    Judges who participate can learn about best practices, innovations, and trends that they can use in their own work.

NOTE: At the very end of this article, I will come back to each and every one of these promised benefits and assess how well our industry awards are helping or hurting.

The Overarching Requirements of Awards

Awards can be said to be useful if they produce valid, credible, fair, and ethical results. Ideally, we expect our awards to represent all players within the industry or subsegment—and to select from this group the objectively best exemplars based on valid, relevant, critical criteria.

The Awards Funnel

To make this happen, we can imagine a funnel, where people and/or organizations have an equal opportunity to be selected for an award. They enter the funnel at the top and then elements of the awards process winnow the field until only the best remain at the bottom of the funnel.

How Are We Doing?

How well do our awards processes meet the best practices suggested in the Awards Funnel?

Application Process Design

Award Eligibility

At the top of the funnel, everybody in the target group should be considered for an award. Particularly if we are claiming that we are choosing “The Best,” everybody should be able to enter the award application process. Ideally, we would not exclude people because they can’t afford the time or cost of the application process. We would not exclude people just because they didn’t know about the contest. Now obviously, these criteria are too stringent for the real world, but they do illustrate how an unrepresentative applicant pool can make the results less meaningful than we might like.

In a recent “Top” list on learning evaluation, none of the following people were included, despite being leaders in learning evaluation: the Kirkpatricks, the Phillipses, Brinkerhoff, and Thalheimer. They did not come out of the bottom of the funnel as winners because they never entered it: they did not apply for the award.

Criteria

The criteria baked into the application process are fundamental to the meaningfulness of the results. If the criteria are not the most important, then the results can’t reflect a valid ranking. Unfortunately, too many awards in the workplace learning field give credit for such things as “numbers of trainers,” “hours of training provided,” “company revenues,” “average training hours per person,” “average class size,” “learner-survey ratings,” etc. These data are not related to learning effectiveness, so they should not impact applicant ratings. Unfortunately, these are taken into account in more than a few of our award contests. Indeed, in one such awards program, these types of data were worth over 20% toward the final scoring of applicants.

Application

Application questions should prompt respondents to answer with information and data that are relevant to assessing critical outcomes. Unfortunately, too many applications have generally worded questions that don’t nudge respondents toward specificity. For example: “Describe how your learning-technology innovation improved your organization’s business results.” Similarly, many applications don’t specifically ask people to show the actual learning event. Even for elearning programs, applicants are sometimes asked to include videos instead of the actual programs.

Data Quality

Applicant Responses

To select the best applicants, each applicant’s responses have to be honest and substantial enough to allow judges to make considered judgments. If applicants stretch the truth, then the results will be biased. Similarly, if some applicants employ awards writers—people skilled in helping companies win awards—then fair comparisons are not possible.

Information Verification

Ideally, application information would be verified to ensure accuracy. This never happens (as far as I can tell)—casting further doubt on the validity of the results.

Judge Performance

Judge Quality

Judges must be highly knowledgeable about learning and all the subsidiary areas involved in the workplace learning field, including the science of learning, memory, and instruction. Ideally, judges would also be up to date on learning technologies, learning innovations, organizational dynamics, statistics, leadership, coaching, learning evaluation, data science, and perhaps even the topic area being taught. It is difficult to see how judges can meet all of these criteria. One awards organizer allows unvetted conference-goers to cast votes for their favorite elearning program. Such judges are presumably somewhat interested and experienced in elearning, but as a whole they are clearly not all experts.

Judge Impartiality

Judges should be impartial, unbiased, blind to applicant identities, and free of conflicts of interest. This is made more difficult because screen shots and videos often include the branding of the end users and learning vendors. And indeed, many award applications ask for the names of the companies involved. In one contest, many of the judges listed were from companies that won awards. One judge I talked with told me that when he got together with his fellow judges and the sponsor contact, he told the team that none of the applicants’ solutions were any good. He was first told to follow through with the process and give them a fair hearing. He said he had already done that. After some more back and forth, he was told to review the applicants by trying to be appreciative. In this case there was a clear bias toward providing positive judgments—and awarding more winners.

Judge Time and Attention

Judges need to give sufficient time or their judgments won’t be accurate. Judges are largely volunteers and they have other commitments. We should assume, I think, that these volunteer judges are working in good faith and want to provide accurate ratings, but where they are squeezed for time, or the applications are confusing, off-target, or stuffed with large amounts of data, there may be poor decision making. For one awards contest, the organizer claimed there were nearly 500 winners representing about 20% of all applicants. This would mean that there were 2,500 applicants. They said they had about 100 judges. If this was true, that would be 25 applications for each judge to review, and note that this assumes only one judge per application (which isn’t a good practice anyway, as more are needed). This seems like a recipe for judges to do as little as possible per application they review. In another awards event, the judges went from table to table in a very loud room, having to judge 50-plus entries in about 90 minutes. It is impossible to judge fully in that kind of atmosphere.
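Just to make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. It treats the organizer’s claims above (nearly 500 winners, about a 20% win rate, about 100 judges) as inputs rather than verified figures:

```python
# Back-of-the-envelope reviewing load implied by one organizer's claims.
# The inputs are the claimed figures from the paragraph above, not verified data.

claimed_winners = 500      # "nearly 500 winners"
claimed_win_rate = 0.20    # "about 20% of all applicants"
claimed_judges = 100       # "about 100 judges"

applicants = claimed_winners / claimed_win_rate    # implies 2,500 applications
apps_per_judge = applicants / claimed_judges       # 25 applications per judge,
                                                   # assuming only ONE judge per application

print(f"Implied applicant pool: {applicants:.0f}")
print(f"Applications per judge: {apps_per_judge:.0f}")

# With a more defensible three judges per application, the per-judge load triples:
judges_per_app = 3
print(f"With {judges_per_app} judges per application: "
      f"{applicants * judges_per_app / claimed_judges:.0f} reviews per judge")
```

Even under the generous one-judge-per-application assumption, the implied workload leaves little room for careful review.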

Judging Rubric

Bias can occur when evaluating open-ended responses like the essay questions typical of these award applications. One way to reduce bias is to give each judge a rubric with very specific options to guide judges’ decision making, or to ask questions that are themselves in the form of rubrics (see Performance-Focused Smile-Sheet questions as examples). For the award applications I reviewed, such rubrics were not a common occurrence.

Judge Reliability

Given that judging these applications is a subjective exercise—one made more chaotic by the lack of specific questions and rubrics—bias and variability can enter the judging process. It’s helpful to have a set of judges review each application to add some reliability to the judging. This seems to be a common practice, but it may not be a universal one.

Non-Interference

Sponsor Non-Interference

The organizations who sponsor these events could conceivably change or modify the results. This seems a possibility because the award organizations are not disinterested parties. They often earn consulting, advertising, conference, and/or awards-ceremony revenues from the same organizations who are applying for these awards. They could benefit by having low standards or relaxed judging to increase the number of award winners. Indeed, one awards program last year had 26 award categories and gave out 196 gold awards!

Awards organizations might also benefit if well-known companies are among the award winners. If company identities are not hidden, judges may subconsciously give better ratings to a well-respected tech company than to some unknown manufacturing company. Worse, sponsors may be enticed to put their thumbs on the scale to ensure the star companies rise to the top. When applications ask for the number of employees, company revenues, and even such seemingly relevant data points as the number of hours trained, it’s easy to see how the books could be cooked to make the biggest, sexiest companies rise to the top of the rankings.

Except for the evidence described above where a sponsor encouraged a judge to be “appreciative,” I can’t document any cases of direct sponsor interference, but the conditions are ripe for those who might want to exploit the process. One award-sponsoring organization recognized the perception problem and uses a third-party organization to vet the applicants. They also award only one winner in each of the gold, silver, and bronze categories, so the third-party organization has no incentive to be lenient in judging. These are good practices!

Implications

There is so much here—and I’m afraid I am only touching the surface. Despite the dirt and treasure left to be dug and discovered, I am convinced of one thing. I cannot trust the results of most of the learning industry awards. More importantly, these awards don’t give us the benefits we might hope to get from them. Let’s revisit those promised benefits from the very beginning of this article and see how things stack up.

Application Effects

  • Learning and Development
    We had hoped that applicants could learn from their involvement. However, if the wrong criteria are highlighted, they may actually learn to focus on the wrong target outcomes!
  • Nudging Improvement
    We had hoped the awards criteria would nudge applicants and other members of the community to focus on valuable design criteria and outcome measures. Unfortunately, we’ve seen that the criteria are often substandard, possibly even tangential or counter to effective learning-to-performance design.

Publicity of Winners Effect

  • Role Modeling
    We had hoped that winners would be deserving and worthy of being models, but we’ve seen that the many flaws of the various awards processes may result in winners not really being exemplars of excellence.
  • Rewarding of Good Effort
    We had hoped that those doing good work would be acknowledged and rewarded, but now we can see that we might be acknowledging mediocre efforts instead.
  • Promotion and Recruitment Effects
    We had hoped that our best and brightest might get promotions, be recruited, and be rewarded, but now it seems that people might be advantaged willy-nilly.
  • Resourcing and Autonomy Effects
    We had hoped that learning departments that do the best work would gain resources, respect, and reputational advantages; but now we see that learning departments could win an award without really deserving it. Moreover, the best resourced organizations may be able to hire award writers, allocate graphic design help, etc., to push their mediocre effort to award-winning status.
  • Vendor Marketing
    We had hoped that the best vendors would be rewarded, but we can now see that vendors with better marketing skills or resources—rather than the best learning solutions—might be rewarded instead.
  • Purchasing Support
    We had hoped that these industry awards might create market signals to help organizations procure the most effective learning solutions. We can see now that the award signals are extremely unreliable as indicators of effectiveness. If ONE awards organization can manufacture 196 gold medalists and 512 overall in a single year, how esteemed is such an award?

Benefits of Judging

  • Market Intelligence
    We had hoped that judges who participated would learn best practices and innovations, but it seems that the poor criteria involved might nudge judges to focus on information and particulars not as relevant to effective learning design.

What Should We Do Now?

You should draw your own conclusions, but here are my recommendations:

  1. Don’t assume that award winners are deserving or that non-award winners are undeserving.
  2. When evaluating vendors or consultants, ignore the awards they claim to have won—or investigate their solutions yourself.
  3. If you are a senior manager (whether on the learning team or in the broader organization), do not allow your learning teams to apply for these awards, unless you first fully vet the award process. Better to hire research-to-practice experts and evaluation experts to support your learning team’s personal development.
  4. Don’t participate as a judge in these contests unless you first vet their applications, criteria, and the way they handle judging.
  5. If your organization runs an awards contest, reevaluate your process and improve it, where needed. You can use the contents of this article as a guide for improvement.

Mea Culpa

I give an award every year, and I certainly don’t live up to all the standards in this article.

My award, the Neon Elephant Award, is designed to highlight the work of a person or group who utilizes or advocates for practical research-based wisdom. Winners include people like Ruth Clark, Paul Kirschner, K. Anders Ericsson, and Julie Dirksen (among a bunch of other great people; check out the link).

Interestingly, I created the award starting in 2006 because of my dissatisfaction with the awards typical in our industry at that time—awards that measured butts in seats, etc.

It Ain’t Easy — And It Will Never Be Easy!

Organizing an awards process or vetting content is not easy. A few of you may remember the excellent work of Bill Ellet, starting over two decades ago, and his company Training Media Review. It was a monumental effort to evaluate training programs, so monumental in fact that it was unsustainable. When Bill or one of his associates reviewed a training program, they spent hours and hours doing so. They spent more time than our awards judges do, and they didn’t review applications; they reviewed the actual learning programs.

Is a good awards process even possible?

Honestly, I don’t know. There are so many things to get right.

Can they be better?

Yes!

Are they good enough now?

Not most of them!

 

 

12th December 2019

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, Inc., announces the winner of the 2019 Neon Elephant Award, given to David Epstein for writing the book Range: Why Generalists Triumph in a Specialized World, and for his many years as a journalist and science-inspired truth teller.

Click here to learn more about the Neon Elephant Award…

 

2019 Award Winner – David Epstein

David Epstein is an award-winning writer and journalist, having won awards for his writing from such esteemed bodies as the National Academies of Sciences, Engineering, and Medicine, the Society of Professional Journalists, and the National Center on Disability and Journalism—and he has been included in the Best American Science and Nature Writing anthology. David has been a science writer for ProPublica and a senior writer at Sports Illustrated, where he helped break the story on baseball legend Alex Rodriguez’s steroid use. David speaks internationally on performance science and the uses (and misuses) of data, and his TED talk on human athletic performance has been viewed over eight million times.

Mr. Epstein is the author of two books:

David is honored this year for his new book on human learning and development, Range: Why Generalists Triumph in a Specialized World. The book lays out a very strong case for why most people will become better performers if they focus broadly on their development rather than focusing tenaciously and exclusively on one domain. If we want to raise our children to be great soccer players (aka “football” in most places), we’d be better off having them play multiple sports rather than just soccer. If we want to develop the most innovative cancer researchers, we shouldn’t just train them in cancer-related biology and medicine, we should give them a wealth of information and experiences from a wide range of fields.

Range is a phenomenal piece of art and science. Epstein is truly brilliant in compiling and comprehending the science he reviews, while at the same time telling stories and organizing the book in ways that engage and make complex concepts understandable. In writing the book, David debunks the common wisdom that performance is improved most rapidly and effectively by concentrating practice and learning on a narrow focus. Where others have only hinted at the power of a broad developmental pathway, Epstein’s Range builds up a towering landmark of evidence that will remain visible on the horizon of the learning field for decades if not millennia.

We in the workplace learning-and-development field should immerse ourselves in Range—not just in thinking about how to design learning and architect learning contexts, but also in thinking about how to evaluate prospects for recruitment and hiring. It’s likely that we currently undervalue people with broad backgrounds and artificially overvalue people with extreme and narrow talents.

Here is a nice article where Epstein wrestles with a question that elucidates an issue we have in our field—what happens when many people in a field are not following research-based guidelines. The article is set in the medical profession, but there are definite parallels to what we face every day in the learning field.

Epstein is the kind of person we should honor and emulate in the workplace learning field. He is unafraid in seeking the truth, relentless and seemingly inexhaustible in his research efforts, and clear and engaging as a conveyor of information. It is an honor to recognize him as this year’s winner of the Neon Elephant Award.

 

Click here to learn more about the Neon Elephant Award…

Will’s Note: ONE DAY after publishing this first draft, I’ve decided that I mucked this up, mashing up what researchers, research translators, and learning professionals should focus on. Within the next week, I will update this to a second draft. You can still read the original below (for now):

 

Some evidence is better than other evidence. We naturally trust ten well-designed research studies more than one. We trust a well-controlled scientific study more than a poorly controlled one. We trust scientific research more than opinion research, unless all we care about is people’s opinions.

Scientific journal editors have to decide which research articles to accept for publication and which to reject. Practitioners have to decide which research to trust and which to ignore. Politicians have to know which lies to tell and which to withhold (kidding, sort of).

To help themselves make decisions, journal editors regularly rank each article on a continuum from strong research methodology to weak. The medical field regularly uses a level-of-evidence approach to making medical recommendations.

There are many taxonomies for “levels of evidence” or “hierarchy of evidence” as it is commonly called. Wikipedia offers a nice review of the hierarchy-of-evidence concept, including some important criticisms.

Hierarchy of Evidence for Learning Practitioners

The suggested models for level of evidence were created by and for researchers, so they are not directly applicable to learning professionals. Still, it’s helpful for us to have our own hierarchy of evidence, one that we might actually be able to use. For that reason, I’ve created one, adding in the importance of practical evidence that is missing from the research-focused taxonomies. Following the research versions, Level 1 is the best.

  • Level 1 — Evidence from systematic research reviews and/or meta-analyses of all relevant randomized controlled trials (RCTs) that have ALSO been utilized by practitioners and found both beneficial and practical from a cost-time-effort perspective.
  • Level 2 — Same evidence as Level 1, but NOT systematically or sufficiently utilized by practitioners to confirm benefits and practicality.
  • Level 3 — Consistent evidence from a number of RCTs using different contexts and situations and learners; and conducted by different researchers.
  • Level 4 — Evidence from one or more RCTs that utilize the same research context.
  • Level 5 — Evidence from one or more well-designed controlled trials without randomization of learners to different learning factors.
  • Level 6 — Evidence from well-designed cohort or case-control studies.
  • Level 7 — Evidence from descriptive and/or qualitative studies.
  • Level 8 — Evidence from research-to-practice experts.
  • Level 9 — Evidence from the opinion of other authorities, expert committees, etc.
  • Level 10 — Evidence from the opinion of practitioners surveyed, interviewed, focus-grouped, etc.
  • Level 11 — Evidence from the opinion of learners surveyed, interviewed, focus-grouped, etc.
  • Level 12 — Evidence curated from the internet.

Let me consider this Version 1 until I get feedback from you and others!
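As a purely illustrative aside, here is a minimal sketch of how a practitioner might encode this hierarchy in code, say, to tag and compare the sources behind a set of design recommendations. The short level names are my own abbreviations of the levels above, not part of the taxonomy itself:

```python
from enum import IntEnum

# Illustrative encoding of the hierarchy above (Version 1).
# Lower numeric value = stronger evidence, mirroring "Level 1 is the best".
# The member names are my own shorthand for the twelve levels listed above.
class EvidenceLevel(IntEnum):
    META_ANALYSIS_PLUS_PRACTICE = 1   # systematic reviews/meta-analyses, validated in practice
    META_ANALYSIS_ONLY = 2            # same research base, not yet confirmed by practitioners
    MULTIPLE_RCTS_VARIED = 3          # consistent RCTs across contexts, learners, researchers
    RCTS_SAME_CONTEXT = 4
    CONTROLLED_NO_RANDOMIZATION = 5
    COHORT_OR_CASE_CONTROL = 6
    DESCRIPTIVE_OR_QUALITATIVE = 7
    RESEARCH_TO_PRACTICE_EXPERTS = 8
    OTHER_AUTHORITIES = 9
    PRACTITIONER_OPINION = 10
    LEARNER_OPINION = 11
    INTERNET_CURATION = 12

def stronger(a: EvidenceLevel, b: EvidenceLevel) -> EvidenceLevel:
    """Return whichever of two evidence sources sits higher in the hierarchy."""
    return a if a < b else b

# Example: a meta-analysis outranks a practitioner survey.
print(stronger(EvidenceLevel.META_ANALYSIS_ONLY, EvidenceLevel.PRACTITIONER_OPINION).name)
```

The point of the sketch is simply that the levels are ordered, so two competing claims can be weighed by the strength of the evidence behind them.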

Critical Considerations

  1. Some evidence is better than other evidence.
  2. If you’re not an expert in evaluating evidence, get insights from those who are; particularly valuable are research-to-practice experts (those with considerable experience in translating research into practical recommendations).
  3. Opinion research in the learning field is especially problematic, because the learning field comprises both strong and poor conceptions of what works.
  4. Learner opinions are problematic as well because learners often have poor intuitions about what works for them in supporting their learning.
  5. Curating information from the internet is especially problematic because it’s difficult to distinguish between good and poor sources.

Trusted Research to Practice Experts

(in no particular order, they’re all great!)

  • (Me) Will Thalheimer
  • Patti Shank
  • Julie Dirksen
  • Clark Quinn
  • Mirjam Neelen
  • Ruth Clark
  • Donald Clark
  • Karl Kapp
  • Jane Bozarth
  • Ulrich Boser

A huge fiery debate rages in the learning field.

 

What do we call ourselves? Are we instructional designers, learning designers, learning experience designers, learning engineers, etc.? This is an important question, of course, because words matter. But it is also a big freakin’ waste of time, so today, I’m going to end the debate! From now on we will call ourselves by one name. We will never debate this again. We will spend our valuable time on more important matters. You will thank me later! Probably after I am dead.

How do I know the name I propose is the best name? I just know. And you will know it too when you hear the simple brilliance of it.

How do I know the name I propose is the best name? Because Jim Kirkpatrick and I are in almost complete agreement on this, and, well, we have a rocky history.

How do I know the name I propose is the best name? Because it’s NOT the new stylish name everybody’s now printing on their business cards and sharing on LinkedIn. That name is a disaster, as I will explain.

The Most Popular Contenders

I will now list each of the major contenders for what we should call ourselves and then thoroughly eviscerate each one.

Instructional Designer

This is the traditional moniker—used for decades. I have called myself an instructional designer and felt good about it. The term has the benefit of being widely known in our field but it has severe deficiencies. First, if you’re at a party and you tell people you’re an instructional designer, they’re likely to hear “structural designer” or “something-something designer” and think you’re an engineer or a new-age guru who has inhaled too much incense. Second, our job is NOT to create instruction, but to help people learn. Third, our job is NOT ONLY to create instruction to help people learn, but to also create, nurture, or enable contexts that help people learn. Instructional designer is traditional, but not precise. It sends the wrong message. We should discard it.

Learning Designer

This is not bad. It’s my second choice. But it suffers from being too vanilla, too plain, too much lacking in energy. More problematic is that it conveys the notion that we can control learning. We cannot design learning! We can only create or influence situations and materials and messages that enable learning and mathemagenic processes—that is, cognitive processes that give rise to learning. We must discard this label too.

Learning Engineer

This seems reasonable at first glance. We might think our job is to engineer learning—to take the science and technology of learning and use it to blueprint learning interventions. But this is NOT our job. Again, we don’t control learning. We can’t control learning. We can just enable it. Yes! The same argument against “designing learning” can be used against “engineering learning.” We must also reject the learning-engineering label because there are a bunch of crazed technology evangelists running around advocating for learning engineering who think that big data and artificial intelligence are going to solve all the problems of the learning profession. While it is true that data will help support learning efforts, we are more likely to make a mess of this by focusing on what is easy to measure and not on what is important and difficult to measure. We must reject this label too!

Learning Experience Designer

This new label is the HOT new label in our field, but it’s a disastrous turn backward! Is that who we are—designers of experiences? Look, I get it. It seems good on the surface. It overcomes the problem of control. If we design experiences, we rightly admit that we are not able to control learning but can only enable it through learning experiences. That’s good as far as it goes. But is that all there is? NO DAMMIT! It’s a freakin’ cop-out, probably generated and supported by learning-technology platform vendors to help sell their wares! What the hell are we thinking? Isn’t it our responsibility to do more than design experiences? We’re supposed to do everything we can to use learning as a tool to create benefits. We want to help people perform better! We want to help organizations get better results! We want to create benefits that ripple through our learners’ lives and through networks of humanity. Is it okay to just create experiences and be happy with that? If you think so, I wish to hell you’d get out of the learning profession and cast your lack of passion and your incompetence into a field that doesn’t matter as much as learning! Yes! This is that serious!

As learning professionals we need to create experiences, but we also need to influence or create the conditions where our learners are motivated and resourced and supported in applying their learning. We need to utilize learning factors that enable remembering. We need to create knowledge repositories and prompting mechanisms like job aids and performance support. We need to work to create organizational cultures and habits of work that enable learning. We need to support creative thinking so people have insights that they otherwise wouldn’t have. We also must create learning-evaluation systems that give us feedback so we can create cycles of continuous improvement. If we’re just creating experiences, we are in the darkest and most dangerous depths of denial. We must reject this label and immediately erase the term “Learning Experience Designer” from our email signatures, business cards, and LinkedIn profiles!

The Best Moniker for us as Learning Professionals

First, let me say that there are many roles for us learning professionals. I’ve been talking about the overarching design/development role, but there are also trainers, instructors, teachers, professors, lecturers, facilitators, graphic designers, elearning developers, evaluators, database managers, technologists, programmers, LMS technicians, supervisors, team leaders, et cetera, et cetera, et cetera. Acknowledged!!! Now let me continue. Thanks!

A month ago, Mirjam Neelen reached out to me because she is writing a book on how to use the science of learning in our role as learning professionals. She’s doing this with another brilliant research-to-practice advocate, the learning researcher Paul Kirschner, following from their blog, 3-Star Learning. Anyway, Mirjam asked me what recommendation I might have for what we call ourselves. It was a good question, and I gave her my answer.

I gave her THE answer. I’m not sure she agreed and she and Paul and their publisher probably have to negotiate a bit, but regardless, I came away from my discussions with Mirjam convinced that the learning god had spoken to me and asked me to share the good word with you. I will now end this debate. The label we should use instead of the others is Learning Architect. This is who we are! This is who we should be!

Let’s think about what architects do—architects in the traditional sense. They study human nature and human needs, as well as the science and technology of construction, and use that knowledge/wisdom to create buildings that enable us human beings to live well. Architects blueprint the plans—practical plans—for how to build the building and then they support the people who actually construct the buildings to ensure that the building’s features will work as well as possible. After the building is finished, the people in the buildings lead their lives under the influence of the building’s design features. The best architects then assess the outcomes of those design features and suggest modifications and improvements to meet the goals and needs of the inhabitants.

We aspire to be like architects. We don’t control learning, but we’d like to influence it. We’d like to motivate our learners to engage in learning and to apply what they’ve learned. We’d like to support our learners in remembering. We’d like to help them overcome obstacles. We’d like to put structures in place to enable a culture of learning, to give learners support and resources, to keep learners focused on applying what they’ve learned. We’d like to support teams and supervisors in their roles of enabling learning. We’d like to measure learning to get feedback on learning so that we can improve learning and troubleshoot if our learners are having problems using what we’ve created or applying what they’ve learned.

We are learning architects so let’s start calling ourselves by that name!

But Isn’t “Architect” a Protected Name?

Christy Tucker (thanks Christy!) raised an important concern in the comments below, and her concern was echoed by Sean Rea and Brett Christensen. The term “architect” is a protected term, which you can read about on Wikipedia. Architects rightly want to protect their professional reputation and keep their fees high, protected from competition from people with less education, experience, and competence.

But, to my non-legal mind, this is completely irrelevant to our discussion. When we add an adjective, the name is a different name. It’s not legal to call yourself a doctor if you’re not a doctor, but it’s okay to call yourself the computer doctor, the window doctor, the cakemix doctor, the toilet doctor, or the LMS doctor.

While the term “architect” is protected, putting an adjective in front of the name changes everything. A search of LinkedIn for “data architects” lists 57,624 of them. A search of “software architect” finds 172,998. There are 3,110 “performance architects,” 24 “justice architects,” and 178 “sustainability architects.”

Already on LinkedIn, 2,396 people call themselves “learning architects.”

Searching DuckDuckGo, some of the top results were consultants calling themselves learning architects from the UK, New Zealand, Australia. LinkedIn says there are almost 10,000 learning architecture jobs in the United States.

This is a non-issue. First, adding the adjective changes the name legally. Second, even if it didn’t, there is no way that architect credentialing bodies are going to take legal action against the hundreds of thousands of people using the word “architect” with an adjective. I say this, of course, not as a lawyer—and you should not rely on my advice as legal advice.

But still, this has every appearance of being a non-issue and we learning professionals should not be so meek as to shy away from using the term learning architect.

I was listening to a podcast last week that interviewed Jim Kirkpatrick. I like to listen to what Jim and Wendy have to say because many people I speak with in my learning-evaluation work are influenced by what they say and write. As you probably know, I think the Kirkpatrick-Katzell Four-Level Model causes more harm than good, but I like to listen and learn things from the Kirkpatricks even though I never hear them sharing ideas that are critical of their models and teachings. Yes! I’m offering constructive criticism! Anyway, I was listening to the podcast and agreeing with most of what Jim was saying when he mentioned that what we ought to call ourselves is, wait for it, wait for it, wait for it: “Learning-and-Performance Architects!” Did I mention that I just love Jim Kirkpatrick! Jim and I are in complete agreement on this. I’ll quibble that the name Learning-and-Performance Architect is too long, but I agree with the sentiment that we ought to see performance as part of our responsibility.

So I did some internet searching this week for the term “Learning Architect.” I found a job at IBM with that title, estimated by Glassdoor to pay between $104,000 and $146,000, and I think I’m going to apply for that job as this consulting thing is kind of difficult these days, especially having to write incisive witty profound historic blog posts for no money and no fame.

I also found an episode of Connie Malamed’s excellent eLearning Coach podcast in which she reviews a book by the brilliant and provocative Clive Shepherd with the title The New Learning Architect. It was published in 2011 and now has an updated 2016 edition. Interestingly, in a post from earlier this year (2019), Clive is much less demonstrative about advocating for the term Learning Architect, and casually mentions that Learning Solutions Designer is a possibility before rejecting it because of the acronym LSD. I will reject it because designing solutions may give some the idea that we are designing things, when we need to design more than tangible objects.

In searching the internet, I also found three consultants or groups of consultants calling themselves learning architects. I also searched LinkedIn and found that the amazing Tom Kuhlmann has been Vice President of Community at Articulate for 12 years but added the title of Chief Learning Architect four years and eight months ago. I know Tom’s great because of our personal conversations in London and because he’s always sharing news of my good works with the Articulate community (you are, right? Tom?), but most importantly because on Tom’s LinkedIn page one of the world’s top entrepreneurs offered a testimonial that Tom improved his visual presentations by 12.9472%. You can’t make this stuff up, not even if you’re a learning experience designer high on LSD!

Clearly, this Learning Architect idea is not a new thing! But I have it on good authority that now here today, May 24, 2019, we are all learning architects!

Here are two visual representations I sent to Mirjam to help convey the breadth and depth of what a Learning Architect should do:

 

I offer these to encourage reflection and discussion. They were admittedly a rather quick creation, so certainly, they must have blind spots.

Feel free to discuss below or elsewhere the ideas discussed in this article.

And go out and be the best learning architect you can be!

I have it on good authority that you will be…

 

 

 

I’m trying to develop a taxonomy for types of learning. I’ve been working on this for several years, but I want to get one more round of feedback to see if I’m missing anything. Please provide your feedback below or contact me directly.

Types of Learning (Proposed Taxonomy)

SHORT LEARNING

  • READ AND ACKNOWLEDGE (rules, regulations, or policies)
  • WEBINAR (90 minutes or less)
  • DISCUSSION-BASED LEARNING (not training, but more of a discussion to enable learning)

TRADITIONAL GUIDED LEARNING

  • CLASSROOM LEARNING (where an instructor/facilitator leads classroom activities)
  • LIVE-FACILITATED ELEARNING (eLearning facilitated and/or presented by a live person; more involved than a basic webinar)
  • SEMI-FACILITATED ELEARNING (eLearning periodically facilitated by an instructor or learning leader as learning takes place over time)
  • NON-FACILITATED ELEARNING (where materials are presented/available, but no person is actively guiding the learning)

LEARNING OVER TIME

  • SELF-STUDY LEARNING (learners provided materials that they largely learn from on their own)
  • SUBSCRIPTION LEARNING (short nuggets delivered over a week or more)

PRACTICE-BASED LEARNING

  • SKILL-PRACTICE (where focus is on improving based on practicing, not on learning lots of new information)
  • ACTION LEARNING (involving both training and on-the-job experiences designed to support learning)
  • APPRENTICESHIP (where people learn by working under the close guidance of more-experienced others)
  • MENTORSHIP, INTERNSHIP, COACHING, SUPERVISION (where a person gets periodic feedback and guidance to elicit learning)

MISCELLANEOUS LEARNING

  • ONBOARDING (where people are introduced to a new organization, unit, or job role)
  • TEAM LEARNING (where groups of people plan and organize themselves to intentionally learn from each other)

I just came across a letter I wrote back in 2001 to the editor of the magazine of The Association for Psychological Science. I’m sharing it because it shows that we have made only a little progress in creating an ecosystem where research translators play a vital role in facilitating the dissemination of research wisdom.

In the letter, I argued that research wholesalers are needed to bridge the chasm between academic researchers and practitioners.

You can read the letter here.

When I started playing the research-translator role full-time in 1998, I was full of hope that the role would allow me to prosper and that many more research translators would join the fold. At that time, only Ruth Clark and I were doing this in the workplace learning field.

Where are we now? Have things gotten better?

Yes! And No! We now have a handful of folks doing research translation full bore outside the academy, while earning their living as consultants, speakers, research directors, book writers, workshop presenters, learning strategists, learning evaluators, or some combination. Ruth Clark is semi-retired. I, Will Thalheimer, am still at it. We’ve got Patti Shank, Julie Dirksen, Mirjam Neelen, Donald Clark, Jane Bozarth, and Clark Quinn. We’ve got folks who focus more generally on learning, like Ulrich Boser. It’s not always an easy existence for most of these folks, but they don’t show signs of backing down.

Back in 2001, I envisioned something a bit different, however. Today’s research translators are scratching out a living through sheer entrepreneurial ingenuity. I had envisioned the academy embracing research translators as critical to its mission—and paying them a sustainable salary for their efforts. This is not going to happen any time soon, nor are our trade associations stepping up to provide well-paying roles for research translators. You’d think that the most well-compensated of our trade-association leaders—those bringing home seven-figure incomes and funding dancing musical extravaganzas—could afford to trim their salaries and fund a research translator or two to ensure their members were being presented with the most powerful science-based recommendations.

Unfortunately, the forces in the workplace learning field are misaligned. There is no journalism in our field to keep the powerful accountable. There is little or no learning-measurement accountability to push us toward better learning designs and hence require proven research-based recommendations.

But! There are some damn good people who want to create the most effective learning possible. They are driving excellence even with the perfect storm blowing us hither and yon. I’m probably a bit biased, but I see more and more people who want to know what the research says—who want to build the most effective learning possible. I also see, on the flip side, an unwillingness among organizations to pay for research wisdom. Well, they’ll pay for opinion research to find out what everybody else is doing, but they won’t pay for scientific research. They seem to expect that such wisdom can be gleaned quickly from Google.

I’m always an optimist. I figure, if we stand by the river long enough, we will see poor practices washed away.

Anyway, back to that letter. I’m kind of proud of it. I’m happy to have happened upon it today.

The 70-20-10 Framework has been all the rage for the last five or ten years in the workplace learning field. Indeed, I organized a great debate about 70-20-10 through The Debunker Club (you can see the tweet stream here). I have gone on record saying that the numbers don’t have a sound research backing, but that the concept is a good one—particularly the idea that we as learning professionals ought to leverage on-the-job learning where we can.

What is 70-20-10?

The 70-20-10 framework is built on the belief that 10% of workplace learning is, or should be, propelled by formal training; that 20% is, or should be, enabled by learning directly from others; and that 70% is, or should be, the result of employees’ learning through workplace experiences.
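To make the proportions concrete, here is a minimal illustrative sketch that splits a hypothetical development-time budget according to the framework’s percentages. The 200-hour figure is mine, chosen only for the example; it is not part of the framework:

```python
# Illustrative split of a development-time budget under 70-20-10.
# The 200-hour annual budget is a hypothetical figure chosen for the example.
annual_development_hours = 200

split = {
    "experiential (on-the-job) learning": 0.70,
    "social learning (learning from others)": 0.20,
    "formal training": 0.10,
}

for bucket, share in split.items():
    print(f"{bucket}: {annual_development_hours * share:.0f} hours")
# -> 140 hours experiential, 40 hours social, 20 hours formal
```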

Supported by Research?

Given all the energy around 70-20-10, you might think that lots of rigorous scientific research has been done on the framework. Well, you would be wrong!

In fact, up until today (April 19, 2019), only one study on the framework has been published in a scientific journal (my search of PsycINFO reveals only one study). In this post, I will review that one study, published last year:

Johnson, S. J., Blackman, D. A., & Buick, F. (2018). The 70:20:10 framework and the transfer of learning. Human Resource Development Quarterly. Advance online publication.

Caveats

All research has strengths, weaknesses, and limitations—and it’s helpful to acknowledge these so we can think clearly. First, one study cannot be definitive, and this is just one study. Also, this study is qualitative and relies on subjective inputs to draw its conclusions. Ideally, we’d like to have more objective measures utilized. The study also gathers data from a small sample of public-sector workers, whereas ideally we would want a wider, more diverse range of participants.

Methodology

The researchers found a group of organizations who had been bombarded with messages and training to encourage the use of the 70-20-10 model. Specifically, the APSC (The Australian Public Sector Commission), starting in 2011, encouraged the Australian public sector to embrace 70-20-10.

The specific study “draws from the experiences of two groups of Australian public sector managers: senior managers responsible for implementing the 70:20:10 framework within their organization; and middle managers who have undergone management capability development aligned to the 70:20:10 framework. All managers were drawn from the Commonwealth, Victorian, Queensland, and Northern Territory governments.”

A qualitative approach was chosen, according to the researchers, “given the atheoretical nature of the 70:20:10 framework and the lack of theory or evidence to provide a research framework.”

The qualitative approaches used by the researchers were individual structured interviews and group structured interviews.

The researchers chose people to interview based on their experience using the 70-20-10 framework to develop middle managers. “A purposive sampling technique was adopted, selecting participants who had specific knowledge of, and experience with, middle management capability development in line with the 70:20:10 framework.”

The researchers used a qualitative data analysis program (NVivo) to help them organize and make sense of the qualitative data (the words collected in the interviews). According to Wikipedia, “NVivo is intended to help users organize and analyze non-numerical or unstructured data. The software allows users to classify, sort and arrange information; examine relationships in the data; and combine analysis with linking, shaping, searching and modeling.”

Overall Results

The authors conclude the following:

“In terms of implications for practice, the 70:20:10 framework has the potential to better guide the achievement of capability development through improved learning transfer in the public sector. However, this will only occur if future implementation guidelines focus on both the types of learning required and how to integrate them in a meaningful way. Actively addressing the impact that senior managers and peers have in how learning is integrated into the workplace through both social modeling and organizational support… will also need to become a core part of any effective implementation.”

“Using a large qualitative data set that enabled the exploration of participant perspectives and experiences of using the 70:20:10 framework in situ, we found that, despite many Australian public sector organizations implementing the framework, to date it is failing to deliver desired learning transfer results. This failure can be attributed to four misconceptions in the framework’s implementation: (a) an overconfident assumption that unstructured experiential learning will automatically result in capability development; (b) a narrow interpretation of social learning and a failure to recognize the role social learning has in integrating experiential, social and formal learning; (c) the expectation that managerial behavior would automatically change following formal training and development activities without the need to actively support the process; and (d) a lack of recognition of the requirement of a planned and integrated relationship between the elements of the 70:20:10 framework.”

Specific Difficulties

With Experiential Learning

“Senior managers indicated that one reason for adopting the 70:20:10 framework was that the dominant element of 70% development achieved through experiential learning reflected their expectation that employees should learn on the job. However, when talking to the middle managers themselves, it was not clear how such learning was being supported. Participants suggested that one problem was a leadership perception across senior managers that middle managers could automatically transition into middle management roles without a great deal of support or development.”

“The most common concern, however, was that experiential learning efficacy was challenged because managers were acquiring inappropriate behaviors on the job based on what they saw around them every day.”

“We found that experiential learning, as it is currently being implemented, is predominantly unstructured and unmanaged, that is, systems are not put in place in the work environment to support learning. It was anticipated that managers would learn on the job, without adequate preparation, additional support, or resourcing to facilitate effective learning.”

With Social Learning

“Overall, participants welcomed the potential of social learning, which could help them make sense of their context, enabling both sense making of new knowledge acquired and reinforcing what was appropriate both in, and for, their organization. However, they made it clear that, despite apparent organizational awareness of the value of social learning, it was predominantly dependent upon the preferences and working styles of individual managers, rather than being supported systematically through organizationally designed learning programs. Consequently, it was apparent that social learning was not being utilized in the way intended in the 70:20:10 framework in that it was not usually integrated with formal or experiential learning.”

Mentoring

“Mentoring was consistently highlighted by middle and senior managers as being important for both supporting a middle manager’s current job and for building future capacity.”

“Despite mentoring being consistently raised as the most favored form of development, it was not always formally supported by the organization, meaning that, in many instances, mentoring was lacking for middle managers.”

“A lack of systemic approaches to mentoring meant it was fragile and often temporary.”

Peer Support

“Peer support and networking encouraged middle managers to adopt a broader perspective and engage in a community of practice to develop ideas regarding implementing new skills.”

“However, despite managers agreeing that networks and peer support would assist them to build capability and transfer learning to the workplace, there appeared to be few organizationally supported peer learning opportunities. It was largely up to individuals to actively seek out and join their own networks.”

With Formal Learning

“Formal learning programs were recognized by middle and senior managers as important forms of capability development. Attendance was often encouraged for new middle managers.”

“However, not all experiences with formal training programs were positive, with both senior and middle managers reflecting on their ineffectiveness.”

“For the most part, participants reported finishing formal development programs with little to no follow up.”

“There was a lack of both social and experiential support for embedding this learning. The lack of social learning support partly revolved around the high workloads of managers and the lack of time devoted to development activities.”

“The lack of experiential support and senior management feedback meant that many middle managers did not have the opportunity to practice and further develop their new skills, despite their initial enthusiasm.”

“A key issue with this was the lack of direct and clear guidance provided by their line managers.”

“A further issue with formal learning was that it was often designed generically for groups of participants…  The need for specificity also related to the lack of explicit, individualized feedback provided by their line manager to reinforce and embed learning.”

What Should We Make of This Preliminary Research?

Again, with only one study—and a qualitative one conducted on a narrow type of participant—we should be very careful in drawing conclusions.

Still, the study can help us develop hypotheses for further testing—both by researchers and by us as learning professionals.

We also ought to be careful in casting doubt on the 70-20-10 framework itself. Indeed, the research seems to suggest that the framework was not always implemented as intended. On the other hand, when it is demonstrated that a model tends to be used poorly in its routine use, then we should become skeptical that it will produce reliable benefits.

Here is a list of reflections the research generated for me:

  1. Why so much excitement for 70-20-10 with so little research backing?
  2. Formal training was found to have all the problems normally associated with it, especially the lack of follow-through and after-training support—so we still need to work to improve it!
  3. Who will provide continuous support for experiential and social learning? In the research case, the responsibility for implementing on-the-job learning experiences was not clear, and so the implementation was not done or was poorly done.
  4. What does it take in terms of resources, responsibility, and tasking to make experiential and social learning useful? Or, is this just a bridge too far?
  5. The most likely leverage point for on-the-job learning still seems, to me, to be managers. If this is a correct assumption—and really it should be tested—how can we in Learning & Development encourage, support, and resource managers for this role?


 

 

 

I want to thank David Kelly and the eLearning Guild for awarding me the prestigious title of Guild Master.

The Guild Masters are an amazing list of folks, including lots of research-to-practice legends like Ruth Clark, Julie Dirksen, Clark Quinn, Jane Bozarth, Karl Kapp, and others who utilize research-based recommendations in their work.

Delighted to be included!

 

 

Donald Taylor, learning-industry visionary, has just come out with his annual Global Sentiment Survey asking practitioners in the field what topics are the most important right now. The thing that struck me is that the results show that data is becoming more and more important to people, especially as represented in adaptive learning through personalization, artificial intelligence, and learning analytics.

Learning analytics was the most important category for the opinion leaders represented on social media. This seems right to me as someone who will be focused mostly on learning evaluation in 2019.

As Don said in the GoodPractice podcast with Ross Dickie and Owen Ferguson, “We don’t have to prove. We have to improve through learning analytics.”

What I love about Don Taylor’s work here is that he’s clear as sunshine about the strengths and limitations of this survey—and, most importantly, that he takes the time to explain what things mean without over-hyping or sleight-of-hand. It’s a really simple survey, but the results are fascinating—not necessarily about what we should be doing, but about what people in our field think we should be paying attention to. This kind of information is critical to all of us who might need to persuade our teams and stakeholders on how we can be most effective in our learning interventions.

Other findings:

  • Businessy-stuff fell in rated importance, for example, “consulting more deeply in the business,” “showing value,” and “developing the L&D function.”
  • Neuroscience/Cognitive Science fell in importance (most likely, I think, because some folks have been debunking the neuroscience-and-learning connections). And note: These should not really be one category, especially given that people in the know recognize that cognitive science, or more generally learning research, has proven value. Neuroscience, not so much.
  • Mobile delivery and artificial intelligence were the two biggest gainers in terms of popularity.
  • It is very intriguing that people active on social media (perhaps thought leaders, perhaps the opinionated mob) have different views than a more general population of workplace learning professionals. There is an interesting analysis in the book and a nice discussion in the podcast mentioned above.

For those interested in Don Taylor’s work, check out his website.