21st December 2022

Neon Elephant Award Announcement

Dr. Will Thalheimer, Principal at TiER1 Performance, Founder of Work-Learning Research, announces the winner of the 2022 Neon Elephant Award, given this year to Donald Clark for writing the book Learning Experience Design: How to Create Effective Learning that Works, and for his collation of the Great Minds on Learning series (both in the podcast series with John Helmer and in Donald’s tireless work researching and curating critical ideas and thinkers in his Plan B blog).

Click here to learn more about the Neon Elephant Award…

 

2022 Award Winner – Donald Clark

Donald Clark is a successful entrepreneur, professor, researcher, author, blogger, and speaker. He is an internationally renowned thinker in the field of learning technology, having worked in EdTech for over 30 years; having been a leader in many successful learning-technology businesses (both as an executive and board member); and having written extensively on a wide range of topics related to learning and development—in books, articles, and his legendary blog. Relatively early in his career, Donald earned success as one of the original founders of Epic Group plc, a leading online learning company in the UK, an enterprise subsequently floated on the stock market in 1996 and sold in 2005. Since then, as Donald has described it, he has felt “free from the tyranny of employment,” using this privilege to the advantage of the learning field. Donald has become an advocate for research-based practices and intelligent uses of learning technologies. He has also founded, run, and supported learning-technology enterprises, which has further helped spread good learning practices.

Donald Clark’s most recent book—Learning Experience Design: How to Create Effective Learning that Works—stands above and apart from most writing on learning experience design. It is thoroughly grounded in the scientific research on learning and in real-world experience in using and developing learning technologies. Chapter after chapter, it offers an inspiring introduction, a robust review of best practices, and a concise set of practical recommendations. Anyone practicing learning experience design should buy this book today—study it and apply its recommendations. Your learners and organizations will thank you. You will build wildly more effective learning!

Donald Clark’s compilations of our field’s best thinkers and ideas are legendary—or should be! Over many decades, he has tirelessly curated an almost endless treasure trove of golden nuggets on many of the most important ideas in the learning field. This past year, he has brought these to a larger audience through his excellent collaboration with John Helmer, known most famously for his Learning Hack podcast. Donald Clark’s and John Helmer’s Great Minds on Learning webcast and podcast collaboration is fantastic. Donald’s written reviews provide us with an overview of the deep history of the learning field. Here is a blog post that lists many of Donald’s reviews of the great thinkers in our field.

Notable contributions from Donald Clark:


With Gratitude

In his decades of work, Donald Clark has been a tireless advocate for improvements and innovation in the field of learning-and-development and learning technology. He often refers to his work as “provocative,” and he deserves admiration for (1) urging the learning field to embrace scientifically informed practices, (2) urging us to be more forward-looking in our embrace of learning technologies (particularly AI), and (3) being one of our field’s preeminent historians—reminding us of the rich and valuable work of researchers, writers, and practitioners from past centuries to today. It is an honor to recognize Donald as this year’s winner of the Neon Elephant Award.

Click here to learn more about the Neon Elephant Award…

 

 

21st December 2021

Neon Elephant Award Announcement

Dr. Will Thalheimer, Principal at TiER1 Performance, Founder of Work-Learning Research, announces the winners of the 2021 Neon Elephant Award, given to two people this year: Clark Quinn, for writing the book Learning Science for Instructional Designers, and Patti Shank, for writing the book Write Better Multiple-Choice Questions to Assess Learning—and both for their many years translating learning research into practical recommendations.

Click here to learn more about the Neon Elephant Award…

2021 Award Winners – Clark Quinn and Patti Shank

Clark Quinn, PhD, is an internationally recognized consultant and thought leader in learning science, learning technology, and organizational learning. Clark holds a doctorate in Cognitive Psychology from the University of California at San Diego. Since 2001, Clark has been consulting, researching, writing, and speaking through his consulting practice, Quinnovation. Clark has been at the forefront of some of the most important trends in workplace learning, including his early advocacy for mobile learning, his work with the Internet Time Group advocating for a greater emphasis on workplace learning, and his many efforts to bring research-based wisdom to elearning design. With the publication of his new book, Clark again shows leadership—now in the cause of giving instructional designers a clear and highly readable guide to the learning sciences.

Clark is the author of numerous books. The following are representative:

 

Patti Shank, PhD

Patti Shank, PhD, is an internationally recognized learning analyst, writer, and translational researcher in the learning, performance, and talent space. Dr. Shank holds a doctorate in Educational Leadership and Innovation, Instructional Technology from the University of Colorado, Denver, and a master’s degree in Education and Human Development from George Washington University. Since 1996, Patti has been consulting, researching, and writing through her consulting practice, Learning Peaks LLC (pattishank.com). As the best research-to-practice professionals tend to do, Patti has extensive experience as a practitioner, including roles such as training specialist, training supervisor, and manager of training and education. Patti has also played a critical role collaborating with the workplace learning field’s most prominent trade associations—working, sometimes quixotically, to encourage the adoption of research-based wisdom for learning.

Patti is the author of numerous books, focusing not only on evidence-based practices, but also on online learning, elearning, and learning assessment. The following are her most recent books:


With Gratitude

In their decades of work, both Patti Shank and Clark Quinn have lived careers of heroic effort, perseverance, and passion. Their love for the learning-and-development field is deep and true. They don’t settle for half-truths, and they don’t settle for half measures. Rather, they show their mettle even when they get pushback, even when times are tough, even when easier paths might call. It is an honor to recognize Patti and Clark as this year’s winners of the Neon Elephant Award.

 

Click here to learn more about the Neon Elephant Award…

 

 

21st December 2020

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, Inc., announces the winners of the 2020 Neon Elephant Award, given to Mirjam Neelen and Paul Kirschner for writing the book Evidence-Informed Learning Design: Use Evidence to Create Training Which Improves Performance, and for their many years publishing their blog 3-Star Learning Experiences.

Click here to learn more about the Neon Elephant Award…

2020 Award Winners – Mirjam Neelen and Paul Kirschner

Mirjam Neelen is one of the world’s most accomplished research-to-practice practitioners in the workplace learning field. On the practical side, Mirjam has played many roles. As of this writing, she is the Head of Global Learning Design and Learning Sciences at Novartis. She has been a Learning Experience Design Lead at Accenture and at the Learnovate Centre in Dublin, an Instructional Designer at Google, and an Instructional Design Lead at Houghton Mifflin Harcourt. Mirjam utilizes evidence-informed wisdom in her work and also partners with Paul A. Kirschner on the 3-Star Learning Experiences blog to bring research- and evidence-informed insights to the workplace learning field. Mirjam is a member of the Executive Advisory Board of The Learning Development Accelerator.

Paul A. Kirschner is Professor Emeritus at the Open University of the Netherlands and owner of kirschner-ED, an educational consulting practice. Paul is an internationally recognized expert in learning and educational research, with many classic studies to his name. He has served as President of the International Society for the Learning Sciences and is an AERA (American Educational Research Association) Research Fellow (the first European to receive this honor). He has published several very successful books: Ten Steps to Complex Learning, Urban Myths about Learning and Education, and More Urban Myths about Learning and Education. This year he published How Learning Happens: Seminal Works in Educational Psychology and What They Mean in Practice with Carl Hendrick, as well as the book he and Mirjam are honored for here. Kirschner previously won the Neon Elephant Award in 2016 for the book Urban Myths about Learning and Education, written with Pedro De Bruyckere and Casper D. Hulshof. Also, Paul’s co-author on the Ten Steps book, Jeroen van Merriënboer, won the Neon Elephant Award in 2011.

Relevant Websites

Mirjam’s and Paul’s book, Evidence-Informed Learning Design was published only ten months ago, but has already swept the world as a book critical to learning architects and learning executives in their efforts to build the most effective learning designs. In my book review earlier this year I wrote, “Mirjam Neelen and Paul Kirschner have written a truly beautiful book—one that everyone in the workplace learning field should read, study, and keep close at hand. It’s a book of transformational value because it teaches us how to think about our jobs as practitioners in utilizing research-informed ideas to build maximally effective learning architectures.”

Mirjam Neelen and Paul Kirschner are the kind of research translators we should honor and emulate in the workplace learning field. They are unafraid in seeking the truth, passionate in sharing research- and evidence-informed wisdom, dogged in compiling research from scientific journals, and thoughtful in making research ideas accessible to practitioners in our field. It is an honor to recognize Mirjam and Paul as this year’s winners of the Neon Elephant Award.

 

Click here to learn more about the Neon Elephant Award…

CrowdThinking Project on L&D Professionalization

At the L&D Conference 2020, which starts in a few days (seats still available), we are hosting the CrowdThinking Project, a two-pronged crowdsourced exploration designed to create a future-vision for the L&D field.

You and I, as learning professionals, are effective. But surely, if there were more and better structural supports within the industry, within our organizations, and within ourselves, we might be even more effective in our work.


First Part

The first part of the CrowdThinking Project is a survey of people like you, designed to gather data on seven factors that influence our effectiveness and professionalization.

  1. The competencies, skills, and abilities we have as professionals
  2. The requirements and training/education needed to enter the L&D field
  3. The feedback we get on our effectiveness (learning evaluation)
  4. The support we get from our trade organizations
  5. The support/guidance we get from our graduate programs and universities
  6. The support and constraints from our business/organizational stakeholders
  7. The effort, direction, and perseverance we lend to our own development.

I developed this survey with help from Fernando Senior.

This survey is open to all L&D professionals. I ask you to share it widely with your colleagues and friends in the L&D field.

 

Second Part

The second part of the CrowdThinking Project will take place within the L&D Conference (sorry, only if you’ve enrolled). Fernando Senior will take us through a modified world-café-style dialogue, focusing on four key questions.

  1. Consider your current circumstances in your L&D work situation—and more importantly, how those circumstances will change as a result of future trends in learning, technology, business, and society. Given the future you imagine, what will be the most important challenges to your work in L&D?
  2. What capabilities will L&D professionals like us need to acquire in anticipation of these upcoming challenges—to maximize our level of professionalization and our effectiveness?
  3. Whether today or in the future, how can we L&D professionals evidence and document our level of professionalization or maturity—in ways that will be understood and respected, and in ways that will add to our effectiveness?
  4. What other factors—besides our knowledge, skills, and attitudes—influence our ability to maximize our effectiveness? And, how will we be able to utilize these factors in the future to support our effectiveness?

 

Third Part

We will generate a report (or reports) on the findings of the survey and the discussions, with recommendations for how the L&D field can continue to maintain and develop professionalization standards and practices.


How You Can Help

The most important things I’d ask you to do right now, if you are in the workplace learning field, are:

  1. Complete the survey (it’s not short; it takes 30 minutes).
  2. Ask others you know in L&D if they would consider taking it.


Joining the Conference

The L&D Conference 2020 runs over six weeks (June 22 to July 31), it’s going to be truly amazing, and it starts in a few days. Here’s the conference website: https://www.learningdevelopmentconference.com/

This is my conference. I’m the co-host along with my podcast partner, Matt Richter.

I know it’s last minute, so if you have trouble getting the funding figured out from your organization and want to get started, feel free to contact me to see if I can help.

Use this contact page to email me: https://www.worklearning.com/contact/

The LEARNNOVATORS team (specifically Santhosh Kumar) asked if I would join them in their Crystal Balling with Learnnovators interview series, and I accepted! They have some really great people on the series, so I recommend that you check it out!

The most impressive thing was that they must have studied my whole career history and read my publication list and watched my videos because they came up with a whole set of very pertinent and important questions. I was BLOWN AWAY—completely IMPRESSED! And, given their dedication, I spent a ton of time preparing and answering their questions.

It’s a two-part series, and here are the links:

Here are some of the quotes they pulled out, plus a few I’d like to highlight:

Learning is one of the most wondrous, complex, and important areas of human functioning.

The explosion of different learning technologies beyond authoring tools and LMSs is likely to create a wave of innovations in learning.

Data can be good, but also very very bad.

Learning Analytics is poised to cause problems as well. People are measuring all the wrong things. They are measuring what is easy to measure in learning, but not what is important.

We will be bamboozled by vendors who say they are using AI, but are not, or who are using just 1% AI and claiming that their product is AI-based.

Our senior managers don’t understand learning, they think it is easy, so they don’t support L&D like they should.

Because our L&D leaders live in a world where they are not understood, they do stupid stuff like pretending to align learning with business terminology and business-school vibes—forgetting to align first with learning.

We lie to our senior leaders when we show them our learning data—our smile sheets and our attendance data. We then manage toward these superstitious targets, causing a gross loss of effectiveness.

Learning is hard and learning that is focused on work is even harder because our learners have other priorities—so we shouldn’t beat ourselves up too much.

We know from the science of human cognition that when people encounter visual stimuli, their eyes move rapidly from one object to another and back again trying to comprehend what they see. I call this the “eye-path phenomenon.” So, because of this inherent human tendency, we as presenters—as learning designers too!—have to design our presentation slides to align with these eye-path movements.

Organizations now—and even more so in the near future—will use many tools in a Learning-Technology Stack. These will include (1) platforms that offer asynchronous cloud-based learning environments that enable and encourage better learning designs, (2) tools that enable realistic practice in decision-making, (3) tools that reinforce and remind learners, (4) spaced-learning tools, (5) habit-support tools, (6) insight-learning tools (those that enable creative ideation and innovation), et cetera

Learnnovators asked me what I hoped for the learning and development field. Here’s what I said:

Nobody is good at predicting the future, so I will share the vision I hope for. I hope we in learning and development continue to be passionate about helping other people learn and perform at their best. I hope we recognize that we have a responsibility not just to our organizations, but beyond business results to our learners, their coworkers/families/friends, to the community, society, and the environs. I hope we become brilliantly professionalized, having rigorous standards, a well-researched body of knowledge, higher salaries, and career paths beyond L&D. I hope we measure better, using our results to improve what we do. I hope we, more-and-more, take a small-S scientific approach to our practices, doing more A-B testing, compiling a database of meaningful results, building virtuous cycles of continuous improvement. I hope we develop better tools to make building better learning—and better performance—easier and more effective. And I hope we continue to feel good about our contributions to learning. Learning is at the heart of our humanity!

Industry awards are hugely prominent in the workplace learning field, sending ripples of positive and negative effects through individuals and organizations. Awards affect vendor and consultant revenues and viability; learning-department reputations and autonomy; and individual promotion, salary, and recruitment opportunities. Because of their outsized influence, we should examine industry award processes to determine their strengths and weaknesses, to ascertain how helpful or harmful they currently are, and to suggest improvements where we can.

The Promise of Learning Industry Awards

Industry awards seem to hold so much promise, with these potential benefits:

Application Effects

  • Learning and Development
    Those who apply for awards have the opportunity to reflect on their own practices, and thus to learn and improve based on this reflection and on any feedback they might get from those who judge their applications.
  • Nudging Improvement
    Those who apply (and even those who just review an awards application) may be nudged toward better practices based on the questions or requirements outlined.

Publicity of Winners Effect

  • Role Modeling
    Selected winners and the descriptions of their work can set aspirational benchmarks for other organizations.
  • Rewarding of Good Effort
    Selected winners can be acknowledged and rewarded for their hard work, innovation, and results.
  • Promotion and Recruitment Effects
    Individuals selected for awards can be deservedly promoted or recruited to new opportunities.
  • Resourcing and Autonomy Effects
    Learning departments can earn reputation credits within their organizations that can be cashed in for resources and permission to act autonomously and avoid micromanagement.
  • Vendor Marketing
    Vendors who win can publicize and support their credibility and brand.
  • Purchasing Support
    Organizations that need products or services can be directed to vendors who have been vetted as excellent.

Benefits of Judging

  • Market Intelligence
    Judges who participate can learn about best practices, innovations, and trends that they can use in their work.

NOTE: At the very end of this article, I will come back to each and every one of these promised benefits and assess how well our industry awards are helping or hurting.

The Overarching Requirements of Awards

Awards can be said to be useful if they produce valid, credible, fair, and ethical results. Ideally, we expect our awards to represent all players within the industry or subsegment—and to select from this group the objectively best exemplars based on valid, relevant, critical criteria.

The Awards Funnel

To make this happen, we can imagine a funnel, where people and/or organizations have an equal opportunity to be selected for an award. They enter the funnel at the top and then elements of the awards process winnow the field until only the best remain at the bottom of the funnel.

How Are We Doing?

How well do our awards processes meet the best practices suggested in the Awards Funnel?

Application Process Design

Award Eligibility

At the top of the funnel, everybody in the target group should be considered for an award. Particularly if we are claiming that we are choosing “The Best,” everybody should be able to enter the award application process. Ideally, we would not exclude people because they can’t afford the time or cost of the application process. We would not exclude people just because they didn’t know about the contest. Now obviously, these criteria are too stringent for the real world, but they do illustrate how an unrepresentative applicant pool can make the results less meaningful than we might like.

In a recent “Top” list on learning evaluation, none of the following people were included, despite being leaders in learning evaluation: the Kirkpatricks, the Phillipses, Brinkerhoff, and Thalheimer. They did not end up at the bottom of the funnel as winners because they did not apply for the award.

Criteria

The criteria baked into the application process are fundamental to the meaningfulness of the results. If the criteria are not the most important ones, then the results can’t reflect a valid ranking. Unfortunately, too many awards in the workplace learning field give credit for such things as “numbers of trainers,” “hours of training provided,” “company revenues,” “average training hours per person,” “average class size,” “learner-survey ratings,” etc. These data are not related to learning effectiveness, so they should not impact applicant ratings. Yet they are taken into account in more than a few of our award contests. Indeed, in one such awards program, these types of data were worth over 20% of the final scoring of applicants.

Application

Application questions should prompt respondents to answer with information and data relevant to assessing critical outcomes. Unfortunately, too many applications have generally worded questions that don’t nudge respondents toward specificity. For example: “Describe how your learning-technology innovation improved your organization’s business results.” Similarly, many applications don’t specifically ask applicants to show the actual learning event. Even for elearning programs, applicants are sometimes asked to include videos instead of the actual programs.

Data Quality

Applicant Responses

To select the best applicants, each of the applicant responses has to be honest and substantial enough to allow judges to make considered judgments. If applicants stretch the truth, then the results will be biased. Similarly, if some applicants employ awards writers—people skilled in helping companies win awards—then fair comparisons are not possible.

Information Verification

Ideally, application information would be verified to ensure accuracy. This never happens (as far as I can tell)—casting further doubt on the validity of the results.

Judge Performance

Judge Quality

Judges must be highly knowledgeable about learning and all the subsidiary areas involved in the workplace learning field, including the science of learning, memory, and instruction. Ideally, judges would also be up to date on learning technologies, learning innovations, organizational dynamics, statistics, leadership, coaching, learning evaluation, data science, and perhaps even the topic area being taught. It is difficult to see how judges can meet all the desired criteria. One awards organizer allows unvetted conference-goers to cast votes for their favorite elearning program. These judges are presumably somewhat interested and experienced in elearning, but as a whole they are clearly not all experts.

Judge Impartiality

Judges should be impartial, unbiased, blind to applicant identities, and free of conflicts of interest. This is made more difficult because screenshots and videos often include the branding of the end users and learning vendors. Indeed, many award applications ask for the names of the companies involved. In one contest, many of the judges listed were from companies that won awards. One judge I talked with told me that when he got together with his fellow judges and the sponsor contact, he told the team that none of the applicants’ solutions were any good. He was first told to follow through with the process and give them a fair hearing. He said he had already done that. After some more back and forth, he was told to review the applicants by trying to be appreciative. In this case there was a clear bias toward providing positive judgments—and awarding more winners.

Judge Time and Attention

Judges need to give sufficient time and attention or their judgments won’t be accurate. Judges are largely volunteers, and they have other commitments. We should assume, I think, that these volunteer judges are working in good faith and want to provide accurate ratings, but where they are squeezed for time—or where the applications are confusing, off-target, or stuffed with large amounts of data—poor decision making may result. For one awards contest, the organizer claimed there were nearly 500 winners, representing about 20% of all applicants. That would mean there were 2,500 applicants. They said they had about 100 judges. If this were true, that would be 25 applications for each judge to review—and note that this assumes only one judge per application (which isn’t a good practice anyway, as more are needed). This seems like a recipe for judges to do as little as possible per application. In another award event, the judges went from table to table in a very loud room, having to judge 50-plus entries in about 90 minutes. It is impossible to judge fully in that kind of atmosphere.
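For those who like to check the math, here is the back-of-the-envelope calculation in runnable form. The figures are the organizer’s claims as reported above, not audited data:

```python
# Back-of-the-envelope check of the judging load described above,
# using the organizer's claimed figures (not audited data).
winners = 500       # "nearly 500 winners"
win_rate = 0.20     # winners were said to be ~20% of all applicants
judges = 100        # "about 100 judges"

applicants = winners / win_rate        # 500 / 0.20 = 2,500 applications
apps_per_judge = applicants / judges   # 2,500 / 100 = 25 per judge

print(f"{applicants:.0f} applicants, {apps_per_judge:.0f} applications per judge")
# Note: this assumes only ONE judge per application. With the two or three
# judges per application that good practice requires, the per-judge load
# doubles or triples.
```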

Judging Rubric

Bias can occur when evaluating open-ended responses like the essay questions typical of these award applications. One way to reduce bias is to give each judge a rubric with very specific options to guide judges’ decision making, or to ask questions that are themselves in the form of rubrics (see Performance-Focused Smile-Sheet questions as examples). For the award applications I reviewed, such rubrics were not a common occurrence.

Judge Reliability

Given that judging these applications is a subjective exercise—one made more chaotic by the lack of specific questions and rubrics—bias and variability can enter the judging process. It’s helpful to have a set of judges review each application to add some reliability to the judging. This seems to be a common practice, but it may not be a universal one.

Non-Interference

Sponsor Non-Interference

The organizations that sponsor these events could conceivably change or modify the results. This seems a real possibility because the award organizations are not disinterested parties: they often earn consulting, advertising, conference, and/or awards-ceremony revenues from the same organizations that apply for these awards. They could benefit by having low standards or relaxed judging to increase the number of award winners. Indeed, one awards program last year had 26 award categories and gave out 196 gold awards!

Awards organizations might also benefit if well-known companies are among the award winners. If company identities are not hidden, judges may subconsciously give better ratings to a well-respected tech company than to some unknown manufacturing company. Worse, sponsors may be enticed to put their thumbs on the scale to ensure the star companies rise to the top. When applications ask for number of employees, company revenues, and even such seemingly relevant data points as number of hours trained, it’s easy to see how the books could be cooked to make the biggest, sexiest companies rise to the top of the rankings.

Except for the evidence described above, where a sponsor encouraged a judge to be “appreciative,” I can’t document any cases of direct sponsor interference, but the conditions are ripe for those who might want to exploit the process. One award-sponsoring organization recognized the perception problem and uses a third-party organization to vet the applicants. They also award only one winner in each of the gold, silver, and bronze categories, so the third-party organization has no incentive to be lenient in judging. These are good practices!

Implications

There is so much here—and I’m afraid I am only scratching the surface. Despite the dirt and treasure left to be dug and discovered, I am convinced of one thing: I cannot trust the results of most of the learning industry awards. More importantly, these awards don’t give us the benefits we might hope to get from them. Let’s revisit those promised benefits from the very beginning of this article and see how things stack up.

Application Effects

  • Learning and Development
    We had hoped that applicants could learn from their involvement. However, if the wrong criteria are highlighted, they may actually learn to focus on the wrong target outcomes!
  • Nudging Improvement
    We had hoped the awards criteria would nudge applicants and other members of the community to focus on valuable design criteria and outcome measures. Unfortunately, we’ve seen that the criteria are often substandard, possibly even tangential or counter to effective learning-to-performance design.

Publicity of Winners Effect

  • Role Modeling
    We had hoped that winners would be deserving and worthy of being models, but we’ve seen that the many flaws of the various awards processes may result in winners not really being exemplars of excellence.
  • Rewarding of Good Effort
    We had hoped that those doing good work would be acknowledged and rewarded, but now we can see that we might be acknowledging mediocre efforts instead.
  • Promotion and Recruitment Effects
    We had hoped that our best and brightest might get promotions, be recruited, and be rewarded, but now it seems that people might be advantaged willy-nilly.
  • Resourcing and Autonomy Effects
    We had hoped that learning departments that do the best work would gain resources, respect, and reputational advantages; but now we see that learning departments could win an award without really deserving it. Moreover, the best-resourced organizations may be able to hire awards writers, allocate graphic-design help, etc., to push their mediocre efforts to award-winning status.
  • Vendor Marketing
    We had hoped that the best vendors would be rewarded, but we can now see that vendors with better marketing skills or resources—rather than the best learning solutions—might be rewarded instead.
  • Purchasing Support
    We had hoped that these industry awards might create market signals to help organizations procure the most effective learning solutions. We can see now that the award signals are extremely unreliable as indicators of effectiveness. If ONE awards organization can manufacture 196 gold medalists and 512 overall in a single year, how esteemed is such an award?

Benefits of Judging

  • Market Intelligence
    We had hoped that judges who participated would learn best practices and innovations, but it seems that the poor criteria involved might nudge judges to focus on information and particulars not as relevant to effective learning design.

What Should We Do Now?

You should draw your own conclusions, but here are my recommendations:

  1. Don’t assume that award winners are deserving or that non-award winners are undeserving.
  2. When evaluating vendors or consultants, ignore the awards they claim to have won—or investigate their solutions yourself.
  3. If you are a senior manager (whether on the learning team or in the broader organization), do not allow your learning teams to apply for these awards, unless you first fully vet the award process. Better to hire research-to-practice experts and evaluation experts to support your learning team’s personal development.
  4. Don’t participate as a judge in these contests unless you first vet their applications, criteria, and the way they handle judging.
  5. If your organization runs an awards contest, reevaluate your process and improve it, where needed. You can use the contents of this article as a guide for improvement.

Mea Culpa

I give an award every year, and I certainly don’t live up to all the standards in this article.

My award, the Neon Elephant Award, is designed to highlight the work of a person or group who utilizes or advocates for practical research-based wisdom. Winners include people like Ruth Clark, Paul Kirschner, K. Anders Ericsson, and Julie Dirksen (among a bunch of great people; check out the link).

Interestingly, I created the award starting in 2006 because of my dissatisfaction with the awards typical in our industry at that time—awards that measured butts in seats, etc.

It Ain’t Easy — And It Will Never Be Easy!

Organizing an awards process or vetting content is not easy. A few of you may remember the excellent work of Bill Ellet and his company Training Media Review, starting over two decades ago. It was a monumental effort to evaluate training programs. So monumental, in fact, that it was unsustainable. When Bill or one of his associates reviewed a training program, they spent hours and hours doing so. They spent more time than our awards judges, and they didn’t review applications; they reviewed the actual learning program.

Is a good awards process even possible?

Honestly, I don’t know. There are so many things to get right.

Can they be better?

Yes!

Are they good enough now?

Not most of them!

 

 

12th December 2019

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, Inc., announces the winner of the 2019 Neon Elephant Award, given to David Epstein for writing the book Range: Why Generalists Triumph in a Specialized World, and for his many years as a journalist and science-inspired truth teller.

Click here to learn more about the Neon Elephant Award…

 

2019 Award Winner – David Epstein

David Epstein is an award-winning writer and journalist, having won awards for his writing from such esteemed bodies as the National Academies of Sciences, Engineering, and Medicine, the Society of Professional Journalists, and the National Center on Disability and Journalism—and his work has been included in the Best American Science and Nature Writing anthology. David has been a science writer for ProPublica and a senior writer at Sports Illustrated, where he helped break the story on baseball legend Alex Rodriguez’s steroid use. David speaks internationally on performance science and the uses (and misuses) of data, and his TED talk on human athletic performance has been viewed over eight million times.

Mr. Epstein is the author of two books:

David is honored this year for his new book on human learning and development, Range: Why Generalists Triumph in a Specialized World. The book lays out a very strong case for why most people will become better performers if they focus broadly on their development rather than focusing tenaciously and exclusively on one domain. If we want to raise our children to be great soccer players (aka “football” in most places), we’d be better off having them play multiple sports rather than just soccer. If we want to develop the most innovative cancer researchers, we shouldn’t just train them in cancer-related biology and medicine, we should give them a wealth of information and experiences from a wide range of fields.

Range is a phenomenal piece of art and science. Epstein is truly brilliant in compiling and comprehending the science he reviews, while at the same time telling stories and organizing the book in ways that engage and make complex concepts understandable. In writing the book, David debunks the common wisdom that performance is improved most rapidly and effectively by focusing practice and learning narrowly on one domain. Where others have only hinted at the power of a broad developmental pathway, Epstein’s Range builds up a towering landmark of evidence that will remain visible on the horizon of the learning field for decades if not millennia.

We in the workplace learning-and-development field should immerse ourselves in Range—not just in thinking about how to design learning and architect learning contexts, but also in thinking about how to evaluate prospects for recruitment and hiring. It’s likely that we currently undervalue people with broad backgrounds and artificially overvalue people with extreme and narrow talents.

Here is a nice article where Epstein wrestles with a question that elucidates an issue we have in our field—what happens when many people in a field are not following research-based guidelines. The article is set in the medical profession, but there are definite parallels to what we face everyday in the learning field.

Epstein is the kind of person we should honor and emulate in the workplace learning field. He is unafraid in seeking the truth, relentless and seemingly inexhaustible in his research efforts, and clear and engaging as a conveyor of information. It is an honor to recognize him as this year’s winner of the Neon Elephant Award.

 

Click here to learn more about the Neon Elephant Award…

Will’s Note: ONE DAY after publishing this first draft, I’ve decided that I mucked this up, mashing up what researchers, research translators, and learning professionals should focus on. Within the next week, I will update this to a second draft. You can still read the original below (for now):

 

Some evidence is better than other evidence. We naturally trust ten well-designed research studies more than one. We trust a well-controlled scientific study more than a poorly controlled study. We trust scientific research more than opinion research, unless all we care about is people’s opinions.

Scientific journal editors have to decide which research articles to accept for publication and which to reject. Practitioners have to decide which research to trust and which to ignore. Politicians have to know which lies to tell and which to withhold (kidding, sort of).

To help themselves make decisions, journal editors routinely rank each article on a continuum from strong research methodology to weak. The medical field regularly uses a level-of-evidence approach to making medical recommendations.

There are many taxonomies for “levels of evidence” or “hierarchy of evidence” as it is commonly called. Wikipedia offers a nice review of the hierarchy-of-evidence concept, including some important criticisms.

Hierarchy of Evidence for Learning Practitioners

The suggested models for levels of evidence were created by and for researchers, so they are not directly applicable to learning professionals. Still, it’s helpful for us to have our own hierarchy of evidence, one that we might actually be able to use. For that reason, I’ve created one, adding in the importance of practical evidence that is missing from the research-focused taxonomies. As in the research versions, Level 1 is the strongest.

  • Level 1 — Evidence from systematic research reviews and/or meta-analyses of all relevant randomized controlled trials (RCTs) that have ALSO been utilized by practitioners and found both beneficial and practical from a cost-time-effort perspective.
  • Level 2 — Same evidence as Level 1, but NOT systematically or sufficiently utilized by practitioners to confirm benefits and practicality.
  • Level 3 — Consistent evidence from a number of RCTs using different contexts and situations and learners; and conducted by different researchers.
  • Level 4 — Evidence from one or more RCTs that utilize the same research context.
  • Level 5 — Evidence from one or more well-designed controlled trials without randomization of learners to different learning factors.
  • Level 6 — Evidence from well-designed cohort or case-control studies.
  • Level 7 — Evidence from descriptive and/or qualitative studies.
  • Level 8 — Evidence from research-to-practice experts.
  • Level 9 — Evidence from the opinion of other authorities, expert committees, etc.
  • Level 10 — Evidence from the opinion of practitioners surveyed, interviewed, focus-grouped, etc.
  • Level 11 — Evidence from the opinion of learners surveyed, interviewed, focus-grouped, etc.
  • Level 12 — Evidence curated from the internet.

Let me consider this Version 1 until I get feedback from you and others!
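For readers who think in code, here is a minimal sketch of the hierarchy as a data structure: one way a team might tag each design recommendation with the level of its best supporting evidence. The EVIDENCE_LEVELS dictionary, the Claim class, and the stronger_than helper are illustrative names of my own invention, not any standard:

```python
from dataclasses import dataclass

# The practitioner hierarchy above, encoded as a dictionary.
# Lower level number = stronger evidence (Level 1 is best).
EVIDENCE_LEVELS = {
    1: "Systematic reviews/meta-analyses of RCTs, confirmed beneficial and practical by practitioners",
    2: "Same evidence as Level 1, but not sufficiently utilized by practitioners",
    3: "Consistent evidence from multiple RCTs across contexts, learners, and researchers",
    4: "One or more RCTs utilizing the same research context",
    5: "Well-designed controlled trials without randomization",
    6: "Well-designed cohort or case-control studies",
    7: "Descriptive and/or qualitative studies",
    8: "Research-to-practice experts",
    9: "Opinion of other authorities, expert committees, etc.",
    10: "Opinion of practitioners (surveys, interviews, focus groups)",
    11: "Opinion of learners (surveys, interviews, focus groups)",
    12: "Evidence curated from the internet",
}

@dataclass
class Claim:
    """A design recommendation tagged with its best supporting evidence."""
    recommendation: str
    evidence_level: int  # key into EVIDENCE_LEVELS

    def stronger_than(self, other: "Claim") -> bool:
        # Lower numbers outrank higher numbers in this taxonomy.
        return self.evidence_level < other.evidence_level

# Hypothetical usage: compare the evidence behind two design decisions.
spaced_practice = Claim("Space practice over time", evidence_level=1)
learner_preference = Claim("Use video because learners say they prefer it", evidence_level=11)

assert spaced_practice.stronger_than(learner_preference)
print(EVIDENCE_LEVELS[learner_preference.evidence_level])
```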

Critical Considerations

  1. Some evidence is better than other evidence.
  2. If you’re not an expert in evaluating evidence, get insights from those who are. Particularly valuable are research-to-practice experts (those who have considerable experience in translating research into practical recommendations).
  3. Opinion research in the learning field is especially problematic, because the learning field contains both strong and poor conceptions of what works.
  4. Learner opinions are problematic as well because learners often have poor intuitions about what works for them in supporting their learning.
  5. Curating information from the internet is especially problematic because it’s difficult to distinguish between good and poor sources.

Trusted Research to Practice Experts

(in no particular order, they’re all great!)

  • (Me) Will Thalheimer
  • Patti Shank
  • Julie Dirksen
  • Clark Quinn
  • Mirjam Neelen
  • Ruth Clark
  • Donald Clark
  • Karl Kapp
  • Jane Bozarth
  • Ulrich Boser

A huge fiery debate rages in the learning field.

 

What do we call ourselves? Are we instructional designers, learning designers, learning experience designers, learning engineers, etc.? This is an important question, of course, because words matter. But it is also a big freakin’ waste of time, so today, I’m going to end the debate! From now on we will call ourselves by one name. We will never debate this again. We will spend our valuable time on more important matters. You will thank me later! Probably after I am dead.

How do I know the name I propose is the best name? I just know. And you will know it too when you hear the simple brilliance of it.

How do I know the name I propose is the best name? Because Jim Kirkpatrick and I are in almost complete agreement on this, and, well, we have a rocky history.

How do I know the name I propose is the best name? Because it’s NOT the new stylish name everybody’s now printing on their business cards and sharing on LinkedIn. That name is a disaster, as I will explain.

The Most Popular Contenders

I will now list each of the major contenders for what we should call ourselves and then thoroughly eviscerate each one.

Instructional Designer

This is the traditional moniker—used for decades. I have called myself an instructional designer and felt good about it. The term has the benefit of being widely known in our field but it has severe deficiencies. First, if you’re at a party and you tell people you’re an instructional designer, they’re likely to hear “structural designer” or “something-something designer” and think you’re an engineer or a new-age guru who has inhaled too much incense. Second, our job is NOT to create instruction, but to help people learn. Third, our job is NOT ONLY to create instruction to help people learn, but to also create, nurture, or enable contexts that help people learn. Instructional designer is traditional, but not precise. It sends the wrong message. We should discard it.

Learning Designer

This is not bad. It’s my second choice. But it suffers from being too vanilla, too plain, too lacking in energy. More problematic is that it conveys the notion that we can control learning. We cannot design learning! We can only create or influence situations, materials, and messages that enable learning and mathemagenic processes—that is, cognitive processes that give rise to learning. We must discard this label too.

Learning Engineer

This seems reasonable at first glance. We might think our job is to engineer learning—to take the science and technology of learning and use it to blueprint learning interventions. But this is NOT our job. Again, we don’t control learning. We can’t control learning. We can only enable it. Yes! The same argument against “designing learning” can be used against “engineering learning.” We must also reject the learning-engineer label because there are a bunch of crazed technology evangelists running around advocating for learning engineering who think that big data and artificial intelligence are going to solve all the problems of the learning profession. While it is true that data will help support learning efforts, we are more likely to make a mess of this by focusing on what is easy to measure and not on what is important and difficult to measure. We must reject this label too!

Learning Experience Designer

This new label is the HOT new label in our field, but it’s a disastrous turn backward! Is that who we are—designers of experiences? Look, I get it. It seems good on the surface. It overcomes the problem of control. If we design experiences, we rightly admit that we are not able to control learning but can only enable it through learning experiences. That’s good as far as it goes. But is that all there is? NO DAMMIT! It’s a freakin’ cop-out, probably generated and supported by learning-technology platform vendors to help sell their wares! What the hell are we thinking? Isn’t it our responsibility to do more than design experiences? We’re supposed to do everything we can to use learning as a tool to create benefits. We want to help people perform better! We want to help organizations get better results! We want to create benefits that ripple through our learners’ lives and through networks of humanity. Is it okay to just create experiences and be happy with that? If you think so, I wish to hell you’d get out of the learning profession and cast your lack of passion and your incompetence into a field that doesn’t matter as much as learning! Yes! This is that serious!

As learning professionals we need to create experiences, but we also need to influence or create the conditions where our learners are motivated and resourced and supported in applying their learning. We need to utilize learning factors that enable remembering. We need to create knowledge repositories and prompting mechanisms like job aids and performance support. We need to work to create organizational cultures and habits of work that enable learning. We need to support creative thinking so people have insights that they otherwise wouldn’t have. We also must create learning-evaluation systems that give us feedback so we can create cycles of continuous improvement. If we’re just creating experiences, we are in the darkest and most dangerous depths of denial. We must reject this label and immediately erase the term “Learning Experience Designer” from our email signatures, business cards, and LinkedIn profiles!

The Best Moniker for us as Learning Professionals

First, let me say that there are many roles for us learning professionals. I’ve been talking about the overarching design/development role, but there are also trainers, instructors, teachers, professors, lecturers, facilitators, graphic designers, elearning developers, evaluators, database managers, technologists, programmers, LMS technicians, supervisors, team leaders, et cetera, et cetera, et cetera. Acknowledged!!! Now let me continue. Thanks!

A month ago, Mirjam Neelen reached out to me because she is writing a book on how to use the science of learning in our role as learning professionals. She’s doing this with another brilliant research-to-practice advocate, the learning researcher Paul Kirschner, following from their blog, 3-Star Learning Experiences. Anyway, Mirjam asked me what recommendation I might have for what we call ourselves. It was a good question, and I gave her my answer.

I gave her THE answer. I’m not sure she agreed, and she and Paul and their publisher probably have to negotiate a bit, but regardless, I came away from my discussions with Mirjam convinced that the learning god had spoken to me and asked me to share the good word with you. I will now end this debate. The label we should use instead of the others is Learning Architect. This is who we are! This is who we should be!

Let’s think about what architects do—architects in the traditional sense. They study human nature and human needs, as well as the science and technology of construction, and use that knowledge/wisdom to create buildings that enable us human beings to live well. Architects blueprint the plans—practical plans—for how to build the building and then they support the people who actually construct the buildings to ensure that the building’s features will work as well as possible. After the building is finished, the people in the buildings lead their lives under the influence of the building’s design features. The best architects then assess the outcomes of those design features and suggest modifications and improvements to meet the goals and needs of the inhabitants.

We aspire to be like architects. We don’t control learning, but we’d like to influence it. We’d like to motivate our learners to engage in learning and to apply what they’ve learned. We’d like to support our learners in remembering. We’d like to help them overcome obstacles. We’d like to put structures in place to enable a culture of learning, to give learners support and resources, to keep learners focused on applying what they’ve learned. We’d like to support teams and supervisors in their roles of enabling learning. We’d like to measure learning to get feedback on learning so that we can improve learning and troubleshoot if our learners are having problems using what we’ve created or applying what they’ve learned.

We are learning architects so let’s start calling ourselves by that name!

But Isn’t “Architect” a Protected Name?

Christy Tucker (thanks, Christy!) raised an important concern in the comments below, and her concern was echoed by Sean Rea and Brett Christensen. The term “architect” is protected, as you can read about on Wikipedia. Architects rightly want to protect their professional reputation and keep their fees high, protected from competition from people with less education, experience, and competence.

But, to my non-legal mind, this is completely irrelevant to our discussion. When we add an adjective, the name is a different name. It’s not legal to call yourself a doctor if you’re not a doctor, but it’s okay to call yourself the computer doctor, the window doctor, the cakemix doctor, the toilet doctor, or the LMS doctor.

While the term “architect” is protected, putting an adjective in front of the name changes everything. A search of LinkedIn for “data architects” lists 57,624 of them. A search of “software architect” finds 172,998. There are 3,110 “performance architects,” 24 “justice architects,” and 178 “sustainability architects.”

Already on LinkedIn, 2,396 people call themselves “learning architects.”

When I searched DuckDuckGo, some of the top results were consultants from the UK, New Zealand, and Australia calling themselves learning architects. LinkedIn says there are almost 10,000 learning-architecture jobs in the United States.

This is a non-issue. First, adding the adjective changes the name legally. Second, even if it didn’t, there is no way that architect credentialing bodies are going to take legal action against the hundreds of thousands of people using the word “architect” with an adjective. I say this, of course, not as a lawyer—and you should not rely on my advice as legal advice.

But still, this has every appearance of being a non-issue and we learning professionals should not be so meek as to shy away from using the term learning architect.

I was listening to a podcast last week that interviewed Jim Kirkpatrick. I like to listen to what Jim and Wendy have to say because many people I speak with in my learning-evaluation work are influenced by what they say and write. As you probably know, I think the Kirkpatrick-Katzell Four-Level Model causes more harm than good, but I like to listen and learn things from the Kirkpatricks, even though I never hear them sharing ideas that are critical of their models and teachings. Yes! I’m offering constructive criticism! Anyway, I was listening to the podcast and agreeing with most of what Jim was saying when he mentioned that what we ought to call ourselves is, wait for it, wait for it, wait for it: “Learning-and-Performance Architects!” Did I mention that I just love Jim Kirkpatrick? Jim and I are in complete agreement on this. I’ll quibble that the name Learning-and-Performance Architect is too long, but I agree with the sentiment that we ought to see performance as part of our responsibility.

So I did some internet searching this week for the term “Learning Architect.” I found a job at IBM with that title, estimated by Glassdoor to pay between $104,000 and $146,000, and I think I’m going to apply for that job as this consulting thing is kind of difficult these days, especially having to write incisive witty profound historic blog posts for no money and no fame.

I also found an episode of Connie Malamed’s excellent eLearning Coach podcast in which she reviews a book by the brilliant and provocative Clive Shepherd titled The New Learning Architect. It was published in 2011 and now has an updated 2016 edition. Interestingly, in a post from just this year, 2019, Clive is much less demonstrative in advocating for the term Learning Architect, and he casually mentions Learning Solutions Designer as a possibility before rejecting it because of the acronym LSD. I will reject it because designing solutions may give some the idea that we are designing things, when we need to design more than tangible objects.

In searching the internet, I also found three consultants or groups of consultants calling themselves learning architects. I also searched LinkedIn and found that the amazing Tom Kuhlmann has been Vice President of Community at Articulate for 12 years but added the title of Chief Learning Architect four years and eight months ago. I know Tom’s great because of our personal conversations in London and because he’s always sharing news of my good works with the Articulate community (you are, right, Tom?), but most importantly because on Tom’s LinkedIn page one of the world’s top entrepreneurs offered a testimonial that Tom improved his visual presentations by 12.9472%. You can’t make this stuff up, not even if you’re a learning experience designer high on LSD!

Clearly, this Learning Architect idea is not a new thing! But I have it on good authority that now here today, May 24, 2019, we are all learning architects!

Here are two visual representations I sent to Mirjam to help convey the breadth and depth of what a Learning Architect should do:

 

I offer these to encourage reflection and discussion. They were admittedly a rather quick creation, so certainly, they must have blind spots.

Feel free to discuss below or elsewhere the ideas discussed in this article.

And go out and be the best learning architect you can be!

I have it on good authority that you will be…

 

 

 

I’m trying to develop a taxonomy for types of learning. I’ve been working on this for several years, but I want to get one more round of feedback to see if I’m missing anything. Please provide your feedback below or contact me directly.

Types of Learning (Proposed Taxonomy)

SHORT LEARNING

  • READ AND ACKNOWLEDGE (rules, regulations, or policies)
  • WEBINAR (90 minutes or less)
  • DISCUSSION-BASED LEARNING (not training, but more of a discussion to enable learning)

TRADITIONAL GUIDED LEARNING

  • CLASSROOM LEARNING (where an instructor/facilitator leads classroom activities)
  • LIVE-FACILITATED ELEARNING (eLearning facilitated and/or presented by a live person; more involved than a basic webinar)
  • SEMI-FACILITATED ELEARNING (eLearning periodically facilitated by an instructor or learning leader as learning takes place over time)
  • NON-FACILITATED ELEARNING (where materials are presented/available, but no person is actively guiding the learning)

LEARNING OVER TIME

  • SELF-STUDY LEARNING (learners provided materials that they largely learn from on their own)
  • SUBSCRIPTION LEARNING (short nuggets delivered over a week or more)

PRACTICE-BASED LEARNING

  • SKILL-PRACTICE (where focus is on improving based on practicing, not on learning lots of new information)
  • ACTION LEARNING (involving both training and on-the-job experiences designed to support learning)
  • APPRENTICESHIP (where people learn by working under the close guidance of more-experienced others)
  • MENTORSHIP, INTERNSHIP, COACHING, SUPERVISION (where a person gets periodic feedback and guidance to elicit learning)

MISCELLANEOUS LEARNING

  • ONBOARDING (where people are introduced to a new organization, unit, or job role)
  • TEAM LEARNING (where groups of people plan and organize themselves to intentionally learn from each other)
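To make the proposed structure easy to scan (and to critique line by line), here is the same taxonomy restated as a simple data structure. Nothing new is added; the variable names are mine:

```python
# The proposed taxonomy restated as a dictionary: category -> learning types.
# This is purely a restatement of the lists above; names are illustrative.
LEARNING_TAXONOMY = {
    "Short Learning": [
        "Read and Acknowledge",
        "Webinar",
        "Discussion-Based Learning",
    ],
    "Traditional Guided Learning": [
        "Classroom Learning",
        "Live-Facilitated eLearning",
        "Semi-Facilitated eLearning",
        "Non-Facilitated eLearning",
    ],
    "Learning Over Time": [
        "Self-Study Learning",
        "Subscription Learning",
    ],
    "Practice-Based Learning": [
        "Skill-Practice",
        "Action Learning",
        "Apprenticeship",
        "Mentorship, Internship, Coaching, Supervision",
    ],
    "Miscellaneous Learning": [
        "Onboarding",
        "Team Learning",
    ],
}

# Quick check on coverage: 5 categories, 15 types in total.
total = sum(len(types) for types in LEARNING_TAXONOMY.values())
print(f"{len(LEARNING_TAXONOMY)} categories, {total} types")
```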