Industry awards are hugely prominent in the workplace learning field, and they ripple positive and negative effects through individuals and organizations. Awards affect vendor and consultant revenues and viability; learning department reputations and autonomy; and individual promotion, salary, and recruitment opportunities. Because of this outsized influence, we should examine industry award processes to determine their strengths and weaknesses, to ascertain how helpful or harmful they currently are, and to suggest improvements where we can.

The Promise of Learning Industry Awards

Industry awards seem to hold so much promise, with these potential benefits:

Application Effects

  • Learning and Development
    Those who apply for awards seem to have the potential to reflect on their own practices and thus learn and improve based on this reflection and any feedback they might get from those who judge their applications.
  • Nudging Improvement
    Those who apply (and even those who simply review an awards application) may be nudged toward better practices by the questions or requirements outlined.

Publicity of Winners Effect

  • Role Modeling
    Selected winners and the description of their work can set aspirational benchmarks for other organizations.
  • Rewarding of Good Effort
    Selected winners can be acknowledged and rewarded for their hard work, innovation, and results.
  • Promotion and Recruitment Effects
    Individuals selected for awards can be deservedly promoted or recruited to new opportunities.
  • Resourcing and Autonomy Effects
    Learning departments can earn reputation credits within their organizations that can be cashed in for resources and permission to act autonomously and avoid micromanagement.
  • Vendor Marketing
    Vendors who win can publicize and support their credibility and brand.
  • Purchasing Support
    Organizations who need products or services can be directed to vendors who have been vetted as excellent.

Benefits of Judging

  • Market Intelligence
    Judges who participate can learn about best practices, innovations, and trends that they can use in their own work.

NOTE: At the very end of this article, I will come back to each and every one of these promised benefits and assess how well our industry awards are helping or hurting.

The Overarching Requirements of Awards

Awards can be said to be useful if they produce valid, credible, fair, and ethical results. Ideally, we expect our awards to represent all players within the industry or subsegment—and to select from this group the objectively best exemplars based on valid, relevant, critical criteria.

The Awards Funnel

To make this happen, we can imagine a funnel, where people and/or organizations have an equal opportunity to be selected for an award. They enter the funnel at the top and then elements of the awards process winnow the field until only the best remain at the bottom of the funnel.

How Are We Doing?

How well do our awards processes meet the best practices suggested in the Awards Funnel?

Application Process Design

Award Eligibility

At the top of the funnel, everybody in the target group should be considered for an award. Particularly if we are claiming that we are choosing “The Best,” everybody should be able to enter the award application process. Ideally, we would not exclude people because they can’t afford the time or cost of the application process. We would not exclude people just because they didn’t know about the contest. Now obviously, these criteria are too stringent for the real world, but they do illustrate how an unrepresentative applicant pool can make the results less meaningful than we might like.

In a recent “Top” list on learning evaluation, none of the following leaders in learning evaluation were included: the Kirkpatricks, the Phillipses, Brinkerhoff, and Thalheimer. They did not come out of the bottom of the funnel as winners because they never entered it at the top: they did not apply for the award.

Criteria

The criteria baked into the application process are fundamental to the meaningfulness of the results. If the criteria don’t capture what matters most, the results can’t reflect a valid ranking. Unfortunately, too many awards in the workplace learning field give credit for such things as “numbers of trainers,” “hours of training provided,” “company revenues,” “average training hours per person,” “average class size,” “learner-survey ratings,” etc. These data are not related to learning effectiveness, so they should not affect applicant ratings. Unfortunately, they are taken into account in more than a few of our award contests. Indeed, in one such awards program, these types of data were worth over 20% of the final scoring of applicants.

Application

Application questions should prompt respondents to answer with information and data that are relevant to assessing critical outcomes. Unfortunately, too many applications rely on generally worded questions that don’t nudge respondents toward specificity, for example: “Describe how your learning-technology innovation improved your organization’s business results.” Similarly, many applications don’t specifically ask applicants to show the actual learning event. Even for elearning programs, applicants are sometimes asked to include videos instead of the actual programs.

Data Quality

Applicant Responses

To select the best applicants, each applicant’s responses must be honest and substantial enough to allow judges to make considered judgments. If applicants stretch the truth, the results will be biased. Similarly, if some applicants employ awards writers (people skilled in helping companies win awards), then fair comparisons are not possible.

Information Verification

Ideally, application information would be verified to ensure accuracy. This never happens (as far as I can tell)—casting further doubt on the validity of the results.

Judge Performance

Judge Quality

Judges must be highly knowledgeable about learning and all the subsidiary areas involved in the workplace learning field, including the science of learning, memory, and instruction. Ideally, judges would also be up to date on learning technologies, learning innovations, organizational dynamics, statistics, leadership, coaching, learning evaluation, data science, and perhaps even the topic area being taught. It is difficult to see how judges can meet all these criteria. One awards organizer allows unvetted conference-goers to cast votes for their favorite elearning program. These judges are presumably somewhat interested and experienced in elearning, but as a whole they are clearly not all experts.

Judge Impartiality

Judges should be impartial, unbiased, blind to applicant identities, and free of conflicts of interest. This is made more difficult because screenshots and videos often include the branding of the end users and learning vendors. Indeed, many award applications ask outright for the names of the companies involved. In one contest, many of the listed judges were from companies that won awards. One judge I talked with told me that when he got together with his fellow judges and the sponsor contact, he told the team that none of the applicants’ solutions were any good. He was first told to follow through with the process and give the applicants a fair hearing; he said he had already done that. After some more back and forth, he was told to review the applicants while trying to be “appreciative.” In this case there was a clear bias toward providing positive judgments, and toward awarding more winners.

Judge Time and Attention

Judges need to give sufficient time or their judgments won’t be accurate. Judges are largely volunteers, and they have other commitments. We should assume, I think, that these volunteer judges work in good faith and want to provide accurate ratings, but where they are squeezed for time, or where the applications are confusing, off-target, or stuffed with large amounts of data, poor decision making is likely. For one awards contest, the organizer claimed there were nearly 500 winners, representing about 20% of all applicants. That would mean there were roughly 2,500 applicants. They said they had about 100 judges. If true, that works out to 25 applications for each judge to review (and that assumes only one judge per application, which isn’t good practice anyway, as more are needed). This seems like a recipe for judges to do as little as possible on each application they review. In another award event, the judges went from table to table in a very loud room, having to judge 50-plus entries in about 90 minutes. It is impossible to judge fully in that kind of atmosphere.
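The back-of-the-envelope arithmetic behind that estimate is easy to check. Here is a minimal sketch; the figures are the approximate numbers claimed by the awards organizer, as reported above, not audited data:

```python
# Judging-load arithmetic from the awards-contest example above.
winners = 500          # organizer claimed "nearly 500 winners"
win_rate_pct = 20      # winners said to be about 20% of all applicants
judges = 100           # organizer reported about 100 judges

applicants = winners * 100 / win_rate_pct   # implied applicant pool
load_per_judge = applicants / judges        # assumes only ONE judge per application

print(f"{applicants:.0f} applicants, {load_per_judge:.0f} applications per judge")
# prints: 2500 applicants, 25 applications per judge
```

Note that with three judges per application, which is closer to good practice, each judge would face 75 reviews, making the time squeeze even worse.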

Judging Rubric

Bias can creep in when judges evaluate open-ended responses like the essay questions typical of these award applications. One way to reduce bias is to give each judge a rubric with very specific options to guide the judges’ decision making, or to ask questions that are themselves in the form of rubrics (see Performance-Focused Smile-Sheet questions as examples). For the award applications I reviewed, such rubrics were not common.

Judge Reliability

Given that judging these applications is a subjective exercise—one made more chaotic by the lack of specific questions and rubrics—bias and variability can enter the judging process. It’s helpful to have a set of judges review each application to add some reliability to the judging. This seems to be a common practice, but it may not be a universal one.

Non-Interference

Sponsor Non-Interference

The organizations that sponsor these events could conceivably change or modify the results. This is a real possibility because awards organizations are not disinterested parties. They often earn consulting, advertising, conference, and/or awards-ceremony revenues from the same organizations that apply for the awards. They could benefit from low standards or relaxed judging that increase the number of award winners. Indeed, one awards program last year had 26 award categories and gave out 196 gold awards!

Awards organizations might also benefit if well-known companies are among the award winners. When company identities are not hidden, judges may subconsciously give better ratings to a well-respected tech company than to an unknown manufacturing company. Worse, sponsors may be tempted to put their thumbs on the scale to ensure the star companies rise to the top. When applications ask for number of employees, company revenues, and even seemingly relevant data points such as number of hours trained, it’s easy to see how the books could be cooked to push the biggest, best-known companies to the top of the rankings.

Except for the evidence described above, where a sponsor encouraged a judge to be “appreciative,” I can’t document any cases of direct sponsor interference, but the conditions are ripe for those who might want to exploit the process. One award-sponsoring organization recognized the perception problem and uses a third-party organization to vet the applicants. They also bestow only one winner in each of the gold, silver, and bronze categories, so the third-party organization has no incentive to be lenient in judging. These are good practices!

Implications

There is so much here—and I’m afraid I am only touching the surface. Despite the dirt and treasure left to be dug and discovered, I am convinced of one thing. I cannot trust the results of most of the learning industry awards. More importantly, these awards don’t give us the benefits we might hope to get from them. Let’s revisit those promised benefits from the very beginning of this article and see how things stack up.

Application Effects

  • Learning and Development
    We had hoped that applicants could learn from their involvement. However, if the wrong criteria are highlighted, they may actually learn to focus on the wrong target outcomes!
  • Nudging Improvement
    We had hoped the awards criteria would nudge applicants and other members of the community to focus on valuable design criteria and outcome measures. Unfortunately, we’ve seen that the criteria are often substandard, possibly even tangential or counter to effective learning-to-performance design.

Publicity of Winners Effect

  • Role Modeling
    We had hoped that winners would be deserving and worthy of being models, but we’ve seen that the many flaws of the various awards processes may result in winners not really being exemplars of excellence.
  • Rewarding of Good Effort
    We had hoped that those doing good work would be acknowledged and rewarded, but now we can see that we might be acknowledging mediocre efforts instead.
  • Promotion and Recruitment Effects
    We had hoped that our best and brightest might get promotions, be recruited, and be rewarded, but now it seems that people might be advantaged willy-nilly.
  • Resourcing and Autonomy Effects
    We had hoped that learning departments that do the best work would gain resources, respect, and reputational advantages; but now we see that learning departments could win an award without really deserving it. Moreover, the best resourced organizations may be able to hire award writers, allocate graphic design help, etc., to push their mediocre effort to award-winning status.
  • Vendor Marketing
    We had hoped that the best vendors would be rewarded, but we can now see that vendors with better marketing skills or resources—rather than the best learning solutions—might be rewarded instead.
  • Purchasing Support
    We had hoped that these industry awards might create market signals to help organizations procure the most effective learning solutions. We can see now that the award signals are extremely unreliable as indicators of effectiveness. If ONE awards organization can manufacture 196 gold medalists and 512 overall in a single year, how esteemed is such an award?

Benefits of Judging

  • Market Intelligence
    We had hoped that judges who participated would learn best practices and innovations, but it seems that the poor criteria involved might nudge judges to focus on information and particulars that are less relevant to effective learning design.

What Should We Do Now?

You should draw your own conclusions, but here are my recommendations:

  1. Don’t assume that award winners are deserving or that non-award winners are undeserving.
  2. When evaluating vendors or consultants, ignore the awards they claim to have won—or investigate their solutions yourself.
  3. If you are a senior manager (whether on the learning team or in the broader organization), do not allow your learning teams to apply for these awards, unless you first fully vet the award process. Better to hire research-to-practice experts and evaluation experts to support your learning team’s personal development.
  4. Don’t participate as a judge in these contests unless you first vet their applications, criteria, and the way they handle judging.
  5. If your organization runs an awards contest, reevaluate your process and improve it, where needed. You can use the contents of this article as a guide for improvement.

Mea Culpa

I give an award every year, and I certainly don’t live up to all the standards in this article.

My award, the Neon Elephant Award, is designed to highlight the work of a person or group who utilizes or advocates for practical research-based wisdom. Winners include Ruth Clark, Paul Kirschner, K. Anders Ericsson, and Julie Dirksen (among a bunch of great people; check out the link).

Interestingly, I created the award starting in 2006 because of my dissatisfaction with the awards typical in our industry at that time—awards that measured butts in seats, etc.

It Ain’t Easy — And It Will Never Be Easy!

Organizing an awards process or vetting content is not easy. A few of you may remember the excellent work of Bill Ellet and his company Training Media Review, starting over two decades ago. It was a monumental effort to evaluate training programs; so monumental, in fact, that it was unsustainable. When Bill or one of his associates reviewed a training program, they spent hours and hours doing so. They spent more time than our awards judges do, and they didn’t review applications; they reviewed the actual learning programs.

Is a good awards process even possible?

Honestly, I don’t know. There are so many things to get right.

Can they be better?

Yes!

Are they good enough now?

Not most of them!


12th December 2019

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, Inc., announces the winner of the 2019 Neon Elephant Award, given to David Epstein for writing the book Range: Why Generalists Triumph in a Specialized World, and for his many years as a journalist and science-inspired truth teller.

Click here to learn more about the Neon Elephant Award…

 

2019 Award Winner – David Epstein

David Epstein is an award-winning writer and journalist, having won awards for his writing from such esteemed bodies as the National Academies of Sciences, Engineering, and Medicine; the Society of Professional Journalists; and the National Center on Disability and Journalism, and he has been included in the Best American Science and Nature Writing anthology. David has been a science writer for ProPublica and a senior writer at Sports Illustrated, where he helped break the story on baseball legend Alex Rodriguez’s steroid use. David speaks internationally on performance science and the uses (and misuses) of data, and his TED talk on human athletic performance has been viewed over eight million times.

Mr. Epstein is the author of two books.

David is honored this year for his new book on human learning and development, Range: Why Generalists Triumph in a Specialized World. The book lays out a very strong case that most people will become better performers if they develop broadly rather than focusing tenaciously and exclusively on one domain. If we want to raise our children to be great soccer players (“football” in most places), we’d be better off having them play multiple sports rather than just soccer. If we want to develop the most innovative cancer researchers, we shouldn’t just train them in cancer-related biology and medicine; we should give them a wealth of information and experiences from a wide range of fields.

Range is a phenomenal piece of art and science. Epstein is truly brilliant in compiling and comprehending the science he reviews, while at the same time telling stories and organizing the book in ways that engage and make complex concepts understandable. In the book, David debunks the common wisdom that performance improves most rapidly and effectively when practice and learning are focused on a narrow specialty. Where others have only hinted at the power of a broad developmental pathway, Epstein’s Range builds a towering landmark of evidence that will remain visible on the horizon of the learning field for decades if not millennia.

We in the workplace learning-and-development field should immerse ourselves in Range—not just in thinking about how to design learning and architect learning contexts, but also in thinking about how to evaluate prospects for recruitment and hiring. It’s likely that we currently undervalue people with broad backgrounds and artificially overvalue people with extreme and narrow talents.

Here is a nice article in which Epstein wrestles with a question that elucidates an issue we have in our field: what happens when many people in a field are not following research-based guidelines? The article is set in the medical profession, but there are definite parallels to what we face every day in the learning field.

Epstein is the kind of person we should honor and emulate in the workplace learning field. He is unafraid in seeking the truth, relentless and seemingly inexhaustible in his research efforts, and clear and engaging as a conveyor of information. It is an honor to recognize him as this year’s winner of the Neon Elephant Award.


I want to thank David Kelly and the eLearning Guild for awarding me the prestigious title of Guild Master.

The Guild Masters are an amazing list of folks, including lots of research-to-practice legends like Ruth Clark, Julie Dirksen, Clark Quinn, Jane Bozarth, Karl Kapp, and others who utilize research-based recommendations in their work.

Delighted to be included!


15th December 2018

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, Inc., announces the winner of the 2018 Neon Elephant Award, given to Clark Quinn for writing the book Millennials, Goldfish & Other Training Misconceptions: Debunking Learning Myths and Superstitions, and for his many years advocating for research-based practices in the workplace learning field.

Click here to learn more about the Neon Elephant Award…

 

2018 Award Winner – Clark Quinn, PhD

Clark Quinn, PhD, is an internationally-recognized consultant and thought-leader in learning technology and organizational learning. Dr. Quinn holds a doctorate in Cognitive Psychology from the University of California at San Diego. Since 2001, Clark has been consulting, researching, writing, and speaking through his consulting practice, Quinnovation (website). Clark has been at the forefront of some of the most important trends in workplace learning, including his early advocacy for mobile learning, his work with the Internet Time Group advocating for a greater emphasis on workplace learning, and his collaboration on the Serious eLearning Manifesto to bring research-based wisdom to elearning design. With the publication of his new book, Clark again shows leadership—now in the cause of debunking learning myths and misconceptions.

Clark is the author of numerous books, focusing not only on debunking learning myths, but also on the practice of learning and development and on mobile learning.

In addition to his lifetime of work, Clark is honored for his new book on debunking the learning myths, Millennials, Goldfish & Other Training Misconceptions: Debunking Learning Myths and Superstitions.

Millennials, Goldfish & Other Training Misconceptions provides a quick overview of some of the most popular learning myths, misconceptions, and mistakes. The book is designed as a quick reference for practitioners—to help trainers, instructional designers, and elearning developers avoid wasting their efforts and their organizations’ resources in using faulty concepts. As I wrote in the book’s preface, “Clark Quinn has compiled, for the first time, the myths, misconceptions, and confusions that imbue the workplace learning field with faulty decision making and ineffective learning practices.”

When we think about how much time and money has been wasted by learning myths, when we consider the damage done to learners and organizations, when we acknowledge the harm done to the reputation of the learning profession, we can see how important it is to have a quick reference like Clark has provided.

Clark’s passion for good learning is always evident. From his strategic work with clients, to his practical recommendations around learning technology, to his polemic hyperbole in the revolution book, to his longstanding energy in critiquing industry frailties and praising great work, to his eLearning Guild participatory leadership, to his editorial board contributions at eLearn Magazine, and to his excellent new book; Clark is a kinetic force in the workplace learning field. For his research-inspired recommendations, his tenacity in persevering as a thought-leader consultant, and for his ability to collaborate and share his wisdom, we in the learning field owe Clark Quinn our grateful thanks!


On August 25th 1998, Work-Learning Research was officially born in Portland, Maine, in the United States of America. Please help me celebrate an eventful 20 years!!

In lieu of a big birthday-party bash, I’d like to offer some thanks, brag a little, and invite you to leave a comment below if my work has touched you in any beneficial ways.

If you want a history of the early years, that’s already written here.

I set out 20 years ago to help bridge the gap between scientific research and practice. I had some naive views about how easy that would be, but I’ve tried over the years to compile research from top-tier scientific journals on learning, memory, and instruction and translate what I find into practical recommendations for learning professionals—particularly for those in the workplace learning field. I haven’t done even one-tenth of what I thought I could do, but I see only a little harm in keeping at it.

Thanks!

I have a ton of people to thank for enabling me to persevere. First, my wife, who has been more than patient. Second, my daughter who, still in her mid-teens, brings me hope for the future. Also, my parents and family, who have built a foundation of values and strength. A great deal of credit goes to my clients who, let’s face it, pay the bills and enable this operation to continue. Special thanks to the 227 people who sponsored my Kickstarter campaign to get my smile-sheet book published. Thanks also to the other research-to-practice professionals who are there with ideas, feedback, inspiration, and support. Thanks go out to all those who care about research-based work and evidence-based practice. I thank you for standing up for learning practices that work!

Brags

I’ve made a ton of mistakes as an entrepreneur/consultant, but I’m really proud of a few things, so permit me a moment of hubris to share what they are:

  1. Work-Learning Research has freakin’ survived 20 years!! As the legendary Red Sox radio announcer Joe Castiglione might say, “Can you believe it?”
  2. I have avoided selling out. While vendors regularly approach me asking for research or writing that will publicly praise their offerings, I demur.
  3. I published a book that added a fundamental innovation to the workplace learning field. Performance-Focused Smile Sheets will be, in my not-so-humble opinion, an historic text. I’m also proud that 227 people in our field stood up and contributed $13,614 to help me get the book published!
  4. I was talking about fundamental research-based concepts like retrieval practice and spacing back in the early 2000s, over ten years before books like Brown, Roediger, and McDaniel (2014) popularized these concepts, and I continue to emphasize fundamental learning factors because they matter the most.
  5. I have developed a new Learning Evaluation model (LTEM) that enables us to abandon the problematic Kirkpatrick-Katzell Four-Level Model of Evaluation.
  6. I have developed a number of extremely useful models and frameworks, including the Learning Maximizers Model, the Learning Landscape Model, the SEDA Model, the Decisive Dozen, etc.
  7. I have pioneered methods to overcome the limitations of multiple-choice tests, specifically enabling multiple-choice tests to overcome their recognition-only problem.
  8. I have created a robust catalog of publications, blog posts, and videos that share research-based practical wisdom.
  9. I have, at least a little bit, encouraged people in our field to be more skeptical and more careful and to be less inclined to buy into some of the biggest myths in the learning field. I’m attempting now to reinvigorate the Debunker.Club to enable those who care about research-based practice to support each other.
  10. I have, in a small way (not as much as I wish I could) attempted to speak truth to power.
  11. I have, I hope at least a little bit, supported other research-to-practice advocates and thought leaders.
  12. I have had the honor of helping many clients and organizations, including notable engagements with The Navy Seals, the Defense Intelligence Agency, Bloomberg, The Centers for Disease Control and Prevention, Walgreens, ADP, Oxfam, Practising Law Institute, U.S. National Park Service, Society of Actuaries, the Kauffman Foundation, ISPI, the eLearning Guild, ATD, and Learning Technologies among many others.
  13. To make it a baker’s dozen, let me say I’m also proud that I’ve still got things I want to do…

Celebrate with Me!

While I would have loved to host a big party and invited you all, in lieu of that dream, I invite you to leave a comment.

Thank you for embracing me and my work for so many years!

Maybe it’s weird that I’m leading the celebration. Maybe it seems sad! Let me just say, f*ck that! The world doesn’t hand out accolades to most of us. We have to do our own work and celebrate where we can! I’m happy it’s Work-Learning Research’s 20th anniversary. I invite you to be happy with me!

I am truly grateful…

One more thing… the official anniversary is in a week, when I’ll be pleasantly lost in a family vacation… Apologies if I can’t respond quickly if you leave a note below!


15th December 2017

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, Inc., announces the winner of the 2017 Neon Elephant Award, given to Patti Shank for writing and publishing two research-to-practice books this year, Write and Organize for Deeper Learning and Practice and Feedback for Deeper Learning—and for her many years advocating for research-based practices in the workplace learning field.

Click here to learn more about the Neon Elephant Award…

 

2017 Award Winner – Patti Shank, PhD

Patti Shank, PhD, is an internationally recognized learning analyst, writer, and translational researcher in the learning, performance, and talent space. Dr. Shank holds a doctorate in Educational Leadership and Innovation, Instructional Technology from the University of Colorado, Denver, and a Master’s degree in Education and Human Development from George Washington University. Since 1996, Patti has been consulting, researching, and writing through her consulting practice, Learning Peaks LLC (pattishank.com). As the best research-to-practice professionals tend to do, Patti has extensive experience as a practitioner, including roles such as training specialist, training supervisor, and manager of training and education. Patti has also played a critical role collaborating with the workplace learning field’s most prominent trade associations, working, sometimes quixotically, to encourage the adoption of research-based wisdom for learning.

Patti is the author of numerous books, focusing not only on evidence-based practices, but also on online learning, elearning, and learning assessment.

In addition to her lifetime of work, Patti is honored for the two research-to-practice books she published this year!

Write and Organize for Deeper Learning provides research-based recommendations for instructional designers and others who write instructional text. Writing is fundamental to instructional design, but too often, instructional designers don’t get the guidance they need. As I wrote for the back cover of the book, “Write and Organize for Deeper Learning is the book I wish I had back when I was recruiting and developing instructional writers. Based on science, crafted in a voice from hard-earned experience, [the] book presents clear and urgent advice to help instructional writing practitioners.”

Practice and Feedback for Deeper Learning also provides research-based recommendations. This time, Patti’s subjects are two of the most important, but too often neglected, learning approaches: practice and feedback. As learning practitioners, we still too often focus on conveying information. As a seminal review in a top-tier scientific journal put it, “we know from the body of research that learning occurs through the practice and feedback components” (Salas, Tannenbaum, Kraiger, & Smith-Jentsch, 2012, p. 86). As I wrote for the book jacket, Patti’s book “is a research-to-practice powerhouse! …A book worthy of being in the personal library of every instructional designer.”

Patti has worked many years in the trenches, pushing for research-based practices, persevering against lethargic institutions, unexamined traditions, and commercial messaging biased toward sales rather than learning effectiveness. For her research, her grit, and her Sisyphean determination, we in the learning field owe Patti Shank our most grateful thanks!


Click here to learn more about the Neon Elephant Award…


21st December 2016

Neon Elephant Award Announcement

Dr. Will Thalheimer of Work-Learning Research announces the winner of the 2016 Neon Elephant Award, given this year to Pedro De Bruyckere, Paul A. Kirschner, and Casper D. Hulshof for their book, Urban Myths about Learning and Education. Pedro, Paul, and Casper provide a research-based reality check on the myths and misinformation that float around the learning field. Their incisive analysis takes on such myths as learning styles, multitasking, discovery learning, and various and sundry neuromyths.

Urban Myths about Learning and Education is a powerful salve on the wounds engendered by the weak and lazy thinking that abounds too often in the learning field — whether on the education side or the workplace learning side. Indeed, in a larger sense, De Bruyckere, Kirschner, and Hulshof are doing important work illuminating key truths in a worldwide era of post-truth communication and thought. Now, more than ever, we need to celebrate the truth-tellers!

Click here to learn more about the Neon Elephant Award…

2016 Award Winners – Pedro De Bruyckere, Paul Kirschner, and Casper Hulshof

Pedro De Bruyckere (1974) has been an educational scientist at Arteveldehogeschool, Ghent, since 2001. He co-wrote two books with Bert Smits debunking popular myths about Gen Y and Gen Z, education, and pop culture, and he co-wrote a book on girls’ culture with Linda Duits. And, of course, he co-wrote the book for which he and his co-authors are being honored, Urban Myths about Learning and Education. Pedro is a frequently requested public speaker; one of his strongest points is that he “is funny in explaining serious stuff.”

Paul A. Kirschner (1951) is University Distinguished Professor at the Open University of the Netherlands as well as Visiting Professor of Education with a special emphasis on Learning and Interaction in Teacher Education at the University of Oulu, Finland. He is an internationally recognized expert in learning and educational research, with many classic studies to his name. He has served as President of the International Society for the Learning Sciences and is an AERA (American Educational Research Association) Research Fellow (the first European to receive this honor). He is chief editor of the Journal of Computer Assisted Learning, associate editor of Computers in Human Behavior, and has published two very successful books: Ten Steps to Complex Learning and Urban Myths about Learning and Education. His co-author on the Ten Steps book, Jeroen van Merriënboer, won the Neon Elephant Award in 2011.

Casper D. Hulshof is a teacher (assistant professor) at Utrecht University, where he supervises bachelor’s and master’s students. He teaches psychological topics and is especially intrigued by the intersection of psychology with philosophy, mathematics, biology, and informatics. He uses his experience in doing experimental research (mostly quantitative work in the areas of educational technology and psychology) to inform his teaching and writing. He has been awarded teaching honors more than once.

Why Honored?

Pedro De Bruyckere, Paul Kirschner, and Casper Hulshof are honored this year for their book Urban Myths about Learning and Education, a research-based reality check on the myths and misinformation that float around the learning field. With their research-based recommendations, they are helping practitioners in the education and workplace-learning fields make better decisions, create more effective learning interventions, and avoid the most dangerous myths about learning.

For their efforts in sharing practical research-based insights on learning design, the workplace learning-and-performance field owes a grateful thanks to Pedro De Bruyckere, Paul Kirschner, and Casper Hulshof.


Click here to learn more about the Neon Elephant Award…


21st December 2015

Neon Elephant Award Announcement

Dr. Will Thalheimer of Work-Learning Research announces the winner of the 2015 Neon Elephant Award, given this year to Julie Dirksen for her book, Design for How People Learn — just recently released in its second edition. Julie does an incredible job bridging the gap between research and learning practice. Drawing on decades of work with clients building learning interventions, Julie uses her practical experience to distill wisdom from the learning research. Her book is wonderfully written and illustrated, applies research in a practical way, and covers the most critical leverage points for learning effectiveness. Julie speaks with a voice that is authentic and experienced, providing a soothing guidebook for those who dare to learn the truth and complexities of learning design.

Click here to learn more about the Neon Elephant Award…


2015 Award Winner – Julie Dirksen

Julie Dirksen is the principal at Usable Learning, providing consulting services in learning strategy and design. For almost two decades, Julie has worked in the workplace learning-and-performance field, playing roles such as instructional designer, elearning developer, university instructor, learning strategist, keynote speaker, and consultant. Julie is one of the leading voices in our field in recommending research-based learning design and is one of the authors of the Serious eLearning Manifesto.


Why Honored?

Julie Dirksen is honored this year for her book, Design for How People Learn, and for her ongoing work bringing research wisdom to learning design. By creating this book, and updating it just this month, Julie has built a foundational platform to help people get a full and accurate view of learning design. Amazon reviewers speak warmly about how valuable and accessible they find the book.

Like last year’s award winners — Brown, Roediger, and McDaniel; authors of Make it Stick: The Science of Successful Learning — Dirksen excels in the difficult work of research translation. Julie’s unique value-add is that she speaks from years of experience as an instructional designer and learning strategist. When we read her book we feel led by a wise and experienced savant — someone who has an incredible depth of practical experience.

For her efforts sharing practical research-based insights on learning design, the workplace learning-and-performance field owes a grateful thanks to Julie Dirksen.


Click here to learn more about the Neon Elephant Award…


21st December 2014

Neon Elephant Award Announcement

Dr. Will Thalheimer of Work-Learning Research announces the winner of the 2014 Neon Elephant Award, given this year to Peter C. Brown, Henry L. Roediger III, and Mark A. McDaniel for their book, Make it Stick: The Science of Successful Learning—a book that brilliantly conveys scientific principles of learning in prose that is easy to digest, comprehensive and true in its recommendations, highly credible, and impossible to ignore or forget.

Roediger and McDaniel are highly respected learning researchers, and Brown is an author and former management consultant. The book is singularly successful because it pairs researchers with a person who is highly skilled in conveying complex concepts to the public. Where too often important scientific research never leaves the darkened halls of the academy, Roediger and McDaniel demonstrate incredible wisdom and humility in collaborating with Peter C. Brown.

Click here to learn more about the Neon Elephant Award…

2014 Award Winners –
Peter C. Brown, Henry L. Roediger III, and Mark A. McDaniel

Peter C. Brown is an author and retired management consultant. He’s written non-fiction books and even a novel, which was reviewed favorably by many of the top media outlets. Indeed, the Washington Post said this: “Peter C. Brown’s sure and often lyrical evocation of the wild Alaskan coast speaks not only of knowledge but also of love.” His contribution to Make It Stick surely was in his skill in taking cold steely knowledge and bringing warmth and relevance to it.

Henry L. Roediger is the James S. McDonnell Distinguished University Professor of Psychology at Washington University in St. Louis. He’s had a long and distinguished career as a learning-and-memory researcher. His bio highlights his research background: “Roediger’s research has centered on human learning and memory and he has published on many different topics within this area. He has published over 200 articles and chapters on various aspects of memory.” Roediger has served as an editor of numerous scientific journals and helped found the journal Psychological Science in the Public Interest, which reviews research and makes it available and accessible to the public. He was President of the American Psychological Society (now the Association for Psychological Science), the largest psychological organization dedicated to scientific psychology. He has held a Guggenheim fellowship and has been named one of the most highly cited researchers in psychology.

Mark A. McDaniel is a Professor of Psychology at Washington University in St. Louis. He’s also had a long and distinguished career as a learning-and-memory researcher. As captured on his faculty webpage, “His most significant lines of work encompass several areas: prospective memory, encoding processes in enhancing memory, retrieval processes and mnemonic effects of retrieval, functional and intervening concept learning, and aging and memory. One unifying theme in this research is the investigation of factors and processes that lead to memory and learning failures. In much of this work, he has extended his theories and investigations to educationally relevant paradigms.” He has been a Fellow of the Society of Experimental Psychologists and President of the American Psychological Association, Division 3.


Why Honored?

Brown, Roediger, and McDaniel are being honored this year for their book, Make it Stick: The Science of Successful Learning. By creating this wonderful work, they have reached thousands and will continue to influence many teachers, professors, trainers, instructional designers, and elearning developers for years to come. Already, within the same year of publication, the book has over 100 Amazon reviews!

It is difficult work to synthesize research into digestible chunks for public consumption. Brown, Roediger, and McDaniel have done an absolutely superlative job in making the research relevant, in engaging the reader, in conveying deeply complex concepts in a manner that makes sense, and in motivating readers to feel urgency to make learning-design improvements.

I know they’ve already made a difference in the workplace learning-and-performance field because my clients have told me how valuable they’ve found Make It Stick. I’ve even seen senior managers (non-learning professionals) get a new religion for learning by reading Make It Stick. After seeing the gaps between ideal learning practices and current learning practices, one senior military leader engaged his folks in an intense learning audit to determine how well their current learning aligned with the learning research. It’s only when research creates action like this that its full benefits are realized.

For bringing potent learning research to the public, the workplace learning-and-performance field owes a grateful thanks to Peter C. Brown, Henry L. Roediger III, and Mark A. McDaniel.


Click here to learn more about the Neon Elephant Award…


21st December 2013

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, announces the winner of the 2013 Neon Elephant Award, given this year to Gary Klein for his many years doing research and practice in naturalistic decision making, cognitive task analysis, and insight learning, and for reminding us that real-world explorations of human behavior are essential in enabling us to distill key insights.

Click here to learn more about the Neon Elephant Award…

2013 Award Winner – Gary Klein

Gary Klein is a research psychologist who specializes in how people make decisions and gain insights in real-world situations. His research on how firefighters made decisions in their work showed that laboratory models of decision making were not fully accurate. His work in developing cognitive task analysis and his co-authorship of the book Working Minds have provided the training-and-development field with a seminal guide. His recent work on how people develop real-world insights is reorienting the field of creativity research and practice. Klein received his Ph.D. in Experimental Psychology from the University of Pittsburgh in 1969. In the 1990s, he founded his own R&D company, Klein Associates, which he sold in 2005. He was one of the leaders in redesigning the White House’s Situation Room. He continues to be a leading research-to-practice professional.

Klein is honored this year for his lifetime of work straddling the research and practice sides of the learning-and-performance field. By doing great research and great practice, and by using each to augment the other, he has advanced both science and practice to new levels.

One of Klein’s most important contributions is the insight that real-world human behavior cannot always be distilled from laboratory experiments. He has brought this wisdom to firefighting, military decision-making, and most recently to everyday insight-creation.

We in the learning-and-performance field may regard Klein’s work on cognitive task analysis as his most important contribution to our field, but as we look for ways to support employees in on-the-job learning — what some have called informal learning — his focus on naturalistic decision making and insight development is also likely to prove seminal.

For deeply exploring naturalistic decision making and real-world insight, the workplace learning-and-performance field owes a grateful thanks to Gary Klein.


Some Key Publications:

  • Klein, G., & Jarosz, A. (2011). A naturalistic study of insight. Journal of Cognitive Engineering and Decision Making, 5, 335-351.
  • Klein, G. (2008). Naturalistic decision making. Human Factors, 50(3), 456-460.
  • Klein, G. A. (1993). A recognition-primed decision (RPD) model of rapid decision making. In G. A. Klein, J. Orasanu, R. Calderwood, & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 138–147). Norwood, NJ: Ablex.
  • Klein, G., & Hoffman, R. (2008). Macrocognition, mental models, and cognitive task analysis methodology. In J. M. Schraagen, L. Militello, T. Ormerod, & R. Lipshitz (Eds.), Naturalistic decision making and macrocognition (pp. 3-25). Hampshire, UK: Ashgate.
  • Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64, 515-526.


Click here to learn more about the Neon Elephant Award…