The LEARNNOVATORS team (specifically Santhosh Kumar) asked if I would join them in their Crystal Balling with Learnnovators interview series, and I accepted! They have some really great people on the series; I recommend that you check it out!

The most impressive thing was that they must have studied my whole career history and read my publication list and watched my videos because they came up with a whole set of very pertinent and important questions. I was BLOWN AWAY—completely IMPRESSED! And, given their dedication, I spent a ton of time preparing and answering their questions.

It’s a two-part series and here are the links:

Here are some of the quotes they pulled out, along with a few I’d like to highlight:

Learning is one of the most wondrous, complex, and important areas of human functioning.

The explosion of different learning technologies beyond authoring tools and LMSs is likely to create a wave of innovations in learning.

Data can be good, but also very very bad.

Learning Analytics is poised to cause problems as well. People are measuring all the wrong things. They are measuring what is easy to measure in learning, but not what is important.

We will be bamboozled by vendors who say they are using AI, but are not, or who are using just 1% AI and claiming that their product is AI-based.

Our senior managers don’t understand learning; they think it is easy, so they don’t support L&D as they should.

Because our L&D leaders live in a world where they are not understood, they do stupid stuff like pretending to align learning with business terminology and business-school vibes—forgetting to align first with learning.

We lie to our senior leaders when we show them our learning data—our smile sheets and our attendance data. We then manage toward these superstitious targets, causing a gross loss of effectiveness.

Learning is hard and learning that is focused on work is even harder because our learners have other priorities—so we shouldn’t beat ourselves up too much.

We know from the science of human cognition that when people encounter visual stimuli, their eyes move rapidly from one object to another and back again trying to comprehend what they see. I call this the “eye-path phenomenon.” So, because of this inherent human tendency, we as presenters—as learning designers too!—have to design our presentation slides to align with these eye-path movements.

Organizations now—and even more so in the near future—will use many tools in a Learning-Technology Stack. These will include (1) platforms that offer asynchronous cloud-based learning environments that enable and encourage better learning designs, (2) tools that enable realistic practice in decision-making, (3) tools that reinforce and remind learners, (4) spaced-learning tools, (5) habit-support tools, (6) insight-learning tools (those that enable creative ideation and innovation), et cetera.

Learnnovators asked me what I hoped for the learning and development field. Here’s what I said:

Nobody is good at predicting the future, so I will share the vision I hope for. I hope we in learning and development continue to be passionate about helping other people learn and perform at their best. I hope we recognize that we have a responsibility not just to our organizations, but beyond business results to our learners, their coworkers/families/friends, to the community, society, and the environment. I hope we become brilliantly professionalized, having rigorous standards, a well-researched body of knowledge, higher salaries, and career paths beyond L&D. I hope we measure better, using our results to improve what we do. I hope we, more and more, take a small-s scientific approach to our practices, doing more A-B testing, compiling a database of meaningful results, building virtuous cycles of continuous improvement. I hope we develop better tools to make building better learning—and better performance—easier and more effective. And I hope we continue to feel good about our contributions to learning. Learning is at the heart of our humanity!

I’m thrilled and delighted to share the news that Jane Bozarth, research-to-practice advocate, author of Show Your Work, and Director of Research for the eLearning Guild, is pledging $1,000 to the Learning Styles Challenge!

Jane has been a vigorous debunker of the Learning-Styles Myth for many, many years! For those of you who don’t know, the Learning-Styles Notion is the idea that different people have different styles of learning and that by designing our learning programs to meet each style—that is, to actually provide different learning content or activities to different learners—learning will be improved. Sounds great, but unfortunately, dozens and dozens of research studies and many major research reviews have found the Learning-Styles Notion to be untrue!


“Decades of research suggest that learning styles, or the belief that people learn better when they receive instruction in their dominant way of learning, may be one of the most pervasive myths about cognition.”

Nancekivell, S. E., Shah, P., & Gelman, S. A. (2020). Maybe they’re born with it, or maybe it’s experience: Toward a deeper understanding of the learning style myth. Journal of Educational Psychology, 112(2), 221–235.


“Several reviews that span decades have evaluated the literature on learning styles (e.g., Arter & Jenkins, 1979; Kampwirth & Bates, 1980; Kavale & Forness, 1987; Kavale, Hirshoren, & Forness, 1998; Pashler et al., 2009; Snider, 1992; Stahl, 1999; Tarver & Dawson, 1978), and each has drawn the conclusion that there is no viable evidence to support the theory.”

Willingham, D. T., Hughes, E. M., & Dobolyi, D. G. (2015). The scientific status of learning styles theories. Teaching of Psychology, 42(3), 266–271.


With Jane’s contribution, the Learning Styles Challenge is up to $6,000! That is, if someone can demonstrate a beneficial effect from using learning styles to design learning, the underwriters will pay that person or group $6,000.

The Learning Styles Challenge began on August 4th, 2006, when I offered $1,000 for the first challenge. In 2014, the pot expanded to $5,000 when additional pledges were made by Guy Wallace, Sivasailam “Thiagi” Thiagarajan, Bob Carleton, and Bob’s company, Vector Group.

Thank you to Jane Bozarth for her generous contribution to the cause! And check out her excellent research review of the learning-styles literature. Jane’s report is filled with tons of research, but also many very practical recommendations for learning professionals.


12th December 2019

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, Inc., announces the winner of the 2019 Neon Elephant Award, given to David Epstein for writing the book Range: Why Generalists Triumph in a Specialized World, and for his many years as a journalist and science-inspired truth teller.

Click here to learn more about the Neon Elephant Award…


2019 Award Winner – David Epstein

David Epstein is an award-winning writer and journalist, having won awards for his writing from such esteemed bodies as the National Academies of Sciences, Engineering, and Medicine; the Society of Professional Journalists; and the National Center on Disability and Journalism—and he has been included in the Best American Science and Nature Writing anthology. David has been a science writer for ProPublica and a senior writer at Sports Illustrated, where he helped break the story of baseball legend Alex Rodriguez’s steroid use. David speaks internationally on performance science and the uses (and misuses) of data, and his TED talk on human athletic performance has been viewed over eight million times.

Mr. Epstein is the author of two books:

David is honored this year for his new book on human learning and development, Range: Why Generalists Triumph in a Specialized World. The book lays out a very strong case for why most people will become better performers if they focus broadly on their development rather than focusing tenaciously and exclusively on one domain. If we want to raise our children to be great soccer players (aka “football” in most places), we’d be better off having them play multiple sports rather than just soccer. If we want to develop the most innovative cancer researchers, we shouldn’t just train them in cancer-related biology and medicine, we should give them a wealth of information and experiences from a wide range of fields.

Range is a phenomenal piece of art and science. Epstein is truly brilliant in compiling and comprehending the science he reviews, while at the same time telling stories and organizing the book in ways that engage readers and make complex concepts understandable. In writing the book, David debunks the common wisdom that performance improves most rapidly and effectively when practice and learning are focused on a narrow domain. Where others have only hinted at the power of a broad developmental pathway, Epstein’s Range builds a towering landmark of evidence that will remain visible on the horizon of the learning field for decades if not millennia.

We in the workplace learning-and-development field should immerse ourselves in Range—not just in thinking about how to design learning and architect learning contexts, but also in thinking about how to evaluate prospects for recruitment and hiring. It’s likely that we currently undervalue people with broad backgrounds and artificially overvalue people with extreme and narrow talents.

Here is a nice article where Epstein wrestles with a question that elucidates an issue we have in our field: what happens when many people in a field are not following research-based guidelines? The article is set in the medical profession, but there are definite parallels to what we face every day in the learning field.

Epstein is the kind of person we should honor and emulate in the workplace learning field. He is unafraid in seeking the truth, relentless and seemingly inexhaustible in his research efforts, and clear and engaging as a conveyor of information. It is an honor to recognize him as this year’s winner of the Neon Elephant Award.


Click here to learn more about the Neon Elephant Award…

Will’s Note: ONE DAY after publishing this first draft, I’ve decided that I mucked this up, mashing up what researchers, research translators, and learning professionals should focus on. Within the next week, I will update this to a second draft. You can still read the original below (for now):


Some evidence is better than other evidence. We naturally trust ten well-designed research studies more than one. We trust a well-controlled scientific study more than a poorly controlled one. We trust scientific research more than opinion research, unless all we care about is people’s opinions.

Scientific journal editors have to decide which research articles to accept for publication and which to reject. Practitioners have to decide which research to trust and which to ignore. Politicians have to know which lies to tell and which to withhold (kidding, sort of).

To help themselves make decisions, journal editors regularly rank each article on a continuum from strong research methodology to weak. The medical field likewise uses a level-of-evidence approach to making medical recommendations.

There are many taxonomies for “levels of evidence” or “hierarchy of evidence” as it is commonly called. Wikipedia offers a nice review of the hierarchy-of-evidence concept, including some important criticisms.

Hierarchy of Evidence for Learning Practitioners

The suggested models for levels of evidence were created by and for researchers, so they are not directly applicable to learning professionals. Still, it’s helpful for us to have our own hierarchy of evidence, one that we might actually be able to use. For that reason, I’ve created one, adding the practical evidence that is missing from the research-focused taxonomies. As in the research versions, Level 1 is the strongest.

  • Level 1 — Evidence from systematic research reviews and/or meta-analyses of all relevant randomized controlled trials (RCTs) that have ALSO been utilized by practitioners and found both beneficial and practical from a cost-time-effort perspective.
  • Level 2 — Same evidence as Level 1, but NOT systematically or sufficiently utilized by practitioners to confirm benefits and practicality.
  • Level 3 — Consistent evidence from a number of RCTs using different contexts and situations and learners; and conducted by different researchers.
  • Level 4 — Evidence from one or more RCTs that utilize the same research context.
  • Level 5 — Evidence from one or more well-designed controlled trials without randomization of learners to different learning factors.
  • Level 6 — Evidence from well-designed cohort or case-control studies.
  • Level 7 — Evidence from descriptive and/or qualitative studies.
  • Level 8 — Evidence from research-to-practice experts.
  • Level 9 — Evidence from the opinion of other authorities, expert committees, etc.
  • Level 10 — Evidence from the opinion of practitioners surveyed, interviewed, focus-grouped, etc.
  • Level 11 — Evidence from the opinion of learners surveyed, interviewed, focus-grouped, etc.
  • Level 12 — Evidence curated from the internet.

Let me consider this Version 1 until I get feedback from you and others!
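For readers who want to put this hierarchy to work when auditing sources, here is a minimal sketch of one way to encode it. This is my own illustration, not part of the taxonomy itself, and the example sources are hypothetical:

```python
# A sketch of the hierarchy as a sortable enum: lower value = stronger evidence.
from enum import IntEnum

class EvidenceLevel(IntEnum):
    REVIEWED_RCTS_PRACTICE_CONFIRMED = 1  # systematic reviews/meta-analyses, confirmed by practitioners
    REVIEWED_RCTS = 2                     # same reviews, not yet confirmed in practice
    CONSISTENT_RCTS = 3                   # multiple RCTs across contexts and researchers
    SAME_CONTEXT_RCTS = 4                 # one or more RCTs in a single research context
    CONTROLLED_NO_RANDOMIZATION = 5
    COHORT_OR_CASE_CONTROL = 6
    DESCRIPTIVE_OR_QUALITATIVE = 7
    RESEARCH_TO_PRACTICE_EXPERTS = 8
    OTHER_AUTHORITIES = 9
    PRACTITIONER_OPINION = 10
    LEARNER_OPINION = 11
    INTERNET_CURATION = 12

# Hypothetical sources, each tagged with its evidence level.
sources = [
    ("Vendor white paper found online", EvidenceLevel.INTERNET_CURATION),
    ("Spacing-effect meta-analysis", EvidenceLevel.REVIEWED_RCTS),
    ("Learner survey results", EvidenceLevel.LEARNER_OPINION),
]

# Weigh the strongest evidence first when making a design decision.
for name, level in sorted(sources, key=lambda s: s[1]):
    print(f"Level {level.value:>2}: {name}")
```

The point of encoding the levels this way is simply that tagging sources forces you to notice how much of your evidence base sits at the weak end of the hierarchy.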

Critical Considerations

  1. Some evidence is better than other evidence.
  2. If you’re not an expert in evaluating evidence, get insights from those who are—particularly valuable are research-to-practice experts (those with considerable experience in translating research into practical recommendations).
  3. Opinion research in the learning field is especially problematic because the field comprises both strong and poor conceptions of what works.
  4. Learner opinions are problematic as well because learners often have poor intuitions about what works for them in supporting their learning.
  5. Curating information from the internet is especially problematic because it’s difficult to distinguish between good and poor sources.

Trusted Research-to-Practice Experts

(in no particular order, they’re all great!)

  • (Me) Will Thalheimer
  • Patti Shank
  • Julie Dirksen
  • Clark Quinn
  • Mirjam Neelen
  • Ruth Clark
  • Donald Clark
  • Karl Kapp
  • Jane Bozarth
  • Ulrich Boser

The 70-20-10 Framework has been all the rage for the last five or ten years in the workplace learning field. Indeed, I organized a great debate about 70-20-10 through The Debunker Club (you can see the tweet stream here). I have gone on record saying that the numbers don’t have a sound research backing, but that the concept is a good one—particularly the idea that we as learning professionals ought to leverage on-the-job learning where we can.

What is 70-20-10?

The 70-20-10 framework is built on the belief that 10% of workplace learning is, or should be, propelled by formal training; that 20% is, or should be, enabled by learning directly from others; and that 70% of workplace learning comes, or should come, from employees’ learning through workplace experiences.

Supported by Research?

Given all the energy around 70-20-10, you might think that lots of rigorous scientific research has been done on the framework. Well, you would be wrong!

In fact, up until today (April 19, 2019), only one study on the framework has been published in a scientific journal (my search of PsycINFO reveals just one). In this post, I will review that study, published last year:

Johnson, S. J., Blackman, D. A., & Buick, F. (2018). The 70:20:10 framework and the transfer of learning. Human Resource Development Quarterly. Advance online publication.

Caveats

All research has strengths, weaknesses, and limitations—and it’s helpful to acknowledge these so we can think clearly. First, one study cannot be definitive, and this is just one study. Also, this study is qualitative and relies on subjective inputs to draw its conclusions; ideally, we’d like more objective measures to be utilized. Finally, it gathers data from a small sample of public-sector workers, whereas ideally we’d want a wider, more diverse range of participants.

Methodology

The researchers found a group of organizations that had been bombarded with messages and training encouraging the use of the 70-20-10 model. Specifically, starting in 2011, the APSC (the Australian Public Service Commission) encouraged the Australian public sector to embrace 70-20-10.

The specific study “draws from the experiences of two groups of Australian public sector managers: senior managers responsible for implementing the 70:20:10 framework within their organization; and middle managers who have undergone management capability development aligned to the 70:20:10 framework. All managers were drawn from the Commonwealth, Victorian, Queensland, and Northern Territory governments.”

A qualitative approach was chosen, according to the researchers, “given the atheoretical nature of the 70:20:10 framework and the lack of theory or evidence to provide a research framework.”

The qualitative approaches used by the researchers were individual structured interviews and group structured interviews.

The researchers chose people to interview based on their experience using the 70-20-10 framework to develop middle managers. “A purposive sampling technique was adopted, selecting participants who had specific knowledge of, and experience with, middle management capability development in line with the 70:20:10 framework.”

The researchers used a qualitative data-analysis program (NVivo) to help them organize and make sense of the qualitative data (the words collected in the interviews). According to Wikipedia, “NVivo is intended to help users organize and analyze non-numerical or unstructured data. The software allows users to classify, sort and arrange information; examine relationships in the data; and combine analysis with linking, shaping, searching and modeling.”

Overall Results

The authors conclude the following:

“In terms of implications for practice, the 70:20:10 framework has the potential to better guide the achievement of capability development through improved learning transfer in the public sector. However, this will only occur if future implementation guidelines focus on both the types of learning required and how to integrate them in a meaningful way. Actively addressing the impact that senior managers and peers have in how learning is integrated into the workplace through both social modeling and organizational support… will also need to become a core part of any effective implementation.”

“Using a large qualitative data set that enabled the exploration of participant perspectives and experiences of using the 70:20:10 framework in situ, we found that, despite many Australian public sector organizations implementing the framework, to date it is failing to deliver desired learning transfer results. This failure can be attributed to four misconceptions in the framework’s implementation: (a) an overconfident assumption that unstructured experiential learning will automatically result in capability development; (b) a narrow interpretation of social learning and a failure to recognize the role social learning has in integrating experiential, social and formal learning; (c) the expectation that managerial behavior would automatically change following formal training and development activities without the need to actively support the process; and (d) a lack of recognition of the requirement of a planned and integrated relationship between the elements of the 70:20:10 framework.”

Specific Difficulties

With Experiential Learning

“Senior managers indicated that one reason for adopting the 70:20:10 framework was that the dominant element of 70% development achieved through experiential learning reflected their expectation that employees should learn on the job. However, when talking to the middle managers themselves, it was not clear how such learning was being supported. Participants suggested that one problem was a leadership perception across senior managers that middle managers could automatically transition into middle management roles without a great deal of support or development.”

“The most common concern, however, was that experiential learning efficacy was challenged because managers were acquiring inappropriate behaviors on the job based on what they saw around them every day.”

“We found that experiential learning, as it is currently being implemented, is predominantly unstructured and unmanaged, that is, systems are not put in place in the work environment to support learning. It was anticipated that managers would learn on the job, without adequate preparation, additional support, or resourcing to facilitate effective learning.”

With Social Learning

“Overall, participants welcomed the potential of social learning, which could help them make sense of their context, enabling both sense making of new knowledge acquired and reinforcing what was appropriate both in, and for, their organization. However, they made it clear that, despite apparent organizational awareness of the value of social learning, it was predominantly dependent upon the preferences and working styles of individual managers, rather than being supported systematically through organizationally designed learning programs. Consequently, it was apparent that social learning was not being utilized in the way intended in the 70:20:10 framework in that it was not usually integrated with formal or experiential learning.”

Mentoring

“Mentoring was consistently highlighted by middle and senior managers as being important for both supporting a middle manager’s current job and for building future capacity.”

“Despite mentoring being consistently raised as the most favored form of development, it was not always formally supported by the organization, meaning that, in many instances, mentoring was lacking for middle managers.”

“A lack of systemic approaches to mentoring meant it was fragile and often temporary.”

Peer Support

“Peer support and networking encouraged middle managers to adopt a broader perspective and engage in a community of practice to develop ideas regarding implementing new skills.”

“However, despite managers agreeing that networks and peer support would assist them to build capability and transfer learning to the workplace, there appeared to be few organizationally supported peer learning opportunities. It was largely up to individuals to actively seek out and join their own networks.”

With Formal Learning

“Formal learning programs were recognized by middle and senior managers as important forms of capability development. Attendance was often encouraged for new middle managers.”

“However, not all experiences with formal training programs were positive, with both senior and middle managers reflecting on their ineffectiveness.”

“For the most part, participants reported finishing formal development programs with little to no follow up.”

“There was a lack of both social and experiential support for embedding this learning. The lack of social learning support partly revolved around the high workloads of managers and the lack of time devoted to development activities.”

“The lack of experiential support and senior management feedback meant that many middle managers did not have the opportunity to practice and further develop their new skills, despite their initial enthusiasm.”

“A key issue with this was the lack of direct and clear guidance provided by their line managers.”

“A further issue with formal learning was that it was often designed generically for groups of participants…  The need for specificity also related to the lack of explicit, individualized feedback provided by their line manager to reinforce and embed learning.”

What Should We Make of This Preliminary Research?

Again, with only one study—and a qualitative one conducted on a narrow type of participant—we should be very careful in drawing conclusions.

Still, the study can help us develop hypotheses for further testing—both by researchers and by us as learning professionals.

We also ought to be careful about casting doubt on the 70-20-10 framework itself. Indeed, the research seems to suggest that the framework was not always implemented as intended. On the other hand, when a model is routinely implemented poorly, we should become skeptical that it will produce reliable benefits.

Here is a list of reflections the research generated in me:

  1. Why so much excitement for 70-20-10 with so little research backing?
  2. Formal training was found to have all the problems normally associated with it, especially the lack of follow-through and after-training support—so we still need to work to improve it!
  3. Who will provide continuous support for experiential and social learning? In the research case, the responsibility for implementing on-the-job learning experiences was not clear, and so the implementation was not done or was poorly done.
  4. What does it take in terms of resources, responsibility, and tasking to make experiential and social learning useful? Or, is this just a bridge too far?
  5. The most likely leverage point for on-the-job learning still seems, to me, to be managers. If this is a correct assumption—and really it should be tested—how can we in Learning & Development encourage, support, and resource managers for this role?

Sign Up For Will’s News by Clicking Here


I happened to notice these two statements printed in vendor literature at a recent conference. I’ve obscured their names just enough so that I’m not obviously picking on them, but they will know who they are.

Statement #1 from vendor named “C*g*i*o”

  • “We all know that up to 80% of what learners are taught in training will be lost in 30 days if there is no practice or reinforcement.”

Statement #2: from vendor named “A*ea*”

  • “We have known for more than 150 years that humans forget up to 70% of what they learn within 24 hours!”

These statements are false and misleading. To get a more accurate view of human forgetting, check out this well-researched document.
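To see why no single percentage can capture human forgetting, consider the classic exponential forgetting-curve form often attributed to Ebbinghaus-style research. The sketch below is illustrative only: the model is a textbook simplification, and the stability values are invented purely to show how widely “percent forgotten in 24 hours” can swing with the material, the learner, and the learning method.

```python
import math

def retention(hours: float, stability: float) -> float:
    """Textbook exponential forgetting curve: R = exp(-t/S).
    The stability S is NOT a universal constant; it varies enormously
    with the material, the learner, and the learning method."""
    return math.exp(-hours / stability)

# Invented stability values (in hours), purely for illustration.
for label, s in [("weakly encoded material", 20.0),
                 ("well-practiced material", 400.0)]:
    lost = (1 - retention(24, s)) * 100
    print(f"{label}: ~{lost:.0f}% forgotten after 24 hours")
# Prints roughly 70% for one case and 6% for the other: the same 24 hours,
# wildly different forgetting. Blanket percentages hide this variation.
```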

The Sad Reality of Faux or Misleading Research Citations in Vendor Literature

Buyer beware! Vendors now sprinkle their verbal and visual communications with research-sounding sound bites, a technique that plays on our confirmation bias. Because we are human, this persuasion technique is likely to snare us.

We may even buy a product or service that doesn’t work.

My recommendation: Spend $500 on a research-to-practice expert to save yourself tens or hundreds of thousands of dollars, euros, pounds, etc.


This is NOT a post about Bob Mager. It is something else entirely.

In probably the best video I will ever create, I made the case that learning professionals and learners should NOT receive the same set of learning objectives.

The rationale is this: Because objectives are designed to guide behavior, how could one statement possibly guide the behaviors of two separate audiences? Sometimes maybe! But not always!

Arguments for the Infallibility of an Instructional-Design Hero

Recently, I’ve heard it argued that Bob Mager, in his classic text, “Preparing Instructional Objectives,” urged us to create instructional objectives only for ourselves as learning professionals, and that he never intended instructional objectives to be presented to learners. This is a testable assertion, which is great! We can agree that Mager gave us some good advice on how to craft objectives for ourselves as learning professionals. But did Mager also, perhaps, suggest that objectives could be presented to learners?

Here are several word-for-word quotes from Mager’s book:

Page 16: Heading: “Goal Posts for Students”

Page 16: “Clearly defined objectives also can be used to provide students with the means to organize their own time and efforts toward accomplishment of those objectives.”

Page 17: “With clear objectives, it is possible to organize the instruction itself so that instructors and students alike can focus their efforts on bridging the gap…”

Page 19: Chapter Summary. “Objectives are useful for providing: … Tools for guiding student efforts…”

Page 43: “Objectives in the hands of students prevent the students from having to guess at how they might best organize their time and effort.”

So Mager clearly started the confusion! But Mager wrote at a time before research on cognition enabled greater insight.

Forget Mager’s contribution, though. The bigger problem is that the most common practice still seems to be creating a single set of learning objectives for use by both learners and learning practitioners.

Scolded

I was even scolded for not knowing the difference between an instructional objective (for learning professionals) and a learning objective (for learners). Of course, these revisionist definitions are neither true nor helpful. They are fake news, concocted perhaps by people who think, or were taught, that our instructional-design heroes are perfect and their work is sacrosanct. The truth is that these terms have been used interchangeably. For example, in a research study, my mentor and academic advisor Ernie Rothkopf and his research partner used the term instructional objectives to refer to objectives presented to learners:

Rothkopf, E. Z., & Kaplan, R. (1972). An exploration of the effect of density and specificity of instructional objectives on learning from text. Journal of Educational Psychology, 63(4), 295–302.

My Main Points

  • We need at least two types of objectives (although I’ve argued for more)—one to guide the design, development, and evaluation of learning; one to guide learners as they are learning. I’ve called these “focusing objectives,” because the research shows that they guide attention toward objective-relevant content.
  • When we make arguments, we ought to at least skim the sources to see if we know what we’re talking about.
  • We ought to stop with hero worship. All of us do some good things and some bad things. Even the best of us.
  • Hero worship in the learning field is particularly problematic because learning is so complex and we all still have so much to learn. All of us attempting to make recommendations are likely to be wrong some of the time.
  • It is ironic that our schools of instructional design teach graduate students to memorize facts and hold up heroes as infallible immortals—when instead they ought to be educating these future citizens about how progress gets made over long periods of time by a large collective of people. They also ought to be teaching students to understand at a deeper level, not just a knowledge level. But truly, we can’t blame the schools of instructional design. After all, they started with canonically-correct instructional objectives (focused on low-level knowledge because they are easier to create).

Finally, let me say that in the video I praise Bob Mager’s work on learning objectives for us learning professionals. This post is not about Mager.


The Debunker Club — where I am an organizer — is sponsoring a members-only Book Group Discussion of The Knowledge Illusion — by Steven Sloman & Philip Fernbach.

This book is fascinating, laying out the argument that human cognition, because it is so resource intensive, is something we humans tend to offload. That is, we tend to avoid the hard work of learning when we can instead rely on simple heuristics, on objects in our environment, or on other people who have more knowledge to inform our actions.

The book’s discussions focus on knowledge and have great relevance to those of us in the learning field.

If you’re a Debunker Club member, please join the community book discussion group starting tomorrow, January 11th. You can join the discussion by clicking here.

Note: The Discussion will unfold over several months asynchronously and chapter by chapter so people from around the world can easily join. Don’t worry if you haven’t read the book yet. Grab it and join us.

If you’re not a member, it’s easy to join The Debunker Club. You can join by clicking here.

More information about The Debunker Club can be found by clicking here. We have over 800 members from around the world dedicated to eliminating learning myths and sharing evidence-based practices.


15th December 2018

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, Inc., announces the winner of the 2018 Neon Elephant Award, given to Clark Quinn for writing the book Millennials, Goldfish & Other Training Misconceptions: Debunking Learning Myths and Superstitions, and for his many years advocating for research-based practices in the workplace learning field.

Click here to learn more about the Neon Elephant Award…


2018 Award Winner – Clark Quinn, PhD

Clark Quinn, PhD, is an internationally recognized consultant and thought leader in learning technology and organizational learning. Dr. Quinn holds a doctorate in Cognitive Psychology from the University of California at San Diego. Since 2001, Clark has been consulting, researching, writing, and speaking through his consulting practice, Quinnovation. Clark has been at the forefront of some of the most important trends in workplace learning, including his early advocacy for mobile learning, his work with the Internet Time Group advocating for a greater emphasis on workplace learning, and his collaboration on the Serious eLearning Manifesto to bring research-based wisdom to elearning design. With the publication of his new book, Clark again shows leadership—now in the cause of debunking learning myths and misconceptions.

Clark is the author of numerous books, focusing not only on debunking learning myths, but also on the practice of learning and development and mobile learning. The following are representative:

In addition to his lifetime of work, Clark is honored for his new book on debunking learning myths, Millennials, Goldfish & Other Training Misconceptions: Debunking Learning Myths and Superstitions.

Millennials, Goldfish & Other Training Misconceptions provides a quick overview of some of the most popular learning myths, misconceptions, and mistakes. The book is designed as a quick reference for practitioners—to help trainers, instructional designers, and elearning developers avoid wasting their efforts and their organizations’ resources in using faulty concepts. As I wrote in the book’s preface, “Clark Quinn has compiled, for the first time, the myths, misconceptions, and confusions that imbue the workplace learning field with faulty decision making and ineffective learning practices.”

When we think about how much time and money has been wasted by learning myths, when we consider the damage done to learners and organizations, when we acknowledge the harm done to the reputation of the learning profession, we can see how important it is to have a quick reference like Clark has provided.

Clark’s passion for good learning is always evident. From his strategic work with clients, to his practical recommendations around learning technology, to his polemic hyperbole in the revolution book, to his longstanding energy in critiquing industry frailties and praising great work, to his eLearning Guild participatory leadership, to his editorial board contributions at eLearn Magazine, and to his excellent new book; Clark is a kinetic force in the workplace learning field. For his research-inspired recommendations, his tenacity in persevering as a thought-leader consultant, and for his ability to collaborate and share his wisdom, we in the learning field owe Clark Quinn our grateful thanks!


Click here to learn more about the Neon Elephant Award…