Posts

I just learned of a new initiative to push professionalization in the workplace learning-and-performance field. Because I'm a strong believer that we regularly fail in this, I'm signing on!

The Four Responsibilities of The Learning Professional

Here's what I wrote in my commitment statement:

Brilliant! And needed! The workplace learning-and-performance field is severely under-professionalized. When an organization asks an architect to design a building, the architect works from time-tested principles, from a code of ethics, from an architecture-first integrity. Certainly architects try to help their clients meet their goals, but they don't capitulate on the important stuff. I believe in the four principles!

Still, I think maybe we need to be even more directive. For example, we need a responsibility focused on working toward successive improvement, on doing effective measurement, and so on. Maybe the four responsibilities can provide the umbrella hierarchy for more specific responsibilities… I'm a professional quibbler. Sorry about that! I wholeheartedly support your effort!

Check out the FOUR-RESPONSIBILITIES website by CLICKING HERE!

 

More Noble Causes in the Learning Field:

And, if you're the kind to sign up for noble causes, please check out the Serious eLearning Manifesto, which I worked on with Michael Allen, Julie Dirksen, and Clark Quinn.

Also, check out The Debunker Club, an effort to rid the learning field of myths and misconceptions. You can join the Debunker.Club too!

 

A few years ago, I created a simple model for training effectiveness based on the scientific research on learning in conjunction with some practical considerations (to make the model’s recommendations leverageable for learning professionals). People keep asking me about the model, so I’m going to briefly describe it here. If you want to look at my original YouTube video about the model — which goes into more depth — you can view that here. You can also see me in my bald phase.

The Training Maximizers Model includes 7 requirements for ensuring our training or teaching will achieve maximum results.

  • A. Valid Credible Content
  • B. Engaging Learning Events
  • C. Support for Basic Understanding
  • D. Support for Decision-Making Competence
  • E. Support for Long-Term Remembering
  • F. Support for Application of Learning
  • G. Support for Perseverance in Learning

Here’s a graphic depiction:

 

Most training today is pretty good at A, B, and C but fails to provide the other supports that learning requires. This is a MAJOR PROBLEM because learners who can’t make decisions (D), learners who can’t remember what they’ve learned (E), learners who can’t apply what they’ve learned (F), and learners who can’t persevere in their own learning (G) are learners who simply haven’t received leverageable benefits.

When we train or teach only to A, B, and C, we aren’t really helping our learners, we aren’t providing a return on the learning investments, and we haven’t done enough to support our learners’ future performance.

 

 

An article by Farhad Manjoo in the New York Times reports on Google's efforts to improve diversity. This is a commendable effort.

I was struck that while Google was utilizing scientists to devise the content of a diversity training program, it didn't seem to be utilizing research on the learning-to-performance process at all. It could be that Manjoo left it out of the article, or it could be that Google is missing the boat. Here's my commentary:

Dear Farhad,

Either this article is missing vital information, or Google, while perhaps using research on unconscious biases, is completely failing to utilize research-based best practices in learning-to-performance design. Ask almost any thought leader in the training-and-development field and they'll tell you that training by itself is extremely unlikely to substantially change behavior without additional supports.

By the way, the anecdotes cited for the success of Google's 90-minute training program are not persuasive. It's easy to find some anecdotes that support one's claims. Scientists call this "confirmation bias."

Believe it or not, there is a burgeoning science around what successful learning-to-performance solutions look like. This article, unfortunately, encourages the false notion that training programs alone will be successful in producing behavior change.

About two years ago, four enterprising learning researchers reviewed the research on training and development and published their findings in a top-tier refereed scientific journal. They did a really nice job!

Unfortunately, a vast majority of professionals in the workplace learning-and-performance field have never read the research review, nor have they even heard about it.

As a guy whose consulting practice is premised on the idea that good learning research can be translated into practical wisdom for instructional designers, trainers, elearning developers, chief learning officers and other learning executives, I have been curious to see to what extent this seminal research review has been utilized by other learning professionals. So, for the last year and a half or so, I’ve been asking the audiences I encounter in my keynotes and other conference presentations whether they have encountered this research review.

Often I use the image below to ask the question:

Click here to see original research article…

 

What would be your guess as to the percentage of folks in our industry who have read this?

10%

30%

50%

70%

90%

Sadly, in almost all of the audiences I’ve encountered, less than 5% of the learning professionals have read this research review.

Indeed, usually more than 95% of workplace learning professionals have “never heard of it” even two years after it was published!!!

THIS IS DEEPLY TROUBLING!

And the discredit this casts on our industry’s most potent institutions should be self-evident. I, too, must take blame for not being more successful in getting these issues heard.

A Review of the Review

People subscribed to my email newsletter (you can sign up here) were privy to this review many months ago.

I hope the following review will be helpful. And remember, when you’re gathering knowledge to help you do your work, make sure you’re gathering it from sources that are mindful of the scientific research. There is a reason that civilization progresses through its scientific efforts: science provides a structured process of insight generation and testing, a self-improving approach to knowledge generation that maximizes innovation while minimizing bias.

——————————-

Quotes from the Research Review:

“It has long been recognized that traditional,
stand-up lectures are an inefficient and
unengaging strategy for imparting
new knowledge and skills.” (p. 86)

 

“Training costs across organizations remain
relatively constant as training shifts from
face-to-face to technology-based methods.” (p. 87)

 

“Even when trainees master new knowledge and
skills in training, a number of contextual factors
determine whether that learning is applied
back on the job…” (p. 90)

 

“Transfer is directly related to opportunities
to practice—opportunities provided either by
the direct supervisor or the organization
as a whole.” (p. 90)

 

“The Kirkpatrick framework has a number of
theoretical and practical shortcomings…” (p. 91)

Introduction

I, Will Thalheimer, am a research translator. I study research from peer-reviewed scientific journals on learning, memory, and instruction and attempt to distill whatever practical wisdom might lurk in the dark cacophony of the research catacomb. It’s hard work—and I love it—and the best part is that it gives me some research-based wisdom to share with my consulting clients. It helps me not sound like a know-nothing. Working to bridge the research-practice gap also enables me to talk with trainers, instructional designers, elearning developers, chief learning officers, and other learning executives about their experiences using research-based concepts.

 

It is from this perspective that I have a sad, and perhaps horrifying, story to tell. In 2012, an excellent research review on training was published in a top-tier journal. Unbelievably, most training practitioners have never heard of this research review. I know because when I speak at conferences and chapters in our field I often ask how many people have read the article. Typically, less than 5% of experienced training practitioners have! Less than 1 in 20 people in our field have read a very important review article.

 

What the hell are we doing wrong? Why does everyone know what a MOOC is, but hardly anyone has looked at a key research article?

 

You can access the article by clicking here. You can also read my review of some of the article’s key points as I lay them out below.

 

Is This Research Any Good?

Not all research is created equal. Some is better than others. Some is crap. Too much “research” in the learning-and-performance industry is crap, so it’s important to first acknowledge the quality of this research review.

The research review by Eduardo Salas, Scott Tannenbaum, Kurt Kraiger, and Kimberly Smith-Jentsch from November 2012 was published in the highly-regarded peer-reviewed scientific journal, Psychological Science in the Public Interest, published by the Association for Psychological Science, one of the most respected social-science professional organizations in the world. The research review not only reviews research, but also utilizes meta-analytic techniques to distill findings from multiple research studies. In short, it’s high-quality research.

 

The rest of this article will highlight key messages from the research review.

 

Training & Development Gets Results!

The research review by Salas, Tannenbaum, Kraiger, and Smith-Jentsch shows that training and development is positively associated with organizational effectiveness. This is especially important in today’s economy because the need for innovation is greater and more accelerated—and innovation comes from the knowledge and creativity of our human resources. As the researchers say, “At the organizational level, companies need employees who are both ready to perform today’s jobs and able to learn and adjust to changing demands. For employees, that involves developing both job-specific and more generalizable skills; for companies, it means taking actions to ensure that employees are motivated to learn.” (p. 77). Companies spend a ton of money every year on training—in the United States the estimate is $135 billion—so it’s first important to know whether this investment produces positive outcomes. The bottom line: Yes, training does produce benefits.

 

To Design Training, It Is Essential to Conduct a Training Needs Analysis

“The first step in any training development effort ought to be a training needs analysis (TNA)—conducting a proper diagnosis of what needs to be trained, for whom, and within what type of organizational system. The outcomes of this step are (a) expected learning outcomes, (b) guidance for training design and delivery, (c) ideas for training evaluation, and (d) information about the organizational factors that will likely facilitate or hinder training effectiveness. It is, however, important to recognize that training is not always the ideal solution to address performance deficiencies, and a well-conducted TNA can also help determine whether a non-training solution is a better alternative.” (p. 80-81) “In sum, TNA is a must. It is the first and probably the most important step toward the design and delivery of any training.” (p. 83) “The research shows that employees are often not able to articulate what training they really need” (p. 81) so just asking them what they need to learn is not usually an effective strategy.

 

Learning Isn’t Always Required—Some Information can be Looked Up When Needed

When doing a training-needs analysis and designing training, it is imperative to separate information that is “need-to-know” from that which is “need-to-access.” Since learners forget easily, it’s better to use training time to teach the need-to-know information and prepare people on how to access the need-to-access information.

 

Do NOT Offer Training if It is NOT Relevant to Trainees

In addition to being an obvious waste of time and resources, training courses that are not specifically relevant to trainees can hurt motivation for training in general. “Organizations are advised, when possible, to not only select employees who are likely to be motivated to learn when training is provided but to foster high motivation to learn by supporting training and offering valuable training programs.” (p. 79) This suggests that every one of the courses on our LMS should have relevance and value.

 

It’s about Training Transfer—Not Just about Learning!

“Transfer refers to the extent to which learning during training is subsequently applied on the job or affects later job performance.” (p. 77) “Transfer is critical because without it, an organization is less likely to receive any tangible benefits from its training investments.” (p. 77-78) To ensure transfer, we have to utilize proven scientific research-based principles in our instructional designs. Relying on our intuitions is not enough—because they may steer us wrong.

 

We must go Beyond Training!

“What happens in training is not the only thing that matters—a focus on what happens before and after training can be as important. Steps should be taken to ensure that trainees perceive support from the organization, are motivated to learn the material, and anticipate the opportunity to use their skills once on (or back on) the job.” (p. 79)

 

Training can be Designed for Individuals or for Teams

“Today, training is not limited to building individual skills—training can be used to improve teams as well.” (p. 79)

 

Management and Leadership Training Works

“Research evidence suggests that management and leadership development efforts work.” (p. 80) “Management and leadership development typically incorporate a variety of both formal and informal learning activities, including traditional training, one-on-one mentoring, coaching, action learning, and feedback.” (p. 80)

 

Forgetting Must Be Minimized, Remembering Must Be Supported

One meta-analysis found that one year after training, “trainees [had] lost over 90% of what they learned.” (p. 84) “It helps to schedule training close in time to when trainees will be able to apply what they have learned so that continued use of the trained skill will help avert skill atrophy. In other words, trainees need the chance to ‘use it before they lose it.’ Similarly, when skill decay is inevitable (e.g., for infrequently utilized skills or knowledge) it can help to schedule refresher training.” (p. 84)

 

Common Mistakes in Training Design Should Be Avoided

“Recent reports suggest that information and demonstrations (i.e., workbooks, lectures, and videos) remain the strategies of choice in industry. And this is a problem [because] we know from the body of research that learning occurs through the practice and feedback components.” (p. 86) “It has long been recognized that traditional, stand-up lectures are an inefficient and unengaging strategy for imparting new knowledge and skills.” (p. 86) Researchers have “noted that trainee errors are typically avoided in training, but because errors often occur on the job, there is value in training people to cope with errors both strategically and on an emotional level.” (p. 86) “Unfortunately, systematic training needs analysis, including task analysis, is often skipped or replaced by rudimentary questions.” (p. 81)

 

Effective Training Requires At Least Four Components

“We suggest incorporating four concepts into training: information, demonstration, practice, and feedback.” (p. 86) Information must be presented clearly and in a way that enables the learners to fully understand the concepts and skills being taught. Skill demonstrations should provide clarity to enable comprehension. Realistic practice should be provided to enable full comprehension and long-term remembering. Providing feedback after decision-making and skill practice should be used to correct misconceptions and improve the potency of later practice efforts.

The bottom line is that more realistic practice is needed. Indeed, the most effective training utilizes relatively more practice and feedback than is typically provided. “The demonstration component is most effective when both positive and negative models are shown rather than positive models only.” (p. 87)

Will’s Note: While these four concepts are extremely valuable, personally I think they are insufficient. See my research review on the Decisive Dozen for my alternative.

 

E-Learning Can Be Effective, But It May Not Lower the Cost of Training

“Both traditional forms of training and technology-based training can work, but both can fail as well.” (p. 87) While the common wisdom argues that e-learning is less costly, recent “survey data suggest that training costs across organizations remain relatively constant as training shifts from face-to-face to technology-based methods.” (p. 87) This doesn’t mean that e-learning can’t offer a cost savings, but it does mean that most organizations so far haven’t realized cost savings. “Well-designed technology-based training can be quite effective, but not all training needs are best addressed with that approach. Thus, we advise that organizations use technology-based training wisely—choose the right media and incorporate effective instructional design principles.” (p. 87)

 

Well-Designed Simulations Provide Potent Learning and Practice

“When properly constructed, simulations and games enable exploration and experimentation in realistic scenarios. Properly constructed simulations also incorporate a number of other research-supported learning aids, in particular practice, scaffolding or context-sensitive support, and feedback. Well-designed simulation enhances learning, improves performance, and helps minimize errors; it is also particularly valuable when training dangerous tasks.” (p. 88)

 

To Get On-the-Job Improvement, Training Requires After-Training Support

“The extent to which trainees perceive the posttraining environment (including the supervisor) as supportive of the skills covered in training had a significant effect on whether those skills are practiced and maintained.” (p. 88) “Even when trainees master new knowledge and skills in training, a number of contextual factors determine whether that learning is applied back on the job: opportunities to perform; social, peer, and supervisory support; and organizational policies.” (p. 90) A trainee’s supervisor is particularly important in this regard. As repeated from above, researchers have “discovered that transfer is directly related to opportunities to practice—opportunities provided either by the direct supervisor or the organization as a whole.” (p. 90)

 

On-the-Job Learning can be Leveraged with Coaching and Support

“Learning on the job is more complex than just following someone or seeing what one does. The experience has to be guided. Researchers reported that team leaders are a key to learning on the job. These leaders can greatly influence performance and retention. In fact, we know that leaders can be trained to be better coaches…Organizations should therefore provide tools, training, and support to help team leaders to coach employees and use work assignments to reinforce training and to enable trainees to continue their development.” (p. 90)

 

Trainees’ Supervisors Can Make or Break Training Success

Researchers have “found that one misdirected comment by a team leader can wipe out the full effects of a training program.” (p. 83) “What organizations ought to do is provide leaders with information they need to (a) guide trainees to the right training, (b) clarify trainees’ expectations, (c) prepare trainees, and (d) reinforce learning…” (p. 83) Supervisors can increase trainees’ motivation to engage in the learning process. (p. 85) “After trainees have completed training, supervisors should be positive about training, remove obstacles, and ensure ample opportunity for trainees to apply what they have learned and receive feedback.” (p. 90) “Transfer is directly related to opportunities to practice—opportunities provided either by the direct supervisor or the organization.” (p. 90)

 

Will’s Note: I’m a big believer in the power of supervisors to enable learning. I’ll be speaking on this in an upcoming ASTD webinar.

 

Basing Our Evaluations on the Kirkpatrick 4 Levels is Insufficient!!!

“Historically, organizations and training researchers have relied on Kirkpatrick’s [4-Level] hierarchy as a framework for evaluating training programs…[Unfortunately,] The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders… Although the Kirkpatrick hierarchy has clear limitations, using it for training evaluation does allow organizations to compare their efforts to those of others in the same industry.” (p. 91) The authors’ recommendations for improving training evaluation fit into two categories. First, instead of only using the Kirkpatrick framework, “organizations should begin training evaluation efforts by clearly specifying one or more purposes for the evaluation and should then link all subsequent decisions of what and how to measure to the stated purposes.” (p. 91) Second, the authors recommend that training evaluations should “use precise affective, cognitive, and/or behavioral measures that reflect the intended learning outcomes.” (p. 91)

 

This is a devastating critique that should give us all pause. Of course it is not the first such critique, nor will it be the last, I’m afraid. The worst part about the Kirkpatrick model is that it controls the way we think about learning measurement. It doesn’t allow us to see alternatives.

 

Leadership is Needed for Successful Training and Development

“Human resources executives, learning officers, and business leaders can influence the effectiveness of training in their organizations and the extent to which their company’s investments in training produce desired results. Collectively, the decisions these leaders make and the signals they send about training can either facilitate or hinder training effectiveness…Training is best viewed as an investment in an organization’s human capital, rather than as a cost of doing business. Underinvesting can leave an organization at a competitive disadvantage. But the adjectives “informed” and “active” are the key to good investing. When we use the word “informed,” we mean being knowledgeable enough about training research and science to make educated decisions. Without such knowledge, it is easy to fall prey to what looks and sounds cool—the latest training fad or technology.”  (p. 92)

Thank you!

I’d like to thank all my clients over the years for hiring me as a consultant, learning auditor, workshop provider, and speaker, and thus enabling me to continue in the critical work of translating research into practical recommendations.

If you think I might be able to help your organization, please feel free to contact me directly by emailing me at “info at worklearning dot com” or calling me at 617-718-0767.

 

Robert Slavin, Director of the Center for Research and Reform in Education at Johns Hopkins University, recently wrote the following:

"Sooner or later, schools throughout the U.S. and other countries will be making informed choices among proven programs and practices, implementing them with care and fidelity, and thereby improving outcomes for their children. Because of this, government, foundations, and for-profit organizations will be creating, evaluating, and disseminating proven programs to meet high standards of evidence required by schools and their funders. The consequences of this shift to evidence-based reform will be profound immediately and even more profound over time, as larger numbers of schools and districts come to embrace evidence-based reform and as more proven programs are created and disseminated."

To summarize, Slavin says that (1) schools and other education providers will be using research-based criteria to make decisions, (2) this change will have profound effects, significantly improving learning results, and (3) many stakeholders and institutions within the education field will be making radical changes, including holding themselves and others to account for these improvements.

In Workplace Learning and Performance

But what about us? What about we workplace learning-and-performance professionals? What about our institutions? Will we be left behind? Are we moving toward evidence-based practices ourselves?

My career over the last 16 years has been devoted to helping the field bridge the gap between research and practice, so you might imagine that I have a perspective on this. Here it is, in brief:

Some of our field is moving towards research-based practices. But we have lots of roadblocks and gatekeepers that are stalling the journey for the large majority of the industry. In working on the Serious eLearning Manifesto, I've been pleasantly surprised by the large number of people who are already using research-based practices; but as a whole, we are still stalled.

Of course, I'm still a believer. I think we'll get there eventually. In the meantime, I want to work with those who are marching ahead, using research wisely, creating better learning for their learners. There are research translators whom we can follow, folks like Ruth Clark, Rich Mayer, K. Anders Ericsson, Jeroen van Merriënboer, Richard E. Clark, Julie Dirksen, Clark Quinn, Gary Klein, and dozens more. There are practitioners whom we can emulate, because they are already aligning themselves with the research: Marty Rosenheck, Eric Blumthal, Michael Allen, Cal Wick, Roy Pollock, Andy Jefferson, JC Kinnamon, and thousands of others.

Here's the key question for you who are reading this: "How fast do you want to begin using research-based recommendations?"

And, do you really want to wait for our sister profession to perfect this before taking action?

More and more training departments are considering the use of the Net Promoter Score as a question, or even the central question, on their smile sheets.

This is one of the stupidest ideas yet for smile sheets, but I understand the impetus: traditional smile sheets provide poor information. In this blog post I am going to try to put a finely-honed dagger through the heart of this idea.

Note that I have written a replacement question for the Net Promoter Score (for getting learner responses).

What is the Net Promoter Score?

Here’s what the folks who wrote the book on the Net Promoter Score say it is:

The Net Promoter Score, or NPS®, is based on the fundamental perspective that every company’s customers can be divided into three categories: Promoters, Passives, and Detractors.

By asking one simple question — How likely is it that you would recommend [your company] to a friend or colleague? — you can track these groups and get a clear measure of your company’s performance through your customers’ eyes. Customers respond on a 0-to-10 point rating scale and are categorized as follows:

  • Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others, fueling growth.
  • Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
  • Detractors (score 0-6) are unhappy customers who can damage your brand and impede growth through negative word-of-mouth.

To calculate your company’s NPS, take the percentage of customers who are Promoters and subtract the percentage who are Detractors.
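The calculation described above is simple enough to sketch in a few lines of Python. This is just an illustration of the arithmetic (the function name and the sample ratings are my own, not from any particular survey tool):

```python
def net_promoter_score(ratings):
    """Compute NPS from a list of 0-to-10 ratings.

    Promoters score 9-10, Passives score 7-8, Detractors score 0-6.
    NPS = (% Promoters) - (% Detractors), so it ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)   # scores of 9 or 10
    detractors = sum(1 for r in ratings if r <= 6)  # scores of 0 through 6
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical survey: 5 promoters, 3 passives, 2 detractors out of 10
# respondents, so 50% promoters minus 20% detractors gives an NPS of 30.
print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 7, 3, 5]))  # → 30.0
```

Note that the Passives drop out of the calculation entirely, which is one reason a single NPS number hides a lot of the underlying response distribution.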

So, the NPS is about Customer Perceptions, Right?

Yes, its intended purpose is to measure customer loyalty. It was designed as a marketing tool. It was specifically NOT designed to measure training outcomes. Therefore, we might want to be skeptical before using it.

It kind of makes sense for marketing, right? Marketing is all about customer perceptions of a given product, brand, or company. Also, there is evidence (yes, actual evidence) that customers are influenced by others in their purchasing decisions. So again, asking whether someone might recommend a company or product to another person seems like a reasonable thing to ask.

Of course, just because something seems reasonable doesn’t mean it is. Even for its intended purpose, the Net Promoter Score has a substantial number of critics. See Wikipedia for details.

But Why Not for Training?

To measure training with a Net-Promoter approach, we would ask a question like, “How likely is it that you would recommend this training course to a friend or colleague?” 

Some reasonable arguments for why the NPS is stupid as a training metric:

  1. First we should ask, what is the causal pathway that would explain how the Net Promoter Score is a good measure of training effectiveness? We shouldn’t willy-nilly take a construct from another field and apply it to our field without having some “theory-of-causality” that supports its likely effectiveness. Specifically, we should ask whether it is reasonable to assume that a learner’s recommendation about a training program tells us SOMETHING important about the effectiveness of that training program. And, for those using the NPS as the central measure of training effectiveness (which sends shivers down my spine), the question then becomes: is it reasonable to assume that a learner’s recommendation about a training program tells us EVERYTHING important about the effectiveness of that training program? Those who would use the Net Promoter Score for training must have one of the following beliefs:
    • Learners know whether or not training has been effective.
    • Learners know whether their friends/colleagues are likely to have the same beliefs about the effectiveness of training as they themselves have.

    The second belief is not worth much, but it is probably what really happens. It is the first belief that is critical, so we should examine that belief in more depth. Are learners likely to be good judges of training effectiveness?

  2. Scientific evidence demonstrates that learners are not very good at judging their own learning. They have been shown to have many difficulties adequately judging how much they know and how much they’ll be able to remember. For example, learners fail to utilize retrieval practice to support long-term remembering, even though we know this is one of the most powerful learning methods (e.g., Karpicke, Butler, & Roediger, 2009). Learners don’t always overcome their incorrect prior knowledge when reading (Kendeou & van den Broek, 2005). Learners often fail to utilize examples in ways that would foster deeper learning (Renkl, 1997). These are just a few examples of many.
  3. Similarly, two meta-analyses on the potency of traditional smile sheets, which tend to measure the same kind of beliefs as NPS measures, have shown almost no correlation between learner responses and actual learning results (Alliger, Tannenbaum, Bennett, Traver, & Shotland, 1997; Sitzmann, Brown, Casper, Ely, & Zimmerman, 2008).
  4. Similarly, when we assess learning in the training context at the end of learning, several cognitive biases creep in to make learners perform much better than they would perform if they were in a more realistic situation back on the job at a later time (Thalheimer, 2007).
  5. Even if we did somehow prove that NPS was a good measure for training, is there evidence that it is the best measure? Obviously not!
  6. Should it be used as the most important measure? No! As stated in the Science of Training review article from last year: “The researchers [in talking about learning measurement] noted that researchers, authors, and practitioners are increasingly cognizant of the need to adopt a multidimensional perspective on learning [when designing learning measurement approaches]” (Salas, Tannenbaum, Kraiger, & Smith-Jentsch, 2012).
  7. Finally, we might ask: are there better types of questions to ask on our smile sheets? The answer to that is an emphatic YES! Performance-Focused Smile Sheets provide a whole new approach to smile sheet questions. You can learn more by attending my workshop on how to create and deploy these more powerful questions.
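For readers unfamiliar with the mechanics, the Net Promoter Score is computed from 0-to-10 “how likely are you to recommend this?” ratings: respondents scoring 9 or 10 are “promoters,” those scoring 0 through 6 are “detractors,” and the score is the percentage of promoters minus the percentage of detractors. Here’s a minimal sketch of that arithmetic (the ratings are invented for illustration):

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'would you recommend?' ratings.

    Promoters score 9-10, detractors score 0-6; NPS is the percentage
    of promoters minus the percentage of detractors (-100 to +100).
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Hypothetical smile-sheet ratings from one course offering:
ratings = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]
print(net_promoter_score(ratings))  # 5 promoters, 2 detractors -> 30.0
```

Notice what the computation does and does not touch: it aggregates learners’ recommendations and nothing else. No term in it measures remembering, on-the-job application, or any other learning result, which is precisely the objection above.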

The Bottom Line

The Net Promoter Score was designed to measure customer loyalty and is not relevant for training. Indeed, it is likely to give us dangerously misguided information.

When we design courses solely so that learners like the courses, we create learning that doesn’t stick, that fails to create long-term remembering, that fails to push for on-the-job application, etc.
Seriously, this is one of the stupidest ideas to come along for learning measurement in a long time. Buyers beware!! Please!

Too many organizations insist on using slide templates (slide decorations) in their training slides and presentations. This is a bad idea, and I've created the following narrated slide deck to make a research-based case against these bedeviling adornments:

Great article in the Economist on the Information Explosion.

This has huge implications for human learning and performance.

Here's what Bob Cialdini wrote in his masterful book, "Influence: Science and Practice."

More and more frequently, we will find ourselves in the position of the lower animals—with a mental apparatus that is unequipped to deal thoroughly with the intricacy and richness of the outside environment…The consequence of our new deficiency is the same as that of the animals' long-standing one: when making a decision, we will less frequently engage in a fully considered analysis of the total situation. In response to this "paralysis of analysis," we will revert increasingly to a focus on a single, usually reliable feature of the situation…The problem comes when something causes the normally trustworthy cues to counsel us poorly, to lead us to erroneous actions and wrongheaded decisions. (p. 232)

As learning professionals, our clients—our fellow workers—will be more and more confused and duped by information overload. To be successful, we'll have to figure out ways to help them fight their way through the accelerating storm of information.

Again, read the Economist article.

Bill Ellet, editor of the unbiased Training Media Review, writes about the awards in our industry and how hopelessly biased and corrupt they are.

Click to read Bill's excellent article.

If you work in the workplace learning-and-performance field, one of your jobs is to ensure that employees are maximizing their cognitive performance, their decision making, and their overall work output. If people’s cognitive abilities decreased with age, that would be a problem. More importantly, if we can improve our employees’ cognitive abilities, we have a responsibility to do just that. The benefits will accrue to our organizations and to our employees too (and probably then to their families and society at large).

This raises the following questions:

  • “Is there research in refereed scientific journals that provides evidence for cognitive decline as people age?”
  • “Is there research in refereed scientific journals that provides evidence that we can help improve people’s cognitive abilities as they age?”

I’ve created a short 4-item quiz for you to test your knowledge in this area. Take the quiz. When you are done, it will return you directly to this blog post (is that cool or what?).

Take the Quiz. Test your Knowledge of Aging’s Effect on Cognitive Ability.

 

Click here to take the quiz

 

The quiz is based on an article by Christopher Hertzog, Arthur F. Kramer, Robert S. Wilson, and Ulman Lindenberger.

HEY, what are you doing? Go take the quiz first. There’s research to show that the sort of questions I ask in the quiz will actually help you remember this topic. Doh!

The article by Hertzog, Kramer, Wilson, and Lindenberger appears in Volume 9, Number 1 of the refereed scientific journal Psychological Science in the Public Interest, just published in 2009. The title of the article is: Enrichment Effects on Adult Cognitive Development: Can the Functional Capacity of Older Adults Be Preserved and Enhanced?

HEY, really. Go take the quiz first!!

Both of my parents (75+) are doing everything right according to the article.

Findings:

Cognitive ability does tend to decline with age. See graph from the article:

 

But notice that though AVERAGE cognitive ability declines, there are wide ranges. And since I’m 51 years old as I write this, I’d like you to note that maximum cognitive performance appears to peak near 50 years of age.

Can Cognitive Ability be Improved?

Yes, these researchers conclude that it can, although they admit that more research is needed.

What Can Improve Cognitive Ability?

Well, they didn’t look at everything that might impact cognitive ability, so we don’t have a clear picture yet.

They highlighted the strongest findings in their conclusion:

“The literature is far from definitive, which is no surprise given the inherent difficulties in empirically testing the enrichment hypothesis. However, we believe there is a strong and sound empirical basis for arguing that a variety of factors, including engaging in intellectually and mentally stimulating activities, both (a) slow rates of cognitive aging and (b) enhance levels of cognitive functioning in later life.” p. 41
“What is most impressive to us is the evidence demonstrating benefits of aerobic physical exercise on cognitive functioning in older adults. Such a conclusion would have been controversial in the not-too-distant past, but the evidence that has accumulated since 2000 from both human and animal studies argues overwhelmingly that aerobic exercise enhances cognitive function in older adults. The hypothesis of exercise-induced cognitive-enrichment effects is supported by longitudinal studies of predictors of cognitive decline and incidence of dementia, but also by short-term intervention studies in human and animal populations. The exercise-intervention work suggest relatively general cognitive benefits of aerobic exercise but indicates that cognitive tasks that require executive functioning, working memory, and attentional control are most likely to benefit.” p. 41

They also noted some other more-tentative findings:

“…these data support the idea that a higher level of social engagement is related to a reduced risk of cognitive decline and dementia in old age. The basis of the association is not well understood, however.” p. 33

“…these data suggest that chronic psychological distress may contribute to late-life loss of cognition by causing neurodeteriorative changes in portions of the limbic system that help regulate affect and cognition, changes that do not leave a pathologic footprint (e.g., dendritic atrophy) or whose pathology is not recognizable with currently available methods. These changes, when extreme, might actually be sufficient to cause dementia, but it is more likely that they contribute to cognitive impairment and thereby increase the likelihood that other common age-related neuropathologies are clinically expressed as dementia” p. 36

“…in observational studies that examine more than one lifestyle factor, cognitive activities appear to be the strongest predictor of cognitive change. However, this could be the result of several factors, including the following: (a) Rarely are physical activities characterized in terms of intensity, frequency, and duration; (b) the period across which activities are assessed has been different for cognitive and physical activities; (c) with one exception, activities have been treated as unidimensional in nature. Clearly, these issues require additional consideration in future studies.” p. 39

They also offer a word of caution about software programs that are marketed as ways to improve cognitive ability:

“The majority of software programs marketed as enhancing cognition or brain function lack supporting empirical evidence for training and transfer effects. Clearly, there is a need to introduce standards of good practice in this area. Software developers should be urged to report the reliability and validity of the trained tasks, the magnitude of training effects, the scope and maintenance of transfer to untrained tasks, and the population to which effects are likely to generalize. Arriving at this information requires experiments with random assignment to treatment and control groups, and an adequate sample description. Just as the pharmaceutical industry is required to show benefit and provide evidence regarding potential side effects, companies marketing cognitive-enhancement products should be required to provide empirical evidence of product effectiveness.” p. 48

So, to answer the quiz questions:

1. What happens to most people’s cognitive abilities as they age from 50 years onward? Answer: Declines with age.

2. Is there valid research evidence from scientific refereed journals that suggests that people can improve their cognitive outcomes by engaging in certain activities? Answer: Solid evidence, but still some controversy.

3. Which of the following have been shown to improve cognitive ability as people age? Answer: The article didn’t cover all the territory, but the strongest evidence is for (1) mentally and intellectually challenging activities and (2) aerobic physical activity.

4. Imagine that you work for a company that consists of a substantial number of workers over the age of 50. If you had a set budget to spend to improve their cognitive functioning, which of the following investments would garner the greatest results? Answer: Well, the research review does NOT compare the differences between (1) mentally challenging activities, (2) aerobic exercise, and (3) social engagement. However, see their overall conclusion below, which suggests that intellectual engagement and physical activity are key.

Their overall conclusion:

“We conclude that, on balance, the available evidence favors the hypothesis that maintaining an intellectually engaged and physically active lifestyle promotes successful cognitive aging.” p.1

More research on benefits of exercise:

“Unlike the literature on an active lifestyle, there is already an impressive array of work with humans and animal populations showing that exercise interventions have substantial benefits for cognitive function, particularly for aspects of fluid intelligence and executive function. Recent neuroscience research on this topic indicates that exercise has substantial effects on brain morphology and function, representing a plausible brain substrate for the observed effects of aerobic exercise and other activities on cognition.” p. 1

They cite the potential for training interventions:

“…cognitive-training studies have demonstrated that older adults can improve cognitive functioning when provided with intensive training in strategies that promote thinking and remembering. The early training literature suggested little transfer of function from specifically trained skills to new cognitive tasks; learning was highly specific to the cognitive processes targeted by training. Recently, however, a new generation of studies suggests that providing structured experience in situations demanding executive coordination of skills—such as complex video games, task-switching paradigms, and divided attention tasks—train strategic control over cognition that does show transfer to different task environments. These studies suggest that there is considerable reserve potential in older adults’ cognition that can be enhanced through training.” p. 1

But they offer a warning against one-shot interventions:

“There is no magic pill or no one-shot vaccine that inoculates the individual against the possibility of cognitive decline in old age. As noted earlier, participation in intervention programs is unlikely to affect long-term outcomes unless the relevant behaviors are continued over time.” p. 47

What do we have to do?

Well, if we take our job seriously, we ought to heed the research. We can improve our fellow employees’ cognitive abilities as they age, so we ought to figure out how we might support that.

I certainly haven’t got this nailed, but if your company is interested, I think it would be fascinating to see what we might do.