About two years ago, four enterprising learning researchers reviewed the research on training and development and published their findings in a top-tier refereed scientific journal. They did a really nice job!

Unfortunately, the vast majority of professionals in the workplace learning-and-performance field have never read the research review, nor have they even heard of it.

As a guy whose consulting practice is premised on the idea that good learning research can be translated into practical wisdom for instructional designers, trainers, elearning developers, chief learning officers and other learning executives, I have been curious to see to what extent this seminal research review has been utilized by other learning professionals. So, for the last year and a half or so, I’ve been asking the audiences I encounter in my keynotes and other conference presentations whether they have encountered this research review.

Often I use the image below to ask the question:

Click here to see original research article…

 

What would be your guess as to the percentage of folks in our industry who have read this?

10%

30%

50%

70%

90%

Sadly, in almost all of the audiences I’ve encountered, less than 5% of the learning professionals have read this research review.

Indeed, usually more than 95% of workplace learning professionals have “never heard of it” even two years after it was published!!!

THIS IS DEEPLY TROUBLING!

And the discredit this casts on our industry's most influential institutions should be self-evident. And I, too, must take blame for not being more successful in getting these issues heard.

A Review of the Review

Subscribers to my email newsletter (you can sign up here) were privy to this review many months ago.

I hope the following review will be helpful, and remember, when you’re gathering knowledge to help you do your work, make sure you’re gathering it from sources who are mindful of the scientific research. There is a reason that civilization progresses through its scientific efforts–science provides a structured process of insight generation and testing, creating a self-improving knowledge-generation process that maximizes innovation while minimizing bias.

——————————-

Quotes from the Research Review:

“It has long been recognized that traditional,
stand-up lectures are an inefficient and
unengaging strategy for imparting
new knowledge and skills.” (p. 86)

 

“Training costs across organizations remain
relatively constant as training shifts from
face-to-face to technology-based methods.” (p. 87)

 

“Even when trainees master new knowledge and
skills in training, a number of contextual factors
determine whether that learning is applied
back on the job…” (p. 90)

 

“Transfer is directly related to opportunities
to practice—opportunities provided either by
the direct supervisor or the organization
as a whole.” (p. 90)

 

“The Kirkpatrick framework has a number of
theoretical and practical shortcomings…” (p. 91)

Introduction

I, Will Thalheimer, am a research translator. I study research from peer-reviewed scientific journals on learning, memory, and instruction and attempt to distill whatever practical wisdom might lurk in the dark cacophony of the research catacomb. It’s hard work—and I love it—and the best part is that it gives me some research-based wisdom to share with my consulting clients. It helps me not sound like a know-nothing. Working to bridge the research-practice gap also enables me to talk with trainers, instructional designers, elearning developers, chief learning officers, and other learning executives about their experiences using research-based concepts.

 

It is from this perspective that I have a sad, and perhaps horrifying, story to tell. In 2012, an excellent research review on training was published in a top-tier journal. Unbelievably, most training practitioners have never heard of this research review. I know because when I speak at conferences and chapters in our field, I often ask how many people have read the article. Typically, less than 5% of experienced training practitioners have! Fewer than 1 in 20 people in our field have read a very important review article.

 

What the hell are we doing wrong? Why does everyone know what a MOOC is, but hardly anyone has looked at a key research article?

 

You can access the article by clicking here. You can also read my review of some of the article’s key points as I lay them out below.

 

Is This Research Any Good?

Not all research is created equal. Some is better than others. Some is crap. Too much “research” in the learning-and-performance industry is crap, so it’s important first to acknowledge the quality of this research review.

The research review by Eduardo Salas, Scott Tannenbaum, Kurt Kraiger, and Kimberly Smith-Jentsch from November 2012 was published in the highly-regarded peer-reviewed scientific journal, Psychological Science in the Public Interest, published by the Association for Psychological Science, one of the most respected social-science professional organizations in the world. The research review not only reviews research, but also utilizes meta-analytic techniques to distill findings from multiple research studies. In short, it’s high-quality research.

 

The rest of this article will highlight key messages from the research review.

 

Training & Development Gets Results!

The research review by Salas, Tannenbaum, Kraiger, and Smith-Jentsch shows that training and development is positively associated with organizational effectiveness. This is especially important in today’s economy because the need for innovation is greater and more accelerated—and innovation comes from the knowledge and creativity of our human resources. As the researchers say, “At the organizational level, companies need employees who are both ready to perform today’s jobs and able to learn and adjust to changing demands. For employees, that involves developing both job-specific and more generalizable skills; for companies, it means taking actions to ensure that employees are motivated to learn.” (p. 77). Companies spend a ton of money every year on training—in the United States the estimate is $135 billion—so it’s first important to know whether this investment produces positive outcomes. The bottom line: Yes, training does produce benefits.

 

To Design Training, It Is Essential to Conduct a Training Needs Analysis

“The first step in any training development effort ought to be a training needs analysis (TNA)—conducting a proper diagnosis of what needs to be trained, for whom, and within what type of organizational system. The outcomes of this step are (a) expected learning outcomes, (b) guidance for training design and delivery, (c) ideas for training evaluation, and (d) information about the organizational factors that will likely facilitate or hinder training effectiveness. It is, however, important to recognize that training is not always the ideal solution to address performance deficiencies, and a well-conducted TNA can also help determine whether a non-training solution is a better alternative.” (p. 80-81) “In sum, TNA is a must. It is the first and probably the most important step toward the design and delivery of any training.” (p. 83) “The research shows that employees are often not able to articulate what training they really need” (p. 81) so just asking them what they need to learn is not usually an effective strategy.

 

Learning Isn’t Always Required—Some Information can be Looked Up When Needed

When doing a training-needs analysis and designing training, it is imperative to separate information that is “need-to-know” from information that is “need-to-access.” Since learners forget easily, it’s better to use training time to teach the need-to-know information and to prepare people to access the need-to-access information.

 

Do NOT Offer Training if It is NOT Relevant to Trainees

In addition to being an obvious waste of time and resources, training courses that are not specifically relevant to trainees can hurt motivation for training in general. “Organizations are advised, when possible, to not only select employees who are likely to be motivated to learn when training is provided but to foster high motivation to learn by supporting training and offering valuable training programs.” (p. 79) This suggests that every one of the courses on our LMS should have relevance and value.

 

It’s about Training Transfer—Not Just about Learning!

“Transfer refers to the extent to which learning during training is subsequently applied on the job or affects later job performance.” (p. 77) “Transfer is critical because without it, an organization is less likely to receive any tangible benefits from its training investments.” (p. 77-78) To ensure transfer, we have to utilize proven scientific research-based principles in our instructional designs. Relying on our intuitions is not enough—because they may steer us wrong.

 

We must go Beyond Training!

“What happens in training is not the only thing that matters—a focus on what happens before and after training can be as important. Steps should be taken to ensure that trainees perceive support from the organization, are motivated to learn the material, and anticipate the opportunity to use their skills once on (or back on) the job.” (p. 79)

 

Training can be Designed for Individuals or for Teams

“Today, training is not limited to building individual skills—training can be used to improve teams as well.” (p. 79)

 

Management and Leadership Training Works

“Research evidence suggests that management and leadership development efforts work.” (p. 80) “Management and leadership development typically incorporate a variety of both formal and informal learning activities, including traditional training, one-on-one mentoring, coaching, action learning, and feedback.” (p. 80)

 

Forgetting Must Be Minimized, Remembering Must Be Supported

One meta-analysis found that one year after training, “trainees [had] lost over 90% of what they learned.” (p. 84) “It helps to schedule training close in time to when trainees will be able to apply what they have learned so that continued use of the trained skill will help avert skill atrophy. In other words, trainees need the chance to ‘use it before they lose it.’ Similarly, when skill decay is inevitable (e.g., for infrequently utilized skills or knowledge) it can help to schedule refresher training.” (p. 84)

 

Common Mistakes in Training Design Should Be Avoided

“Recent reports suggest that information and demonstrations (i.e., workbooks, lectures, and videos) remain the strategies of choice in industry. And this is a problem [because] we know from the body of research that learning occurs through the practice and feedback components.” (p. 86) “It has long been recognized that traditional, stand-up lectures are an inefficient and unengaging strategy for imparting new knowledge and skills.” (p. 86) Researchers have “noted that trainee errors are typically avoided in training, but because errors often occur on the job, there is value in training people to cope with errors both strategically and on an emotional level.” (p. 86) “Unfortunately, systematic training needs analysis, including task analysis, is often skipped or replaced by rudimentary questions.” (p. 81)

 

Effective Training Requires At Least Four Components

“We suggest incorporating four concepts into training: information, demonstration, practice, and feedback.” (p. 86) Information must be presented clearly and in a way that enables learners to fully understand the concepts and skills being taught. Skill demonstrations should provide clarity to enable comprehension. Realistic practice should be provided to enable full comprehension and long-term remembering. Feedback should be provided after decision-making and skill practice to correct misconceptions and improve the potency of later practice efforts.

The bottom line is that more realistic practice is needed. Indeed, the most effective training utilizes relatively more practice and feedback than is typically provided. “The demonstration component is most effective when both positive and negative models are shown rather than positive models only.” (p. 87)

Will’s Note: While these four concepts are extremely valuable, personally I think they are insufficient. See my research review on the Decisive Dozen for my alternative.

 

E-Learning Can Be Effective, But It May Not Lower the Cost of Training

“Both traditional forms of training and technology-based training can work, but both can fail as well.” (p. 87) While the common wisdom argues that e-learning is less costly, recent “survey data suggest that training costs across organizations remain relatively constant as training shifts from face-to-face to technology-based methods.” (p. 87) This doesn’t mean that e-learning can’t offer cost savings, but it does mean that most organizations so far haven’t realized such savings. “Well-designed technology-based training can be quite effective, but not all training needs are best addressed with that approach. Thus, we advise that organizations use technology-based training wisely—choose the right media and incorporate effective instructional design principles.” (p. 87)

 

Well-Designed Simulations Provide Potent Learning and Practice

“When properly constructed, simulations and games enable exploration and experimentation in realistic scenarios. Properly constructed simulations also incorporate a number of other research-supported learning aids, in particular practice, scaffolding or context-sensitive support, and feedback. Well-designed simulation enhances learning, improves performance, and helps minimize errors; it is also particularly valuable when training dangerous tasks.” (p. 88)

 

To Get On-the-Job Improvement, Training Requires After-Training Support

“The extent to which trainees perceive the posttraining environment (including the supervisor) as supportive of the skills covered in training had a significant effect on whether those skills are practiced and maintained.” (p. 88) “Even when trainees master new knowledge and skills in training, a number of contextual factors determine whether that learning is applied back on the job: opportunities to perform; social, peer, and supervisory support; and organizational policies.” (p. 90) A trainee’s supervisor is particularly important in this regard. As repeated from above, researchers have “discovered that transfer is directly related to opportunities to practice—opportunities provided either by the direct supervisor or the organization as a whole.” (p. 90)

 

On-the-Job Learning can be Leveraged with Coaching and Support

“Learning on the job is more complex than just following someone or seeing what one does. The experience has to be guided. Researchers reported that team leaders are a key to learning on the job. These leaders can greatly influence performance and retention. In fact, we know that leaders can be trained to be better coaches…Organizations should therefore provide tools, training, and support to help team leaders to coach employees and use work assignments to reinforce training and to enable trainees to continue their development.” (p. 90)

 

Trainees’ Supervisors Can Make or Break Training Success

Researchers have “found that one misdirected comment by a team leader can wipe out the full effects of a training program.” (p. 83) “What organizations ought to do is provide leaders with information they need to (a) guide trainees to the right training, (b) clarify trainees’ expectations, (c) prepare trainees, and (d) reinforce learning…” (p. 83) Supervisors can increase trainees’ motivation to engage in the learning process. (p. 85) “After trainees have completed training, supervisors should be positive about training, remove obstacles, and ensure ample opportunity for trainees to apply what they have learned and receive feedback.” (p. 90) “Transfer is directly related to opportunities to practice—opportunities provided either by the direct supervisor or the organization.” (p. 90)

 

Will’s Note: I’m a big believer in the power of supervisors to enable learning. I’ll be speaking on this in an upcoming ASTD webinar.

 

Basing Our Evaluations on the Kirkpatrick 4 Levels is Insufficient!!!

“Historically, organizations and training researchers have relied on Kirkpatrick’s [4-Level] hierarchy as a framework for evaluating training programs…[Unfortunately,] The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders… Although the Kirkpatrick hierarchy has clear limitations, using it for training evaluation does allow organizations to compare their efforts to those of others in the same industry.” The authors’ recommendations for improving training evaluation fit into two categories. First, instead of only using the Kirkpatrick framework, “organizations should begin training evaluation efforts by clearly specifying one or more purposes for the evaluation and should then link all subsequent decisions of what and how to measure to the stated purposes.” (p. 91) Second, the authors recommend that training evaluations “use precise affective, cognitive, and/or behavioral measures that reflect the intended learning outcomes.” (p. 91)

 

This is a devastating critique that should give us all pause. Of course, it is not the first such critique, nor, I’m afraid, will it be the last. The worst part about the Kirkpatrick model is that it controls the way we think about learning measurement. It doesn’t allow us to see alternatives.

 

Leadership is Needed for Successful Training and Development

“Human resources executives, learning officers, and business leaders can influence the effectiveness of training in their organizations and the extent to which their company’s investments in training produce desired results. Collectively, the decisions these leaders make and the signals they send about training can either facilitate or hinder training effectiveness…Training is best viewed as an investment in an organization’s human capital, rather than as a cost of doing business. Underinvesting can leave an organization at a competitive disadvantage. But the adjectives “informed” and “active” are the key to good investing. When we use the word “informed,” we mean being knowledgeable enough about training research and science to make educated decisions. Without such knowledge, it is easy to fall prey to what looks and sounds cool—the latest training fad or technology.”  (p. 92)

Thank you!

I’d like to thank all my clients over the years for hiring me as a consultant, learning auditor, workshop provider, and speaker–and thus enabling me to continue in the critical work of translating research into practical recommendations.

If you think I might be able to help your organization, please feel free to contact me directly by emailing me at “info at worklearning dot com” or calling me at 617-718-0767.

 

As of today, the Learning Styles Challenge payout is rising from $1,000 to $5,000! That is, if any person or group creates a real-world learning intervention that takes learning styles into account–and proves that such an intervention produces better learning results than a non-learning-styles intervention, they’ll be awarded $5,000!

Special thanks to the new set of underwriters, each willing to put $1,000 in jeopardy to help get the word out to the field:

Learning Styles Challenge Rules

We’re still using the original rules, as established back in 2006. Read them here.


What is Implied in This Debunking

The basic finding in the research is that learning interventions that take into account learning styles do no better than learning interventions that do not take learning styles into account. This does not mean that people do not have differences in the way they learn. It just means that designing with learning styles in mind is unlikely to produce benefits–and thus the extra costs are not likely to be a good investment.

Interestingly, there are learning differences that do matter! For example, if we really want to get benefits from individual differences, we should consider the knowledge and skill level of our learners.


What Can You Do to Spread the Word

Thanks to multiple efforts by many people over the years to lessen the irrational exuberance of the learning-styles proliferators, fewer and fewer folks in the learning field are falling prey to the learning-styles myth. But the work is not done yet. This issue still needs your help!

Here are some ideas for how you can help:

  • Spread the word through social media! Blogs, Twitter, LinkedIn, Facebook!
  • Share this information with your work colleagues, fellow students, etc.
  • Gently challenge those who proselytize learning styles.
  • Share the research cited below.


History of the Learning Styles Challenge

It has been exactly eight years since I wrote in a blog post:

I will give $1000 (US dollars) to the first person or group who can prove that taking learning styles into account in designing instruction can produce meaningful learning benefits.

Eight years is a long time. Since that time, over one billion babies have been born, 72 billion metric tons of carbon pollution have been produced, and the U.S. Congress has completely stopped functioning.

However, not once in these past eight years has any person or group collected on the Learning Styles challenge. Not once!


Research on Learning Styles

Yet since 2006, more and more people have discovered that learning styles are unlikely to be an effective basis for designing instruction.

First, there was the stunning research review in the top-tier scientific journal, Psychological Science in the Public Interest:

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.

The authors wrote the following:

We conclude therefore, that at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number. However, given the lack of methodologically sound studies of learning styles, it would be an error to conclude that all possible versions of learning styles have been tested and found wanting; many have simply not been tested at all. (p. 105)

To read more about what they wrote, click here.

Two years later, two of the authors reiterated their findings in a separate–and nicely written–article for the Association for the Study of Medical Education. You can access that article at: http://uweb.cas.usf.edu/~drohrer/pdfs/Rohrer&Pashler2012MedEd.pdf. Here’s the research citation:

Rohrer, D., & Pashler, H. (2012). Learning styles: Where’s the evidence? Medical Education, 46(7), 634-635.

A researcher who had once advocated for learning styles did an about-face after conducting additional research:

Cook, D. A. (2012). Revisiting cognitive and learning styles in computer-assisted instruction: Not so useful after all. Academic Medicine, 87(6), 778-784.

Of course, not everyone is willing to give up on learning styles. For example, Furnham (2012) wrote:

The application of, and research into, learning styles and approaches is clearly alive and well. (p. 77)

Furnham, A. (2012). Learning styles and approaches to learning. In K. R. Harris, S. Graham, T. Urdan, S. Graham, J. M. Royer, & M. Zeidner (Eds.), APA educational psychology handbook, Vol. 2: Individual differences and cultural and contextual factors (pp. 59-81). doi:10.1037/13274-003

A cursory look through the PsycINFO database today shows that articles on learning styles are still being published in scientific journals.


Learning Styles in the Workplace Learning Field

Guy Wallace, performance analyst and instructional architect, has been doing a great job keeping the workplace learning field up on the learning-styles debate. Check out his article in eLearn Magazine and his blog post update.

You’ll note from Guy’s blog post that many prominent thought leaders in the field have been suspicious of learning styles for many years.

 

 

 

In continuing my dual career–partly as a charismatic and wildly-effective learning consultant and keynote speaker, partly as a grizzled, hermitted researcher and agonizingly-slow book writer (still working on the same book for the last 16 years)–I have started talking publicly about my list of 12 factors that, if implemented by training developers, would propel their training to divine glory. I call this list “The Decisive Dozen.”

Click here to read a short introduction to the Decisive Dozen

Of course, I get questions. “How does he do it when so many others have perished?” “How does he get his hair to do that?” And my personal favorite, “How the hell does Thalheimer know these are the most critical learning factors?” It’s a fair question, and instead of releasing 12 chapters from my potentially-forthcoming hopefully-not-posthumorous book, I created a little research brief just to show that I’m not making this stuff up. I’m stealing directly from the world’s best learning researchers!

Click to download the Decisive Dozen Research Review

Thanks for your interest in my work!

Clark Quinn (blog, website, Twitter) recently cited some of my thinking about instructional objectives in the instructional technology forum of AECT (ITFORUM). I wrote a long email to Clark in response, thanking him and going into more detail. I am reprising my response to Clark here:

In a recent post to this list, Clark Quinn rightly notes that objectives for learners and objectives for instructional designers need not be identical. Indeed, as both Clark and I have previously noted, they probably shouldn’t be identical.

Here’s the thinking: Objectives are designed to guide behavior. So, how can it be that identically-worded objectives can adequately guide the behavior of two disparate groups of individuals (learners and instructional designers)? It just doesn’t make any sense!!

And indeed, Hamilton (1985) found that presenting learners with learning objectives in the way Mager suggested PRODUCES NO BENEFITS AND MAY BE HARMFUL. Here’s what Hamilton wrote:

“[An instructional] objective that generally identifies the information to be learned in the text will produce robust effects. Including other information (per Mager’s, 1962, definition) will not significantly help and it may hinder the effects of the objectives”

(Hamilton, 1985, p. 78).

Objectives are not only designed to change behavior for a particular set of individuals, but they are also designed with particular purposes in mind—or they should be.

So, when we talk of instructional objectives, we also need to think about what purpose we have for them.

The quote above from Hamilton is focused on how well learning objectives focus the attention of learners. Interestingly, this is the only area in which extensive research has been done on learning objectives. You might be surprised to know that learning objectives help learners focus on the information they target, but actually diminish attention to information in the learning materials they do not target. For example, in two experiments using specific objectives, Rothkopf and Billington (1979) found that when focusing objectives were provided to learners, performance on material related to the objectives improved by 49% and 47% over situations when focusing objectives were not used. However, the material not related to the learning objectives was learned 39% and 33% WORSE than it would have been if no learning objectives were used!

These types of instructional objectives—presented to learners prior to subsequent learning—I call “focusing objectives” because they are designed for the purpose of focusing learner attention on critical learning material. As the Hamilton (1985) review pointed out, it does NOT help to add Mager’s criterion information to focusing objectives, because it doesn’t help learners focus on the critical material.

NOW, here’s an important point (I say to focus your attention): We don’t necessarily need to use focusing objectives with learners if we have other means to focus their attention!! We can use a relevant, gripping story. We can do a shout-out (for example, “Here’s an important point…”). We can have them attempt to answer a relevant scenario-based question and struggle with it. Etcetera.

Here’s another important point: Focusing objectives are only one type of objective we might want to utilize. I have a whole list, and I’m sure you can think of more of them.

Instructional Objectives for Learners:

  1. Table-of-Contents Objectives
    To give learners a big picture sense of what will be taught.
  2. Performance Objectives
    To let learners know what performance will be expected of them.
  3. Motivation Objective
    To ensure learners know why they might be motivated to engage the learning or application of the learning.
  4. Focusing Objective
    To guide learner attention to the most critical information in the learning material.

Instructional Objectives for Developers:

  1. Instructional-Design Objective
    To guide developers toward the ultimate goal of the learning intervention.
  2. Evaluation Objective
    To guide developers (and other stakeholders) toward the measurable outcomes by which the learning intervention will be evaluated.
  3. Situation Objectives
    To guide developers to the situations that learners must be prepared for.
  4. Organization Objective
    To guide developers to the organizational effects targeted by the instruction.

 

Questions:

So, here’s some questions for you:

Is it okay to use the word understand in an “instructional-design objective”?

How about in a “focusing objective”?

Answer: It’s okay to use the word understand in a focusing objective—because it does not hurt the learner in setting them up to focus attention on critical concepts. But it is NOT okay to use the word understand in an instructional-design objective—because the word “understand” doesn’t have enough specificity to guide instructional design.

My point in asking these questions is to show that over-simplistic notions about instructional objectives are likely to be harmful to your instructional designs.

As usual, the research helps us see things we wouldn’t otherwise have seen.

Hope this helps!!

= Will

 

Will’s Note:

References

Hamilton, R. J. (1985). A framework for the evaluation of the effectiveness of adjunct questions and objectives. Review of Educational Research, 55, 47-85.

Mager, R. (1962). Preparing Instructional Objectives. Palo Alto, CA: Fearon Publishers.

Rothkopf, E. Z., & Billington, M. J. (1979). Goal-guided learning from text: Inferring a descriptive processing model from inspection times and eye movements. Journal of Educational Psychology, 71(3), 310-327.

I set out over 15 years ago to come up with a short list of the most important learning factors based on scientific research and practical real-world wisdom. I felt at the time, and I believe even more strongly now, that the learning field–particularly the workplace learning-and-performance field–is too strongly tempted to jump from one learning fad to another, while ignoring the learning factors that are most important.

My original goal was to create a list that had no more than seven learning factors. I did extensive reviews of a wide swath of the research on learning, memory, and instruction, often doing comparative effect-size analyses to determine which factors were most important. As the years went by and blurred into the second decade of work, I increasingly looked at the research from a practical perspective, hoping to find the factors that were not just the most potent, but also the most leveragable by real-world instructional designers, trainers, teachers, e-learning developers, and so on. In the end, I failed to find only seven factors, but found 12 that seem extraordinarily potent and leveragable.

Obviously, to pick the most important learning factors is a difficult endeavor–and one subject to a significant degree of human judgment, of which some of mine is surely faulty. Still, all along the way, I have not lost my strong belief that coming up with a short list of learning factors based on the world’s best scientific research from peer-reviewed refereed journals would be extremely helpful in keeping us focused on the factors that matter the most. At a minimum, I feel the list I have created will help most of us create remarkably more effective learning interventions.

In the current version of my book draft, I say the following:

If you put all 12 of these factors into practice, your learning interventions are likely to be more effective than 95% of all workplace learning interventions currently being utilized!!

This is quite a bold statement, I know. But I’m very comfortable in making it. In the book I provide a detailed footnote explaining the evidence behind the statement, and maybe an editor will convince me not to be so bold (maybe push me down to 93%, for example–SMILE), but I really do think that most learning interventions in the workplace learning field are lacking significant effectiveness.

Also in the book, each of the 12 factors has its own chapter, often dozens of pages long, backed up by dozens of research studies, and dozens of practical implications and recommendations. Dozens to the max! Obviously, the short descriptions in the synopsis below cannot even approach doing these topics justice. Still, they may provide you with a good framework to enable you to begin to see your learning designs in a different light.

The Decisive Dozen

Because I have shared the Decisive Dozen with clients and in keynote addresses and conference speeches for the last year or so, I decided that it was time to make the list officially public.

Click here to download a brief synopsis of the Decisive Dozen

I welcome your comments and feedback.

And, of course, for those who can’t wait for the book (I can’t blame you, I’m taking a long time, aren’t I?), I would be delighted to discuss with you how the Decisive Dozen might be helpful in guiding your organization to greater learning effectiveness.

Update: Now you can check out a research review supporting the Decisive Dozen.

Click here to get access to the free research review

Despite decades of advocacy by our best trade associations, our wisest gurus, and our most practical researchers, most organizations today still rely on training courses that have little impact in promoting on-the-job performance.

As I mentioned in a recent article, we as learning professionals continue to fail in five major ways. You can access that article by clicking here.

I used to think that this was just a failure of knowledge, but in most of the organizations in which I’ve consulted, there are at least a few learning-and-performance professionals who understand that training alone is not enough. Part of the problem is the dead weight of tradition–the “old normal” continues to blind us to new possibilities. The enlightened few have a hard time pushing back against the gravitational pull of this mass hypnosis.

I recently had a new insight–a way of looking at this problem that I think might enable organizations to break out of their bad habits. The solution is that we have to gain control of the leverage points we have to push for change. We have to change the levers that warp and control our thinking. The big lever is learning measurement. I’ve been pushing this for years as our most important leverage point. If we measured better, we’d get better feedback, which would push us to create better learning interventions.

But learning measurement isn’t our only lever, and changing your learning measurement practices is not always easy politically. Besides learning measurement, I’ve compiled a whole list of other leverage points that really matter. In fact, it was only recently that I had this incredible insight (one I maybe should have had 10 years ago), that we ought to figure out all the levers we have at our disposal and change them to help push our organizations toward a performance orientation. I’d like to reveal one of those levers today.

One of the things we do in our organizations is review our training courses from time to time–either intentionally or by osmosis and feeling. Well, instead of using the wrong metrics, why not use methods that we know–based on our understanding of learning-and-performance–are likely to be good indicators of whether our training courses will support actual on-the-job performance?

The Course Review Template is something that can be used on any training course–classroom training or e-learning. It includes a set of questions that are indicators of how performance-based your training course is. Each rubric in this tool is inspired by research or proven practices that I’ve learned in my 25+ years in the workplace learning field.
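As a rough sketch of how a weighted rubric score for such a review might be computed (the rubric names, weights, and ratings below are invented for illustration and are not the actual contents of the Course Review Template):

```python
# Hypothetical rubrics with invented weights (weights sum to 100).
# Ratings are judgments from 0.0 (absent) to 1.0 (fully present).
rubrics = {
    "realistic practice of on-the-job decisions": 30,
    "feedback on practice attempts": 20,
    "spaced repetitions over time": 25,
    "after-training support for application": 25,
}

def course_score(ratings):
    """Weighted percentage score for a course review (max 100)."""
    return round(sum(weight * ratings[name]
                     for name, weight in rubrics.items()), 1)

ratings = {
    "realistic practice of on-the-job decisions": 0.5,
    "feedback on practice attempts": 1.0,
    "spaced repetitions over time": 0.0,
    "after-training support for application": 0.2,
}
print(course_score(ratings))  # 40.0
```

A course could then be compared against a threshold or against other courses, which is exactly the kind of comparison the survey link below was meant to enable.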

I should give you a warning. You’re unlikely to be happy with what you find. If I bet each of you one dollar for each training course of yours that doesn’t support performance, I’d be a millionaire overnight.

But to be fair, I’m going to let you try out the tool yourself. It’s free. Use it. And, let me know how your training courses rate. Are they likely to improve on-the-job performance or not?

Click to Download the Course Review Template

After you review a course, post your results at the following link, and when we get enough responses, we’ll let you compare your results to others.

Click to Post Your Course Review Results — SORRY, we’re done collecting data in a survey format.

Maybe I’m having a momentary bout of delusional cognition, but I’m thinking right now that this simple Course Review Template might just revolutionize our ability to simply review our courses to see how performance focused they are.

Such a grandiose statement will provoke eye rolls in some, so let me stipulate a few things. First, this is a first draft, so the Course Review Template is going to be eminently improvable. Second, the Course Review Template is NOT a precision instrument. It is not psychometrically derived, the numbers it assigns to each rubric are best guesses, and there was no super-committee here–just me. Third, the rubrics themselves are subject to interpretation. Instead of over-complicating the form and making it unusable, I decided to keep it simple and make it less precise. Finally, course reviews are just one of the levers you’ll need to completely transition from a course-focus to a performance-focus.

The bottom line is that we have to try some innovative new things to push our organizations to a performance focus. The old ways have not worked. The Course Review Template–or something like it–is worth a try. And seriously, I think it could revolutionize the way your organization views its training courses.

NOTE 2017: While this is the original blog post, it now includes the latest version of the Course Review Template. A later post that introduced the improvements is available here.

 

This blog post is excerpted from the full report, How Much Do People Forget? Click here to download the full report. You may also access the report—and many other reports—by going to my catalog page by clicking here.

Everybody Wants to Know—How Much Do People Forget?

For years, people have been asking me, “How much do people forget?” and I’ve told them, “It depends.” When I make this statement, most people scowl at me and walk away frustrated and unrequited. I also suspect that some of them think less of me—perhaps that I am just hiding my ignorance.

But I try. I try to explain the complexity of human learning. I explain that forgetting depends on many things, for example:

  • The type of material that is being learned
  • The learners’ prior knowledge
  • The learners’ motivation to learn
  • The power of the learning methods used
  • The contextual cues in the learning and remembering situations
  • The amount of time the learning has to be retained
  • The difficulty of the retention test
  • Etc.

More meaningful materials (like stories) tend to be easier to remember than less meaningful material (like nonsense syllables). More relevant concepts tend to be easier to remember than less relevant concepts. Learners who have more prior knowledge in a topic area are likely to be better able to remember new concepts learned in that area. More motivated learners are more likely to remember than less motivated learners. Learners who receive repetitions, retrieval practice, feedback, variety (and other potent learning methods) are more likely to remember than learners who do not receive such learning supports. Learners who are provided with learning and practice in the situations where they will be asked to remember the information will be better able to remember. Learners who are asked to retrieve information shortly after learning it will retrieve more than learners who are asked to retrieve information a long time after learning it.

I try to explain all this, but still people keep asking.

And then there are the statistics I keep hearing—that are passed around the learning field from person to person through the years as if they were immutable truths carved by Old Moses Ebbinghaus on granite stones. Here is some information so cited (as of December 2010):

  • People forget 40% of what they learned in 20 minutes and 77% of what they learned in six days (http://www.festo-didactic.co.uk/gb-en/news/forgetting-curve-its-up-to-you.htm?fbid=Z2IuZW4uNTUwLjE3LjE2LjM0Mzc).
  • People forget 90% after one month. (http://www.reneevations.com/management/ebbinghaus-curve/)
  • People forget 50-80% of what they’ve learned after one day and 97-98% after a month. (http://www.adm.uwaterloo.ca/infocs/study/curve.html)

Never mind that these immutable truths conflict with each other.

So, I will try one more time to convince the world that forgetting depends.

To accomplish this, I explored 14 research articles, examining 69 conditions, representing over 1,000 learners, to see how much forgetting occurred.

The following graph details the amount of forgetting for each of the 69 conditions:

 

Conclusions

This graph and the in-depth analysis in the full article revealed four critical concepts in human learning—truths that every learning professional should deeply understand.

  1. The amount a learner will forget varies depending on many things. We as learning professionals will be more effective if we make decisions based on a deep understanding of how to minimize forgetting and enhance remembering.
  2. Rules-of-thumb that show people forgetting at some pre-defined rate are just plain false. In other words, learning gurus and earnest bloggers are wrong when they make blanket statements like, “People will forget 40% of what they learned within a day of learning it.”
  3. Learning interventions can produce profound improvements in long-term remembering. In other words, learning gurus are wrong when they say that training is not effective.
  4. Different learning methods produce widely different amounts of forgetting. We as learning professionals can be more effective if we take a research-based approach and utilize those learning methods that are most effective.

Telling Findings From the Research

  1. People in the reviewed experiments forgot from 0% to 94% of what they had learned. The bottom line is that forgetting varies widely.
  2. Even within a restricted time range, learners forgot at wildly differing rates. For example, in the 1-2 day range, learners forgot from 0% to 73%. Learners in the 2-8 year range forgot from 16% to 94%. The obvious conclusion here is that forgetting varies widely (and wildly) and cannot be predetermined (except perhaps by deities, of whom, I think, we have not even a few in the learning field). To be specific, when we hear statements like, “People will forget 60% of what they learned within 7 days,” we should ignore such advice and instead reflect on our own superiority and good looks until we are decidedly pleased with ourselves.
  3. Even when we looked at only one type of learning material, forgetting varied widely. For example, in Bahrick’s classic 1979 experiment where learners were learning English-Spanish word pairs, learners forgot from 12% to 63%. Even more remarkably, if we include those cases where learners actually remembered more on the second test than the first test, learners’ “forgetting” varied from -41% to 63%, a swing of 104 percentage points! Again, we must conclude that forgetting varies widely.
  4. Many of the experiments reviewed in this report showed clearly that learning methods matter. For example, in the Bahrick 1979 study, the best learning methods produced an average forgetting score of -29%, whereas the worst learning methods produced forgetting at 47%, a swing of 76 percentage points. In Runquist’s 1983 study, the best learning method produced average forgetting at 34%, whereas all the other learning methods produced average forgetting of 78%. In Allen, Mahler, and Estes’ 1969 experiment, the learners given the best learning methods forgot an average of 2.3%, whereas the learners who got middling learning methods forgot an average of 14.3%, and learners given the worst learning methods forgot approximately 21.7%. The bottom line is that the learning methods we choose make all the difference!!
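To make the arithmetic behind these forgetting scores concrete, here is a minimal sketch of one common way to compute a forgetting score from an immediate test and a delayed test. The full report describes the exact calculation used; this version, and its numbers, are illustrative only:

```python
def forgetting_score(immediate_pct, delayed_pct):
    """Percentage of originally learned material that was forgotten.

    A negative result means the learner remembered MORE on the
    delayed test than on the immediate test.
    """
    if immediate_pct == 0:
        raise ValueError("immediate score must be nonzero")
    return round(100 * (immediate_pct - delayed_pct) / immediate_pct, 1)

# A learner who scored 80% right after training and 44% a month later:
print(forgetting_score(80, 44))   # 45.0 -- forgot 45% of what was learned
# A learner who improved from 70% to 90% (say, after retrieval practice):
print(forgetting_score(70, 90))   # -28.6 -- "negative forgetting"
```

Negative scores, as in the Bahrick data above, simply mean the delayed test score exceeded the immediate one.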

Check out the full report to learn more about the following:

  • What you should do as a learning professional (in light of these findings).
  • Whether the learning-curve notion still applies.
  • What wisdom each of the 14 research articles revealed.
  • The methodology used in the research.
  • The calculation of forgetting.

 

Many of us are inclined to see audience response systems only as a way to deliver multiple-choice and true-false questions. While this may be true in a literal sense, such a restricted conception can divert us from myriad possibilities for deep and meaningful learning in our classrooms.

The following list of 39 question types and methods is provided to show the breadth of possibilities. It is distilled from 85 pages of detailed recommendations in the white paper, Questioning Strategies for Audience Response Systems: How to Use Questions to Maximize Learning, Engagement, and Satisfaction, available free by clicking here.

NOTE from Will Thalheimer (2017): The report is focused on audience-response systems — and I must admit that it is a bit dated now in terms of the technology, but the questions types are still a very potent list.

1. Graded Questions to Encourage Attendance

Questions can be used to encourage attendance, but there are dangers that must be avoided.

2. Graded Questions to Encourage Homework and Preparation

Questions can be used to encourage learners to spend time learning prior to classroom sessions, but there are dangers that must be avoided.

3. Avoiding the Use of One Correct Answer (When Appropriate)

Questions that don’t fulfill a narrow assessment purpose need not have right answers. Pecking for a correct answer does not always produce the most beneficial mathemagenic (learning-creating) cognitive processing. We can give partial credit. We can have two answers be equally acceptable. We can let the learners decide on their own.

4. Prequestions that Activate Prior Knowledge

Questions can be used to help learners connect their new knowledge to what they’ve already learned, making it more memorable. For example, a cooking teacher could ask a question about making yogurt before introducing a topic on making cheese, prompting learners to activate their knowledge about using yogurt cultures before they begin talking about how to culture cheese. A poetry teacher could ask a question about patriotic symbolism, before talking about the use of symbols in modern American poetry.

5. Prequestions that Surface Misconceptions

Learners bring naïve understandings to the classroom. One of the best ways to confront misconceptions is to bring them to the surface so that they can be confronted straight on. The Socratic Method is a prime example of this. Socrates asks a series of prequestions thereby unearthing misconceptions and leading to a new improved understanding.

6. Prequestions to Focus Attention

Our learners’ attention wanders. In an hour-long session, sometimes they’ll be riveted to the learning discussion, sometimes they’ll be thinking of other ideas that have been triggered, and sometimes they’ll be off in a daze. Prequestions (just like well-written learning objectives) can be used to help learners pay attention to the most important subsequent learning material. In fact, in one famous study, Rothkopf and Billington (1979) presented learners with learning objectives before they encountered the learning material. They then measured learning and eye movements and found that learners actually paid more attention to aspects of the learning material targeted by the learning objectives. Prequestions work the same way as learning objectives—they focus attention.

7. Postquestions to Provide Retrieval Practice

Postquestions—questions that come after the learning content has been introduced—can be used to reinforce what has been learned and to minimize forgetting. This is a very basic process. By giving learners practice in retrieving information from memory, we increase the probability that they’ll be able to do this in the future. Retrieval practice makes perfect.

8. Postquestions to Enable Feedback

Feedback is essential for learners and instructors. Corrective feedback is critical, especially when learners have misunderstandings. Providing retrieval practice with corrective feedback is especially important when learners are struggling with newly-encountered or difficult material, and when their attention is likely to wander—for example, when they’re tired after a long day of training, when there are excessive distractions, or when the previous material has induced boredom.

9. Postquestions to Surface Misconceptions

We already talked about using prequestions to surface misconceptions. We can also use postquestions to surface misconceptions. Learners don’t always understand concepts after only one presentation of the material. Many an instructor has been surprised after delivering a “brilliant” exposition to find that most of their learners just didn’t get it.

10. Questions Prompting Analysis of Things Presented in Classroom

One of the great benefits of classroom learning is that it enables instructors to present learners with all manner of things. In addition to verbal utterances and marks on a white board, instructors can introduce demonstrations, videos, maps, photographs, illustrations, learner performances, role-plays, diagrams, screen shots, computer animations, etcetera. While these presentations can support learning just by being observed, questions on what has been seen can prompt a different focus and a deeper understanding.

11. Using Rubric Questions to Help Learners Analyze

In common parlance, the term “rubric” connotes a set of standards. Rubrics can be utilized in asking learners questions about what they experience in the classroom. Rubric questions, if they are well designed, can give learners practice in evaluating situations, activities, and events. Such practice is an awesome way to engage learners and prepare them for critical thinking in similar future situations. In addition, if rubrics are continually emphasized, learners will integrate their wisdom in their own planning and decision-making.

12. Questions to Debrief an In-Class Experience

Classrooms can also be used to provide learners with experiences in which they themselves participate. Learners can be asked to take part in role plays, simulations, case studies, and other exercises. It’s usually beneficial to debrief those exercises, and questions can be an excellent way to drive those discussions.

13. Questions to Surface Affective Responses

Not all learning is focused on the cold, steely arithmetic of increasing the inventory of knowledge. Learners can also experience deep emotional responses, many of which are relevant to the learning itself. In topics dealing with oppression, slavery, brutality, war, leadership, glory, and honor, learners aren’t getting the full measure of learning unless they experience emotion in some way. Learners can be encouraged to explore their affective responses by asking them questions.

14. Scenario-Based Decision-Making Questions

Scenario-based questions present learners with scenarios and then ask them to make a decision about what to do. These scenarios can take many forms. They can consist of short descriptive paragraphs or involved case studies. They can be presented in a text-only format or augmented with graphics or multimedia. They can put the learner in the protagonist’s role (“What are you going to do?”) or ask the learner to make a decision for someone else (“What should Dorothy do?”). The questions can be presented in a number of formats—as multiple-choice, true-false, check-all-that-apply, or open-ended queries.

15. Don’t Show Answer Right Away

There’s no rule that you have to show learners the correct response right after they answer the question. Such a reflexive behaviorist scheme can subvert deeper learning. Instructors have had great success in withholding feedback. For example, Harvard professor Mazur’s (1997) Peer Instruction method requires learners to make an individual decision and then try to convince a peer to believe the same decision—all before the instructor weighs in with the answer.

By withholding feedback, learners are encouraged to take some responsibility for their own beliefs and their own learning. Discussions with others further deepen the learning. Simply by withholding the answer, instructors can encourage strategic metacognitive processing, thereby sending learners the not-so-subtle message that it is they—the learners—who must take responsibility for learning.

16. Dropping Answer Choices

There are several reasons to drop answer choices after learners have initially responded to a question. You can drop incorrect answer choices to help focus further discussions on more plausible alternatives. You can drop an obviously correct choice to focus on more critical distinctions. You can drop an unpopular correct choice to prompt learners to question their assumptions and also to highlight the importance of examining unlikely options. Each of these methods has specific advantages.

17. Helping Learners Transfer Knowledge to Novel Situations

“Transfer” is the idea that the learning that happens today ought to be relevant to other situations in the future. More specifically, transfer occurs when learners retrieve what they’ve learned in relevant future situations. As we’ve already discussed, the easiest and often the most potent way to promote transfer is to provide learners with practice in the same contexts—retrieving the same information—that they’ll be required to retrieve in future situations. But questions can also be used to prepare learners to retrieve information in situations that are not, or cannot, be anticipated in designing the learning experience.

18. Making the Learning Personal

By making the learning personal, we help learners actively engage the learning material, we support mathemagenic cognitive processing, and we make it more likely that they’ll think about the learning outside of our classrooms, further reinforcing retention and utilization. Questions can be designed to relate to our learners’ personal experiences, thus bolstering learning.

19. Making the Material Important

Sometimes we can’t make the material directly personal or provide realistic decisions for learners to make, but we can still use questions to show the importance of the topic being discussed.

20. Helping Learners Question Their Assumptions

One of our goals in teaching is to get learners to change their thinking. Sometimes this requires learners to directly confront their assumptions. Questions can be written that force learners to evaluate the assumptions they bring to particular topic areas.

21. Using the Devil’s Advocate Tactic

In a classroom, when we play the devil’s advocate, we argue ostensibly to find flaws in the positions put forth. The devil’s advocate tactic can be used in a number of different ways. You can play the devil’s advocate yourself, or utilize your learners in that role. From a learning standpoint, when someone plays the devil’s advocate, learners are prompted to more fully process the learning material.

22. Data Slicing

Data slicing is the process of using one factor to help make sense of a second factor. So for example, through the use of our audience response systems, we might examine how our learners’ socio-economic backgrounds affect their opinions of race relations. Data slicing can be done manually or automatically. It is particularly powerful in the classroom for demonstrating how audience characteristics may play a part in learners’ own perceptions or judgments.
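As a sketch of what data slicing amounts to computationally (cross-tabulating answers by a background factor), here is a minimal example. The factor values and answer choices are invented, and no actual audience-response-system API is implied:

```python
from collections import Counter

# Hypothetical response records: (background_factor, opinion_answer)
responses = [
    ("urban", "A"), ("urban", "B"), ("urban", "A"),
    ("rural", "C"), ("rural", "A"), ("rural", "C"),
]

def slice_by(records):
    """Cross-tabulate answers by the first factor: factor -> answer counts."""
    table = {}
    for factor, answer in records:
        table.setdefault(factor, Counter())[answer] += 1
    return table

for factor, counts in slice_by(responses).items():
    print(factor, dict(counts))
# urban {'A': 2, 'B': 1}
# rural {'C': 2, 'A': 1}
```

Displaying the two sliced distributions side by side is what lets a class see that subgroups answered the same question differently.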

23. Using Questions for In-class Experiments.

For some topics, in-class experimentation—using the learners as the experimental participants—is very beneficial. It helps learners relate to the topic personally. It also highlights how scientific data is derived. For example, in a course on learning, psychology, or thinking, learners could be asked to remember words, but could—unbeknownst to them—be primed to think about certain semantic associates and not others.

24. Prompting Learners to Make Predictions

Prediction-making can facilitate learning in many ways. It can be used to provide retrieval practice for well-learned information. It can be used to deepen learners’ understandings of boundary conditions, contingencies, and other complications. It can be used to engender wonder. It can be used to enable learners to check their own understanding of the concepts being learned.

25. Utilizing Student Questions and Comments

Our learners often ask the best questions. Sometimes a learner’s question hints at the outlines of his or her confusion—and the confusion of many others as well. Sometimes learners want to know about boundary conditions. Students can also offer statements that can improve the learning environment. They may share their comfort level with the topic, add their thoughts in a class discussion, or argue a point because they disagree. All of these interactions provide opportunities for a richer learning environment, especially if we—as instructors—can use these questions to generate learning.

26. Enabling Readiness When Learners are Aloof or Distracted

Let’s face it. Not all of our learners will come into our classrooms ready to learn. Some will be dealing with personal problems. Some will be attending because they have to—not because they want to. Some will be distracted with other stress-inducing responsibilities. Some will think the topic is boring, silly, or irrelevant to them. Fortunately, experienced instructors have discovered tricks that often are successful. Audience response technology can help.

27. Enabling Readiness When Learners Think They Know it All

Some learners will come to your classroom thinking they already know everything they need to know about the topic you’re going to discuss. There are two types of learners who feel this way—those who are delusional (they actually need the learning) and those who are quite clearheaded (they already know what they need to know). Using the right questions and gathering everyone’s responses can help you deal with both of these characters.

28. Enabling Readiness When Learners are Hostile

In almost every instructor’s life, there will come a day when one, two, or multiple learners are publicly hostile. Experienced instructors know that such hostility must be dealt with immediately—not ignored. Even a few bad apples can ruin the learning experience and the satisfaction of the whole classroom. Fortunately, there are ways to fend off the assault.

29. Using Questions with Images

Using images as part of the learning process is critical in many domains. Obvious examples are art appreciation, architecture, geology, computer programming, and film. But even for the least likely topics, such as poetry or literature, there may be opportunities. For example, a poetry teacher may want to display poems to ask learners about the physical layout of poems. Images should not be thrown in willy-nilly. They should be used only when they help instructors meet their learning goals. Images should not be used just to make the question presentation look good. Research has shown that placing irrelevant images in learning material, even if those images seem related to the topic, can hurt learning results by distracting learners from focusing on the main points of the material. One easy rule: Don’t use images if they’re not needed to answer the question.

30. Aggregating Handset Responses for a Group or Team

Some handset brands enable responses of individual handsets to be aggregated. So for example, an instructor in a class of 50 learners might break the learners into 10 teams, with five people on a team. All 50 learners have a handset, but the responses from each team of five learners are aggregated in some way. This aggregation feature enables some additional learning benefits. Teamwork can be rewarded and competition between teams can add an extra element of motivation. Using aggregation scoring allows the instructor to encourage out-of-class activities where learners within a team help each other. Obviously, this will only work if the learning experience takes place over time. In such cases, aggregation can be used to build a learning community. Learners can be assigned to the same team or rotated on different teams, depending on the goals of instruction. Putting learners on one team encourages deeper relationships and eases the logistics for out-of-class learning. Rotating learners through multiple teams enables a greater richness of multiple perspectives and broader networking opportunities. It’s a tradeoff.
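One simple aggregation rule is a majority vote within each team. The sketch below is my own illustration of that rule (the handset IDs, team assignments, and tie-breaking behavior are assumptions, not any vendor's actual feature):

```python
from collections import Counter, defaultdict

# Hypothetical individual responses: handset_id -> answer
answers = {"h1": "B", "h2": "B", "h3": "A", "h4": "C", "h5": "C", "h6": "C"}
# Hypothetical team assignments: handset_id -> team
teams = {"h1": "T1", "h2": "T1", "h3": "T1", "h4": "T2", "h5": "T2", "h6": "T2"}

def team_majority(answers, teams):
    """Aggregate individual handset answers into one majority answer per team."""
    by_team = defaultdict(list)
    for handset, answer in answers.items():
        by_team[teams[handset]].append(answer)
    # Counter.most_common breaks ties by first-seen order -- a design choice
    # a real system would need to make explicit.
    return {team: Counter(votes).most_common(1)[0][0]
            for team, votes in by_team.items()}

print(team_majority(answers, teams))  # {'T1': 'B', 'T2': 'C'}
```

A system could just as easily aggregate by summing points or averaging confidence ratings; majority vote is only the simplest option.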

31. Using One Handset for a Group or Team

Although one of the prime benefits of handsets is that every learner is encouraged to think and respond, handsets don’t have to be used only in a one-person one-handset format. Sometimes a greater number of audience members show up than expected. Sometimes budgets don’t allow for the purchase of handsets for every learner. Sometimes learners forget to bring their handsets. In addition, sometimes there are specific interactions that are more suited to group responding. When a group of learners has to make a single response, there has to be a mechanism for them to decide what response to make. Several exist, each having their own strengths and weaknesses.

32. Using Questions in Games

As several sales representatives have told me, one of the first things instructors ask about when being introduced to a particular audience response system is the gaming features. This excitement is understandable, because almost all classroom audiences respond energetically to games. Our enthusiasm as instructors must be balanced, however, with knowledge of the pluses and minuses of gaming. Just as with grading manipulations, games energize learners toward specific overt goals—namely scoring well on the game. If this energy is utilized in appropriate mathemagenic activity, it has benefits. On the other hand, games can be highly counterproductive as well.

33. Questions to Narrow the Options in Decision Making

Sometimes the audience in the room must make decisions about what to do. For example, a senior manager running an action-learning group may want to take a vote about which project to pursue given a slate of 15 possible projects. A professor in an upper-level seminar course might give students a vote in deciding which of the 10 possible topics to discuss in the final three weeks of the course. A supervisor might want her employees to narrow down the candidates for employee of the year. A primary school teacher might want to give her students a choice of field trip options. Audience response systems can be used in two ways to do this: single-round voting and double-round voting.
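Double-round voting can be sketched as a runoff: tally a first vote, keep only the top options, then vote again among those finalists. The cut-off of two finalists and the sample ballots below are my own choices for illustration:

```python
from collections import Counter

def top_options(votes, keep=2):
    """Round one: tally the votes and keep the top `keep` options for a runoff."""
    tally = Counter(votes)
    return [option for option, _ in tally.most_common(keep)]

# First round over three hypothetical projects:
round1 = ["P3", "P1", "P3", "P2", "P1", "P3", "P2", "P1", "P1"]
finalists = top_options(round1)          # ['P1', 'P3']

# Second round: the audience votes again, choosing only among the finalists.
round2 = ["P1", "P3", "P1", "P1", "P3"]
winner = Counter(round2).most_common(1)[0][0]
print(finalists, winner)                 # ['P1', 'P3'] P1
```

Single-round voting is just the first tally taken as final; the runoff round is what lets a split field converge on a clear choice.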

34. Questions to Decide Go or No Go

Sometimes it’s beneficial to give our learners a chance to decide whether they’re ready to go on to the next topic. You might ask, “Are we ready to go ahead?” Or, “Are we ready to go ahead, or do I need to clarify this a bit more?” Using an audience response system has distinct advantages over hand-raising here, because most learners are uncomfortable asking for additional instruction, even when they need it.

35. Perspective-Taking Questions

There are some topics that may benefit from encouraging learners to take the perspectives of others in answering questions. In other words, instead of only asking our learners to express their own opinions, we can ask them to guess the opinions of others. For example, we might ask our learners to guess the opinions of both rich and poor people on affirmative action, the importance of education, and so on.

36. Open-Ended Questions

Some people think that audience response systems lack potential because they only enable the use of multiple-choice questions. In contrast, the research on learning suggests to me that (a) multiple-choice questions can be powerful on their own, (b) variations of multiple-choice questions add to this power, and (c) open-ended questions can be valuable in conjunction with multiple-choice formats, for example by letting learners think first on their own, surfacing student ideas, providing more authentic retrieval practice, and so on.

37. Matching

Matching questions are especially valuable if your learning goal is to enable learners to distinguish between closely related items. The matching format can also be useful for logistical reasons, letting you ask more than one question at a time. Although the matching question has its uses, it is often overused by instructors who are simply trying to use something other than multiple-choice questions. Often, the matching format only helps learners reinforce relatively low-level knowledge, such as definitions, word meanings, and simple calculations. While this type of information is valuable, it’s not clear that the classroom is the best place to reinforce it.

38. Asking People to Answer Different Questions

Some audience response systems enable learners to simultaneously answer different questions. In other words, Sam might answer questions 1, 3, 5, 7, and 9, while Pat answers questions 2, 4, 6, 8, and 10. This feature provides an advantage only when it’s critical not to let (a) individual learners cheat off other learners, or (b) groups of learners overhear the conversations of other groups of learners. The biggest disadvantage to this tactic is that it makes post-question discussions particularly untenable. In any case, if you do find a unique benefit to having learners answering different questions simultaneously, it’s likely to be for information that is already well learned—where in-depth discussions are not needed.

39. Using Models of Facilitated Questioning

In the paper that details these 39 question types and methods, I attempted to lay bare the DNA of classroom questioning. I intentionally stripped questioning practices down to their essence in the hope of creating building blocks that you, my patient readers, can utilize to build your own interactive classroom sessions. For example, I talked specifically about using prequestions to focus attention, to activate prior knowledge, and to surface misconceptions. I didn’t describe the myriad permutations that pre- and postquestions might take, for example, or any systematic combinations of the many other building blocks I described. While I didn’t describe them, many instructors have developed their own systematic methods—or what I will call “Models of Facilitated Questioning.” For example, in the paper I briefly describe Harvard Professor Eric Mazur’s “Peer Instruction” method and the “Question-Driven Instruction” method from the University of Massachusetts’s Scientific Reasoning Research Institute and Department of Physics.

Click to download the full report.


The learning-and-performance industry is deluged with instruments purported to help people (1) work better in teams, (2) manage more effectively, (3) hire the right people, (4) promote the best people, and so on. Unfortunately, many of these instruments have validity, reliability, and magnitude-of-effect problems, despite being well received by respondents and by learning-and-performance professionals. For example, I will note problems with the Myers-Briggs Type Indicator (MBTI) below.

Such instruments include multi-rater 360-degree instruments, job-skills tests, knowledge tests, and personality inventories. This blog post is related specifically to personality inventories.

Personality instruments include the wildly popular Myers-Briggs Type Indicator (MBTI) and the DISC, plus all sorts of other tests indexed by colors, shapes, and other personality dimensions.

The thinking is that people’s personalities influence their actions, and their actions determine their workplace effectiveness. This makes sense intuitively, but in practice it has not always been easy to show that personality affects behavior. Early excitement about this possibility in the mid-1900s (roughly 1930 to 1960) gave way to skepticism; personality measures only rebounded into favor in the 1990s as new research found evidence that personality tests could be related to job performance. For a good historical overview, see John and Srivastava (1999; link in the reference section below).

Recent research has generally found that personality inventories are related to job performance, though the relationships may be modest and not always consistent. Barrick and Mount (1991) did a meta-analysis looking at many aspects of job performance and found personality to be a factor. Zhao and Seibert (2006) found that the Five-Factor Personality types were related to entrepreneurial skills. Clarke and Robertson (2005) found that personality was related to workplace and non-workplace accidents. Barrick, Mount, and Judge (2001) examined 15 different meta-analyses and concluded that personality and performance were linked.

But this research needs to be understood with some perspective. As Hurtz and Donovan (2000) and others have pointed out, the relationship between the five-factor personality inventories and job performance can be somewhat limited. In other words, just because a person scores a certain way doesn’t necessarily mean that they will act a certain way; while there is a slight tendency in the predicted direction, it often is only a slight tendency. Hurtz and Donovan worry further that when other indicators are used (e.g., previous job experience, interviews, etc.), personality measures may provide very little additional information. Moreover, they cite the worry that respondents can fake their responses on personality inventories (see also, Birkeland, Manson, Kisamore, Brannick, & Smith, 2006).
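To make the “slight tendency” concrete: a correlation is converted to explained variance by squaring it. The arithmetic below is illustrative only; the validity values are hypothetical, chosen to resemble the modest correlations typical of this literature rather than taken from Hurtz and Donovan:

```python
# Illustrative arithmetic: squaring a validity coefficient (a correlation)
# gives the share of person-to-person variance in performance it accounts for.
for r in (0.10, 0.20, 0.30, 0.40):
    variance_explained = r ** 2  # r-squared
    print(f"validity r = {r:.2f} -> explains {variance_explained:.0%} of variance")

# A validity of .20 explains only 4% of the variance in performance,
# leaving 96% of the variation among people unaccounted for.
```

This is why a statistically real personality-performance link can still add little practical information once stronger predictors (experience, structured interviews, and the like) are already in the mix.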

It is particularly important to note that personality research is now almost all tied to the “Big-Five” or “Five-Factor” personality taxonomy. This taxonomy measures personality along five distinct scales: Openness, Conscientiousness, Extraversion, Agreeableness, and Emotional Stability (the reverse of Neuroticism). The Big-Five taxonomy has been validated in many scientific studies (Digman, 1990; Hogan, Hogan, & Roberts, 1996) and is the most widely accepted of the many personality models, especially as it relates to workplace behaviors. For example, Barrick, Mount, and Judge (2001), cited above, examined 15 meta-analyses investigating the relationship between the five personality factors and job performance.

Other personality taxonomies have not fared as well. For example, the MBTI (Myers-Briggs) has been widely discredited by researchers. It is considered neither reliable nor valid. For example, see Pittenger’s (2005) caution about using the MBTI. The DISC has not been studied enough to be scientifically validated.

Years ago, I used the MBTI in leadership training to make the point that people are different and may bring different skills and needs to the table. While using such a diagnostic seemed helpful in making that point, today I would use other ways to get that message across or use instruments that are scientifically validated.

To Learn More about Five-Factor Model of Personality

To Purchase/Use Instruments based on the Five-Factor Model

 

Research Citations

Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1-26.

Barrick, M. R., Mount, M. K., & Judge, T. A. (2001). Personality and performance at the beginning of the new millennium: What do we know and where do we go next? International Journal of Selection and Assessment, 9, 9-30.

Birkeland, S. A., Manson, T. M., Kisamore, J. L., Brannick, M. T., & Smith, M. A. (2006). A Meta-Analytic Investigation of Job Applicant Faking on Personality Measures. International Journal of Selection and Assessment, 14, 317-335.

Clarke, S., & Robertson, I. T. (2005). A meta-analytic review of the Big Five personality factors and accident involvement in occupational and non-occupational settings. Journal of Occupational and Organizational Psychology, 78(3), 355-376.

Costa, P., & McCrae, R. (1992). NEO-PI-R and NEO-FFI professional manual. Odessa, FL: Psychological Assessment Resources.

Digman, J. M. (1990). Personality structure: Emergence of the five-factor model. Annual Review of Psychology, 41, 417-440.

Hogan, R., Hogan, J., & Roberts, B. W. (1996). Personality measurement and employment decisions: Questions and answers. American Psychologist, 51, 469-477.

John, O. P., & Srivastava, S. (1999). The Big Five trait taxonomy: History, measurement, and theoretical perspectives. In L. A. Pervin & O. P. John (Eds.), Handbook of personality: Theory and research (2nd ed., pp. 102-138). New York: Guilford Press. Available at:
http://www.ocf.berkeley.edu/~johnlab/pdfs/john&srivastava,1999.pdf or http://www.uoregon.edu/~sanjay/pubs/bigfive.pdf

Pittenger, D. J. (2005). Cautionary Comments Regarding the Myers-Briggs Type Indicator. Consulting Psychology Journal: Practice and Research, 57, 210-221.

Zhao, H., & Seibert, S. E. (2006). The Big Five personality dimensions and entrepreneurial status: A meta-analytical review. Journal of Applied Psychology, 91, 259-271.

 

Some Interesting Articles on Personality and the Workplace (and their abstracts)

 

Personality and Team Performance: A Meta-Analysis.

By Peeters, Miranda A. G.; Van Tuijl, Harrie F. J. M.; Rutte, Christel G.; Reymen, Isabelle M. M. J.
European Journal of Personality. Vol 20(5), Aug 2006, 377-396.

Using a meta-analytical procedure, the relationship between team composition in terms of the Big-Five personality traits (trait elevation and variability) and team performance was researched. The number of teams upon which analyses were performed ranged from 106 to 527. For the total sample, significant effects were found for elevation in agreeableness (ρ = 0.24) and conscientiousness (ρ = 0.20), and for variability in agreeableness (ρ = -0.12) and conscientiousness (ρ = -0.24). Moderation by type of team was tested for professional teams versus student teams. Moderation results for agreeableness and conscientiousness were in line with the total sample results. However, student and professional teams differed in effects for emotional stability and openness to experience. Based on these results, suggestions for future team composition research are presented.

 

An examination of the role of personality in work accidents using meta-analysis.

By Clarke, Sharon; Robertson, Ivan
Applied Psychology: An International Review. Vol 57(1), Jan 2008, 94-108.

Personality has been studied as a predictor variable in a range of occupational settings. The study reported is based on a systematic search and meta-analysis of the literature, using the “Big Five” personality framework. The results indicated that there was substantial variability in the effect of personality on workplace accidents, with evidence of situational moderators operating in most cases. However, one aspect of personality, low agreeableness, was found to be a valid and generalisable predictor of involvement in work accidents. The implications of the findings for future research are discussed. Although meta-analysis can be used to provide definite estimates of effect sizes, the limitations of such an approach are also considered.

 

Personality and Transformational and Transactional Leadership: A Meta-Analysis.

By Bono, Joyce E.; Judge, Timothy A.
Journal of Applied Psychology. Vol 89(5), Oct 2004, 901-910.

This study was a meta-analysis of the relationship between personality and ratings of transformational and transactional leadership behaviors. Using the 5-factor model of personality as an organizing framework, the authors accumulated 384 correlations from 26 independent studies. Personality traits were related to 3 dimensions of transformational leadership–idealized influence-inspirational motivation (charisma), intellectual stimulation, and individualized consideration–and 3 dimensions of transactional leadership–contingent reward, management by exception-active, and passive leadership. Extraversion was the strongest and most consistent correlate of transformational leadership. Although results provided some support for the dispositional basis of transformational leadership–especially with respect to the charisma dimension–generally, weak associations suggested the importance of future research to focus on both narrower personality traits and nondispositional determinants of transformational and transactional leadership.

The Big Five personality dimensions and entrepreneurial status: A meta-analytical review.

By Zhao, Hao; Seibert, Scott E.
Journal of Applied Psychology. Vol 91(2), Mar 2006, 259-271.

In this study, the authors used meta-analytical techniques to examine the relationship between personality and entrepreneurial status. Personality variables used in previous studies were categorized according to the five-factor model of personality. Results indicate significant differences between entrepreneurs and managers on 4 personality dimensions such that entrepreneurs scored higher on Conscientiousness and Openness to Experience and lower on Neuroticism and Agreeableness. No difference was found for Extraversion. Effect sizes for each personality dimension were small, although the multivariate relationship for the full set of personality variables was moderate (R = .37). Considerable heterogeneity existed for all of the personality variables except Agreeableness, suggesting that future research should explore possible moderators of the personality-entrepreneurial status relationship.

 

Predicting job performance using FFM and non-FFM personality measures.

By Salgado, Jesús F.
Journal of Occupational and Organizational Psychology. Vol 76(3), Sep 2003, 323-346.

This study compares the criterion validity of the Big Five personality dimensions when assessed using Five-Factor Model (FFM)-based inventories and non-FFM-based inventories. A large database consisting of American as well as European validity studies was meta-analysed. The results showed that for conscientiousness and emotional stability, the FFM-based inventories had greater criterion validity than the non-FFM-based inventories. Conscientiousness showed an operational validity of .28 (N=19,460, 90% CV=.07) for FFM-based inventories and .18 (N=5,874, 90% CV=-.04) for non-FFM inventories. Emotional stability showed an operational validity of .16 (N=10,786, 90% CV=.04) versus .05 (N=4,541, 90% CV=-.05) for FFM and non-FFM-based inventories, respectively. No relevant differences emerged for extraversion, openness, and agreeableness. From a practical point of view, these findings suggest that practitioners should use inventories based on the FFM in order to make personnel selection decisions.


A Meta-Analytic Investigation of Job Applicant Faking on Personality Measures.

By Birkeland, Scott A.; Manson, Todd M.; Kisamore, Jennifer L.; Brannick, Michael T.; Smith, Mark A.
International Journal of Selection and Assessment. Vol 14(4), Dec 2006, 317-335.

This study investigates the extent to which job applicants fake their responses on personality tests. Thirty-three studies that compared job applicant and non-applicant personality scale scores were meta-analyzed. Across all job types, applicants scored significantly higher than non-applicants on extraversion (d = .11), emotional stability (d = .44), conscientiousness (d = .45), and openness (d = .13). For certain jobs (e.g., sales), however, the rank ordering of mean differences changed substantially, suggesting that job applicants distort responses on personality dimensions that are viewed as particularly job relevant. Smaller mean differences were found in this study than those reported by Viswesvaran and Ones (Educational and Psychological Measurement, 59(2), 197-210), who compared scores for induced ‘fake-good’ vs. honest response conditions. Also, direct Big Five measures produced substantially larger differences than did indirect Big Five measures.

 

A meta-analytic review of the Big Five personality factors and accident involvement in occupational and non-occupational settings.

By Clarke, Sharon; Robertson, Ivan T.
Journal of Occupational and Organizational Psychology. Vol 78(3), Sep 2005, 355-376.

Although a number of studies have examined individual personality traits and their influence on accident involvement, consistent evidence of a predictive relationship is lacking due to contradictory findings. The current study reports a meta-analysis of the relationship between accident involvement and the Big Five personality dimensions (extraversion, neuroticism, conscientiousness, agreeableness, and openness). Low conscientiousness and low agreeableness were found to be valid and generalizable predictors of accident involvement, with corrected mean validities of .27 and .26, respectively. The context of the accident acts as a moderator in the personality-accident relationship, with different personality dimensions associated with occupational and non-occupational accidents. Extraversion was found to be a valid and generalizable predictor of traffic accidents, but not occupational accidents. Avenues for further research are highlighted and discussed.


Big Five personality predictors of post-secondary academic performance.

By O’Connor, Melissa C.; Paunonen, Sampo V.
Personality and Individual Differences. Vol 43(5), Oct 2007, 971-990.

We reviewed the recent empirical literature on the relations between the Big Five personality dimensions and post-secondary academic achievement, and found some consistent results. A meta-analysis showed Conscientiousness, in particular, to be most strongly and consistently associated with academic success. In addition, Openness to Experience was sometimes positively associated with scholastic achievement, whereas Extraversion was sometimes negatively related to the same criterion, although the empirical evidence regarding these latter two dimensions was somewhat mixed. Importantly, the literature indicates that the narrow personality traits or facets presumed to underlie the broad Big Five personality factors are generally stronger predictors of academic performance than are the Big Five personality factors themselves. Furthermore, personality predictors can account for variance in academic performance beyond that accounted for by measures of cognitive ability. A template for future research on this topic is proposed, which aims to improve the prediction of scholastic achievement by overcoming identifiable and easily correctable limitations of past studies.

 

Gender differences in personality traits across cultures: Robust and surprising findings.

By Costa Jr., Paul; Terracciano, Antonio; McCrae, Robert R.
Journal of Personality and Social Psychology. Vol 81(2), Aug 2001, 322-331.

Secondary analyses of Revised NEO Personality inventory data from 26 cultures (N =23,031) suggest that gender differences are small relative to individual variation within genders; differences are replicated across cultures for both college-age and adult samples, and differences are broadly consistent with gender stereotypes: Women reported themselves to be higher in Neuroticism, Agreeableness, Warmth, and Openness to Feelings, whereas men were higher in Assertiveness and Openness to Ideas. Contrary to predictions from evolutionary theory, the magnitude of gender differences varied across cultures. Contrary to predictions from the social role model, gender differences were most pronounced in European and American cultures in which traditional sex roles are minimized. Possible explanations for this surprising finding are discussed, including the attribution of masculine and feminine behaviors to roles rather than traits in traditional cultures.

 

Five-factor model of personality and job satisfaction: A meta-analysis.

By Judge, Timothy A.; Heller, Daniel; Mount, Michael K.
Journal of Applied Psychology. Vol 87(3), Jun 2002, 530-541.

This study reports results of a meta-analysis linking traits from the 5-factor model of personality to overall job satisfaction. Using the model as an organizing framework, 334 correlations from 163 independent samples were classified according to the model. The estimated true score correlations with job satisfaction were -.29 for Neuroticism, .25 for Extraversion, .02 for Openness to Experience, .17 for Agreeableness, and .26 for Conscientiousness. Results further indicated that only the relations of Neuroticism and Extraversion with job satisfaction generalized across studies. As a set, the Big Five traits had a multiple correlation of .41 with job satisfaction, indicating support for the validity of the dispositional source of job satisfaction when traits are organized according to the 5-factor model.

 

Relationship of personality to performance motivation: A meta-analytic review.

By Judge, Timothy A.; Ilies, Remus
Journal of Applied Psychology. Vol 87(4), Aug 2002, 797-807.

This article provides a meta-analysis of the relationship between the 5-factor model of personality and 3 central theories of performance motivation (goal-setting, expectancy, and self-efficacy motivation). The quantitative review includes 150 correlations from 65 studies. Traits were organized according to the 5-factor model of personality. Results indicated that Neuroticism (average validity=-.31) and Conscientiousness (average validity=.24) were the strongest and most consistent correlates of performance motivation across the 3 theoretical perspectives. Results further indicated that the validity of 3 of the Big Five traits–Neuroticism, Extraversion, and Conscientiousness–generalized across studies. As a set, the Big 5 traits had an average multiple correlation of .49 with the motivational criteria, suggesting that the Big 5 traits are an important source of performance motivation.

 

Temperament and personality in dogs (Canis familiaris): A review and evaluation of past research.

By Jones, Amanda C.; Gosling, Samuel D.
Applied Animal Behaviour Science. Vol 95(1-2), Nov 2005, 1-53.

Spurred by theoretical and applied goals, the study of dog temperament has begun to garner considerable research attention. The researchers studying temperament in dogs come from varied backgrounds, bringing with them diverse perspectives, and publishing in a broad range of journals. This paper reviews and evaluates the disparate work on canine temperament. We begin by summarizing general trends in research on canine temperament. To identify specific patterns, we propose several frameworks for organizing the literature based on the methods of assessment, the breeds examined, the purpose of the studies, the age at which the dogs were tested, the breeding and rearing environment, and the sexual status of the dogs. Next, an expert-sorting study shows that the enormous number of temperament traits examined can be usefully classified into seven broad dimensions. Meta-analyses of the findings pertaining to inter-rater agreement, test-retest reliability, internal consistency, and convergent validity generally support the reliability and validity of canine temperament tests but more studies are needed to support these preliminary findings. Studies examining discriminant validity are needed, as preliminary findings on discriminant validity are mixed. We close by drawing 18 conclusions about the field, identifying the major theoretical and empirical questions that remain to be addressed.

 

Will’s Note: I included this last one because it amused me that searching for “personality” one might find a research review on dog personality—and to keep all this research stuff in perspective.