This article was originally published in Will’s Insight News, my monthly newsletter.

It has been updated and improved to include new information.

Click here if you want to sign up for my newsletter…

Radically Improved Action Planning
Using Cognitive Triggers to Support On-the-Job Performance

Most of us who have been trainers have tried one or more methods of action planning–hoping to get our learners to apply what they’ve learned back on the job. The most common form of action planning goes something like this (at the end of a training program):

“Okay, take a look at this action-planning handout. Think of 3 things from the course you’d like to take away and apply back on the job. This is critically important. If you feel you’ve learned something you’d like to use, you won’t get the results you want if you forget what your goals are. On the handout, you’ll see space to write down your 3 action-planning goals. I’m going to give you 20 minutes to do this because it’s so important!”

Unfortunately, that method is likely to get less than half the follow-through that another–research-based–method may get you!

When we as trainers do action planning, we are recognizing that learning is not enough. We want to make sure that all of our passionate, exhaustive efforts at training are not wasted. If we’re honest with ourselves, we know that if our learners forget everything they’ve learned, then we really haven’t been effective. This goes for e-learning as well. There’s a lot of effort that goes into creating an e-learning course–and, if we can maximize the benefits through effective action planning, then we ought to do it.

[Diagram: the conscious and sub-conscious communication channels of the human mind]

Before sharing my radically improved action-planning method, it’s critical that I motivate it. Look at the diagram above. It shows that the human mind is subject to both conscious and sub-conscious messages. It also shows that the sub-conscious channel uses a broader bandwidth–and that when humans process messages consciously, they often filter those messages in ways that limit their effectiveness.

One of the most important findings from psychological research in the past 10 years–I hate to call it “brain science” because that’s an inaccurate tease–is that much of what controls human thinking comes from or is influenced by sub-conscious primes. Speed limit signs (conscious messages to slow down) are not as effective as narrowing streets, planting trees near streets, and other sub-conscious influencers. Committing to a diet may not be as effective as using smaller dishes, removing snacks from eyesight, and shopping at farmers’ markets instead of in the processed-food aisles of grocery stores.

We workplace professionals tend to use the conscious communication channel almost exclusively–we think it’s our job to compile content, make the best arguments for its usefulness, and share information so that our learners acknowledge its value and plan to use it. But, if a large part of human cognition is sub-conscious, shouldn’t we use that too? Don’t we have a professional responsibility to be as effective as we can?

My action-planning method does just that. It sets triggers that later create spontaneous sub-conscious prompts to action. I’m calling this “Triggered Action Planning”–a reminder that we are TAP-ping into our learners’ sub-conscious processing to help them remember what they’ve learned. SMILE.

The basic concept is this: We want learners, when they are back on the job, to be reminded of what they’ve learned. We should do this by aligning context–one of the Decisive Dozen research-based learning factors–in our training designs. We can do this by using more hands-on exercises, more real work, more simulations–but we can extend this to action planning as well.

The key is to set SITUATION-ACTION triggers. We want contextual situations to trigger certain actions. So for example, if we teach supervisors to bring their direct reports into decision-making, we want them to think about this when they are having team meetings, when they are discussing a decision with one of their direct reports, etc. The SITUATION could be a team meeting. The ACTION could be delegating a decision, asking for input, etc., as appropriate.

In action planning, it’s even simpler. Instead of just asking our learners what their goals are for implementing what they’ve learned, we also ask them to select situations in which they will begin to carry out those goals. For example (a small illustrative sketch follows the example):

  • GOAL: I will work with my team to identify a change initiative.
  • SITUATION-ACTION: At our first staff meeting in October, I will work with my team to identify a change initiative.
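For those building triggered-action-planning worksheets or e-learning interactions, here is a minimal sketch of how such an entry might be modeled. This is purely illustrative: the class, field names, and rendering format are my own assumptions, not a prescribed standard from the research.

    # Illustrative sketch only: models one triggered-action-planning entry.
    # Class and field names are assumptions, not an established format.
    from dataclasses import dataclass

    @dataclass
    class ActionPlanEntry:
        goal: str       # what the learner intends to accomplish
        situation: str  # the contextual cue that should trigger action
        action: str     # the specific behavior to perform

        def situation_action(self) -> str:
            # Render the "SITUATION, I will ACTION" trigger statement.
            return f"{self.situation}, I will {self.action}."

    entry = ActionPlanEntry(
        goal="I will work with my team to identify a change initiative.",
        situation="At our first staff meeting in October",
        action="work with my team to identify a change initiative",
    )
    print(entry.situation_action())
    # -> At our first staff meeting in October, I will work with my
    #    team to identify a change initiative.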

Remarkably, this kind of intervention–what researchers call “implementation intentions”–has been found to produce very large effects, often doubling follow-through on actual performance!

I think this research finding is so important to workplace learning that I’ve devoted a whole section of my unpublished tome to considering how to use it. Instead of using the term “implementation intentions”–it’s such a mouthful–I just call this trigger-setting.

The bottom line here is that we may be able to double the likelihood that our learners actually apply what they’ve learned simply by having our learners link situations and actions in their action planning.

New Job Aid for Triggered Action Planning

You can easily create your own triggered-action planning worksheets or e-learning interactions, but I’ve got one ready to go that you can use as is–FREE OF CHARGE BECAUSE I LOVE TO SHARE–or you can just use it as a starting point for your own triggered-action-planning exercises.

 

Click here to download the triggered-action-planning job aid (as a PDF)

Click here for a Word version (so you can modify)

 

Research:

Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement: A meta-analysis of effects and processes. Advances in Experimental Social Psychology, 38, 69-119.

Bjork, R. A., & Richardson-Klavehn, A. (1989). On the puzzling relationship between environmental context and human memory. In C. Izawa (Ed.) Current Issues in Cognitive Processes: The Tulane Floweree Symposium on Cognition (pp. 313-344). Hillsdale, NJ: Erlbaum.

Roediger, H. L., III, & Guynn, M. J. (1996). Retrieval processes. In E. L. Bjork & R. A. Bjork (Eds.), Memory (pp. 197-236). San Diego, CA: Academic Press.

Smith, S. M., & Vela, E. (2001). Environmental context-dependent memory: A review and meta-analysis. Psychonomic Bulletin & Review, 8, 203-220.

Thalheimer, W. (2013). The decisive dozen: Research review abridged. Available at the Work-Learning Research catalog.

About two years ago, four enterprising learning researchers reviewed the research on training and development and published their findings in a top-tier refereed scientific journal. They did a really nice job!

Unfortunately, the vast majority of professionals in the workplace learning-and-performance field have never read the research review, nor have they even heard of it.

As a guy whose consulting practice is premised on the idea that good learning research can be translated into practical wisdom for instructional designers, trainers, elearning developers, chief learning officers and other learning executives, I have been curious to see to what extent this seminal research review has been utilized by other learning professionals. So, for the last year and a half or so, I’ve been asking the audiences I encounter in my keynotes and other conference presentations whether they have encountered this research review.

Often I use the image below to ask the question:

Click here to see original research article…


What would be your guess as to the percentage of folks in our industry who have read this?

  • 10%
  • 30%
  • 50%
  • 70%
  • 90%

Sadly, in almost all of the audiences I’ve encountered, less than 5% of the learning professionals have read this research review.

Indeed, usually more than 95% of workplace learning professionals have “never heard of it”–even two years after it was published!

THIS IS DEEPLY TROUBLING!

The poor light this casts on our industry’s most influential institutions should be self-evident. And I, too, must take some of the blame for not being more successful in getting these issues heard.

A Review of the Review

People who subscribe to my email newsletter (you can sign up here) were privy to this review many months ago.

I hope the following review will be helpful. And remember: when you’re gathering knowledge to help you do your work, make sure you’re gathering it from sources that are mindful of the scientific research. There is a reason that civilization progresses through its scientific efforts–science provides a structured process for generating and testing insights, a self-improving approach to creating knowledge that maximizes innovation while minimizing bias.

——————————-

Quotes from the Research Review:

“It has long been recognized that traditional, stand-up lectures are an inefficient and unengaging strategy for imparting new knowledge and skills.” (p. 86)

“Training costs across organizations remain relatively constant as training shifts from face-to-face to technology-based methods.” (p. 87)

“Even when trainees master new knowledge and skills in training, a number of contextual factors determine whether that learning is applied back on the job…” (p. 90)

“Transfer is directly related to opportunities to practice—opportunities provided either by the direct supervisor or the organization as a whole.” (p. 90)

“The Kirkpatrick framework has a number of theoretical and practical shortcomings…” (p. 91)

Introduction

I, Will Thalheimer, am a research translator. I study research from peer-reviewed scientific journals on learning, memory, and instruction and attempt to distill whatever practical wisdom might lurk in the dark cacophony of the research catacomb. It’s hard work—and I love it—and the best part is that it gives me some research-based wisdom to share with my consulting clients. It helps me not sound like a know-nothing. Working to bridge the research-practice gap also enables me to talk with trainers, instructional designers, elearning developers, chief learning officers, and other learning executives about their experiences using research-based concepts.

 

It is from this perspective that I have a sad, and perhaps horrifying, story to tell. In 2012, an excellent research review on training was published in a top-tier journal. Unbelievably, most training practitioners have never heard of this research review. I know because when I speak at conferences and chapters in our field, I often ask how many people have read the article. Typically, less than 5% of experienced training practitioners have! Fewer than 1 in 20 people in our field have read a very important review article.

 

What the hell are we doing wrong? Why does everyone know what a MOOC is, but hardly anyone has looked at a key research article?

 

You can access the article by clicking here. You can also read my review of some of the article’s key points as I lay them out below.

 

Is This Research Any Good?

Not all research is created equal. Some is better than others. Some is crap. Too much “research” in the learning-and-performance industry is crap, so it’s important to first acknowledge the quality of this research review.

The research review by Eduardo Salas, Scott Tannenbaum, Kurt Kraiger, and Kimberly Smith-Jentsch was published in November 2012 in the highly regarded peer-reviewed scientific journal Psychological Science in the Public Interest, a publication of the Association for Psychological Science, one of the most respected social-science professional organizations in the world. The review not only surveys individual studies but also utilizes meta-analytic techniques to distill findings across multiple research studies. In short, it’s high-quality research.

 

The rest of this article will highlight key messages from the research review.

 

Training & Development Gets Results!

The research review by Salas, Tannenbaum, Kraiger, and Smith-Jentsch shows that training and development is positively associated with organizational effectiveness. This is especially important in today’s economy because the need for innovation is greater and more accelerated—and innovation comes from the knowledge and creativity of our human resources. As the researchers say, “At the organizational level, companies need employees who are both ready to perform today’s jobs and able to learn and adjust to changing demands. For employees, that involves developing both job-specific and more generalizable skills; for companies, it means taking actions to ensure that employees are motivated to learn.” (p. 77). Companies spend a ton of money every year on training—in the United States the estimate is $135 billion—so it’s first important to know whether this investment produces positive outcomes. The bottom line: Yes, training does produce benefits.

 

To Design Training, It Is Essential to Conduct a Training Needs Analysis

“The first step in any training development effort ought to be a training needs analysis (TNA)—conducting a proper diagnosis of what needs to be trained, for whom, and within what type of organizational system. The outcomes of this step are (a) expected learning outcomes, (b) guidance for training design and delivery, (c) ideas for training evaluation, and (d) information about the organizational factors that will likely facilitate or hinder training effectiveness. It is, however, important to recognize that training is not always the ideal solution to address performance deficiencies, and a well-conducted TNA can also help determine whether a non-training solution is a better alternative.” (p. 80-81) “In sum, TNA is a must. It is the first and probably the most important step toward the design and delivery of any training.” (p. 83) “The research shows that employees are often not able to articulate what training they really need” (p. 81) so just asking them what they need to learn is not usually an effective strategy.

 

Learning Isn’t Always Required—Some Information can be Looked Up When Needed

When doing a training-needs analysis and designing training, it is imperative to separate information that is “need-to-know” from that which is “need-to-access.” Since learners forget easily, it’s better to use training time to teach the need-to-know information and prepare people on how to access the need-to-access information.

 

Do NOT Offer Training if It is NOT Relevant to Trainees

In addition to being an obvious waste of time and resources, training courses that are not specifically relevant to trainees can hurt motivation for training in general. “Organizations are advised, when possible, to not only select employees who are likely to be motivated to learn when training is provided but to foster high motivation to learn by supporting training and offering valuable training programs.” (p. 79) This suggests that every one of the courses on our LMS should have relevance and value.

 

It’s about Training Transfer—Not Just about Learning!

“Transfer refers to the extent to which learning during training is subsequently applied on the job or affects later job performance.” (p. 77) “Transfer is critical because without it, an organization is less likely to receive any tangible benefits from its training investments.” (p. 77-78) To ensure transfer, we have to utilize proven scientific research-based principles in our instructional designs. Relying on our intuitions is not enough—because they may steer us wrong.

 

We must go Beyond Training!

“What happens in training is not the only thing that matters—a focus on what happens before and after training can be as important. Steps should be taken to ensure that trainees perceive support from the organization, are motivated to learn the material, and anticipate the opportunity to use their skills once on (or back on) the job.” (p. 79)

 

Training can be Designed for Individuals or for Teams

“Today, training is not limited to building individual skills—training can be used to improve teams as well.” (p. 79)

 

Management and Leadership Training Works

“Research evidence suggests that management and leadership development efforts work.” (p. 80) “Management and leadership development typically incorporate a variety of both formal and informal learning activities, including traditional training, one-on-one mentoring, coaching, action learning, and feedback.” (p. 80)

 

Forgetting Must Be Minimized, Remembering Must Be Supported

One meta-analysis found that one year after training, “trainees [had] lost over 90% of what they learned.” (p. 84) “It helps to schedule training close in time to when trainees will be able to apply what they have learned so that continued use of the trained skill will help avert skill atrophy. In other words, trainees need the chance to ‘use it before they lose it.’ Similarly, when skill decay is inevitable (e.g., for infrequently utilized skills or knowledge) it can help to schedule refresher training.” (p. 84)

 

Common Mistakes in Training Design Should Be Avoided

“Recent reports suggest that information and demonstrations (i.e., workbooks, lectures, and videos) remain the strategies of choice in industry. And this is a problem [because] we know from the body of research that learning occurs through the practice and feedback components.” (p. 86) “It has long been recognized that traditional, stand-up lectures are an inefficient and unengaging strategy for imparting new knowledge and skills.” (p. 86) Researchers have “noted that trainee errors are typically avoided in training, but because errors often occur on the job, there is value in training people to cope with errors both strategically and on an emotional level.” (p. 86) “Unfortunately, systematic training needs analysis, including task analysis, is often skipped or replaced by rudimentary questions.” (p. 81)

 

Effective Training Requires At Least Four Components

“We suggest incorporating four concepts into training: information, demonstration, practice, and feedback.” (p. 86) Information must be presented clearly and in a way that enables the learners to fully understand the concepts and skills being taught. Skill demonstrations should be clear enough to enable comprehension. Realistic practice should be provided to enable full comprehension and long-term remembering. Providing feedback after decision-making and skill practice should be used to correct misconceptions and improve the potency of later practice efforts.

The bottom line is that more realistic practice is needed. Indeed, the most effective training utilizes relatively more practice and feedback than is typically provided. “The demonstration component is most effective when both positive and negative models are shown rather than positive models only.” (p. 87)

Will’s Note: While these four concepts are extremely valuable, personally I think they are insufficient. See my research review on the Decisive Dozen for my alternative.

 

E-Learning Can Be Effective, But It May Not Lower the Cost of Training

“Both traditional forms of training and technology-based training can work, but both can fail as well.” (p. 87) While the common wisdom argues that e-learning is less costly, recent “survey data suggest that training costs across organizations remain relatively constant as training shifts from face-to-face to technology-based methods.” (p. 87) This doesn’t mean that e-learning can’t offer a cost savings, but it does mean that most organizations so far haven’t realized cost savings. “Well-designed technology-based training can be quite effective, but not all training needs are best addressed with that approach. Thus, we advise that organizations use technology-based training wisely—choose the right media and incorporate effective instructional design principles.” (p. 87)

 

Well-Designed Simulations Provide Potent Learning and Practice

“When properly constructed, simulations and games enable exploration and experimentation in realistic scenarios. Properly constructed simulations also incorporate a number of other research-supported learning aids, in particular practice, scaffolding or context-sensitive support, and feedback. Well-designed simulation enhances learning, improves performance, and helps minimize errors; it is also particularly valuable when training dangerous tasks.” (p. 88)

 

To Get On-the-Job Improvement, Training Requires After-Training Support

“The extent to which trainees perceive the posttraining environment (including the supervisor) as supportive of the skills covered in training had a significant effect on whether those skills are practiced and maintained.” (p. 88) “Even when trainees master new knowledge and skills in training, a number of contextual factors determine whether that learning is applied back on the job: opportunities to perform; social, peer, and supervisory support; and organizational policies.” (p. 90) A trainee’s supervisor is particularly important in this regard. As repeated from above, researchers have “discovered that transfer is directly related to opportunities to practice—opportunities provided either by the direct supervisor or the organization as a whole.” (p. 90)

 

On-the-Job Learning can be Leveraged with Coaching and Support

“Learning on the job is more complex than just following someone or seeing what one does. The experience has to be guided. Researchers reported that team leaders are a key to learning on the job. These leaders can greatly influence performance and retention. In fact, we know that leaders can be trained to be better coaches…Organizations should therefore provide tools, training, and support to help team leaders to coach employees and use work assignments to reinforce training and to enable trainees to continue their development.” (p. 90)

 

Trainees’ Supervisors Can Make or Break Training Success

Researchers have “found that one misdirected comment by a team leader can wipe out the full effects of a training program.” (p. 83) “What organizations ought to do is provide leaders with information they need to (a) guide trainees to the right training, (b) clarify trainees’ expectations, (c) prepare trainees, and (d) reinforce learning…” (p. 83) Supervisors can increase trainees’ motivation to engage in the learning process. (p. 85) “After trainees have completed training, supervisors should be positive about training, remove obstacles, and ensure ample opportunity for trainees to apply what they have learned and receive feedback.” (p. 90) “Transfer is directly related to opportunities to practice—opportunities provided either by the direct supervisor or the organization.” (p. 90)

 

Will’s Note: I’m a big believer in the power of supervisors to enable learning. I’ll be speaking on this in an upcoming ASTD webinar.

 

Basing Our Evaluations on the Kirkpatrick 4 Levels is Insufficient!

“Historically, organizations and training researchers have relied on Kirkpatrick’s [4-Level] hierarchy as a framework for evaluating training programs…[Unfortunately,] The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders… Although the Kirkpatrick hierarchy has clear limitations, using it for training evaluation does allow organizations to compare their efforts to those of others in the same industry.” (p. 91) The authors’ recommendations for improving training evaluation fit into two categories. First, instead of only using the Kirkpatrick framework, “organizations should begin training evaluation efforts by clearly specifying one or more purposes for the evaluation and should then link all subsequent decisions of what and how to measure to the stated purposes.” (p. 91) Second, the authors recommend that training evaluations should “use precise affective, cognitive, and/or behavioral measures that reflect the intended learning outcomes.” (p. 91)

 

This is a devastating critique that should give us all pause. Of course, it is not the first such critique, nor, I’m afraid, will it be the last. The worst part about the Kirkpatrick model is that it controls the way we think about learning measurement. It doesn’t allow us to see alternatives.

 

Leadership is Needed for Successful Training and Development

“Human resources executives, learning officers, and business leaders can influence the effectiveness of training in their organizations and the extent to which their company’s investments in training produce desired results. Collectively, the decisions these leaders make and the signals they send about training can either facilitate or hinder training effectiveness…Training is best viewed as an investment in an organization’s human capital, rather than as a cost of doing business. Underinvesting can leave an organization at a competitive disadvantage. But the adjectives “informed” and “active” are the key to good investing. When we use the word “informed,” we mean being knowledgeable enough about training research and science to make educated decisions. Without such knowledge, it is easy to fall prey to what looks and sounds cool—the latest training fad or technology.”  (p. 92)

Thank you!

I’d like to thank all my clients over the years for hiring me as a consultant, learning auditor, workshop provider, and speaker–and thus enabling me to continue in the critical work of translating research into practical recommendations.

If you think I might be able to help your organization, please feel free to contact me directly by emailing me at “info at worklearning dot com” or calling me at 617-718-0767.

 

As of today, the Learning Styles Challenge payout is rising from $1,000 to $5,000! That is, if any person or group creates a real-world learning intervention that takes learning styles into account–and proves that such an intervention produces better learning results than a non-learning-styles intervention–they’ll be awarded $5,000!

Special thanks to the new set of underwriters, each willing to put $1000 in jeopardy to help get the word out to the field:

Learning Styles Challenge Rules

We’re still using the original rules, as established back in 2006. Read them here.


What is Implied in This Debunking

The basic finding in the research is that learning interventions that take into account learning styles do no better than learning interventions that do not take learning styles into account. This does not mean that people do not have differences in the way they learn. It just means that designing with learning styles in mind is unlikely to produce benefits–and thus the extra costs are not likely to be a good investment.

Interestingly, there are learning differences that do matter! For example, if we really want to get benefits from individual differences, we should consider the knowledge and skill level of our learners.


What Can You Do to Spread the Word

Thanks to multiple efforts by many people over the years to lessen the irrational exuberance of the learning-styles proliferators, fewer and fewer folks in the learning field are falling prey to the learning-styles myth. But the work is not done yet. This issue still needs your help!

Here are some ideas for how you can help:

  • Spread the word through social media! Blogs, Twitter, LinkedIn, Facebook!
  • Share this information with your work colleagues, fellow students, etc.
  • Gently challenge those who proselytize learning styles.
  • Share the research cited below.


History of the Learning Styles Challenge

It has been exactly eight years since I wrote in a blog post:

I will give $1000 (US dollars) to the first person or group who can prove that taking learning styles into account in designing instruction can produce meaningful learning benefits.

Eight years is a long time. Since that time, over one billion babies have been born, 72 billion metric tons of carbon pollution have been produced, and the U.S. Congress has completely stopped functioning.

However, not once in these past eight years has any person or group collected on the Learning Styles challenge. Not once!


Research on Learning Styles

However, since 2006, more and more people have discovered that learning styles are unlikely to be an effective way to design instruction.

First, there was the stunning research review in the top-tier scientific journal, Psychological Science in the Public Interest:

Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.

The authors wrote the following:

We conclude therefore, that at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number. However, given the lack of methodologically sound studies of learning styles, it would be an error to conclude that all possible versions of learning styles have been tested and found wanting; many have simply not been tested at all. (p. 105)

To read more about what they wrote, click here.

Two years later, two of the authors reiterated their findings in a separate–and nicely written–article for the Association for the Study of Medical Education. You can access that article at: http://uweb.cas.usf.edu/~drohrer/pdfs/Rohrer&Pashler2012MedEd.pdf. Here’s the research citation:

Rohrer, D., & Pashler, H. (2012). Learning styles: Where’s the evidence? Medical Education, 46(7), 634-635.

A researcher who had once advocated for learning styles did an about-face after conducting some additional research:

Cook, D. A. (2012). Revisiting cognitive and learning styles in computer-assisted instruction: Not so useful after all. Academic Medicine, 87(6), 778-784.

Of course, not everyone is willing to give up on learning styles. For example, Furnham (2012) wrote:

The application of, and research into, learning styles and approaches is clearly alive and well. (p. 77)

Furnham, A. (2012). Learning styles and approaches to learning. In K. R. Harris, S. Graham, T. Urdan, S. Graham, J. M. Royer, & M. Zeidner (Eds.), APA educational psychology handbook, Vol. 2: Individual differences and cultural and contextual factors (pp. 59-81). Washington, DC: American Psychological Association. doi:10.1037/13274-003

A cursory look through the PsycINFO database today shows that articles on learning styles are still being published in scientific journals.


Learning Styles in the Workplace Learning Field

Guy Wallace, performance analyst and instructional architect, has been doing a great job keeping the workplace learning field up on the learning-styles debate. Check out his article in eLearn Magazine and his blog post update.

You’ll note from Guy’s blog post that many prominent thought leaders in the field have been suspicious of learning styles for many years.


In continuing my dual career–partly as a charismatic and wildly effective learning consultant and keynote speaker, partly as a grizzled, hermit-like researcher and agonizingly slow book writer (still working on the same book for the last 16 years)–I have started talking publicly about my list of 12 factors that, if training developers implemented them, would propel their training to divine glory. I call this list “The Decisive Dozen.”

Click here to read a short introduction to the Decisive Dozen

Of course, I get questions. “How does he do it when so many others have perished?” “How does he get his hair to do that?” And my personal favorite, “How the hell does Thalheimer know these are the most critical learning factors?” It’s a fair question, and instead of releasing 12 chapters from my potentially-forthcoming hopefully-not-posthumorous book, I created a little research brief just to show that I’m not making this stuff up. I’m stealing directly from the world’s best learning researchers!

Click to download the Decisive Dozen Research Review

Thanks for your interest in my work!

Clark Quinn (blog, website, Twitter) recently cited some of my thinking about instructional objectives in the instructional technology forum of AECT (ITFORUM). I wrote a long email to Clark in response, thanking him, and going into more detail. I am reprising my response to Clark here:

In a recent post to this list, Clark Quinn rightly notes that objectives for learners and objectives for instructional designers need not be identical. Indeed, as both Clark and I have previously noted, they probably shouldn’t be identical.

Here’s the thinking: Objectives are designed to guide behavior. So how can identically-worded objectives adequately guide the behavior of two disparate groups of individuals (learners and instructional designers)? It just doesn’t make any sense!

And indeed, Hamilton (1985) found that presenting learners with learning objectives in the way Mager suggested PRODUCES NO BENEFITS AND MAY BE HARMFUL. Here’s what Hamilton wrote:

“[An instructional] objective that generally identifies the information to be learned in the text will produce robust effects. Including other information (per Mager’s, 1962, definition) will not significantly help and it may hinder the effects of the objectives”

(Hamilton, 1985, p. 78).

Objectives are not only designed to change behavior for a particular set of individuals, but they are also designed with particular purposes in mind—or they should be.

So, when we talk of instructional objectives, we also need to think about what purpose we have for them.

The quote above from Hamilton is focused on how well learning objectives focus the attention of learners. Interestingly, this is the only area in which extensive research has been done on learning objectives. You might be surprised to know that learning objectives help learners focus on the information they target, but actually diminish learners’ attention to information in the learning materials that they do not target. For example, in two experiments using specific objectives, Rothkopf and Billington (1979) found that when focusing objectives were provided to learners, performance on material related to the objectives improved by 49% and 47% over situations when focusing objectives were not used. However, the material not related to the learning objectives was learned 39% and 33% WORSE than it would have been if no learning objectives were used!

These types of instructional objectives—presented to learners prior to subsequent learning—I call “focusing objectives” because they are designed for the purpose of focusing learner attention on critical learning material. As the Hamilton (1985) review pointed out, it does NOT help to add Mager’s criterion information to focusing objectives, because it doesn’t help learners focus on the critical material.

NOW, here’s an important point (I say to focus your attention): We don’t necessarily need to use focusing objectives with learners if we have other means to focus their attention! We can use a relevant, gripping story. We can do a shout-out (for example, “Here’s an important point…”). We can have them attempt to answer a relevant scenario-based question and struggle with it. Etcetera.

Here’s another important point: Focusing objectives are only one type of objective we might want to utilize. I have a whole list, and I’m sure you can think of more of them. (A small illustrative sketch follows the two lists below.)

Instructional Objectives for Learners:

  1. Table-of-Contents Objective
    To give learners a big-picture sense of what will be taught.
  2. Performance Objective
    To let learners know what performance will be expected of them.
  3. Motivation Objective
    To ensure learners know why they might be motivated to engage in the learning or in applying it.
  4. Focusing Objective
    To guide learner attention to the most critical information in the learning material.

Instructional Objectives for Developers:

  1. Instructional-Design Objective
    To guide developers toward the ultimate goal of the learning intervention.
  2. Evaluation Objective
    To guide developers (and other stakeholders) to the measurable outcomes by which the learning intervention will be evaluated.
  3. Situation Objective
    To guide developers to the situations that learners must be prepared for.
  4. Organization Objective
    To guide developers to the organizational effects targeted by the instruction.
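To make the audience distinction concrete, here is a small illustrative sketch that catalogs these objective types by audience and purpose, so that, for example, only the learner-facing objectives get presented to learners. The data structure and function are my own assumptions for illustration, not an established instructional-design API.

    # Illustrative sketch only: a catalog of objective types by audience.
    OBJECTIVE_TYPES = {
        "table-of-contents":    ("learner",   "big-picture sense of what will be taught"),
        "performance":          ("learner",   "what performance is expected"),
        "motivation":           ("learner",   "why to engage the learning"),
        "focusing":             ("learner",   "attention to critical information"),
        "instructional-design": ("developer", "ultimate goal of the intervention"),
        "evaluation":           ("developer", "measurable outcomes to assess"),
        "situation":            ("developer", "situations learners must be prepared for"),
        "organization":         ("developer", "targeted organizational effects"),
    }

    def objectives_for(audience):
        # Return only the objective types meant for the given audience.
        return [name for name, (aud, _) in OBJECTIVE_TYPES.items()
                if aud == audience]

    print(objectives_for("learner"))
    # -> ['table-of-contents', 'performance', 'motivation', 'focusing']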

 

Questions:

So, here are some questions for you:

Is it okay to use the word understand in an “instructional-design objective”?

How about in a “focusing objective”?

Answer: It’s okay to use the word understand in a focusing objective—because it still sets learners up to focus attention on critical concepts. But it is NOT okay to use the word understand in an instructional-design objective—because the word “understand” doesn’t have enough specificity to guide instructional design.

My point in asking these questions is to show that over-simplistic notions about instructional objectives are likely to be harmful to your instructional designs.

As usual, the research helps us see things we wouldn’t otherwise have seen.

Hope this helps!!

= Will

 

Will’s Note:

References

Hamilton, R. J. (1985). A framework for the evaluation of the effectiveness of adjunct questions and objectives. Review of Educational Research, 55, 47-85.

Mager, R. (1962). Preparing Instructional Objectives. Palo Alto, CA: Fearon Publishers.

Rothkopf, E. Z., & Billington, M. J. (1979). Goal-guided learning from text: Inferring a descriptive processing model from inspection times and eye movements. Journal of Educational Psychology, 71(3), 310-327.

I set out over 15 years ago to come up with a short list of the most important learning factors based on scientific research and practical real-world wisdom. I felt at the time, and I believe even more strongly now, that the learning field–particularly the workplace learning-and-performance field–is too strongly tempted to jump from one learning fad to another, while ignoring the learning factors that are most important.

My original goal was to create a list that had no more than seven learning factors. I did extensive reviews of a wide swath of the research on learning, memory, and instruction, often doing comparative effect-size analyses to determine which factors were most important. As the years went by and blurred into a second decade of work, I looked at the research more and more from a practical perspective, hoping to find the factors that were not just the most potent, but also the most leverageable by real-world instructional designers, trainers, teachers, e-learning developers, etc. In the end, I failed to find only seven factors, but found 12 that seem extraordinarily potent and leverageable.

Obviously, to pick the most important learning factors is a difficult endeavor–and one subject to a significant degree of human judgment, of which some of mine is surely faulty. Still, all along the way, I have not lost my strong belief that coming up with a short list of learning factors based on the world’s best scientific research from peer-reviewed refereed journals would be extremely helpful in keeping us focused on the factors that matter the most. At a minimum, I feel the list I have created will help most of us create remarkably more effective learning interventions.

In the current version of my book draft, I say the following:

If you put all 12 of these factors into practice, your learning interventions are likely to be more effective than 95% of all workplace learning interventions currently being utilized!

This is quite a bold statement, I know. But I’m very comfortable in making it. In the book I provide a detailed footnote explaining the evidence behind the statement, and maybe an editor will convince me not to be so bold (maybe push me down to 93%, for example–SMILE), but I really do think that most learning interventions in the workplace learning field are lacking significant effectiveness.

Also in the book, each of the 12 factors has its own chapter, often dozens of pages long, backed up by dozens of research studies, and dozens of practical implications and recommendations. Dozens to the max! Obviously, the short descriptions in the synopsis below cannot even approach doing these topics justice. Still, they may provide you with a good framework to enable you to begin to see your learning designs in a different light.

The Decisive Dozen

Because I have shared the Decisive Dozen with clients and in keynote addresses and conference speeches for the last year or so, I decided that it was time to make the list officially public.

Click here to download a brief synopsis of the Decisive Dozen

I welcome your comments and feedback.

And, of course, for those who can’t wait for the book (I can’t blame you, I’m taking a long time, aren’t I?), I would be delighted to discuss with you how the Decisive Dozen might be helpful in guiding your organization to greater learning effectiveness.

Update: Now you can check out a research review supporting the Decisive Dozen.

Click here to get access to the free research review

Despite decades of advocacy by our best trade associations, our wisest gurus, and our most practical researchers, most organizations today still rely on training courses that have little impact in promoting on-the-job performance.

As I mentioned in a recent article, we as learning professionals continue to fail in five major ways. You can access that article by clicking here.

I used to think that this was just a failure of knowledge, but in most of the organizations in which I’ve consulted, there are at least a few learning-and-performance professionals who understand that training alone is not enough. Part of the problem is the dead weight of tradition–the “old normal” continues to blind us to new possibilities. The enlightened few have a hard time pushing back against the gravitational pull of this mass hypnosis.

I recently had a new insight–a way of looking at this problem that I think might enable organizations to break out of their bad habits. The solution is that we have to gain control of the leverage points we have to push for change. We have to change the levers that warp and control our thinking. The big lever is learning measurement. I’ve been pushing this for years as our most important leverage point. If we measured better, we’d get better feedback, which would push us to create better learning interventions.

But learning measurement isn’t our only lever, and changing your learning measurement practices is not always easy politically. Besides learning measurement, I’ve compiled a whole list of other leverage points that really matter. In fact, it was only recently that I had this incredible insight (one I maybe should have had 10 years ago): we ought to figure out all the levers we have at our disposal and change them to help push our organizations toward a performance orientation. I’d like to reveal one of those levers today.

One of the things we do in our organizations is review our training courses from time to time–either intentionally or by osmosis and feeling. Well, instead of using the wrong metrics, why not use methods that we know–based on our understanding of learning and performance–are likely to be good indicators of whether a training course will support actual on-the-job performance?

The Course Review Template is something that can be used on any training course–classroom training or e-learning. It includes a set of questions that are indicators of how performance-based your training course is. Each rubric in this tool is inspired by research or proven practices that I’ve learned in my 25+ years in the workplace learning field.
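To give a feel for how a rubric-based review like this might be tallied, here is a rough sketch. The rubric items, weights, and ratings below are invented for the example; the actual Course Review Template defines its own questions and point values.

    # Invented rubrics and weights, for illustration only; the real
    # Course Review Template defines its own items and point values.
    rubrics = {
        "Uses realistic, scenario-based practice": 30,
        "Aligns practice with on-the-job situations": 25,
        "Provides feedback on practice attempts": 20,
        "Supports remembering (e.g., spaced repetitions)": 15,
        "Plans for after-training follow-through": 10,
    }

    # A reviewer rates each rubric from 0.0 (absent) to 1.0 (fully present).
    ratings = {
        "Uses realistic, scenario-based practice": 0.2,
        "Aligns practice with on-the-job situations": 0.6,
        "Provides feedback on practice attempts": 0.4,
        "Supports remembering (e.g., spaced repetitions)": 0.0,
        "Plans for after-training follow-through": 0.0,
    }

    score = sum(weight * ratings[name] for name, weight in rubrics.items())
    print(f"Performance-focus score: {score:.0f} / 100")
    # -> Performance-focus score: 29 / 100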

I should give you a warning. You’re unlikely to be happy with what you find. If I bet each of you one dollar for each training course of yours that doesn’t support performance, I’d be a millionaire overnight.

But to be fair, I’m going to let you try out the tool yourself. It’s free. Use it. And, let me know how your training courses rate. Are they likely to improve on-the-job performance or not?

Click to Download the Course Review Template

After you review a course, post your results at the following link, and when we get enough responses, we’ll let you compare your results to others.

Click to Post Your Course Review Results — SORRY, we’re done collecting data in a survey format.

Maybe I’m having a momentary bout of delusional cognition, but I’m thinking right now that this simple Course Review Template might just revolutionize our ability to simply review our courses to see how performance focused they are.

Such a grandiose statement will provoke eye rolls in some, so let me stipulate a few things. First, this is a first draft, so the Course Review Template is eminently improvable. Second, the Course Review Template is NOT a precision instrument. It is not psychometrically derived, the numbers it assigns to each rubric are best guesses, and there was no super-committee here–just me. Third, the rubrics themselves are subject to interpretation. Instead of over-complicating the form and making it unusable, I decided to keep it simple and make it less precise. Finally, course reviews are just one of the levers you’ll need to completely transition from a course focus to a performance focus.

The bottom line is that we have to try some innovative new things to push our organizations to a performance focus. The old ways have not worked. The Course Review Template–or something like it–is worth a try. And seriously, I think it could revolutionize the way your organization views its training courses.

NOTE 2017: While this is the original blog post, it now includes the latest version of the Course Review Template. A later post that introduced the improvements is available here.

 

This blog post is excerpted from the full report, How Much Do People Forget? Click here to download the full report. You may also access the report—and many other reports—by going to my catalog page by clicking here.

Everybody Wants to Know—How Much Do People Forget?

For years, people have been asking me, “How much do people forget?” and I’ve told them, “It depends.” When I make this statement, most people scowl at me and walk away frustrated and unrequited. I also suspect that some of them think less of me—perhaps that I am just hiding my ignorance.

But I try. I try to explain the complexity of human learning. I explain that forgetting depends on many things, for example:

  • The type of material that is being learned
  • The learners’ prior knowledge
  • The learners’ motivation to learn
  • The power of the learning methods used
  • The contextual cues in the learning and remembering situations
  • The amount of time the learning has to be retained
  • The difficulty of the retention test
  • Etc.

More meaningful materials (like stories) tend to be easier to remember than less meaningful material (like nonsense syllables). More relevant concepts tend to be easier to remember than less relevant concepts. Learners who have more prior knowledge in a topic area are likely to be better able to remember new concepts learned in that area. More motivated learners are more likely to remember than less motivated learners. Learners who receive repetitions, retrieval practice, feedback, variety (and other potent learning methods) are more likely to remember than learners who do not receive such learning supports. Learners who are provided with learning and practice in the situations where they will be asked to remember the information will be better able to remember. Learners who are asked to retrieve information shortly after learning it will retrieve more than learners who are asked to retrieve information a long time after learning it.

I try to explain all this, but still people keep asking.

And then there are the statistics I keep hearing—that are passed around the learning field from person to person through the years as if they were immutable truths carved by Old Moses Ebbinghaus on granite stones. Here is some information so cited (as of December 2010):

  • People forget 40% of what they learned in 20 minutes and 77% of what they learned in six days (http://www.festo-didactic.co.uk/gb-en/news/forgetting-curve-its-up-to-you.htm?fbid=Z2IuZW4uNTUwLjE3LjE2LjM0Mzc).
  • People forget 90% after one month. (http://www.reneevations.com/management/ebbinghaus-curve/)
  • People forget 50-80% of what they’ve learned after one day and 97-98% after a month. (http://www.adm.uwaterloo.ca/infocs/study/curve.html)

Never mind that these immutable truths conflict with each other.

So, I will try one more time to convince the world that forgetting depends.

To accomplish this, I explored 14 research articles, examining 69 conditions (representing over 1,000 learners) to see how much forgetting occurred.

The following graph details the amount of forgetting for each of the 69 conditions:

[Graph: amount of forgetting for each of the 69 conditions]

Conclusions

This graph and the in-depth analysis in the full article reveal four critical concepts in human learning—truths that every learning professional should deeply understand.

  1. The amount a learner will forget varies depending on many things. We as learning professionals will be more effective if we make decisions based on a deep understanding of how to minimize forgetting and enhance remembering.
  2. Rules-of-thumb that show people forgetting at some pre-defined rate are just plain false. In other words, learning gurus and earnest bloggers are wrong when they make blanket statements like, “People will forget 40% of what they learned within a day of learning it.”
  3. Learning interventions can produce profound improvements in long-term remembering. In other words, learning gurus are wrong when they say that training is not effective.
  4. Different learning methods produce widely different amounts of forgetting. We as learning professionals can be more effective if we take a research-based approach and utilize those learning methods that are most effective.

Telling Findings From the Research

  1. People in the reviewed experiments forgot from 0% to 94% of what they had learned. The bottom line is that forgetting varies widely.
  2. Even within a restricted time range, learners forgot at wildly differing rates. For example, in the 1-2 day range, learners forgot from 0% to 73%. Learners in the 2-8 year range forgot from 16% to 94%. The obvious conclusion here is that forgetting varies widely (and wildly) and cannot be predetermined (except perhaps by deities, of whom, I think, we have not even a few in the learning field). To be specific, when we hear statements like, “People will forget 60% of what they learned within 7 days,” we should ignore such advice and instead reflect on our own superiority and good looks until we are decidedly pleased with ourselves.
  3. Even when we looked at only one type of learning material, forgetting varied widely. For example, in Bahrick’s classic 1979 experiment where learners were learning English-Spanish word pairs, learners forgot from 12% to 63%. Even more remarkably, if we include those cases where learners actually remembered more on the second test than the first test, learners’ “forgetting” varied from -41% to 63%, a swing of 104 percentage points! Again, we must conclude that forgetting varies widely.
  4. Many of the experiments reviewed in this report showed clearly that learning methods matter. For example, in the Bahrick 1979 study, the best learning methods produced an average forgetting score of -29%, whereas the worst learning methods produced forgetting at 47%, a swing of 76 percentage points. In Runquist’s 1983 study, the best learning method produced average forgetting of 34%, whereas all the other learning methods produced average forgetting of 78%. In Allen, Mahler, and Estes’ 1969 experiment, the learners given the best learning methods forgot an average of 2.3%, whereas the learners who got middling learning methods forgot an average of 14.3%, and learners given the worst learning methods forgot approximately 21.7%. The bottom line is that the learning methods we choose make all the difference! (A quick sketch of the forgetting arithmetic follows this list.)
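For readers who want to see the arithmetic behind numbers like these, here is a quick sketch of how a forgetting percentage can be computed from test scores. The scores below are hypothetical; the full report describes the actual calculation used for the 69 conditions.

    # Hypothetical scores for illustration; see the full report for the
    # actual calculation used across the 69 conditions.
    def forgetting_pct(initial_score, delayed_score):
        # Percent of initially-demonstrated learning lost by the delayed test.
        # Negative values mean learners scored HIGHER on the delayed test.
        return (initial_score - delayed_score) / initial_score * 100

    print(forgetting_pct(80, 30))  # -> 62.5  (substantial forgetting)
    print(forgetting_pct(50, 70))  # -> -40.0 (remembered more later)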

Check out the full report to learn more about the following:

  • What you should do as a learning professional (in light of these findings).
  • Whether the learning-curve notion still applies.
  • What wisdom each of the 14 research articles revealed.
  • The methodology used in the research.
  • The calculation of forgetting.

 

Many of us are inclined to see audience response systems only as a way to deliver multiple-choice and true-false questions. While this may be true in a literal sense, such a restricted conception can divert us from myriad possibilities for deep and meaningful learning in our classrooms.

The following list of 39 question types and methods is provided to show the breadth of possibilities. It is distilled from 85 pages of detailed recommendations in the white paper, Questioning Strategies for Audience Response Systems: How to Use Questions to Maximize Learning, Engagement, and Satisfaction, available free by clicking here.

NOTE from Will Thalheimer (2017): The report is focused on audience-response systems — and I must admit that it is a bit dated now in terms of the technology — but the question types are still a very potent list.

1. Graded Questions to Encourage Attendance

Questions can be used to encourage attendance, but there are dangers that must be avoided.

2. Graded Questions to Encourage Homework and Preparation

Questions can be used to encourage learners to spend time learning prior to classroom sessions, but there are dangers that must be avoided.

3. Avoiding the Use of One Correct Answer (When Appropriate)

Questions that don’t fulfill a narrow assessment purpose need not have right answers. Pecking for a correct answer does not always produce the most beneficial mathemagenic (learning-creating) cognitive processing. We can give partial credit. We can have two answers be equally acceptable. We can let the learners decide on their own.

4. Prequestions that Activate Prior Knowledge

Questions can be used to help learners connect their new knowledge to what they’ve already learned, making it more memorable. For example, a cooking teacher could ask a question about making yogurt before introducing a topic on making cheese, prompting learners to activate their knowledge about using yogurt cultures before they begin talking about how to culture cheese. A poetry teacher could ask a question about patriotic symbolism, before talking about the use of symbols in modern American poetry.

5. Prequestions that Surface Misconceptions

Learners bring naïve understandings to the classroom. One of the best ways to deal with misconceptions is to bring them to the surface so that they can be confronted head-on. The Socratic Method is a prime example of this. Socrates asks a series of prequestions, thereby unearthing misconceptions and leading to a new, improved understanding.

6. Prequestions to Focus Attention

Our learners’ attention wanders. In an hour-long session, sometimes they’ll be riveted to the learning discussion, sometimes they’ll be thinking of other ideas that have been triggered, and sometimes they’ll be off in a daze. Prequestions (just like well-written learning objectives) can be used to help learners pay attention to the most important subsequent learning material. In fact, in one famous study, Rothkopf and Billington (1979) presented learners with learning objectives before they encountered the learning material. They then measured learning and eye movements and found that learners actually paid more attention to aspects of the learning material targeted by the learning objectives. Prequestions work the same way as learning objectives—they focus attention.

7. Postquestions to Provide Retrieval Practice

Postquestions—questions that come after the learning content has been introduced—can be used to reinforce what has been learned and to minimize forgetting. This is a very basic process. By giving learners practice in retrieving information from memory, we increase the probability that they’ll be able to do this in the future. Retrieval practice makes perfect.

8. Postquestions to Enable Feedback

Feedback is essential for learners and instructors. Corrective feedback is critical, especially when learners have misunderstandings. Providing retrieval practice with corrective feedback is especially important when learners are struggling with newly encountered or difficult material, and when their attention is likely to wander—for example, when they’re tired after a long day of training, when there are excessive distractions, or when the previous material has induced boredom.

9. Postquestions to Surface Misconceptions

We already talked about using prequestions to surface misconceptions. We can also use postquestions to surface misconceptions. Learners don’t always understand concepts after only one presentation of the material. Many an instructor has been surprised after delivering a “brilliant” exposition to find that most of their learners just didn’t get it.

10. Questions Prompting Analysis of Things Presented in Classroom

One of the great benefits of classroom learning is that it enables instructors to present learners with all manner of things. In addition to verbal utterances and marks on a whiteboard, instructors can introduce demonstrations, videos, maps, photographs, illustrations, learner performances, role plays, diagrams, screenshots, computer animations, and so on. While these presentations can support learning just by being observed, questions about what has been seen can prompt a different focus and a deeper understanding.

11. Using Rubric Questions to Help Learners Analyze

In common parlance, the term “rubric” connotes a set of standards. Rubrics can be utilized in asking learners questions about what they experience in the classroom. Rubric questions, if they are well designed, can give learners practice in evaluating situations, activities, and events. Such practice is an awesome way to engage learners and prepare them for critical thinking in similar future situations. In addition, if rubrics are continually emphasized, learners will integrate that wisdom into their own planning and decision-making.

12. Questions to Debrief an In-Class Experience

Classrooms can also be used to provide learners with experiences in which they themselves participate. Learners can be asked to take part in role plays, simulations, case studies, and other exercises. It’s usually beneficial to debrief those exercises, and questions can be an excellent way to drive those discussions.

13. Questions to Surface Affective Responses

Not all learning is focused on the cold, steely arithmetic of increasing the inventory of knowledge. Learners can also experience deep emotional responses, many of which are relevant to the learning itself. In topics dealing with oppression, slavery, brutality, war, leadership, glory, and honor, learners aren’t getting the full measure of learning unless they experience emotion in some way. Learners can be encouraged to explore their affective responses by asking them questions.

14. Scenario-Based Decision-Making Questions

Scenario-based questions present learners with scenarios and then ask them to make a decision about what to do. These scenarios can take many forms. They can consist of short descriptive paragraphs or involved case studies. They can be presented in a text-only format or augmented with graphics or multimedia. They can put the learner in the protagonist’s role (“What are you going to do?”) or ask the learner to make a decision for someone else (“What should Dorothy do?”). The questions can be presented in a number of formats—as multiple-choice, true-false, check-all-that-apply, or open-ended queries.

15. Don’t Show Answer Right Away

There’s no rule that you have to show learners the correct response right after they answer the question. Such a reflexive behaviorist scheme can subvert deeper learning. Instructors have had great success in withholding feedback. For example, Harvard professor Eric Mazur’s (1997) Peer Instruction method requires learners to make an individual decision and then try to convince a peer of that decision—all before the instructor weighs in with the answer.

By withholding feedback, learners are encouraged to take some responsibility for their own beliefs and their own learning. Discussions with others further deepen the learning. Simply by withholding the answer, instructors can encourage strategic metacognitive processing, thereby sending learners the not-so-subtle message that it is they—the learners—who must take responsibility for learning.

16. Dropping Answer Choices

There are several reasons to drop answer choices after learners have initially responded to a question. You can drop incorrect answer choices to help focus further discussions on more plausible alternatives. You can drop an obviously correct choice to focus on more critical distinctions. You can drop an unpopular correct choice to prompt learners to question their assumptions and also to highlight the importance of examining unlikely options. Each of these methods has specific advantages.

17. Helping Learners Transfer Knowledge to Novel Situations

“Transfer” is the idea that the learning that happens today ought to be relevant to other situations in the future. More specifically, transfer occurs when learners retrieve what they’ve learned in relevant future situations. As we’ve already discussed, the easiest and often the most potent way to promote transfer is to provide learners with practice in the same contexts—retrieving the same information—that they’ll encounter in future situations. But questions can also be used to prepare learners to retrieve information in situations that were not, or could not be, anticipated when designing the learning experience.

18. Making the Learning Personal

By making the learning personal, we help learners actively engage the learning material, we support mathemagenic cognitive processing, and we make it more likely that they’ll think about the learning outside of our classrooms, further reinforcing retention and utilization. Questions can be designed to relate to our learners’ personal experiences, thus bolstering learning.

19. Making the Material Important

Sometimes we can’t make the material directly personal or provide realistic decisions for learners to make, but we can still use questions to show the importance of the topic being discussed.

20. Helping Learners Question Their Assumptions

One of our goals in teaching is to get learners to change their thinking. Sometimes this requires learners to directly confront their assumptions. Questions can be written that force learners to evaluate the assumptions they bring to particular topic areas.

21. Using the Devil’s Advocate Tactic

In a classroom, when we play the devil’s advocate, we argue ostensibly to find flaws in the positions put forth. The devil’s advocate tactic can be used in a number of different ways. You can play the devil’s advocate yourself, or utilize your learners in that role. From a learning standpoint, when someone plays the devil’s advocate, learners are prompted to more fully process the learning material.

22. Data Slicing

Data slicing is the process of using one factor to help make sense of a second factor. So, for example, through the use of our audience response systems, we might examine how our learners’ socio-economic backgrounds affect their opinions of race relations. Data slicing can be done manually or automatically. It is particularly powerful in the classroom for demonstrating how audience characteristics may play a part in learners’ own perceptions or judgments.
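
To make the mechanics concrete, here is a minimal sketch in Python (not from the white paper; the data and field names are hypothetical) showing one way to slice responses to an opinion question by a demographic factor:

```python
from collections import Counter

# Hypothetical poll records: each pairs a learner's demographic answer
# with his or her response to an opinion question.
responses = [
    {"background": "lower-income", "opinion": "agree"},
    {"background": "lower-income", "opinion": "disagree"},
    {"background": "higher-income", "opinion": "agree"},
    {"background": "higher-income", "opinion": "agree"},
]

# Slice the opinion responses by the demographic factor.
slices = Counter((r["background"], r["opinion"]) for r in responses)

for (background, opinion), count in sorted(slices.items()):
    print(f"{background:>13} | {opinion:<8} | {count}")
```

Audience response software typically does this cross-tabulation for you; the sketch simply shows that a "slice" is nothing more than counting one answer within the groups defined by another.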

23. Using Questions for In-class Experiments

For some topics, in-class experimentation—using the learners as the experimental participants—is very beneficial. It helps learners relate to the topic personally. It also highlights how scientific data are derived. For example, in a course on learning, psychology, or thinking, learners could be asked to remember words but could—unbeknownst to them—be primed to think about certain semantic associates and not others.

24. Prompting Learners to Make Predictions

Prediction-making can facilitate learning in many ways. It can be used to provide retrieval practice for well-learned information. It can be used to deepen learners’ understandings of boundary conditions, contingencies, and other complications. It can be used to engender wonder. It can be used to enable learners to check their own understanding of the concepts being learned.

25. Utilizing Student Questions and Comments

Our learners often ask the best questions. Sometimes a learner’s question hints at the outlines of his or her confusion—and the confusion of many others as well. Sometimes learners want to know about boundary conditions. Students can also offer statements that can improve the learning environment. They may share their comfort level with the topic, add their thoughts in a class discussion, or argue a point because they disagree. All of these interactions provide opportunities for a richer learning environment, especially if we—as instructors—can use these questions to generate learning.

26. Enabling Readiness When Learners are Aloof or Distracted

Let’s face it. Not all of our learners will come into our classrooms ready to learn. Some will be dealing with personal problems. Some will be attending because they have to—not because they want to. Some will be distracted with other stress-inducing responsibilities. Some will think the topic is boring, silly, or irrelevant to them. Fortunately, experienced instructors have discovered tricks that often are successful. Audience response technology can help.

27. Enabling Readiness When Learners Think They Know it All

Some learners will come to your classroom thinking they already know everything they need to know about the topic you’re going to discuss. There are two types of learners who feel this way—those who are delusional (they actually need the learning) and those who are quite clearheaded (they already know what they need to know). Using the right questions and gathering everyone’s responses can help you deal with both of these characters.

28. Enabling Readiness When Learners are Hostile

In almost every instructor’s life, there will come a day when one, two, or multiple learners are publicly hostile. Experienced instructors know that such hostility must be dealt with immediately—not ignored. Even a few bad apples can ruin the learning experience and the satisfaction of the whole classroom. Fortunately, there are ways to fend off the assault.

29. Using Questions with Images

Using images as part of the learning process is critical in many domains. Obvious examples are art appreciation, architecture, geology, computer programming, and film. But even for the least likely topics, such as poetry or literature, there may be opportunities. For example, a poetry teacher may want to display poems to ask learners about their physical layout. Images should not be thrown in willy-nilly. They should be used only when they help instructors meet their learning goals. Images should not be used just to make the question presentation look good. Research has shown that placing irrelevant images in learning material, even if those images seem related to the topic, can hurt learning results by distracting learners from focusing on the main points of the material. One easy rule: don’t use images if they’re not needed to answer the question.

30. Aggregating Handset Responses for a Group or Team

Some handset brands enable responses from individual handsets to be aggregated. So, for example, an instructor in a class of 50 learners might break the learners into 10 teams, with five people on a team. All 50 learners have a handset, but the responses from each team of five are aggregated in some way. This aggregation feature enables some additional learning benefits. Teamwork can be rewarded, and competition between teams can add an extra element of motivation. Aggregated scoring also allows the instructor to encourage out-of-class activities in which learners within a team help each other. Obviously, this will only work if the learning experience takes place over time.

In such cases, aggregation can be used to build a learning community. Learners can be assigned to the same team or rotated through different teams, depending on the goals of instruction. Keeping learners on one team encourages deeper relationships and eases the logistics for out-of-class learning. Rotating learners through multiple teams enables a greater richness of multiple perspectives and broader networking opportunities. It’s a tradeoff.
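
As a minimal illustration (a Python sketch with assumed data, not a feature of any particular handset system), a team’s aggregate could be computed as the fraction of its members who answered correctly:

```python
from collections import defaultdict

# Hypothetical individual responses: (learner_id, team, answer) for one question.
responses = [
    ("s01", "Team A", "B"), ("s02", "Team A", "B"), ("s03", "Team A", "C"),
    ("s04", "Team B", "A"), ("s05", "Team B", "B"),
]
CORRECT = "B"

# Group individual answers by team.
by_team = defaultdict(list)
for _, team, answer in responses:
    by_team[team].append(answer)

# One simple aggregation rule: a team's score is the fraction of its
# members who answered correctly.
for team, answers in sorted(by_team.items()):
    score = sum(answer == CORRECT for answer in answers) / len(answers)
    print(f"{team}: {score:.0%} correct across {len(answers)} members")
```

Other aggregation rules are possible (majority answer, fastest correct response, and so on); the fraction-correct rule is just one reasonable choice.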

31. Using One Handset for a Group or Team

Although one of the prime benefits of handsets is that every learner is encouraged to think and respond, handsets don’t have to be used only in a one-person, one-handset format. Sometimes more audience members show up than expected. Sometimes budgets don’t allow for the purchase of handsets for every learner. Sometimes learners forget to bring their handsets. In addition, some interactions are simply better suited to group responding. When a group of learners has to make a single response, there has to be a mechanism for them to decide what response to make. Several such mechanisms exist, each with its own strengths and weaknesses.

32. Using Questions in Games

As several sales representatives have told me, one of the first things instructors ask about when being introduced to a particular audience response system is the gaming features. This excitement is understandable, because almost all classroom audiences respond energetically to games. Our enthusiasm as instructors must be balanced, however, with knowledge of the pluses and minuses of gaming. Just as with grading manipulations, games energize learners toward specific overt goals—namely scoring well on the game. If this energy is utilized in appropriate mathemagenic activity, it has benefits. On the other hand, games can be highly counterproductive as well.

33. Questions to Narrow the Options in Decision Making

Sometimes the audience in the room must make decisions about what to do. For example, a senior manager running an action-learning group may want to take a vote on which project to pursue, given a slate of 15 possible projects. A professor in an upper-level seminar course might give students a vote in deciding which of 10 possible topics to discuss in the final three weeks of the course. A supervisor might want her employees to narrow down the candidates for employee of the year. A primary school teacher might want to give her students a choice of field-trip options. Audience response systems can be used in two ways to do this: single-round voting and double-round voting.
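
As a rough sketch of the double-round idea (hypothetical votes and option names; the white paper’s own procedures may differ in detail), a first round winnows the slate to a few finalists before a second, deciding round:

```python
from collections import Counter

def tally(votes):
    """Return (option, count) pairs, most popular first."""
    return Counter(votes).most_common()

# Round one: learners vote across the full slate of options.
round_one = ["Project 3", "Project 7", "Project 3", "Project 12",
             "Project 7", "Project 3"]
finalists = [option for option, _ in tally(round_one)[:2]]  # keep the top two
print("Finalists:", finalists)

# Round two: learners vote again, restricted to the finalists.
round_two = ["Project 3", "Project 7", "Project 7", "Project 7",
             "Project 3", "Project 7"]
winner, count = tally(round_two)[0]
print(f"Winner: {winner} with {count} votes")
```

Single-round voting is simply the first tally taken as final; the second round trades a little class time for a clearer majority.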

34. Questions to Decide Go or No Go

Sometimes it’s beneficial to give our learners a chance to decide whether they’re ready to go on to the next topic. You might ask, “Are we ready to go ahead?” Or, “Are we ready to go ahead, or do I need to clarify this a bit more?” Using an audience response system has distinct advantages over hand-raising here, because most learners are uncomfortable asking for additional instruction, even when they need it.
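
A minimal sketch of one possible decision rule follows (the 80% cutoff is my own illustrative assumption, not a recommendation from the paper):

```python
def ready_to_proceed(votes, threshold=0.8):
    """True when at least `threshold` of responders say they're ready."""
    yes = sum(1 for vote in votes if vote == "yes")
    return yes / len(votes) >= threshold

# Anonymous poll results gathered via the response system.
votes = ["yes", "yes", "no", "yes", "yes"]
print("Go ahead" if ready_to_proceed(votes) else "Clarify a bit more")
```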

35. Perspective-Taking Questions

There are some topics that may benefit from encouraging learners to take the perspectives of others in answering questions. In other words, instead of only asking our learners to express their own opinions, we can ask them to guess the opinions of others. For example, we might ask our learners to guess the opinions of both rich and poor people on affirmative action, the importance of education, and so on.

36. Open-Ended Questions

Some people think that audience response systems lack potential because they only enable the use of multiple-choice questions. In contrast, the research on learning suggests to me that (a) multiple-choice questions can be powerful on their own, (b) variations of multiple-choice questions add to this power, and (c) open-ended questions can be valuable in conjunction with multiple-choice formats—for example, by letting learners think first on their own, surfacing student ideas, and providing more authentic retrieval practice.

37. Matching

Matching questions are especially valuable if your learning goal is to enable learners to distinguish between closely related items. The matching format can also be useful for logistical reasons, in asking more than one question at a time. Although the matching question has its uses, it is often overused by instructors who are simply trying to use non-multiple-choice questions. Often, the matching format only helps learners reinforce relatively low-level concepts, such as definitions, word meanings, and simple calculations. While this type of information is valuable, it’s not clear that the classroom is the best place to reinforce it.

38. Asking People to Answer Different Questions

Some audience response systems enable learners to simultaneously answer different questions. In other words, Sam might answer questions 1, 3, 5, 7, and 9, while Pat answers questions 2, 4, 6, 8, and 10. This feature provides an advantage only when it’s critical not to let (a) individual learners cheat off other learners, or (b) groups of learners overhear the conversations of other groups. The biggest disadvantage of this tactic is that it makes post-question discussions particularly untenable. In any case, if you do find a unique benefit to having learners answer different questions simultaneously, it’s likely to be for information that is already well learned—where in-depth discussions are not needed.
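
Here is a minimal sketch of how such an assignment might be generated (hypothetical names and a ten-question bank; actual systems handle this internally):

```python
# Hypothetical roster and question bank.
learners = ["Sam", "Pat", "Lee", "Kim"]
questions = list(range(1, 11))  # questions 1 through 10

# Split the bank into two interleaved sets and deal them out round-robin,
# so neighboring learners answer different questions at the same time.
question_sets = [questions[0::2], questions[1::2]]  # odds and evens
assignments = {name: question_sets[i % len(question_sets)]
               for i, name in enumerate(learners)}

for name, assigned in assignments.items():
    print(f"{name}: questions {assigned}")
```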

39. Using Models of Facilitated Questioning

In the paper that details these 39 question types and methods, I attempted to lay bare the DNA of classroom questioning. I intentionally stripped questioning practices down to their essence in the hope of creating building blocks that you, my patient readers, can utilize to build your own interactive classroom sessions. For example, I talked specifically about using prequestions to focus attention, to activate prior knowledge, and to surface misconceptions. I didn’t describe the myriad permutations that pre- and postquestions might take, for example, or any systematic combinations of the many other building blocks I described. While I didn’t describe them, many instructors have developed their own systematic methods—what I will call “Models of Facilitated Questioning.” For example, in the paper I briefly describe Harvard Professor Eric Mazur’s “Peer Instruction” method and the “Question-Driven Instruction” method from the University of Massachusetts’s Scientific Reasoning Research Institute and Department of Physics.

Click to download the full report.

One of the features of the book is that it will provide a comprehensive model of workplace learning and performance. This model can be used in many phases of our work, from design through to evaluation.

Click to watch the video (with more viewing control) directly on YouTube: Learning Landscape Model.