Given the challenges TEACHERS and PROFESSORS are facing with the Coronavirus Pandemic I’ve decided to make the Presentation Science Online Workshop available to Teachers and Professors for FREE (now through April 30th).

The workshop provides a strong science-of-learning foundation that will help educators make informed decisions as they move their courses online, create video recordings, or use any free time to update their classroom learning designs.

PLEASE share this with educators you know.


About the Presentation Science Workshop

Presentation Science is an online, self-paced workshop designed for anybody who uses presentation software to help an audience learn: trainers, teachers, professors, speakers, CEOs, executive directors, managers, military leaders, salespeople, and team leads.

Inspired by learning science, this workshop will help speakers and educators to (1) keep their audiences’ attention, (2) support comprehension, (3) motivate audience members to take action, and (4) support them in remembering what’s been taught.

The workshop is also an excellent TRAIN-THE-TRAINER experience and organizations wanting to engage in a private cohort can make arrangements with Will Thalheimer (workshop creator and host) to do that. You can see the specific pricing options here:

And for more information about the workshop, see PresentationScience.Net.

Today, after turning 60 a few months ago, I finally paid off my student loans—the loans that made it possible for me to get my doctorate from Columbia University. I was in school for eight years from 1988 to 1996, studying with some of the brightest minds in learning, development, and psychology (Rothkopf, Black, Peverly, Kuhn, Higgins, Dweck, Mischel, Darling-Hammond, not to mention my student cohort). If my math is right, that’s 22 years to pay off my student-loan debt. A ton of interest paid too!

I’m eternally grateful! Without the federal government funding my education, my life would have been so much different. I would never have learned how to understand the research on learning. My work at Work-Learning Research, Inc.—attempting to bridge the gap between research and practice—would not have been possible. Thank you to my country—the United States of America—and fellow citizens for giving me the opportunity of a lifetime!! Thanks also must go to my wife for marrying into the forever-string of monthly payments. Without her tolerance and support I certainly would be lost in a different life.

I’ve often reflected on my good fortune in being able to pursue my interests, and wondered why we as a society don’t do more to give our young people an easier road to pursue their dreams. Even when I hear about the brilliant people winning MacArthur fellowships, I wonder why only those who have proven their genius are being boosted. They are deserving of course, but where is our commitment to those who might be teetering on a knife edge of opportunity and economic desperation? I was even lucky as an undergrad back in the late 1970’s, paying relatively little for a good education at a state school and having parents who funded my tuition and living expenses. College today is wicked expensive, cutting out even more of our promising youth from realizing their potential.

Economic mobility is not as easy as we might like to think. The World Bank just released a report showing that worldwide only 12% of young adults have been able to obtain more education than their parents. The United States is no longer the land of opportunity we once liked to imagine.

This is crazy short-sighted, and combined with our tendency to underfund our public schools, it has the smell of societal suicide.

That’s depressing! Today I’m celebrating my ability to get student loans two-and-a-half decades ago and pay them off over the last twenty-some years! Hooray!

Seems not so important when put into perspective. It’s something though.



Two and a half years ago, in writing a blog post on learning styles, I did a Google search using the words “learning styles.” I found that the top 17 search items were all advocating for learning styles, even though there was clear evidence that learning-styles approaches DO NOT WORK.

Today, I replicated that search and found the following in the top 17 search items:

  • 13 advocated/supported the learning-styles idea.
  • 4 debunked it.

That’s progress, but clearly Google is not up to the task of providing valid information on learning styles.

Scientific Research that clearly Debunks the Learning-Styles Notion:

  • Kirschner, P. A. (2017). Stop propagating the learning styles myth. Computers & Education, 106, 166-171.
  • Willingham, D. T., Hughes, E. M., & Dobolyi, D. G. (2015). The scientific status of learning styles theories. Teaching of Psychology, 42(3), 266-271.
  • Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.
  • Rohrer, D., & Pashler, H. (2012). Learning styles: Where’s the evidence? Medical Education, 46(7), 634-635.

Follow the Money

  • Still no one has come forward to prove the benefits of learning styles, even though it’s been over 10 years since $1,000 was offered, and 3 years since $5,000 was offered.

A recent research review (by Paul L. Morgan, George Farkas, and Steve Maczuga) finds that teacher-directed mathematics instruction in first grade is superior to other methods for students with “math difficulties.” Specifically, routine practice and drill was more effective than the use of manipulatives, calculators, music, or movement for students with math difficulties.

For students without math difficulties, teacher-directed and student-centered approaches performed about the same.

In the words of the researchers:

In sum, teacher-directed activities were associated with greater achievement by both MD and non-MD students, and student-centered activities were associated with greater achievement only by non-MD students. Activities emphasizing manipulatives/calculators or movement/music to learn mathematics had no observed positive association with mathematics achievement.

For students without MD, more frequent use of either teacher-directed or student-centered instructional practices was associated with achievement gains. In contrast, more frequent use of manipulatives/calculator or movement/music activities was not associated with significant gains for any of the groups.

Interestingly, classes with higher proportions of students with math difficulties were actually less likely to be taught with teacher-directed methods — the very methods that would be most helpful!

Will’s Reflection (for both Education and Training)

These findings fit in with a substantial body of research that shows that learners who are novices in a topic area will benefit most from highly-directed instructional activities. They will NOT benefit from discovery learning, problem-based learning, and similar non-directive learning events.

See for example:

  • Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75-86.
  • Mayer, R. E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? American Psychologist, 59(1), 14-19.

As a research translator, I look for ways to make complicated research findings usable for practitioners. One model that seems to be helpful is to divide learning activities into two phases:

  1. Early in Learning (When learners are new to a topic, or the topic is very complex)
    The goal here is to help the learners UNDERSTAND the content. Here we provide lots of learning support, including repetitions, useful metaphors, worked examples, and immediate feedback.
  2. Later in Learning (When learners are experienced with a topic, or when the topic is simple)
    The goal here is to help the learners REMEMBER the content or DEEPEN their learning. To support remembering, we provide lots of retrieval practice, preferably set in realistic situations the learners will likely encounter — where they can use what they learned. We provide delayed feedback. We space repetitions over time, varying the background context while keeping the learning nugget the same. To deepen learning, we engage contingencies, we enable learners to explore the topic space on their own, and we add additional knowledge.
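To make the "space repetitions over time" idea in the later-in-learning phase concrete, here is a minimal sketch of an expanding spaced-retrieval schedule. The starting gap and the expansion factor are my own illustrative assumptions, not values prescribed by the research discussed here; real schedules should be tuned to the content and the retention interval that matters.

```python
from datetime import date, timedelta

def spaced_retrieval_schedule(start, reviews=5, first_gap_days=2, expansion=2.0):
    """Return a list of review dates with expanding gaps.

    Each gap between retrieval-practice sessions is `expansion` times the
    previous gap. All parameter values are illustrative, not prescriptive.
    """
    schedule = []
    gap = first_gap_days
    current = start
    for _ in range(reviews):
        current = current + timedelta(days=round(gap))
        schedule.append(current)
        gap *= expansion  # widen the spacing as learning strengthens
    return schedule

# Example: five retrieval-practice sessions following a March 1 lesson
for review_date in spaced_retrieval_schedule(date(2020, 3, 1)):
    print(review_date.isoformat())
```

Varying the "background context" at each of these sessions, while keeping the learning nugget the same, is a design decision the schedule itself does not capture.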

What Elementary Mathematics Teachers Should Stop Doing

Elementary-school teachers should stop assuming that drill-and-practice is counterproductive. They should create lesson plans that guide their learners in understanding the concepts to be learned. They should limit the use of manipulatives, calculators, music, and movement. Ideas about “arts integration” should be pushed to the back burner. This doesn’t mean that teachers should NEVER use these other methods, but they should be used to create occasional, short, and rare moments of variety. Spending hours using manipulatives, for example, is certainly harmful in comparison with more teacher-directed activities.


Robert Slavin, Director of the Center for Research and Reform in Education at Johns Hopkins University, recently wrote the following:

"Sooner or later, schools throughout the U.S. and other countries will be making informed choices among proven programs and practices, implementing them with care and fidelity, and thereby improving outcomes for their children. Because of this, government, foundations, and for-profit organizations will be creating, evaluating, and disseminating proven programs to meet high standards of evidence required by schools and their funders. The consequences of this shift to evidence-based reform will be profound immediately and even more profound over time, as larger numbers of schools and districts come to embrace evidence-based reform and as more proven programs are created and disseminated."

To summarize, Slavin says that (1) schools and other education providers will be using research-based criteria to make decisions, (2) this change will have profound effects, significantly improving learning results, and (3) many stakeholders and institutions within the education field will be making radical changes, including holding themselves and others to account for these improvements.

In Workplace Learning and Performance

But what about us? What about those of us who are workplace learning-and-performance professionals? What about our institutions? Will we be left behind? Are we moving toward evidence-based practices ourselves?

My career over the last 16 years has been devoted to helping the field bridge the gap between research and practice, so you might imagine that I have a perspective on this. Here it is, in brief:

Some of our field is moving towards research-based practices. But we have lots of roadblocks and gatekeepers that are stalling the journey for the large majority of the industry. In working on the Serious eLearning Manifesto, I've been pleasantly surprised by the large number of people who are already using research-based practices; but as a whole, we are still stalled.

Of course, I'm still a believer. I think we'll get there eventually. In the meantime, I want to work with those who are marching ahead, using research wisely, creating better learning for their learners. There are research translators we can follow, folks like Ruth Clark, Rich Mayer, K. Anders Ericsson, Jeroen van Merriënboer, Richard E. Clark, Julie Dirksen, Clark Quinn, Gary Klein, and dozens more. There are practitioners we can emulate, because they are already aligning themselves with the research: Marty Rosenheck, Eric Blumthal, Michael Allen, Cal Wick, Roy Pollock, Andy Jefferson, JC Kinnamon, and thousands of others.

Here's the key question for you who are reading this: "How fast do you want to begin using research-based recommendations?"

And, do you really want to wait for our sister profession to perfect this before taking action?

As a youth soccer coach for many years I have struggled to evaluate my own players and have seen how my soccer league evaluates players to place them on teams. As a professional learning-and-performance consultant who has focused extensively on measurement and evaluation, I think we can all do better, me included. To this end, I have spent the last two years creating a series of evaluation tools for use by coaches and youth soccer leagues. I’m sure these forms are not perfect, but I’m absolutely positive that they will be a huge improvement over the typical forms utilized by most youth soccer organizations. I have developed the forms so that they can be modified, and they are available for free to anyone who coaches youth soccer.

At the bottom of this post, I’ll include a list of the most common mistakes that are made in youth-soccer evaluation. For my regular blog readers–those who come to me for research-based recommendations on workplace learning-and-performance–you’ll see relevance to your own work in this list of evaluation mistakes.

I have developed four separate forms for evaluation. That may seem like a lot until you see how they will help you as a coach (and as a soccer league) meet varied goals you have. I will provide each form as a PDF (so you can see what the form is supposed to look like regardless of your computer configuration) and as a Word Document (so you can make changes if you like).

I’ve also provided a short set of instructions.


Note from Will in November 2017:

Although my work is in the workplace learning field, this blog post–from 2012–is one of the most popular posts on my blog, often receiving over 2000 unique visitors per year.


The Forms

1. Player Ranking Form: This form evaluates players on 26 soccer competencies and 4 player-comparison items, giving each player a numerical score based on these items AND an overall rating. This form is intended to provide leagues with ranking information so that they can better place players on teams for the upcoming season.

2. Player Development Form: This form evaluates players on the 26 soccer competencies. It is intended for use by coaches to help support their players’ development. Indeed, this form can be shared with players and parents to help players focus on their development needs.

3. Team Evaluation Form: This form helps coaches use practices and games to evaluate their players on the 26 key competencies. Specifically, it enables them to use one two-page form to evaluate every player on their team.

4. Field Evaluation Form: This form enables skilled evaluators to judge the performance of players during small-group scrimmages. Like the Player Ranking Form, it provides player-comparison information to leagues (or to soccer clubs).
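As a hypothetical illustration of how a form like the Player Ranking Form might turn per-item ratings into a single numerical score, here is a small sketch. The 0-3 rating scale and the weighting of the comparison items are my own assumptions for the example; the actual forms define their own items, scales, and layout.

```python
def player_score(competency_ratings, comparison_ratings):
    """Combine per-item ratings into one numerical score.

    competency_ratings: ratings for the soccer competencies (e.g., 26 items).
    comparison_ratings: ratings for the player-comparison items (e.g., 4 items).
    Each rating is assumed to be on a 0-3 scale; the double weighting of
    comparison items below is purely illustrative.
    """
    competency_total = sum(competency_ratings)
    comparison_total = sum(comparison_ratings)
    return competency_total + 2 * comparison_total

# Example: a player rated 2 on every competency and 3 on every comparison item
print(player_score([2] * 26, [3] * 4))  # 52 + 24 = 76
```

A league could then rank players by this score while the coach-facing development form reports the individual competency ratings instead of the total.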

The Most Common Mistakes in Youth-Soccer Evaluation

  1. When the skills being evaluated are not clear to evaluators. So for example, having players rated on their “agility” will not provide good data because “agility” will likely mean different things to different people.
  2. When skills are evaluated along too many dimensions. So for example, evaluating a player on their “ball-handling skills, speed, and stamina” covers too many dimensions at once—a player could have excellent ball-handling skills but have terrible stamina.
  3. When the rating scales that evaluators are asked to use make it hard to select between different levels of competence. So for example, while “ball-handling” might reasonably be evaluated, it may be hard for an evaluator to determine whether a player is excellent, very good, average, fair, or poor in ball-handling. Generally, it is better to have clear criteria and ask whether or not a player meets those criteria. Four- or five-point scales are not recommended.
  4. When evaluators can’t assess skills because of the speed of action, the large number of players involved, or the difficulty of noticing the skills targeted. For example, evaluations of scrimmages that involve more than four players on a side make it extremely difficult for the evaluators to notice the contributions of each player.
  5. When bias affects evaluators’ judgments. Because the human mind is always working subconsciously, biases can be easily introduced. So for example, it is bad practice to give evaluators the coaches’ ratings of players before those players take part in a scrimmage-based evaluation.
  6. When bias leads to a generalized positive or negative evaluation. Because evaluation is difficult and is largely a subconscious process, a first impression can skew an evaluation away from what is valid. For example, when a player is seen as getting outplayed in the first few minutes of a scrimmage, his/her later excellent play may be ignored or downplayed. Similarly, when a player is intimidated early in the season, a coach may not fully notice his/her gritty determination later in the year.
  7. When bias comes from too few observations. Because evaluation is an inexact process, evaluation results are likely to be more valid if the evaluation utilizes (a) more observations (b) by more evaluators (c) focusing on more varied soccer situations. Coaches who see their players over time and in many soccer situations are less likely to suffer from bias, although they too have to watch out that their first impressions don’t cloud their judgments. And of course, it is helpful to get assessments beyond one or two coaches.
  8. When players are either paired with, or are playing against, players who are unrepresentative of realistic competition. For example, players who are paired against really weak players may look strong in comparison. Players who are paired as teammates with really good players may look strong because of their teammates’ strong play. Finally, players who only have experience playing weaker players may not play well when being evaluated against stronger players even though they might be expected to improve by moving up and gaining experience with those same players.
  9. When the wrong things are evaluated. Obviously, it’s critical to evaluate the right soccer skills. So for example, evaluating a player on how well he/she can pass to a stationary player is not as valid as seeing whether good passes are made in realistic game-like situations when players are moving around. The more game-like the situations, the better the evaluation.
  10. When evaluations are done by remembering, not observing. Many coaches fill out their evaluation forms back home late at night instead of evaluating their players while observing them. The problem with this memory-based approach is that it introduces huge biases into the process. First, memory is not perfect, so evaluators may not remember correctly. Second, memory is selective. We remember some things and forget others. Players must be evaluated primarily through observation, not memory.
  11. Encouraging players to compare themselves to others. As coaches, one of our main goals is to help our players learn to develop their skills as players, as teammates, as people, and as thinkers. Unfortunately, when players focus on how well they are doing in comparison to others, they are less likely to focus on their own skill development. It is generally a mistake to use evaluations to encourage players to compare themselves to others. While players may be inclined to compare themselves to others, coaches can limit the negative effects of this by having each player focus on their own key competencies to improve.
  12. Encouraging players to focus on how good they are overall, instead of having them focus on what they are good at and what they still have to work on. For our players to get better, they have to put effort into getting better. If they believe their skills are fixed and not easily changed, they will have no motivation to put any effort into their own improvement. Evaluations should be designed NOT to put kids in categories (except when absolutely necessary for team assignments and the like), but rather to show them what they need to work on to get better. As coaches, we should teach the importance of giving effort to deliberate practice, encouraging our players to refine and speed their best skills and improve on their weakest skills.
  13. Encouraging players to focus on too many improvements at once. To help our players (a) avoid frustration, (b) avoid thinking of themselves as poor players, and (c) avoid overwhelming their ability to focus, we ought to have them only focus on a few major self-improvement goals at one time.


Who is better at crafting an instructional message about science, scientists or instructional designers?

I say we instructional designers SHOULD be able to do a better job, so I'm encouraging YOU, my colleagues, to give Alan Alda's Flame Challenge a try.

Here's Alda's challenge:

"We’re asking scientists to answer the question – “What is a flame?” – in a way that an 11-year-old would find intelligible and maybe even fun."

You can read the full challenge by clicking here.

The deadline is April 2nd, so you better get moving!!

To see what you're up against, consider the content, which you can find, for example, on Wikipedia, under the entry for flame.

Some thoughts on how to be successful:

  1. Consider pairing with an actual scientist (it's not really us against the SMEs!)
  2. Use adult learning principles, but not in the stupid, static, uncreative way most of us use them on adults, which is pretty ineffective for adults too. SMILE.
  3. Realize that if you really want to win, you may actually have to craft your piece in a way that won't really do all the things that we'd like to do as instructional designers. For example, where we know extra spaced practice would be good, those who judge the contest may not understand all that.
  4. Utilize multimedia and visually beautiful images.
  5. Utilize language that, like a flame, (a) illuminates, (b) produces emotional heat, and (c) mesmerizes attention.

Good Luck Instructional-Design Team!!

Here is the comment I sent to the NY Times in response to their focus on a supposed research study that purported to show that gifted kids are being underserved.

I'm a little over the top in my comments, but still I think this is worth printing because it demonstrates the need for good research savvy and it shows that even the most respected news organizations can make really poor research-assessment mistakes.

Egads!! Why is the New York Times giving so much "above-the-fold" visibility to a poorly-conceived research study funded by a conservative think tank with obvious biases?

Why isn't at least one of your contributors a research-savvy person who could comment on the soundness of the research? Instead, your contributors assume the research is sound.

Did you notice that the references in the original research report were not from top-tier refereed scientific journals?

In the original article from the Thomas Fordham Institute (a conservatively funded enterprise), the authors try to wash away criticisms about regression to the mean and test variability, but their attempt to brush aside these obvious, most damaging, and most valid criticisms is not good enough.

If you took the top 10% of players in the baseball draft, the football draft, any company's onboarding class, any randomly selected group of maple trees, a large percentage of the top performers would not be top performers a year or two later. Damn, ask any baseball scout whether picking out the best prospects is a sure thing. It's not!

And, in the cases I mentioned above, the measures are more objective than an educational test, which has much higher variability–which would make more top performers leak out of the top ranks.

NY Times–you should be embarrassed to have published these responses to this non-study. Seriously, don't you have any research-savvy people left on your staff?

We have scientific journals because the research is vetted by experts.

Many of us are inclined to see audience response systems only as a way to deliver multiple-choice and true-false questions. While this may be true in a literal sense, such a restricted conception can divert us from myriad possibilities for deep and meaningful learning in our classrooms.

The following list of 39 question types and methods is provided to show the breadth of possibilities. It is distilled from 85 pages of detailed recommendations in the white paper, Questioning Strategies for Audience Response Systems: How to Use Questions to Maximize Learning, Engagement, and Satisfaction, available free by clicking here.

NOTE from Will Thalheimer (2017): The report is focused on audience-response systems — and I must admit that it is a bit dated now in terms of the technology, but the questions types are still a very potent list.

1. Graded Questions to Encourage Attendance

Questions can be used to encourage attendance, but there are dangers that must be avoided.

2. Graded Questions to Encourage Homework and Preparation

Questions can be used to encourage learners to spend time learning prior to classroom sessions, but there are dangers that must be avoided.

3. Avoiding the Use of One Correct Answer (When Appropriate)

Questions that don’t fulfill a narrow assessment purpose need not have right answers. Pecking for a correct answer does not always produce the most beneficial mathemagenic (learning-creating) cognitive processing. We can give partial credit. We can have two answers be equally acceptable. We can let the learners decide on their own.

4. Prequestions that Activate Prior Knowledge

Questions can be used to help learners connect their new knowledge to what they’ve already learned, making it more memorable. For example, a cooking teacher could ask a question about making yogurt before introducing a topic on making cheese, prompting learners to activate their knowledge about using yogurt cultures before they begin talking about how to culture cheese. A poetry teacher could ask a question about patriotic symbolism before talking about the use of symbols in modern American poetry.

5. Prequestions that Surface Misconceptions

Learners bring naïve understandings to the classroom. One of the best ways to confront misconceptions is to bring them to the surface so that they can be confronted straight on. The Socratic Method is a prime example of this. Socrates asks a series of prequestions, thereby unearthing misconceptions and leading to a new, improved understanding.

6. Prequestions to Focus Attention

Our learners’ attention wanders. In an hour-long session, sometimes they’ll be riveted to the learning discussion, sometimes they’ll be thinking of other ideas that have been triggered, and sometimes they’ll be off in a daze. Prequestions (just like well-written learning objectives) can be used to help learners pay attention to the most important subsequent learning material. In fact, in one famous study, Rothkopf and Billington (1979) presented learners with learning objectives before they encountered the learning material. They then measured learning and eye movements and found that learners actually paid more attention to aspects of the learning material targeted by the learning objectives. Prequestions work the same way as learning objectives—they focus attention.

7. Postquestions to Provide Retrieval Practice

Postquestions—questions that come after the learning content has been introduced—can be used to reinforce what has been learned and to minimize forgetting. This is a very basic process. By giving learners practice in retrieving information from memory, we increase the probability that they’ll be able to do this in the future. Retrieval practice makes perfect.

8. Postquestions to Enable Feedback

Feedback is essential for learners and instructors. Corrective feedback is critical, especially when learners have misunderstandings. Providing retrieval practice with corrective feedback is especially important as learners are struggling with newly-encountered material, difficult material, and when their attention is likely to wander—for example when they’re tired after a long day of training, when there are excessive distractions, or when the previous material has induced boredom.

9. Postquestions to Surface Misconceptions

We already talked about using prequestions to surface misconceptions. We can also use postquestions to surface misconceptions. Learners don’t always understand concepts after only one presentation of the material. Many an instructor has been surprised after delivering a “brilliant” exposition to find that most of their learners just didn’t get it.

10. Questions Prompting Analysis of Things Presented in Classroom

One of the great benefits of classroom learning is that it enables instructors to present learners with all manner of things. In addition to verbal utterances and marks on a white board, instructors can introduce demonstrations, videos, maps, photographs, illustrations, learner performances, role-plays, diagrams, screen shots, computer animations, etcetera. While these presentations can support learning just by being observed, questions on what has been seen can prompt a different focus and a deeper understanding.

11. Using Rubric Questions to Help Learners Analyze

In common parlance, the term “rubric” connotes a set of standards. Rubrics can be utilized in asking learners questions about what they experience in the classroom. Rubric questions, if they are well designed, can give learners practice in evaluating situations, activities, and events. Such practice is an awesome way to engage learners and prepare them for critical thinking in similar future situations. In addition, if rubrics are continually emphasized, learners will integrate that wisdom into their own planning and decision-making.

12. Questions to Debrief an In-Class Experience

Classrooms can also be used to provide learners with experiences in which they themselves participate. Learners can be asked to take part in role plays, simulations, case studies, and other exercises. It’s usually beneficial to debrief those exercises, and questions can be an excellent way to drive those discussions.

13. Questions to Surface Affective Responses

Not all learning is focused on the cold, steely arithmetic of increasing the inventory of knowledge. Learners can also experience deep emotional responses, many of which are relevant to the learning itself. In topics dealing with oppression, slavery, brutality, war, leadership, glory, and honor, learners aren’t getting the full measure of learning unless they experience emotion in some way. Learners can be encouraged to explore their affective responses by asking them questions.

14. Scenario-Based Decision-Making Questions

Scenario-based questions present learners with scenarios and then ask them to make a decision about what to do. These scenarios can take many forms. They can consist of short descriptive paragraphs or involved case studies. They can be presented in a text-only format or augmented with graphics or multimedia. They can put the learner in the protagonist’s role (“What are you going to do?”) or ask the learner to make a decision for someone else (“What should Dorothy do?”). The questions can be presented in a number of formats—as multiple-choice, true-false, check-all-that-apply, or open-ended queries.

15. Don’t Show Answer Right Away

There’s no rule that you have to show learners the correct response right after they answer the question. Such a reflexive behaviorist scheme can subvert deeper learning. Instructors have had great success in withholding feedback. For example, Harvard professor Eric Mazur’s (1997) Peer Instruction method requires learners to make an individual decision and then try to convince a peer to believe the same decision—all before the instructor weighs in with the answer.

By withholding feedback, learners are encouraged to take some responsibility for their own beliefs and their own learning. Discussions with others further deepen the learning. Simply by withholding the answer, instructors can encourage strategic metacognitive processing, thereby sending learners the not-so-subtle message that it is they—the learners—who must take responsibility for learning.

16. Dropping Answer Choices

There are several reasons to drop answer choices after learners have initially responded to a question. You can drop incorrect answer choices to help focus further discussions on more plausible alternatives. You can drop an obviously correct choice to focus on more critical distinctions. You can drop an unpopular correct choice to prompt learners to question their assumptions and also to highlight the importance of examining unlikely options. Each of these methods has specific advantages.

17. Helping Learners Transfer Knowledge to Novel Situations

“Transfer” is the idea that the learning that happens today ought to be relevant to other situations in the future. More specifically, transfer occurs when learners retrieve what they’ve learned in relevant future situations. As we’ve already discussed, the easiest and often the most potent way to promote transfer is to provide learners with practice in the same contexts—retrieving the same information—that they’ll be required to retrieve in future situations. But questions can also be used to prepare learners to retrieve information in situations that were not, or could not be, anticipated when designing the learning experience.

18. Making the Learning Personal

By making the learning personal, we help learners actively engage the learning material, we support mathemagenic cognitive processing, and we make it more likely that they’ll think about the learning outside of our classrooms, further reinforcing retention and utilization. Questions can be designed to relate to our learners’ personal experiences, thus bolstering learning.

19. Making the Material Important

Sometimes we can’t make the material directly personal or provide realistic decisions for learners to make, but we can still use questions to show the importance of the topic being discussed.

20. Helping Learners Question Their Assumptions

One of our goals in teaching is to get learners to change their thinking. Sometimes this requires learners to directly confront their assumptions. Questions can be written that force learners to evaluate the assumptions they bring to particular topic areas.

21. Using the Devil’s Advocate Tactic

In a classroom, when we play the devil’s advocate, we argue a contrary position ostensibly to find flaws in the positions put forth. The devil’s advocate tactic can be used in a number of different ways. You can play the devil’s advocate yourself, or utilize your learners in that role. From a learning standpoint, when someone plays the devil’s advocate, learners are prompted to more fully process the learning material.

22. Data Slicing

Data slicing is the process of using one factor to help make sense of a second factor. So, for example, through the use of our audience response systems, we might examine how our learners’ socio-economic backgrounds affect their opinions on race relations. Data slicing can be done manually or automatically. It is particularly powerful in the classroom for demonstrating how audience characteristics may play a part in learners’ own perceptions or judgments.
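Under the hood, data slicing is just a cross-tabulation of responses by a second factor. Here is a minimal sketch of how that tally could work, assuming hypothetical response records (the record format and factor names are illustrative, not from any particular audience response system):

```python
from collections import defaultdict, Counter

# Hypothetical handset responses: (respondent_background, answer_choice).
responses = [
    ("group_a", "agree"), ("group_a", "disagree"), ("group_a", "agree"),
    ("group_b", "disagree"), ("group_b", "disagree"), ("group_b", "agree"),
]

def slice_by_factor(records):
    """Cross-tabulate answer choices by a second factor (the slice)."""
    table = defaultdict(Counter)
    for factor, answer in records:
        table[factor][answer] += 1
    return {factor: dict(counts) for factor, counts in table.items()}

print(slice_by_factor(responses))
# {'group_a': {'agree': 2, 'disagree': 1}, 'group_b': {'disagree': 2, 'agree': 1}}
```

The same tally could be displayed live to the class, letting learners see how the two factors relate.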

23. Using Questions for In-class Experiments

For some topics, in-class experimentation—using the learners as the experimental participants—is very beneficial. It helps learners relate to the topic personally. It also highlights how scientific data is derived. For example, in a course on learning, psychology, or thinking, learners could be asked to remember words, but could—unbeknownst to them—be primed to think about certain semantic associates and not others.

24. Prompting Learners to Make Predictions

Prediction-making can facilitate learning in many ways. It can be used to provide retrieval practice for well-learned information. It can be used to deepen learners’ understandings of boundary conditions, contingencies, and other complications. It can be used to engender wonder. It can be used to enable learners to check their own understanding of the concepts being learned.

25. Utilizing Student Questions and Comments

Our learners often ask the best questions. Sometimes a learner’s question hints at the outlines of his or her confusion—and the confusion of many others as well. Sometimes learners want to know about boundary conditions. Students can also offer statements that can improve the learning environment. They may share their comfort level with the topic, add their thoughts in a class discussion, or argue a point because they disagree. All of these interactions provide opportunities for a richer learning environment, especially if we—as instructors—can use these questions to generate learning.

26. Enabling Readiness When Learners are Aloof or Distracted

Let’s face it. Not all of our learners will come into our classrooms ready to learn. Some will be dealing with personal problems. Some will be attending because they have to—not because they want to. Some will be distracted with other stress-inducing responsibilities. Some will think the topic is boring, silly, or irrelevant to them. Fortunately, experienced instructors have discovered tricks that often are successful. Audience response technology can help.

27. Enabling Readiness When Learners Think They Know it All

Some learners will come to your classroom thinking they already know everything they need to know about the topic you’re going to discuss. There are two types of learners who feel this way—those who are delusional (they actually need the learning) and those who are quite clearheaded (they already know what they need to know). Using the right questions and gathering everyone’s responses can help you deal with both of these characters.

28. Enabling Readiness When Learners are Hostile

In almost every instructor’s life, there will come a day when one, two, or multiple learners are publicly hostile. Experienced instructors know that such hostility must be dealt with immediately—not ignored. Even a few bad apples can ruin the learning experience and the satisfaction of the whole classroom. Fortunately, there are ways to fend off the assault.

29. Using Questions with Images

Using images as part of the learning process is critical in many domains. Obvious examples are art appreciation, architecture, geology, computer programming, and film. But even for the least likely topics, such as poetry or literature, there may be opportunities. For example, a poetry teacher may want to display poems and ask learners about their physical layout. Images should not be thrown in willy-nilly. They should be used only when they help instructors meet their learning goals. Images should not be used just to make the question presentation look good. Research has shown that placing irrelevant images in learning material, even if those images seem related to the topic, can hurt learning results, distracting learners from focusing on the main points of the material. One easy rule: Don’t use images if they’re not needed to answer the question.

30. Aggregating Handset Responses for a Group or Team

Some handset brands enable responses of individual handsets to be aggregated. So for example, an instructor in a class of 50 learners might break the learners into 10 teams, with five people on a team. All 50 learners have a handset, but the responses from each team of five learners are aggregated in some way. This aggregation feature enables some additional learning benefits. Teamwork can be rewarded and competition between teams can add an extra element of motivation. Using aggregation scoring allows the instructor to encourage out-of-class activities where learners within a team help each other. Obviously, this will only work if the learning experience takes place over time. In such cases, aggregation can be used to build a learning community. Learners can be assigned to the same team or rotated on different teams, depending on the goals of instruction. Putting learners on one team encourages deeper relationships and eases the logistics for out-of-class learning. Rotating learners through multiple teams enables a greater richness of multiple perspectives and broader networking opportunities. It’s a tradeoff.
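Whatever the handset brand does internally, the aggregation itself amounts to mapping each handset to a team and summing scores per team. A minimal sketch, assuming hypothetical handset IDs, team assignments, and per-round scores (none of these names come from a real system):

```python
from collections import defaultdict

# Hypothetical data: each handset is assigned to a team,
# and each handset earns a score on a question round.
team_of = {"h1": "red", "h2": "red", "h3": "blue", "h4": "blue", "h5": "blue"}
scores = {"h1": 1, "h2": 0, "h3": 1, "h4": 1, "h5": 0}

def aggregate_by_team(team_of, scores):
    """Sum individual handset scores into a single score per team."""
    totals = defaultdict(int)
    for handset, score in scores.items():
        totals[team_of[handset]] += score
    return dict(totals)

print(aggregate_by_team(team_of, scores))  # {'red': 1, 'blue': 2}
```

Running the same aggregation across rounds, and keeping a running total per team, is all that’s needed to support the team-competition scoring described above.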

31. Using One Handset for a Group or Team

Although one of the prime benefits of handsets is that every learner is encouraged to think and respond, handsets don’t have to be used only in a one-person one-handset format. Sometimes a greater number of audience members show up than expected. Sometimes budgets don’t allow for the purchase of handsets for every learner. Sometimes learners forget to bring their handsets. In addition, sometimes there are specific interactions that are more suited to group responding. When a group of learners has to make a single response, there has to be a mechanism for them to decide what response to make. Several exist, each having their own strengths and weaknesses.

32. Using Questions in Games

As several sales representatives have told me, one of the first things instructors ask about when being introduced to a particular audience response system is the gaming features. This excitement is understandable, because almost all classroom audiences respond energetically to games. Our enthusiasm as instructors must be balanced, however, with knowledge of the pluses and minuses of gaming. Just as with grading manipulations, games energize learners toward specific overt goals—namely scoring well on the game. If this energy is utilized in appropriate mathemagenic activity, it has benefits. On the other hand, games can be highly counterproductive as well.

33. Questions to Narrow the Options in Decision Making

Sometimes the audience in the room must make decisions about what to do. For example, a senior manager running an action-learning group may want to take a vote about which project to pursue given a slate of 15 possible projects. A professor in an upper-level seminar course might give students a vote in deciding which of the 10 possible topics to discuss in the final three weeks of the course. A supervisor might want her employees to narrow down the candidates for employee of the year. A primary school teacher might want to give her students a choice of field trip options. Audience response systems can be used in two ways to do this: single-round voting and double-round voting.
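The narrowing step in double-round voting is simply a tally that keeps only the top vote-getters, who then advance to a second, deciding round. A minimal sketch, with hypothetical ballot data (the option names are illustrative):

```python
from collections import Counter

def narrow_options(ballots, keep=3):
    """Tally one round of votes and keep the top `keep` options."""
    tally = Counter(ballots)
    return [option for option, _ in tally.most_common(keep)]

# Hypothetical first-round ballots collected from the audience.
ballots = ["proj_a", "proj_b", "proj_a", "proj_c", "proj_b", "proj_a", "proj_d"]
finalists = narrow_options(ballots, keep=2)
print(finalists)  # ['proj_a', 'proj_b']; these advance to the deciding round
```

In single-round voting the first tally is final; in double-round voting the same tally is rerun over just the finalists.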

34. Questions to Decide Go or No Go

Sometimes it’s beneficial to give our learners a chance to decide whether they’re ready to go on to the next topic. You might ask, “Are we ready to go ahead?” Or, “Are we ready to go ahead, or do I need to clarify this a bit more?” Using an audience response system has distinct advantages over hand-raising here because most learners are uncomfortable asking for additional instruction, even when they need it.

35. Perspective-Taking Questions

There are some topics that may benefit from encouraging learners to take the perspectives of others in answering questions. In other words, instead of only asking our learners to express their opinions, we can ask them to guess at the opinions of others. For example, we might ask our learners to guess the opinions of both rich and poor people on affirmative action, the importance of education, and so on.

36. Open-Ended Questions

Some people think that audience response systems lack potential because they only enable the use of multiple-choice questions. In contrast, the research on learning suggests to me that (a) multiple-choice questions can be powerful on their own, (b) variations of multiple-choice questions add to this power, and (c) open-ended questions can be valuable in conjunction with multiple-choice formats, for example by letting learners think first on their own, surfacing student ideas, or providing more authentic retrieval practice.

37. Matching

Matching questions are especially valuable if your learning goal is to enable learners to distinguish between closely related items. The matching format can also be useful for logistical reasons in asking more than one question at a time. Although the matching question has its uses, it is often overused by instructors who are simply trying to use non-multiple-choice questions. Often, the matching format only helps learners reinforce relatively low-level concepts, like definitions, word meaning, simple calculations, and the like. While this type of information is valuable, it’s not clear that the classroom is the best place to reinforce this type of knowledge.

38. Asking People to Answer Different Questions

Some audience response systems enable learners to simultaneously answer different questions. In other words, Sam might answer questions 1, 3, 5, 7, and 9, while Pat answers questions 2, 4, 6, 8, and 10. This feature provides an advantage only when it’s critical not to let (a) individual learners cheat off other learners, or (b) groups of learners overhear the conversations of other groups of learners. The biggest disadvantage to this tactic is that it makes post-question discussions particularly untenable. In any case, if you do find a unique benefit to having learners answer different questions simultaneously, it’s likely to be for information that is already well learned—where in-depth discussions are not needed.

39. Using Models of Facilitated Questioning

In the paper that details these 39 question types and methods, I attempted to lay bare the DNA of classroom questioning. I intentionally stripped questioning practices down to their essence in the hope of creating building blocks that you, my patient readers, can utilize to build your own interactive classroom sessions. For example, I talked specifically about using prequestions to focus attention, to activate prior knowledge, and to surface misconceptions. I didn’t describe the myriad permutations that pre- and postquestions might inhabit, for example, or any systematic combinations of the many other building blocks I described. While I didn’t describe them, many instructors have developed their own systematic methods—or what I will call “Models of Facilitated Questioning.” For example, in the paper I briefly describe Harvard Professor Eric Mazur’s “Peer Instruction” method and the University of Massachusetts’s Scientific Reasoning Research Institute and Department of Physics’ “Question-Driven Instruction” method.

Click to download the full report.

Great article on How to Create Great Teachers. It's focused on K-12 education primarily, but there is wisdom in the discussion relevant to workplace learning.

Here are the major points I take away:

  1. Great teachers need deep content knowledge.
  2. Great teachers need good classroom-management verbalization skills.
  3. Great teachers need their content knowledge to be fluently available to them in the context of typical classroom situations. To get this fluency, they need to practice in such situations—and practice linking actions (especially their verbal utterances) to specific classroom situations.