Key Information on Audience Response Learning:

Lectures are widely reviled for putting learners in a passive mode. On the other hand, lectures are relatively easy to implement, even with large numbers of learners. And regardless of the pluses and minuses, lectures are ubiquitous. While there aren’t many lectures in kindergarten, by third grade teachers are talking a lot and learners are listening. The college classroom is dominated by lecture. So are corporate training sessions, conference presentations, church sermons, public meetings, elder-hostel programs, and the local library’s evening speaker series. Lectures aren’t going away anytime soon, nor should they. Like all tools for learning, they provide certain unique advantages and have certain unique limitations.

Lectures can be modified in different ways to increase the amount of active learning—to ensure that learners are more fully engaged, have a more robust understanding of the learning material, are more likely to remember what they learned, and are more likely to utilize the information at a later time.

One such method to increase active learning is the “response card.” Response cards are provided to students so that each one can respond to instructor questions. Two types of response cards are available: (1) those that enable each learner to write his or her answer on the card (for example, with a dry-erase marker), and (2) those that enable learners to hold up preprinted answers (True or False; or A, B, C, or D, for example).

Research

While not a lot of good research has been done on response cards, the research seems to suggest that compared with the traditional method of having students raise their hands in response to questions, response cards improve learners’ classroom engagement, the amount they learn, and the amount they retain after a delay (Marmolejo, Wilder, & Bradley, 2004; Gardner, Heward, & Grossi, 1994; Kellum, Carr, & Dozier, 2001; Narayan, Heward, Gardner, Courson, & Omness, 1990; Christle & Schuster, 2003). Learners generally prefer response cards to simple hand-raising. Most of the research has focused on K-12 classrooms, with some research done in community colleges. The research has tended to focus on relatively low-level information and has not tested the value of response cards on higher-order thinking skills.

Recommendations

Getting learners to actively respond in lectures is certainly a worthwhile goal. Research has been fairly conclusive that learners learn better when they are actively engaged in learning (Bransford, Brown, & Cocking, 1999). Response cards may be one tool in the arsenal of methods to generate learner engagement. Of course, electronic keypads can be used in a similar way, at a significantly increased cost, with perhaps some added benefits as well. Still, at less than $30 a classroom, response cards may be worth a try.

Personally, I’m skeptical that audiences in adult training situations would be open to response cards. While 87% of college students rated the cards highly (Marmolejo, Wilder, & Bradley, 2004), the corporate audiences I’ve worked with over the years might find them childish or unnecessary (“Hey, why can’t we just raise our hands?”). On the other hand, electronic handsets are more likely to be accepted. Of course, such acceptance—whether we’re talking about response cards or electronic handsets—really depends on the relevance of the material and the questions used. If the questions are low-level rote memorization, adult audiences are likely to reject the instruction regardless of the technology employed.

Making lectures interactive has to be done with care. Adding questions and student responses can have negative consequences as well. When we ask questions, we signal to learners what to pay attention to. If we push our learners to think about low-level trivia, they will do that to the detriment of focusing on more important high-level concepts.

Limitations of the Research

The research on response cards tends to focus on low-level questions that are delivered all too frequently throughout lectures. Learners who have to answer a question every two minutes are being conditioned to focus on trivia, facts, and knowledge. Future research on response cards should focus on higher-level material in situations where more peer discussion is enabled.

Most of the research on response cards suffered from minor methodological difficulties (e.g., weaker-than-preferred comparison designs and small numbers of learners actually tracked) and ambiguity (e.g., in reading the research articles, it was often difficult to tell whether the in-class questions were repeated on the final quizzes—those used as dependent variables—and no inferential statistics were available to test hypotheses).

References

Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.

Christle, C. A., & Schuster, J. W. (2003). The effects of using response cards on student participation, academic achievement, and on-task behavior during whole-class, math instruction. Journal of Behavioral Education, 12(3), 147-165.

Gardner, R., Heward, W. L., & Grossi, T. A. (1994). Effects of response cards on student participation and academic achievement: A systematic replication with inner-city students during whole-class science instruction. Journal of Applied Behavior Analysis, 27, 63-71.

Kellum, K. K., Carr, J. E., & Dozier, C. L. (2001). Response-card instruction and student learning in a college classroom. Teaching of Psychology, 28(2), 101-104.

Marmolejo, E. K., Wilder, D. A., & Bradley, L. (2004). A preliminary analysis of the effects of response cards on student performance and participation in an upper division university course. Journal of Applied Behavior Analysis, 37, 405-410.

Narayan, J. S., Heward, W. L., Gardner, R., Courson, F. H., & Omness, C. K. (1990). Using response cards to increase student participation in an elementary classroom. Journal of Applied Behavior Analysis, 23, 483-490.

Questions produce cognitive effects in our learners and generate learning benefits. While it is beyond the scope of this webpage to delve into these benefits in great depth, the following list offers a flavor of the myriad ways that questioning strategies support deep and meaningful learning. Some of these benefits are inherent to questions, while others are possible if questions are well designed and well facilitated.

The following list of benefits is drawn from (a) the basic research on human learning, (b) the research literature on active learning, and (c) the practical utilization of active-learning classroom techniques, often in conjunction with audience response systems.

I’ve divided these 24 learning benefits into two sections. The first section covers the benefits that result almost inherently from the cognitive effects of questioning. The second continues the list, covering the benefits that are possible—those that are leveragable if questions are appropriately utilized.

There is more information about the benefits of questions in the report. Click to download the report.

The Inherent Learning Benefits of Questions

1. Prequestions Guide Learner Attention

Prequestions improve overall learning when they are presented to learners shortly before learning events. Prequestions help learners focus on the most important learning material they subsequently encounter.

2. Postquestions Guide Later Learner Processing

How learners approach new material is affected by previous questions. For example, Sagerman and Mayer (1987) found that learners did better on verbatim questions when they had previously gotten verbatim questions, and better on conceptual questions when they had previously gotten conceptual questions. Questions, then, not only have an effect on learning events that occur immediately after the questions, but also on subsequent learning events. Because questions create habits of mind, we need to be very careful that our questions are creating the right habits. Questions can have detrimental effects when we continually test learners on meaningless fragments of facts, figures, and folderol.

3. Questions Provide Repetition

Repetition is arguably the most important learning factor there is. It enables learners to remember things they’d forgotten, learn things they didn’t quite get the first time around, and strengthen and enrich what they already know. Repetition doesn’t imply verbatim repetition. Verbatim repetitions can sometimes be valuable, but they’re often boring. The true power of repetition is realized when we use paraphrasing, examples, case studies, role-plays, simulations, and questioning. Questions inherently provide repetitions. Consider the instructional sequence: (a) content, (b) question, (c) discussion, (d) feedback. This sequence provides four repetitions of the learning point. Each of these interactions prompts the learners to engage the learning point in some manner.
 
4. Questions Provide Retrieval Practice

The fundamental purpose of learning is to facilitate later retrieval of the learned information. While the term “retrieval” may conjure up images of simple recall, retrieval refers more broadly to drawing information from long-term memory into working memory. Therefore, retrieval occurs when we answer a question, when we think about how to solve a problem, when we engage in creativity, when we kick a soccer ball, when we are involved in a conversation, and when we play music. If our goal is to facilitate later retrieval, one of the best ways to support that retrieval is to prompt learners to retrieve information during learning. Practice makes perfect, and retrieval during learning not only provides practice but also makes the retrieved information more accessible in memory.

5. Questions Provide Learners with Feedback

Questions enable learners to get feedback on their retrieval attempts. Questions not only enable learners to evaluate their retrieval performance, but they can be used to help learners overcome their misconceptions and reinforce their tentative understandings. Researchers who have reviewed research articles on feedback have concluded that feedback was very effective in producing learning benefits. In fact, many investigators have been so sure of feedback’s effectiveness that they have simply assumed it improves learning and have gone on to discuss other variables that affect its impact. Studies that have compared giving feedback to not giving feedback generally have found fairly sizable improvements with feedback.

6. Questions Provide Instructors with Feedback

Questions not only provide learners with feedback, but they provide instructors with feedback as well. In a typical lecture, instructors get some feedback by watching the body language of learners and by listening to audience questions. This feedback tends to be quite impoverished. Learners are hesitant to admit their confusion in large rooms of peers. Instructors may tune out the feedback because it’s uncomfortable to acknowledge that their performance may be lacking. Body language may tell an instructor that learners are confused, but it can’t clarify the exact nature of the confusion. Well-designed questions can help pinpoint the comprehension issues and get all the learners involved in providing data about their comprehension. Moreover, instructors can modify their facilitation based on the feedback they receive. For example, Draper and Brown (2004) talk about the benefits of contingent teaching—where instructors change what they teach within a particular learning session based on learner responses to questions.

The Leveragable Learning Benefits of Questions

The list above highlighted the inherent learning benefits of questions—the advantages that questions almost automatically produce. The following list highlights the possible benefits—the gains that can be leveraged depending on the design of your questions and the quality of the work and discussions that supplement those questions. I have chosen to continue the numbering, instead of starting anew, because I want to emphasize that questions offer myriad ways to produce their “benefits.”

7. Prequestions Activate Prior Knowledge

One factor that propels learning is helping learners connect new knowledge to what they’ve already learned. Great instructors—whether they are teachers, religious leaders, managers, or political leaders—are adept at using metaphorical language to imbue a discussion with immediate meaning. The metaphor bridges the gap between what is well known and what is new. In the same way, we can help our learners learn by using questions that ask them to bring into working memory information they’ve already learned.

8. Questions Can Grab Attention

Questions by their very nature force learners to pay attention. While the drone of a lecture is more likely to keep learners in a state of daydreaming, questions prompt learners to reorient their minds to the content of the question. This has obvious learning benefits. Without attention, there is no learning.

9. Questions Can Provide Variety

Providing learners with a variety of learning methods can create learning, attention, and motivational advantages. Research has shown that variety helps people learn better, keeps them attentive longer, and motivates them to feel more interest in the topic. Even repeating concepts with paraphrased wording has advantages. The basic act of providing a question provides variety in comparison to lecture alone. Going beyond this basic mechanism, questions can provide variety by utilizing different question types. Questions can focus on the same concept but utilize different background situations. Instructors can facilitate questions differently, asking learners to answer individually sometimes, or having them discuss with partners, groups, or the whole class.

10. Questions Can Make the Learning Personal

Designing your lectures and discussions to help your learners see how the material relates to them personally has obvious value. It helps motivate the learners to pay attention and makes it more likely that learners will relate the new learning to their long-held knowledge structures. Even if your lectures and discussions are devoid of personal connection, questions can highlight the personal aspects of the material. By connecting the learning material to learners’ personal concerns, we can also help our learners learn outside our classrooms. Learners don’t learn only in the classroom. If we can generate thinking in the classroom that makes it likely that learners will spontaneously ponder relevant concepts while away from the classroom, we’ve doubled the returns on our classroom learning investment. In fact, we’ve done much more than that. When learning is personal, it’s much more likely to be remembered and utilized long into the future.
 
11. Questions Can Provide Spaced Repetitions

Repetitions of questions that are spaced apart in time are more effective than those that are massed together. Glover (1989b) found that repeating a test spaced by one day produced significantly higher retention than providing a test immediately after the material was learned. Even in a single classroom session, waiting to space a question after covering other unrelated material can provide substantial benefits. So, for example, you might present Content A, then Content B, and then provide a question regarding Content A. Similarly, you might present Content A, B, C, and D and then ask questions on A, B, C, and D. Homework and studying also provide spaced repetitions. For example, you might use a couple of questions at the end of class—without providing the answers—to spur learning outside the classroom. You could then include those questions in the following session and provide feedback.
 
12. Questions Can Highlight Boundary Conditions

Questions can be used to introduce learners to boundary conditions or test their knowledge of contingencies. For example, fifth graders may be taught that tolerance is good, but they also need to learn that tolerance of evil is not good. Managers can be taught to encourage their direct reports to help in making decisions, except in cases that have safety, ethical, or legal repercussions. Highlighting boundary conditions often brings a dose of reality to instruction, thus engaging learners by moving beyond stale platitudes. Real life is complicated. If we don’t acknowledge this to our learners, we not only do them a disservice, we lose their respect and attention.

13. Questions Can Highlight Common Misunderstandings

Learners often come to learning experiences with naïve understandings that make it difficult for them to learn new information. Their preconceptions may bias them against the new paradigms they have to learn. As many master instructors have concluded, one of the primary goals of instruction is to help learners unlearn their flawed beliefs and replace them with more accurate knowledge structures. Question answering reveals the truth of learners’ knowledge to learners and to the instructor. Such revelations enable learners to awaken to new constructs and test newly learned constructs. At the same time, these “teachable moments” provide instructors with special advantages in guiding and supporting further learning.

14. Questions Can Demonstrate Forgetting

Learners forget. It’s an immutable law of nature. As instructors, one of our primary goals is to ensure that our learners learn AND remember. Unfortunately, the typical learning environment is set up to make this difficult. First, learners are overly optimistic about their ability to remember, so during learning events they sometimes avoid using the kind of cognitive processing that supports long-term retention. For example, they tend to use simple rehearsal strategies as opposed to more elaborative processing. Second, learners are often too busy or distracted to devote enough time to learning. Third, learners who are graded often focus on getting good grades as opposed to supporting their long-term ability to remember. For example, they tend to cram instead of spacing their learning and practice over time. As recent research has shown, learners who are prompted to retrieve information from memory after a significant delay—typically a week or more—are much more likely afterwards to utilize cognitive processing that propels long-term remembering. If we want our learners to remember concepts over time, we can help them by providing them with questions well after we’ve moved on to different topic areas.

15. Questions Can Support Transfer to Related Situations/Topics

It is rare for knowledge learned in one topic area to be retrieved from memory when another topic is being considered. You may have heard of this as the problem of transfer. Even when learners are given a problem to solve that closely resembles another problem they already solved, they very rarely use the solution to the solved problem to solve the second problem. In current parlance, learners “just don’t get it” unless the connections are actually practiced or made incredibly obvious. Questions provide an obvious opportunity to help support transfer. We can use questions to provide authentic scenarios to directly practice transfer. We can also provide multiple questions on the same learning point—each utilizing a different background context.

16. Questions Can Prepare Learners for Future Decision-Making

Rarely is rote recall the primary goal of instruction. Often, we want learners to be able to retrieve information from memory to make decisions. For example, history teachers might want learners to remember the lessons of the U.S. involvement in the Vietnam War so that they can make better decisions about which political party to support. A biology teacher might want learners to remember how ecosystems work so that the learners will make more-informed decisions about recycling, eating, and purchasing a car. A leadership trainer might want learners to make better decisions about how to handle unstable employees. To support future decision-making, questions can be used that simulate future decision-making situations. In other words, if we give them authentic decisions to make, we’ll better prepare them to make future real-world decisions.
 
17. Questions Can Demonstrate Relevance to the Real World

Sometimes our learners aren’t preparing for specific future decision-making responsibilities, so the section immediately above may not seem to apply directly. Still, to keep learners engaged and to provide them with deep learning experiences, it may be beneficial to give them decision-making questions beyond what their futures may hold, other questions that aren’t decision-making related but do highlight the importance of the topic, or both.
 
18. Questions Can Help Learners Identify Their Assumptions

Questions are excellent vehicles to prompt students to identify their assumptions. Socrates used a series of questions to help pinpoint his learners’ misunderstandings. You can use a series of questions or one question to do the same.

19. Questions Can Encourage Attention to Difficult Content

When faced with extremely difficult content in a classroom situation, some learners become overwhelmed and just tune out. This can happen intentionally or automatically. Some learners will move into a state of performance anxiety that literally overloads the limited capacity of their working memories with off-task ruminations. Others will consciously tune out, expecting to be able to learn the material on their own outside the classroom. In either case, valuable learning time is lost. Questions can be used in these instances to partition the learning content into manageable chunks and to slow the flow of the lecture to give learners an opportunity to refocus their attention on processing the learning material.
 
20. Questions Can Demonstrate Learning over Time

Questions delivered in a pretest-posttest format, before and after the accompanying content, can demonstrate for learners how much they’ve learned. While this may not seem particularly advantageous—“Can’t they see how far they’ve come?”—learners often can’t remember their previous states of mind, so demonstrating their progress to them may be the only way to convince them of it. This has advantages for the learners, because it demonstrates that their learning efforts have value, making it more likely that they’ll approach future learning tasks with a bent toward perseverance. It has advantages for instructors as well, because it increases learner satisfaction, improves instructor ratings, and induces further learner engagement.

21. Questions Can Provide Practice in Learning with Others

Almost all learners need to be able to work with others—and learn with others. Certainly, in today’s knowledge economy, the ability to work with others is critical to success. There are at least three reasons to give learners practice in working with others. First, learning with others provides people with multiple perspectives and thus a richer learning experience. Second, learning with others usually improves individual learning outcomes. It improves attention to the task. It creates more elaborate memory pathways. It prompts retrieval practice to reinforce the concepts. Finally, learning with others is preparation for their real-world futures. It helps them practice articulating their thoughts. It gives learners practice interacting and working with others.
 
22. Questions Can Be Utilized to Gather Experimental Data

This benefit doesn’t relate to all classrooms, but it can be quite powerful when it is relevant. The idea is that we can actually collect data from our learners to elucidate our topic. For example, an instructor could prompt students to respond to a typical color-blindness test by selecting an answer with their handsets. To begin a discussion of perfect pitch, an instructor could ask students to listen to a musical note and ask them to select which note it is. Gathering experimental data is particularly appropriate when the class topics revolve around issues related to human beings, for example in courses in psychology, perception, decision making, ethics, and political science, among many other similar courses. A professor teaching a course on experimental psychology might replicate famous experiments that have been done.

23. Questions Can Prompt Out-of-the-Classroom Learning Activities

The more time learners spend learning, the more they learn. The correlation isn’t perfect—not all learning is created equal—but it’s still a strong positive correlation. Whether it’s a corporate classroom or a high-school chemistry lab, only so much learning can take place in the classroom. If learners can be encouraged to engage in meaningful mathemagenic (learning-creating) processing outside the classroom, their learning outcomes will be improved. Questions can prompt out-of-the-classroom learning in a number of ways. The most brutish way is through grading. If learners are graded on their handset responses, they’re more likely to prepare for classes. Note that this has to be done very carefully so as not to stifle learning in the classroom. I talk about this more in the report. We’ve already talked about how personally relevant questions can spontaneously promote out-of-the-classroom thinking related to the course content. If the questions relate to the learners’ real-world futures, cues in those future situations may remind the learners about the content they previously learned.
 
24. Questions Can Promote Thinking Skills

Helping learners digest facts, learn terminology, and understand complex topics is commendable, but not sufficient. Our learners won’t be fully prepared for their futures unless they develop thinking skills—methods to evaluate situations, solve problems, generate options, and make decisions. Questions, in conjunction with classroom facilitation and well-designed classroom exercises, can promote such thinking skills, encouraging learners to (a) generate multiple solutions, (b) categorize and classify, (c) discuss, summarize, and model, (d) strategize, justify, and plan, (e) reflect and evaluate, and (f) think about thinking and learning. Questions can also help learners (g) notice the most critical factors in a chaotic swarm of stimuli, (h) utilize hypothesis-generation tactics, (i) simplify complexity to within workable boundaries, (j) recognize when a proposed solution has been fully vetted, and (k) persevere in learning in the face of obstacles.

Click here to download the full report.

Many of us are inclined to see audience response systems only as a way to deliver multiple-choice and true-false questions. While this may be true in a literal sense, such a restricted conception can divert us from myriad possibilities for deep and meaningful learning in our classrooms.

The following list of 39 question types and methods is provided to show the breadth of possibilities. It is distilled from 85 pages of detailed recommendations in the white paper, Questioning Strategies for Audience Response Systems: How to Use Questions to Maximize Learning, Engagement, and Satisfaction, available free by clicking here.

NOTE from Will Thalheimer (2017): The report is focused on audience-response systems — and I must admit that it is a bit dated now in terms of the technology, but the question types are still a very potent list.

1. Graded Questions to Encourage Attendance

Questions can be used to encourage attendance, but there are dangers that must be avoided.

2. Graded Questions to Encourage Homework and Preparation

Questions can be used to encourage learners to spend time learning prior to classroom sessions, but there are dangers that must be avoided.

3. Avoiding the Use of One Correct Answer (When Appropriate)

Questions that don’t fulfill a narrow assessment purpose need not have right answers. Pecking for a correct answer does not always produce the most beneficial mathemagenic (learning creating) cognitive processing. We can give partial credit. We can have two answers be equally acceptable. We can let the learners decide on their own.

4. Prequestions that Activate Prior Knowledge

Questions can be used to help learners connect their new knowledge to what they’ve already learned, making it more memorable. For example, a cooking teacher could ask a question about making yogurt before introducing a topic on making cheese, prompting learners to activate their knowledge about using yogurt cultures before they begin talking about how to culture cheese. A poetry teacher could ask a question about patriotic symbolism before talking about the use of symbols in modern American poetry.

5. Prequestions that Surface Misconceptions

Learners bring naïve understandings to the classroom. One of the best ways to confront misconceptions is to bring them to the surface so that they can be addressed straight on. The Socratic Method is a prime example of this. Socrates asked a series of prequestions, thereby unearthing misconceptions and leading to a new, improved understanding.

6. Prequestions to Focus Attention

Our learners’ attention wanders. In an hour-long session, sometimes they’ll be riveted to the learning discussion, sometimes they’ll be thinking of other ideas that have been triggered, and sometimes they’ll be off in a daze. Prequestions (just like well-written learning objectives) can be used to help learners pay attention to the most important subsequent learning material. In fact, in one famous study, Rothkopf and Billington (1979) presented learners with learning objectives before they encountered the learning material. They then measured learning and eye movements and found that learners actually paid more attention to aspects of the learning material targeted by the learning objectives. Prequestions work the same way as learning objectives—they focus attention.

7. Postquestions to Provide Retrieval Practice

Postquestions—questions that come after the learning content has been introduced—can be used to reinforce what has been learned and to minimize forgetting. This is a very basic process. By giving learners practice in retrieving information from memory, we increase the probability that they’ll be able to do this in the future. Retrieval practice makes perfect.

8. Postquestions to Enable Feedback

Feedback is essential for learners and instructors. Corrective feedback is critical, especially when learners have misunderstandings. Providing retrieval practice with corrective feedback is especially important when learners are struggling with newly encountered or difficult material, and when their attention is likely to wander—for example, when they’re tired after a long day of training, when there are excessive distractions, or when the previous material has induced boredom.

9. Postquestions to Surface Misconceptions

We already talked about using prequestions to surface misconceptions. We can also use postquestions to surface misconceptions. Learners don’t always understand concepts after only one presentation of the material. Many an instructor has been surprised after delivering a “brilliant” exposition to find that most of their learners just didn’t get it.

10. Questions Prompting Analysis of Things Presented in Classroom

One of the great benefits of classroom learning is that it enables instructors to present learners with all manner of things. In addition to verbal utterances and marks on a white board, instructors can introduce demonstrations, videos, maps, photographs, illustrations, learner performances, role-plays, diagrams, screen shots, computer animations, etcetera. While these presentations can support learning just by being observed, questions on what has been seen can prompt a different focus and a deeper understanding.

11. Using Rubric Questions to Help Learners Analyze

In common parlance, the term “rubric” connotes a set of standards. Rubrics can be utilized in asking learners questions about what they experience in the classroom. Rubric questions, if they are well designed, can give learners practice in evaluating situations, activities, and events. Such practice is an awesome way to engage learners and prepare them for critical thinking in similar future situations. In addition, if rubrics are continually emphasized, learners will integrate their wisdom in their own planning and decision-making.

12. Questions to Debrief an In-Class Experience

Classrooms can also be used to provide learners with experiences in which they themselves participate. Learners can be asked to take part in role plays, simulations, case studies, and other exercises. It’s usually beneficial to debrief those exercises, and questions can be an excellent way to drive those discussions.

13. Questions to Surface Affective Responses

Not all learning is focused on the cold, steely arithmetic of increasing the inventory of knowledge. Learners can also experience deep emotional responses, many of which are relevant to the learning itself. In topics dealing with oppression, slavery, brutality, war, leadership, glory, and honor, learners aren’t getting the full measure of learning unless they experience emotion in some way. Learners can be encouraged to explore their affective responses by asking them questions.

14. Scenario-Based Decision-Making Questions

Scenario-based questions present learners with scenarios and then ask them to make a decision about what to do. These scenarios can take many forms. They can consist of short descriptive paragraphs or involved case studies. They can be presented in a text-only format or augmented with graphics or multimedia. They can put the learner in the protagonist’s role (“What are you going to do?”) or ask the learner to make a decision for someone else (“What should Dorothy do?”). The questions can be presented in a number of formats—as multiple-choice, true-false, check-all-that-apply, or open-ended queries.

15. Don’t Show Answer Right Away

There’s no rule that you have to show learners the correct response right after they answer the question. Such a reflexive behaviorist scheme can subvert deeper learning. Instructors have had great success in withholding feedback. For example, Harvard professor Eric Mazur’s (1997) Peer Instruction method requires learners to make an individual decision and then try to convince a peer of that decision—all before the instructor weighs in with the answer.

By withholding feedback, learners are encouraged to take some responsibility for their own beliefs and their own learning. Discussions with others further deepen the learning. Simply by withholding the answer, instructors can encourage strategic metacognitive processing, thereby sending learners the not-so-subtle message that it is they—the learners—who must take responsibility for learning.

16. Dropping Answer Choices

There are several reasons to drop answer choices after learners have initially responded to a question. You can drop incorrect answer choices to help focus further discussions on more plausible alternatives. You can drop an obviously correct choice to focus on more critical distinctions. You can drop an unpopular correct choice to prompt learners to question their assumptions and also to highlight the importance of examining unlikely options. Each of these methods has specific advantages.
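The mechanics here are simple enough to sketch. The following is a minimal illustration, not any vendor’s actual API: after the first round of responses, the instructor’s software simply re-poses the question with a reduced choice list (the function name and data shapes are hypothetical).

```python
def drop_choices(choices, to_drop):
    """Return the answer choices that remain after dropping some,
    preserving the original presentation order."""
    dropped = set(to_drop)
    return [c for c in choices if c not in dropped]

choices = ["A", "B", "C", "D"]
# Drop the implausible distractors to focus discussion on B vs. D.
print(drop_choices(choices, ["A", "C"]))  # → ['B', 'D']
```

The same helper covers all three tactics above: which choices you pass to `to_drop` (the implausible ones, the obviously correct one, or an unpopular correct one) determines the pedagogical effect.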

17. Helping Learners Transfer Knowledge to Novel Situations

“Transfer” is the idea that the learning that happens today ought to be relevant to other situations in the future. More specifically, transfer occurs when learners retrieve what they’ve learned in relevant future situations. As we’ve already discussed, the easiest and often the most potent way to promote transfer is to provide learners with practice in the same contexts—retrieving the same information—that they’ll be required to retrieve in future situations. But questions can also be used to prepare learners to retrieve information in situations that were not, or could not be, anticipated when designing the learning experience.

18. Making the Learning Personal

By making the learning personal, we help learners actively engage the learning material, we support mathemagenic cognitive processing, and we make it more likely that they’ll think about the learning outside of our classrooms, further reinforcing retention and utilization. Questions can be designed to relate to our learners’ personal experiences, thus bolstering learning.

19. Making the Material Important

Sometimes we can’t make the material directly personal or provide realistic decisions for learners to make, but we can still use questions to show the importance of the topic being discussed.

20. Helping Learners Question Their Assumptions

One of our goals in teaching is to get learners to change their thinking. Sometimes this requires learners to directly confront their assumptions. Questions can be written that force learners to evaluate the assumptions they bring to particular topic areas.

21. Using the Devil’s Advocate Tactic

In a classroom, when we play the devil’s advocate, we argue ostensibly to find flaws in the positions put forth. The devil’s advocate tactic can be used in a number of different ways. You can play the devil’s advocate yourself, or utilize your learners in that role. From a learning standpoint, when someone plays the devil’s advocate, learners are prompted to more fully process the learning material.

22. Data Slicing

Data slicing is the process of using one factor to help make sense of a second factor. For example, through the use of our audience response systems, we might examine how our learners’ socio-economic backgrounds affect their opinions of race relations. Data slicing can be done manually or automatically. It is particularly powerful in the classroom for demonstrating how audience characteristics may play a part in learners’ own perceptions or judgments.
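As a sketch of how such a slice might be computed from raw handset data—the field names and data shapes here are purely illustrative, not taken from any particular response system—a cross-tabulation is all that’s needed:

```python
from collections import Counter

def slice_responses(responses, factor_key, answer_key):
    """Cross-tabulate answers on one question by a second factor
    (e.g., a demographic characteristic gathered earlier).

    `responses` is a list of dicts, one per handset; the key names
    are hypothetical.
    """
    table = {}
    for r in responses:
        group = r[factor_key]
        table.setdefault(group, Counter())[r[answer_key]] += 1
    return table

# Hypothetical sample data: opinion answers sliced by background.
responses = [
    {"background": "urban", "opinion": "A"},
    {"background": "urban", "opinion": "B"},
    {"background": "rural", "opinion": "A"},
    {"background": "rural", "opinion": "A"},
]
print(slice_responses(responses, "background", "opinion"))
# urban: A=1, B=1; rural: A=2
```

Displaying the two groups’ answer distributions side by side is what makes the audience’s own characteristics visible to them.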

23. Using Questions for In-Class Experiments

For some topics, in-class experimentation—using the learners as the experimental participants—is very beneficial. It helps learners relate to the topic personally. It also highlights how scientific data is derived. For example, in a course on learning, psychology, or thinking, learners could be asked to remember words but could—unbeknownst to them—be primed to think about certain semantic associates and not others.

24. Prompting Learners to Make Predictions

Prediction-making can facilitate learning in many ways. It can be used to provide retrieval practice for well-learned information. It can be used to deepen learners’ understandings of boundary conditions, contingencies, and other complications. It can be used to engender wonder. It can be used to enable learners to check their own understanding of the concepts being learned.

25. Utilizing Student Questions and Comments

Our learners often ask the best questions. Sometimes a learner’s question hints at the outlines of his or her confusion—and the confusion of many others as well. Sometimes learners want to know about boundary conditions. Students can also offer statements that can improve the learning environment. They may share their comfort level with the topic, add their thoughts in a class discussion, or argue a point because they disagree. All of these interactions provide opportunities for a richer learning environment, especially if we—as instructors—can use these questions to generate learning.

26. Enabling Readiness When Learners are Aloof or Distracted

Let’s face it. Not all of our learners will come into our classrooms ready to learn. Some will be dealing with personal problems. Some will be attending because they have to—not because they want to. Some will be distracted with other stress-inducing responsibilities. Some will think the topic is boring, silly, or irrelevant to them. Fortunately, experienced instructors have discovered tricks that often are successful. Audience response technology can help.

27. Enabling Readiness When Learners Think They Know it All

Some learners will come to your classroom thinking they already know everything they need to know about the topic you’re going to discuss. There are two types of learners who feel this way—those who are delusional (they actually need the learning) and those who are quite clearheaded (they already know what they need to know). Using the right questions and gathering everyone’s responses can help you deal with both of these characters.

28. Enabling Readiness When Learners are Hostile

In almost every instructor’s life, there will come a day when one, two, or multiple learners are publicly hostile. Experienced instructors know that such hostility must be dealt with immediately—not ignored. Even a few bad apples can ruin the learning experience and the satisfaction of the whole classroom. Fortunately, there are ways to fend off the assault.

29. Using Questions with Images

Using images as part of the learning process is critical in many domains. Obvious examples are art appreciation, architecture, geology, computer programming, and film. But even for the least likely topics, such as poetry or literature, there may be opportunities. For example, a poetry teacher may want to display poems to ask learners about their physical layout. Images should not be thrown in willy-nilly. They should be used only when they help instructors meet their learning goals. Images should not be used just to make the question presentation look good. Research has shown that placing irrelevant images in learning material, even if those images seem related to the topic, can hurt learning results by distracting learners from the main points of the material. One easy rule: Don’t use images if they’re not needed to answer the question.

30. Aggregating Handset Responses for a Group or Team

Some handset brands enable responses of individual handsets to be aggregated. So for example, an instructor in a class of 50 learners might break the learners into 10 teams, with five people on a team. All 50 learners have a handset, but the responses from each team of five learners are aggregated in some way. This aggregation feature enables some additional learning benefits. Teamwork can be rewarded and competition between teams can add an extra element of motivation. Using aggregation scoring allows the instructor to encourage out-of-class activities where learners within a team help each other. Obviously, this will only work if the learning experience takes place over time. In such cases, aggregation can be used to build a learning community. Learners can be assigned to the same team or rotated on different teams, depending on the goals of instruction. Putting learners on one team encourages deeper relationships and eases the logistics for out-of-class learning. Rotating learners through multiple teams enables a greater richness of multiple perspectives and broader networking opportunities. It’s a tradeoff.
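One common aggregation scheme is a simple majority rule per team. The sketch below assumes hypothetical data shapes (a map of handset id to answer and a map of handset id to team); real systems aggregate internally, and other schemes—unanimity, first response, weighted votes—are equally possible:

```python
from collections import Counter, defaultdict

def team_majority_answers(individual_responses, team_of):
    """Aggregate individual handset responses into one answer per team,
    using a simple majority rule (one of several possible schemes).

    Both argument shapes are assumptions for illustration:
    `individual_responses` maps handset id -> answer;
    `team_of` maps handset id -> team name.
    """
    by_team = defaultdict(Counter)
    for handset, answer in individual_responses.items():
        by_team[team_of[handset]][answer] += 1
    # most_common(1) picks the majority answer within each team.
    return {team: counts.most_common(1)[0][0]
            for team, counts in by_team.items()}

responses = {"h1": "A", "h2": "A", "h3": "B", "h4": "B", "h5": "B"}
teams = {"h1": "red", "h2": "red", "h3": "red", "h4": "blue", "h5": "blue"}
print(team_majority_answers(responses, teams))  # red -> 'A', blue -> 'B'
```

Note that even under majority scoring, every learner still responds individually, so the think-for-yourself benefit of handsets is preserved.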

31. Using One Handset for a Group or Team

Although one of the prime benefits of handsets is that every learner is encouraged to think and respond, handsets don’t have to be used only in a one-person one-handset format. Sometimes a greater number of audience members show up than expected. Sometimes budgets don’t allow for the purchase of handsets for every learner. Sometimes learners forget to bring their handsets. In addition, sometimes there are specific interactions that are more suited to group responding. When a group of learners has to make a single response, there has to be a mechanism for them to decide what response to make. Several exist, each having their own strengths and weaknesses.

32. Using Questions in Games

As several sales representatives have told me, one of the first things instructors ask about when being introduced to a particular audience response system is the gaming features. This excitement is understandable, because almost all classroom audiences respond energetically to games. Our enthusiasm as instructors must be balanced, however, with knowledge of the pluses and minuses of gaming. Just as with grading manipulations, games energize learners toward specific overt goals—namely scoring well on the game. If this energy is utilized in appropriate mathemagenic activity, it has benefits. On the other hand, games can be highly counterproductive as well.

33. Questions to Narrow the Options in Decision Making

Sometimes the audience in the room must make decisions about what to do. For example, a senior manager running an action-learning group may want to take a vote about which project to pursue given a slate of 15 possible projects. A professor in an upper-level seminar course might give students a vote in deciding which of the 10 possible topics to discuss in the final three weeks of the course. A supervisor might want her employees to narrow down the candidates for employee of the year. A primary school teacher might want to give her students a choice of field trip options. Audience response systems can be used in two ways to do this: single-round voting and double-round voting.
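The first round of a double-round vote just narrows a long slate to a shortlist; the winner comes from a second vote on that shortlist. A minimal sketch of the first-round tally, with hypothetical vote data:

```python
from collections import Counter

def first_round_shortlist(votes, shortlist_size):
    """Double-round voting, round one: tally each audience member's
    vote and keep only the top options for the runoff round.
    The vote format is an assumption for illustration."""
    tally = Counter(votes)
    return [option for option, _ in tally.most_common(shortlist_size)]

# A slate of projects narrowed to a shortlist of 3 for the final vote.
votes = ["P1", "P2", "P1", "P3", "P4", "P1", "P2", "P5", "P2", "P3"]
print(first_round_shortlist(votes, 3))  # → ['P1', 'P2', 'P3']
```

Single-round voting is just the degenerate case: one tally, and the top option wins outright.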

34. Questions to Decide Go or No Go

Sometimes it’s beneficial to give our learners a chance to decide whether they’re ready to go on to the next topic. You might ask, “Are we ready to go ahead?” Or, “Are we ready to go ahead, or do I need to clarify this a bit more?” Using an audience response system has distinct advantages over hand-raising here because most learners are uncomfortable asking for additional instruction, even when they need it.

35. Perspective-Taking Questions

Some topics may benefit from encouraging learners to take the perspectives of others in answering questions. In other words, instead of only asking our learners to express their own opinions, we can ask them to guess at the opinions of others. For example, we might ask our learners to guess the opinions of both rich and poor people on affirmative action, the importance of education, and so on.

36. Open-Ended Questions

Some people think that audience response systems lack potential because they only enable the use of multiple-choice questions. In contrast, the research on learning suggests to me that (a) multiple-choice questions can be powerful on their own, (b) variations of multiple-choice questions add to this power, and (c) open-ended questions can be valuable in conjunction with multiple-choice formats, for example by letting learners think first on their own, surfacing student ideas, and providing more authentic retrieval practice.

37. Matching

Matching questions are especially valuable if your learning goal is to enable learners to distinguish between closely related items. The matching format can also be useful for logistical reasons in asking more than one question at a time. Although the matching question has its uses, it is often overused by instructors who are simply trying to use non-multiple-choice questions. Often, the matching format only helps learners reinforce relatively low-level concepts, like definitions, word meanings, simple calculations, and the like. While this type of information is valuable, it’s not clear that the classroom is the best place to reinforce this type of knowledge.

38. Asking People to Answer Different Questions

Some audience response systems enable learners to simultaneously answer different questions. In other words, Sam might answer questions 1, 3, 5, 7, and 9, while Pat answers questions 2, 4, 6, 8, and 10. This feature provides an advantage only when it’s critical not to let (a) individual learners cheat off other learners, or (b) groups of learners overhear the conversations of other groups of learners. The biggest disadvantage to this tactic is that it makes post-question discussions particularly untenable. In any case, if you do find a unique benefit to having learners answer different questions simultaneously, it’s likely to be for information that is already well learned—where in-depth discussions are not needed.

39. Using Models of Facilitated Questioning

In the paper that details these 39 question types and methods, I attempted to lay bare the DNA of classroom questioning. I intentionally stripped questioning practices down to their essence in the hope of creating building blocks that you, my patient readers, can utilize to build your own interactive classroom sessions. For example, I talked specifically about using prequestions to focus attention, to activate prior knowledge, and to surface misconceptions. I didn’t describe the myriad permutations that pre- and postquestions might take, for example, or any systematic combinations of the many other building blocks I described. While I didn’t describe them, many instructors have developed their own systematic methods—or what I will call “Models of Facilitated Questioning.” For example, in the paper I briefly describe Harvard Professor Eric Mazur’s “Peer Instruction” method and the “Question-Driven Instruction” method of the University of Massachusetts’s Scientific Reasoning Research Institute and Department of Physics.

Click to download the full report.

The following articles are some of the most cited research articles in regard to audience response systems and learning. They are briefly annotated so you can quickly determine their quality and meaning.

 

Bruff, D. (2009). Teaching with classroom response systems: Creating active learning environments. San Francisco: Jossey-Bass.

Type of Publication?

Book. 

Covered What?

For the most part provides an overview of how college professors are teaching with audience response systems.

Strengths?

Does a really nice job covering some of the best ways to encourage deep student learning in the classroom. 

Weaknesses?

Looks at some of the best of “what is” in college teaching—admittedly some really good stuff—but tends to assume that this is all that is possible. Unfortunately, there are many more opportunities—based on what we know about learning—that are not included in the book.

Evaluation?

A good book, with good ideas, but this shouldn’t be seen as the full spectrum of what is possible in using audience response systems. Still, if you want to teach thoughtfully with audience response systems, you should read this book.

Buy it?

Click here to purchase from Amazon.com: Teaching with Classroom Response Systems: Creating Active Learning Environments

 

Abrahamson, L. (2006). A brief history of networked classrooms: Effects, cases, pedagogy, and implications. In David A. Banks (Ed.), Audience response systems in higher education: Applications and cases (pp. 1-25).

Type of Publication?

Book chapter. 

Reviewed What?

Reviewed the history of the use of audience response systems. 

Strengths?

Interesting and enlightening review. Who knew, for example, that audience response systems were first used in the 1960s? 

Evaluation?

Provides a wide-ranging introduction to the history and thinking around audience response systems.

 

Barak, M., Lipson, A., & Lerman, S. (2006). Wireless laptops as means for promoting active learning in large lecture halls. Journal of Research on Technology in Education, 38(3), 245-263.

Type of Publication?

Descriptive article in refereed journal on learning.

Tested What?

Did not compare. Instead, the authors describe their experiences using laptops to teach computer science. They did measure student reactions.

Strengths?

One of the first articles to examine the use of wireless laptops in the classroom.

Weaknesses?

No comparison data.

Evaluation?

Not much evidence for effectiveness.

 

Beatty, I. D., Gerace, W. J., Leonard, W. J., & Dufresne, R. J. (2006). Designing effective questions for classroom response system teaching. American Journal of Physics, 74(1), 31-39.

Type of Publication?

Practice review article in refereed journal on physics.

Reviewed What?

The University of Massachusetts Physics Department’s approach to using audience response systems.

Strengths?

Very thoughtful review of their approach to using audience response systems in teaching physics.

Weaknesses?

It’s just one approach.

Evaluation?

Great article by very thoughtful practitioners. Recommended for non-physics folks as well. They make a nice case that instructors ought to have more than just content goals for their learners. Instructors ought to also have a process (cognitive) goal, and a metacognitive goal. For their physics courses, the process goals include 12 “habits of mind” that all their learners are expected to come away with as well.

 

Carnaghan, C., & Webb, A. (2005). Investigating the effects of group response systems on learning outcomes and satisfaction in accounting education. Retrieved on January 1, 2007 from www.learning.uwaterloo.ca/LIF/responsepad_june20051.pdf

Type of Publication?

Research article (available online) on learning.

Tested What?

Compared class sessions that used an audience response system with class sessions that did not, where both types of sessions utilized active-engagement techniques. Looked at learner satisfaction, objective measures of learner engagement, and at learning results.

Strengths?

Very strong research design. Compared same learners in a counterbalanced design using audience response systems in half the class sessions and not using them in other half of sessions. Measured learning results, not just learner satisfaction ratings.

Weaknesses?

Mostly minor issues, except the fact that the authors didn’t cite raw learning results, making analysis difficult. Can’t rule out instructor-enthusiasm effects. Measures of learner engagement are not convincing because they don’t account for handset responses and they are analyzed by only one observer who was not blind to condition.

Evaluation?

Seems to demonstrate improvements in learning on exam questions that were most similar to audience-response questions used in class, but no improvement for exam questions unrelated to class questions. Also shows the typical result of high learner satisfaction with audience response systems. However, while the learners rated the audience-response systems highly, on questions about the course in general, the ratings were similar whether learners had just finished the series of sessions using (or not using) audience response systems.

 

Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970-977.

Type of Publication?

Research and practice review article in refereed journal on physics.

Reviewed What?

Eric Mazur’s (and Catherine Crouch’s) Peer Instruction practice and results. Research results compare classes taught in traditional ways to those taught with the Peer-Instruction methods.

Strengths?

Peer Instruction is one of the most widespread methodologies in use and the authors do a nice job of covering the practice and results. At least some of the research comparisons use standardized tests of physics competence, providing good face validity and lessening opportunities for bias. Many instructors utilized.

Weaknesses?

Peer Instruction is just one approach. Research comparisons are against previously taught traditional classes, leaving open the possibility that learning gains were due to differences in instructor enthusiasm and instructor preparation. Also, because no pretests were given in the traditional courses, we can’t rule out bias due to differences in learners.

Evaluation?

Great article by thoughtful practitioners. Research comparisons, while not perfect, are suggestive of the benefits of Peer Instruction methodologies. Note that authors do NOT focus on the use of audience response systems. In fact, in the research cited they didn’t always use them. In an endnote they say, “We did not see any significant changes in student learning on introducing the classroom network system, and find the main advantages of the [system] are anonymity of student responses and data collection…” (page 976, endnote 9). On the other hand, the improvements in results they cite (in Figure 2 on page 971 and Table 1 on page 972) all come after the audience response systems were introduced. In any case, the study did not test audience response systems vs. no audience response systems. It examined the Peer Instruction method.

 

Dori, Y. J., & Belcher (2005). How does technology-enabled active learning affect undergraduate students’ understanding of electromagnetism concepts? The Journal of the Learning Sciences, 14(2), 243-279.

Type of Publication?

Research article in refereed journal on learning.

Tested What?

Active learning methods in a specially-designed classroom that enabled simulation and visualization versus traditional lecture method in a standard classroom.

Strengths?

Interesting discussion.

Weaknesses?

Major methodological weaknesses. Comparison groups likely sampled from different populations. Conflated active learning with the many technology interventions. The treatment group received conceptual questions. The control group did not. Couldn’t rule out effects of exciting new classroom or instructor-enthusiasm effects.

Evaluation?

Major methodological weaknesses. Moreover, can’t differentiate between effects of audience response systems and the many other variables in play.

 

Draper, S. W., & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94.

Type of Publication?

Research article in refereed journal on learning.

Tested What?

Student opinions of audience response systems.

Strengths?

Looked at lots of classrooms and instructors. Very thoughtful analysis of potential benefits.

Weaknesses?

Only measured student opinions. No comparison with non-ARS situation.

Evaluation?

Methodology weak, but can tell us learner opinion, and did show improvements of opinion over time.

 

Draper, S. W., & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94.

Type of Publication?

Descriptive article in refereed journal on learning.

Tested What?

Did not compare. Instead, the authors describe their institution’s experiences using audience response systems to improve classroom sessions. They did measure learner and instructor reactions.

Strengths?

Looked at many different topics taught with many different learners and instructors. Thoughtful analysis.

Weaknesses?

No comparison data. No data on learning improvements, only learner and instructor opinions.

Evaluation?

No evidence for learning effectiveness, but some suggestive evidence for learner satisfaction (though there is no comparison to traditional methods). Very thoughtful analysis with many helpful suggestions for how to use audience response technology.

 

Dufresne, R. J., Gerace, W. J., & Wenk, L. (1996). Classtalk: A classroom communication system for active learning. Journal of Computing in Higher Education, 7, 3-47.

Type of Publication?

Descriptive article in refereed journal on learning.

Tested What?

Did not compare. Instead, the authors describe their experiences using the audience response system and show some graphs of student reactions to using the response systems and active engagement.

Strengths?

An early look at how audience response systems were used.

Weaknesses?

No comparison data.

Evaluation?

Not much evidence for effectiveness.

 

Duncan, D. (2005). Clickers in the classroom: How to enhance science teaching using classroom response systems. San Francisco: Addison Wesley.

Type of Publication?

Short book (pamphlet).

Reviewed What?

Provides an introduction to using handsets in the classroom.

Strengths?

Covers many important areas. Good introduction.

Weaknesses?

Recommendations are limited to only a few questioning strategies. Book now a bit dated. Not generally available unless you contact publisher.

Evaluation?

Good primer. Could be better.

 

Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66, 64-74.

Type of Publication?

Research article in refereed journal on physics.

Tested What?

Active learning methods vs. traditional methods in classrooms.

Strengths?

Lots of classrooms, instructors, and institutions. Used nicely standardized relevant test in a pretest-posttest design.

Weaknesses?

Data seriously biased by collection method. Can’t rule out instructor bias and instructor preparation effects.

Evaluation?

Focuses not on ARS usage, but on active learning. Poor methodology makes results dubious.

 

Nicol, D. J., & Boyle, J. T. (2003). Peer instruction versus class-wide discussion in large classes: A comparison of two interaction methods in the wired classroom. Studies in Higher Education, 28(4), 457-473.

Type of Publication?

Research article in refereed journal on learning.

Tested What?

Compared learner ratings on both peer instruction and classwide discussions, both of which were used in conjunction with audience response systems. 

Strengths?

Used a within-subject design, examining the feelings of the same students to both peer-instruction techniques and classwide discussion techniques. Used several methods to gather learner opinions.

Weaknesses?

Measured only learner opinions, not actual learning results.

Evaluation?

Shows that learners seem to prefer peer-instruction techniques to classwide-discussion techniques (when both are used with audience response systems).

 

Poulis, J., Massen, C., Robens, E., & Gilbert, M. (1998). Physics lecturing with audience paced feedback. American Journal of Physics, 66(5), 439-441.

Type of Publication?

Short research-like article in refereed journal on physics.

Tested What?

Compared examination pass rates over the years when physics classes used, or did not use, simple one-button handsets.

Strengths?

Used an ABA design, measuring actual learning results, comparing no-handset classes in Years 1 and 2 versus handset classes in Years 3 and 4, and no-handset classes in Years 5 and 6.

Weaknesses?

Methodology section is too brief to really assess the experimental methodology. The one-button handsets in use are much more basic than today’s handsets. Use of handsets was conflated with active-engagement techniques, so we can’t be sure what produced the effect, handsets or active-engagement. No statistics were used, making it difficult to assess comparisons, though the handset classes appear to generally—but not always—outperform the no-handset classes.

Evaluation?

Shows suggestive improvements in learning due to active engagement and handset use, but methodology is quite suspect.

Abrahamson, L. (2006). A brief history of networked classrooms: Effects, cases, pedagogy, and implications. In David A. Banks (Ed.) Audience response systems in higher education: Applications and cases (pp. 1-25).


Type of Publication?

Book chapter.

Reviewed What?

Reviewed the history of the use of audience response systems.

Strengths?

Interesting and enlightening review. Who knew, for example, that audience response systems were first used in the 1960’s?

Evaluation?

Provides a wide-ranging introduction to the history and thinking around audience response systems.

 

Barak, M., Lipson, A., & Lerman, S. (2006). Wireless laptops as means for promoting active learning in large lecture halls. Journal of Research on Technology in Education, 38(3), 245-263.


Type of Publication?

Descriptive article in refereed journal on learning.

Tested What?

Did not compare. Instead, the authors describe their experiences using laptops to teach computer science. They did measure student reactions. 

Strengths?

One of the first articles to examine the use of wireless laptops in the classroom.

Weaknesses?

No comparison data.

Evaluation?

Not much evidence for effectiveness.

 

Beatty, I. D., Gerace, W. J., Leonard, W. J., & Dufresne, R. J. (2006). Designing effective questions for classroom response system teaching. American Journal of Physics, 74(1), 31-39.


Type of Publication?

Practice review article in refereed journal on physics.

Reviewed What?

The University of Massachusetts physics department's approach to using audience response systems.

Strengths?

Very thoughtful review of their approach to using audience response systems in teaching physics.

Weaknesses?

It’s just one approach.

Evaluation?

Great article by very thoughtful practitioners. Recommended for non-physics folks as well. They make a nice case that instructors ought to have more than just content goals for their learners. Instructors ought also to have process (cognitive) goals and metacognitive goals. For their physics courses, the process goals include 12 “habits of mind” that all learners are expected to come away with.

 

Carnaghan, C., & Webb, A. (2005). Investigating the effects of group response systems on learning outcomes and satisfaction in accounting education. Retrieved January 1, 2007, from www.learning.uwaterloo.ca/LIF/responsepad_june20051.pdf


Type of Publication?

Research article (available online) on learning.

Tested What?

Compared class sessions that used an audience response system with class sessions that did not, where both types of sessions utilized active-engagement techniques. Looked at learner satisfaction, objective measures of learner engagement, and at learning results.

Strengths?

Very strong research design. Compared the same learners in a counterbalanced design, using audience response systems in half the class sessions and not using them in the other half. Measured learning results, not just learner satisfaction ratings.

Weaknesses?

Mostly minor issues, except that the authors didn't report raw learning results, making analysis difficult. Can't rule out instructor-enthusiasm effects. Measures of learner engagement are not convincing because they don't account for handset responses and they were analyzed by only one observer, who was not blind to condition.

Evaluation?

Seems to demonstrate improvements in learning on exam questions that were most similar to the audience-response questions used in class, but no improvement on exam questions unrelated to the class questions. Also shows the typical result of high learner satisfaction with audience response systems. However, while learners rated the audience response systems themselves highly, their ratings of the course in general were similar whether they had just finished the series of sessions that used audience response systems or the series that did not.

 

Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970-977.


Type of Publication?

Research and practice review article in refereed journal on physics.

Reviewed What?

Eric Mazur’s (and Catherine Crouch’s) Peer Instruction practice and results. Research results compare classes taught in traditional ways to those taught with the Peer-Instruction methods.

Strengths?

Peer Instruction is one of the most widespread methodologies in use, and the authors do a nice job of covering the practice and results. At least some of the research comparisons use standardized tests of physics competence, providing good face validity and lessening opportunities for bias. Many instructors were involved.

Weaknesses?

Peer Instruction is just one approach. Research comparisons are against previously-taught traditional classes, leaving open the possibility that learning gains were due to differences in instructor enthusiasm and instructor preparation. Also, because no pretests were given in the traditional courses, we can't rule out bias due to differences in learners.

Evaluation?

Great article by thoughtful practitioners. Research comparisons, while not perfect, are suggestive of the benefits of Peer Instruction methodologies. Note that authors do NOT focus on the use of audience response systems. In fact, in the research cited they didn’t always use them. In an endnote they say, “We did not see any significant changes in student learning on introducing the classroom network system, and find the main advantages of the [system] are anonymity of student responses and data collection…” (page 976, endnote 9). On the other hand, the improvements in results they cite (in Figure 2 on page 971 and Table 1 on page 972) all come after the audience response systems were introduced. In any case, the study did not test audience response systems vs. no audience response systems. It examined the Peer Instruction method.

 

Dori, Y. J., & Belcher, J. (2005). How does technology-enabled active learning affect undergraduate students’ understanding of electromagnetism concepts? The Journal of the Learning Sciences, 14(2), 243-279.


Type of Publication?

Research article in refereed journal on learning.

Tested What?

Active learning methods in a specially-designed classroom that enabled simulation and visualization, versus the traditional lecture method in a standard classroom.

Strengths?

Interesting discussion.

Weaknesses?

Major methodological weaknesses. Comparison groups likely sampled from different populations. Conflated active learning with the many technology interventions. The treatment group received conceptual questions. The control group did not. Couldn’t rule out effects of exciting new classroom or instructor-enthusiasm effects.

Evaluation?

Because of the major methodological weaknesses, we can't differentiate between the effects of audience response systems and the many other variables in play.

 

Draper, S. W., & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94.


Type of Publication?

Research article in refereed journal on learning.

Tested What? 

Student opinions of audience response systems.

Strengths?

Looked at lots of classrooms and instructors. Very thoughtful analysis of potential benefits.

Weaknesses?

Only measured student opinions. No comparison with non-ARS situation.

Evaluation?

The methodology is weak, but the study does tell us about learner opinion, and it showed improvements in opinion over time.

 

Draper, S. W., & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94.


Type of Publication?

Descriptive article in refereed journal on learning.

Tested What?

Did not compare. Instead, the authors describe their institution’s experiences using audience response systems to improve classroom sessions. They did measure learner and instructor reactions.

Strengths?

Looked at many different topics taught with many different learners and instructors. Thoughtful analysis.

Weaknesses?

No comparison data. No data on learning improvements, only learner and instructor opinions.

Evaluation?

No evidence for learning effectiveness, but some suggestive evidence for learner satisfaction (though there is no comparison to traditional methods). Very thoughtful analysis with many helpful suggestions for how to use audience response technology.

 

Dufresne, R. J., Gerace, W. J., & Wenk, L. (1996). Classtalk: A classroom communication system for active learning. Journal of Computing in Higher Education, 7, 3-47.


Type of Publication?

Descriptive article in refereed journal on learning.

Tested What?

Did not compare. Instead, the authors describe their experiences using the audience response system and show some graphs of student reactions to using the response systems and active engagement.

Strengths?

An early look at how audience response systems were used.

Weaknesses?

No comparison data.

Evaluation?

Not much evidence for effectiveness.

 

Duncan, D. (2005). Clickers in the classroom: How to enhance science teaching using classroom response systems. San Francisco: Addison Wesley.


Type of Publication?

Short book (pamphlet).

Reviewed What?

Provides an introduction to using handsets in the classroom.

Strengths?

Covers many important areas. Good introduction.

Weaknesses?

Recommendations are limited to only a few questioning strategies. The book is now a bit dated, and not generally available unless you contact the publisher.

Evaluation?

Good primer. Could be better.

 

Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66, 64-74.


Type of Publication?

Research article in refereed journal on physics.

Tested What?

Active learning methods vs. traditional methods in classrooms.

Strengths?

Lots of classrooms, instructors, and institutions. Used a well-standardized, relevant test in a pretest-posttest design.

Weaknesses?

Data seriously biased by collection method. Can’t rule out instructor bias and instructor preparation effects.

Evaluation?

Focuses not on ARS usage, but on active learning. Poor methodology makes results dubious.

 

Nicol, D. J., & Boyle, J. T. (2003). Peer instruction versus class-wide discussion in large classes: A comparison of two interaction methods in the wired classroom. Studies in Higher Education, 28(4), 457-473.


Type of Publication?

Research article in refereed journal on learning.

Tested What?

Compared learner ratings on both peer instruction and classwide discussions, both of which were used in conjunction with audience response systems.  

Strengths?

Used a within-subject design, examining the same students' reactions to both peer-instruction and classwide-discussion techniques. Used several methods to gather learner opinions.

Weaknesses?

Measured only learner opinions, not actual learning results.

Evaluation?

Shows that learners seem to prefer peer-instruction techniques to classwide-discussion techniques (when both are used with audience response systems).

 

Poulis, J., Massen, C., Robens, E., & Gilbert, M. (1998). Physics lecturing with audience paced feedback. American Journal of Physics, 66(5), 439-441.


Type of Publication?

Short research-like article in refereed journal on physics.

Tested What?

Compared examination pass rates over the years when physics classes used, or did not use, simple one-button handsets. 

Strengths?

Used an ABA design, measuring actual learning results: comparing no-handset classes in Years 1 and 2 versus handset classes in Years 3 and 4, and then no-handset classes in Years 5 and 6.

Weaknesses?

Methodology section is too brief to really assess the experimental methodology. The one-button handsets in use are much more basic than today’s handsets. Use of handsets was conflated with active-engagement techniques, so we can’t be sure what produced the effect, handsets or active-engagement. No statistics were used, making it difficult to assess comparisons, though the handset classes appear to generally—but not always—outperform the no-handset classes.

Evaluation?

Shows suggestive improvements in learning due to active engagement and handset use, but methodology is quite suspect.


Questioning Strategies for Audience Response Systems:
How to Use Questions to Maximize Learning, Engagement, and Satisfaction

by Dr. Will Thalheimer


Introduction

    The buzz in the learning industry is focused on e-learning, m-learning, wikis, and blogs; but one of the most powerful learning technologies is being overlooked, probably because it’s an in-the-classroom technology—audience response systems. In this research-to-practice white paper I offer a blueprint for how to use audience response systems to maximize higher-order learning in the classroom and beyond.

What One Reader Wrote to Me:

Dr. Thalheimer,

Just wanted to drop you a little note this morning to express my gratitude for your paper “Questioning Strategies for Audience Response Systems: How to Use Questions to Maximize Learning, Engagement, and Satisfaction.”

A friend recommended that I read it to prepare for a Higher Order Questioning staff development class that she and I are teaching together (in conjunction with some CPS [audience response] training we’re offering). To tell you the truth, I really wasn’t looking forward to reading it because I expected it to be dry and full of boring I’m-trying-to-sound-snobbily-intellectual writing, but I LOVED it. 🙂

I enjoyed your approachable style and dry sense of humor so much I read all the way through (including the endnotes!) and had many a good laugh along the way. In addition to being a blast to read, the paper challenged and inspired me to find new ways to push my questioning skills to a higher level for the next school year.

Thanks again, for the inspiration and for the great read. I’ll be checking out your website later today and hope to find that equally enjoyable.

    Sincerely,

    Liz Walhof

    Spanish Teacher
    Colorado


From the Paper’s Introduction

“Audience response systems have enormous potential for transforming lectures from dry recitals into rich jam sessions of deeply resonant learning. The technology is widely available, but the key to success is not in the technology; it’s in the instruction. To maximize meaningful learning, instructors must become adept in using questioning and discussion techniques. Unfortunately, some of us may come to believe that we can simply sprinkle our lectures with a few multiple-choice questions. This approach is emphatically inadequate, and is simply not worthy of our profession.

This report provides a near-exhaustive list of questioning strategies, and a comprehensive guide on using questions to facilitate classroom learning. No other resource exists that is research-based and comprehensive, while also being practical and useful. It has been designed specifically to provide practical guidance for trainers, teachers, and professors so that their learners—whether they are eight, forty-eight, or eighty years old—can experience deep and meaningful learning.”

Special thanks to eInstruction for agreeing to license the paper for distribution to their clients. Such underwriting helps move the audience-response field forward and demonstrates an enlightened commitment to effective learning in classrooms of all types throughout the world. Other underwriting opportunities are available for research on audience-response learning. Contact Dr. Thalheimer with inquiries.


Additional Information

    Number of Pages: 124
    Number of Research Citations: 54
    Publication Date: 2007
    Available to you immediately as a downloadable electronic file (PDF).
    Value: $495.00 (US) but FREE to you.



This page is offered to help you search for more information about audience response technology.

Unfortunately, the field has not yet moved toward a single label for the technology.


Search Terms

Terms to search on:

  1. audience response systems
  2. student response systems
  3. classroom response systems
  4. clickers (most hits on Google, but unfortunately associated with animal training)


Manufacturers/Developers:

Note: Some offer clickers; some offer technology where clickers aren’t needed.

IMPORTANT NOTE: In addition to these websites there are dozens of cell-phone apps that you can find for use on iPhone or Android phones.


Classroom audience response systems provide learners with handsets (or other input devices) that enable them to respond to instructor questions or other queries. Learner inputs are typically compiled in a database and are displayed through a projection system so that learners and instructors can see the results. Today’s audience response systems typically include (a) handsets, (b) a receiver to gather learner inputs, and (c) software to compile, capture, and display learner inputs. In addition, these systems require (d) a computer, and (e) a projection system. When used for data gathering, the systems can be augmented with (f) spreadsheet software. Older response systems were often hardwired, whereas most current systems are portable and wireless.
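To make the compile-and-display step concrete, here is a minimal, purely hypothetical Python sketch of what the system's software does with learner inputs: gather one answer per handset, tally the counts, and compute the percentage breakdown that the projection system would chart. The function name and data shapes are illustrative assumptions, not any vendor's actual API.

```python
from collections import Counter

def tally_responses(responses):
    """Tally one answer per handset into a percentage breakdown.

    responses: dict mapping a handset ID to the chosen option (e.g. 'A'-'D').
    Returns a dict mapping each option to the (rounded) percentage of
    handsets that chose it, in option order.
    """
    counts = Counter(responses.values())
    total = len(responses) or 1  # avoid division by zero when no one answers
    return {option: round(100 * n / total) for option, n in sorted(counts.items())}

# Example: six handsets answering one multiple-choice question.
votes = {"h1": "A", "h2": "C", "h3": "C", "h4": "B", "h5": "C", "h6": "A"}
print(tally_responses(votes))  # {'A': 33, 'B': 17, 'C': 50}
```

In a real system this tally would be fed to charting software for display; the sketch only illustrates the data flow from individual handset inputs to the aggregate results that the class sees.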

The following graphic diagram shows the most critical elements of audience response learning.

 

[Graphic: the most critical elements of audience response learning]

Although others who have diagrammed these systems often omit the instructor—probably to avoid cluttering the diagram—I include the instructor to highlight the importance of instructor facilitation, question development, and session organization. One thing that is critical, but hidden in the diagram, is the software that runs the audience response interface. Note that some instructors prefer two projectors/screens, using one screen to show the question and one to display either (a) the acknowledgement of handset responses and/or (b) the graph of the answer results. Finally, note that due to space limitations the graphic above does not depict the learners working in groups or involved in discussion, a key component of the learning process.