Research Annotated
The following articles are among the most cited research articles on audience response systems and learning. They are briefly annotated so you can quickly judge their quality and meaning.
Bruff, D. (2009). Teaching with classroom response systems: Creating active learning environments. San Francisco: Jossey-Bass.
Type of Publication?
Book.
Covered What?
For the most part, provides an overview of how college professors are teaching with audience response systems.
Strengths?
Does a really nice job covering some of the best ways to encourage deep student learning in the classroom.
Weaknesses?
Looks at some of the best of “what is” in college teaching—admittedly some really good stuff—but tends to assume that this is all that is possible. Unfortunately, there are many more opportunities—based on what we know about learning—that are not included in the book.
Evaluation?
A good book, with good ideas, but this shouldn’t be seen as the full spectrum of what is possible in using audience response systems. Still, if you want to teach thoughtfully with audience response systems, you should read this book.
Abrahamson, L. (2006). A brief history of networked classrooms: Effects, cases, pedagogy, and implications. In D. A. Banks (Ed.), Audience response systems in higher education: Applications and cases (pp. 1-25).
Type of Publication?
Book chapter.
Reviewed What?
Reviewed the history of the use of audience response systems.
Strengths?
Interesting and enlightening review. Who knew, for example, that audience response systems were first used in the 1960s?
Evaluation?
Provides a wide-ranging introduction to the history and thinking around audience response systems.
Barak, M., Lipson, A., & Lerman, S. (2006). Wireless laptops as means for promoting active learning in large lecture halls. Journal of Research on Technology in Education, 38(3), 245-263.
Type of Publication?
Descriptive article in refereed journal on learning.
Tested What?
Did not compare. Instead, the authors describe their experiences using laptops to teach computer science. They did measure student reactions.
Strengths?
One of the first articles to examine the use of wireless laptops in the classroom.
Weaknesses?
No comparison data.
Evaluation?
Not much evidence for effectiveness.
Beatty, I. D., Gerace, W. J., Leonard, W. J., & Dufresne, R. J. (2006). Designing effective questions for classroom response system teaching. American Journal of Physics, 74(1), 31-39.
Type of Publication?
Practice review article in refereed journal on physics.
Reviewed What?
The University of Massachusetts physics department’s approach to using audience response systems.
Strengths?
Very thoughtful review of their approach to using audience response systems in teaching physics.
Weaknesses?
It’s just one approach.
Evaluation?
Great article by very thoughtful practitioners. Recommended for non-physics folks as well. They make a nice case that instructors ought to have more than just content goals for their learners; instructors ought to also have a process (cognitive) goal and a metacognitive goal. For their physics courses, the process goals include 12 “habits of mind” that all their learners are expected to come away with.
Carnaghan, C., & Webb, A. (2005). Investigating the effects of group response systems on learning outcomes and satisfaction in accounting education. Retrieved on January 1, 2007 from www.learning.uwaterloo.ca/LIF/responsepad_june20051.pdf
Type of Publication?
Research article (available online) on learning.
Tested What?
Compared class sessions that used an audience response system with class sessions that did not, where both types of sessions utilized active-engagement techniques. Looked at learner satisfaction, objective measures of learner engagement, and learning results.
Strengths?
Very strong research design. Compared the same learners in a counterbalanced design, using audience response systems in half the class sessions and not in the other half. Measured learning results, not just learner satisfaction ratings.
Weaknesses?
Mostly minor issues, except that the authors didn’t report raw learning results, making independent analysis difficult. Can’t rule out instructor-enthusiasm effects. Measures of learner engagement are not convincing because they don’t account for handset responses and were coded by a single observer who was not blind to condition.
Evaluation?
Seems to demonstrate improvements in learning on exam questions that were most similar to the audience-response questions used in class, but no improvement on exam questions unrelated to class questions. Also shows the typical result of high learner satisfaction with audience response systems. However, while learners rated the audience response systems themselves highly, their ratings of the course in general were similar whether they had just finished the series of sessions that used the systems or the series that did not.
Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970-977.
Type of Publication?
Research and practice review article in refereed journal on physics.
Reviewed What?
Eric Mazur’s (and Catherine Crouch’s) Peer Instruction practice and results. Research results compare classes taught in traditional ways to those taught with Peer Instruction methods.
Strengths?
Peer Instruction is one of the most widespread methodologies in use, and the authors do a nice job of covering the practice and results. At least some of the research comparisons use standardized tests of physics competence, providing good face validity and lessening opportunities for bias. The comparisons also cover many instructors.
Weaknesses?
Peer Instruction is just one approach. Research comparisons are against previously taught traditional classes, leaving open the possibility that learning gains stem from differences in instructor enthusiasm and instructor preparation. Also, because no pretests were given in the traditional courses, we can’t rule out bias due to differences in learners.
Evaluation?
Great article by thoughtful practitioners. Research comparisons, while not perfect, are suggestive of the benefits of Peer Instruction methodologies. Note that the authors do NOT focus on the use of audience response systems. In fact, in the research cited they didn’t always use them. In an endnote they say, “We did not see any significant changes in student learning on introducing the classroom network system, and find the main advantages of the [system] are anonymity of student responses and data collection…” (page 976, endnote 9). On the other hand, the improvements in results they cite (in Figure 2 on page 971 and Table 1 on page 972) all come after the audience response systems were introduced. In any case, the study did not test audience response systems versus no audience response systems; it examined the Peer Instruction method.
Dori, Y. J., & Belcher, J. W. (2005). How does technology-enabled active learning affect undergraduate students’ understanding of electromagnetism concepts? The Journal of the Learning Sciences, 14(2), 243-279.
Type of Publication?
Research article in refereed journal on learning.
Tested What?
Active learning methods in a specially designed classroom that enabled simulation and visualization versus the traditional lecture method in a standard classroom.
Strengths?
Interesting discussion.
Weaknesses?
Major methodological weaknesses. Comparison groups were likely sampled from different populations. Conflated active learning with the many technology interventions. The treatment group received conceptual questions; the control group did not. Couldn’t rule out novelty effects of the exciting new classroom or instructor-enthusiasm effects.
Evaluation?
Major methodological weaknesses. Moreover, we can’t differentiate the effects of audience response systems from the many other variables in play.
Draper, S. W., & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94.
Type of Publication?
Research article in refereed journal on learning.
Tested What?
Student opinions of audience response systems.
Strengths?
Looked at lots of classrooms and instructors. Very thoughtful analysis of potential benefits.
Weaknesses?
Only measured student opinions. No comparison with non-ARS situation.
Evaluation?
Weak methodology, but it does tell us about learner opinion, and it showed that opinions improved over time.
Draper, S. W., & Brown, M. I. (2004). Increasing interactivity in lectures using an electronic voting system. Journal of Computer Assisted Learning, 20, 81-94.
Type of Publication?
Descriptive article in refereed journal on learning.
Tested What?
Did not compare. Instead, the authors describe their institution’s experiences using audience response systems to improve classroom sessions. They did measure learner and instructor reactions.
Strengths?
Looked at many different topics taught with many different learners and instructors. Thoughtful analysis.
Weaknesses?
No comparison data. No data on learning improvements, only learner and instructor opinions.
Evaluation?
No evidence for learning effectiveness, but some suggestive evidence for learner satisfaction (though there is no comparison to traditional methods). Very thoughtful analysis with many helpful suggestions for how to use audience response technology.
Dufresne, R. J., Gerace, W. J., & Wenk, L. (1996). Classtalk: A classroom communication system for active learning. Journal of Computing in Higher Education, 7, 3-47.
Type of Publication?
Descriptive article in refereed journal on learning.
Tested What?
Did not compare. Instead, the authors describe their experiences using the audience response system and show some graphs of student reactions to the response systems and to active engagement.
Strengths?
An early look at how audience response systems were used.
Weaknesses?
No comparison data.
Evaluation?
Not much evidence for effectiveness.
Duncan, D. (2005). Clickers in the classroom: How to enhance science teaching using classroom response systems. San Francisco: Addison-Wesley.
Type of Publication?
Short book (pamphlet).
Reviewed What?
Provides an introduction to using handsets in the classroom.
Strengths?
Covers many important areas. Good introduction.
Weaknesses?
Recommendations are limited to only a few questioning strategies. The book is now a bit dated and is not generally available unless you contact the publisher.
Evaluation?
Good primer. Could be better.
Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66, 64-74.
Type of Publication?
Research article in refereed journal on physics.
Tested What?
Active learning methods vs. traditional methods in classrooms.
Strengths?
Lots of classrooms, instructors, and institutions. Used a well-standardized, relevant test in a pretest-posttest design (see the gain formula after this entry).
Weaknesses?
Data seriously biased by collection method. Can’t rule out instructor bias and instructor preparation effects.
Evaluation?
Focuses not on ARS usage but on active learning. Poor methodology makes the results dubious.
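For readers unfamiliar with how Hake summarizes his pretest-posttest data, his paper reports each course’s average normalized gain, the fraction of the possible improvement that the class actually achieved:

<g> = (posttest% − pretest%) / (100% − pretest%)

For example, a class averaging 40% on the pretest and 70% on the posttest earns <g> = (70 − 40) / (100 − 40) = 0.5, meaning it gained half of what it could have. It is on this measure that Hake reports interactive-engagement courses outperforming traditional ones.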
Nicol, D. J., & Boyle, J. T. (2003). Peer instruction versus class-wide discussion in large classes: A comparison of two interaction methods in the wired classroom. Studies in Higher Education, 28(4), 457-473.
Type of Publication?
Research article in refereed journal on learning.
Tested What?
Compared learner ratings of peer instruction and of classwide discussion, both used in conjunction with audience response systems.
Strengths?
Used a within-subject design, examining the same students’ reactions to both peer-instruction techniques and classwide-discussion techniques. Used several methods to gather learner opinions.
Weaknesses?
Measured only learner opinions, not actual learning results.
Evaluation?
Shows that learners seem to prefer peer-instruction techniques to classwide-discussion techniques (when both are used with audience response systems).
Poulis, J., Massen, C., Robens, E., & Gilbert, M. (1998). Physics lecturing with audience paced feedback. American Journal of Physics, 66(5), 439-441.
Type of Publication?
Short research-like article in refereed journal on physics.
Tested What?
Compared examination pass rates over the years when physics classes used, or did not use, simple one-button handsets.
Strengths?
Used an ABA design, measuring actual learning results: no-handset classes in Years 1 and 2, handset classes in Years 3 and 4, and no-handset classes again in Years 5 and 6.
Weaknesses?
The methodology section is too brief to really assess the experimental methodology. The one-button handsets in use were much more basic than today’s handsets. Use of handsets was conflated with active-engagement techniques, so we can’t be sure whether the handsets or the active engagement produced the effect. No statistics were reported, making it difficult to assess the comparisons, though the handset classes appear to generally—but not always—outperform the no-handset classes.
Evaluation?
Shows suggestive improvements in learning due to active engagement and handset use, but the methodology is quite suspect.