New research suggests that leaders are perceived as most effective when they know when to be assertive and when to be non-assertive.

Original Research Article: Ames, D. R., & Flynn, F. J. (2007). What breaks a leader: The curvilinear relation between assertiveness and leadership. Journal of Personality and Social Psychology, 92(2), 307-324.

Read a synopsis or get a copy at: www.apa.org/releases/good_leaders.html

Lectures are widely reviled for putting learners in a passive mode. On the other hand, lectures are relatively easy to implement, even with large numbers of learners. And regardless of the pluses and minuses, lectures are ubiquitous. While there aren’t many lectures in kindergarten, by third grade teachers are talking a lot and learners are listening. The college classroom is dominated by lecture. So are corporate training sessions, conference presentations, church sermons, public meetings, elder hostels, and the local library’s evening speaker series. Lectures aren’t going away anytime soon, nor should they. Like all tools for learning, they provide certain unique advantages and have certain unique limitations.

Lectures can be modified in different ways to increase the amount of active learning: to ensure that learners are more fully engaged, have a more robust understanding of the learning material, are more likely to remember what they learned, and are more likely to utilize the information at a later time.

One such method for increasing active learning is the use of "response cards." Response cards are provided to students so that each one can respond to instructor questions. Two types of response cards are available: (1) those that enable each learner to write his or her answer on the card (for example, with a dry-erase marker), and (2) those that enable learners to hold up preprinted answers (for example, True or False; or A, B, C, or D).

Research

While not a lot of good research has been done on response cards, the research seems to suggest that compared with the traditional method of having students raise their hands in response to questions, response cards improve learners’ classroom engagement, the amount they learn, and the amount they retain after a delay (Marmolejo, Wilder, & Bradley, 2004; Gardner, Heward, & Grossi, 1994; Kellum, Carr, & Dozier, 2001; Narayan, Heward, Gardner, Courson, & Omness, 1990; Christle & Schuster, 2003). Learners generally prefer response cards to simple hand-raising. Most of the research has focused on K-12 classrooms, with some research done in community colleges. The research has tended to focus on relatively low-level information and has not tested the value of response cards on higher-order thinking skills.

Recommendations

Getting learners to actively respond in lectures is certainly a worthwhile goal. Research has been fairly conclusive that learners learn better when they are actively engaged in learning (Bransford, Brown, & Cocking, 1999). Response cards may be one tool in the arsenal of methods to generate learner engagement. Of course, electronic keypads can be used in a similar way, at a significantly increased cost, with perhaps some added benefits as well. Still, at less than $30 a classroom, response cards may be worth a try.

Personally, I’m skeptical that audiences in adult training situations would be open to response cards. While 87% of college students rated the cards highly (Marmolejo, Wilder, & Bradley, 2004), the corporate audiences I’ve worked with over the years might find them childish or unnecessary ("hey, why can’t we just raise our hands?"). On the other hand, electronic keypads are more likely to be accepted. Of course, such acceptance, whether we’re talking about response cards or electronic keypads, really depends on the relevance of the material and the questions used. If the questions are low-level rote memorization, adult audiences are likely to reject the instruction regardless of the technology employed.

Making lectures interactive has to be done with care. Adding questions and student responses can have negative consequences as well. When we ask questions, we signal to learners what to pay attention to. If we push our learners to think about low-level trivia, they will do that to the detriment of focusing on more important high-level concepts.


Limitations of the Research

The research on response cards tends to focus on low-level questions that are delivered all too frequently throughout lectures. Learners who have to answer a question every two minutes are being conditioned to focus on trivia, facts, and knowledge. Future research on response cards should focus on higher-level material in situations where more peer discussion is enabled.

Most of the research on response cards suffered from minor methodological difficulties (e.g., weaker-than-preferred comparison designs and small numbers of learners actually tracked) and ambiguity (e.g., in reading the research articles, it was often difficult to tell whether the in-class questions were repeated on the final quizzes used as dependent variables, and no inferential statistics were available to test hypotheses).

References

Marmolejo, E. K., Wilder, D. A., & Bradley, L. (2004). A preliminary analysis of the effects of response cards on student performance and participation in an upper division university course. Journal of Applied Behavior Analysis, 37, 405-410.

Christle, C. A., & Schuster, J. W. (2003). The effects of using response cards on student participation, academic achievement, and on-task behavior during whole-class math instruction. Journal of Behavioral Education, 12(3), 147-165.

Gardner, R., Heward, W. L., & Grossi, T. A. (1994). Effects of response cards on student participation and academic achievement: A systematic replication with inner-city students during whole-class science instruction. Journal of Applied Behavior Analysis, 27, 63-71.

Kellum, K. K., Carr, J. E., & Dozier, C. L. (2001). Response-card instruction and student learning in a college classroom. Teaching of Psychology, 28(2), 101-104.

Narayan, J. S., Heward, W. L., Gardner, R., Courson, F. H., & Omness, C. K. (1990). Using response cards to increase student participation in an elementary classroom. Journal of Applied Behavior Analysis, 23, 483-490.

Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.

I will give $1000 (US dollars) to the first person or group who can prove that taking learning styles into account in designing instruction can produce meaningful learning benefits.

I’ve been suspicious about the learning-styles bandwagon for many years. The learning-style argument has gone something like this: If instructional designers know the learning style of their learners, they can develop material specifically to help those learners, and such extra efforts are worth the trouble.

I have my doubts, but am open to being proven wrong.

Here are the criteria for my Learning-Styles Instructional-Design Challenge:

  1. The learning program must diagnose learners’ learning styles. It must then provide different learning materials/experiences to those who have different styles.
  2. The learning program must be compared against a similar program that does not differentiate the material based on learning styles.
  3. The programs must be of similar quality and provide similar information. The only thing that should vary is the learning-styles manipulation.
  4. The comparison between the two versions (the learning-style version and the non-learning-style version) must be fair, valid, and reliable. At least 70 learners must be randomly assigned to the two groups, with at least 35 in each group completing the experience. The two programs must have approximately the same running time. For example, the time required by the learning-style program to diagnose learning styles can be used by the non-learning-styles program to deliver learning. The median learning time for the programs must be no shorter than 25 minutes.
  5. Learners must be adults involved in a formal workplace training program delivered through a computer program (e-learning or CBT) without a live instructor. This requirement is to ensure the reproducibility of the effects, as instructor-led training cannot be precisely reproduced.
  6. The learning-style program must be created in an instructional-development shop that is dedicated to creating learning programs for real-world use. Programs developed only for research purposes are excluded. My claim is that real-world instructional design is unlikely to be able to utilize learning styles to create learning gains.
  7. The results must be assessed in a manner that is relatively authentic: at a minimum, learners should be asked to make scenario-based decisions or perform activities that simulate the real-world performance the program teaches them to accomplish. Assessments that only ask for information at the knowledge level (e.g., definitions, terminology, labels) are NOT acceptable. The final assessment must be delayed at least one week after the end of the training. The same final assessment must be used for both groups. It must fairly assess the whole learning experience.
  8. The magnitude of the difference in results between the learning-style program and the non-learning-style program must be at least 10%. (In other words, the average of the learning-styles scores minus the average of the non-learning-styles scores must be at least 10% of the non-learning-styles average.) So, for example, if the non-learning-styles average is 50, then the learning-styles average must be 55 or more. This magnitude requirement is to ensure that the learning-styles program produces meaningful benefits. 10% is not too much to ask.
  9. The results must be statistically significant at the p<.05 level. Appropriate statistical procedures must be used to gauge the reliability of the results. Cohen’s d effect size should be equal to .4 or more (a small to medium effect size according to Cohen, 1992).
  10. The learning-style program cannot cost more than twice as much as the non-learning-style program to develop, nor can it take more than twice as long to develop. I want to be generous here.
  11. The results must be documented by unbiased parties.
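To make the quantitative criteria concrete, here is a minimal sketch (not part of the official challenge rules) of how criteria 4, 8, and 9 could be screened from two groups' assessment scores. The function name and the illustrative score lists are my own invention; a real submission would still need a proper significance test (p < .05) in addition to this check.

```python
import statistics

def meets_challenge_criteria(control, treatment, min_n=35, min_gain=0.10, min_d=0.4):
    """Rough screen for the sample-size, 10%-gain, and effect-size criteria.

    control:   scores from the non-learning-styles group
    treatment: scores from the learning-styles group
    A real evaluation would also require a significance test (p < .05).
    """
    if len(control) < min_n or len(treatment) < min_n:
        return False  # criterion 4: at least 35 completers per group
    m_c, m_t = statistics.mean(control), statistics.mean(treatment)
    # Criterion 8: gain of at least 10% of the control-group average
    gain = (m_t - m_c) / m_c
    # Criterion 9 (in part): Cohen's d (pooled SD) of at least 0.4
    s_c, s_t = statistics.stdev(control), statistics.stdev(treatment)
    pooled_sd = (((len(control) - 1) * s_c**2 + (len(treatment) - 1) * s_t**2)
                 / (len(control) + len(treatment) - 2)) ** 0.5
    d = (m_t - m_c) / pooled_sd
    return gain >= min_gain and d >= min_d

# Hypothetical data: control averages 50, treatment averages 60 (a 20% gain)
control_scores = [45, 50, 55] * 12
treatment_scores = [55, 60, 65] * 12
```

With these made-up scores the screen passes; with identical groups (zero gain) it fails, as intended.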

To reiterate, the challenge is this:

Can an e-learning program that utilizes learning-style information outperform an e-learning program that doesn’t utilize such information by 10% or more on a realistic test of learning, even if it is allowed to cost up to twice as much to build?

$1,000 says it just doesn’t happen in the real world of instructional design. $1,000 says we ought to stop wasting millions trying to cater to this phantom curse.

Which is better, (1) computer-based animations with audio narration or (2) paper-based diagrams with text narratives?

Suppose further that the animations and diagrams were equalized as much as possible in the amount of visual information presented, and the words in the narration and text were identical. In other words, the comparison was fair.

Suppose also that the content areas in the learning materials dealt with dynamic spatial causation, utilizing topics seemingly appropriate for dynamic graphical displays. Specifically, the topic areas included:

  • How lightning forms.
  • How a toilet works.
  • How ocean waves work.
  • How a car’s braking system works.

Suppose also that the tests queried learners on retention and transfer. In other words, to gauge their memory, learners were asked questions such as, “Please write down an explanation of how lightning works,” and to assess transfer, they were asked, “What could you do to decrease the intensity of a lightning storm?”

Under those conditions, which would produce the best learning on the questions asked?

A. Animations with audio narration.
B. Paper-based diagrams with text narratives.
C. Both would produce equal learning benefits.

Richard Mayer, Mary Hegarty, Sarah Mayer, and Julie Campbell (all of the University of California at Santa Barbara) created four experiments that attempted to answer this question. Given that each experiment had two comparisons (retention and transfer), they ended up with eight comparisons.

The results were clear. In not one case did the computer-based animations outperform the static paper-based depictions! In four of the eight comparisons, the static diagrams outperformed the animations; in the other four, the differences were not statistically significant.

The average percentage difference (for the paper-based depictions compared with the animation depictions) was 27%, with an average Cohen’s d effect size of 0.68, a moderate-to-large effect. The animation conditions never outperformed the paper-based conditions.

The Authors’ Explanations of these Remarkable Findings

For many of us, this result is non-intuitive. Why would paper-based diagrams outperform animations? Although the authors of the research paper make some conjectures, their experiments don’t really shed light on this question. The experiments simply compare animations to paper-based depictions.

The authors suggest that paper-based depictions may have outperformed the animation-based depictions because (described in detail on page 264):

  1. The paper-based depictions involve simultaneous presentation of the graphical illustrations, whereas the animation-based depictions presented the graphical content in a chronological flow with no simultaneity.
  2. The paper-based materials enable learner control through pacing and eye movements, whereas the animations do not.
  3. The paper-based materials are purposely segmented into meaningful units showing crucial states of the system, whereas the animation presents the diagrams in one continuous flow.
  4. The paper-based materials utilize printed words, whereas the animation condition uses audio narration.
  5. The paper-based materials are presented on paper, whereas the animation materials are presented on a computer screen.

In future experiments, these things will need to be varied to determine the actual cause of the differences. Specifically, it would be helpful for e-learning designers to know the relative effectiveness of animations that also show crucial states of the system and enable more learner control.

Other Caveats and Shortcomings

Skeptical instructional designers may wonder about the target audience. Could it be, for example, that these results aren’t relevant for young adults (those who have considerable experience using computers)? This worry seems misplaced: the learners in these experiments were all young college students, with an average age of about 19. On the other hand, 82% of the learners were women, suggesting that the results may not generalize to men.

I worry about the short retention interval. As in most of Mayer’s experiments, immediate tests of retention and transfer are used. In other words, the students encounter the learning material and then are immediately tested on it. This should make us wonder whether the differences between animation and static images would survive the vagaries of cognitive forgetting processes. It might be true, for example, that static images help for short retention intervals and animations help for longer, more realistic retention intervals.

The experiments also use very short learning events—seven minutes or less in length, with some learning sessions lasting only a minute or two. This tends to limit the generalizability of the results. Real-world instructional designers are apt to question these results by noting that animations may energize learners to pay attention to e-learning courses that take, say, 30 minutes or more, whereas static graphics are less likely to produce this energizing effect. So while static graphics may work for five-minute snippets of learning, more authentic learning events may benefit from animations.

Despite these major limitations, the findings are compelling. They show, at the very least, that in micro-learning situations, animations may not be as obvious a choice as we might have believed.

The experimental results are also partly consistent with a recent review of the research literature, which found no difference in learning results between animations and paper-based depictions (Tversky, Morrison, & Betrancourt, 2002). Neither the current study nor the review of the literature found any advantage for animations.

Again, it could be that well-designed animations have a facilitative effect. On the other hand, it appears that more research is needed to uncover principles that outline effective animation design.

Will’s Recommendations for Instructional Designers/Developers:

  1. If possible, utilize evidence-based instructional-design practices to experiment with different animation designs (to see which work for your content, your learners, and your delivery methods). Specifically, compare static graphics to animations and compare different animation designs.
  2. As a first cut in designing animations, enable learners to control the movement from one crucial system state to the next.
  3. As a first cut in designing animations, utilize audio narration, but also provide a text version that can be read separately (not simultaneously).
  4. Consider utilizing the spacing effect by presenting both a dynamic animation and a later static depiction with simultaneous text presentation. The second depiction, because it enables studying, could be utilized with some augmenting questions or exercises to get the learners to think deeply about the dynamic flow of events. Also consider alternating between dynamic and static depictions or presenting the static one before the dynamic.

Citations:

Mayer, R. E., Hegarty, M., Mayer, S., & Campbell, J. (2005). When static media promote active learning: Annotated illustrations versus narrated animations in multimedia instruction. Journal of Experimental Psychology: Applied, 11, 256-265.

Tversky, B., Morrison, J. B., & Betrancourt, M. (2002). Animation: Can it facilitate? International Journal of Human-Computer Studies, 57, 247-262.

Mathemagenic Processing

In the mid-1960s, Rothkopf (1965, 1966), investigating the effects of questions placed into text passages, coined the term mathemagenic, meaning “to give birth to learning.” His intention was to highlight the fact that it is something learners do in processing (thinking about) learning material that causes learning and long-term retention of that material.

When learners are faced with learning materials, their attention to that learning material deteriorates with time. However, as Rothkopf (1982) illustrated, when the learning material is interspersed with questions on the material (even without answers), learners can maintain their attention at a relatively high level for long periods of time. The interspersed questions prompt learners to process the material in a manner that is more likely to give birth to learning.
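Rothkopf’s adjunct-question paradigm is easy to sketch in code. The passage segments and questions below are invented placeholders, not items from his studies; the point is only the structure: a question follows each chunk of text, prompting learners to process what they just read.

```python
def interleave_adjunct_questions(segments, questions):
    """Build a lesson in which an adjunct question follows each text segment.

    Rothkopf's research suggests such interspersed questions (even without
    answer feedback) help learners maintain attention to the material.
    """
    lesson = []
    for segment, question in zip(segments, questions):
        lesson.append(segment)
        lesson.append("QUESTION: " + question)
    return lesson

# Hypothetical example content:
lesson = interleave_adjunct_questions(
    ["Warm, moist air rises and cools...",
     "Ice crystals collide, separating electrical charge..."],
    ["Why does the rising air cool?",
     "What causes the charge separation?"],
)
```

The design choice worth noting: the question comes after its segment, so it directs attention back to material just read rather than priming material not yet seen.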

Although the term mathemagenic was hot in the late 1960s and 1970s, it gradually faded from use as researchers lost interest in the study of adjunct questions and as critics complained that the word was too abstract and had little meaning beyond the operations of the research paradigm.

Despite having fallen into disfavor, the term, and the research it generated, have proven invaluable. The adjunct-question research showed us that test-like events are useful in helping learners to bolster memory for the information targeted by the question and to stay attentive to the most important aspects of the learning material. The concept of mathemagenic behavior is very much a central component in the way we think about learning. Who could doubt today that it’s the manner in which learners process the learning material that makes all the difference in learning?

Citations:

Rothkopf, E. Z. (1965). Some theoretical and experimental approaches to problems in written instruction. In J. D. Krumboltz (Ed.), Learning and the education process (pp. 193-221). Chicago: Rand McNally.

Rothkopf, E. Z. (1966). Learning from written instructive materials: An exploration of the control of inspection behavior by test-like events. American Educational Research Journal, 3, 241-249.

Rothkopf, E. Z. (1982). Adjunct aids and the control of mathemagenic activities during purposeful reading. In W. Otto & S. White (Eds.), Reading expository material. New York: Academic Press.

What prevents people in the learning-and-performance field from utilizing proven instructional-design knowledge?

This is an update to an old newsletter post I wrote in 2002. Most of it is still relevant, but I’ve learned a thing or two in the last few years.

Back in 2002, I spoke with several very experienced learning-and-performance consultants who had each, in their own way, asked the question above. In our discussions, we considered several options, which I’ve flippantly labeled as follows:

  1. They don’t know it. (They don’t know what works to improve instruction.)
  2. They know it, but the market doesn’t care.
  3. They know it, but they’d rather play.
  4. They know it, but don’t have the resources to do it.
  5. They know it, but don’t think it’s important.

Argument 1.
They don’t know it. (They don’t know what works to improve instruction.)
Let me make this concrete. Do people in our field know that meaningful repetitions are probably our most powerful learning mechanism? Do they know that delayed feedback is usually better than immediate feedback? That spacing learning over time facilitates retention? That it’s important to increase learning and decrease forgetting? That interactivity can be either good or bad, depending on what we’re asking learners to retrieve from memory? One of my discussants suggested that "everyone knows this stuff and has known it since Gagne talked about it in the 1970s."

Argument 2.
They know it, but the market doesn’t care.
The argument: Instructional designers, trainers, performance consultants and others know this stuff, but because the marketplace doesn’t demand it, they don’t implement what they know will really work. This argument has two variants: The learners don’t want it or the clients don’t want it.

Argument 3.
They know it, but they’d rather play.
The argument: Designers and developers know this stuff, but they’re so focused on utilizing the latest technology or creating the snazziest interface, that they forget to implement what they know.

Argument 4.
They know it, but don’t have the resources to use it.
The argument: Everybody knows this stuff, but they don’t have the resources to implement it correctly. Either their clients won’t pay for it or their organizations don’t provide enough resources to do it right.

Argument 5.
They know it, but don’t think it’s important.
The argument: Everybody knows this stuff, but instructional-design knowledge isn’t that important. Organizational, management, and cultural variables are much more important. We can instruct people all we want, but if managers don’t reward the learned behaviors, the instruction doesn’t matter.

My Thoughts In Brief

First, some data. On the Work-Learning Research website we provide a 15-item quiz that presents people with authentic instructional-design decisions. People in the field should be able to answer these questions with at least some level of proficiency; we might expect them to get at least 60 or 70% correct. Although web-based data-gathering is loaded with pitfalls (we don’t really know who is answering the questions, for example), here’s what we’ve found so far: on average, correct responses are running at about 30%. Random guessing would produce 20 to 25% correct. Yes, you read that correctly: people are doing only a little better than chance. The verdict: people don’t seem to know what works and what doesn’t in the way of instructional design.
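Where do the 20-25% chance figures come from? On a multiple-choice quiz, chance level is simply one divided by the number of answer options per item (25% with four options, 20% with five). The original post doesn't state how many options the quiz items have, so the simulation below is a sketch under that assumption; it just confirms the arithmetic by simulating many random guessers.

```python
import random

def simulate_guessing(n_items=15, n_options=4, n_respondents=10_000, seed=0):
    """Estimate the percent-correct score of pure guessers on a
    multiple-choice quiz by Monte Carlo simulation."""
    rng = random.Random(seed)
    total_correct = sum(
        # Treat option 0 as the keyed answer for every item.
        sum(rng.randrange(n_options) == 0 for _ in range(n_items))
        for _ in range(n_respondents)
    )
    return 100 * total_correct / (n_items * n_respondents)

# With 4 options per item, guessers average about 25% correct;
# with 5 options, about 20%.
```

Against those baselines, a 30% average among professionals is only a few points better than chance, which is the post's point.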

Some additional data. Our research on learning and performance has revealed that learning outcomes can be improved by up to 220% by utilizing appropriate instructional-design methods. Many of the programs out there do not utilize these methods.

Should we now ignore the other arguments presented above? No, there is truth in them. Our learners and clients don’t always know what will work best for them. Developers will always push the envelope and gravitate to new and provocative technologies. Our organizations and our clients will always try to keep costs down. Instruction will never be the only answer. It will never work without organizational supports.

What should we do?

We need to continue our own development and bolster our knowledge of instructional design. We need to gently educate our learners, clients, and organizations about the benefits of good instructional design and good organizational practices. We need to remind technology’s early adopters to remember our learning-and-performance goals. We need to understand instructional-design tradeoffs so that we can make them intelligently. We need to consider organizational realities in determining whether instruction is the most appropriate intervention. We need to develop instruction that will work where it is implemented. We need to build our profession so that we can have a greater impact. We need to keep an open mind and continue to learn from our learners, colleagues, and clients, and from the research on learning and performance.

New Thoughts in 2006

All the above suggestions are worthy, but I have two new answers as well. First, people like me need to do a much better job of communicating research-based ideas. We need to figure out where the current state of knowledge stands and work the new information into that tapestry in a way that makes sense to our audiences. We also have to avoid heavy-handedness in sharing research-based insights, as we must realize that research is not the only means of moving us toward more effective learning interventions.

Second, I have come to believe that sharing research-based information like this is not enough. If the field doesn’t build better feedback loops into our instructional-design-and-development systems, then nothing much will improve over time, even with the best information presented in the most effective ways.

A new educational research organization is forming as a counterpart to AERA (the American Educational Research Association). I get the impression that the impetus for this is that too much of the current research on education fails to address cause and effect.

Because establishing cause and effect is critical (it’s the only way to really find out what the critical factors are), I’m hoping for great things from this organization.

Its name is: Society for Research on Educational Effectiveness

Well, it looks like one of my previous brainstorms was wrong. Check out this link from the American Psychological Association on cell-phone use while driving. Initial research on cell-phone use while driving suggests that cell phones DO hurt driving performance. I’m still not sure whether drivers can learn to use cell phones more effectively while driving.

Research has shown that a conversational writing style is generally more effective at producing learning results than more formal writing.

See a blog blurb by Kathy Sierra to learn more. She wrote a nice review of a 2000 study by Moreno and Mayer from the Journal of Educational Psychology.

Also, check out all the comments after her blog post to see the research findings put into perspective. Some people loved the comments. Others went crazy with angst.


Some Minor Caveats (it might be best to read this after you read Kathy’s post)

In the Moreno and Mayer study, the researchers found the following improvements due to a more personalized style.

  • Transfer Improvements: Experiment 1: 36%, Exp. 2: 116%, Exp. 3: 46%, Exp. 4: 20%, Exp. 5: 27%
  • Retention Improvements: Experiment 1: 3%, Exp. 2: 6%, Exp. 3: 22%, Exp. 4: 10%, Exp. 5: 12%

Transfer in this case meant the ability to answer questions about the topic that were not directly addressed in the text. So, for example, one transfer question used was, "What could be done to decrease the intensity of a lightning storm?"

Retention was measured with the question, "Please write down an explanation of how lightning works."

You’ll notice from the above numbers that personalization improved transfer results more than retention results. In fact, 2 of the 5 experiments did NOT show statistically significant improvements in retention. While transfer measures are generally considered more difficult to obtain and thus more important, the actual tests of transfer and retention in the 5 experiments cited are roughly equal in difficulty. Certainly, if we wanted learners to be able to explain how lightning works, the experiments do NOT show definitively that a more personalized writing style would guarantee such a result. On the other hand, personalization did not hurt learning either.

Note further that in the two experiments where retention was not statistically improved (Experiments 1 and 2), the learners were observers and did not have to interact with the learning material. This is relevant to the other caveat I want to discuss.

The other caveat is that while conversational style is highlighted in Kathy’s blog, the researchers are careful to focus on personalizing the writing and drawing readers (or listeners) into the dialogue. So, for example, it may be helpful to do the following in our instructional designs (these insights are not necessarily empirically tested, but they are consistent with the research results):

  1. When writing or speaking, use the word "you" instead of a more formal, third-person style.
  2. When writing or speaking, it may also be useful to use the word "I," as such a use may encourage your audience members to respond on a personal level.
  3. It may be best to address learners as participants not as observers.
  4. It may be best to relate the content to the learners’ real world experiences.

Note that more research is needed in this area. There are not enough studies to predict this same effect with all learners, all learning materials, and all learning and performance situations.

Research Article Cited by Kathy: 

Moreno, R., & Mayer, R. E. (2000). Engaging students in active learning: The case for personalized multimedia messages. Journal of Educational Psychology, 92, 724-733.

There have been several published studies (and even more newspaper articles) that show cell-phone use while driving is correlated with accidents. The suggestion from these studies is that cell phones CAUSE accidents. The implication is that we should ban cell phones while driving.

This may be true. I was scared to death last week while my taxi driver was looking at his cell phone to dial numbers. He clearly did not have his eyes on the road. If anything unusual occurred (like the van in the next lane entering our lane right in front of us—watch out please watch out!), his reaction time would have been considerably slowed and we would have been much more likely to have an accident.

On the other hand, I wonder how much of the current problem is caused by a learning deficit. After all, for most of us cell phones are rather new. More importantly, driving while using a cell phone is also new. This kind of multitasking can be learned. Research studies show that practice at multitasking can improve performance on the component tasks. With enough practice, less working-memory capacity is needed, freeing up capacity to engage in the various tasks.

One hypothesis suggested by this is that cell-phone-related accidents will decrease with time as drivers get more practice using their cell phones while driving. Judging from the number of people I see driving and phoning, not many people are heeding the warnings, so lots of people are gaining more experience. Cell-phone accident rates may also decline as new technologies are utilized, namely voice dialing and hands-free phones.

On the other hand, a second hypothesis is that anything that prompts drivers to take their eyes off the road will produce similar deficits to cell-phone driving. Here’s a short list:

  1. People who read maps while driving.
  2. People who look at the radio to tune to a particular station.
  3. People who glance at the person sitting next to them while in conversation.
  4. People who look at their food before stuffing it in their mouths.
  5. People who admire the scenery.
  6. People who rubberneck at accident scenes.

People who look at their cell phones to dial a number are just asking for trouble. It probably helps to have two hands on the wheel, as well.

I’d be willing to bet that for most people fewer accidents will occur when using a hands-free, voice-dialing cell phone than when talking with someone sitting beside them in the front seat, assuming equal levels of experience doing both. The natural human tendency to want to look someone in the eyes while talking to them will prompt most of us to try and steal a glance at our conversational partners, increasing slightly the danger from unforeseen events.

Like most things in life, learning plays a central role in our cell-phone-while-driving performance. Like most things for us humans, our cognitive machinery sets the boundaries for this performance.

New Information from the Research (An Update on My Thinking)

Although I still wonder about our ability to learn how to utilize cell phones while driving, recent research suggests that right now, we are not too good at it. Check out my updated post on this.