Thanks to George Siemens's blog, I learned of a wonderful phrase, "Strong Opinions, Weakly Held," as blogged by Bob Sutton.

I’d actually like to modify it a bit to: "Strong Ideas, Weakly Held." This adds the connotation that the thoughts have been well researched (not just back-of-the-envelope opinions).

In a sense, it’s the researchers’ mindset–to work tirelessly to gather relevant data, make sense of it, and state a conclusion, but then be willing to test that conclusion against new data.

Today’s New York Times has a nice article on how games are being used to help people learn about real-world issues like the Middle East conflict.

The article does a really nice job of reviewing the "serious games" movement, including the passion of the developers and the thin research support for effectiveness (as of yet).

I’m inclined to think these serious games can have profound learning benefits, but that measurements of effectiveness are probably difficult to get right.

Design difficulties also include:

  1. Ensuring the correctness of the cause-and-effect relationships in the game.
  2. Ensuring that the design doesn’t distract from the main points.
  3. Ensuring that the game itself generates attention to the most important points.
  4. Being sure that other methods aren’t more efficient and/or effective.

Chris Anderson, editor of Wired Magazine, has a new book—The Long Tail—another inspired insight ready to rear up like a tsunami and sweep indiscriminately over everything.

The insight from the book and from the original article in a 2004 edition of Wired is this: The low cost of infinite shelf space and the reach of the internet enables niche products to reach an audience—and for this reach to be economically viable. The following chart from Slate Magazine captures the concept nicely.

[Chart from Slate Magazine illustrating the long-tail distribution]

It’s a nice insight and Anderson musters compelling evidence for the increasing power of the long tail to transform our economy and our business infrastructure.
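Anderson’s insight is quantitative at heart: when demand follows a power-law curve, the many niche products in the tail can collectively rival the few hits in the head. As a rough illustration (my own toy numbers, not Anderson’s data), a simple Zipf-style demand model shows how large the tail’s aggregate share can be:

```python
# Illustrative sketch with made-up numbers (not Anderson's actual data):
# model product demand with a Zipf-like power law and compare the
# revenue share of the "head" (top sellers) vs. the "long tail".

def zipf_demand(n_products, exponent=1.0):
    """Demand for the product ranked r is proportional to 1 / r**exponent."""
    return [1.0 / (r ** exponent) for r in range(1, n_products + 1)]

demand = zipf_demand(100_000)   # a catalog of 100,000 titles
total = sum(demand)

head = sum(demand[:1_000]) / total   # the top 1% of titles
tail = sum(demand[1_000:]) / total   # the remaining 99%

print(f"Head (top 1,000 titles): {head:.0%} of demand")
print(f"Tail (other 99,000 titles): {tail:.0%} of demand")
```

Even though each tail title sells only a tiny fraction of what a hit sells, the 99,000 niche titles together account for a substantial share of total demand, which is the economic logic behind Anderson’s thesis: infinite shelf space makes that aggregate worth serving.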

To hear Anderson being interviewed by the incomparable Tom Ashbrook, check out the On Point archive from July 18, 2006.

To read a critique of the concept—a warning about its boundary conditions—from Slate Magazine’s Tim Wu, check out the article entitled, "The Wrong Tail: How to turn a powerful idea into a dubious theory of everything."

To read how the concept might affect the publishing industry, check this out from the NY Times Book Review.

To read how the concept might apply to the healthcare field, read Jim Walker’s thoughtful analysis.

To read a critique of the concept (and the actual economic data) from the Wall Street Journal, read Lee Gomes’s excellent article, "It May Be a Long Time Before the Long Tail Is Wagging the Web."

To read more about the Long-Tail concept, check out The Long Tail blog.

How does the long tail relate to the learning-and-performance field? I offer some initial thoughts:

1. Course content: The long tail may enable more and more niche players to succeed in the marketplace, potentially hurting companies with large libraries of everything. Is this why Skillsoft’s stock has lost 75% of its value over the last four years (though it’s been inching up lately)?
2. Conferences: The long tail may kill, or significantly weaken, the mega conference, pushing attendees into smaller, niche-driven conferences. Simultaneously, vendors (the financial life blood of most conferences), may avoid mega conferences and spend their exhibit dollars in smaller conferences, especially industry-specific conferences.
3. Industry organizations: Organizations like ASTD, ISPI, Masie Center, etc., may lose their influence to specialty organizations (like the elearningGuild) that are attracting an increasingly devoted membership.
4. Employment: Perhaps the long tail will push more and more individuals and small groups into external consulting, development, and delivery functions.
5. Publishing (including books, magazines, eBooks, and blogs): The long tail may make it easier and easier for authors and thought leaders to distribute their works online, putting pressure on book publishers like Pfeiffer and ASTD to compete, and periodicals like T+D, PI, and CLO to maintain an audience in the face of a burgeoning swell of blogs, white papers, and webinars.
6. Industry Advertising: If we think of advertising placement opportunities as products, the long tail may push vendors in our industry to seek niche placements. One opportunity for this is the Google or Yahoo! online advertisements that are already changing the advertising game, but other niche opportunities are available as well.
7. Best practices: Until a seismic event occurs in our industry, professionals have too little impetus to create effective learning-and-performance interventions. Thus, sadly, we will continue to window shop for fad-of-the-moment ideas emerging from the long tail, each enjoying a brief explosion into the head of the curve before gradually fading back into the obscurity of the long tail.

Where will the long tail not apply?

1. Credentialing: Although ISPI’s CPT (Certified Performance Technologist) and ASTD’s CPLP (Certified Professional in Learning and Performance) are both available (as are a few other credentials), multiple credentialing agencies weaken the meaningfulness of any single credential, likely putting a ceiling on the market power of these credentials.
2. Authoring Tools: Buyers tend to gravitate toward stable tools and systems, not wanting to deal with the uncertainty of new technologies.
3. LMS’s: Again, buyers tend to gravitate toward tools that are proven and vendors that can afford to invest in interface-diverse interoperability.
4. Work-Learning Research and Will Thalheimer: Though I am likely always to be hidden somewhere in the long tail, I have ambitions to be ubiquitous with my message of research-based practice. BIG TONGUE-IN-CHEEK SMILE.

Mark Cuban has a nice piece on his blog about how the internet is NOT the reason for innovation, but that cost cutting is. Click to read the piece here.

It’s not immediately clear to me how this relates to the learning-and-performance field–maybe the push toward rapid e-learning technologies–but I share Cuban’s piece knowing that one of you bright readers may get an insight that leads to improvements in our field.

The book, The Six Disciplines of Breakthrough Learning: How to Turn Training and Development into Business Results; by Calhoun Wick, Roy Pollock, Andrew Jefferson, and Richard Flanagan; is one of the most important books published in the training and development industry in a very long time.

Book: The Six Disciplines of Breakthrough Learning: How to Turn Training and Development into Business Results

Authors: Calhoun Wick, Roy Pollock, Andrew Jefferson, and Richard Flanagan

Publisher: Pfeiffer

Publication Date: April 2006

Introduction

The learning-and-performance field—of which I am a devoted member—hasn’t had a really big idea since the performance-improvement crusade began gathering momentum in the 1980s. But now, thanks to the work of Cal Wick, Roy Pollock, Andrew Jefferson, Richard Flanagan and their colleagues at the Fort Hill Company, we finally have a new innovation—a systematic method for training follow-through.

It’s not a surprise that training can only be effective if learners put what they learn into practice. What Wick and company have done is demonstrate the feasibility of driving training transfer into the flow of work. Their book is really a culmination of years of exploration as they bravely embraced the exhausting and dangerous work of pioneers.

They’ve taken an evidence-based approach to learning design—grappling with real-world clients, making careful observations, gathering data, utilizing research findings, and fine-tuning their practices. Perhaps most importantly, they’ve created a breakthrough technology that enables training-and-development leaders to push learning results into the actual workplace.

E-learning pundits haven’t recognized it yet, but Fort Hill’s Friday5s training-follow-through software (along with competitive products like ZengerFolkman’s ActionPlan Mapper) may be the most disruptive e-learning technology yet devised. While web-meeting platforms, LMS’s, rapid authoring tools, and even Google may seem potent, they don’t change training effectiveness as much as a good training follow-through system.

Wick, Pollock, Jefferson, and Flanagan may enjoy promoting Fort Hill products, but they go out of their way to craft a broader message in their brilliant new book, The Six Disciplines of Breakthrough Learning: How to Turn Training and Development into Business Results. The authors lay out a devastating analysis of the current state of training practice—not by being negative—but by illustrating with cases, examples, and research how to do training right.

The book is nothing short of revolutionary. Unfortunately, in our dysfunctional field not everyone will take up arms against their own ineffective practices, but the book provides solid guidance to the enlightened soldiers in our midst. If you want to improve on-the-job performance and business results, this book is a guiding light.

Changing the Paradigm and Technology of Learning

In the flow of our everyday lives, the world as we know it follows predictable patterns. Things change, but they change predictably. Every once in while, however, something new appears—an innovation or idea so strange and yet so perfectly in tune with the cravings, resources, and zeitgeist of the time that it changes everything.

Disruptive technologies like electricity, phones, computers, and the internet have produced powerful ripples through the human fabric. Automobiles not only displaced the horse, they enabled the rise of the middle class, the building of suburbs, and intellectual and social freedom for young adults. Paradigm shifts and scientific discoveries create the same effects, changing the way we see the world—changing the possibilities. If not for the ideas of Jesus, Darwin, Gandhi, Confucius, Freud, Einstein, Watson and Crick, Kuhn, and others, we would live in a different world.

The last great disruptive innovation to arise within the learning-and-performance field was the move away from “training” and toward “performance improvement.” Unfortunately, that movement is not yet complete. The hard truth is that we talk more about on-the-job results than we achieve them.

In the move from training to performance improvement, something got lost. Performance gurus often badmouth training as inadequate, but they give short shrift to its strengths and are blind to how to design the complete training experience to make training work. This kind of blindness is endemic in our field for two reasons: (1) because we have so little understanding of the basics of human learning, and (2) because we rarely evaluate our performance.

Thankfully, Cal Wick and his team (as well as a few others) have tired of training’s big lie. They know that training can be powerful—if only the right processes and procedures are put into place. Because they understand learning, they can envision a systematic set of guidelines that work. Because they measured the performance of their learners, they have been able to fine-tune their recommendations.

The Six Disciplines is poised to become one of the most important books in the learning-and-performance field. Not since the publication of Dana Gaines Robinson and James C. Robinson’s book on performance consulting or the seminal work of Bob Mager on performance-based instructional design, has our field been offered a new system of thinking—a new way to do our jobs as learning-and-performance professionals.

The Book’s Overarching Message

The book proposes six disciplines and offers scores of recommendations, but its central message is that what happens after training is just as important—and probably more important—than the training itself.

The six disciplines are:

1. Define Outcomes in Business Terms
2. Design the Complete Experience
3. Deliver for Application
4. Drive Follow-Through
5. Deploy Active Support
6. Document Results

Wick, Pollock, Jefferson, and Flanagan suggest that training ought to be conceptualized with a new finish line.

The “finish line” for learning and development has been redefined. It is no longer enough to deliver highly rated and well-attended programs; learning and development’s job is not complete until learning has been converted into results that matter to the business. (p. 13)

This new finish line enables us to see possibilities beyond the completion of smile sheets. A learner’s job—indeed an organization’s job—is not done when the classroom door swings shut.

The authors also emphasize the importance of visualizing training as something that occurs within an expanded timeline. Before-training efforts and after-training efforts are just as critical as the training efforts themselves. Particularly important are the after-training efforts because they focus learner attention on implementing the learning, reinforce fading memories, and transform the process of learning from an individual pursuit to an organizational responsibility. Learning changes from a love-it-and-leave-it experience to a system of reciprocal reinforcement where the results are measured in on-the-job performance.

The Book’s Evidence

The authors cite lots of organizational research to back up their claims, from thinkers and researchers like Broad, Brinkerhoff, and Newstrom. And, the notion of a new finish line is entirely consistent with the research on fundamental learning factors—the kind of research I’ve been working with for almost a decade. For example, we know learners forget most of what they learn—unless that information is reinforced in the workplace. Each one of the six disciplines pushes us to design an expanded learning experience, one that focuses on workplace implementation, not training per se.

Other forms of evidence are equally important. In addition to research from refereed journals, the book includes dozens of real-world learning executives describing their successes in broadening the conception of training and implementing the six disciplines. The authors relay wisdom from learning leaders at these and other organizations: Sony, Gap, 3M, Humana, BBC, Center for Creative Leadership, General Mills, Corning, Forum, University of Notre Dame, Honeywell, AstraZeneca, and Pfizer.

Evidence of the effectiveness of technology-based training follow-through is described using data from the powerful methodology of control-group designs. Graphs and text clearly illustrate the results. For example, page 128 conveys how “Use of a Follow-Through Management System Increased Managers’ Awareness of Their Direct Reports’ Development Goals from 40 Percent to 100 Percent.”

While the majority of books in our field fail to convey more than a few breadcrumbs of credible evidence, The Six Disciplines hits for the triple crown, utilizing refereed research, experience of real-world learning leaders, and data from control-group studies. In our field, it simply doesn’t get any better than this.

The Book’s Design

The book is well organized, with an introductory chapter, a summary chapter, and one chapter for each of the six disciplines described in the title. Each chapter ends with a nice twist—two lists of action points: one for “learning leaders” and one for “line leaders.” There are many design examples such as this that demonstrate that the authors are really serious about on-the-job performance. The book utilizes some valuable repetitions of key points. The text design makes reading a pleasure. Quotations are pithy and relevant. Examples are illustrative of the main points in the text.

I read every page of the book, so I can tell you with confidence that it is well written. There are hundreds of specific recommendations throughout the book. I found many insights that I hadn’t thought of—ideas that I will use in my work as a consultant, instructional-design strategist, and creator of training. The graphs and charts are clear and there are some very useful templates. For example, the first chapter concludes with the “Learning Transfer and Application Scorecard,” a 10-item questionnaire. It’s a powerful tool because—and this is my opinion not the authors’—most current training programs will fail miserably when measured by these questions. I’d bet that most training programs will have low scores on ALL 10 items of the scorecard.

I have two almost insignificant complaints about the book. First, the cover is uninspired. The book deserves better. Second, the six disciplines are shoehorned into starting with the letter “D,” in a way that is more misleading than it should be. For example, the second “D” stands for Design the Complete Experience. The authors’ emphasis is on the complete experience, but the shorthand version “Design” connotes the traditional instructional-design notion of design—a notion that is completely inadequate, as the authors argue persuasively in the actual text.

The Book’s Recommendations

The book is jam-packed with recommendations, so I’ll only convey a few of the specific recommendations here. You really ought to buy The Six Disciplines and read it and share it with everyone you know who cares about doing training right. Here’s my short list:

  1. View training follow-up as part of every training intervention.
  2. Get learners’ managers involved before and after training.
  3. Evaluate your training programs to determine whether they’re working and to improve subsequent training.
  4. Before designing a training program, determine what learners will be doing better and differently after the program. Be clear about what evidence will be acceptable to determine success.
  5. Understand the business. Be proactive in suggesting training-and-development solutions. Check your understanding with line leaders.
  6. Utilize a technology-based training-follow-through system to drive learning application and accountability.
  7. Utilize evidence-based practices, including research-based instructional design and after-training evaluation.
  8. Avoid “dense-pack education—the tendency to cram every conceivable topic into a program of a few days.”
  9. Focus on creating transfer during all phases of training—while designing the training, while delivering it, and during follow-up.
  10. Consider using senior executives to teach leadership—it is one of the fastest-growing trends in executive education.
  11. During training, stop after each topic and ask participants questions that challenge them to think about applying what they know.
  12. Learners should develop “learning transfer objectives” and be prepared to work toward them while back on the job.
  13. Send learners’ objectives to the learners’ managers to increase follow-up application and accountability.
  14. Utilize Marshall Goldsmith’s “feedforward” techniques to help learners generate ideas for training application.
  15. Recognize that there are factors that decrease the likelihood that learners will put their learning into practice, and that the impact of these factors can be minimized only through a systematic follow-through process.
  16. Utilize reminders to facilitate memory and spur on-the-job application of training.
  17. Hold employees accountable for making effective use of the training they receive.
  18. Consider coaching as a complement to training, providing learners with coaches to increase the likelihood of energetic and appropriate application.
  19. Learning programs that “demonstrate sound, thorough, credible, and auditable evidence of results are able to garner additional investment; those that cannot are at risk.”
  20. Learning and development units within companies need to communicate their results to the organization using multiple communication attempts and various communication channels.

How do Your Learning Programs Rate?

As I mentioned earlier, the Fort Hill Company has developed a Learning Transfer and Application Scorecard (displayed on pages 10 and 11) that targets the most important and most leverageable characteristics that make training effective. Every training program ought to be measured with this scorecard. To get an idea of how well your training stacks up, I’ve included three of the ten items. I changed the wording slightly to help you make sense of the items before you read the book. How well do your training programs do the following?

  • After the program, participants are reminded periodically of their post-learning objectives and of opportunities to apply what they learned.
  • Participants’ managers are actively engaged during the postprogram period. They review and agree on after-learning objectives, and expect and monitor the progress that learners are making in applying what they’ve learned.
  • The design of the learning program covers the entire process from initial invitation to attend, through the learning sessions, and through on-the-job application and measurement of results.

Summary

The Six Disciplines is the most important book written in our field in quite some time. It provides a comprehensive system to make training effective. Its radical new nugget of truth is its insistence on training follow-through. The book’s ideas are evidence-based and are consistent with the human learning system. The messages in the book have been tested and refined in the real world. Tools are available (for example, Fort Hill’s Friday5s follow-through management system) that make the recommendations actionable.

Training Follow-Through Systems

I am aware of two training follow-through systems, Fort Hill’s Friday5s, and ZengerFolkman’s ActionPlan Mapper. I have formally reviewed the ZengerFolkman product, but have yet to put my review of Friday5s on paper. Both are powerful programs. Friday5s may have an edge given its longer tenure in the marketplace and its ability to provide learning reminders, not just reminders about learning transfer objectives. My recommendation is that you test them for yourself.

 

Will’s Note:

Original Post from 2006:

Let me propose a new taxonomy for learning objectives.

This taxonomy is needed to clear up the massive confusion we all have about the uses and benefits of learning objectives. I have tried to clarify this in the past in some of my conference presentations—but I have not been successful. When I get evaluation-sheet comments like, “Get real you idiot!” from more than a few people, I know I’ve missed the mark. SMILE

Because I don’t give up easily—and because learning objectives are so vitally important—I’m going to give this another try. Your feedback is welcome.

The premise I’m working from is simple. Instructional professionals use learning objectives for different purposes—even for different audiences. Learning objectives are used to guide the attention of the learner toward critical learning messages. Learning objectives are used to tell the learner what’s in the course. They are used by instructional designers to guide the design of the learning. They are used by evaluation designers to develop metrics and assessments.

Each use requires its own form of learning objective. Doesn’t it seem silly to use the exact same wording regardless of the use or intended audience? Do we provide doctors and patients with the exact same information about a particular prescription drug? Do designers of computer software require the same set of goal statements as users of that software? Do creators of films need to have the same set of objectives as movie goers?

Until recently I have argued that we ought to delineate between objectives for learners and objectives for designers. This was a good idea in principle, but it still left people confused because it didn’t cover all the uses of objectives. For example, learners can be presented with objectives to help guide their attention or to simply give them a sense of the on-the-job performance they’ll be expected to perform. Instructional designers can utilize objectives to guide the design process or to develop evaluations.

The New Taxonomy

  1. Focusing Objective
    A statement presented to learners before they encounter learning material—provided to help guide learner attention to the most important aspects of that learning material.
  2. Performance Objective
    A statement presented to learners before they encounter learning material—provided to help learners get a quick understanding of the competencies they will be expected to learn.
  3. Instructional-Design Objective
    A statement developed by and for instructional designers to guide the design and development of learning and instruction.
  4. Instructional-Evaluation Objective
    A statement developed by and for program evaluators (or instructional designers) to guide the evaluation of instruction.

I made a conscious decision not to include a “table-of-contents objective” despite the widespread use of this method for presenting learners with objectives. I can’t decide whether this should be included. There’s no direct research on this (that I’ve encountered), but there may be some benefit for learners in having an outline of the coming learning material. Your comments welcome. I’m leaning toward including this notion in the taxonomy because it is a strategy that I’ve seen in use. Maybe I’ll call them “Content-Outlining Objectives” or “Outlining Objectives.”

One of the clear benefits of this taxonomy is that it separates Focusing Objectives from the other objectives. These objectives—those presented to learners to help focus their attention—have been researched with the greatest vigor. And the results of that research are clear:

  1. Focusing objectives guide learner attention to the information in subsequent learning material that has been targeted by objectives, but they also take attention away from the information not targeted by objectives.
  2. Similarly, focusing objectives improve learning for the targeted information and hurt learning for the information not targeted.
  3. Prequestions are as powerful in creating this focusing effect as learning objectives, and they may be more powerful.
  4. The wording of the focusing objective or prequestion must specifically mirror the wording in the learning material. General or abstract wording doesn’t cut it.
  5. Adding extra words, particularly words that specify the criteria of performance (à la Mager), will actually distract learners and hurt learning.

Well, it looks like one of my previous brainstorms was wrong. Check out this link from the American Psychological Association on cell-phone use while driving. Initial research on cell phones while driving seems to suggest that cell phones DO hurt driving. I’m still not sure whether drivers can learn to use cell phones more effectively while driving.

Most of what we call "training" is designed with the intention of improving people’s performance on the job. While it is true that much of training does not do this very well, it is still true that on-the-job performance is the singular stated goal of training.

But something is missing from this model. What’s missing is that a learning intervention can also prepare learners for future on-the-job learning. Let’s think this through a bit.

People on the job—people in any situation—are faced with a swarm of stimuli that they have to make sense of. Their mental models of how the world works will determine what they perceive. I’ve noticed this myself when I walk in the woods with experienced bird watchers. I hear birds, but can’t see them, no matter how hard I look. Experienced bird watchers see birds where I see nothing. The same stimuli have different outcomes because the expert birders have superior mental models about where birds might locate themselves.

The same is true for many things. As a better-than-average chess player, I will understand the patterns of the pieces better than a novice will. Experienced computer programmers see things that inexperienced programmers do not. Experienced lawyers will understand the nuances in someone’s testimony more than a novice lawyer.

Experience enables distinctions to be drawn between otherwise ambiguous stimuli. It enables people to perceive things that others don’t perceive. It helps people notice what others ignore.

Learning can be designed to provide amazing-grace moments, helping those who were once blind to see. If we’re serious about on-the-job learning, we ought to begin to build models of how to design formal learning to facilitate informal on-the-job learning.

Dan Schwartz, PhD (a learning psychologist at Stanford) has written recently about a concept called Preparation for Future Learning or PFL. Schwartz argues that generally poor transfer results may be due to the common practice of assessing what was learned but failing to assess what learners are able to learn. This makes a lot of sense given how complex the real world is, how learners forget stuff so quickly, and how much they learn on the job.

Schwartz and his colleagues are working on ways to improve future learning by using "contrasting cases" that enable learners to see distinctions they hadn’t previously noticed. This concept might be used in formal training courses to prepare learners to see things they hadn’t seen before when they return to the job. For example, a manager being trained on supervisory skills may be taught that some decisions require group input, whereas other decisions require managers to decide on their own. Cases of both types could be provided in training so that relevant distinctions will be better noticed on the job.

A different way to prepare learners for future learning is to prime them with questions. In my dissertation research, I included one experiment in which I asked college students questions about campus attractions. For example, I asked them what the statue "Alma Mater" was carrying. A week later, I surprised the students by asking them some of the same questions again. The results revealed that simply asking them questions (even when no feedback was provided) improved how much they paid attention to the items on which they were queried. Between the two sets of questions, learners apparently paid attention to the statue in ways they hadn’t before. By being asked about an item, the learners were more likely to spend time learning about that item when they encountered it in their day-to-day walking around.

There are likely to be other similar learning opportunities, but the point is that we need ways to design our learning interventions to intentionally create these types of learning responses. I’m going to be thinking about this for a while. My hope is that you will too.

Perhaps these meager paragraphs have prepared you for future learning. SMILE.

There have been several published studies (and even more newspaper articles) that show cell-phone use while driving is correlated with accidents. The suggestion from these studies is that cell phones CAUSE accidents. The implication is that we should ban cell phones while driving.

This may be true. I was scared to death last week while my taxi driver was looking at his cell phone to dial numbers. He clearly did not have his eyes on the road. If anything unusual occurred (like the van in the next lane entering our lane right in front of us—watch out please watch out!), his reaction time would have been considerably slowed and we would have been much more likely to have an accident.

On the other hand, I wonder how much of the current problem is caused by a learning deficit. After all, for most of us cell phones are rather new. More importantly, driving while using a cell phone is also new. This kind of multitasking can be learned. There are research studies that show that experience doing multitasking can increase performance on the tasks being done. With enough practice, less working-memory capacity is needed, freeing up capacity to engage in the various tasks.

One hypothesis suggested by this is that cell-phone-related accidents will decrease with time as drivers get more practice using their cell phones while driving. Judging from the number of people I see driving and phoning, not many people are heeding the warnings, so lots of people are gaining more experience. Cell-phone accident rates will also decline as new technologies are utilized, namely voice dialing and hands-free cell phones.

On the other hand, a second hypothesis is that anything that prompts drivers to take their eyes off the road will produce similar deficits to cell-phone driving. Here’s a short list:

  1. People who read maps while driving.
  2. People who look at the radio to tune to a particular station.
  3. People who glance at the person sitting next to them while in conversation.
  4. People who look at their food before stuffing it in their mouths.
  5. People who admire the scenery.
  6. People who rubberneck at accident scenes.

People who look at their cell phones to dial a number are just asking for trouble. It probably helps to have two hands on the wheel, as well.

I’d be willing to bet that for most people fewer accidents will occur when using a hands-free, voice-dialing cell phone than when talking with someone sitting beside them in the front seat, assuming equal levels of experience doing both. The natural human tendency to want to look someone in the eyes while talking to them will prompt most of us to try and steal a glance at our conversational partners, increasing slightly the danger from unforeseen events.

Like most things in life, learning plays a central role in our cell-phone-while-driving performance. Like most things for us humans, our cognitive machinery sets the boundaries for this performance.

New Information from the Research (An Update on My Thinking)

Although I still wonder about our ability to learn how to utilize cell phones while driving, recent research suggests that right now, we are not too good at it. Check out my updated post on this.