Recently, I had the honor of being interviewed by Karl Kapp, EdD, in a conversation sponsored by The E-Learning Guru, Kevin Kruse.

It was a fun interview, covering many wide-ranging issues in our industry and in learning research.

This blurb is reprised from an earlier Work-Learning Research Newsletter, circa 2002. These "classic" pieces are offered again to make them available permanently on the web. Also, they’re just good fun. I’ve added an epilogue to the piece below.

"What prevents people in the learning-and-performance field from utilizing proven instructional-design knowledge?"

Recently, I’ve spoken with several very experienced learning-and-performance consultants who have each—in their own way—asked the question above. In our discussions, we’ve considered several options, which I’ve flippantly labeled as follows:

  1. They don’t know it. (They don’t know what works to improve instruction.)
  2. They know it, but the market doesn’t care.
  3. They know it, but they’d rather play.
  4. They know it, but don’t have the resources to do it.
  5. They know it, but don’t think it’s important.

Argument 1.

They don’t know it. (They don’t know what works to improve instruction.)
Let me make this concrete. Do people in our field know that meaningful repetitions are probably our most powerful learning mechanism? Do they know that delayed feedback is usually better than immediate feedback? That spacing learning over time facilitates retention? That it’s important to increase learning and decrease forgetting? That interactivity can either be good or bad, depending on what we’re asking learners to retrieve from memory? One of my discussants suggested that "everyone knows this stuff and has known it since Gagne talked about it in the 1970s."

Argument 2.

They know it, but the market doesn’t care.
The argument: Instructional designers, trainers, performance consultants and others know this stuff, but because the marketplace doesn’t demand it, they don’t implement what they know will really work. This argument has two variants: The learners don’t want it or the clients don’t want it.

Argument 3.

They know it, but they’d rather play.
The argument: Designers and developers know this stuff, but they’re so focused on utilizing the latest technology or creating the snazziest interface that they forget to implement what they know.

Argument 4.

They know it, but don’t have the resources to do it.
The argument: Everybody knows this stuff, but they don’t have the resources to implement it correctly. Either their clients won’t pay for it or their organizations don’t provide enough resources to do it right.

Argument 5.

They know it, but don’t think it’s important.
The argument: Everybody knows this stuff, but instructional-design knowledge isn’t that important. Organizational, management, and cultural variables are much more important. We can instruct people all we want, but if managers don’t reward the learned behaviors, the instruction doesn’t matter.

My Thoughts In Brief

First, some data. On the Work-Learning Research website we provide a 15-item quiz that presents people with authentic instructional-design decisions. People in the field should be able to answer these questions with at least some level of proficiency. We might expect them to get at least 60 or 70% correct. Although web-based data-gathering is loaded with pitfalls (we don’t really know who is answering the questions, for example), here’s what we’ve found so far: On average, correct responses are running at about 30%. Random guessing would produce 20 to 25% correct. Yes, you’ve read that correctly—people are doing a little bit better than chance. The verdict: People don’t seem to know what works and what doesn’t in the way of instructional design.

Some additional data. Our research on learning and performance has revealed that learning can be improved through instruction by up to 220% by utilizing appropriate instructional-design methods. Many of the programs out there do not utilize these methods.

Should we now ignore the other arguments presented above? No, there is truth in them. Our learners and clients don’t always know what will work best for them. Developers will always push the envelope and gravitate to new and provocative technologies. Our organizations and our clients will always try to keep costs down. Instruction will never be the only answer. It will never work without organizational supports.

What should we do?

We need to continue our own development and bolster our knowledge of instructional design. We need to gently educate our learners, clients, and organizations about the benefits of good instructional design and good organizational practices. We need to remind technology’s early adopters to remember our learning-and-performance goals. We need to understand instructional-design tradeoffs so that we can make them intelligently. We need to consider organizational realities in determining whether instruction is the most appropriate intervention. We need to develop instruction that will work where it is implemented. We need to build our profession so that we can have a greater impact. We need to keep an open mind and continue to learn from our learners, colleagues, and clients, and from the research on learning and performance.

Will’s New Thoughts (November 2005)

I started Work-Learning Research in 1998 because I saw a need in the field to bridge the gap between research and practice. In these past seven years, I’ve made an effort to compile research and disseminate it, and though partly successful, I often lament my limited reach. Like most entrepreneurs, I have learned things the hard way. That’s part of the fun, the angst, and the learning. 

In the past few years, the training and development field has gotten hungrier and hungrier for research. I’ve seen this in conferences where I speak. The research-based presentations are drawing the biggest crowds. I’ve seen this in the increasing number of vendors who are highlighting their research bona fides, whether they do good research or not. I’ve seen this recently in Elliott Masie’s call for the field to do more research.

This hunger for research has little to do with my meager efforts at Work-Learning Research, though sometimes in my daydreams I like to think I have influenced at least some in the field—maybe even some opinion leaders. As a data-first empiricist, however, I have to admit the evidence is clear: my efforts are often under the radar. Ultimately, this is unimportant. What is important is what gets done.

I’m optimistic. Our renewed taste for research-based practice provides an opportunity for all of us to keep learning and to keep sharing with one another. I’ve got some definite ideas about how to do this. I know many of you who read this do too. We may not—as a whole industry—use enough research-based practices, but there are certainly some individuals and organizations who are out there leading the way. They are the heroes, for it is they who are out there taking risks, asking for the organizational support most of us don’t ask for, making a difference one mistake at a time.

One thing we need to spur this effort is a better feedback loop. If we don’t go beyond the smile sheet, we’re never going to improve our practices. We need feedback on whether our learning programs are really improving learning and long-term retrieval. Don’t think that just because you are out there on the bleeding edge you’re championing the revolution. You need to ensure that your efforts are really making things better—that your devotion is really improving learning and long-term retention. If you’re not measuring it, you don’t really know.

Let me end by saying that research from refereed journals and research-based white papers should not be the only arbiter of what is good. Research is useful as a guide—especially when our feedback loop is so enfeebled and organizational funds for on-the-job learning measurement are so impoverished.

It would be better, of course, if we could all test our instructional designs in their real-world contexts. Let us move toward this, drawing from all sources of wisdom, dipping our ladles into the rich research base and the experiences of those who measure their learning efforts.

Research from the world’s preeminent refereed journals on learning and instruction shows that by aligning the learning and performance contexts, learning results can be improved by substantial amounts. In fact, it is this alignment that makes simulations effective, that creates the power behind hands-on training, and that enables action learning to produce its most profound effects.

The research suggests the following points related to instructional design:

1. When humans learn, we absorb both the instructional message and background stimuli and integrate them into memory so that they become interconnected.

2. Humans in their performance situations are reactive beings. Our thoughts and actions are influenced by stimuli in our surrounding environment. If cues in our environment remind us of what we previously learned, we’ll remember more.

3. These first two principles can combine to aid remembering, and hence performance, in powerful ways. If during the learning situation we can connect the key learning points to background stimuli that will be observed in the learner’s on-the-job performance situation, then these stimuli will remind learners of what they previously learned!

4. The more the learning context mirrors the real-world performance context, the greater the potential for facilitating remembering. When the learning and performance contexts include similar stimuli, we can say they are "aligned."

5. The more learners pay attention to the background contextual stimuli, the higher the likelihood of obtaining context effects.

6. Context effects can take many forms. People who learn in one room will remember more in that room than in other rooms. People who learn a topic when they are sad will remember more about that topic when they are sad. People who learn while listening to Mozart will retrieve more information from memory while listening to Mozart than while listening to jazz. People who learn a fact while smelling peppermint will be better able to recall that fact while smelling peppermint than while smelling another fragrance. People who learn in the presence of their coworkers will remember more of what they learned in the presence of those coworkers.

7. Context can aid remembering and performance, but it can have negative effects when aspects of the learning context are not available in the on-the-job performance context.

8. Context effects can be augmented by prompting learners to focus on the background context. Context effects can be diminished by prompting learners to focus less on the background context.

9. The fewer the background contextual elements per learning point, the more powerful the context effects.

10. The easiest and most effective way to align the learning and performance contexts is to modify the learning context. But other options are available as well.

11. The performance context can be modified through management involvement, performance-support tools, and other reminding devices.

12. When the performance context cannot be determined in advance—or when the learned tasks will be performed in many contexts—multiple learning contexts can facilitate later memory retrieval and performance.

13. Learners in their performance situations can improve the recall of what they learned by visualizing the learning situation.

14. Cues can be added to both the learning context and the performance context to aid remembering.

15. Context effects have their most profound impact when other retrieval cues are not available for use. For example, context effects typically do not occur on multiple-choice tests or for other performance situations where learners are provided with hints.

16. To fully align the learning and performance contexts, instructional practice should include opportunities for learners to face all four aspects of performance: (1) situation, (2) evaluation, (3) decision, and (4) action. To create the best results, learners must be faced with realistic situations, make sense of them, decide what to do, and then practice the chosen action.

To read more about this fundamental learning factor (or to see the research behind these suggestions), you can access an extensive report from the Work-Learning Research catalog.

Learning is a many-splendored thing. Want evidence? Consider the overabundance of theories of learning. Greg Kearsley has a nice list. To me, this overabundance is evidence that the human learning system has not yet been lassoed and cataloged with any great precision. Ironic that DNA is easier to map than learning.

Being a political junkie, I’m fascinated with how a population of citizens learns about their government and the societal institutions of power. Democracy is rooted in the idea that we the citizenry have learned the right information to make good decisions. In theory this makes sense, while in practice imperfect knowledge is the norm. This discussion may relate to learning in the workplace as well.

Take one example from recent events. On September 11th, 2001, the United States was attacked by terrorists. The question arose, who were these terrorists? Who sent them? Who helped them? One particular question was asked: "Was Saddam Hussein (dictator of Iraq) involved?" I use this question because there is now generally accepted objective evidence that Saddam Hussein was not involved in the 9/11 attack in any way. Even President Bush has admitted this. On September 17th, 2003, Bush said, in answer to a question from a reporter, "No, we’ve had no evidence that Saddam Hussein was involved with September the 11th." Despite this direct piece of information, the Bush administration has repeatedly implied, before and after this statement, that the war in Iraq is a response to 9/11. We could discuss many specific instances of this—we could argue about this—but I don’t want to belabor the point. What I want to get at is how U.S. citizens learned about the reality of the question.

[Polling data chart from PollingReport.com]

Take a look at the polling data, which I found at PollingReport.com. I’ve marked it up to draw your eyes toward two interesting realities. First, look at the "Trend" data. It shows that we the citizens have changed our answer to the question asked over time. In September of 2002, 51% of Americans incorrectly believed that Saddam was personally involved in September 11th. Last month, in October of 2005, the number had dived to 33%. The flip side of this showed that 33% correctly denied any link between Saddam and 9/11 in October of 2002, while today the number is a healthier 55% correct, but still a relatively low number. If we think in terms of school-like passing-grade cutoffs, our country gets a failing grade.

The second interesting reality is how different groups of people have "Different Realities" about what is true. You’ll notice the difference in answering these questions between Republicans and Democrats.

These data encourage me to conclude or wonder about the following:

  1. Even well-established facts can engender wide gaps in what is considered true. Again, this highlights the human reality of "imperfect knowledge."
  2. Stating a fact (or a learning point) will not necessarily change everyone’s mind. It is not clear from the data whether the problem is one of information exposure or information processing. Some people may not have heard the news. People who heard the news may not have understood it, they may have rejected it, or they may have subsequently forgotten it.
  3. Making implied connections between events can be more powerful than stating things explicitly. It is not clear whether this is also a function of the comparative differences in the number of repetitions people are exposed to. This implied-connection mechanism reminds me of the "false-memory" research findings of folks like Elizabeth Loftus. Are the Republicans better applied psychologists than the Democrats?
  4. Why is it that so many citizens are so ill-informed? Why don’t (or why can’t) our societal information-validators do their jobs? If the media, if our trusted friends, if our political leaders, if our religious leaders, if opinion leaders can’t persuade us toward the truth, is something wrong with these folks, is something wrong with us, is there something about human cognitive processing that enables this disenfranchisement from objective reality? (Peter Berger be damned).
  5. I’m guessing that lots of the differences between groups depend upon which fishtank of stimuli we swim in. Anybody who has friends, coworkers, or family members in the opposing political encampment will recognize how the world the other half swims in looks completely different from the world we live in.
  6. It appears from the trend data that there was a back-and-forth movement. We didn’t move inexorably toward the truth. What were the factors that pushed these swings?

These things are too big for me to understand. But lots of the same issues are relevant to learning in organizations—both formal training and informal learning.

  1. How can we better ensure that information flows smoothly to all?
  2. How can we ensure that information is processed by all?
  3. How can we ensure that information is understood in more-or-less the same way by all?
  4. How can we be sure that we are trusted purveyors of information?
  5. How can we speed the acceptance of true information?
  6. How can we prevent misinformation from influencing people?
  7. How can we use implied connections, as opposed to explicit presentations of learning points, to influence learning and behavior? Stories are one way, perhaps.
  8. Can we figure out a way to map our organizations and the fishtanks of information people swim in, and inject information into these various networks to ensure we reach everyone?
  9. What role can knowledge testing, performance testing, or management oversight (and the feedback mechanisms inherent in these practices) play in correcting misinformation?

Most of what we call "training" is designed with the intention of improving people’s performance on the job. While it is true that much of training does not do this very well, it is still true that on-the-job performance is the singular stated goal of training.

But something is missing from this model. What’s missing is that a learning intervention can also prepare learners for future on-the-job learning. Let’s think this through a bit.

People on the job—people in any situation—are faced with a swarm of stimuli that they have to make sense of. Their mental models of how the world works will determine what they perceive. I’ve noticed this myself when I walk in the woods with experienced bird watchers. I hear birds, but can’t see them, no matter how hard I look. Experienced bird watchers see birds where I see nothing. The same stimuli have different outcomes because the expert birders have superior mental models about where birds might locate themselves.

The same is true for many things. As a better-than-average chess player, I will understand the patterns of the pieces better than a novice will. Experienced computer programmers see things that inexperienced programmers do not. Experienced lawyers will understand the nuances in someone’s testimony more than a novice lawyer.

Experience enables distinctions to be drawn between otherwise ambiguous stimuli. It enables people to perceive things that others don’t perceive. It helps people notice what others ignore.

Learning can be designed to provide amazing-grace moments, helping those who were once blind to see. If we’re serious about on-the-job learning, we ought to begin to build models of how to design formal learning to facilitate informal on-the-job learning.

Dan Schwartz, PhD (a learning psychologist at Stanford), has written recently about a concept called Preparation for Future Learning, or PFL. Schwartz argues that generally poor transfer results may be due to the common practice of assessing what was learned but failing to assess what learners are able to learn. This makes a lot of sense given how complex the real world is, how learners forget stuff so quickly, and how much they learn on the job.

Schwartz and his colleagues are working on ways to improve future learning by using "contrasting cases" that enable learners to see distinctions they hadn’t previously noticed. This concept might be used in formal training courses to prepare learners to see things they hadn’t seen before when they return to the job. For example, a manager being trained on supervisory skills may be taught that some decisions require group input, whereas other decisions require managers to decide on their own. Cases of both types could be provided in training so that relevant distinctions will be better noticed on the job.

A different way to prepare learners for future learning is to prime them with questions. In my dissertation research, I included one experiment in which I asked college students questions about campus attractions. For example, I asked them what the statue "Alma Mater" was carrying. A week later, I surprised the students by asking them some of the same questions again. The results revealed that simply asking them questions (even when no feedback was provided) improved how much they paid attention to the items on which they were queried. Between the two sets of questions, learners apparently paid attention to the statue in ways they hadn’t before. By being asked about an item, the learners were more likely to spend time learning about that item when they encountered it in their day-to-day walking around.

There are likely to be other similar learning opportunities, but the point is that we need ways to design our learning interventions to intentionally create these types of learning responses. I’m going to be thinking about this for a while. My hope is that you will too.

Perhaps these meager paragraphs have prepared you for future learning. SMILE.

There have been several published studies (and even more newspaper articles) that show cell-phone use while driving is correlated with accidents. The suggestion from these studies is that cell phones CAUSE accidents. The implication is that we should ban cell phones while driving.

This may be true. I was scared to death last week while my taxi driver was looking at his cell phone to dial numbers. He clearly did not have his eyes on the road. If anything unusual occurred (like the van in the next lane entering our lane right in front of us—watch out please watch out!), his reaction time would have been considerably slowed and we would have been much more likely to have an accident.

On the other hand, I wonder how much of the current problems are caused by a learning deficit. After all, for most of us cell phones are rather new. More importantly, driving while using a cell phone is also new. This kind of multitasking can be learned. There are research studies that show that experience doing multitasking can increase performance on the tasks being done. With enough practice, less working-memory capacity is needed, freeing up capacity to engage in the various tasks.

One hypothesis suggested by this is that cell-phone-related accidents will decrease with time as drivers get more practice using their cell phones while driving. Judging from the number of people I see driving and phoning, not many people are heeding the warnings, so lots of people are gaining more experience. Cell-phone accident rates will also decline as new technologies are utilized, namely voice dialing and hands-free cell phones.

On the other hand, a second hypothesis is that anything that prompts drivers to take their eyes off the road will produce similar deficits to cell-phone driving. Here’s a short list:

  1. People who read maps while driving.
  2. People who look at the radio to tune to a particular station.
  3. People who glance at the person sitting next to them while in conversation.
  4. People who look at their food before stuffing it in their mouths.
  5. People who admire the scenery.
  6. People who rubberneck at accident scenes.

People who look at their cell phones to dial a number are just asking for trouble. It probably helps to have two hands on the wheel, as well.

I’d be willing to bet that for most people fewer accidents will occur when using a hands-free, voice-dialing cell phone than when talking with someone sitting beside them in the front seat, assuming equal levels of experience doing both. The natural human tendency to want to look someone in the eyes while talking to them will prompt most of us to try and steal a glance at our conversational partners, increasing slightly the danger from unforeseen events.

Like most things in life, learning plays a central role in our cell-phone-while-driving performance. Like most things for us humans, our cognitive machinery sets the boundaries for this performance.

New Information from the Research (An Update on My Thinking)

Although I still wonder about our ability to learn how to utilize cell phones while driving, recent research suggests that right now, we are not too good at it. Check out my updated post on this.

Here’s the title of the research article:

When you know that you know and when you think that you know but you don’t.

Great title. Much more colorful than most academic research articles. The thing for us to realize is that the "you" in the title of this classic research study is OUR LEARNERS. Sometimes our learners can become overly optimistic about their ability to remember. Oh no. If we believe—even a little bit—the adult-learning-theory mantra that our learners always know best (an incorrect assumption by the way), then we might be setting our learners up for failure.

The research these folks did was designed to look at the spacing effect, the finding that widely-spaced repetitions are more effective than narrowly-spaced repetitions. As the authors say, the spacing effect "has been obtained in a wide variety of memory paradigms, suggesting that it reflects the operation of a fundamental property of the memory system."

Here’s What They Did

They had people study lists of words, some of which were repeated. They repeated some of the words immediately and some of them after 3 to 5 other words. Notice that the spacings we’re talking about here are rather small. For some of the words, they asked the subjects to rate how likely they would be to remember the words if asked about them later.
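To make the paradigm concrete, here is a minimal sketch, in Python, of how such a study list might be put together. This is my own illustration, not the authors’ materials: the word list, the spacing gap, and which items get a rating prompt are assumptions chosen only to show the massed-versus-spaced contrast.

```python
import random

def build_study_list(words, massed, spaced, gap=4, seed=0):
    """Build a study sequence in which 'massed' words repeat immediately
    and 'spaced' words repeat after roughly `gap` intervening words.
    Illustrative only -- not the original study materials."""
    random.seed(seed)
    order = words[:]
    random.shuffle(order)
    sequence = []
    for w in order:
        sequence.append(w)
        if w in massed:
            sequence.append(w)  # immediate (narrowly spaced) repetition
    # Insert spaced repetitions a few positions after their first occurrence.
    for w in spaced:
        first = sequence.index(w)
        sequence.insert(min(first + gap + 1, len(sequence)), w)
    return sequence

words = ["anchor", "basket", "candle", "dolphin", "ember", "fiddle"]
massed = {"basket", "ember"}   # repeated back-to-back
spaced = {"candle", "fiddle"}  # repeated after several other words
rated = {"basket", "candle"}   # learners predict their own later recall

for item in build_study_list(words, massed, spaced):
    note = "  [rate: how likely are you to remember this later?]" if item in rated else ""
    print(item + note)
```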

Here’s What They Found

  1. Learners were pretty good at estimating which words they would be able to recall. The ones they rated most likely to recall, they recalled 66% of the time on a later test. The ones they rated least likely to recall, they recalled only 35% of the time.
  2. Learners recalled words they were asked to rate better than words they were not asked to rate. Rated words were recalled 52% of the time; non-rated words were recalled 40% of the time. This shows that extra processing, especially meaningful processing, improves learning results. Learners had an extra six seconds per word to do the rating.
  3. Learners recalled repeated items better than items that were not repeated. This is the repetition effect. Non-repeated items were recalled 39% of the time. Repeated items were recalled 49% of the time.
  4. Perhaps what is most intriguing about the data is that narrowly-spaced repetitions gave learners greater confidence (that they would be able to recall a particular word) than widely-spaced repetitions, BUT they actually recalled narrowly-spaced repetitions less well than widely-spaced repetitions. Narrowly-spaced repetitions were recalled at a rate of 49%, compared with widely-spaced repetitions, which were recalled at a rate of 62%.

Is This Study Relevant to Me?

This is the generalizability question. Although the research had learners study words in a free-recall experimental design, instead of using complex knowledge in a more ecologically valid cued-recall design (real-world memory retrieval is almost always cued recall), there is no reason to think that the research results aren’t widely applicable. The spacing effect is one of the most replicated phenomena in learning research. Widely-spaced repetitions minimize forgetting, whereas immediate repetitions do not. And because the spacing effect is such a fundamental learning mechanism, the college-student-as-subject problem is not relevant.

Practical Recommendations

  1. Repeat key learning points.
  2. Use spaced repetitions when logistically possible. Even these short spacings made a big difference in the learning results.
  3. Consider prompting your learners to do additional processing of key learning points, especially if that processing is meaningful.
  4. Challenge your learners (within reason) by showing them that their memory is fallible, and that forgetting is likely if they avoid additional learning time.

Citation

Zechmeister, E. B., & Shaughnessy, J. J. (1980). When you know that you know and when you think that you know but you don’t. Bulletin of the Psychonomic Society, 15(1), 41-44.

At Elliott Masie’s Learning 2005 Conference there was a session on Learning R&D efforts within organizations. This seems like a great idea to me. The Masie Center has offered to host a discussion on the conference’s wiki. It’s just getting started, but I seeded the discussion with the following goals one might have for an Internal Learning Practice R&D effort. I repeat those goals here. Please comment.

Goals for Internal Learning R&D Group:

  1. Evaluate the possibilities of new learning technologies against organizational needs and opportunities.
  2. Compile research-based best practices to share with internal instructional professionals.
  3. Look for opportunities to evaluate current training offerings on learning effectiveness, behavior change, and business results.
  4. Encourage a more entrepreneurial or experimental mindset within the organization’s learning practice, to enable small test-case trials of learning innovations.
  5. Help the organization build a set of standards for its learning practice, including ethical considerations related to effectiveness.

Learning 2005 was a very good conference, but as is typical in our industry (training & development), this conference still lacked some fundamental elements. Some of the good and bad points of the conference are detailed below. They are provided to encourage changes in all future industry conferences.

The Good:

  1. High energy and excitement, created by Elliott and his pre-conference emailings, Elliott’s energetic facilitation of the general sessions, and the experimental use of new technologies.
  2. Taking risks and trying new models for conferences.
  3. Good opportunity to spur future thinking on leading-edge topics. For example, everyone at the conference will be likely to consider the use of wikis.
  4. The strong encouragement to presenters to be facilitators instead of lecturers created some really good sessions. It also helped a few participants feel free to criticize (or be skeptical of) the ideas of others. This is a good thing, though more of this is necessary.
  5. The human element rose higher than at any other conference in the industry. Not-for-profit groups were highlighted. People’s good works were celebrated with formal "Pioneers of Learning" awards. Elliott’s empathy and general-session conversations channeled a sense of relationship with the speakers and a sense of community.
  6. The keynotes actually had something to say about learning and performance. Yippee!!
  7. Elliott’s strong emphasis on research, metrics, and innovation.
  8. The sense that technological innovations were about to really get traction. A tipping point toward better e-learning design.
  9. Malcolm Gladwell was a clear and thoughtful thinker!!
  10. The Masie Center’s efforts to organize the group "The Learning Consortium" provided a sense that we were all in this together with the goal of helping each other.
  11. The participants seemed very knowledgeable and thoughtful (at least in the sessions I attended).
  12. The wiki at www.learningwiki.com provides a nice opportunity for discussion and further community building. It also provides participants with a way to connect with sessions/presenters that they missed because two or more sessions of interest were scheduled at the same time. It remains to be seen how this will work, but it is certainly worth the experiment. We as conference-goers may need to learn how to use this technology to get the most out of it.
  13. The Masie Center people are really nice and really helpful.
  14. Elliott didn’t let the keynote speakers simply deliver their canned speeches. He forced them to focus on what was most relevant by interviewing them. Very effective.
  15. The conference was fun!!

The Neutral:

  1. No exhibition hall. Good and bad. Good because sponsors didn’t show up just to sell. Bad because the selling took a more stealthy approach. Bad because sometimes it is nice as a participant to seek out potential vendors and have real conversations with them.
  2. The recommendation NOT to use PowerPoints is both good and bad. Sometimes visuals are helpful to make the point, clarify potential confusions, etc. PowerPoint is a tool. It is good for some things and bad for others. For example, I would have loved to see a brief PowerPoint when Elliott invited people up on stage so that I could catch their names. This seems respectful as well as a good learning design. Also, many presenters did not follow the recommendation and used PowerPoints anyway.

The Bad:

  1. Commercial interests still dominated many conversations. Some sponsors paid as much as $15,000 to lead three conference sessions. Although some sponsors led excellent sessions, others gave thinly-disguised sales pitches. If vendors dominate the conversational agenda, we diminish the wisdom of experts and we diminish the credibility of our field. Vendors should certainly be a part of the conversation, because many are leading the way, but they shouldn’t be the most powerful voice.
  2. Some of the sessions floundered under the goal of creating audience-generated discussions. Some of the facilitators were unskilled or talked too much. Some were simply new at facilitating, having more experience in traditional conference sessions delivering their presentations. Some sessions, while great as brainstorming sessions, were unencumbered by the hard work of separating the good ideas from the bad ones.
  3. There was not enough planned skepticism, making it likely that some bad ideas were accepted as good ideas. This is a major failing in our industry and one of the reasons why we jump from one fad to another. We are simply not skeptical enough and our conferences don’t encourage skepticism.
  4. It would be nice to ask each presenter a mandatory set of questions, including (a) what evidence do you have that your recommendations worked in your situation, (b) what evidence or research can you cite that shows that your recommendations are likely to work in other situations, (c) what research supports the basic concepts you’ve implemented, and/or (d) what negative and positive side effects might your recommendations create?
  5. Sometimes the gee-whiz factor went way too far, as when everybody got real excited just because a technology was new.
  6. There was very little discussion of the human learning system and how we needed to align our learning designs with it.
  7. The price was rather high for the conference admission.
  8. Disney’s hermetically-sealed hotel rooms, ingrained with toxic cleaning products, did not provide a healthy environment for sleeping or hanging out.

Yesterday was the final day of the conference. In the final general session, one of the speakers, Mike somebody (sorry, but there were no overheads to announce the general session speakers), warned the audience not to oversell Extreme Learning, Elliott’s term for pushing the technological boundaries and creating very short and quick learning episodes. Actually, now that I think about it, "Extreme Learning" is not really that clear. I guess it has to do with technical innovation. Anyway, the warning seems reasonable to me. Every time there is a new technology, vendors and consultants and everybody oversells that technology and then there is the inevitable backlash. The key is to keep things aligned with the human learning system.

Bob Pike was the most prominent general-session speaker, and he spoke very knowledgeably about how to deal with difficult participants, and a few other things as well. He then went off the rails and diagnosed 600 people’s personalities with just one question. He asked us to choose one of six animals—the one that we most resonated with. He then asked us to get out of our chairs and join our fellow beavers, dolphins, owls, care bears, and two other animals I can’t remember. These six groups were supposed to predict our behavior. It was freakin’ unbelievable, and yet most of the participants seemed to buy into it completely. Actually, I probably shouldn’t be so bold in that statement because I left the room after he divided us into groups. I sort of had to pee and I knew I wasn’t going to miss anything. I returned in five or ten minutes just before we were all allowed to return to our chairs.

In defense of the participants, it was the last day of the conference, everybody was a bit burned out, many had been up late the night before at the MGM theme park that Elliott rented for participants (too many long falls from the Tower of Terror), and Pike used one of the oldest tricks in the persuasion handbook, introducing the topic as an area of rare scientific inquiry, in this case "The Science of Axiology," developed by Robert S. Hartman. The main idea of this brilliant science is that (and this is a direct quote, I think) "something is good when it completely fulfills its characteristics."

What this means I’m not sure. In fact, I can’t make sense of it. Let’s see, I have many characteristics. What does it mean to fulfill a characteristic? I have brown eyes. How do I fulfill that characteristic? Since brown eyes are supposed to be more suited to climates with lots of sunlight (than blue eyes), can I only be good if I live near the equator? I have a tongue. Tongues can do many things. Must I do them all each day to be good? My nose can smell and help me breathe, but it can also run with boogers when I have a cold. Must it run each day if I am to be good? It would have to run with boogers if it was going to enable the tongue to fulfill all of its characteristics. I think I’ll stop there with the body parts. Maybe Pike meant inanimate objects only. Let’s take cars. Maybe cars can only be good if they fulfill the characteristic of reaching 120 miles per hour, which most are capable of. My car has never reached that speed, so it must be bad. Cars are large objects that go fast. These characteristics make them perfect for killing small animals and children. Can a car only be good if it fulfills its potential for killing (insects on the windshield count too)? Cars also have the characteristic of breaking down. Do cars have to be bad to be good?

I watched Elliott on the Jumbotron at the end of Pike’s delusionary sermon. He looked to be in pain. But he saved the show remarkably by thanking Pike for pushing the boundaries of our thinking (or something like that). Nicely done Elliott.

Pike ended in a classy fashion, giving a rather heartfelt plug to Elliott, hailing him as "The Great Connector." It was a touching moment. And it was true. Throughout the conference, we learned again and again how many people Elliott knew. It was quite amazing, and impressive, really.

And then, the show was over, and everyone went home.

In a later post, I’m going to rate the Learning 2005 conference along several dimensions. Overall, it was one of the better conferences I have attended, but it is nowhere near where our conferences need to be to really generate useful learning.