This is a review of ZengerFolkman’s ActionPlan Mapper. Let me provide a little background so you’ll understand my conclusions.

In 2002 I wrote an article on e-learning’s unique capability—that it was one of the few learning media that enabled us “to have meaningful and renewable contact with learners over time.” I argued that e-learning was a tool, and that we ought to figure out what it does well and maximize the advantage of that capability—as long as our e-learning methods are aligned with the human learning system. No sense utilizing an e-learning method if it doesn’t facilitate learning and performance.

I wrote about several learning factors that seemed ideal for e-learning. I also challenged the industry to get its butt in gear. At that time I didn’t see many applications of e-learning that took advantage of the connectedness capability. I just reread that article in preparation for writing this blog piece. It was really quite brilliant—even though I must say so myself—and I recommend it highly. You can purchase it for five bucks. Go ahead, make me rich.

A few years ago, I also taught an online class entitled Leveraging E-Learning. One of the suggestions I made in that class was that we ought to use our new-found internet/intranet capacity to connect with our learners’ managers as well as our learners. I even developed some rudimentary templates that outlined how this could be done.

Although I’m recounting my former brilliance for you in the hopes that you’ll hire me as your learning consultant in the near future—and to make myself feel good during these dark winter days—true geniuses don’t just rant and rave, they make things.

Today’s training-genius award goes to the folks at ZengerFolkman, who developed the ActionPlan Mapper. They have given me renewed faith that eventually e-learning will meet its promise.

The ActionPlan Mapper is a web-based hosted solution that is available 24/7. It was designed to help training participants take what they learned and apply it to their jobs. As Kelly Clayton, Product Leader for the ActionPlan Mapper, has said, “What we’re trying to prevent is the Monday-morning problem. People go to a training course, they take notes, they have discussions, they get energized, they’re roaring to go, but when they get back to the job on Monday, they are overwhelmed with their normal workload and the momentum for action fades to oblivion…The ActionPlan Mapper works by prodding the learners, reminding them to stay focused and keep pursuing the action items they previously resolved to accomplish.”


Review Details

From the two intensive demos I’ve seen, the ActionPlan Mapper is a great tool. From a learning-to-performance perspective, it creates some powerful learning effects:

  1. It indirectly reminds learners of what they learned, helping them retain it long enough to put it into action.
  2. It spurs workplace action by regularly reminding learners that they ought to be working to implement what they learned.
  3. It helps learners keep a focus on their intended post-training actions.
  4. It brings managers into the process of training implementation, making them partners and/or drivers of training application.
  5. It can be used to hold learners accountable for their action plans, helping to significantly lift the priority of training implementation from a “nice-to-do” to a “must-do.”

Although no formal evaluation studies have been completed as yet on ActionPlan Mapper, I’m willing to bet that learners who use it will be at least 50% more likely to utilize on the job what they learned in the classroom or in an e-learning course. I actually think performance improvements could be more like 200 to 300% for many post-training situations, but I’m being conservative because results always depend on many variables. Besides, a 50% improvement in on-the-job application is huge already!

The cost of the product seems reasonable to me, especially given the upside I just discussed. For only $40 to $250 per person (depending on several factors), ActionPlan Mapper is yours! The thinking behind the design is that simpler is better. Clayton claims that what ZengerFolkman was aiming for was a product that people would find easy and intuitive to use. As long as they have a web connection, people can use ActionPlan Mapper anytime anywhere to stay in touch with their action-planning projects. This design strategy is paying off as clients are using the tool beyond the training context for development planning, follow-up to performance reviews and strategy sessions, and more.

Description by way of Screen Shots


On the first screen below, participant Bob Sherwin has two action-planning “projects.”



The second screen shows Bob’s goals for his action-planning project, “Becoming a Better Manager.” The grayed ones are already accomplished. Goal 4.4 has a lock next to it to indicate that it is a private goal (viewable only by the participant, not by his or her manager).



Participants are prodded and reminded with emails from the system. They can also be encouraged to focus on their goals by their managers. In fact, the system seems ideally structured to encourage conversations on tasks central to business goals and organizational success.

The third screen shows the manager’s view.



More complex systems like Microsoft Project are available for some similar applications, but these are not really suited to the kind of use envisioned by ZengerFolkman. A product with similar training-follow-up capability, FridayFives, is offered by the Fort Hill Company.

A bright idea.

Since it’s easier for me to come up with ideas than it is for these folks to develop these products, here’s an idea, for what it’s worth.

I’d like to see these systems augmented to create a parallel structure to provide direct learning reminders and/or practice opportunities. For example, for a leadership course, learners could be given periodic scenarios related to managing people. Learners would have to decide what to do in these leadership situations. These scenarios would help remind learners of what they learned and thus make it much more likely that, when faced with similar situations on the job, they’ll remember how to perform successfully. The learning research shows clearly that such “retrieval-practice” opportunities are great at prompting long-term memory. The leadership scenarios would also provide learners with feedback and help them assess their competence, thereby giving them a heads-up to the kinds of information they could look for as they attempt to learn on the job.

Other reminder systems and retrieval-practice systems could be developed as well.
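To make the idea concrete, here is a minimal sketch (in Python) of how such a system might queue retrieval-practice scenarios at widening intervals after a course ends. The scenario names, function name, and intervals are hypothetical illustrations of my own, not features of ActionPlan Mapper or any other product:

```python
from datetime import date, timedelta

def schedule_scenarios(course_end, scenarios, intervals_days=(7, 21, 60)):
    """Pair each practice scenario with a send date at widening intervals.

    Scenarios cycle if there are more send dates than scenarios, so each
    one gets repeated over time rather than delivered all at once.
    """
    sends = []
    for i, days in enumerate(intervals_days):
        scenario = scenarios[i % len(scenarios)]
        sends.append((course_end + timedelta(days=days), scenario))
    return sends

# Hypothetical leadership-course scenarios, scheduled after a Nov 1 course.
plan = schedule_scenarios(
    date(2005, 11, 1),
    ["Delegating a stalled project", "Giving corrective feedback"],
)
for when, what in plan:
    print(when, "->", what)
```

The point of the widening 7/21/60-day gaps is the spacing effect discussed later in this piece: each reminder arrives just as the material is starting to fade.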

Still, bottom line, I love the ActionPlan Mapper concept. It’s simple, but it drives training transfer. It’s relatively inexpensive, but it utilizes one of the uniquely potent characteristics of online learning—the connection we can have to our learners and their managers. Way to go ZengerFolkman!!

Recently, I had the honor of being interviewed by Karl Kapp, EdD, in an interview sponsored by The E-Learning Guru, Kevin Kruse.

It was a fun interview, covering many wide-ranging issues in our industry and in learning research. Click to read more.

This blurb is reprised from an earlier Work-Learning Research Newsletter, circa 2002. These "classic" pieces are offered again to make them available permanently on the web. Also, they’re just good fun. I’ve added an epilogue to the piece below.

"What prevents people in the learning-and-performance field from utilizing proven instructional-design knowledge?"

Recently, I’ve spoken with several very experienced learning-and-performance consultants who have each—in their own way—asked the question above. In our discussions, we’ve considered several options, which I’ve flippantly labeled as follows:

  1. They don’t know it. (They don’t know what works to improve instruction.)
  2. They know it, but the market doesn’t care.
  3. They know it, but they’d rather play.
  4. They know it, but don’t have the resources to do it.
  5. They know it, but don’t think it’s important.

Argument 1.

They don’t know it. (They don’t know what works to improve instruction.)
Let me make this concrete. Do people in our field know that meaningful repetitions are probably our most powerful learning mechanism? Do they know that delayed feedback is usually better than immediate feedback? That spacing learning over time facilitates retention. That it’s important to increase learning and decrease forgetting? That interactivity can either be good or bad, depending on what we’re asking learners to retrieve from memory? One of my discussants suggested that "everyone knows this stuff and has known it since Gagne talked about it in the 1970’s."

Argument 2.

They know it, but the market doesn’t care.
The argument: Instructional designers, trainers, performance consultants and others know this stuff, but because the marketplace doesn’t demand it, they don’t implement what they know will really work. This argument has two variants: The learners don’t want it or the clients don’t want it.

Argument 3.

They know it, but they’d rather play.
The argument: Designers and developers know this stuff, but they’re so focused on utilizing the latest technology or creating the snazziest interface, that they forget to implement what they know.

Argument 4.

They know it, but don’t have the resources to use it.
The argument: Everybody knows this stuff, but they don’t have the resources to implement it correctly. Either their clients won’t pay for it or their organizations don’t provide enough resources to do it right.

Argument 5.

They know it, but don’t think it’s important.
The argument: Everybody knows this stuff, but instructional-design knowledge isn’t that important. Organizational, management, and cultural variables are much more important. We can instruct people all we want, but if managers don’t reward the learned behaviors, the instruction doesn’t matter.

My Thoughts In Brief

First, some data. On the Work-Learning Research website we provide a 15-item quiz that presents people with authentic instructional-design decisions. People in the field should be able to answer these questions with at least some level of proficiency. We might expect them to get at least 60 or 70% correct. Although web-based data-gathering is loaded with pitfalls (we don’t really know who is answering the questions, for example), here’s what we’ve found so far: On average, correct responses are running at about 30%. Random guessing would produce 20 to 25% correct. Yes, you’ve read that correctly—people are doing a little bit better than chance. The verdict: People don’t seem to know what works and what doesn’t in the way of instructional design.
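For context on that chance baseline, a quick simulation makes the gap vivid. I'm assuming (as the stated 20 to 25% figure implies) four or five answer options per item; the option counts and this little script are my own illustration, not part of the actual quiz:

```python
import random

def chance_score(n_items=15, n_options=4, trials=10_000, seed=1):
    """Average percent correct from pure guessing on a multiple-choice quiz."""
    random.seed(seed)
    correct = sum(
        random.randrange(n_options) == 0  # guess matches the one right answer
        for _ in range(trials)
        for _ in range(n_items)
    )
    return 100 * correct / (trials * n_items)

print(round(chance_score(n_options=4)))  # about 25: chance with 4 options
print(round(chance_score(n_options=5)))  # about 20: chance with 5 options
```

The observed 30% average sits only a few points above these guessing baselines, which is exactly the worry.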

Some additional data. Our research on learning and performance has revealed that learning can be improved through instruction by up to 220% by utilizing appropriate instructional-design methods. Many of the programs out there do not utilize these methods.

Should we now ignore the other arguments presented above? No, there is truth in them. Our learners and clients don’t always know what will work best for them. Developers will always push the envelope and gravitate to new and provocative technologies. Our organizations and our clients will always try to keep costs down. Instruction will never be the only answer. It will never work without organizational supports.

What should we do?

We need to continue our own development and bolster our knowledge of instructional design. We need to gently educate our learners, clients, and organizations about the benefits of good instructional design and good organizational practices. We need to remind technology’s early adopters to remember our learning-and-performance goals. We need to understand instructional-design tradeoffs so that we can make them intelligently. We need to consider organizational realities in determining whether instruction is the most appropriate intervention. We need to develop instruction that will work where it is implemented. We need to build our profession so that we can have a greater impact. We need to keep an open mind and continue to learn from our learners, colleagues, and clients, and from the research on learning and performance.

Will’s New Thoughts (November 2005)

I started Work-Learning Research in 1998 because I saw a need in the field to bridge the gap between research and practice. In these past seven years, I’ve made an effort to compile research and disseminate it, and though partly successful, I often lament my limited reach. Like most entrepreneurs, I have learned things the hard way. That’s part of the fun, the angst, and the learning. 

In the past few years, the training and development field has gotten hungrier and hungrier for research. I’ve seen this in conferences where I speak. The research-based presentations are drawing the biggest crowds. I’ve seen this in the increasing number of vendors who are highlighting their research bona fides, whether they do good research or not. I’ve seen this recently in Elliott Masie’s call for the field to do more research.

This hunger for research has little to do with my meager efforts at Work-Learning Research. Though sometimes in my daydreams I like to think I have influenced at least some in the field, maybe even some opinion leaders, as a data-first empiricist I have to admit the evidence is clear: my efforts are often under the radar. Ultimately, this is unimportant. What is important is what gets done.

I’m optimistic. Our renewed taste for research-based practice provides an opportunity for all of us to keep learning and to keep sharing with one another. I’ve got some definite ideas about how to do this. I know many of you who read this do too. We may not—as a whole industry—use enough research-based practices, but there are certainly some individuals and organizations who are out there leading the way. They are the heroes, for it is they who are out there taking risks, asking for the organizational support most of us don’t ask for, making a difference one mistake at a time.

One thing we need to spur this effort is a better feedback loop. If we don’t go beyond the smile sheet, we’re never going to improve our practices. We need feedback on whether our learning programs are really improving learning and long-term retrieval. Don’t think that just because you are out there on the bleeding edge that you’re championing the revolution. You need to ensure that your efforts are really making things better—that your devotion is really improving learning and long-term retention. If you’re not measuring it, you don’t really know.

Let me end by saying that research from refereed journals and research-based white papers should not be the only arbiter of what is good. Research is useful as a guide—especially when our feedback loop is so enfeebled and organizational funds for on-the-job learning measurement are so impoverished.

It would be better, of course, if we could all test our instructional designs in their real-world contexts. Let us move toward this, drawing from all sources of wisdom, dipping our ladles into the rich research base and the experiences of those who measure their learning efforts.

Research from the world’s preeminent refereed journals on learning and instruction shows that by aligning the learning and performance contexts, learning results can be improved by substantial amounts. In fact, it is this alignment that makes simulations effective, that creates the power behind hands-on training, and that enables action learning to produce its most profound effects.

The research suggests the following points related to instructional design:

1. When humans learn, we absorb both the instructional message and background stimuli and integrate them into memory so that they become interconnected.

2. Humans in their performance situations are reactive beings. Our thoughts and actions are influenced by stimuli in our surrounding environment. If cues in our environment remind us of what we previously learned, we’ll remember more.

3. These first two principles can combine to aid remembering, and hence performance, in powerful ways. If during the learning situation we can connect the key learning points to background stimuli that will be observed in the learner’s on-the-job performance situation, then these stimuli will remind learners of what they previously learned!

4. The more the learning context mirrors the real-world performance context, the greater the potential for facilitating remembering. When the learning and performance contexts include similar stimuli, we can say they are "aligned."

5. The more learners pay attention to the background contextual stimuli, the higher the likelihood of obtaining context effects.

6. Context effects can take many forms. People who learn in one room will remember more in that room than in other rooms. People who learn a topic when they are sad will remember more about that topic when they are sad. People who learn while listening to Mozart will retrieve more information from memory while listening to Mozart than while listening to jazz. People who learn a fact while smelling peppermint will be better able to recall that fact while smelling peppermint than while smelling another fragrance. People who learn in the presence of their coworkers will remember more of what they learned in the presence of those coworkers.

7. Context can aid remembering and performance, but it can have negative effects when aspects of the learning context are not available in the on-the-job performance context.

8. Context effects can be augmented by prompting learners to focus on the background context. Context effects can be diminished by prompting learners to focus less on the background context.

9. The fewer the background contextual elements per learning point, the more powerful the context effects.

10. The easiest and most effective way to align the learning and performance contexts is to modify the learning context. But other options are available as well.

11. The performance context can be modified through management involvement, performance-support tools, and other reminding devices.

12. When the performance context cannot be determined in advance—or when the learned tasks will be performed in many contexts—multiple learning contexts can facilitate later memory retrieval and performance.

13. Learners in their performance situations can improve the recall of what they learned by visualizing the learning situation.

14. Cues can be added to both the learning context and the performance context to aid remembering.

15. Context effects have their most profound impact when other retrieval cues are not available for use. For example, context effects typically do not occur on multiple-choice tests or for other performance situations where learners are provided with hints.

16. To fully align the learning and performance contexts, instructional practice should include opportunities for learners to face all four aspects of performance: (1) situation, (2) evaluation, (3) decision, and (4) action. To create the best results, learners must be faced with realistic situations, make sense of them, decide what to do, and then practice the chosen action.

To read more about this fundamental learning factor (or to see the research behind these suggestions), you can access an extensive report from the Work-Learning Research catalog.

Learning is a many-splendored thing. Want evidence? Consider the overabundance of theories of learning. Greg Kearsley has a nice list. To me, this overabundance is evidence that the human learning system has not yet been lassoed and cataloged with any great precision. Ironic that DNA is easier to map than learning.

Being a political junkie, I’m fascinated with how a population of citizens learns about their government and the societal institutions of power. Democracy is rooted in the idea that we the citizenry have learned the right information to make good decisions. In theory this makes sense, while in practice imperfect knowledge is the norm. This discussion may relate to learning in the workplace as well.

Take one example from recent events. On September 11th, 2001, the United States was attacked by terrorists. The question arose, who were these terrorists? Who sent them? Who helped them? One particular question was asked. "Was Saddam Hussein (dictator of Iraq) involved?" I use this question because there is now generally-accepted objective evidence that Saddam Hussein was not involved in the 9/11 attack in any way. Even President Bush has admitted this. On September 17th, 2003, Bush said, in answer to a question from a reporter, "No, we’ve had no evidence that Saddam Hussein was involved with September the 11th." Despite this direct piece of information, the Bush administration has repeatedly implied, before and after this statement, that the war in Iraq is a response to 9/11. We could discuss many specific instances of this—we could argue about this—but I don’t want to belabor the point. What I want to get at is how U.S. citizens learned about the reality of the question.


Take a look at the polling data, which I’ve marked up to draw your eyes toward two interesting realities. First, look at the "Trend" data. It shows that we the citizens have changed our answer to the question asked over time. In September of 2002, 51% of Americans incorrectly believed that Saddam was personally involved in September 11th. Last month, in October of 2005, the number had dived to 33%. The flip side of this showed that 33% correctly denied any link between Saddam and 9/11 in October of 2002, while today the number is a more healthy 55% correct, but still a relatively low number. If we think in terms of school-like passing-grade cutoffs, our country gets a failing grade.

The second interesting reality is how different groups of people have "Different Realities" about what is true. You’ll notice the difference in answering these questions between Republicans and Democrats.

These data encourage me to conclude or wonder about the following:

  1. Even well-established facts can engender wide gaps in what is considered true. Again, this highlights the human reality of "imperfect knowledge."
  2. Stating a fact (or a learning point) will not necessarily change everyone’s mind. It is not clear from the data whether the problem is one of information exposure or information processing. Some people may not have heard the news. People who heard the news may not have understood it, they may have rejected it, or they may have subsequently forgotten it.
  3. Making implied connections between events can be more powerful than stating things explicitly. It is not clear whether this is also a function of the comparative differences in the number of repetitions people are exposed to. This implied-connection mechanism reminds me of the "false-memory" research findings of folks like Elizabeth Loftus. Are the Republicans better applied psychologists than the Democrats?
  4. Why is it that so many citizens are so ill-informed? Why don’t (or why can’t) our societal information-validators do their jobs? If the media, if our trusted friends, if our political leaders, if our religious leaders, if opinion leaders can’t persuade us toward the truth, is something wrong with these folks, is something wrong with us, is there something about human cognitive processing that enables this disenfranchisement from objective reality? (Peter Berger be damned).
  5. I’m guessing that lots of the differences between groups depend upon which fishtank of stimuli we swim in. Anybody who has friends, coworkers, or family members in the opposing political encampment will recognize how the world the other half swims in looks completely different from the world we live in.
  6. It appears from the trend data that there was a back-and-forth movement. We didn’t move inexorably toward the truth. What were the factors that pushed these swings?

These things are too big for me to understand. But lots of the same issues are relevant to learning in organizations—both formal training and informal learning.

  1. How can we better ensure that information flows smoothly to all?
  2. How can we ensure that information is processed by all?
  3. How can we ensure that information is understood in more-or-less the same way by all?
  4. How can we be sure that we are trusted purveyors of information?
  5. How can we speed the acceptance of true information?
  6. How can we prevent misinformation from influencing people?
  7. How can we use implied connections, as opposed to explicit presentations of learning points, to influence learning and behavior? Stories are one way, perhaps.
  8. Can we figure out a way to map our organizations and the fishtanks of information people swim in, and inject information into these various networks to ensure we reach everyone?
  9. What role can knowledge testing, performance testing, or management oversight (and the feedback mechanisms inherent in these practices) play in correcting misinformation?

Here’s the title of the research article:

When you know that you know and when you think that you know but you don’t.

Great title. Much more colorful than most academic research articles. The thing for us to realize is that the "you" in the title of this classic research study is OUR LEARNERS. Sometimes our learners can become overly optimistic about their ability to remember. Oh no. If we believe—even a little bit—the adult-learning-theory mantra that our learners always know best (an incorrect assumption by the way), then we might be setting our learners up for failure.

The research these folks did was designed to look at the spacing effect, the finding that widely-spaced repetitions are more effective than narrowly-spaced repetitions. As the authors say, the spacing effect "has been obtained in a wide variety of memory paradigms, suggesting that it reflects the operation of a fundamental property of the memory system."

Here’s What They Did

They had people study lists of words, some of which were repeated. Some of the words were repeated immediately and some after 3 to 5 other words. Notice that the spacings we’re talking about here are rather small. For some of the words, they asked the subjects to rate how likely they would be to remember the words if asked about them later.

Here’s What They Found

  1. Learners were pretty good at estimating which words they would be able to recall. The ones they rated most likely to recall, they recalled 66% of the time on a later test. The ones they rated least likely to recall, they recalled only 35% of the time.
  2. Learners recalled words they were asked to rate better than words they were not asked to rate. Rated words were recalled 52% of the time, non-rated words were recalled 40% of the time. This shows that extra processing, especially meaningful processing, improves learning results. Learners had an extra six seconds per word to do the rating.
  3. Learners recalled repeated items better than items that were not repeated. This is the repetition effect. Non-repeated items were recalled 39% of the time. Repeated items were recalled 49% of the time.
  4. Perhaps what is most intriguing about the data is that narrowly-spaced repetitions gave learners greater confidence (that they would be able to recall a particular word) than widely-spaced repetitions, BUT they actually recalled narrowly-spaced repetitions less well than widely-spaced repetitions. Narrowly-spaced repetitions were recalled at a rate of 49%, compared with widely-spaced repetitions, which were recalled at a rate of 62%.

Is This Study Relevant to Me?

This is the generalizability question. Although the research had learners study words in a free-recall experimental design, instead of using complex knowledge in a more ecologically valid cued-recall design (real-world memory retrieval is almost always cued recall), there is no reason to think that the research results aren’t widely applicable. The spacing effect is one of the most replicated phenomena in learning research. Widely-spaced repetitions minimize forgetting, whereas immediate repetitions do not. And because the spacing effect is such a fundamental learning mechanism, the college-student-as-subject problem is not relevant.

Practical Recommendations

  1. Repeat key learning points.
  2. Use spaced repetitions when logistically possible. Even these short spacings made a big difference in the learning results.
  3. Consider prompting your learners to do additional processing of key learning points, especially if that processing is meaningful.
  4. Challenge your learners (within reason) by showing them that their memory is fallible, and that forgetting is likely unless they invest additional learning time.
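As an illustration of recommendations 1 and 2, a course designer might distribute repetitions of each key learning point across sessions so that repetitions of the same point never land back-to-back. This is only a sketch of the scheduling idea, with made-up learning points and a function name of my own:

```python
def space_repetitions(points, n_sessions, reps=3):
    """Assign each learning point to `reps` sessions, spread apart rather than massed."""
    schedule = {s: [] for s in range(1, n_sessions + 1)}
    step = max(1, n_sessions // reps)  # gap between repetitions of the same point
    for i, point in enumerate(points):
        # offset each point by its index so no single session is overloaded
        for r in range(reps):
            session = ((i + r * step) % n_sessions) + 1
            if point not in schedule[session]:
                schedule[session].append(point)
    return schedule

# Two hypothetical management learning points spread across six sessions:
plan = space_repetitions(["give timely feedback", "delegate outcomes"], n_sessions=6)
```

With six sessions and three repetitions, each point recurs every other session, which is the widely-spaced pattern the study found to produce better recall.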


Zechmeister, E. B., & Shaughnessy, J. J. (1980). When you know that you know and when you think that you know but you don’t. Bulletin of the Psychonomic Society, 15(1), 41-44.

Learning 2005 was a very good conference, but as is typical in our industry (training & development) this conference still lacked some fundamental elements. Some of the good and bad points of the conference are detailed below. They are provided to encourage changes in all future industry conferences.

The Good:

  1. High energy and excitement, created by Elliott and his pre-conference emailings, Elliott’s energetic facilitation of the general sessions, and the experimental use of new technologies.
  2. Taking risks and trying new models for conferences.
  3. Good opportunity to spur future thinking on leading-edge topics. For example, everyone at the conference will be likely to consider the use of wikis.
  4. The strong encouragement to presenters to be facilitators instead of lecturers created some really good sessions. It also helped a few participants feel free to criticize (or be skeptical of) the ideas of others. This is a good thing, though more of this is necessary.
  5. The human element rose higher than at any other conference in the industry. Not-for-profit groups were highlighted. People’s good works were celebrated with formal "Pioneers of Learning" awards. Elliott’s empathy and general-session conversations channeled a sense of relationship with the speakers and a sense of community.
  6. The keynotes actually had something to say about learning and performance. Yippee!!
  7. Elliott’s strong emphasis on research, metrics, and innovation.
  8. The sense that technological innovations were about to really get traction. A tipping point toward better e-learning design.
  9. Malcolm Gladwell was a clear and thoughtful thinker!!
  10. The Masie Center’s efforts to organize the group "The Learning Consortium" provided a sense that we were all in this together with the goal of helping each other.
  11. The participants seemed very knowledgeable and thoughtful (at least in the sessions I attended).
  12. The conference wiki provides a nice opportunity for discussion and further community building. It also provides participants with a way to connect with sessions/presenters they missed because two or more sessions of interest were scheduled at the same time. It remains to be seen how this will work, but it is certainly worth the experiment. We as conference-goers may need to learn how to use this technology to get the most out of it.
  13. The Masie Center people are really nice and really helpful.
  14. Elliott didn’t let the keynote speakers simply deliver their canned speeches. He forced them to focus on what was most relevant by interviewing them. Very effective.
  15. The conference was fun!!

The Neutral

  1. No exhibition hall. Good and bad. Good because sponsors didn’t show up just to sell. Bad because the selling took a more stealthy approach. Bad because sometimes it is nice as a participant to seek out potential vendors and have real conversations with them.
  2. The recommendation NOT to use PowerPoints is both good and bad. Sometimes visuals are helpful to make the point, clarify potential confusions, etc. PowerPoint is a tool. It is good for some things and bad for others. For example, I would have loved to see a brief PowerPoint when Elliott invited people up on stage so that I could catch their names. This seems respectful as well as a good learning design. Also, many presenters did not follow the recommendation and used PowerPoints anyway.

The Bad

  1. Commercial interests still dominated many conversations. Some sponsors paid as much as $15,000 to lead three conference sessions. Although some sponsors led excellent sessions, others gave thinly-disguised sales pitches. If vendors dominate the conversational agenda, we diminish the wisdom of experts and we diminish the credibility of our field. Vendors should certainly be a part of the conversation, because many are leading the way, but they shouldn’t be the most powerful voice.
  2. Some of the sessions floundered under the goal of creating audience-generated discussions. Some of the facilitators were unskilled or talked too much. Some were simply new at facilitating, having more experience delivering presentations in traditional conference sessions. Some sessions, while great as brainstorming sessions, were unencumbered by the hard work of separating the good ideas from the bad ones.
  3. There was not enough planned skepticism, making it likely that some bad ideas were accepted as good ideas. This is a major failing in our industry and one of the reasons why we jump from one fad to another. We are simply not skeptical enough and our conferences don’t encourage skepticism.
  4. It would be nice to ask each presenter a mandatory set of questions, including: (a) What evidence do you have that your recommendations worked in your situation? (b) What evidence or research can you cite to show that your recommendations are likely to work in other situations? (c) What research supports the basic concepts you’ve implemented? and/or (d) What negative and positive side effects might your recommendations create?
  5. Sometimes the gee-whiz factor went way too far, as when everybody got really excited just because a technology was new.
  6. There was very little discussion of the human learning system and how we needed to align our learning designs with it.
  7. The price was rather high for the conference admission.
  8. Disney’s hermetically-sealed hotel rooms, ingrained with toxic cleaning products, did not provide a healthy environment for sleeping or hanging out.

I’ve been avoiding blogging.

Why? Mostly because I thought blogs were evil—just another contributor to "bad information gone wide." Despite my worries about an expanding universe of vacuous claptrap, I’ve decided to take my own advice and view blogging as just another tool—with strengths and weaknesses.

Why am I starting to blog now?

  1. Kathleen Gilroy, pixie mensch and esteemed leader of the Otter Group, talked me into it when she explained the Web 2.0 idea to me.
  2. I needed a way to convey information quickly and informally.
  3. I needed a way to get the Work-Learning Research newsletter up online permanently.
  4. I wanted to learn firsthand about the potential of this new technology for learning.
  5. I wanted a way to connect with clients, thought-leaders, colleagues, friends.
  6. I wanted a way to learn from others.
  7. I wanted to get started on idea projects that were on the back burner, pushing them toward completion, getting something out there in the event of my early demise.
  8. I wanted to get younger, hipper, and better looking.

I’d like to offer a special thanks to Kathleen and also to someone she introduced me to—Bill Ives, author of Business Blogs: A Practical Guide. Bill’s wisdom has helped me think through my blogging strategy as he has eased me up the learning curve.

This blog, Will at Work Learning, will throw out lots of ideas about research-based learning design and the learning-and-performance industry.

The Work-Learning Journal will offer longer pieces on more in-depth topics. It will also include thought-provoking pieces from other researchers and thinkers in our field.

Please note the name of this blog, Will at Work Learning. Not only is this convenient because it parallels the Work-Learning Research company name, but it also expresses my dearest hope—that this blog will engender my own learning. Please contribute with your comments.

Among the many changes on the horizon at Work-Learning Research, one of the most exciting for me is our new emphasis on learning audits. As the field moves more toward "evidence-based practices" and "evaluation-tested learning," more and more decision-makers are incorporating outside evaluations into their instructional-design repertoires.

Before we unveil our learning-audit offerings in their new finery, we’d like to offer viewers of this journal a 25% discount for all audits that are started and completed within this calendar year. This offer only lasts while we have capacity to perform the audits—as of today, we can schedule about 10 more audits through the end of the year.

Whether for your organization, or your clients’ organizations, a learning audit might be just the thing to energize your instructional-design efforts, your e-learning efforts, your strategic organizational-learning initiatives.

Learning Audits aren’t cheap, but the information they produce can be priceless.

If you want to look more closely at our old verbiage on audits, check out this link. Better yet, contact me directly to get started or just ask questions.

Jonathon Levy, currently Senior Learning Strategist at the Monitor Group, tells the story from his days as Vice President at Harvard Business School Publishing. The story is funny and sad at the same time, but it’s very instructive on several fronts.

Levy’s client decided that he would award end-of-year Christmas bonuses based on how successful his employees were in completing the Harvard online courses. Levy advised against it, but the client did it anyway.

The results were predictable, but they might never have been noticed if Jonathon’s Harvard team had not integrated into all their courses a tracking system to provide themselves with feedback about how learners used the courses. The tracking system showed that learners didn’t read a thing; they just scanned the course and clicked where they were required to click. They just wanted to get credit so they could maximize their bonuses.

Although very little learning took place, everyone was happy.

  • Learners were happy because they got their bonuses.
  • The client (training manager) was happy because he could show a remarkable completion rate.
  • Line managers were happy because the learners wasted very little time in training.
  • Senior management was happy because they could demonstrate a higher rate of utilization of online learning.

What can we learn from this?

  1. Be careful what you reward, because you will get the specific behavior you reinforce (course completion, in this case).
  2. Completion rates are a poor measure of training effectiveness. Same as butts in seats.
  3. We need authentic measures of training effectiveness to prevent such silliness.
  4. Instructional-design activities benefit when good tracking and feedback mechanisms are built into our designs.