Among the many changes on the horizon at Work-Learning Research, one of the most exciting for me is our new emphasis on learning audits. As the field moves toward "evidence-based practices" and "evaluation-tested learning," more and more decision-makers are incorporating outside evaluations into their instructional-design repertoires.

Before we unveil our learning-audit offerings in their new finery, we'd like to offer readers of this journal a 25% discount on all audits started and completed within this calendar year. This offer lasts only while we have capacity to perform the audits: as of today, we can schedule about 10 more audits through the end of the year.

Whether for your organization or your clients' organizations, a learning audit might be just the thing to energize your instructional-design efforts, your e-learning efforts, and your strategic organizational-learning initiatives.

Learning audits aren't cheap, but the information they produce can be priceless.

If you'd like a closer look at our earlier description of audits, check out this link. Better yet, contact me directly to get started or simply to ask questions.

Jonathon Levy, currently Senior Learning Strategist at the Monitor Group, tells a story from his days as Vice President at Harvard Business School Publishing. The story is funny and sad at the same time, but it's instructive on several fronts.

Levy's client decided to award end-of-year Christmas bonuses based on how successfully his employees completed the Harvard online courses. Levy advised against it, but the client did it anyway.

The results were predictable, but they might never have been noticed had Jonathon's Harvard team not built into all their courses a tracking system that gave them feedback on how learners actually used the courses. The tracking showed that learners didn't read a thing; they simply scanned each course and clicked where they were required to click. They just wanted to get credit so they could maximize their bonuses.


Although very little learning took place, everyone was happy.

  • Learners were happy because they got their bonuses.
  • The client (training manager) was happy because he could show a remarkable completion rate.
  • Line managers were happy because the learners wasted very little time in training.
  • Senior management was happy because they could demonstrate a higher rate of utilization of online learning.


What can we learn from this?

  1. Be careful what you reward, because you may get exactly the behavior you reinforce (course completion, in this case).
  2. Completion rates are a poor measure of training effectiveness; they are no better than counting butts in seats.
  3. We need authentic measures of training effectiveness to prevent this kind of silliness.
  4. Instructional-design efforts benefit when good tracking and feedback mechanisms are built into our designs.