My list of Best Books of 2006 for the Workplace Learning-and-Performance Field is as follows:

You can purchase these books directly at the Will’s Blog Amazon.com Store

Wick, Pollock, Jefferson, and Flanagan (2006). The Six Disciplines of Breakthrough Learning: How to Turn Training and Development Into Business Results.

This book is nothing short of revolutionary, providing a comprehensive analysis of how to create learning interventions that have performance impact. I previously reviewed this book and awarded the lead author the 2006 Neon Elephant Award.

Pfeffer and Sutton (2006). Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management.

This book provides an evidence-based critique of many of today’s most common management fads. The information in the book is critical to learning-and-performance professionals responsible for leadership and management development efforts (BECAUSE CONTENT MATTERS), but also as a shining example for our field (BECAUSE IF WE’RE NOT USING EVIDENCE-BASED PRACTICES, WE’RE NOT GETTING OPTIMAL RESULTS).


Clark, Nguyen, and Sweller (2005). Efficiency in Learning: Evidence-Based Guidelines to Manage Cognitive Load

Another in Ruth Clark’s excellent series of research-to-practice books where she partners with leading learning researchers. Here she joins with Frank Nguyen and partners with John Sweller, developer of cognitive load theory. The book tells us how to create learning interactions that avoid overloading our learners’ limited working memory and perceptual-channel capacities.

Rossett and Schafer (2006). Job Aids and Performance Support: Moving From Knowledge in the Classroom to Knowledge Everywhere

Allison Rossett and Lisa Schafer have created a great book on performance-support systems. With an introduction from Gloria Gery, this book nicely outlines the power and potential of performance support to replace and/or augment traditional training interventions. The book is written using a sensible, research-based approach. It describes when performance support is valuable, and when it’s not.

Israelite (2006). Lies About Learning: Leading Executives Separate Truth from Fiction In a $100 Billion Industry

Larry Israelite edits this book as the various authors take on many of the most important myths in our field. This kind of book is critical in our typically lost-in-denial field.

Allen (2006). Michael Allen’s E-Learning Library: Creating Successful E-Learning: A Rapid System For Getting It Right First Time, Every Time

Michael Allen knows instructional design from the ground floor up. For four decades he’s led the field. Now he shares his hard-earned knowledge with the rest of us. This book provides a model for real-world instructional designers to create effective e-learning. This is the first in Michael Allen’s ongoing Pfeiffer e-learning series.

You can purchase these books directly at the Will’s Blog Amazon.com Store

And here’s another example of a well-respected industry analyst lazily sharing the biggest myth in the learning field. This time it’s from a Senior Industry Analyst with Forrester Research (October 19th, 2006). See recorded webinar.

[Image: slide from the Forrester webinar presenting the 10%/20% figures]

Read my initial post describing how this myth got started, and how it harms our field and our learners.

The source of the offending PowerPoint slide claims the data as their own ("Source: Forrester Research"). Yeah, I guess if you find false information on the web and change it around a little bit to help you make your point, then you ought to cite yourself. Is it plagiarism if you steal a lie?

Makes you wonder what other information Forrester has "researched."

To make it easier for the Forrester marketing and public relations folks to respond to this outing, I’ve developed a new logo for them. Instead of the name "Forrester" superimposed on the sea-green ellipse, how about the following?

[Image: mock-up of the proposed new Forrester logo]

This constant myth-sharing should stop.

Do you think it would help if I started naming names? What about photographs? Email addresses?

Maybe sarcasm will work.

Newspapers and magazines have recognized for many years that there must be a clear distinction between content and advertising in the perceptions of readers. These entities not only have clear standards about these distinctions, but they also enforce a clear delineation between their sales departments and their news departments so that their sales efforts cannot influence (bias) their news reporting.

It is similar in the TV news business. Even as recently as last year, news organizations have been chastened by public outrage when Bush Administration video propaganda—created to look like real news broadcasts—was aired by several of them as if it were an actual independent news clip. See the following article for details (access costs money): Barstow, D., & Stein, R. (2005, March 13). The Message Machine: How the Government Makes News; Under Bush, a New Age of Prepackaged News. The New York Times. (Anne E. Kornblut contributed reporting to the article.)

News organizations don’t follow these ethical practices only because they believe in their sanctity. They follow these practices because the general public becomes outraged when news is seen as biased by financial interests or the influence of powerful elites.

Unfortunately, on the web—where the general public is being sorted into smaller and smaller subgroups—it is easier for organizations to deceive these micro-publics into believing that commercial messages come from independent unbiased sources.

With this in mind, I offer the following minimum standards of ethics for internet sellers and advertisers:

  1. Almost everyone who views your advertising should understand immediately that it is an advertisement.
  2. It is more ethical to offer independent content than sponsored content.
  3. It is more ethical to offer both independent and sponsored content than it is to offer sponsored content alone.
  4. It is more ethical to offer sponsored content alongside independent content when the revenue from the sponsored content is what makes the production of the independent content possible.
  5. Readers/viewers must be able to immediately recognize which information is sponsored and which information is independent.
  6. Almost everyone who views sponsored content (web pages, white papers, articles, awards, best-of lists, etc.) should understand immediately that the information is being presented to sell or persuade the readers or viewers.
  7. The development of independent content should not be influenced by sponsoring entities.
  8. Authors must clearly acknowledge any relationships that may bias their work.

What else am I missing?

What do I have wrong?

What evidence of this do you see in the Workplace Learning-and-Performance Field?

What else, specifically, do we have to worry about? Any of the following (I’m just brainstorming)?

  • product placements in e-learning?
  • trade organization behaviors?
  • helping our learners distinguish truth from sales pitches?
  • conferences that offer sponsorships?
  • webinars that are sales pitches masquerading as information?
  • gurus that have vendor money in their pockets?
  • online communities based on sponsorship money?
  • white papers?
  • award programs?
  • vendor research?

New research in the New England Journal of Medicine shows that doctors who spend more time doing a colonoscopy perform better (find more polyps) than doctors who do the procedures more quickly.

As the New York Times reported, the study found "an astonishing gap in proficiency — clearly related to the time spent looking. The doctors who spent six minutes or more searching for polyps — the minimum time recommended in the professional literature — detected growths at nearly four times the rate of those who did the procedure more quickly. The doctor who took the longest time found polyps 10 times more often than the doctor who spent the least time."

The Times further noted that the group of doctors studied was subsequently able to improve their performance, just by using a clock. The group "decided to spend at least eight minutes on withdrawing the instrument and looking for polyps — clocked by a timer with a bell — and has increased its polyp detection rate 50 percent." See original story.

Learnings for the learning-and-performance field:

  1. Research can uncover insights that would otherwise be overlooked.
  2. Simple measures can drastically improve performance.
  3. Evaluating our performance is a moral imperative, because it can have real-world effects.

I’m a strong advocate for assessment. If we don’t assess our learning interventions, (a) we as instructional designers don’t learn ourselves, (b) we don’t have valid data to give ourselves feedback, and (c) we can’t possibly improve our learning designs.

If we’re going to validly assess our learning interventions, we have to understand human learning and beware of biasing our results. Some of the things we have to watch out for:

  1. Testing our learners only when information is top-of-mind.
  2. Testing our learners in the learning context.
  3. Testing our learners unfairly with biasing pretests.
  4. Testing our learners with stupid, irrelevant questions.
  5. Using Level 1 smile-sheet data exclusively.
  6. Measuring with post-hoc metrics.

To help folks in the field avoid some of these pitfalls, I’ve developed the Fair-Assessment Quick-Audit courtesy of LearningAudit.com and Work-Learning Research.

Download The_Fair-Assessment_Quick-Audit.pdf

Some of these criteria are critical, especially because we in the field tend to do the exact opposite of what is fair and valid.

See this recent post that describes the mistakes we’re making in assessment.

In a webinar this month (December 2006), I asked a group of about 100 e-learning professionals what the highest level of assessment was (based on Kirkpatrick’s Four Levels) that they did on their most recent learning-intervention development project.

  • 11% said they did NO evaluation
  • 26% said they did Level 1 smile sheets
  • 48% said they measured Level 2 learning
  • 15% said they measured Level 3 on-the-job performance
  • 0% said they measured Level 4 business results (or ROI).

Unfortunately, smile sheets are very poor predictors of meaningful learning outcomes, correlating with learning and performance at less than r = .2. See Alliger, Tannenbaum, Bennett, Traver, & Shotland (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.
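To put that correlation in perspective: squaring r gives the proportion of variance in outcomes that a predictor accounts for. This is standard statistics, not a number reported in the meta-analysis itself:

$$ r < .2 \;\Rightarrow\; r^2 < .04 $$

In other words, smile-sheet ratings account for less than 4 percent of the variance in learning-and-performance outcomes. Knowing how much learners liked the training tells us almost nothing about how much they learned.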

Stunning: Even after all the hot air expelled, ink spilled, and electrons excited in the last 10 years regarding how we ought to be measuring business results, nobody is doing it!

——————————

When I asked them about their most recent assessment, in terms of WHEN they did it (immediately at the end of learning, or after a delay), here’s what they said:

  • 77% said they did the assessment, "At the end of training."
  • 7% said they did the assessment, "After a delay."
  • 14% said they did the assessment, "At end—and after a delay."
  • 2% said, "Never done / Can’t remember."

Unfortunately, the 77% are biasing the results in a positive direction. They are measuring learning when it is top-of-mind, easily accessible from long-term memory. They are measuring the learning intervention’s ability to create short-term learning effects. They are not measuring its ability to support long-term remembering, or its ability to specifically minimize forgetting.

In the graphic depiction below, the top of the left axis (the y axis) represents more remembering; the bottom represents less remembering. Consider what happens if we assess learning at the end (or the top) of the first (leftmost) "Learning" curve. If the learners go on to utilize what they’ve learned on the job, such an assessment has a negative bias. However, what typically happens over time is more like the forgetting curve (depicted at the lower right). Unless learners regularly use what they’ve learned, any assessment at the end of the first learning curve is likely to be a poor predictor of future remembering, and will show a definite positive bias.

[Figure: learning and forgetting curves]
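To make the positive bias concrete, here’s a back-of-the-envelope illustration. It assumes an Ebbinghaus-style exponential forgetting curve and an invented retention-strength parameter, so treat it as a sketch of the shape of the problem, not as real data:

$$ R(t) = e^{-t/S}, \qquad S = 10 \text{ days (assumed)} $$
$$ R(0) = 100\% \qquad \text{vs.} \qquad R(30 \text{ days}) = e^{-3} \approx 5\% $$

An end-of-training assessment measures something close to R(0). What a learner can still retrieve a month later, with no on-the-job use, is closer to R(30). Reporting the first number as if it stood for the second is exactly the positive bias described above.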

——————————

When I asked them about their most recent assessment, in terms of WHERE the learners were when they completed it, here’s what they said:

  • 70% said they did the assessment, "In the training room/context."
  • 26% said they did the assessment, "In a different room/context."
  • 5% said, "Never done / Can’t remember."

Unfortunately, the 70% are biasing the results of their assessments in a positive direction. When learners are in the same context during retrieval as during learning, they tend to recall more because the background context stimulates improved retrieval. So, providing our training assessments in the training room (or using the same background stimuli in an e-learning course) is not a fair way for us to get feedback on our performance as instructional designers.

For one example of this research paradigm, see Smith, S. M., Glenberg, A., & Bjork, R. A. (1978). Environmental context and human memory. Memory & Cognition, 6, 342-353.

[Figure: testing in context versus out of context]

The Bottom Line

First, I’m not blaming these particular folks. This is a common reality. I have regularly failed—and continue to fail often—in validly assessing my own instructional-development efforts. I’m much better when people pay me to evaluate their learning interventions (see LearningAudit.com).

Almost all of us—as far as I can tell—are just not getting valid feedback on our instructional-development efforts. Note that though 48% said they did Level 2 learning evaluation on their most recent project, probably most of those folks delivered the assessment in a way that biased those results. This leaves very few of us who are getting valid feedback on our designs.

We’re in a dark fog about how we’re doing, and so we have massively impoverished information to use to make improvements.

Basically, we live in a shameful, self-imposed fog.

Bring on the fog lights!!

Dan Savage, noted syndicated sex-advice columnist (warning: graphic discussions), said recently on the radio show Infinite Mind that through the years he’s seen a change in the questions people are asking him. No longer are they asking him simple questions about what a particular sex act is, how to do it, etc. He says the internet has changed what people want to know about. They can quickly google the basic stuff online. Now they want to know more about the deeper stuff, the relationship stuff, etc.

Hmmm.

Is there a parallel for employee-training situations?

Do we need to embrace deeper, more emotional, and/or more penetrating learning methods?

  • Coaching?
  • Mentoring?
  • In-depth Articles?
  • Questions and Answers from Experts?
  • Group Discussions?
  • Collaboration?
  • Management Involvement?
  • Online Communities?
  • Interpersonal Network Analysis?

Does this relate to some content and not others?

  • IT Training
  • Soft Skills
  • Onboarding
  • Business Acumen
  • Ethics and Safety

Or, should we make sure we do a new round of needs assessment to explore how our learners are getting their information, what information they’re getting already, and what information they still need?

Or, should we just remember to ask ourselves, "What can we do to make training/learning sexy?"

I was in a video conference today with the Chicagoland Learning Leaders, and out of my mouth came the following sentence (I’m paraphrasing):

"The m-Learning tsunami is coming."

I know where this phrasing came from. It came from a research report I wrote on e-learning a number of years ago. Here’s what I wrote back in 2002:

E-learning is churning the learning-and-performance field like a great tsunami hovering high above, a dark blue wave threatening everything that has come before. As those on the shore—trainers, instructional designers, performance consultants—run furiously in all directions, everybody wants to know:

1. Can this e-learning stuff work?
2. For what uses is e-learning most appropriate?
3. Will e-learning replace traditional learning approaches?
4. How can e-learning be designed to be effective?

Replace the "e" with an "m" and the same questions still apply.

The point I made today in the video conference was that m-learning is on its way, that it’s going to become its own force of nature, and that we’re going to feel a lot of pressure to ride the wave.

Just as with all learning technologies, we need to remember the fundamentals. Especially relevant:

  1. First determine the learning outcomes you hope for (ignoring the m-learning technology).
  2. Then determine the best way to reach those goals (considering m-learning as one of a hundred options).
  3. When implementing a new learning technology, rapidly develop some prototypes.
  4. Remember to design so that the learning intervention aligns with the human learning system. Utilize research-based instructional design.
  5. Test your rapid prototypes in a meaningful way (measure learners’ ability to make decisions, measure changes in job performance, measure business results).
  6. Keep what works, rolling it out in a bigger way. Keep testing to see if the roll-out works as well.
  7. Lather, rinse, repeat.

Yes, it’s obvious. The m-Learning Tsunami is coming.

  • The installed base of MP3 players is growing exponentially
  • Cell phones are becoming usable for learning
  • PDAs and small Tablet PCs are being deployed
  • m-Learning is one of the hottest conference sessions right now
  • Vendors are investing in R&D, and more vendors are focusing on m-Learning.

And yes, we’ll all be swept up in it. Many of us will drown. Some of us will have the ride of our lives.

Most of us will do best to remember the fundamentals.

m-Learning is a bunch of different tools. We have to learn when and how to use them—and when not to use them.

We need to pay attention to WHAT’S NEW, but remember WHAT’S TRUE.

I hereby offer the following challenge—The Storefront Learning Challenge.

Here’s my first entry:

Title: Lifetime Transfer

[Photo: storefront sign, "Lifetime Transfer"]

Explanation: One of our goals in the learning-and-performance field is to enable transfer, to help our learners minimize forgetting so that they can remember what they’ve learned over a long, long time.

Location: Union Square, Somerville, Massachusetts, US.

This contest has no rules except this one:

Try to capture, in a storefront photograph, some insight, truth, or humorous notion regarding our field—the training, development, learning, performance, e-learning field(s). Give it a title, caption, and/or explanation if you like. Tell us the earth location of the storefront. In the comments section below, tell us about your photograph and give us a URL to access it. You might try a free photo service to post it if you don’t have a blog or website of your own.

Feel free to use video as well.