This blurb is reprised from an earlier Work-Learning Research Newsletter, circa 2002. These "classic" pieces are offered again to make them available permanently on the web. Also, they’re just good fun. I’ve added an epilogue to the piece below.

"What prevents people in the learning-and-performance field from utilizing proven instructional-design knowledge?"

Recently, I’ve spoken with several very experienced learning-and-performance consultants who have each—in their own way—asked the question above. In our discussions, we’ve considered several options, which I’ve flippantly labeled as follows:

  1. They don’t know it. (They don’t know what works to improve instruction.)
  2. They know it, but the market doesn’t care.
  3. They know it, but they’d rather play.
  4. They know it, but don’t have the resources to do it.
  5. They know it, but don’t think it’s important.

Argument 1.

They don’t know it. (They don’t know what works to improve instruction.)
Let me make this concrete. Do people in our field know that meaningful repetitions are probably our most powerful learning mechanism? Do they know that delayed feedback is usually better than immediate feedback? That spacing learning over time facilitates retention? That it’s important to increase learning and decrease forgetting? That interactivity can be either good or bad, depending on what we’re asking learners to retrieve from memory? One of my discussants suggested that "everyone knows this stuff and has known it since Gagne talked about it in the 1970s."

Argument 2.

They know it, but the market doesn’t care.
The argument: Instructional designers, trainers, performance consultants, and others know this stuff, but because the marketplace doesn’t demand it, they don’t implement what they know will really work. This argument has two variants: the learners don’t want it, or the clients don’t want it.

Argument 3.

They know it, but they’d rather play.
The argument: Designers and developers know this stuff, but they’re so focused on utilizing the latest technology or creating the snazziest interface that they forget to implement what they know.

Argument 4.

They know it, but don’t have the resources to do it.
The argument: Everybody knows this stuff, but they don’t have the resources to implement it correctly. Either their clients won’t pay for it or their organizations don’t provide enough resources to do it right.

Argument 5.

They know it, but don’t think it’s important.
The argument: Everybody knows this stuff, but instructional-design knowledge isn’t that important. Organizational, management, and cultural variables are much more important. We can instruct people all we want, but if managers don’t reward the learned behaviors, the instruction doesn’t matter.

My Thoughts In Brief

First, some data. On the Work-Learning Research website we provide a 15-item quiz that presents people with authentic instructional-design decisions. People in the field should be able to answer these questions with at least some level of proficiency. We might expect them to get at least 60 or 70% correct. Although web-based data-gathering is loaded with pitfalls (we don’t really know who is answering the questions, for example), here’s what we’ve found so far: on average, correct responses are running at about 30%. Random guessing would produce 20 to 25% correct. Yes, you read that correctly—people are doing only a little better than chance. The verdict: people don’t seem to know what works and what doesn’t in the way of instructional design.

Some additional data. Our research on learning and performance has revealed that learning outcomes can be improved by up to 220% when appropriate instructional-design methods are utilized. Many of the programs out there do not utilize these methods.

Should we now ignore the other arguments presented above? No, there is truth in them. Our learners and clients don’t always know what will work best for them. Developers will always push the envelope and gravitate to new and provocative technologies. Our organizations and our clients will always try to keep costs down. Instruction will never be the only answer. It will never work without organizational supports.

What should we do?

We need to continue our own development and bolster our knowledge of instructional design. We need to gently educate our learners, clients, and organizations about the benefits of good instructional design and good organizational practices. We need to remind technology’s early adopters of our learning-and-performance goals. We need to understand instructional-design tradeoffs so that we can make them intelligently. We need to consider organizational realities in determining whether instruction is the most appropriate intervention. We need to develop instruction that will work where it is implemented. We need to build our profession so that we can have a greater impact. We need to keep an open mind and continue to learn from our learners, colleagues, and clients, and from the research on learning and performance.

Will’s New Thoughts (November 2005)

I started Work-Learning Research in 1998 because I saw a need in the field to bridge the gap between research and practice. In these past seven years, I’ve made an effort to compile research and disseminate it, and though partly successful, I often lament my limited reach. Like most entrepreneurs, I have learned things the hard way. That’s part of the fun, the angst, and the learning. 

In the past few years, the training and development field has gotten hungrier and hungrier for research. I’ve seen this in conferences where I speak: the research-based presentations are drawing the biggest crowds. I’ve seen this in the increasing number of vendors who are highlighting their research bona fides, whether they do good research or not. I’ve seen this recently in Elliott Masie’s call for the field to do more research.

This hunger for research has little to do with my meager efforts at Work-Learning Research. Sometimes, in my daydreams, I like to think I have influenced at least some in the field—maybe even some opinion leaders. But as a data-first empiricist, I have to admit the evidence is clear: my efforts often fly under the radar. Ultimately, this is unimportant. What is important is what gets done.

I’m optimistic. Our renewed taste for research-based practice provides an opportunity for all of us to keep learning and to keep sharing with one another. I’ve got some definite ideas about how to do this. I know many of you who read this do too. We may not—as a whole industry—use enough research-based practices, but there are certainly some individuals and organizations out there leading the way. They are the heroes, for it is they who are taking risks, asking for the organizational support most of us don’t ask for, making a difference one mistake at a time.

One thing we need to spur this effort is a better feedback loop. If we don’t go beyond the smile sheet, we’re never going to improve our practices. We need feedback on whether our learning programs are really improving learning and long-term retrieval. Don’t think that just because you’re out there on the bleeding edge, you’re championing the revolution. You need to ensure that your efforts are really making things better—that your devotion is really improving learning and long-term retention. If you’re not measuring it, you don’t really know.

Let me end by saying that research from refereed journals and research-based white papers should not be the only arbiters of what is good. Research is useful as a guide—especially when our feedback loops are so enfeebled and organizational funds for on-the-job learning measurement are so impoverished.

It would be better, of course, if we could all test our instructional designs in their real-world contexts. Let us move toward this, drawing from all sources of wisdom, dipping our ladles into the rich research base and the experiences of those who measure their learning efforts.