Okay, here’s another example of the same incorrect information that plagues our field. This is from a company named Percepsys:


Hopefully, sometime soon, the webpage on their site won’t work because the vendor will smarten up and remove this misinformation. NOTE from 2017: The company appears to be no longer in business.

It’s NOT TRUE that people remember 10% of what they read, 20% of what they hear, etc. Moreover, if you know anything about learning, you’d know it would be impossible to pin down the amount of remembering. It depends on the materials, the learners, the duration of learning, the type of learning activities, the consistency between the learning situation and the retrieval situation, and the length of retention, among other things. Finally, while these figures (the 10%, 20%, 30% numbers) are often attached to Dale’s Cone, Dale never actually had any numbers on his cone.

The best review of the history of this misinformation, if I must say so myself, is here.

Thanks to George Siemens’s blog, I learned of a wonderful phrase, "Strong Opinions, Weakly Held," as blogged by Bob Sutton.

I’d actually like to modify it a bit to: "Strong Ideas, Weakly Held." This adds the connotation that the thoughts have been well researched (not just back-of-the-envelope opinions).

In a sense, it’s the researcher’s mindset: to work tirelessly to gather relevant data, make sense of it, state a conclusion, and then be willing to test that conclusion against new data.

The linked NYTimes article shows how a simple 10-minute video used in a 40-minute class is helping to prevent kids from engaging in unsafe sex in Africa.

Some interesting insights I draw from the article:

  1. Relatively inexpensive learning interventions can work.
  2. Sometimes new technologies are useful because they grab attention.
  3. Videos can facilitate learning. We don’t necessarily have to create some fancy e-learning program.
  4. Data matters. Even kids pay attention.
  5. Researchers can make a difference. As the reporter (Celia W. Dugger) wrote,

    "A year after the researchers intervened, girls who had been given information about the greater risk of sex with older men were 65 percent less likely to have gotten pregnant by an adult partner."

I will give $1,000 (US dollars) to the first person or group who can prove that taking learning styles into account in designing instruction can produce meaningful learning benefits.

I’ve been suspicious about the learning-styles bandwagon for many years. The learning-style argument has gone something like this: If instructional designers know the learning style of their learners, they can develop material specifically to help those learners, and such extra efforts are worth the trouble.

I have my doubts, but am open to being proven wrong.

Here are the criteria for my Learning-Styles Instructional-Design Challenge:

  1. The learning program must diagnose learners’ learning styles. It must then provide different learning materials/experiences to those who have different styles.
  2. The learning program must be compared against a similar program that does not differentiate the material based on learning styles.
  3. The programs must be of similar quality and provide similar information. The only thing that should vary is the learning-styles manipulation.
  4. The comparison between the two versions (the learning-style version and the non-learning-style version) must be fair, valid, and reliable. At least 70 learners must be randomly assigned to the two groups, with at least 35 in each group completing the experience. The two programs must have approximately the same running time. For example, the time required by the learning-style program to diagnose learning styles can be used by the non-learning-styles program to deliver learning. The median learning time for the programs must be no shorter than 25 minutes.
  5. Learners must be adults involved in a formal workplace training program delivered through a computer program (e-learning or CBT) without a live instructor. This requirement is to ensure the reproducibility of the effects, as instructor-led training cannot be precisely reproduced.
  6. The learning-style program must be created in an instructional-development shop that is dedicated to creating learning programs for real-world use. Programs developed only for research purposes are excluded. My claim is that real-world instructional design is unlikely to be able to utilize learning styles to create learning gains.
  7. The results must be assessed in a manner that is relatively authentic: at a minimum, learners should be asked to make scenario-based decisions or perform activities that simulate the real-world performance the program teaches them to accomplish. Assessments that only ask for information at the knowledge level (e.g., definitions, terminology, labels) are NOT acceptable. The final assessment must be delayed at least one week after the end of the training. The same final assessment must be used for both groups, and it must fairly assess the whole learning experience.
  8. The magnitude of the difference in results between the learning-style program and the non-learning-style program must be at least 10%. (In other words, the average of the learning-styles scores minus the average of the non-learning-styles scores must be at least 10% of the non-learning-styles average.) So, for example, if the non-learning-styles average is 50, then the learning-styles average must be 55 or more. This magnitude requirement ensures that the learning-styles program produces meaningful benefits. 10% is not too much to ask.
  9. The results must be statistically significant at the p<.05 level. Appropriate statistical procedures must be used to gauge the reliability of the results. Cohen’s d effect size should be equal to .4 or more (a small to medium effect size according to Cohen, 1992).
  10. The learning-style program cannot cost more than twice as much as the non-learning-style program to develop, nor can it take more than twice as long to develop. I want to be generous here.
  11. The results must be documented by unbiased parties.
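Criteria 8 and 9 combine a relative-improvement threshold with a Cohen’s d effect size. For the statistically inclined, here is a minimal sketch in Python of how those two checks could be computed from summary statistics; the function names and example numbers (standard deviations of 12, a p-value of .03) are my own illustrative assumptions, not part of the challenge.

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt(
        ((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2)
    )
    return (mean_a - mean_b) / pooled_sd

def meets_challenge_criteria(ls_mean, ctl_mean, ls_sd, ctl_sd,
                             n_ls, n_ctl, p_value):
    """Check criteria 8 and 9: at least a 10% relative gain over the
    non-learning-styles (control) average, Cohen's d >= .4, and p < .05."""
    relative_gain = (ls_mean - ctl_mean) / ctl_mean
    d = cohens_d(ls_mean, ctl_mean, ls_sd, ctl_sd, n_ls, n_ctl)
    return relative_gain >= 0.10 and d >= 0.4 and p_value < 0.05

# The example from criterion 8: control average 50, learning-styles average 55,
# with 35 completers per group (criterion 4) and assumed SDs of 12.
print(meets_challenge_criteria(55, 50, 12, 12, 35, 35, p_value=0.03))
```

Note that with these assumed standard deviations, a 50-to-55 difference yields d = 5/12 ≈ .42, just clearing the .4 bar; noisier data (larger SDs) would pass the 10% test yet fail the effect-size test, which is exactly why criterion 9 is there.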

To reiterate, the challenge is this:

Can an e-learning program that utilizes learning-style information outperform an e-learning program that doesn’t utilize such information by 10% or more on a realistic test of learning, even if it is allowed to cost up to twice as much to build?

$1,000 says it just doesn’t happen in the real world of instructional design. $1,000 says we ought to stop wasting millions trying to cater to this phantom curse.