Below is another example of the misuse of the now-infamous bogus percentages by a speaker at a prominent international conference in the workplace learning field, this time in an online session in January 2009.

I have been documenting this problem since 2002. The following posts illustrate it.

A manager at Qube Learning joins the list of folks who have been fooled, and who foolishly and irresponsibly re-gift this faulty information. Point: If you can't verify the credibility of the so-called "research" you come across, don't share it.

[Image: Cone_January2009]

And this follow-up slide:

[Image: Cone_January2009b]

It's a shame we have to keep revisiting this bogus information. I truly wish I didn't have to do this.

Of course, even if you and I wipe this bogus-information example off the face of the earth, there will be more misinformation we’ll have to deal with. It’s okay; it’s the nature of living, I think. The learning point here is that all of us in the learning-and-performance field must be vigilant. We must be skeptical of claims. We must build structures where we can test these bogus claims in the crucible of an evidence-based marketplace. Only then will we be able to build a fully worthy profession.

Keep sending me your examples. Thanks to the helpful soul who sent me this example.

Interestingly, just today a major player in our field asked for my permission to publish the original blog post (the one debunking the bogus-percentage myth) in their company newsletter (which goes out to over 100,000 people). They too had been using this misinformation in their work and now wanted to correct their mistake. I salute their action.

Judith Gustafson just left an excellent comment on an earlier blog post. She let us know about a presentation at the Association for Educational Communications and Technology (AECT) conference in 2002.

Click here for the PPT presentation by Tony Betrus and Al Januszewski of the State University of New York at Potsdam that does a great job of describing what Edgar Dale meant to convey with his cone, AND shows numerous examples of how the cone has been used improperly with the numbers added.

Here is my original post on this.

In what has become an eternal vigil against the myth that "People remember 10% of what they…", I just hit the jackpot with the help of Jay Banks, who sent me an email.

The Wikipedia entry for Edgar Dale had two incorrect references to the bogus numbers that I talk about so often (see my blog category Myths and Worse). I fixed it today, hopefully for good.

Here’s what it looked like:

[Image: Cone_of_learning_export_from_wikipe]

And here’s what it looked like in Wikipedia:

[Image: Cone_of_learning_export_in_wikipedi]

For those who are shocked that information on the internet might be wrong—or that Wikipedia might be wrong—see my previous entries about Wikipedia (1st Most-Recent).

It has been exactly one year since I offered $1,000 to anyone who could demonstrate that utilizing learning styles improved learning outcomes. Click here for the original challenge.

So far, no one has even come close.

Given all the talk about learning styles over the last 15 years, you might expect that I was at risk of quickly losing my money.

Let me be clear: my argument is not that people don’t have different learning styles, learning preferences, or learning skills. My argument is that, in real-world instructional-development situations, designing for learning styles is an ineffective and inefficient use of resources that is unlikely to produce meaningful results.

Let me leave you with the original challenge:

“Can an e-learning program that utilizes learning-style information outperform an e-learning program that doesn’t utilize such information by 10% or more on a realistic test of learning, even if it is allowed to cost up to twice as much to build?”

The challenge is still on.

I just came across another sighting of the mythological numbers of memory retention, this time on the webpage of HRDQ.

Take a look:

[Image: Hrdq_6192007_2]

They claim that, "Research shows people remember only 5% of what they hear in a lecture. But they retain 75% when they learn by doing." Bulloney!!

If you want to read my full debunking, click here.

If you want to see many bogus sightings, click here and scroll down.

And here’s another example of a well-respected industry analyst lazily sharing the biggest myth in the learning field. This time it’s from a Senior Industry Analyst with Forrester Research (October 19th, 2006). See the recorded webinar.

[Image: Forrester_schooley_10per_20per]

Read my initial post describing how this myth got started, and how it harms our field and our learners.

The source of the offending PowerPoint slide claims the data as their own ("Source: Forrester Research"). Yeah, I guess if you find false information on the web, then change it around a little bit to help you make your point, you ought to cite yourself. Is it plagiarism if you steal a lie?

Makes you wonder what other information Forrester has "researched."

To make it easier for the Forrester marketing and public relations folks to respond to this outing, I’ve developed a new logo for them. Instead of the name "Forrester" superimposed on the sea-green ellipse, how about the following?

[Image: Forrester_fiction_2]

This constant myth-sharing should stop.

Do you think it would help if I started naming names? What about photographs? Email addresses?

Maybe sarcasm will work.

It’s time to publicly vilify NTL Institute for Applied Behavioral Science for propagating the myth that learners remember 10% of what they read, 20% of what they see visually, etc. They continue to claim that they did this research and that it is accurate.

The research is NOT accurate, nor could it be. Even a casual observer can see that research results that all end neatly in 5s or 0s (as in 5%, 10%, 20%, 30%) are extremely unlikely; if the last digit of each genuinely measured percentage were effectively random, the odds that a whole set of results would land only on multiples of five would be vanishingly small. To see a complete debunking of this hoax, click here.

Normally, I choose not to name names when it comes to the myths in our field. We all make mistakes, right? But NTL continues to harm our field by propagating this myth. Here is the document (Download NTL’s email), the one they send to people who inquire about the percentages. At least five separate people have sent me this document after contacting NTL on their own initiative.

I have talked to NTL staff people and emailed them (over a year ago), and even with my charming personality, I have failed to persuade them of the problems they are causing.

The people who write me about this are outraged (and frankly confused) that an organization would propagate such an obvious falsehood. Are you?

Here are claims that NTL makes in its letter that are false:

NTL: We know that in 1954 a similar pyramid with slightly different numbers appeared on p. 43 of a book called Audio-Visual Methods in Teaching by Edgar Dale, published by Dryden Press in New York.

Why false? There are NO numbers on page 43 of Edgar Dale’s book.

NTL: We are happy to respond to your inquiry about The Learning Pyramid. Yes, it was developed and used by NTL Institute at our Bethel, Maine campus in the early sixties when we were still part of the National Education Association’s Adult Education Division.

Very Intriguing: How could NTL have developed the pyramid in the 1960s, when a similar version was published by Edgar Dale in 1954? Professor Michael Molenda of Indiana University has found some evidence that the numbers first appeared in the 1940s. Maybe NTL has a time machine.

NTL: Yet the Learning Pyramid as such seems to have been modified and always has been attributed to NTL Institute.

No. It wasn’t attributed to NTL by Dale. Dale thought it was his. And again, Dale did not use any numbers. Just a cone.

Okay, so now half of you hate NTL, and the other half of you hate me for being the “know-it-all kid” from 7th grade. Well, I’ll take the heat for that. But still, is this the kind of field you want to work in?

And what is the advantage for NTL to continue the big lie?

Here’s what NTL should write when people inquire:

Thanks for your inquiry to the NTL Institute. Yes, we once utilized the “Learning Pyramid” concept in our work, starting in the 1960s. However, we can no longer locate the source of the original information, and recent research tends to debunk those earlier recommendations. We apologize for any harm or confusion we may have caused.

Okay, here’s another example of the same incorrect information that plagues our field. This is from a company named Percepsys:

[Image: screenshot of the Percepsys webpage]

 

Hopefully, sometime soon, the webpage on their site won’t work because the vendor will smarten up and remove this misinformation. NOTE from 2017: The company appears to be no longer in business.

It’s NOT TRUE that people remember 10% of what they read, 20% of what they hear, etc. Moreover, if you know anything about learning, you’d know it would be impossible to pin down the amount of remembering. It depends on the materials, the learners, the duration of learning, the type of learning activities, the consistency between the learning situation and the retrieval situation, and the length of the retention interval, among other things. Finally, while this information (the 10%, 20%, 30% information) is often attached to Dale’s Cone, Dale never actually had any numbers on his cone.

The best review of the history of this misdirection, if I do say so myself, is here.

I will give $1000 (US dollars) to the first person or group who can prove that taking learning styles into account in designing instruction can produce meaningful learning benefits.

I’ve been suspicious about the learning-styles bandwagon for many years. The learning-style argument has gone something like this: If instructional designers know the learning style of their learners, they can develop material specifically to help those learners, and such extra efforts are worth the trouble.

I have my doubts, but am open to being proven wrong.

Here are the criteria for my Learning-Styles Instructional-Design Challenge:

  1. The learning program must diagnose learners’ learning styles. It must then provide different learning materials/experiences to those who have different styles.
  2. The learning program must be compared against a similar program that does not differentiate the material based on learning styles.
  3. The programs must be of similar quality and provide similar information. The only thing that should vary is the learning-styles manipulation.
  4. The comparison between the two versions (the learning-style version and the non-learning-style version) must be fair, valid, and reliable. At least 70 learners must be randomly assigned to the two groups (with a minimum of 35 in each group completing the experience). The two programs must have approximately the same running time. For example, the time required by the learning-style program to diagnose learning styles can be used by the non-learning-styles program to deliver learning. The median learning time for the programs must be no shorter than 25 minutes.
  5. Learners must be adults involved in a formal workplace training program delivered through a computer program (e-learning or CBT) without a live instructor. This requirement is to ensure the reproducibility of the effects, as instructor-led training cannot be precisely reproduced.
  6. The learning-style program must be created in an instructional-development shop that is dedicated to creating learning programs for real-world use. Programs developed only for research purposes are excluded. My claim is that real-world instructional design is unlikely to be able to utilize learning styles to create learning gains.
  7. The results must be assessed in a manner that is relatively authentic: at a minimum, learners should be asked to make scenario-based decisions or perform activities that simulate the real-world performance the program teaches them to accomplish. Assessments that only ask for information at the knowledge level (e.g., definitions, terminology, labels) are NOT acceptable. The final assessment must be delayed at least one week after the end of the training. The same final assessment must be used for both groups. It must fairly assess the whole learning experience.
  8. The magnitude of the difference in results between the learning-style program and the non-learning-style program must be at least 10%. (In other words, the average of the learning-styles scores minus the average of the non-learning-styles scores must be at least 10% of the non-learning-styles average.) For example, if the non-learning-styles average is 50, then the learning-styles average must be 55 or more. This magnitude requirement ensures that the learning-styles program produces meaningful benefits. 10% is not too much to ask.
  9. The results must be statistically significant at the p < .05 level. Appropriate statistical procedures must be used to gauge the reliability of the results. Cohen’s d effect size should be .4 or more (a small-to-medium effect size according to Cohen, 1992). A sketch of how these numeric criteria combine appears after this list.
  10. The learning-style program cannot cost more than twice as much as the non-learning-style program to develop, nor can it take more than twice as long to develop. I want to be generous here.
  11. The results must be documented by unbiased parties.
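Since items 4, 8, and 9 are purely numeric, here is a minimal sketch, in Python, of how a submission’s results could be checked against them. The function name and score arrays are hypothetical illustrations of mine, not an official scoring script for the challenge.

```python
# A minimal sketch of the numeric hurdles in items 4, 8, and 9 above.
# The function and its inputs are hypothetical; a real submission would
# also need to satisfy the design criteria in the list.
import numpy as np
from scipy import stats

def meets_numeric_criteria(styles_scores, control_scores):
    """Return True if the learning-styles group clears items 4, 8, and 9."""
    styles = np.asarray(styles_scores, dtype=float)
    control = np.asarray(control_scores, dtype=float)

    # Item 4: at least 35 completers in each group.
    if len(styles) < 35 or len(control) < 35:
        return False

    # Item 8: the mean difference must be at least 10% of the control mean.
    improvement = (styles.mean() - control.mean()) / control.mean()

    # Item 9: a two-sample t-test at p < .05 ...
    _, p_value = stats.ttest_ind(styles, control)

    # ... and Cohen's d of .4 or more (pooled-standard-deviation form).
    pooled_sd = np.sqrt(((len(styles) - 1) * styles.var(ddof=1) +
                         (len(control) - 1) * control.var(ddof=1)) /
                        (len(styles) + len(control) - 2))
    cohens_d = (styles.mean() - control.mean()) / pooled_sd

    return improvement >= 0.10 and p_value < 0.05 and cohens_d >= 0.4
```

All three hurdles must clear at once; a difference that is statistically significant but smaller than 10%, for example, would not win the money.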

To reiterate, the challenge is this:

Can an e-learning program that utilizes learning-style information outperform an e-learning program that doesn’t utilize such information by 10% or more on a realistic test of learning, even if it is allowed to cost up to twice as much to build?

$1,000 says it just doesn’t happen in the real world of instructional design. $1,000 says we ought to stop wasting millions trying to cater to this phantom curse.

The Bloom is Off the Vine

I just came across this nifty little piece on Bloom’s Taxonomy, written by Brenda Sugrue for ISPI’s Performance Express.

It’s a nice critique of the validity and usefulness of Bloom’s Taxonomy for instructional design.

Read it here.

I tend to agree with Brenda’s critique. For a long time I’ve been suspicious of Bloom’s.

 

==================

In case that link ever goes away, I’m repeating her piece here:

Problems with Bloom’s Taxonomy
by Brenda Sugrue, PhD, CPT

I did a 99-second critique of Bloom’s taxonomy at the 2002 ISPI conference, and it generated more unsolicited feedback than any other presentation I have made. The response varied from those who completely agreed with me and abandoned Bloom many years ago to those who are still true believers and avid users. In those 99 seconds, I criticized the taxonomy but did not have time to present more valid alternatives. This article summarizes the criticisms and presents two alternative strategies for classifying objectives in order to design appropriate instruction and assessment.

Invalidity
Bloom’s taxonomy is almost 50 years old. It was developed before we understood the cognitive processes involved in learning and performance. The categories or “levels” of Bloom’s taxonomy (knowledge, comprehension, application, analysis, synthesis, and evaluation) are not supported by any research on learning. The only distinction that is supported by research is the distinction between declarative/conceptual knowledge (which enables recall, comprehension, or understanding) and procedural knowledge (which enables application or task performance).

Unreliability
The consistent application of Bloom’s taxonomy across multiple designers/developers is impossible. Given any learning objective, it might be classified into either of the two lowest levels (knowledge or comprehension) or into any of the four highest levels (application, analysis, synthesis, or evaluation) by different designers. Equally, there is no consistency in what constitutes instruction or assessment that targets separate levels. A more reliable approach is to separate objectives and practice/assessment items that elicit or measure declarative/conceptual knowledge from those that elicit or measure task performance/procedural knowledge.

Impracticality
The distinctions in Bloom’s taxonomy make no practical difference in diagnosing and treating learning and performance gaps. Everything above the “knowledge” level is usually treated as “higher-order thinking” anyway, effectively reducing the taxonomy to two levels.

The Content-by-Performance Alternative
Recent taxonomies of objectives and learning object strategies distinguish among types of content (usually facts, concepts, principles, procedures, and processes) as well as levels of performance (usually remember and use). This content-by-performance approach leads to general prescriptions for informational content and practice/assessment such as those presented in Figure 1.

Figure 1. Prescriptions for Information and Practice Based on Content-Performance Matrix.

| Content Type | Information to Present (Regardless of Level of Performance) | Practice/Assessment: Remember | Practice/Assessment: Use |
| --- | --- | --- | --- |
| Fact | the fact | recognize or recall the fact | recognize or recall during task performance |
| Concept | the definition, critical attributes, examples, non-examples | recognize or recall the definition or attributes | identify, classify, or create examples |
| Principle/Rule | the principle/rule, examples, analogies, stories | recognize, recall, or explain the principle | decide if the principle applies, predict an event, apply the principle to solve a problem |
| Procedure | list of steps, demonstration | recognize, recall, or reorder the steps | perform the steps |
| Process | description of stages, inputs, outputs, diagram, examples, stories | recognize, recall, or reorder the stages | identify origins of problems in the process; predict events in the process; solve problems in the process |
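To show how mechanical the content-by-performance approach can be, here is a minimal sketch (my illustration, not Sugrue’s) that encodes Figure 1 as a lookup table: given a content type and a level of performance, it returns the prescribed practice/assessment.

```python
# Figure 1 encoded as a lookup table (an illustration of the
# content-by-performance idea, not code from Sugrue's article).
PRESCRIPTIONS = {
    ("fact", "remember"): "recognize or recall the fact",
    ("fact", "use"): "recognize or recall during task performance",
    ("concept", "remember"): "recognize or recall the definition or attributes",
    ("concept", "use"): "identify, classify, or create examples",
    ("principle/rule", "remember"): "recognize, recall, or explain the principle",
    ("principle/rule", "use"): ("decide if the principle applies, predict an "
                                "event, apply the principle to solve a problem"),
    ("procedure", "remember"): "recognize, recall, or reorder the steps",
    ("procedure", "use"): "perform the steps",
    ("process", "remember"): "recognize, recall, or reorder the stages",
    ("process", "use"): ("identify origins of problems in the process; predict "
                         "events in the process; solve problems in the process"),
}

def prescribe_practice(content_type: str, performance_level: str) -> str:
    """Look up the prescribed practice/assessment for one cell of Figure 1."""
    return PRESCRIPTIONS[(content_type.lower(), performance_level.lower())]

# Example: prescribe_practice("Concept", "Use")
# returns "identify, classify, or create examples"
```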

The Pure Performance Alternative
A more radical approach would be to have no taxonomy at all, to simply assume that all objectives are at the use level (that is, “performance” objectives) and that learners will practice or be assessed on the particular performance in representative task situations. If there are “enabling” sub-objectives, those too can be treated as performance objectives without further classification. If, for example, a loan officer needs to be able to distinguish among types of mortgages and describe the pros and cons of each type of mortgage as an enabling skill for matching house buyers with mortgages, then we design/provide opportunities to practice categorizing mortgages and listing their pros and cons before we practice on matching buyers to mortgages. If a car salesperson needs to be able to describe the features of different car models as an enabling skill for selling cars, then we design/provide opportunities to practice describing the features of different cars before we practice on selling cars.

References
Bereiter, C., & Scardamalia, M. (1998). Beyond Bloom’s taxonomy: Rethinking knowledge for the knowledge age. In A. Hargreaves, A. Lieberman, M. Fullan, & D. Hopkins (Eds.), International handbook of educational change. Boston: Kluwer Academic.

Merrill, M.D. (1994). Instructional design theory. Englewood Cliffs, NJ: Educational Technology Publications.

Moore, D.S. (1982). Reconsidering Bloom’s Taxonomy of educational objectives, cognitive domain. Educational Theory, 32(1), 29-34.