Dr. Gary Woodill, Senior Researcher at Brandon-Hall Research, has recently released the second report in his three-part series on emerging e-learning technologies. These reports briefly cover the emerging technologies and methods, and then, for each topic, provide an exhaustive set of references and web links.

I have to admit to being amazed at the breadth and scope of these reports. They won’t be for everyone (they’re much too packed with reference-like information), but for someone like me, who likes to see what I’ve yet to learn, they’re wonderful.

I’d recommend these reports for folks who are responsible for their company’s Learning R&D efforts, for those interested in brainstorming new e-learning design options, and for academics and researchers who want to see what the new areas are.

These reports aren’t perfect, of course. I wouldn’t expect such an exhaustive research effort to be flawless.

The reports don’t provide an organizing framework for understanding how these technologies might interface with the flesh-and-blood machinery of human learners. The danger is that readers who employ these technologies willy-nilly may not get the learning benefits they expect, or worse, they may create negative outcomes.

Some technologies were not described at all, although they may be in the third report. One e-learning technology I’m passionate about wasn’t mentioned—learning-follow-through software, which facilitates training transfer by encouraging learners to implement their after-training goals, by getting managers involved in the process, by recording actual work follow-through, and by keeping the learning conversations going after the formal learning events are finished.

I’ve written about this technology before, both in a software review and a book review.

This software is a paradigm buster, one of e-learning’s killer apps, but these reports don’t mention the technology at all. See Friday5s from the Fort Hill Company and ActionPlan Mapper from ZengerFolkman as the two exemplars of this technology.

To reiterate, the weaknesses of these reports are minor in comparison with their strengths. For those who really want to delve into what’s out there, Gary Woodill has provided a great service in these reports.

  1. Emerging E-Learning: New Approaches to Delivering Engaging Online Learning Content
  2. Emerging E-Learning Technologies: Tools for Developing Innovative Online Training

One of the reasons information like this is so critical is that too many of us associated with e-learning have not really pushed ourselves to develop models beyond the page-turner, the multiple-stupid interaction page clicker, the game show, the webinar, and the branching simulation.

We need to be more creative, testing our ideas, of course, in the crucible of the real world, but really working on alternative models. Compilations of ideas (like these reports provide) are very helpful in this regard, again with the caveat that these technologies absolutely have to be designed with the human learner in mind and tested for learning effectiveness.

The NY Times had a nice article in last Sunday’s edition—on the front page above the fold—on the concept of Web 3.0, which may have implications for our field.

To give you a sense of what Web 3.0 is, here are some quotes from the article:

  • "[The Web 3.0] goal is to add a layer of meaning on top of the existing Web that would make it less of a catalog and more of a guide — and even provide the foundation for systems that can reason in a human fashion."
  • "But in the future, more powerful systems could act as personal advisers in areas as diverse as financial planning, with an intelligent system mapping out a retirement plan for a couple, for instance, or educational consulting, with the Web helping a high school student identify the right college."
  • "[The holy-grail of Web 3.0 developers] is to build a system that can give a reasonable and complete response to a simple question like: ‘I’m looking for a warm place to vacation and I have a budget of $3,000. Oh, and I have an 11-year-old child.’"

The goal, then, is to analyze information from across the web and come up with quick, meaningful responses to the queries people ask.
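To make that concrete, here’s a toy sketch (mine, not the article’s) of the kind of structured matching such a system might do behind the scenes. Everything here is invented for illustration; real Web 3.0 systems would reason over semantic data gathered from across the web.

```python
# Toy sketch of "a layer of meaning": structured facts plus a simple matcher.
# All destinations, attributes, and query fields are invented for illustration.

destinations = [
    {"name": "Cancun", "avg_temp_f": 85, "cost_usd": 2500, "kid_friendly": True},
    {"name": "Reykjavik", "avg_temp_f": 45, "cost_usd": 2800, "kid_friendly": True},
    {"name": "Monte Carlo", "avg_temp_f": 75, "cost_usd": 6000, "kid_friendly": False},
]

def answer(query):
    """Return destination names matching a structured version of the query."""
    return [
        d["name"]
        for d in destinations
        if d["avg_temp_f"] >= query["min_temp_f"]
        and d["cost_usd"] <= query["budget_usd"]
        and (d["kid_friendly"] or not query["with_child"])
    ]

# "A warm place to vacation, a $3,000 budget, and an 11-year-old child"
print(answer({"min_temp_f": 70, "budget_usd": 3000, "with_child": True}))
# -> ['Cancun']
```

The hard part, of course, isn’t the matching; it’s getting the web’s messy information into structured, machine-reasonable form in the first place.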


Hmmm. That’s sort of a learning application in a way.

And, if we can create such systems, why couldn’t we ask a query like, "I want to become a CLO at a socially responsible company, and I’m currently an instructional designer with an undergraduate degree in humanities and an MBA, plus 5 years’ experience as a leadership trainer. What do I have to do to reach my goal and what do I have to learn?"

What would Web 3.0 mean for our field?

Elliott Masie, proving he’s not just our field’s preeminent hype-master, invited the thoughtful and wise Cal Wick to give the closing keynote at Learning2006.

Cal and his team at Fort Hill Company have developed a great tool for supporting training transfer, called Friday5s.

You can read my review of Cal and his team’s book to see my thoughts on how important the book and this software are to our field.

Training Media Review reviews are now available on either a subscription basis ($229 per year) or a daily basis ($25).

The current issue reviews authoring tools such as Lectora, Captivate, Articulate, Camtasia, and ViewLet Builder. You can check out all these reviews for only $25 by buying a one-day subscription.

I’m going to be a discussant on a panel on m-Learning on December 5th, 2006, in Chicago, at an event sponsored by the Chicagoland Learning Leaders and Walgreens.

These events are designed for learning executives near Chicago and the Midwest, but if that doesn’t exactly fit you, try to crash the party. These are intimate events, I hear: only 20 registrants are allowed, so that the discussion remains meaningful and focused.

To register for the event, click here.

To read some FAQs, click here.

At a recent Bank of America employee event, two employees sang a song to motivate employees to see the benefits in the Bank of America and MBNA merger.

You decide if this is a good learning opportunity. Click to see the video.

Warning: May cause laughter. Don’t forget to read the comments.

Actually, I’m guessing that the song worked well for people within the meeting, but damn does it fall apart when the rest of us get to peek through the windows. It suggests to me that motivational attempts like these in our e-learning programs and our business meetings must be lock-boxed to prevent wider distribution. Even better, when we design, we need to assume that our efforts might get posted on YouTube.


Lectures are widely reviled for putting learners in a passive mode. On the other hand, lectures are relatively easy to implement, even with large numbers of learners. And regardless of the pluses and minuses, lectures are ubiquitous. While there aren’t many lectures in kindergarten, by third grade teachers are talking a lot and learners are listening. The college classroom is dominated by lecture. So are corporate training sessions, conference presentations, church sermons, public meetings, elder hostels, and the local library’s evening speaker series. Lectures aren’t going away anytime soon, nor should they. Like all tools for learning, they provide certain unique advantages and have certain unique limitations.

Lectures can be modified in different ways to increase the amount of active learning: to ensure that learners are more fully engaged, have a more robust understanding of the learning material, are more likely to remember what they learned, and are more likely to use the information at a later time.

One such method for increasing active learning is the "response card." Response cards are provided to students so that each one can respond to instructor questions. Two types of response cards are available: (1) those that let each learner write his or her answer on the card (for example, with a dry-erase marker), and (2) those that let learners hold up preprinted answers (for example, True or False, or A, B, C, or D).

Research

While not a lot of good research has been done on response cards, the research seems to suggest that, compared with the traditional method of having students raise their hands in response to questions, response cards improve learners’ classroom engagement, the amount they learn, and the amount they retain after a delay (Marmolejo, Wilder, & Bradley, 2004; Gardner, Heward, & Grossi, 1994; Kellum, Carr, & Dozier, 2001; Narayan, Heward, Gardner, Courson, & Omness, 1990; Christle & Schuster, 2003). Learners generally prefer response cards to simple hand-raising. Most of the research has focused on K-12 classrooms, with some research done in community colleges. The research has tended to focus on relatively low-level information and has not tested the value of response cards for higher-order thinking skills.

Recommendations

Getting learners to actively respond in lectures is certainly a worthwhile goal. Research has been fairly conclusive that learners learn better when they are actively engaged in learning (Bransford, Brown, & Cocking, 1999). Response cards may be one tool in the arsenal of methods to generate learner engagement. Of course, electronic keypads can be used in a similar way, at a significantly increased cost, with perhaps some added benefits as well. Still, at less than $30 a classroom, response cards may be worth a try.

Personally, I’m skeptical that audiences in adult training situations would be open to response cards. While 87% of college students rated the cards highly (Marmolejo, Wilder, & Bradley, 2004), the corporate audiences I’ve worked with over the years might find them childish or unnecessary ("hey, why can’t we just raise our hands?"). On the other hand, electronic keypads are more likely to be accepted. Of course, such acceptance, whether we’re talking about response cards or electronic keypads, really depends on the relevance of the material and the questions used. If the questions are low-level rote memorization, adult audiences are likely to reject the instruction regardless of the technology employed.

Making lectures interactive has to be done with care. Adding questions and student responses can have negative consequences as well. When we ask questions, we signal to learners what to pay attention to. If we push our learners to think about low-level trivia, they will do that to the detriment of focusing on more important high-level concepts.


Limitations of the Research

The research on response cards tends to focus on low-level questions that are delivered all too frequently throughout lectures. Learners who have to answer a question every two minutes are being conditioned to focus on trivia, facts, and knowledge. Future research on response cards should focus on higher-level material in situations where more peer discussion is enabled.

Most of the research on response cards suffered from minor methodological difficulties (e.g., weaker-than-preferred comparison designs and small numbers of learners actually tracked) and ambiguity (e.g., in reading the research articles, it was often difficult to tell whether the in-class questions were repeated on the final quizzes used as dependent variables, and no inferential statistics were available to test hypotheses).

References

Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.

Christle, C. A., & Schuster, J. W. (2003). The effects of using response cards on student participation, academic achievement, and on-task behavior during whole-class math instruction. Journal of Behavioral Education, 12(3), 147-165.

Gardner, R., Heward, W. L., & Grossi, T. A. (1994). Effects of response cards on student participation and academic achievement: A systematic replication with inner-city students during whole-class science instruction. Journal of Applied Behavior Analysis, 27, 63-71.

Kellum, K. K., Carr, J. E., & Dozier, C. L. (2001). Response-card instruction and student learning in a college classroom. Teaching of Psychology, 28(2), 101-104.

Marmolejo, E. K., Wilder, D. A., & Bradley, L. (2004). A preliminary analysis of the effects of response cards on student performance and participation in an upper division university course. Journal of Applied Behavior Analysis, 37, 405-410.

Narayan, J. S., Heward, W. L., Gardner, R., Courson, F. H., & Omness, C. K. (1990). Using response cards to increase student participation in an elementary classroom. Journal of Applied Behavior Analysis, 23, 483-490.

For decades, the US Department of Agriculture (USDA) has been touting schemes to attempt, at least ostensibly, to help Americans eat more healthfully. From the four-food-groups concept to the current food pyramid, the idea has been to help us think more intelligently about what we eat. The development of these formulations has always been influenced by industry groups representing food companies and by health advocates, and it has never been clear that the resulting compromises have been effective. Note the current obesity epidemic as evidence.

But now, the Hannaford Brothers supermarket chain is taking matters into its own hands. Hannaford’s nutritionists have developed a simple coding system to let shoppers know the relative health value of foods. Three stars is most healthy. Zero stars is least healthy. And while the coding system is simple, the underlying algorithm that assigns the ratings is complex.
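Hannaford hasn’t published its algorithm, but to make the idea concrete, here’s a toy sketch of how a credits-and-debits star rule might work. The nutrients, thresholds, and weights below are all invented for illustration; they are not Hannaford’s actual criteria.

```python
# Toy illustration of a star-rating rule: credits for desirable nutrients,
# debits for nutrients to limit. All thresholds are invented for illustration.

def star_rating(nutrients_per_100g):
    """Score a food from 0 to 3 stars given a dict of nutrient values per 100 g."""
    credits = 0
    debits = 0

    # Hypothetical credits for desirable nutrients
    if nutrients_per_100g.get("fiber_g", 0) >= 3:
        credits += 1
    if nutrients_per_100g.get("vitamins_pct_dv", 0) >= 10:
        credits += 1

    # Hypothetical debits for nutrients to limit
    if nutrients_per_100g.get("saturated_fat_g", 0) > 5:
        debits += 1
    if nutrients_per_100g.get("sodium_mg", 0) > 400:
        debits += 1
    if nutrients_per_100g.get("added_sugar_g", 0) > 10:
        debits += 1

    # Clamp the net score to the 0-3 star range
    return max(0, min(3, credits - debits + 1))

print(star_rating({"fiber_g": 4, "vitamins_pct_dv": 25}))        # 3 stars
print(star_rating({"saturated_fat_g": 9, "added_sugar_g": 22}))  # 0 stars
```

The design insight is that all the complexity stays hidden inside the function; the shopper sees only a number of stars.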

Listen to this: 77% of all the items on Hannaford’s shelves rated zero stars. Even most of the Hannaford store-brand items get zero stars. Our choices are not very nutritious!

As you might imagine, some manufacturers are not happy.

Only time will tell whether shoppers will change their eating habits, or whether food manufacturers will change the formulas of their processed foods to get better star ratings. Either way, Hannaford will be able to track the data very easily. It’s hard to tell whether the food pyramid has had an effect; Hannaford’s Guiding Star system will be much easier to assess. I hope they’ve got some control-group stores to make comparisons.
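If they do, the core analysis could be quite simple: compare the change in star-rated items’ share of sales at labeled stores against the change at unlabeled control stores. A minimal sketch, with invented sales figures:

```python
# Minimal difference-in-differences sketch for assessing the labeling program.
# All sales shares below are invented for illustration.

# Share of purchases earning at least one star, before vs. after labeling
labeled_stores = {"before": 0.18, "after": 0.24}  # stores showing the star labels
control_stores = {"before": 0.19, "after": 0.20}  # comparable stores without labels

change_labeled = labeled_stores["after"] - labeled_stores["before"]
change_control = control_stores["after"] - control_stores["before"]

# The program's estimated effect, net of background trends
effect = change_labeled - change_control
print(f"Estimated effect of labeling: {effect:+.2%}")  # prints about +5.00%
```

Subtracting the control stores’ change strips out whatever shoppers would have done anyway, which is exactly why the control group matters.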

I love the simplicity of the system. It’s like a job aid on steroids. It’s simple. It’s provided exactly when needed (as shoppers make their selections), and it’s based on proven nutritional recommendations.

Those of us in the learning-and-performance field can learn a lot from this design.

  1. Ensure that learning designs impact performance situations.
  2. Simplify, to increase how much your learners/performers actually use your stuff.
  3. Base learning designs on proven content. For example, has the stuff you teach in your leadership classes really been proven to improve management performance?
  4. Utilize systems to track your success, so that you can make improvements.

To read the excellent NY Times article, click here.

It’s time to publicly vilify NTL Institute for Applied Behavioral Science for propagating the myth that learners remember 10% of what they read, 20% of what they see visually, etc. They continue to claim that they did this research and that it is accurate.

The research is NOT accurate, nor could it be. Even a casual observer can see that research results that end neatly in 5s or 0s (as in 5%, 10%, 20%, 30%) are extremely unlikely. To see a complete debunking of this hoax, click here.

Normally, I choose not to name names when it comes to the myths in our field. We all make mistakes, right? But NTL continues to harm our field by propagating this myth. Here is the document (Download NTL’s email)–the one they send to people who inquire about the percentages. At least five separate people have sent me this document after contacting NTL on their own initiative.

I have talked to NTL staff people and emailed them (over a year ago), and even with my charming personality, I have failed to persuade them of the problems they are causing.

The people who write me about this are outraged (and frankly confused) that an organization would propagate such an obvious falsehood. Are you?

Here are claims that NTL makes in its letter that are false:

NTL: We know that in 1954 a similar pyramid with slightly different numbers appeared on p. 43 of a book called Audio-Visual Methods in Teaching, published by the Edgar Dale Dryden Press in New York.

Why false? There are NO numbers on page 43 of Edgar Dale’s book.

NTL: We are happy to respond to your inquiry about The Learning Pyramid. Yes, it was developed and used by NTL Institute at our Bethel, Maine campus in the early sixties when we were still part of the National Education Association’s Adult Education Division.

Very intriguing: How could NTL have developed the pyramid in the 1960s, when a similar version was published by Edgar Dale in 1954? Professor Michael Molenda of Indiana University has found some evidence that the numbers first appeared in the 1940s. Maybe NTL has a time machine.

NTL: Yet the Learning Pyramid as such seems to have been modified and always has been attributed to NTL Institute.

No. It wasn’t attributed to NTL by Dale. Dale thought it was his. And again, Dale did not use any numbers. Just a cone.

Okay, so now half of you hate NTL, and the other half of you hate me for being the “know-it-all kid” from 7th grade. Well, I’ll take the heat for that. But still, is this the kind of field you want to work in?

And what is the advantage for NTL to continue the big lie?

Here’s what NTL should write when people inquire:

Thanks for your inquiry to the NTL Institute. Yes, we once utilized the “Learning Pyramid” concept in our work, starting in the 1960s. However, we can no longer locate the source of the original information, and recent research tends to debunk those earlier recommendations. We apologize for any harm or confusion we may have caused.

I recently attended the eLearning Guild’s DevLearn06 conference, dedicated to e-learning developers. I gave a preconference workshop on the learning research and two conference sessions, one on the spacing effect and one on assessments (especially as they are illuminated by the learning research).

Here are some quick highlights from the conference:

1. Fifty developers showed off their latest and greatest e-learning interventions in the eLearning Guild’s Demo Fest. This 2.5-hour opportunity is a great addition to the typical conference fare. I noticed two things:

    • The production values and aesthetic quality of the entries were remarkable. Beautiful work. So, if you’re developing e-learning, you’d better start hiring talented graphic designers and managers who have an aesthetic sensibility.
    • The designs were generally much better than page turners, but they still weren’t completely consistent with research-based practices. There were good interactions and lots of decision scenarios, but still there were too many missed opportunities for authentic practice and decision-making.


2. The eLearning Guild’s Research Posse, led by the ingenious and tireless Steve Wexler, has developed an amazing data-gathering tool that enables eLearning Guild members to find out the market penetration of all kinds of things, including authoring tools, simulation practices, m-Learning, and more. With this foray into industry research, the Guild easily leapfrogs the feeble attempts of ASTD and Bersin, both of which provide only broad-strokes analyses of industry data.

3. Judy Brown gave a great session on m-Learning, showing (a) an amazing installed base of m-Learning tools, which makes it clear that an m-learning tsunami is on its way, and (b) some interesting statistics showing that the US is way behind Asia and Western Europe in the amount of data used on cell phones (the more data, the more learning programs).