Robert Gagne's first event of instruction was "Gain Attention." Michael Allen's company, Allen Interactions, has been saying for years, "No More Boring e-Learning." We've all heard the stories of how often e-learning turns learners off. And yet, there is still a whole lot of boring e-learning out there.

An article from the eLearning Guild helps us avoid the trap, specifically by helping us start our learning interventions in ways that grab attention. Paul Clothier interviews Carmen Taran, author of the book Better Beginnings.

Dan Balzer and Susan Manning offer an excellent podcast on the topic. You can find the link to the Guild article on their web page as well.

Google has a nice blog post out on its use of eye movement research.

I remember getting a tour of Fidelity a few years ago and learning that their eye movement studies on web browsing showed that people were beginning to ignore big dark chunks of graphics because they thought they were advertisements.

My dissertation advisor, Ernie Rothkopf, did a classic study (with Billington) in 1979 using eye-movement data to test whether learners actually paid more attention (had more and higher-quality eye movements) to information in the learning material that was targeted by learning objectives than to information that was not so targeted. It turned out that learning objectives worked to boost learning because they prompted learners to pay more attention to the objective-relevant material and less attention to the rest of the information.

See: Rothkopf, E. Z., & Billington, M. J. (1979). Goal-guided learning from text: Inferring a descriptive processing model from inspection times and eye movements. Journal of Educational Psychology, 71, 310-327.

To answer the question I posed above: Yes, more of us should be using eye-movement research to inform our e-learning design.

And by the way, as web pages change their strategies to gain our attention, our learners may change their strategies to avoid things deemed irrelevant. Moreover, as our learners see more and more of our company’s e-learning, their eyes may learn where to go… In fact, a lot of them already have a well-learned capacity to find the NEXT key through a swarm of bees.

As predicted, Adobe is on the march to monopolize the e-learning development world.

Their new suite.

Have they really integrated intelligence about human learning in there? Not sure, but I'm skeptical. If you're a competitor, that's probably your only path to success. If you're Adobe, that may be your only weak link.

My services are available.

On January 30th, there will be an online conference, costing only $65, hosted in Second Life (so you can learn about Second Life as well). It sounds intriguing, with some solid speakers/hosts.

Check it out. It's called Stepping Into Virtual Worlds.

I’m writing you from the eLearning Guild’s annual conference. I went to a session presented by Silke Fleischer and colleagues at Adobe and was blown away by the work Adobe is doing to create products that support learning and related efforts. I then asked a number of industry thought leaders, who confirmed my interpretation: Adobe is now a 500-pound gorilla, likely to continue out-investing their competitors and thus creating better and better products for folks like us to use.
If you’re considering e-learning tools, you owe it to your organization to consider Adobe products. I have no financial relationship with Adobe, by the way. This is not to say that other products aren’t worthy and/or do some things better than Adobe products. My thinking is this: companies that invest in their products are often more likely to be there for you in the years to come. I’ve seen many clients who started using a particular tool five to ten years ago, and they are basically stuck with it because of their large installed base of learning courses.
Here are a few of the things that made me wake up and take notice:
  1. Adobe’s update cycle on Captivate seems to be shrinking, as they are aggressively moving forward in the development of Captivate 4.
  2. Captivate is being used for many purposes, including the development of podcasts, advertising, etc.
  3. You can embed a working Captivate file into Adobe Connect and then have webinar or online-learning participants each interact with Captivate objects.
  4. PDF files can now include fully functional interactive images. So documents are not static anymore!!
  5. Adobe is working on a new platform called AIR, which will enable the compilation of many types of objects for display and interaction.

I recently completed one of the most comprehensive work-learning audits I’ve ever been asked to do for a major U.S. retailer. The goal of the audit was to find out how their learning programs AND work-learning environment were supporting the stores in being successful. The audit involved (1) structured and unstructured interviewing with all levels of the organization, especially with store personnel, (2) focus groups held across the country with specific groups of store personnel (e.g., clerks, store managers, assistant managers), (3) task force meetings with senior line managers and representatives throughout the company, (4) learning audits of e-learning courses, (5) learning audits of a mission-critical classroom course, (6) review of company artifacts (CEO messages, publications, databases, intranet, etc.), (7) interviews with learning-and-performance professionals, (8) discussions of business strategy, (9) discussions regarding corporate information and data-gathering capabilities, (10) job shadowing, (11) store observations, etc.

Who/What Do Workers Learn From?

One of the most intriguing results came out of a relatively simple exercise I did with focus-group participants. The following is a rough approximation of those results.

What I did was ask focus-group participants who they learned from. I would hold up a large 6 x 8 index card with a position label on it, for example, "District Manager," "Clerks," or "Corporate." The group would shout out where they thought that card should go on a large diagram I had created on the wall. I would place it on the wall in a particular category based on the verbal responses, and then we would negotiate as a group to determine its final positioning. So, for example, participants could say that they learned the following amounts from that person/position, and we often compromised using in-between placement:

  • Learned Most
  • Learned a Lot
  • Learned Some
  • Learned a Little
  • Learned Least
  • Had Little/No Contact with

See the diagram below for a rough example. This one is actually a composite based on several focus groups and more than one position. It gives a fair representation of how frontline retail clerks responded. (A rough sketch of how such placements could be rolled up into a composite follows the diagram.) Note that the orange boxes represent fellow employees, while the blue boxes represent other groups of people or things that they learned from.

[Diagram: Who/what do people learn from]
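For what it’s worth, here is a hypothetical sketch of how placements like these could be rolled up into a composite. The ordinal scoring, the helper function, and the sample placements are my own illustrative assumptions; in the actual exercise, the composite came from group negotiation rather than a formula.

    # Hypothetical tabulation of focus-group card placements into a composite.
    # The category names come from the exercise above; the scoring and sample
    # data are illustrative assumptions, not the client's actual results.
    CATEGORY_SCORES = {
        "Learned Most": 5,
        "Learned a Lot": 4,
        "Learned Some": 3,
        "Learned a Little": 2,
        "Learned Least": 1,
    }

    def composite_placement(placements):
        """Average several groups' placements for one card; map back to a category."""
        scored = [CATEGORY_SCORES[p] for p in placements if p in CATEGORY_SCORES]
        if not scored:
            return "Had Little/No Contact with"
        avg = sum(scored) / len(scored)
        # Pick the category whose score is closest to the group average.
        return min(CATEGORY_SCORES, key=lambda c: abs(CATEGORY_SCORES[c] - avg))

    # Example: three focus groups placed the "Head Clerk" card as follows.
    print(composite_placement(["Learned Most", "Learned a Lot", "Learned a Lot"]))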

There are several key insights from these results:

  • People learn the most from those they work closely with.
  • People learn the most from their experience doing the job.
  • People learn the most from their self-initiated efforts at learning.
  • The more contact, the more learning (for the most part); however, there are benefits from learning from experts (e.g., store managers, head clerks), though the worker has to have at least some significant contact with them to create this benefit. You’ll notice that district staff have only a little impact and regional and corporate staff have none.
  • E-learning is seen as somewhat facilitative but not a place where workers learn the most. This result may be organization specific as different e-learning designs and implementations might easily move this result higher or lower.

Frontline clerks didn’t get much from company magazines and the like, but managers (not represented in the results above) did find value in these. Store managers also reported that networking with other store managers was one of the "Learn Most" entries for them. For this company, this network was even more important than learning from their district managers (their direct bosses). This makes sense because their network is more accessible in the heat of the daily grind.

These results were eye-opening for my client, and they are still wrestling with the implications. For example, district managers and district training staff seemed to produce very little learning benefit. So, should their roles for learning be de-emphasized or re-emphasized?

These types of results have to be understood within the larger data-gathering effort, of course. Analyzed alone, they suffer from the problem of de-contextualized self-report data. Combined with multiple other data sources, they paint a really robust picture of an organization’s learning environment.

Informal Learning, Social Networks, etc.

Vendors are out and about in our field now selling the benefits of complicated and expensive analysis tools for looking at how people learn through so-called informal on-the-job mechanisms. The example above shows that if you don’t have the big bucks, there are simpler ways to get good data as well. 

I’ve just spent 3 wonderful days in San Jose at the eLearning Guild’s DevLearn 2007 conference. Here were some of the highlights for me:

  • Hanging out with Ruth Clark a few times during lunches, keynotes, etc. We had a blast discussing research, the state of the profession, and the joy and challenge of doing the research-to-practice thing.
  • Seeing Ruth Clark and Silke Fleischer (of Adobe Systems) present the research AND practice of Richard Mayer’s work on multimedia learning. Silke did a nice job of demonstrating e-learning examples in Adobe’s Captivate. Ruth did a wonderful job discussing the research, framing it in terms of practical application, and describing its limitations. Adobe deserves a ton of credit for supporting the dissemination of Ruth’s work and helping to distribute it to a wide audience. Three cheers for enlightened companies like Adobe and Questionmark who support research dissemination.
  • Having Google Maps change my behavior by including a public transportation option when I did a search in San Jose. I had actually made a reservation to rent a car so I could drive from my hotel to the downtown hotel where the conference was held. When I went to Google Maps to search for the best route, I was offered a public transportation search. I found out I could get downtown in 12 minutes for only $1.75. I cancelled my rental car and happily commuted, saving money (Google showed my gas savings), parking fees, and aggravation. Awesome!! Technology changes everything.
  • Seeing a great keynote by Paul Saffo, who talked about technology innovation and reminded me how many failures are required before success is rewarded. It was one of those rare keynotes that was both well-delivered and superbly relevant for the conference. Way to go, Heidi Fisk (of the eLearning Guild), for a great keynote selection.
  • Wonderful food. Yes. At a conference!! Healthy, fresh, tasty. Way to go Fairmont Hotel.
  • The now-famous DevLearn DemoFest where dozens of e-learning developers show off their wares. It’s a great way to take a snapshot of the state of the e-learning industry.
  • Playing tennis with a Wii remote. My wife and I are not TV people, so I never pay attention to all the new remotes, Xboxes, etc. At the conference, I played tennis for about 2 minutes and had some fun. The cool thing about the Wii is that it tracks the remote’s movement and simulates that on the screen. Actually, with the Wii my first serve percentage was about 100%, much better than real life.
  • My Wednesday Breakfast Bytes session on the intersection of e-Technology and Informal Learning. We had a great conversation and I learned some things.
  • My Thursday Breakfast Bytes session on Situation-Based Instructional Design. The basic nugget is that people behave by (1) Being in a situation, (2) Evaluating that situation to make sense of it, (3) Deciding what to do, and (4) taking Action. So, we ought to give our learners practice in doing the whole SEDA process (Situation-Evaluation-Decision-Action). And, we can benefit from asking the Magic Question. Yes, there is more to it than that. The bottom line for me is that my clients have found the concept very helpful in helping them design learning that goes beyond the typical topic-based designs.
  • The DevLearn Breakfast Bytes sessions do a great job in getting conversations going. As always, Guild members come to the conference with experience and are ready to share their insights and wisdom. I love Breakfast Bytes.
  • My regular session on Learning Measurement. Another great discussion with—what seemed to me—like lots of light bulbs going off. Fun, even after several days of conferencing.
  • I met a guy at the Demo Fest—John D’Amours—who had actually tried to do a control-group experiment comparing his e-learning design to a traditional design. Yes. Yes. Yes. We need more folks taking this kind of initiative.
  • Learning that Windows Vista NORMALLY runs slow with 2GB of memory. Glad it wasn’t just my machine.
  • AND so many other great conversations and sessions. Sorry if I failed to mention you!! Hugs to all. I really learn a lot at eLearning Guild events. And I have to say, I feel that my contribution is especially appreciated. Thanks Guild members and staff !!!!!!!!!!

The Carbon Offset idea works like this. We all pollute, but when we do so we can help limit the damaging effects by either (1) offsetting our damage by doing good in other ways (for example, if we have to drive a large car, we can replace all our light bulbs with energy-saving fluorescents), or (2) donating money to projects that help support renewable energy, energy efficiency, and reforestation. For example, check out the not-for-profit organizations CarbonFund.org and The Clean Air Conservancy.

Here are some ideas for those of us in the training and development field:

  1. Encourage the use of e-learning, which limits the carbon footprint of travel. And, make sure you build e-learning that is effective and engaging, so more folks will want to use e-learning.
  2. When calculating the "cost" of training, calculate carbon footprint costs as well. See for example, The Carbon Fund’s calculators or The Clean Air Conservancy’s calculators. Make these costs evident.
  3. Encourage your company to buy carbon offsets when utilizing training. It’s not just a good thing to do, but it may help your company attract business and recruit highly-educated employees.
  4. In your e-learning courses, provide an option for learners to calculate how many tons of carbon dioxide they would have generated had they had to travel from their location to headquarters. (A rough calculation is sketched just below this list.)
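For illustration, here is a minimal sketch of the kind of calculation I have in mind. The emission factors are rough, round-number assumptions of mine (not official figures), so treat this as a back-of-the-envelope starting point and use a real calculator (such as CarbonFund.org's) for anything you report.

    # Rough, illustrative carbon estimate for travel to a training event.
    # The emission factors are round-number assumptions for illustration only;
    # use a calculator such as CarbonFund.org's for real estimates.
    KG_CO2_PER_CAR_MILE = 0.4   # assumed average passenger car
    KG_CO2_PER_AIR_MILE = 0.2   # assumed per-passenger air travel

    def travel_emissions_tons(learners, round_trip_miles, mode="air"):
        """Estimated metric tons of CO2 for all learners' round trips."""
        factor = KG_CO2_PER_AIR_MILE if mode == "air" else KG_CO2_PER_CAR_MILE
        kilograms = learners * round_trip_miles * factor
        return kilograms / 1000.0  # kilograms -> metric tons

    # Example: 50 learners each flying 1,000 miles round trip to headquarters.
    print(travel_emissions_tons(50, 1000, mode="air"), "metric tons of CO2")  # 10.0

Numbers like these, placed next to the dollar cost of a course, make the footprint visible in a way most training budgets never do.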

What other ideas can you think of?

In a webinar this month (December 2006), I asked a group of about 100 e-learning professionals what was the highest level of assessment they did (based on Kirkpatrick’s Four Levels) on their most recent learning-intervention development project.

  • 11% said they did NO evaluation
  • 26% said they did Level 1 smile sheets
  • 48% said they measured Level 2 learning
  • 15% said they measured Level 3 on-the-job performance
  • 0% said they measured Level 4 business results (or ROI).

Unfortunately, smile sheets are very poor predictors of meaningful learning outcomes, correlating with learning and performance at less than an r of .2 (which means they explain less than 4% of the variance in those outcomes). See Alliger, Tannenbaum, Bennett, Traver, & Shotland (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-357.

Stunning: Even after all the hot air expelled, ink spilled, and electrons excited in the last 10 years regarding how we ought to be measuring business results, nobody is doing it!!

——————————

When I asked them about their most recent assessment in terms of WHEN they did it (immediately at the end of learning or after a delay), here is what they said:

  • 77% said they did the assessment, "At the end of training."
  • 7% said they did the assessment, "After a delay."
  • 14% said they did the assessment, "At end—and after a delay."
  • 2% said, "Never done / Can’t remember."

Unfortunately, the 77% are biasing the results in a positive direction. They are measuring learning when it is top-of-mind, easily accessible from long-term memory. They are measuring the learning intervention’s ability to create short-term learning effects. They are not measuring its ability to support long-term remembering, or its ability to specifically minimize forgetting.

In the graphic depiction below, the top of the left axis (the y axis) represents more remembering, the bottom is less remembering. Consider what happens if we assess learning at the end (or the top) of the first (leftmost) "Learning" curve. If the learners utilize what they’ve learned on the job, such an assessment has a negative bias. However, what typically happens over time is more like the forgetting curve (depicted at the lower right). Unless learners regularly use what they’ve learned, any assessment at the end of the first learning curve is likely to be a poor predictor of future remembering—and show a definite positive bias.

[Diagram: Learning and forgetting curves]
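If it helps to see the timing effect in numbers, here is a minimal sketch using a simple exponential decay. The decay model and its time constant are illustrative assumptions I picked for the example, not values from any particular study; the point is only that the same learner scores very differently at the end of training than after a delay.

    import math

    def retention(days_since_training, stability=10.0):
        # Illustrative exponential forgetting curve: fraction retained after a delay.
        # The 'stability' time constant (in days) is an assumed value, not empirical.
        return math.exp(-days_since_training / stability)

    # An end-of-training assessment (delay = 0) versus a delayed assessment:
    print(f"Right after training: {retention(0):.0%} retained")   # 100%
    print(f"Four weeks later:     {retention(28):.0%} retained")  # roughly 6%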

——————————

When I asked them about their most recent assessment in terms of WHERE the learners were when they completed it, here is what they said:

  • 70% said they did the assessment, "In the training room/context."
  • 26% said they did the assessment, "In a different room/context."
  • 5% said, "Never done / Can’t remember."

Unfortunately, the 70% are biasing the results of their assessments in a positive direction. When learners are in the same context during retrieval as during learning, they tend to recall more because the background context stimulates improved retrieval. So, providing our training assessments in the training room (or using the same background stimuli in an e-learning course) is not a fair way for us to get feedback on our performance as instructional designers.

For one example of this research paradigm, see Smith, S. M., Glenberg, A., & Bjork, R. A. (1978). Environmental context and human memory. Memory & Cognition, 6, 342-353.

[Diagram: Testing in and out of context]

The Bottom Line

First, I’m not blaming these particular folks. This is a common reality. I have regularly failed—and continue to fail often—in validly assessing my own instructional-development efforts. I’m much better when people pay me to evaluate their learning interventions (see LearningAudit.com).

Almost all of us—as far as I can tell—are just not getting valid feedback on our instructional-development efforts. Note that though 48% said they did Level 2 learning evaluation on their most recent project, probably most of those folks delivered the assessment in a way that biased those results. This leaves very few of us who are getting valid feedback on our designs.

We’re in a dark fog about how we’re doing, and so we have massively impoverished information to use to make improvements.

Basically, we live in a shameful, self-imposed fog.

Bring on the fog lights!!

Here is an example of e-learning for e-learning’s sake.

I think a table would have been much more valuable, and I’d like a search capability. Also, what if I want to know whether to buy an organic tomato or not?

http://www.consumerreports.org/cro/food/organic-products-206/test-your-organic-iq/index.htm

The information may be good, but it’s hard to get to.