Be Careful Using Experts as e-Trainers.
I’ve been reading Richard E. Clark and Fred Estes’ recently released book, Turning Research into Results: A Guide to Selecting the Right Performance Solutions. They recount research showing that an expert’s knowledge is largely "unconscious and automatic." In other words, experts have retrieved their knowledge from memory so many times that they’ve forgotten how they retrieve it and how the information fits together; the knowledge simply comes to mind when they need it. This automaticity serves them well as they use their expertise, but it makes it difficult for them to explain what they know to other people. They forget to tell others important information and fail to describe the links that make it all hang together.
In the learning-and-performance field we often use experts as trainers. Clark and Estes suggest that when experts teach, their courses ought to be pilot tested to work out the kinks. In my experience as a trainer, I’ve found that the first few deliveries always need significant improvements. I learn by seeing blank stares, sensing confusion, receiving questions, and watching exercises veer off in the wrong direction. This has me thinking about synchronous instructor-led web-based training.
If it’s hard to create fluent classes in face-to-face situations, it will be even more difficult over the web. We humans are hardwired to look into people’s eyes and read their expressions, and online delivery strips away much of that feedback. Should we avoid having experts teach our courses? Probably not. Only experts have the credibility and knowledge to teach best practices.
What does this insight say about using synchronous delivery for quick information transfer? It means that it may not be effective to hook our resident expert up to a microphone and have them start talking. If they’re talking with other experts, they’ll be fine. But we ought to be skeptical about our experts’ ability to do ad-hoc sessions without practice.
How are our expert trainers going to get the practice and feedback they need to fix the problems their expertise creates? I’m sure you can create your own list of recommendations. Here’s mine:
1. Teach the course in a face-to-face situation first to work out the major bugs. This should not be used as a substitute for realistic online pilot testing. A variation is to use focus-group rooms with the instructor behind a one-way mirror. The instructor will see how the audience reacts, but will have to use the e-learning platform to deliver the learning. If the technology allows, a small number of learners might be put on webcams so the instructor can watch their progress.
2. Beta-test the course online (at least once or twice) with a small number of learners who are primed to give feedback. Make sure they are encouraged to register their confusion immediately, and allow lots of extra time for the instructor, learners, and observers to write down their points of confusion, discomfort, and skepticism.
3. Make sure the learning design provides lots of opportunities for the learners to register their confusion and ask for clarification, especially the first few times the course is run. This can involve questioning the learners, but the questions should be meaningful, not invitations for superficial regurgitation.