The Five Failures of Workplace Learning Professionals

To improve, we must know our biggest failings.

In the training and development field, our five biggest failures are as follows:

  1. We forget to minimize forgetting and improve remembering.
  2. We don’t provide training follow-through.
  3. We don’t fully utilize the power of prompting mechanisms.
  4. We don’t fully leverage on-the-job learning.
  5. We measure so poorly that we don’t get good feedback to enable improvement.

1. Minimizing Forgetting, Improving Remembering

It is not enough to help people understand new concepts or even to motivate them to utilize those concepts. If they don’t remember those concepts when they encounter situations in which they would be useful, then the previous understanding and motivation are for naught.

There are three powerful mechanisms that support long-term remembering: (a) aligning the learning and performance contexts, (b) providing retrieval practice, and (c) utilizing spaced repetitions. Most of our learning interventions do a poor job of providing these mechanisms—resulting in training that may create awareness but doesn’t support remembering or performance improvement.

We need to give our learners more realistic practice using scenarios and simulations. We also need to space repetitions of learning over time—much more than we do now. Instead of trying to teach everything at a basic awareness level, we need to cover less content and, rather than just presenting it, give our learners opportunities for deliberate practice.

2. Training Follow-Through

Providing training while making no effort to ensure that learners will apply what they’ve learned is the height of professional malpractice. Even if we assume that learners remember what they’ve learned (which, as we just saw, is not a given), they still must (a) remain motivated to apply what they’ve learned, (b) feel that there is some benefit to applying the learning, (c) have the resources and time to put their learning into practice, (d) get feedback and guidance to improve their performance, and (e) be prepared to overcome obstacles and frustrations in applying the learning.

Note how the first two failures create an additive effect—both significantly lessen the likelihood of on-the-job application of the learning. If learners don’t remember, they’re not going to apply what they’ve learned. If learners don’t receive after-training follow-through support, they are unlikely to provide the continuous and persistent focus needed to apply the learning in a way that creates sustainable success.

To reach a credible level of training follow-through we need to (a) engage our learners’ managers to enlist their support, (b) provide reminders to apply the learning, (c) provide relearning opportunities for that which has been forgotten, (d) enable additional learning to improve and elaborate on the performance, (e) ensure our learners have the resources and time they need to apply the learning and integrate it into their behavioral repertoire, (f) provide coaching support to guide the learning-and-performance process, (g) ensure the learners are incentivized, either tangibly with money or perks or intrinsically by aligning efforts with personal values and sense of identity, and (h) encourage persistence even in the face of obstacles and frustration.

3. Prompting Mechanisms

Prompting mechanisms rely on one particularly powerful quirk of the human cognitive architecture—that our working memories are triggered easily by environmental stimuli. Prompting mechanisms include things like job aids, performance support tools, signage, intuitive cues in our tools and equipment, and some forms of management oversight. They work because they prompt certain strands of thinking, and thus performance. For example, a job aid that lists 5 key interview goals and 10 key interview questions with their rationales automatically triggers in the interviewer a certain way of thinking about interviewing. Similarly, an interview template might remind its user that interviews are more telling if interviewees are asked to perform a work task or describe how they would perform one. Without such a prompt, the interviewer might focus only on how well they think the person would fit into the work culture.

While we are aware of these prompting mechanisms, we are not aggressive enough in their use. If we utilized prompting mechanisms more often with our training and more often as a replacement for training, we’d create better outcomes. If we went looking for grassroots prompting mechanisms already being used and helped spread their use, we’d be more effective. If we evaluated learning facilitators on their use of prompting mechanisms, we’d be more likely to encourage the use of prompting mechanisms. If we asked learners in training to practice with prompting mechanisms, we’d see more being used on the job—and our learners would remember more of what they learned.

4. On-the-Job Learning

We as learning professionals tend to focus almost exclusively on the creation and delivery of training interventions even when we know that our learners are doing a great deal of their learning on the job without any training. Employees learn through trial and practice, by getting help from others, through social media, by reading task instructions, by using help systems, and so forth. While we have much less direct influence on on-the-job learning than on training, we do have some influence, and we ought to use it if we are serious about getting results.

Often the biggest impact we can have is by accessing managers and encouraging them to actively promote learning. Managers can improve learning in their direct reports by (a) making it a point to monitor their employees’ competencies and guide them toward learning opportunities, (b) being approachable and available for questions and advice, (c) creating a culture of learning and information sharing, (d) encouraging data-driven decision-making instead of opinion-driven decision-making, (e) utilizing an experimental mindset, for example by encouraging pilot-testing and rapid prototyping, and (f) giving direct reports time for learning and exploration.

We can also have an influence on on-the-job learning by creating and maintaining social-media mechanisms that can be tailored to particular needs. For example, wikis can be used by project teams to get input from various parties, and blogs can be used by senior folks to lay out a compelling vision.

We can encourage better on-the-job learning by improving people’s ability to coach their fellow employees. Too often people asked to coach others do a poor job because they just don’t know what good coaching looks like.

We can utilize diagnostic tools to help people in the organization see things about themselves—or about the organization—that they might not otherwise see. For example, if the organization engages in an effort to improve coaching ability, those being coached can be asked to take a short diagnostic survey on how well their coach is doing in coaching them. If an organization wants to change its culture to one that is more flexible and creative, we can utilize a diagnostic to track progress. We can also use a diagnostic to get the organization talking about specifics—so that employees know what behaviors represent the past culture and which represent the new culture.

There are, of course, other things we can do to directly influence on-the-job learning. In addition, we can change our brand by stopping our tendency to be order takers for training. By changing the way we define our role, we can encourage the business side to be fuller partners in organizational learning.

5. Measurement and Feedback to Spur Improvement

We as learning professionals suck at measurement, creating a vacuum of information that pushes us to make poor decision after poor decision in our learning designs. By only seeking learner opinions about the learning, we encourage a bias toward entertainment and engagement and away from content validity, remembering, and application. By measuring only when the learners are in the training context, we don’t learn whether the learning intervention would generate remembering in a work context that is not like the training situation. By measuring only during the learning event, we measure the learning intervention’s ability to create understanding, but we do not measure the learning intervention’s ability to support long-term remembering. We also fail to examine whether any training follow-through is utilized. By utilizing only low-level questions in our tests of learning, we fail to measure the ability of our learners to make decisions that relate to workplace performance. In short, we don’t get the feedback we need to make good learning decisions.

Maintaining ourselves in a state of permanent darkness, we continue to make terrible decisions in regard to learning design, development, and deployment. We design primarily for engagement and understanding, while ignoring remembering, motivation, and application. We hire and promote trainers and training companies who get great ratings but who don’t help learners remember or apply what they’ve learned. Because our measurement is focused only on training, we fail to engage our business partners to ensure that they are adequately supporting learning application—we also never learn what obstacles and leverage points face our learners when they go to apply the learning in their jobs. We build e-learning programs that encourage learners to focus on low-level trivia instead of focusing on the main points. By abstaining from diagnostics, we leave employees blind to conditions from which they might benefit. Poor measurement enables the first four failures.

The bottom line on measurement is that measurement should provide us with valid feedback. Unfortunately, because we haven’t taken the human learning system into account in our measurement designs—and in our measurement models—we are getting biased information and drawing inappropriate conclusions from poor data.

The Five Failures are Fixable

We as learning professionals—as a whole—though working honorably and with good intentions, are too often failing to maximize our impact. Our job is work-performance improvement. We can start by improving our own work performance.

But instead of focusing on everything—which will certainly overwhelm us—we should focus on the things that really matter. Instead of following willy-nilly prescriptions that pop like fads from a popcorn popper, we should focus on five things that are fundamental and inspired by the learning research. We should focus on the five failures.

In this brief article, I have provided strong hints about how to rethink and redirect each of the five failures. While such a brief synopsis is certainly not sufficient to enable you to completely redesign your learning efforts, it should, I hope, motivate you to get started.

2 replies

  1. John Kleeman says:

    Will
    Thank you for a great blog article, though a bit dispiriting to read.
    My take on this is that there is a divergence between what people think they have learned (e.g. after a training session) and what they actually will retain and use productively.
    And we are focusing training to maximize the first and not the second. So there is this illusion that people think they’ve learned, which guides trainers, when really we need to identify what they actually learn and retain after time.
    But how do we get people to realize this and change? I guess it’s quite a cultural mind shift and is taking time, but thank you for your efforts to communicate the research evidence, which is significantly helpful.
