
At a recent industry conference, a speaker offering their expertise on learning evaluation said this:

“As a discipline, we must look at the metrics that really matter… not to us but to the business we serve.”

Unfortunately, this is one of the most counterproductive memes in learning evaluation. It is counterproductive because it throws our profession under the bus. In this telling, we have no professional principles, no standards, no foundational ethics. We are servants, cleaning the floors the way we are instructed to clean them, even if we know a better way.

Year after year we hear from so-called industry thought leaders that our primary responsibility is to the organizations that pay us. This is a dangerous half-truth. Of course we owe our organizations some fealty, and of course we want to keep our jobs, but we also have professional obligations that go beyond this simple “tell-me-what-to-do” calculus.

This monomaniacal focus on measuring learning in terms of business outcomes reminds me of the management meme from the 1980s and 90s that held that the goal of a business organization is to maximize shareholder value. This single-bottom-line focus has come under blistering attack for its tendency to skew business operations toward short-term results at the expense of long-term health, and for producing outcomes that harm employees, hurt customers, and destroy the environment.

If we give our business stakeholders the metrics they say matter to them, but fail to capture the metrics that matter to our success as learning professionals in creating effective learning, then we not only fail ourselves and our learners but fail our organizations as well.

Evaluation: What For?

To truly understand learning evaluation, we have to ask ourselves why we’re evaluating learning in the first place! We have to work backwards from the answer to this question.

Why does anyone evaluate? We evaluate to help us make better decisions and take better actions. It’s really that simple! So as learning professionals, we need information to help us make our most important decisions. We should evaluate to support these decisions!

What are our most important decisions? Here are a few:

  • Which parts of the content taught, if any, are relevant and helpful in supporting employees in doing their work? Which parts should be modified or discarded?
  • Which aspects of our learning designs are helpful in supporting comprehension, remembering, and motivation to learn? Which aspects should be modified or discarded?
  • Which after-training supports are helpful in enabling learning to be transferred and utilized by employees in their work? Which supports should be kept? Which need to be modified or discarded?

What are our organizational stakeholders’ most important decisions about learning? Here are a few:

  • Are our learning and development efforts creating optimal learning results? What additional support and resources should the organization supply that might improve learning results? What savings can be found in terms of support and resources—and are these savings worth the lost benefits?
  • Is the leadership of the learning and development function producing a cycle of continuous improvement, one that generates improved learning outcomes, or at least learning outcomes optimized given resource constraints? If not, can they be influenced to do better or should they be replaced?
  • Is the leadership of the learning and development function creating and utilizing evaluation metrics that enable the learning and development team to get valid feedback about the design factors that are most important in creating our learning results? If not, can they be influenced to use better metrics or should they be replaced?

Two Goals for Learning Evaluation

When we think of learning evaluation, we should have two goals. First, we should create learning-evaluation metrics that enable us to make our most important decisions regarding content, design components (i.e., focused at least on comprehension, remembering, and motivation to apply learning), and after-training support. Second, we should do enough in our learning evaluations to gain sufficient credibility with our business stakeholders to continue our good work. Focusing only on the second of these is a recipe for disaster.

Vanity Metrics

In the business start-up world there is a notion called “vanity metrics”; see, for example, the warnings of Eric Ries, originator of the lean-startup movement. Vanity metrics are metrics that seem important but are not. They often make us look good even when the underlying data is not really meaningful.

Most calls to provide our business stakeholders with the metrics that matter to them result in beautiful visualizations and data dashboards focused on vanity metrics. Ubiquitous vanity metrics in learning include the number of trainees trained, the cost per training, learners’ own estimates of the learning’s value, complicated benefit/cost analyses that utilize phantom measures of benefits, and so on. By focusing only or primarily on these metrics, we don’t have data to improve our learning designs, we don’t have data that enables us to create cycles of improvement, and we don’t have data that enables us to hold ourselves accountable.
