Today, industry luminary and social-media advocate Jane Hart wrote an incendiary blog post claiming that “the world of L&D [Learning and Development] is splitting in two.” According to Jane, there are good guys and bad guys.

The bad guys are the “Traditionalists.” Here is some of what Jane says about them:

  • “They cling onto 20th century views of Training & Development.”
  • “They believe they know what is best for their people.”
  • “They disregard the fact that most people are bored to tears sitting in a classroom or studying an e-learning course at their desktop.”
  • “They miss the big picture – the fact that learning is much more than courses, but involves continuously acquiring new knowledge and skills as part of everyday work.”
  • “They don’t understand that the world has changed.”

Fighting words? Yes! Insulting words? Yes! Painting with too broad a brush? Yes! Maybe just to make a point? Probably!

Still, Jane’s message is clear. Traditionalists are incompetent fools who must be eradicated because of the evil they are doing.

Fortunately, galloping in on white horses, we have “Modern Workplace Learning (MWL) practitioners.” These enlightened souls are doing the following, according to Jane:

  • “They are rejecting the creation of expensive, sophisticated e-learning content and preferring to build short, flexible, modern resources (where required) that people can access when they need them. AND they are also encouraging social content (or employee-generated content) – particularly social video – because they know that people know best what works for them.”
  • “They are ditching their LMS (or perhaps just hanging on to it to manage some regulatory training) – because they recognise it is a white elephant – and it doesn’t help them understand the only valid indicator of learning success, how performance has changed and improved.”
  • “They are moving to a performance-driven world – helping groups find their own solutions to problems – ones that they really need, will value, and actually use, and recognise that these solutions are often ones they organise and manage themselves.”
  • “They are working with managers to help them develop their people on the ground – and see the success of these initiatives in terms of impact on job performance.”
  • “They are helping individuals take responsibility for their own learning and personal development – so that they continuously grow and improve, and hence become valuable employees in the workplace.”
  • “They are supporting teams as they work together using enterprise social platforms – in order to underpin the natural sharing within the group, and improve team learning.”

Points of Agreement

I agree with Jane in a number of ways. Many of the practices we use in workplace learning are ineffective.

Here are some points of agreement:

  1. Too much of our training is ineffective!
  2. Too often training and/or elearning are seen as the only answer!
  3. Too often we don’t think of how we, as learning professionals, can leverage on-the-job learning.
  4. Too often we default to solutions that try to support performance primarily by helping people learn — when performance assistance would be preferable.
  5. Too often we believe that we have to promote approved organizational knowledge, when we might be better off letting our fellow workers develop and share their own knowledge.
  6. Too often we don’t utilize new technologies in an effort to provide more effective learning experiences.
  7. Too often we don’t leverage managers to support on-the-job learning.
  8. Too often we don’t focus on how to improve performance.

Impassioned Disagreement

As someone who has shared the stage with Jane in the past, and who knows that she’s an incredibly lovely person, I doubt that she means to cast aspersions on a whole cohort of dedicated learning-and-performance professionals.

Where I get knocked off my saddle is at the oversimplifications encouraged in the long-running debate between the traditionalist black hats and the informal-learning-through-social-media white hats! Pitting these groups against each other is beside the point!

I remember not too long ago when it was claimed that “training is dead,” that “training departments will disappear,” that “all learning is social,” that “social-media is the answer,” etc…

What is often forgotten is that the only thing that really matters is the human cognitive architecture. If our learning events and workplace situations don’t align with that architecture, learning will suffer.

Oversimplifications that Hurt the Learning Field

  1. Learners know how they learn best so we should let them figure it out.
    Learners, as research shows, often do NOT know how they learn best, so leaving them to figure it out on their own can be counterproductive; we should be finding ways to support them in learning.
  2. Learning can be shortened because all learners need to do is look it up.
    Sometimes learners have a known learning need that can be solved with a quick burst of information. BUT NOT ALL LEARNING is like this! Much of learning requires a deeper, longer experience. Much of learning requires more practice, more practical experience, etc. Because of these needs, much of learning requires support from honest-to-goodness learning professionals.
  3. All training and elearning is boring!
    Really? This is obviously NOT true, even if much of it could be lots better.
  4. That people can always be trusted to create their own content!
    This is sometimes true and sometimes not. Indeed, sometimes people get stuff wrong (sometimes dangerously wrong). Sometimes experts actually have expertise that we normal people don’t have.
  5. That using some sort of enterprise social platform is always effective, or is always more effective, or is easy to use to create successful learning.
    Really? Haven’t you heard more than one or two horror stories — or failed efforts? Wikis that weren’t populated. Blogs that fizzled. SharePoint sites that were isolated from the users who could use the information. Forums where less than 1% of folks are involved. Et cetera… And let’s not forget, these social-learning platforms tend to be much better at just-in-time learning than at long-term, deeper learning (not totally, but usually).
  6. That on-the-job learning is easy to leverage.
    Let’s face it, formal training is MUCH EASIER to leverage than on-the-job learning. On-the-job learning is messy and hard to reach. It’s also hard to understand all the forces involved in on-the-job learning. And what’s ironic is that there is already a group that is in a position to influence on-the-job learning. The technical term is “managers.”
  7. Crowds of people always have more wisdom than single individuals.
    This may be one of the stupidest memes floating around our field right now. Sounds sexy. Sounds right. But not when you look into the world around us. I might suggest recent presidential candidate debates here in the United States as evidence. Clearly, the smartest ideas don’t always rise to prominence!
  8. Traditional learning professionals have nothing of value to offer.
    Since I’m on the front lines in stating that our field is under-professionalized, I probably am the last one who should be critiquing this critique, but it strikes me as a gross simplification — if not grossly unfair. Human learning is exponentially more complex than rocket science, so none of us has a monopoly on learning wisdom. I’m a big proponent of research-based and evidence-based practice, and yet neither research nor any other form of evidence is omniscient. Almost every time I teach, talk to clients, read a book, read a research article, or read the newspaper, I learn more about learning. I’ve learned a ton from traditional learning professionals. I’ve also learned a ton from social-learning advocates.

 

Summary

In today’s world, there are simply too many echo-chambers — places which are comfortable, which reinforce our preconceptions, and which encourage us to demonize others and close off avenues to our own improvement.

We in the learning field need to leave echo-chambers to our political brethren, where they will do less damage (Ha!). We have to test our assumptions, utilize the research, and develop effective evaluation tools to really test the success of our learning interventions. We have to be open, but not too easily hoodwinked by claims and shared perceptions.

Hail to the traditionalists and the social-learning evangelists!

 

Follow-up!

Clark Quinn wrote an excellent blog post to reconcile the visions promoted by Jane and Will.

 

Share!

If you want to share this discussion with others, here are the links:

  • Jane’s Provocative Blog Post:
    • http://www.c4lpt.co.uk/blog/2015/11/12/the-ld-world-is-splitting-in-two/
  • Will’s Spirited Critique:
    • http://www.willatworklearning.com/2015/11/the-two-world-theory-of-workplace-learning-critiqued.html
  • Clark’s Reconciliation:
    • http://blog.learnlets.com/?p=4655#comment-821615

 

 

John Medina, author of Brain Rules and developmental molecular biologist at the University of Washington and Seattle Pacific University, was today’s keynote speaker at PCMA’s Education Conference in Fort Lauderdale, Florida.

He did a great job in the keynote, well organized and with oodles of humor, but what struck me was that even though the guy is a real neuroscientist, he is very clear in stating the limitations of our understanding of the brain. Here are some direct quotes from his keynote, as I recorded them in my notes:

“I don’t think brain science has anything to say for business practice.”

“We still don’t really know how the brain works.”

“The state of our knowledge [of the brain] is childlike.”

“The human brain was not built to learn. It was built to survive.”

Very refreshing! Especially in an era where conference sessions, white papers, and trade-industry publications are oozing with brain science bromides, neuroscience snake oil, and unrepentant con artists who, in the interest of taking money from fools, corral the sheep of the learning profession into all manner of poor purchasing decisions.

The Debunker Club is working on a resource page to combat the learning myth, “Neuroscience (Brain Science) Trumps Other Sources of Knowledge about Learning,” and John Medina gives us more ammunition against the silliness.

In addition to John’s keynote, I enjoyed eating lunch with him. He’s a fascinating man, wicked knowledgeable about a range of topics, funny, and kind to all (as I found out when he developed an easy repartee with the guy who served our food). Thanks, John, for a great time at lunch!

One of the topics we talked about was the poor record researchers have in getting their wisdom shared with real citizens. John believes researchers, who often get research funding from taxpayer money, have a moral obligation to share what they’ve learned with the public.

I shared my belief that one of the problems is that there is no funding stream for research translators. The academy often frowns on professors who attempt to share their knowledge with lay audiences. Calls of “selling out” are rampant. You can read my full thoughts on the need for research translators in a blog post I wrote earlier this year.

Later in the day at the conference, John was interviewed in a session by Adrian Segar, an expert on conference and meeting design. Again, John shined as a deep and thoughtful thinker — and refreshingly, as a guy who is more than willing to admit when he doesn’t know and/or when the science is not clear.

To check out or buy the latest version of Brain Rules, click on the image below:

 

 

 

 

Today’s New York Times has a fascinating article on the mostly European concept of practice firms. As the name implies, practice firms give people practice in doing work.

This seems to align well with the research on learning that suggests that learning in a realistic context, getting lots of retrieval practice and feedback, and many repetitions spaced over time can be the most effective way to learn. Of course, the context and practice and feedback have to be well-designed and aligned with the future work of the learner.

Interestingly, there is an organization that is solely devoted to the concept. EUROPEN-PEN International is the worldwide practice enterprise network. The network consists of over 7,500 Practice Enterprises in more than 40 countries. It has a Facebook page and a website.

I did a quick search to see if there was any scientific research on the use of practice firms, but I didn’t uncover anything definitive… If you know of scientific research, or other rigorous evidence, let me know…

 

 

Note: Pilot is Over… Post kept for historical reasons only…

 

Organizations Wanted to Pilot Leadership-Development Subscription Learning!!

I am looking for organizations that are interested in piloting subscription learning as a tool to aid in developing their managers and energizing their senior management’s strategic initiatives.

To read more about the benefits and possibilities for subscription learning and leadership development, read my article posted on the ATD (Association for Talent Development) website.

Potential Benefits

  • Reinforce concepts learned to ensure remembering and application.
  • Drive management behaviors through ongoing communications.
  • Utilize the scientifically-verified spacing effect to boost learning.
  • Enable dialogue between your senior leaders and your developing managers.
  • Inculcate organizational values through scenario-based reflection.
  • Prompt organizational initiatives through your management cadre.
  • Engage in organizational learning, promoting cycles of reinforcement.
  • Utilize and pilot test new technologies, boosting motivation.
  • Utilize the power of subscription learning before your competitors do.

Potential Difficulties

  • Pilot efforts may face technical difficulties and unforeseen obstacles.

Why Will Thalheimer and Work-Learning Research, Inc.?

  • Experienced leadership-development trainer
  • Previously ran leadership-development product line (Leading for Business Results)
  • Leader in the use of scenario-based questions
  • Experienced in using subscription learning
  • Devoted to evidence-based practices
  • Extensive experience in practical use of learning research

Why Now?

  • Subscription-learning tools are available.
  • Mobile-learning is gaining traction.
  • Substantial discounts for pilot organizations.

Next Steps!!

  • Sorry, the pilot is over…

 

A few years ago, I created a simple model for training effectiveness based on the scientific research on learning in conjunction with some practical considerations (to make the model’s recommendations leverageable for learning professionals). People keep asking me about the model, so I’m going to briefly describe it here. If you want to look at my original YouTube video about the model — which goes into more depth — you can view that here. You can also see me in my bald phase.

The Training Maximizers Model includes 7 requirements for ensuring our training or teaching will achieve maximum results.

  • A. Valid Credible Content
  • B. Engaging Learning Events
  • C. Support for Basic Understanding
  • D. Support for Decision-Making Competence
  • E. Support for Long-Term Remembering
  • F. Support for Application of Learning
  • G. Support for Perseverance in Learning

Here’s a graphic depiction:

 

Most training today is pretty good at A, B, and C but fails to provide the other supports that learning requires. This is a MAJOR PROBLEM because learners who can’t make decisions (D), learners who can’t remember what they’ve learned (E), learners who can’t apply what they’ve learned (F), and learners who can’t persevere in their own learning (G) are learners who simply haven’t received leverageable benefits.

When we train or teach only to A, B, and C, we aren’t really helping our learners, we aren’t providing a return on the learning investment, and we haven’t done enough to support our learners’ future performance.
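For readers who like to tinker, here is a minimal sketch of the model as an audit checklist, written in Python purely as my own illustration (the audit function and names are hypothetical, not part of the model itself):

    # A rough audit checklist based on the Training Maximizers Model.
    # This is an illustrative sketch, not an official scoring tool.

    MAXIMIZERS = {
        "A": "Valid Credible Content",
        "B": "Engaging Learning Events",
        "C": "Support for Basic Understanding",
        "D": "Support for Decision-Making Competence",
        "E": "Support for Long-Term Remembering",
        "F": "Support for Application of Learning",
        "G": "Support for Perseverance in Learning",
    }

    def audit(supports_provided):
        """List the maximizers a course design misses."""
        missing = [f"{key}. {name}" for key, name in MAXIMIZERS.items()
                   if key not in supports_provided]
        for item in missing:
            print("Missing:", item)
        return missing

    # A typical course: pretty good at A, B, and C, weak on the rest.
    audit({"A", "B", "C"})

Run on the typical course above, the audit flags D through G, which is exactly the gap the model is meant to expose.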

 

 

Clark Quinn and I have started debating top-tier issues in the workplace learning field. In the first one, we debated who has the ultimate responsibility in our field. In the second one, we debated whether the tools in our field are up to the task.

In this third installment of the series, we’ve engaged in an epic battle about the worth of the 4-Level Kirkpatrick Model. Clark and I believe that these debates help elucidate critical issues in the field. I also think they help me learn. This debate still intrigues me, and I know I’ll come back to it in the future to gain wisdom.

And note, Clark and I certainly haven’t resolved all the issues raised. Indeed, we’d like to hear your wisdom and insights in the comments section.

————————–

Will:

I want to pick on the second-most renowned model in instructional design, the 4-Level Kirkpatrick Model. It produces some of the most damaging messaging in our industry. Here’s a short list of its treacherous triggers: (1) It completely ignores the importance of remembering to the instructional design process, (2) It pushes us learning folks away from a focus on learning—where we have the most leverage, (3) It suggests that Level 4 (organizational results) and Level 3 (behavior change) are more important than measuring learning—but this is an abdication of our responsibility for the learning results themselves, (4) It implies that Level 1 (learner opinions) is on the causal chain from training to performance, but two major meta-analyses show this to be false—smile sheets, as now utilized, are not correlated with learning results! If you force me, I’ll share a quote from a top-tier research review that damns the Kirkpatrick model with a roar. “Buy the ticket, take the ride.”

 

Clark:

I laud that you’re not mincing words! And I’ll agree and disagree. To address your concerns: 1) Kirkpatrick is essentially orthogonal to the remembering process. It’s not about learning, it’s about aligning learning to impact. 2) I also think that Kirkpatrick doesn’t push us away from learning, though it isn’t exclusive to learning (despite everyday usage). Learning isn’t the only tool, and we should be willing to use job aids (read: performance support) or any other mechanism that can impact the organizational outcome. We need to be performance consultants! 3) Learning in and of itself isn’t important; it’s what we’re doing with it that matters. You could ensure everyone could juggle chainsaws, but unless it’s Cirque du Soleil, I wouldn’t see the relevance.

So I fully agree with Kirkpatrick on working backwards from the org problem and figuring out what we can do to improve workplace behavior. Level 2 is about learning, which is where your concerns are, in my mind, addressed. But then you need to go back and see if what they’re able to do now is what is going to help the org! And I’d counter that the thing I worry about is the faith that if we do learning, it is good. No, we need to see if that learning is impacting the org. 4) Here’s where I agree, that Level 1 (and his numbering) led people down the garden path: people seem to think it’s ok to stop at level 1! Which is maniacal, because what learners think has essentially zero correlation with whether it’s working (as you aptly say). So it has led to some really bad behavior, serious enough to make me think it’s time for some recreational medication!

 

Will:

Actually, I’m flashing back to grad school. “Orthogonal” was one of the first words I remember learning in the august halls of my alma mater. But my digression is perpendicular to this discussion, so forget about it! Here’s the thing. A model that is supposed to align learning to impact ought to have some truth about learning baked into its DNA. It’s less than half-baked, in my not-so-humble opinion.

As they might say in the movies, the Kirkpatrick Model is not one of God’s own prototypes! We’re responsible people, so we ought to have a model that doesn’t distract us from our most important leverage points. Working backward is fine, but we’ve got to go all the way through the causal path to get to the genesis of the learning effects. Level 1 is a distraction, not a root. Yes, Level 2 is where the K-Model puts learning, but learning back in 1959 is not the same animal that it is today. We actually have a pretty good handle on how learning works now. Any model focused on learning evaluation that omits remembering is a model with a gaping hole.

 

Clark:

Ok, now I’m confused.  Why should a model of impact need to have learning in its genes?  I don’t care whether you move the needle with performance support, formal learning, or magic jelly beans; what K talks about is evaluating impact.  What you measure at Level 2 is whether they can do the task in a simulated environment.  Then you see if they’re applying it at the workplace, and whether it’s having an impact.

No argument that we have to use an approach to evaluate whether we’re having the impact at level 2 that we should, but to me that’s a separate issue.  Kirkpatrick just doesn’t care what tool we’re using, nor should it.  Kirkpatrick doesn’t care whether you’re using behavioral, cognitive, constructivist, or voodoo magic to make the impact, as long as you’re trying something.

We should be defining our metric for level 2, arguably, to be some demonstrable performance that we think is appropriate, but I think the model can safely be ignorant of the measure we choose at level 2 and 3 and 4.  It’s about making sure we have the chain.  I’d be worried, again, that talking about learning at level 2 might let folks off the hook about level 3 and 4 (which we see all too often) and make it a matter of faith. So I’m gonna argue that including the learning into the K model is less optimal than keeping it independent. Why make it more complex than need be?  So, now, what say you?

 

Will:

Clark! How can you say the Kirkpatrick model is agnostic to the means of obtaining outcomes? Level 2 is “LEARNING!” It’s not performance support, it’s not management intervention, it’s not methamphetamine. Indeed, the model was focused on training.

The Kirkpatricks (Don and Jim) have argued—I’ve heard them live and in the flesh—that the four levels represent a causal pathway from 1 to 4. In addition, the notion of working backward implies that there is a causal connection between the levels. The four-level model implies that a good learner experience is necessary for learning, that learning is necessary for on-the-job behavior, and that successful on-the-job behavior is necessary for positive organizational results. Furthermore, almost everybody interprets it this way.

The four levels imply impact at each level, but look at all the factors that they are missing! For example, learners need to be motivated to apply what they’ve learned. Where is that in the model? Motivation can be an impact too! We as learning professionals can influence motivation. There are other impacts we can make as well. We can make an impact on what learners remember, whether learners are supported back on the job, etc.

Here’s what a 2012 seminal research review from a top-tier scientific journal concluded: “The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., ‘we are measuring Levels 1 and 2, so we need to measure Level 3’), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders…” (p. 91). That’s pretty damning!

 

Clark:

I don’t see the Kirkpatrick model as an evaluation of the learning experience, but instead of the learning impact.   I see it as determining the effect of a programmatic intervention on an organization.  Sure, there are lots of other factors: motivation, org culture, effective leadership, but if you try to account for everything in one model you’re going to accomplish nothing.  You need some diagnostic tools, and Kirkpatrick’s model is one.

If they can’t perform appropriately at the end of the learning experience (level 2), that’s not a Kirkpatrick issue, the model just lets you know where the problem is. Once they can, and it’s not showing up in the workplace (level 3), then you get into the org factors. It is about creating a chain of impact on the organization, not evaluating the learning design.  I agree that people misuse the model, so when people only do 1 or 2, they’re wasting time and money. Kirkpatrick himself said he should’ve numbered it the other way around.

Now if you want to argue that that, in itself, is enough reason to chuck it, fine, but let’s replace it with another impact model with a different name, but the same intent of focusing on the org impact, workplace behavior changes, and then intervention. I hear a lot of venom directed at the Kirkpatrick model, but I don’t see it as ‘antithetical to learning’.

And I worry the contrary; I see too many learning interventions done without any consideration of the impact on the organization.  Not just compliance, but ‘we need a course on X’ and they do it, without ever looking to see whether a course on X will remedy the biz problem. What I like about Kirkpatrick is that it does (properly used) put the focus on the org impact first.

 

Will:

Sounds like you’re holding on to Kirkpatrick because you like its emphasis on organizational performance. Let’s examine that for a moment. Certainly, we’d like to ensure that Intervention X produces Outcome Y. You and I agree. Hugs all around. Let’s move away from learning for a moment. Let’s go Mad Men and look at advertising. Today, advertising is very sophisticated, especially online advertising because companies can actually track click-rates, and sometimes can even track sales (for items sold online). So, in a best-case scenario, it works this way:

  • Level 1 – Web surfers say they like the advertisement.
  • Level 2 – Web surfers show comprehension by clicking on the link.
  • Level 3 – Web surfers spend time reading/watching on the splash page.
  • Level 4 – Web surfers buy the product offered on the splash page.

A business person’s dream! Except that only a very small portion of sales actually happen this way (although, I must admit, the rate is increasing). But let’s look at a more common example. When a car is advertised, it’s impossible to track advertising through all four levels. People who buy a car at a dealer can’t be definitively tracked to an advertisement.

So, would we damn our advertising team? Would we ask them to prove that their advertisement increased car sales? Certainly, they are likely to be asked to make the case…but it’s doubtful anybody takes those arguments seriously… and shame on folks who do!

In case I’m ignorant of how advertising works behind the scenes—which is a possibility; I’m a small “m” mad man—let me use some other organizational roles to make my case.

  • Is our legal team asked to prove that their performance in defending a lawsuit is beneficial to the company? No, everyone appreciates their worth.
  • Do our recruiters have to jump through hoops to prove that their efforts have organizational value? They certainly track their headcounts, but are they asked to prove that those hires actually do the company good? No!
  • Do our maintenance staff have to get out spreadsheets to show how their work saves on the cost of new machinery? No!
  • Do our office cleaning professionals have to utilize regression analyses to show how they’ve increased morale and productivity? No again!

We’re right to feel a certain disgust at having to defend our good work every time…when others don’t have to.

I use the Mad Men example to say that all this OVER-EMPHASIS on proving that our learning is producing organizational outcomes might be a little too much. A couple of drinks is fine, but drinking all day is likely to be disastrous.

Too many words is disastrous too…But I had to get that off my chest…

 

Clark:

I do see a real problem in communication here, because I see that the folks you cite *do* have to have an impact. They aren’t just being effective, but they have to meet some level of effectiveness. To use your examples: the legal team has to justify its activities in terms of the impact on the business. If they’re too tightened down about communications in the company, they might reduce liability, but they can also stifle innovation. And if they don’t provide suitable prevention against legal action, they’re turfed out. Similarly, recruiters have to show that they’re not interviewing too many, or too few people, and getting the right ones. They’re held up against retention rates and other measures. The maintenance staff does have to justify headcount against the maintenance costs, and those costs against the alternative of replacement of equipment (or outsourcing the servicing). And the office cleaning folks have to ensure they’re meeting environmental standards at an efficient rate. There are standards of effectiveness everywhere in the organization except L&D. Why should we be special?

Let’s go on: sales has to estimate numbers for each quarter, and put that up against costs. They have to hit their numbers, or explain why (and if their initial estimates are low, they can be chastised for not being aggressive enough). They also worry about the costs of sales, hit rates, and time to a signature. Marketing, too, has to justify expenditure. To use your example, they do care about how many people come to the site, how long they stay, how many pages they hit, etc. And they try to improve these. At the end of the day, the marketing investment has to impact the sales. Eventually, they do track site activity to dollars. They have to. If we don’t, we get boondoggles. If you don’t rein in marketing initiatives, you get these shenanigans where existing customers are boozed up and given illegal gifts that eventually cause a backlash against the company. Shareholders get a wee bit stroppy when they find that investments aren’t paying off, and that the company is losing unnecessary money.

It’s not a case of ‘if you build it, it is good’! You and I both know that much of what is done in the name of formal learning (and org L&D activity in general) isn’t valuable. People take orders and develop courses where a course isn’t needed. Or create learning events that don’t achieve the outcomes. Kirkpatrick is the measure that tracks learning investments back to impact on the business. And that’s something we have to start paying attention to. As someone once said, if you’re not measuring, why bother? Show me the money! And if you’re just measuring your efficiency, that your learning is having the desired behavioral change, how do you know that behavior change is necessary to the organization? And until we get out of the mode where we do the things we do on faith, and start understanding whether we have a meaningful impact on the organization, we’re going to continue to be the last to have an influence on the organization, and the first to be cut when things are tough. Yet we have the opportunity to be as critical to the success of the organization as IT! I can’t stand by seeing us continue to do learning without knowing that it’s of use. Yes, we do need to measure our learning for effectiveness as learning, as you argue, but we have to also know that what we’re helping people be able to do is what’s necessary. Kirkpatrick isn’t without flaws (numbering, level 1, etc.), but it’s a clear value chain that we need to pay attention to. I’m not saying in lieu of measuring our learning effectiveness, but in addition. I can’t see it any other way.

 

Will:

Okay, I think we’ve squeezed the juice out of this tobacco. I would have said “orange” but the Kirkpatrick Model has been so addictive for so long…and black is the new orange anyway…

I want to pick up on your great examples of individuals in an organization needing to have an impact. You noted, appropriately, that everyone must have an impact. The legal team has to prevent lawsuits, recruiters have to find acceptable applicants, maintenance has to justify their worth compared to outsourcing options, cleaning staff have to meet environmental standards, sales people have to sell, and so forth.

Here is the argument I’m making: Employees should be held to account within their circles of maximum influence, and NOT so much in their circles of minimum influence.

So for example, let’s look at the legal team.

Doesn’t it make sense that the legal team should be held to account for the number of lawsuits and amount paid in damages more than they should be held to account for the level of innovation and risk taking within the organization?

What about the cleaning professionals?

Shouldn’t we hold them more accountable for measures of perceived cleanliness and targeted environmental standards than for the productivity of the workforce?

What about us learning-and-performance professionals?

Shouldn’t we be held more accountable for whether our learners comprehend and remember what we’ve taught them than for whether they end up increasing revenue and lowering expenses?

I agree that we learning-and-performance professionals have NOT been properly held to account. As you say, “There are standards of effectiveness everywhere in the organization except L&D.” My argument is that we, as learning-and-performance professionals, should have better standards of effectiveness—but that we should have these largely within our maximum circles of influence.

Among other things, we should be held to account for the following impacts:

  • Whether our learning interventions create full comprehension of the learning concepts.
  • Whether they create decision-making competence.
  • Whether they create and sustain remembering.
  • Whether they promote a motivation and sense-of-efficacy to apply what was learned.
  • Whether they prompt actions directly, particularly when job aids and performance support are more effective.
  • Whether they enable successful on-the-job performance.
  • Et cetera.

Final word, Clark?

 

Clark:

First, I think you’re hoist by your own petard.  You’re comparing apples and your squeezed orange. Legal is measured by lawsuits, maintenance by cleanliness, and learning by learning. Ok that sounds good, except that legal is measured by lawsuits against the organization. And maintenance is measured by the cleanliness of the premises.  Where’s the learning equivalent?  It has to be: impact on decisions that affect organizational outcomes.  None of the classic learning evaluations evaluate whether the objectives are right, which is what Kirkpatrick does. They assume that, basically, and then evaluate whether they achieve the objective.

That said, Will, if you can throw around diagrams, I can too. Here’s my attempt to represent the dichotomy. Yes, you’re successfully addressing the impact of the learning on the learner. That is, can they do the task. But I’m going to argue that that’s not what Kirkpatrick is for. It’s to address the impact of the intervention on the organization. The big problem is, to me, whether the objectives we’ve developed the learning to achieve are objectives that are aligned with organizational need. There’s plenty of evidence it’s not.

 

So here I’m trying to show what I see K doing. You start with the needed business impact: more sales, lower compliance problems, what have you. Then you decide what has to happen in the workplace to move that needle. Say, shorter time to sales, so the behavior is decided to be timeliness in producing proposals. Let’s say the intervention is training on the proposal template software. You design a learning experience to address that objective, to develop ability to use the software. You use the type of evaluation you’re talking about to see if it’s actually developing their ability. Then you use K to see if it’s actually being used in the workplace (are people using the software to create proposals), and then to see if it’s affecting your metrics of quicker turnaround. (And, yes, you can see if they like the learning experience, and adjust that.)

And if any one element isn’t working: learning, uptake, impact, you debug that. But K is evaluating the impact process, not the learning design. It should flag if the learning design isn’t working, but it’s not evaluating your pedagogical decisions, etc. It’s not focusing on what the Serious eLearning Manifesto cares about, for instance. That’s what your learning evaluations do, they check to see if level 2 is working. But not whether level 2 is affecting level 4, which is what ultimately needs to happen. Yes, we need level 2 to work, but then the rest has to fall in line as well.

My point about orthogonality is that K is evaluating the horizontal, and you’re saying it should address the vertical. That, to me, is like saying we’re going to see if the car runs by ensuring the engine runs. Even if it does, but if the engine isn’t connected through the drivetrain to the wheels, it’s irrelevant. So we do want a working, well-tuned, engine, but we also want a clutch or torque converter, transmission, universal joint, driveshaft, differential, etc. Kirkpatrick looks at the drive train, learning evaluations look at the engine.

We don’t have to come to a shared understanding, but I hope this at least makes my point clear.

 

Will:

Okay readers! Clark and I have fought to a stalemate… He says that the Kirkpatrick model has value because it reminds us to work backward from organizational results. I say the model is fatally flawed because it doesn’t incorporate wisdom about learning. Now it’s your turn to comment. Can you add insights? Please do!

 

Mary V. Spiers, professor of psychology (and neuropsychologist) at one of my alma maters, Drexel University, has created a brilliant website to clarify the psychological science depicted in the movies.

You must check this out:

I love the review of Finding Nemo. Do you remember Dory, the amnesic sidekick? http://www.neuropsyfi.com/reviews/finding-nemo

For another movie I really enjoyed, Memento, the reviewers point out both the ways in which the movie is accurate in reporting on anterograde amnesia, and inaccurate. http://www.neuropsyfi.com/reviews/memento

The website also has a page devoted to brain science, with information you can actually trust — instead of some of the hyped stuff you might have seen from vendors in the learning field.

 

I had the great pleasure of being interviewed recently by Brent Schlenker, long-time elearning advocate. We not only had a ton of fun talking, but Brent steered us into some interesting discussions.

———-

He's created a three-part video series of our discussion:

———-

Brent is a great interviewer–and he gets some top-notch folks to join him. Check out his blog.

 

Sports is sometimes a great crucible for life lessons. Players learn teamwork, the benefits of hard work and practice, and how to act in times of success and failure.

Learning professionals can learn a lot from sports as well. The 2015 Super Bowl is a case in point.

Interception

With 27 seconds to go, the Seattle Seahawks were on the New England Patriots' one-yard line. Only one more yard to go for victory. They called a pass play, rather controversial in the court of public opinion, but not a bad call according to statisticians.

The Seahawks quarterback, Russell Wilson, thought he had a touchdown. “I thought it was going to be a touchdown when I threw it.” Unfortunately for Wilson and the Seahawks, Malcolm Butler, a Patriots rookie cornerback, was prepared.

This is where the science of learning comes in. Butler was prepared for a number of reasons–many having to do with the science of learning. For an explanation of the 12 most important learning factors, you can review my work on the Decisive Dozen.

  1. Butler, despite being a rookie, had played a lot of football before. He had a lot of prior knowledge, which enabled him to quickly learn what to do.
  2. He was given tools and resources to help him learn. He got a playbook; he was able to view videotape of Seahawks' plays; he was surrounded by experienced players and coaches; and he was motivated and encouraged.
  3. He was given feedback on his performance–but not just general feedback, very specific feedback on what to do.
  4. He got many practice opportunities to refine his knowledge and performance.
  5. Perhaps most importantly, Butler was prompted to make a link between a particular situation and a particular action to take.

Here's the formation prior to the interception. Notice on the bottom of the image that the receivers for Seattle are "stacked" two deep–that is, one is lined up on the line of scrimmage, one is behind the other.

Interception-Set

  

Here is what Butler saw just as the play was getting started.

What Butler Saw

Here's what Butler said:

“I saw Wilson looking over there. He kept his head still and just looked over there, so that gave me a clue. And the stacked receivers; I just knew they were going to throw. I don’t know how I knew. I just knew. I just beat him to the point and caught the ball.”

In a separate interview he restated what he saw:

“I remembered the formation they were in, two receivers stacked, I just knew they were going to [a] pick route.”

From a science of learning perspective, what Butler did was link a particular SITUATION (two receivers stacked) with a particular ACTION he was supposed to take (move first to where the ball would be thrown). It's this cognitive linking that was so crucial to the Super Bowl victory–and to human performance more generally.

While we human beings like to think of ourselves as proactive–and we are sometimes–most of our moment-to-moment actions are triggered by environmental cues. SITUATION-ACTION! It's the way the world works. When we are served food on smaller plates, we eat less–because the small plates make the food look bigger, triggering us to feel full more quickly. When we drive on a narrow street, we drive more slowly. When we see someone dressed in a suit, we think more highly of that person than if they were dressed shabbily. We can't help ourselves. What's more, these reactions are largely automatic, unintended, subconsciously triggered. Indeed, notice Butler's first quote. He wasn't sure what made him react as he did.

In the Decisive Dozen, I refer to this phenomenon as "Context Alignment." The notion is that the learning situation ought to mimic or simulate the anticipated performance situation. Others have similar notions about the importance of context, including Bransford's transfer-appropriate processing, Tulving's encoding-specificity, and Smith's context-dependent memory.

Indeed, recently a meta-analysis (a study of many studies) by Gollwitzer and Sheeran  found that "implementation intentions"–what I prefer to call "triggers"–had profound effects, often improving performance compliance by twice as much as having people set a goal to accomplish something. That is, creating a cognitive link between SITUATION and ACTION was often twice as potent as prompting people to have a goal to take a particular action.

Butler was successful because he had a trigger linking the SITUATION of stacked receivers with the ACTION of bolting to the point where the ball would be thrown.

Situation-Action

Listen to football players talk and you'll know that the best teams understand this phenomenon deeply. They talk about "picking up the keys," which is really a way of saying noticing what situation they're in on the field. Once they understand the situation, then they know what action to take. Moreover, if they can automate the situation-action link–through repeated practice–they can take actions more quickly, which can make all the difference!!
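If it helps to see the idea in another notation, here's a minimal sketch of a situation-action trigger as a simple lookup (Python, purely my own illustration; the cue and action names are hypothetical, not from any real playbook):

    # A situation-action trigger table: noticed cues map to prepared actions.
    # Cues and actions are hypothetical, purely for illustration.

    TRIGGERS = {
        frozenset({"stacked receivers", "goal line"}): "break to the throw point",
        frozenset({"quarterback stares down receiver"}): "jump the route",
    }

    def react(observed_cues):
        """Return the prepared action for the first trigger whose cues all match."""
        for situation, action in TRIGGERS.items():
            if situation <= observed_cues:
                return action
        return None  # no trigger fired; fall back to slow deliberation

    print(react({"stacked receivers", "goal line", "27 seconds left"}))
    # -> break to the throw point

The design point: a well-practiced trigger behaves like this fast lookup, while an unpracticed situation falls through to slow, effortful deliberation, usually too slow for a goal-line play.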

Here's how Butler talks about his preparation. When asked in an interview, "You said you knew that play was coming. How did you know that play was coming?" Butler said:

"Preparation in the [fan?] room, looking over my play book, looking over their plays, studying my opponent. I got beat on it at practice … last week, and Bill [Coach of New England Patriots] told me I got to be on it. And what I did wrong at practice I gave ground instead of just planting and going. And during game time I just put my foot in the ground, broke on the ball and beat him to the point."

Those of us working in the learning field should use this truth regarding the human cognitive architecture to design our learning programs.

  1. Don't just teach content.
  2. Give them tools to help them link situations and actions.
  3. Give your learners realistic practice, that is, practice set in real-world situations.
  4. Give them feedback, then give them additional practice.
  5. Continually emphasize the noticing of situations, and the actions to be taken.
  6. Provide varied practice situations, without hints, to simulate real-world conditions.
  7. For critical situations, give additional practice to automate your learners' responses (see the sketch after this list).
  8. Collect Lombardi Trophy or similar…
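To make recommendations 3 through 7 concrete, here's a minimal sketch of such a practice drill (Python, my own invention; the scenarios are placeholders, not real coaching content). It presents varied situations without hints, gives corrective feedback, and recycles missed items for extra repetitions:

    import random

    # Hypothetical situation-action pairs to be drilled.
    SCENARIOS = {
        "Two receivers stacked at the goal line": "break to the throw point",
        "Quarterback rolls out to his right": "keep outside leverage",
        "Running back stays in to block": "expect a deep route",
    }

    def drill(get_response, rounds=2, max_reps=4):
        """Present varied situations without hints; recycle missed items."""
        queue = list(SCENARIOS) * rounds
        random.shuffle(queue)
        reps = {situation: 0 for situation in SCENARIOS}
        while queue:
            situation = queue.pop(0)
            reps[situation] += 1
            correct_action = SCENARIOS[situation]
            if get_response(situation) == correct_action:
                print("Correct:", situation, "->", correct_action)
            elif reps[situation] < max_reps:
                print("Feedback:", situation, "->", correct_action)
                queue.append(situation)  # extra reps on the missed trigger

Here, get_response stands in for however the learner answers (a quiz item, a simulation, a live rep). In real use you'd also space the repetitions over days rather than minutes, per the spacing effect.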

As a resident of New England, I have to add one more nugget of wisdom…

 

  

  

   

Go Patriots!


  

Sources of Football Information, Images, Videos:

  1. Reuters
  2. NBC
  3. Boston Globe
  4. New York Times
  5. http://nflbreakdowns.com/malcolm-butlers-interception-wilson-superbowl/
  6. http://www.nfl.com/videos/new-england-patriots/0ap3000000467843/Butler-and-Edelman-go-to-Disneyland

 

 

There are so many confusions and mythologies about learning objectives that I thought I’d create a video to help debunk some of the worst misinformation.

Here is the video. Below the video, I have created a quiz so you can challenge and reinforce your knowledge. Watch the video first, then a day or more later–if you can manage it–take the quiz. Or, take the quiz first, then immediately watch the video–only later, after a few days, look at the quiz feedback.

——


 

 

——

Take the Quiz —

Before or After Watching the Video

——