
You won’t believe what a vendor said about a speaker at a conference—when that speaker spoke the truth.

 

Conferences are big business in the workplace learning field.

Conferences make organizers a ton of money. That's fitting, because pulling off a good conference is not as easy as it looks. In addition to finding a venue and attracting people to come to your event, you also have to find speakers. Some speakers are known quantities, but others are unknown.

In the learning field, where we are inundated with fads, myths, and misconceptions, finding speakers who will convey the most helpful messages, and avoid harmful ones, is particularly difficult. Ideally, as attendees, we'd like to hear truth from our speakers rather than fluff and falsehoods.

On the other hand, vendors pay big money to exhibit their products and services at a conference. Their goal is connecting with attendees who are buyers or who can influence buyers. Even conferences that don’t have exhibit halls usually get money from vendors in one way or another.

So, conference owners have two groups of customers to keep happy: attendees and vendors. In an ideal world, both groups would want the most helpful messages to be conveyed. Truth would be a common goal. So, for example, let's say new research shows that freep learning is better than traditional elearning. A speaker at a conference shares the news that freep learning is great. Vendors in the audience hear the news. What will they do?

  • Vendor A hires a handsome and brilliant research practitioner to verify the power of freep learning with the idea of moving forward quickly and providing this powerful new tool to their customers.
  • Vendor B jumps right in and starts building freep learning to ensure their customers get the benefits of this powerful new learning method.
  • Vendor C pulls the conference organizers aside and tells them, “If you ever use that speaker again, we will not be back; you will not get our money any more.”

Impossible, you say!

Would never happen, you think!

You're right. Not enough vendors are hiring fadingly-good-looking, brilliant research-to-practice experts!

Here’s a true story from a conference that took place within the last year or so.

Clark Quinn spoke about learning myths and misconceptions during his session, describing the findings from his wonderful book. Later, when he read his conference evaluations, he found the following comment among the more admiring testimonials:

“Not cool to debunk some tools that exhibitors pay a lot of money to sell at [this conference] only to hear from a presenter at the conference that in his opinion should be debunked. Why would I want to be an exhibitor at a conference that debunks my products? I will not exhibit again if this speaker speaks at [conference name]” (emphasis added).

This story was recounted by Clark and captured by Jane Bozarth in an article on the myth of learning styles she wrote as the head of research for the eLearning Guild. Note that the conference in question was NOT an eLearning Guild conference.

What can we do?

Corruption is everywhere. Buyer beware! As adults, we know this! We know politicians lie (some more than others!!). We know that we have to take steps not to be ripped off. We get three estimates when we need a new roof. We ask for personal references. We look at the video replay. We read TripAdvisor reviews. We look for iron-clad guarantees that we can return products we purchased.

We don't get flustered or worried; we take precautions. In the learning field, you can do the following:

  • Look for conference organizers who regularly include research-based sessions (scientific research NOT opinion research).
  • Look for the conferences that host the great research-to-practice gurus. People like Patti Shank, Julie Dirksen, Clark Quinn, Mirjam Neelen, Ruth Clark, Karl Kapp, Jane Bozarth, Dick Clark, Paul Kirschner, and others.
  • Look for conferences that do NOT have sessions—or have fewer sessions—that propagate common myths and misinformation (learning styles, the learning pyramid, MBTI, DISC, millennials learn differently, people only use 10% of their brains, only 10% of learning transfers, neuroscience as a panacea, people have the attention span of a goldfish, etc.).
  • If you want to look into Will’s Forbidden Future, you might look for the following:
    • conferences and/or trade organizations that have hired a content trustee, someone with a research background to promote valid information and cull bad information.
    • conferences that point speakers to a list of learning myths to avoid.
    • conferences that evaluate sessions based on the quality of the content.

Being exposed to false information isn't just bad for us as professionals. It's also bad for our organizations. Think of all the wasted effort (the toil, the time, the money) that was flushed down the toilet trying to redesign all our learning to meet the so-called learning-styles approach. Egads! If you need to persuade your management about the danger of learning myths, you might try this.

In a previous blog post, I talked about what we can do as attendees of conferences to avoid learning bad information. That’s good reading as well. Check it out here.

Who Will Rule Our Conferences? Truth or Bad-Faith Vendors?

That’s a damn good question!

 

 

Clark Quinn and I have started debating top-tier issues in the workplace learning field. In the first one, we debated who has the ultimate responsibility in our field. In the second one, we debated whether the tools in our field are up to the task.

In this third installment of the series, we’ve engaged in an epic battle about the worth of the 4-Level Kirkpatrick Model. Clark and I believe that these debates help elucidate critical issues in the field. I also think they help me learn. This debate still intrigues me, and I know I’ll come back to it in the future to gain wisdom.

And note, Clark and I certainly haven’t resolved all the issues raised. Indeed, we’d like to hear your wisdom and insights in the comments section.

————————–

Will:

I want to pick on the second-most renowned model in instructional design, the 4-Level Kirkpatrick Model. It produces some of the most damaging messaging in our industry. Here's a short list of its treacherous triggers: (1) It completely ignores the importance of remembering to the instructional design process, (2) It pushes us learning folks away from a focus on learning, where we have the most leverage, (3) It suggests that Level 4 (organizational results) and Level 3 (behavior change) are more important than measuring learning, but this is an abdication of our responsibility for the learning results themselves, (4) It implies that Level 1 (learner opinions) is on the causal chain from training to performance, but two major meta-analyses show this to be false: smile sheets, as now utilized, are not correlated with learning results! If you force me, I'll share a quote from a top-tier research review that damns the Kirkpatrick model with a roar. "Buy the ticket, take the ride."

 

Clark:

I laud that you're not mincing words! And I'll agree and disagree. To address your concerns: 1) Kirkpatrick is essentially orthogonal to the remembering process. It's not about learning, it's about aligning learning to impact. 2) I also think that Kirkpatrick doesn't push us away from learning, though it isn't exclusive to learning (despite everyday usage). Learning isn't the only tool, and we should be willing to use job aids (read: performance support) or any other mechanism that can impact the organizational outcome. We need to be performance consultants! 3) Learning in and of itself isn't important; it's what we're doing with it that matters. You could ensure everyone could juggle chainsaws, but unless it's Cirque du Soleil, I wouldn't see the relevance.

So I fully agree with Kirkpatrick on working backwards from the org problem and figuring out what we can do to improve workplace behavior. Level 2 is about learning, which is where your concerns are, in my mind, addressed. But then you need to go back and see if what they're able to do now is what is going to help the org! And I'd counter that the thing I worry about is the faith that if we do learning, it is good. No, we need to see if that learning is impacting the org. 4) Here's where I agree: Level 1 (and his numbering) led people down the garden path; people seem to think it's ok to stop at level 1! Which is maniacal, because what learners think has essentially zero correlation with whether it's working (as you aptly say). So it has led to some really bad behavior, serious enough to make me think it's time for some recreational medication!

 

Will:

Actually, I’m flashing back to grad school. “Orthogonal” was one of the first words I remember learning in the august halls of my alma mater. But my digression is perpendicular to this discussion, so forget about it! Here’s the thing. A model that is supposed to align learning to impact ought to have some truth about learning baked into its DNA. It’s less than half-baked, in my not-so-humble opinion.

As they might say in the movies, the Kirkpatrick Model is not one of God’s own prototypes! We’re responsible people, so we ought to have a model that doesn’t distract us from our most important leverage points. Working backward is fine, but we’ve got to go all the way through the causal path to get to the genesis of the learning effects. Level 1 is a distraction, not a root. Yes, Level 2 is where the K-Model puts learning, but learning back in 1959 is not the same animal that it is today. We actually have a pretty good handle on how learning works now. Any model focused on learning evaluation that omits remembering is a model with a gaping hole.

 

Clark:

Ok, now I’m confused.  Why should a model of impact need to have learning in its genes?  I don’t care whether you move the needle with performance support, formal learning, or magic jelly beans; what K talks about is evaluating impact.  What you measure at Level 2 is whether they can do the task in a simulated environment.  Then you see if they’re applying it at the workplace, and whether it’s having an impact.

No argument that we have to use an approach to evaluate whether we’re having the impact at level 2 that we should, but to me that’s a separate issue.  Kirkpatrick just doesn’t care what tool we’re using, nor should it.  Kirkpatrick doesn’t care whether you’re using behavioral, cognitive, constructivist, or voodoo magic to make the impact, as long as you’re trying something.

We should be defining our metric for level 2, arguably, to be some demonstrable performance that we think is appropriate, but I think the model can safely be ignorant of the measure we choose at levels 2, 3, and 4. It's about making sure we have the chain. I'd be worried, again, that talking about learning at level 2 might let folks off the hook about levels 3 and 4 (which we see all too often) and make it a matter of faith. So I'm gonna argue that including the learning in the K model is less optimal than keeping it independent. Why make it more complex than need be? So, now, what say you?

 

Will:

Clark! How can you say the Kirkpatrick model is agnostic to the means of obtaining outcomes? Level 2 is “LEARNING!” It’s not performance support, it’s not management intervention, it’s not methamphetamine. Indeed, the model was focused on training.

The Kirkpatricks (Don and Jim) have argued—I’ve heard them live and in the flesh—that the four levels represent a causal pathway from 1 to 4. In addition, the notion of working backward implies that there is a causal connection between the levels. The four-level model implies that a good learner experience is necessary for learning, that learning is necessary for on-the-job behavior, and that successful on-the-job behavior is necessary for positive organizational results. Furthermore, almost everybody interprets it this way.

The four levels imply impact at each level, but look at all the factors that they are missing! For example, learners need to be motivated to apply what they’ve learned. Where is that in the model? Motivation can be an impact too! We as learning professionals can influence motivation. There are other impacts we can make as well. We can make an impact on what learners remember, whether learners are supported back on the job, etc.

Here's what a 2012 seminal research review from a top-tier scientific journal concluded: "The Kirkpatrick framework has a number of theoretical and practical shortcomings. [It] is antithetical to nearly 40 years of research on human learning, leads to a checklist approach to evaluation (e.g., 'we are measuring Levels 1 and 2, so we need to measure Level 3'), and, by ignoring the actual purpose for evaluation, risks providing no information of value to stakeholders…" (p. 91). That's pretty damning!

 

Clark:

I don’t see the Kirkpatrick model as an evaluation of the learning experience, but instead of the learning impact.   I see it as determining the effect of a programmatic intervention on an organization.  Sure, there are lots of other factors: motivation, org culture, effective leadership, but if you try to account for everything in one model you’re going to accomplish nothing.  You need some diagnostic tools, and Kirkpatrick’s model is one.

If they can't perform appropriately at the end of the learning experience (level 2), that's not a Kirkpatrick issue; the model just lets you know where the problem is. Once they can, and it's not showing up in the workplace (level 3), then you get into the org factors. It is about creating a chain of impact on the organization, not evaluating the learning design. I agree that people misuse the model, so when people only do 1 or 2, they're wasting time and money. Kirkpatrick himself said he should've numbered it the other way around.

Now if you want to argue that that, in itself, is enough reason to chuck it, fine, but let’s replace it with another impact model with a different name, but the same intent of focusing on the org impact, workplace behavior changes, and then intervention. I hear a lot of venom directed at the Kirkpatrick model, but I don’t see it ‘antithetical to learning’.

And I worry about the contrary: I see too many learning interventions done without any consideration of the impact on the organization. Not just compliance, but "we need a course on X" and they do it, without ever looking to see whether a course on X will remedy the biz problem. What I like about Kirkpatrick is that it does (properly used) put the focus on the org impact first.

 

Will:

Sounds like you’re holding on to Kirkpatrick because you like its emphasis on organizational performance. Let’s examine that for a moment. Certainly, we’d like to ensure that Intervention X produces Outcome Y. You and I agree. Hugs all around. Let’s move away from learning for a moment. Let’s go Mad Men and look at advertising. Today, advertising is very sophisticated, especially online advertising because companies can actually track click-rates, and sometimes can even track sales (for items sold online). So, in a best-case scenario, it works this way:

  • Level 1 – Web surfers say they like the advertisement.
  • Level 2 – Web surfers show comprehension by clicking on a link.
  • Level 3 – Web surfers spend time reading/watching on the splash page.
  • Level 4 – Web surfers buy the product offered on the splash page.

A business person’s dream! Except that only a very small portion of sales actually happen this way (although, I must admit, the rate is increasing). But let’s look at a more common example. When a car is advertised, it’s impossible to track advertising through all four levels. People who buy a car at a dealer can’t be definitively tracked to an advertisement.
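Just to make the funnel arithmetic concrete, here's a toy sketch in Python. Every stage name and number in it is invented for illustration; the point is only that online advertising lets you count each level and compute conversion from one level to the next.

```python
# Toy funnel tracker: hypothetical counts for each of the four levels.
# All stage names and numbers are made up, purely for illustration.
funnel = [
    ("Level 1: said they liked the ad", 9_000),
    ("Level 2: clicked the link",       1_200),
    ("Level 3: stayed on splash page",    480),
    ("Level 4: bought the product",        60),
]

prev = 120_000  # hypothetical number of people who saw the ad
for stage, count in funnel:
    rate = count / prev  # conversion from the previous level
    print(f"{stage:35s} {count:7,d}  ({rate:.1%} of previous level)")
    prev = count
```

When any one of those counts can't be collected (as with the car buyer who walks into a dealership), the chain of conversion rates breaks, and so does the best-case scenario above.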

So, would we damn our advertising team? Would we ask them to prove that their advertisement increased car sales? Certainly, they are likely to be asked to make the case…but it’s doubtful anybody takes those arguments seriously… and shame on folks who do!

In case I'm ignorant of how advertising works behind the scenes (which is a possibility; I'm a small "m" mad man), let me use some other organizational roles to make my case.

  • Is our legal team asked to prove that their performance in defending a lawsuit is beneficial to the company? No, everyone appreciates their worth.
  • Do our recruiters have to jump through hoops to prove that their efforts have organizational value? They certainly track their headcounts, but are they asked to prove that those hires actually do the company good? No!
  • Do our maintenance staff have to get out spreadsheets to show how their work saves on the cost of new machinery? No!
  • Do our office cleaning professionals have to utilize regression analyses to show how they’ve increased morale and productivity? No again!

There's a certain indignity in feeling we have to defend our good work every time…when others don't have to.

I use the Mad Men example to say that all this OVER-EMPHASIS on proving that our learning is producing organizational outcomes might be a little too much. A couple of drinks is fine, but drinking all day is likely to be disastrous.

Too many words is disastrous too…But I had to get that off my chest…

 

Clark:

I do see a real problem in communication here, because I see that the folks you cite *do* have to have an impact. They aren't just being effective; they have to meet some level of effectiveness. To use your examples: the legal team has to justify its activities in terms of the impact on the business. If they're too tightened down about communications in the company, they might limit liability, but they can also stifle innovation. And if they don't provide suitable prevention against legal action, they're turfed out. Similarly, recruiters have to show that they're not interviewing too many, or too few, people, and that they're getting the right ones. They're held up against retention rates and other measures. The maintenance staff does have to justify headcount against the maintenance costs, and those costs against the alternative of replacing equipment (or outsourcing the servicing). And the office cleaning folks have to ensure they're meeting environmental standards at an efficient rate. There are standards of effectiveness everywhere in the organization except L&D. Why should we be special?

Let’s go on: sales has to estimate numbers for each quarter, and put that up against costs. They have to hit their numbers, or explain why (and if their initial estimates are low, they can be chastised for not being aggressive enough). They also worry about the costs of sales, hit rates, and time to a signature. Marketing, too, has to justify expenditure. To use your example, they do care about how many people come to the site, how long they stay, how many pages they hit, etc. And they try to improve these. At the end of the day, the marketing investment has to impact the sales. Eventually, they do track site activity to dollars. They have to. If we don’t, we get boondoggles. If you don’t rein in marketing initiatives, you get these shenanigans where existing customers are boozed up and given illegal gifts that eventually cause a backlash against the company. Shareholders get a wee bit stroppy when they find that investments aren’t paying off, and that the company is losing unnecessary money.

It's not a case of "if you build it, it is good"! You and I both know that much of what is done in the name of formal learning (and org L&D activity in general) isn't valuable. People take orders and develop courses where a course isn't needed. Or create learning events that don't achieve the outcomes. Kirkpatrick is the measure that tracks learning investments back to impact on the business, and that's something we have to start paying attention to. As someone once said, if you're not measuring, why bother? Show me the money! And if you're just measuring your efficiency, that your learning is having the desired behavioral change, how do you know that behavior change is necessary to the organization? Until we get out of the mode where we do the things we do on faith, and start understanding whether we have a meaningful impact on the organization, we're going to continue to be the last to have an influence on the organization, and the first to be cut when things are tough. Yet we have the opportunity to be as critical to the success of the organization as IT! I can't stand by seeing us continue to do learning without knowing that it's of use. Yes, we do need to measure our learning for effectiveness as learning, as you argue, but we also have to know that what we're helping people be able to do is what's necessary. Kirkpatrick isn't without flaws (the numbering, level 1, etc.), but it's a clear value chain that we need to pay attention to. I'm not saying in lieu of measuring our learning effectiveness, but in addition. I can't see it any other way.

 

Will:

Okay, I think we’ve squeezed the juice out of this tobacco. I would have said “orange” but the Kirkpatrick Model has been so addictive for so long…and black is the new orange anyway…

I want to pick up on your great examples of individuals in an organization needing to have an impact. You noted, appropriately, that everyone must have an impact. The legal team has to prevent lawsuits, recruiters have to find acceptable applicants, maintenance has to justify their worth compared to outsourcing options, cleaning staff have to meet environmental standards, sales people have to sell, and so forth.

Here is the argument I’m making: Employees should be held to account within their circles of maximum influence, and NOT so much in their circles of minimum influence.

So for example, let’s look at the legal team.

Doesn’t it make sense that the legal team should be held to account for the number of lawsuits and amount paid in damages more than they should be held to account for the level of innovation and risk taking within the organization?

What about the cleaning professionals?

Shouldn’t we hold them more accountable for measures of perceived cleanliness and targeted environmental standards than for the productivity of the workforce?

What about us learning-and-performance professionals?

Shouldn't we be held accountable more for whether our learners comprehend and remember what we've taught them than for whether they end up increasing revenue and lowering expenses?

I agree that we learning-and-performance professionals have NOT been properly held to account. As you say, “There are standards of effectiveness everywhere in the organization except L&D.” My argument is that we, as learning-and-performance professionals, should have better standards of effectiveness—but that we should have these largely within our maximum circles of influence.

Among other things, we should be held to account for the following impacts:

  • Whether our learning interventions create full comprehension of the learning concepts.
  • Whether they create decision-making competence.
  • Whether they create and sustain remembering.
  • Whether they promote a motivation and sense-of-efficacy to apply what was learned.
  • Whether they prompt actions directly, particularly when job aids and performance support are more effective.
  • Whether they enable successful on-the-job performance.
  • Et cetera.

Final word, Clark?

 

Clark:

First, I think you’re hoist by your own petard.  You’re comparing apples and your squeezed orange. Legal is measured by lawsuits, maintenance by cleanliness, and learning by learning. Ok that sounds good, except that legal is measured by lawsuits against the organization. And maintenance is measured by the cleanliness of the premises.  Where’s the learning equivalent?  It has to be: impact on decisions that affect organizational outcomes.  None of the classic learning evaluations evaluate whether the objectives are right, which is what Kirkpatrick does. They assume that, basically, and then evaluate whether they achieve the objective.

That said, Will, if you can throw around diagrams, I can too. Here’s my attempt to represent the dichotomy. Yes, you’re successfully addressing the impact of the learning on the learner. That is, can they do the task. But I’m going to argue that that’s not what Kirkpatrick is for. It’s to address the impact of the intervention on the organization. The big problem is, to me, whether the objectives we’ve developed the learning to achieve are objectives that are aligned with organizational need. There’s plenty of evidence it’s not.

 

So here I'm trying to show what I see K doing. You start with the needed business impact: more sales, lower compliance problems, what have you. Then you decide what has to happen in the workplace to move that needle. Say, shorter time to sales, so the behavior is decided to be timeliness in producing proposals. Let's say the intervention is training on the proposal template software. You design a learning experience to address that objective, to develop ability to use the software. You use the type of evaluation you're talking about to see if it's actually developing their ability. Then you use K to see if it's actually being used in the workplace (are people using the software to create proposals), and then to see if it's affecting your metrics of quicker turnaround. (And, yes, you can see if they like the learning experience, and adjust that.)

And if any one element isn't working (learning, uptake, impact), you debug that. But K is evaluating the impact process, not the learning design. It should flag if the learning design isn't working, but it's not evaluating your pedagogical decisions, etc. It's not focusing on what the Serious eLearning Manifesto cares about, for instance. That's what your learning evaluations do: they check to see if level 2 is working. But not whether level 2 is affecting level 4, which is what ultimately needs to happen. Yes, we need level 2 to work, but then the rest has to fall in line as well.
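A minimal sketch of that debugging logic, with every link name, metric, and threshold invented for illustration, might look like this:

```python
# Hypothetical chain for the proposal-software example above:
# (link in the chain, measured value, target value). All numbers invented.
chain = [
    ("Level 2: can use the software (sim pass rate)",      0.92, 0.80),
    ("Level 3: using it on the job (proposals via tool)",  0.55, 0.75),
    ("Level 4: quicker turnaround (fraction of goal met)", 0.40, 1.00),
]

for link, measured, target in chain:
    if measured < target:
        print(f"Debug here first: {link} ({measured:.2f} vs {target:.2f})")
        break  # later links can't be judged until this one works
    print(f"OK: {link} ({measured:.2f} vs {target:.2f})")
```

The design point is simply that each link is checked in order; a failure at level 3 says nothing about the learning design at level 2, and vice versa.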

My point about orthogonality is that K is evaluating the horizontal, and you're saying it should address the vertical. That, to me, is like saying we're going to see if the car runs by ensuring the engine runs. Even if it does, if the engine isn't connected through the drivetrain to the wheels, it's irrelevant. So we do want a working, well-tuned engine, but we also want a clutch or torque converter, transmission, universal joint, driveshaft, differential, etc. Kirkpatrick looks at the drivetrain; learning evaluations look at the engine.

We don’t have to come to a shared understanding, but I hope this at least makes my point clear.

 

Will:

Okay readers! Clark and I have fought to a stalemate… He says that the Kirkpatrick model has value because it reminds us to work backward from organizational results. I say the model is fatally flawed because it doesn’t incorporate wisdom about learning. Now it’s your turn to comment. Can you add insights? Please do!

 

Clark Quinn and I have been grappling with FUN-da-mental issues in the learning space over the years, and we finally decided to publish some of our dialogue.

In the latest conversation, Clark and I discuss how the tools in the learning field often don't send the right messages about how to design learning–that they unintentionally push us toward poor instructional designs.

You can read the discussion on Clark's world-renowned blog by CLICKING HERE.


Or, read an earlier discussion on how professionalized we are by clicking here.

 

Will:

Yo Clark, I really liked your new book, Revolutionize Learning and Development, but there’s one thing I’m not sure I’m fully behind—your recommendation that we as learning professionals kowtow to the organization—that we build our learning interventions aimed solely to meet organizational needs. I grew up near Philadelphia, so I’m partial to Rocky Balboa, using the interjection “Yo,” and rooting for the little guy. What are you thinking? Isn’t revolution usually aimed against the powerful?

Clark:

Will, what is powerful are the forces against needed change.  L&D appears to be as tied to an older age as Rocky is!  I’m not saying a complete abdication to the organization, but we certainly can’t be oblivious to it either.  The organization doesn’t know learning, to be sure, and should be able to trust us on performance support and informal learning too.  But do you really think that most of what is happening under the guise of L&D is a good job on the formal learning side?

Will:

Clark, of course not. Much of L&D is like Rocky's brother-in-law Paulie, having an inner heart of gold, but not living up to full effectiveness. I wrote about the Five Failures of Workplace Learning Professionals three years ago, so I'm on the record that we could do better. And yes, there are lots of forces allied against us, so I'm glad you're calling for revolution. But back to the question, Apollo! To whom do we have more responsibility, the organizations we work for or our profession? To whom should we give our Creed?

Clark:

Will, your proposed bout is a non-starter! It's not either/or; we need to honor both our organization and our profession (and, I'll argue, we're currently doing neither). When we're building our interventions, they should serve the organization's needs, not just its wants. We can't be order takers; we need to go to the mat (merrily mixing my metaphors) to find out the real problem, and use all solutions (not just courses). Mickey'd tell you: you got to have heart, but also do the hard yards. Isn't the real tension between what we know we should be doing and what we're actually doing?

Will:

I am so much in agreement! Why are we always order takers? You want fries with that? Here’s where I think some in our profession go overboard on the organization-first approach. First, like you say, many don’t have a training-request process that pushes their organizations to look beyond training as the singular leverage point to performance improvement. Second, some measurement “gurus” claim that what’s most important is to measure organizational results—while reneging on our professional responsibility to measure what we have the most control over—like whether people can make good work-related decisions after we train them or even remember what we taught them. Honestly, if the workplace learning field was a human being, it would be a person you wouldn’t want to have as a friend—someone who didn’t have a core set of values, someone who would be prone to following any fad or phony demigod, someone who would shift allegiances with the wind.

Clark:

Now you’re talking; I love the idea of a training-request process! I recall an organization where the training head had a cost/benefit form for every idea that was brought to him.  It’s not how much it costs per bum per seat per hour, but is that bum per seat per hour making a difference!  And we can start with the ability to make those decisions, but ultimately we ought to also care that making those decisions is impacting the organization too.  I certainly agree we have to be strong and fight for what’s right, not what’s easy or expedient.  Serious elearning for the win!

Will:

We seem to be coming to consensus; however, you've inspired another question. We agree that we have two responsibilities, one to our professional values and one to our organization's needs. But should we add another stakeholder to this mix? I have my own answer, inherent in one of my many below-the-radar models, but I'd like your wisdom. Here's the question: do we have a responsibility to our learners/performers? If we do have responsibilities to them, what are those responsibilities? And here is perhaps the hardest question: in comparison to the responsibility we have to our organizations, is our level of responsibility to our learners/performers higher, lower, or about the same? Remember, the smaller the ring, the harder it is to run…the more likely we get hit by a haymaker. Good luck with these questions…

Clark:

Bringing in a ringer, eh? I suppose you could see it in either of two ways: it's our obligation to our profession and our organization to consider our learners, or they're another stakeholder. I kinda like the former, as there're lots of stakeholders: society, learners, 'clients', SMEs, colleagues, the profession, and more. In fact, I'm inclined to go back to my proposition that it's not either/or. Our obligation as professionals is to do the job that needs to be done in ways that responsibly address our learners, our organizations, and all stakeholders. To put it in other words, designing interventions in ways that optimally equip learners to meet the needs of the organization is an integration of responsibilities, not a tradeoff. We need to unify our approach like boxing needs to unify the different titles!

Will:

From what I hear, boxing is dying as a spectator sport precisely because of all the discord and multiple sanctioning bodies. We in the learning-and-performance field might take this as a warning: we need to get our house in order, follow research-based best practices, and build a common body of knowledge and values. It starts with knowing who our stakeholders are and knowing that we have a responsibility to the values and goals of our profession. I like to give our learners a privileged place, at the same level of priority as the organization. It's not that I think this is an easy argument in an economic sense, because the organization is paying the bills after all. But too often we forget our learners, so I like to keep them front and center in our learning-to-performance models.

Thanks, Clark, for the great discussion. And thanks for agreeing to host the next one on your blog.

An article in the New York Times discusses research on group creativity. One thing the research has shown is that brainstorming may not be as beneficial as once thought, because individuals working alone come up with better ideas, AND the group is then needed to improve those ideas.

Did you know Einstein's original calculations around E=mc² needed to be refined by others?

Nice article. Note: I was clued in to this article by reviewing my Twitter page, where Clark Quinn had tweeted about this.