We are Professionals, Aren’t We?

Last year I was asked by Michael Allen, one of our industry's most influential creators and most successful entrepreneurs, to contribute a chapter to his first e-Learning Annual, which Pfeiffer had urged him to manage and edit.

Michael introduced my chapter as follows:

"In this article, one of the learning and performance field’s leading
visionaries looks back on his twenty-two years in the field with both love
and regret, while looking forward to the future by challenging all of us in
the field who see ourselves as learning-and-performance professionals.
Dr. Thalheimer’s challenge is simple. He asks every person in the field
to understand the forces that control their thinking and influence their
decision making. It’s as if the author wants to say: the unexamined
profession is not worth having."

I'm still thrilled to hear Sir Michael call me a visionary, though I'm sure he was talking about hallucinations of some sort. Because the article still resonates with me, I thought I'd share it with you.

Download We_Are_Professionals_by_Will_Thalheimer_

I encourage you to take a look at the whole book. Michael Allen's 2008 e-Learning Annual does a great job covering the historic and institutional foundations of the learning-technology field, with chapters from more than 20 luminaries who have long been at the heart of the learning field, including Thiagi, David Merrill, Allison Rossett, Greg Kearsley, and many more.

Here are some of my conclusions in the chapter:

  1. Our graduate schools prepare technicians, not thoughtful scientist-practitioners who understand learning, think critically, and build wisdom over time.
  2. We don’t measure the outcomes of our work in ways that enable us to build effective feedback loops and make improvements that will lead to better learning, on-the-job performance, and business results.
  3. The work pressures we face (for example, Internet-induced information overload and business demands for cheaper, faster results), combined with our tendency toward professional arrogance, don’t predispose us to keep learning, to test our conjectures, to build a rich and complex knowledge base over time.
  4. Our trade associations, magazines, and conferences provide us with information that sells, not information that necessarily tells the truth of how we should better design our products and services.
  5. Our consultants and vendors are a large source of our information, and we tend to think uncritically about their offerings.
  6. Learning-and-performance research is not utilized when it might provide substantial benefits.
  7. Industry research is severely flawed, but we rely on it anyway.
  8. Contests, awards, and best-of lists grab our attention and distort our thinking about what is most important.

Okay, that was the list of our failures. In the chapter I also offer a list that gives hope for our profession.

What do you think of our current practices?

Of our future?

14 replies
  1. Julie Dirksen says:

    Hey Will — enjoyed the chapter a lot, and agree with the points above (#1? Oh heck yeah), but wanted to specifically comment on your second point.
    I think a big part of the problem of evaluation is one of incentives. Not only is there little incentive to do evaluation, there are active incentives NOT to do it.
    Those disincentives include the fact that it’s costly (which takes budget away from building new stuff). It’s difficult to get good data on anything once the learners leave the room (physically or virtually). Instructional designers may not want to know the ugly truth about the effectiveness of their designs (a problem for which I have some sympathy — in many cases they’ve been flying blind for *years*). And, it’s ridiculously easy to continue building learning applications without bothering with evaluation.
    Not sure what the answer is (getting people to do what's right/good for them is hard – it's right up there with getting people to exercise more).
    I do think we should demand more from the technology, though. I think we have accepted the incredibly tiny data box that is standards-compliant LMSs for way longer than we should have. It wouldn't fix the problem, but it could go a long way towards making evaluation more cost-effective, and feasible for the training groups who have the right intentions but lack the resources or the tools.

  2. Robert Bacal says:

    Great thought provoking post, Will, as always. By and large I agree, although there are a few generalizations that are somewhat dangerous. The one on graduate schools, in particular. On that particular one it depends on where you go, and above all, what the learner wants, and is willing to commit to.
    Schools, sadly, in the USA are responding to what learners want, which is quick, dirty means of getting employed. BUT, not all schools.
    Generally speaking, the field of training/learning is a disgrace, something I’ve been saying for a good decade, and one reason is there are no significant barriers or requirements to enter. It gets worse every year, from what I see.
    It’s faddish, and by and large, the majority of people in the field have NO understanding of how people learn, except for the old saws (often wrong) that get repeated over and over again.
    The problem is that the poor practitioners in the field predominate, and marketing trumps knowledge. From experience I’d suggest that railing about these issues does not endear one to trainers.

  3. Seth says:

    Oh please, grad students are everyone’s favorite whipping boy (or girl). With increased competition for tenure-track positions, today’s students have conducted and published more research than their predecessors.
    The problem with lists like these is that they always lack introspection; just broad sweeping comments about how everyone else is part of the problem.

  4. Mark Cody says:

    Will, great article. As for our current practices, what I’ve seen is that the default in eLearning tends toward text on screen with irrelevant graphics and a dearth of interaction. I have sympathy for the management I’ve worked under, as it seems the pace and volume leave little time to control quality. What knowledge there may be isn’t leveraged. Simple things, like moving up Dale’s cone of experience and simulating the job context in the training situation, just aren’t happening. I see and hear lots of enthusiasm and complaints about tools and technology (and I have opinions about gaps in the eLearning software development tools), but learning developers aren’t utilizing what they have. As for the future, well, I see an opportunity for content management solutions to bridge the gaps between print, web, and eLearning, and I hope that the best learning scientists are there when that technological leap happens.

  5. Will Thalheimer says:

    Thanks everybody for your thoughts. I had thought I sent comments earlier responding to specific people, but my blog tool didn’t do what it said it would, so here are some specific responses.
    I agree. It is difficult to fight tradition and disincentives at the same time.
    I do think that many of us don’t even realize HOW BAD our current evaluation system is right now. It actually biases results to make our learning interventions look good. So a little education on this can’t hurt. Then, I agree, we ought to make evaluation easier—through technology if possible. Finally, for all this talk over the last 20 years that training-and-development is supposed to support business results, why don’t business leaders ask more of us?
    What does quantity of research have to do with anything?
    Many dissertations in our field are VERY poorly constructed. They are basically surveys of people. Just awful stuff. There are also those who are pigeon-holed in one research or theoretical tradition and don’t look beyond it, prompting them to send completely wrong signals about what is truly effective. There certainly is some good research out there, but much of it is terrible. I think many dissertation committee members either lack the introspection you talk about or just don’t care. Why do they allow research of such poor methodological rigor?
    I’ve been talking about research-based learning for over 11 years now, speaking at several conferences every year, working with clients at all levels of sophistication—so I’ve got some evidence on this—and much introspection.
    By the way, grad students aren’t really the whipping boy/girl. What I’m complaining about is that portion of the academic culture that allows such poor understanding and use of research. The problem isn’t the grad students, it’s some professors, some deans, and some administrators.
    –Will Thalheimer

  6. henrylow says:

    Influence can be defined as the power exerted over the minds and behavior of others. A power that can affect, persuade and cause changes to someone or something. In order to influence people, you first need to discover what is already influencing them. What makes them tick? What do they care about? We need some leverage to work with when we’re trying to change how people think and behave.
