Donald Kirkpatrick (1924-2014) was a giant in the workplace learning and development field, widely known for creating the four-level model of learning evaluation. The evidence, however, contradicts this creation myth and points to Raymond Katzell, a distinguished industrial-organizational psychologist, as the true originator. This, of course, does not diminish Don Kirkpatrick’s contribution in framing and popularizing the four-level framework of learning evaluation.

The Four-Levels Creation Myth

The four-level model is traditionally traced back to a series of four articles Donald Kirkpatrick wrote in 1959 and 1960, each article covering one of the four levels: Reaction, Learning, Behavior, and Results. These articles were published in the magazine of ASTD (then called the American Society of Training Directors). Here’s a picture of the first page of the first article:

In June 1977, ASTD (known by then as the American Society for Training and Development, now ATD, the Association for Talent Development) reissued Kirkpatrick’s original four articles, combining them into one article in the Training and Development Journal. The story has always been that it was those four articles that introduced the world to the four-level model of training evaluation.

In 1994, in the first edition of his book, Evaluating Training Programs: The Four Levels, Donald Kirkpatrick wrote:

“In 1959, I wrote a series of four articles called ‘Techniques for Evaluating Training Programs,’ published in Training and Development, the journal of the American Society for Training and Development (ASTD). The articles described the four levels of evaluation that I had formulated. I am not sure where I got the idea for this model, but the concept originated with work on my Ph.D. dissertation at the University of Wisconsin, Madison.” (p. xiii). [Will’s Note: Kirkpatrick was slightly inaccurate here. At the time of his four articles, the initials ASTD stood for the American Society of Training Directors and the four articles were published in the Journal of the American Society of Training Directors. This doesn’t diminish Kirkpatrick’s central point: that he was the person who formulated the four levels of learning evaluation].

In 2011, in a video tribute to Dr. Kirkpatrick, he was asked how he came up with the four levels. This is what he said:

“[after I finished my dissertation in 1954], between 54 and 59 I did some research on behavior and results. I went into companies. I found out are you using what you learned and if so what can you show any evidence of productivity or quality or more sales or anything from it. So I did some research and then in 1959 Bob Craig, editor of the ASTD journal, called me and said, ‘Don, I understand you’ve done some research on evaluation would you write an article?’ I said, ‘Bob, I’ll tell you what I’ll do, I’ll write four articles, one on reaction, one on learning, one on behavior, and one on results.'”

In 2014, when asked to reminisce on his legacy, Dr. Kirkpatrick said this:

“When I developed the four levels in the 1950s, I had no idea that they would turn into my legacy. I simply needed a way to determine if the programs I had developed for managers and supervisors were successful in helping them perform better on the job. No models available at that time quite fit the bill, so I created something that I thought was useful, implemented it, and wrote my dissertation about it.” (Quote from blog post published January 22, 2014).

As recently as this month (January 2018), on the Kirkpatrick Partners website, the following is written:

“Don was the creator of the Kirkpatrick Model, the most recognized and widely used training evaluation model in the world. The four levels were developed in the writing of his Ph.D. dissertation, Evaluating a Human Relations Training Program for Supervisors.”

Despite these public pronouncements, Kirkpatrick’s legendary 1959-1960 articles were not the first published evidence of a four-level evaluation approach.

Raymond Katzell’s Four-Step Framework of Evaluation

In an article written by Donald Kirkpatrick in 1956, the following “steps” were laid out and were attributed to “Raymond Katzell, a well known authority in the field [of training evaluation].”

  1. To determine how the trainees feel about the program.
  2. To determine how much the trainees learn in the form of increased knowledge and understanding.
  3. To measure the changes in the on-the-job behavior of the trainees.
  4. To determine the effects of these behavioral changes on objective criteria such as production, turnover, absenteeism, and waste.

These four steps are the same as Kirkpatrick’s four levels, except there are no labels.

Raymond Katzell went on to a long and distinguished career as an industrial-organizational psychologist, even winning the Society for Industrial and Organizational Psychology’s Distinguished Scientific Contributions Award.

Raymond Katzell. Picture used by SIOP (Society for Industrial and Organizational Psychology) when they talk about The Raymond A. Katzell Media Award in I-O Psychology.

The first page of Kirkpatrick’s 1956 article—written three years before his famous 1959 introduction to the four levels—is pictured below:

And here is a higher-resolution view of the quote from that front page, regarding Katzell’s contribution:

So Donald Kirkpatrick mentions Katzell’s four-step model in 1956, but not in 1959 when he—Kirkpatrick—introduces the four labels in his classic set of four articles.

It Appears that Kirkpatrick Never Mentions Katzell’s Four Steps Again

As far as I can tell, after searching for and examining many publications, Donald Kirkpatrick never mentioned Katzell’s four steps after his 1956 article.

Three years after the 1956 article, Kirkpatrick did not mention Katzell’s taxonomy when he wrote his four famous articles in 1959. He did mention an unrelated article where Katzell was a co-author (Merrihue & Katzell, 1955), but he did not mention Katzell’s four steps.

Neither did Kirkpatrick mention Katzell in his 1994 book, Evaluating Training Programs: The Four Levels.

Nor did Kirkpatrick mention Katzell in the third edition of the book, written with Jim Kirkpatrick, his son.

Nor was Katzell mentioned in a later version of the book written by Jim and Wendy Kirkpatrick in 2016. I spoke with Jim and Wendy recently (January 2018), and they seemed as surprised as I was about the 1956 article and about Raymond Katzell.

Nor did Donald Kirkpatrick mention Katzell in any of the interviews he did to mark the many anniversaries of his original 1959-1960 articles.

To summarize, Katzell, despite coming up with the four-step taxonomy of learning evaluation, was only given credit by Kirkpatrick once, in the 1956 article, three years prior to the articles that introduced the world to the Kirkpatrick Model’s four labels.

Kirkpatrick’s Dissertation

Kirkpatrick did not introduce the four levels in his 1954 dissertation. There is not even a hint of a four-level framework there.

In his dissertation, Kirkpatrick cited two publications by Katzell. The first was an article from 1948, “Testing a Training Program in Human Relations.” That article studied the effect of leadership training but made no mention of Katzell’s four steps. It did, however, hint at the value of measuring on-the-job performance, in this case the value of leadership behaviors. Katzell writes, “Ideally, a training program of this sort [a leadership training program] should be evaluated in terms of the on-the-job behavior of those with whom the trainees come in contact.”

The second Katzell article cited by Kirkpatrick in his dissertation was entitled “Can We Evaluate Training?” from 1952. Unfortunately, it was a mimeographed article published by the Industrial Management Institute at the University of Wisconsin, and it seems to be lost to history. Even after several weeks of effort (in late 2017), the University of Wisconsin Archives could not locate the article. Interestingly, in a 1955 publication entitled “Monthly Checklist of State Publications,” a subtitle was added to Katzell’s Can We Evaluate Training? The subtitle was “A summary of a one day Conference for Training Managers,” from April 23, 1952.

To be clear, Kirkpatrick did not mention the four levels in his 1954 dissertation. The four levels notion came later.

How I Learned about Katzell’s Contribution

I’ve spent the last several years studying learning evaluation, and as part of these efforts, I decided to find Kirkpatrick’s original four articles and reread them. In 2017, ATD (the Association for Talent Development) had a wonderful archive of the articles it had published over the years. As I searched for “Kirkpatrick,” several other articles—besides the famous four—came up, including the 1956 article. I was absolutely stunned when I read it. Donald Kirkpatrick had cited Katzell as the originator of the four-level notion!

I immediately began searching for more information on the Kirkpatrick-Katzell connection and found that I wasn’t the first person to uncover it. I found an article by Stephen Smith, who acknowledged Katzell’s contribution in 2008, also in an ASTD publication. I communicated with Smith recently (December 2017), and he had nothing but kind words to say about Donald Kirkpatrick, who he said coached him on training evaluations. Here is a graphic taken directly from Smith’s 2008 article:

Smith’s article was not focused on Katzell’s contribution to the four levels, which is probably why it wasn’t more widely cited. In 2011, Cynthia Lewis wrote a dissertation and directly compared the Katzell and Kirkpatrick formulations. She appears to have learned about Katzell’s contribution from Smith’s 2008 article. Lewis’s (2011) comparison chart is reproduced below:

In 2004, four years before Smith wrote his article with the Katzell sidebar, ASTD republished Kirkpatrick’s 1956 article—the one in which Kirkpatrick acknowledges Katzell’s four steps. Here is the front page of that article:

In 2016, an academic article appeared in a book that referred to the Katzell-Kirkpatrick connection. The book is available only in French, and the article appears to have had little impact in the English-speaking learning field. Whereas neither Kirkpatrick’s 2004 reprint nor Smith’s 2008 article offered commentary on Katzell’s contribution beyond acknowledging it, Bouteiller, Cossette, & Bleau (2016) were clear in stating that Katzell deserves to be known as the person who conceptualized the four levels of training evaluation, while Kirkpatrick should get credit for popularizing them. The authors also lamented that Kirkpatrick, who himself recognized Katzell as the father of the four-level model of evaluation in his 1956 article, completely ignored Katzell for the next 55 years and declared himself in all his books and on his website to be the sole inventor of the model. I accessed their chapter through Google Scholar and used Google Translate to make sense of it. I also followed up with two of the authors (Bouteiller and Cossette, in January 2018) to confirm that I was understanding their message correctly.

Is There Evidence of a Transgression?

Raymond Katzell seems to be the true originator of the four-level framework of learning evaluation and yet Donald Kirkpatrick on multiple occasions claimed to be the creator of the four-level model.

Of course, we can never know the full story. Kirkpatrick and Katzell are dead. Perhaps Katzell willingly gave his work away. Perhaps Kirkpatrick asked Katzell if he could use it. Perhaps Kirkpatrick cited Katzell because he wanted to bolster the credibility of a framework he developed himself. Perhaps Kirkpatrick simply forgot Katzell’s four steps when he went on to write his now-legendary 1959-1960 articles. This last explanation may seem a bit forced given that Kirkpatrick referred to the Merrihue and Katzell work in the last of his four articles—and we might expect that the name “Katzell” would trigger memories of Katzell’s four steps, especially given that Katzell was cited by Kirkpatrick as a “well known authority.” This forgetting hypothesis also doesn’t explain why Kirkpatrick would continue to fail to acknowledge Katzell’s contribution after ASTD republished Kirkpatrick’s 1956 article in 2004 or after Stephen Smith’s 2008 article showed Katzell’s four steps. Smith was well-known to Kirkpatrick and is likely to have at least mentioned his article to Kirkpatrick.

We can’t know for certain what transpired, but we can analyze the possibilities. Plagiarism means that we take another person’s work and claim it as our own. Plagiarism, then, has two essential features (see this article for details). First, an idea or creation is copied in some way. Second, no attribution is offered; that is, no credit is given to the originator. Kirkpatrick had clear contact with the essential features of Katzell’s four-level framework. He wrote about them in 1956! This doesn’t guarantee that he copied them intentionally. He could have generated the four levels subconsciously, without knowing that Katzell’s ideas were influencing his thinking. Alternatively, he could have spontaneously created them without any influence from Katzell’s ideas. People often generate similar ideas when the stimuli they encounter are similar. How many people claim that they invented the term “email”? Plagiarism does not require intent, but intentional plagiarism is generally considered a higher-level transgression than sloppy scholarship.

A personal example of how easy it is to think you invented something: in the 1990s or early 2000s, I searched for just the right words to explain a concept. I wrestled with it for several weeks. Finally, I came up with the perfect wording, with just the right connotation: “retrieval practice.” It was better than the prevailing terminology at the time—the testing effect—because people could retrieve without being tested. Eureka, I thought! Brilliant, I thought! It was several years later, rereading Robert Bjork’s 1988 article, “Retrieval practice and the maintenance of knowledge,” that I realized my label was not original to me, and that even if I did generate it without consciously thinking of Bjork’s work, my previous contact with the term “retrieval practice” almost certainly influenced my creative construction.

The second requirement for plagiarism is that the original creator is not given credit. This is evident in the case of the four levels of learning evaluation. Donald Kirkpatrick never mentioned Katzell after 1956. He certainly never mentioned Katzell when it would have been most appropriate, for example when he first wrote about the four levels in 1959, when he first published a book on the four levels in 1994, and when he received awards for the four levels.

Finally, one comment may be telling: Kirkpatrick’s statement from his 1994 book that “I am not sure where I got the idea for this model, but the concept originated with work on my Ph.D. dissertation at the University of Wisconsin, Madison.” The statement seems to suggest that Kirkpatrick recognized that there was a source for the four-level model—a source that was not Kirkpatrick himself.

Here is the critical timeline:

  • Katzell was doing work on learning evaluation as early as 1948.
  • Kirkpatrick’s 1954 dissertation offers no trace of a four-part learning-evaluation framework.
  • In 1956, the first reference to a four-part learning evaluation framework was offered by Kirkpatrick and attributed to Raymond Katzell.
  • In 1959, the first mention of the Kirkpatrick terminology (i.e., Reaction, Learning, Behavior, Results) was published, but Katzell was not credited.
  • In 1994, Kirkpatrick published his book on the four levels, saying specifically that he formulated the four levels. He did not mention Katzell’s contribution.
  • In 2004, Kirkpatrick’s 1956 article was republished, repeating Kirkpatrick’s acknowledgement that Katzell invented the four-part framework of learning evaluation.
  • In 2008, Smith published the article where he cited Katzell’s contribution.
  • In 2014, Kirkpatrick claimed to have developed the four levels in the 1950s.
  • As far as I’ve been able to tell—corroborated by Bouteiller, Cossette, & Bleau (2016)—Donald Kirkpatrick never mentioned Katzell’s four-step formulation after 1956.

Judge Not Too Quickly

I have struggled writing this article, and have rewritten it dozens of times. I shared an earlier version with four trusted colleagues in the learning field and asked them if I was being fair. I’ve searched exhaustively for source documents. I reached out to key players to see if I was missing something.

It is no trifle to curate evidence that impacts other people’s reputations; it is a sacred responsibility. I, as the writer, bear the most responsibility, but you as a reader also have a responsibility to weigh the evidence and make your own judgments.

First, we should not be too quick to judge. We simply don’t know why Donald Kirkpatrick never mentioned Katzell after the original 1956 article. Indeed, perhaps he did mention Katzell in his workshops and teachings. We just don’t know.

Here are some distinct possibilities:

  • Perhaps Katzell and Kirkpatrick had an agreement that Kirkpatrick could write about the four levels. Let’s remember the 1959-1960 articles were not written to boost Kirkpatrick’s business interests. He didn’t have any business interests at that time—he was an employee—and his writing seemed aimed specifically at helping others do better evaluation.
  • Perhaps Kirkpatrick, being a young man without much of a résumé in 1956, had developed a four-level framework himself but felt he needed to cite Katzell in 1956 to add credibility to his own ideas. Perhaps later, in 1959, he dropped this false attribution to give himself the credit he deserved.
  • Perhaps Kirkpatrick felt that citing Katzell once was enough. Where many academics and researchers see plagiarism as one of the deadly sins, others have not been acculturated into the strongest form of this ethos. Let’s remember that in 1959 Kirkpatrick was not intending to create a legendary meme; he was just writing some articles. Perhaps at the time it didn’t seem important to acknowledge Katzell’s contribution. I don’t mean to dismiss this lightly. All of us are raised to believe in fairness and giving credit where credit is due. Indeed, research suggests that even the youngest infants have a sense of fairness. Kirkpatrick earned his doctorate at a prestigious research university. He should have been aware of the ethic of attribution, but perhaps because the 1959-1960 articles seemed so insignificant at the time, it didn’t seem important to cite Katzell.
  • Perhaps Kirkpatrick intended to cite Katzell’s contribution in his 1959-1960 articles but the journal editor talked him out of it or disallowed it.
  • Perhaps Kirkpatrick realized that Katzell’s four steps were simply not resonant enough to be important. Let’s admit that Kirkpatrick’s framing of the four levels into the four labels was a brilliant marketing masterstroke. If Kirkpatrick believed this, he might have seen Katzell’s contribution as minimal and not deserving of acknowledgement.
  • Perhaps Kirkpatrick completely forgot Katzell’s four-step taxonomy. Perhaps it didn’t influence him when he created his four labels; perhaps he didn’t think of Katzell’s contribution when he wrote about Katzell’s article with Merrihue; perhaps, for the rest of his life, he never remembered Katzell’s formulation, never saw the 2004 reprinting of his 1956 article, never saw Smith’s 2008 article, and never talked with Smith about Katzell’s work even though Smith has claimed a working relationship. Admittedly, this chain of possibilities seems unlikely.

Let us also not judge Jim and Wendy Kirkpatrick, proprietors of Kirkpatrick Partners, a global provider of learning-evaluation workshops and consulting. None of this is on them! They were genuinely surprised to hear the news when I told them. They seemed to have no idea about Katzell or his contribution. What is past is past, and Jim and Wendy bear no responsibility for the history recounted here. What they do henceforth is their responsibility. Already, since we spoke last week, they have updated their website to acknowledge Katzell’s contribution!

Article Update (two days after original publication of this article): Yesterday, on the 31st of January 2018, Jim and Wendy Kirkpatrick posted a blog entry (copied here for the historic record) that admitted Katzell’s contribution but ignored Donald Kirkpatrick’s failure to acknowledge Katzell’s contribution as the originator of the four-level concept.

What about our trade associations and their responsibilities? ASTD bears responsibility for its actions over the years. As the American Society of Training Directors, it published the 1959-1960 articles without insisting that Katzell be acknowledged, even though it had itself published the 1956 article with Katzell’s four-step framework on its first page. Later, as the American Society for Training and Development, it republished the 1959-1960 articles in 1977 and Kirkpatrick’s 1956 article in 2004. Recently rebranded as ATD (the Association for Talent Development), the organization should now make amends. Other trade associations should also help set the record straight by acknowledging Katzell’s contribution to the four-level model of learning evaluation.

Donald Kirkpatrick’s Enduring Contribution

Regardless of who invented the four-level model of evaluation, it was Donald Kirkpatrick who framed it to perfection with the four labels and popularized it, helping it spread worldwide throughout the workplace learning and performance field.

As I have communicated elsewhere, I think the four-level model has issues—that it sends messages about learning evaluation that are not helpful.

On the other hand, the four-level model has been instrumental in pushing the field toward a focus on performance improvement. This shift—away from training as our sole responsibility, toward a focus on how to improve on-the-job performance—is one of the most important paradigm shifts in the long history of workplace learning. Kirkpatrick’s popularization of the four levels enabled us—indeed, it pushed us—to see the importance of focusing on work outcomes. For this, we owe Donald Kirkpatrick a debt of gratitude.

And we owe Raymond Katzell our gratitude as well. Not only did he originate the four levels, but he also put forth the idea that it was valuable to measure the impact learners have on their organizations.

What Should We Do Now?

What now is our responsibility as workplace learning professionals? What is ethical? The preponderance of the evidence points to Katzell as the originator of the four levels and Donald Kirkpatrick as the creator of the four labels (Reaction, Learning, Behavior, Results) and the person responsible for the popularization of the four levels. Kirkpatrick himself in 1956 acknowledged Katzell’s contribution, so it seems appropriate that we acknowledge it too.

Should we call them Katzell’s Four Levels of Evaluation? Or, the Katzell-Kirkpatrick Four Levels? I can’t answer this question for you, but it seems that we should acknowledge that Katzell was the first to consider a four-part taxonomy for learning evaluation.

For me, for the foreseeable future, I will either call it the Kirkpatrick Model and then explain that Raymond Katzell was the originator of the four levels, or I’ll simply call it the Kirkpatrick-Katzell Model.

Indeed, I think in fairness to both men—Kirkpatrick for the powerful framing of his four labels and his exhaustive efforts to popularize the model and Katzell for the original formulation—I recommend that we call it the Kirkpatrick-Katzell Four-Level Model of Training Evaluation. Or simply, the Kirkpatrick-Katzell Model.

Research Cited

Bjork, R. A. (1988). Retrieval practice and the maintenance of knowledge. In M. M. Gruneberg, P. E. Morris, R. N. Sykes (Eds.), Practical Aspects of Memory: Current Research and Issues, Vol. 1., Memory in Everyday Life (pp. 396-401). NY: Wiley.

Bouteiller, D., Cossette, M., & Bleau, M-P. (2016). Modèle d’évaluation de la formation de Kirkpatrick: retour sur les origines et mise en perspective. Dans M. Lauzier et D. Denis (éds.), Accroître le transfert des apprentissages: Vers de nouvelles connaissances, pratiques et expériences. Presses de l’Université du Québec, Chapitre 10, 297-339. [In English: Bouteiller, D., Cossette, M., & Bleau, M-P. (2016). Kirkpatrick’s training evaluation model: Back to the origins and put into perspective. In M. Lauzier and D. Denis (Eds.), Increasing the Transfer of Learning: Towards New Knowledge, Practices and Experiences. Presses de l’Université du Québec, Chapter 10, 297-339.]

Katzell, R. A. (1948). Testing a training program in human relations. Personnel Psychology, 1, 319-329.

Katzell, R. A. (1952). Can we evaluate training? A summary of a one day conference for training managers. A publication of the Industrial Management Institute, University of Wisconsin, April, 1952.

Kirkpatrick, D. L. (1956). How to start an objective evaluation of your training program. Journal of the American Society of Training Directors, 10, 18-22.

Kirkpatrick, D. L. (1959a). Techniques for evaluating training programs. Journal of the American Society of Training Directors, 13(11), 3-9.

Kirkpatrick, D. L. (1959b). Techniques for evaluating training programs: Part 2—Learning. Journal of the American Society of Training Directors, 13(12), 21-26.

Kirkpatrick, D. L. (1960a). Techniques for evaluating training programs: Part 3—Behavior. Journal of the American Society of Training Directors, 14(1), 13-18.

Kirkpatrick, D. L. (1960b). Techniques for evaluating training programs: Part 4—Results. Journal of the American Society of Training Directors, 14(2), 28-32.

Kirkpatrick, D. L. (2004). A T+D classic: How to start an objective evaluation of your training program. T+D, 58(5), 1-3. [Reprint of Kirkpatrick’s 1956 article.]

Lewis, C. J. (2011). A study of the impact of the workplace learning function on organizational excellence by examining the workplace learning practices of six Malcolm Baldridge Quality Award recipients. Doctoral dissertation, San Diego, CA. Available at http://sdsu-dspace.calstate.edu/bitstream/handle/10211.10/1424/Lewis_Cynthia.pdf.

Merrihue, W. V., & Katzell, R. A. (1955). ERI: Yardstick of employee relations. Harvard Business Review, 33, 91-99.

Salas, E., Tannenbaum, S. I., Kraiger, K., & Smith-Jentsch, K. A. (2012). The science of training and development in organizations: What matters in practice. Psychological Science in the Public Interest, 13(2), 74–101.

Smith, S. (2008). Why follow levels when you can build bridges? T+D, September 2008, 58-62.

15th December 2017

Neon Elephant Award Announcement

Dr. Will Thalheimer, President of Work-Learning Research, Inc., announces the winner of the 2017 Neon Elephant Award, given to Patti Shank for writing and publishing two research-to-practice books this year, Write and Organize for Deeper Learning and Practice and Feedback for Deeper Learning—and for her many years advocating for research-based practices in the workplace learning field.

Click here to learn more about the Neon Elephant Award…

 

2017 Award Winner – Patti Shank, PhD

Patti Shank, PhD, is an internationally recognized learning analyst, writer, and translational researcher in the learning, performance, and talent space. Dr. Shank holds a doctorate in Educational Leadership and Innovation, Instructional Technology from the University of Colorado, Denver, and a master’s degree in Education and Human Development from George Washington University. Since 1996, Patti has been consulting, researching, and writing through her consulting practice, Learning Peaks LLC (pattishank.com). As the best research-to-practice professionals tend to do, Patti has extensive experience as a practitioner, including roles such as training specialist, training supervisor, and manager of training and education. Patti has also played a critical role collaborating with the workplace learning field’s most prominent trade associations—working, sometimes quixotically, to encourage the adoption of research-based wisdom for learning.

Patti is the author of numerous books, focusing not only on evidence-based practices, but also on online learning, elearning, and learning assessment. The following are her most recent books:

In addition to her lifetime of work, Patti is honored for the two research-to-practice books she published this year!

Write and Organize for Deeper Learning provides research-based recommendations for instructional designers and others who write instructional text. Writing is fundamental to instructional design, but too often, instructional designers don’t get the guidance they need. As I wrote for the back cover of the book, “Write and Organize for Deeper Learning is the book I wish I had back when I was recruiting and developing instructional writers. Based on science, crafted in a voice from hard-earned experience, [the] book presents clear and urgent advice to help instructional writing practitioners.”

Practice and Feedback for Deeper Learning also provides research-based recommendations. This time, Patti’s subjects are two of the most important, but too often neglected, learning approaches: practice and feedback. As learning practitioners, we still too often focus on conveying information. As a seminal review in a top-tier scientific journal put it, “we know from the body of research that learning occurs through the practice and feedback components” (Salas, Tannenbaum, Kraiger, & Smith-Jentsch, 2012, p. 86). As I wrote for the book jacket, Patti’s book “is a research-to-practice powerhouse! …A book worthy of being in the personal library of every instructional designer.”

Patti has worked many years in the trenches, pushing for research-based practices, persevering against lethargic institutions, unexamined traditions, and commercial messaging biased toward sales not learning effectiveness. For her research, her grit, and her Sisyphean determination, we in the learning field owe Patti Shank our most grateful thanks!

I added these words to the sidebar of my blog, and I like them so much that I’m sharing them as a blog post itself.

Please seek wisdom from research-to-practice experts — the dedicated professionals who spend time in two worlds to bring the learning field insights based on science. These folks are my heroes, given their often quixotic efforts to navigate through an incomprehensible jungle of business and research obstacles.

These research-to-practice professionals should be your heroes as well. Not mythological heroes, not heroes etched into the walls of faraway mountains. These heroes should be sought out as our partners, our fellow travelers in learning, as people we hire as trusted advisors to bring us fresh research-based insights.

The business case is clear. Research-to-practice experts not only enlighten and challenge us with ideas we might not have considered — ideas that make our learning efforts more effective in producing business results — research-to-practice professionals also prevent us from engaging in wasted efforts, saving our organizations time and money, all the while enabling us to focus more productively on learning factors that actually matter.

 

21st December 2016

Neon Elephant Award Announcement

Dr. Will Thalheimer of Work-Learning Research announces the winner of the 2016 Neon Elephant Award, given this year to Pedro De Bruyckere, Paul A. Kirschner, and Casper D. Hulshof for their book, Urban Myths about Learning and Education. Pedro, Paul, and Casper provide a research-based reality check on the myths and misinformation that float around the learning field. Their incisive analysis takes on such myths as learning styles, multitasking, discovery learning, and various and sundry neuromyths.

Urban Myths about Learning and Education is a powerful salve on the wounds engendered by the weak and lazy thinking that abounds too often in the learning field — whether on the education side or the workplace learning side. Indeed, in a larger sense, De Bruyckere, Kirschner, and Hulshof are doing important work illuminating key truths in a worldwide era of post-truth communication and thought. Now, more than ever, we need to celebrate the truth-tellers!

Click here to learn more about the Neon Elephant Award…

2016 Award Winners – Pedro De Bruyckere, Paul Kirschner, and Casper Hulshof

Pedro De Bruyckere (1974) has been an educational scientist at Arteveldehogeschool, Ghent, since 2001. He co-wrote two books with Bert Smits debunking popular myths about Gen Y and Gen Z, education, and pop culture, and he co-wrote a book on girls' culture with Linda Duits. And, of course, he co-wrote the book for which he and his co-authors are being honored, Urban Myths about Learning and Education. Pedro is a sought-after public speaker; one of his strongest points is that he "is funny in explaining serious stuff."

Paul A. Kirschner (1951) is University Distinguished Professor at the Open University of the Netherlands as well as Visiting Professor of Education, with a special emphasis on Learning and Interaction in Teacher Education, at the University of Oulu, Finland. He is an internationally recognized expert in learning and educational research, with many classic studies to his name. He has served as President of the International Society of the Learning Sciences and is an AERA (American Educational Research Association) Research Fellow, the first European to receive this honor. He is chief editor of the Journal of Computer Assisted Learning, associate editor of Computers in Human Behavior, and has published two very successful books: Ten Steps to Complex Learning and Urban Myths about Learning and Education. His co-author on the Ten-Steps book, Jeroen van Merriënboer, won the Neon Elephant Award in 2011.

Casper D. Hulshof is a teacher (assistant professor) at Utrecht University, where he supervises bachelor's and master's students. He teaches psychological topics and is especially intrigued by the intersection of psychology with philosophy, mathematics, biology, and informatics. He draws on his experience conducting experimental research (mostly quantitative work in educational technology and psychology) to inform his teaching and writing. More than once he has been awarded teaching honors.

Why Honored?

Pedro De Bruyckere, Paul Kirschner, and Casper Hulshof are honored this year for their book Urban Myths about Learning and Education, a research-based reality check on the myths and misinformation that float around the learning field. With their research-based recommendations, they are helping practitioners in the education and workplace-learning fields make better decisions, create more effective learning interventions, and avoid the most dangerous myths about learning.

For their efforts in sharing practical research-based insights on learning design, the workplace learning-and-performance field owes a grateful thanks to Pedro De Bruyckere, Paul Kirschner, and Casper Hulshof.

Click here to learn more about the Neon Elephant Award…

Dr. Karl Kapp is one of the learning field’s best research-to-practice gurus! Legendary for his generosity and indefatigable energy, it is my pleasure to interview him for his wisdom on games, gamification, and their intersection.

His books on games and gamification are wonderful. You can click on the images below to view them on Amazon.com.

 

 

The following is a master class on games and learning:

 

Will (Question 1):

Karl, you’ve written a definitive exploration of Gamification in your book, The Gamification of Learning and Instruction: Game-Based Methods and Strategies for Training and Education. As I read your book I was struck by your insistence that Gamification “is not the superficial addition of points, rewards, and badges to learning experiences.” What the heck are you talking about? Everybody knows that gamification is all about leaderboards, or so the marketplace would make us believe… [WINK, WINK] What are you getting at in your repeated warning that gamification is more complex than we might think?

Karl:

If you examine why people play games, the reasons are many, but often players talk about the sense of mastery, the enjoyment of overcoming a challenge, the thrill of winning and the joy of exploring the environment. They talk about how they moved from one level to another or how they encountered a “boss level” and defeated the boss after several attempts or how they strategized a certain way to accomplish the goal of winning the game. Or they describe how they allocated resources so they could defeat a difficult opponent. Rarely, if ever, do people who play games talk about the thrill of earning a point or the joy of being number seven on the leaderboard or the excitement of earning a badge just for showing up.

The elements of points, badges, and leaderboards (PBLs) are the least exciting and enticing elements of playing games, so there is no way we should lead with those items when gamifying instruction. Sure, PBLs play a role in making a game more understandable or in showing how far a player is from the "best," but by themselves they do little to internally motivate players. Relying solely on the PBL elements of games to drive learner engagement is not sustainable; it isn't even what makes games motivational or engaging. It's the wrong approach to learning and motivation. It's superficial; it's not deep enough to have lasting meaning.

Instead, we need to look at the more intrinsically motivating and deeper elements of games, such as challenge, mystery, story, constructive feedback (meaningful consequences), strategy, socialization, and other elements that make games inherently engaging. We miss a large opportunity when we limit our "game thinking" to points, badges, and leaderboards. We need to expand our thinking to include elements that truly engage players and draw them into a game. These are the things that make games fun, frustrating, and worth our investment of time.

 

Will (Question 2):

You wrote that "too many elements of reality and the game ceases to be engaging," and I'm probably taking this out of context, but I wonder if that is true in all cases? For example, I can imagine a realistic flight simulator for fighter pilots that creates an almost perfect replica of the cockpit, g-forces, and more, and that would be highly engaging. On the other hand, my 13-year-old daughter got me hooked on Tanki, an online tank shoot-'em-up game, and there are very few elements of reality in that game, yet I, unfortunately, find it very engaging. Is it different for novices and experts? Are the recommendations for perceptual fidelity different for different topic areas, different learning goals, et cetera?

Karl:

A while ago, I read a fake advertisement for a military game; it was a parody. The description promised that the "ultra-realistic" military game would deliver hours of fun because it was just like actually being in the military: the player would enjoy hours of walking to the mess hall, maintaining equipment, getting gasoline for the jeep, washing boots, patrolling, and cleaning latrines. None of these things is really fun; in fact, they are boring, but they are part of life in the military. Military games don't include these mundane activities. Instead, you are always battling an enemy or strategizing what to do next. The actions that a military force performs 95% of the time are not included in the game because they are too boring.

If games were 100% realistic, they would not be fun. So, instead, games are an abstraction of reality: they focus on the things within reality that can be made engaging or interesting. If a game reflected reality 100%, gameplay would be boring. Now certainly, games can be designed to "improve" reality and make it more fun. In The Sims, you wake up, get dressed, and go to work, which all seems pretty mundane. However, these realistic activities in The Sims are an abstraction of the tasks you actually perform, and that layer of abstraction makes the game more exciting, engaging, and fun. But in both the military game case and The Sims, too much reality is not fun.

The flight simulator needs to be 100% realistic because it's not really a game (although people do play it as one); the real purpose of a simulation is training and the perfection of skills. A flight simulator can be fun for some people to "play," but in a 100% realistic simulator, if you don't know what you are doing, it's boring because you keep crashing. For someone who doesn't know how to fly, like me, a World War II air-battle game with 100% realistic airplane controls wouldn't be fun. In game design, we need to balance elements of reality with the learning goal and the element of engagement.

For some people, a simulator can be highly engaging because the learner is performing the task she would do on the job. So there needs to be a balance in games and simulations to have the right amount of reality for the goals you are trying to achieve.

 

Will (Question 3):

In developing a learning game, what should come first, the game or the goals (of learning)?

Karl:

Learning goals must come first and must remain at the forefront of the game design process. Too often I see a design team make the mistake of becoming so focused on game elements that it loses sight of the learning goals. In our field, we are paid to help people learn, not to entertain them. Learning first.

Having said that, you can't ignore the game elements or treat them as second-class citizens; you can't bolt on a points system and think you have developed a fun game. You haven't. The best process involves simultaneously integrating game mechanics and learning elements. It's tricky, and not a lot of instructional designers have experience or training in this area, but it's critical that game and learning elements be integrated; the two need to be designed together. Neither can be an afterthought.

 

Will (Question 4):

Later we'll talk about the research you've uncovered about the effectiveness of games. As I peruse the literature on games, the focus is mostly on the potential benefits of games. But what about drawbacks? I, for one, "waste" a ton of time playing games. Opportunity costs are certainly one issue, but maybe there are other drawbacks as well, including addiction to the endorphins and adrenaline, or a heightened state of engagement during gaming that may make other aspects of living (or learning) seem less interesting and engaging. What about learning bad ideas, or being desensitized to violence, sexual predation, or other anti-social behaviors? Are there downsides to games? And, in your opinion, has the research to date done enough to examine the negative consequences of games?

Karl:

Yes, games can have horrible, anti-social content. They can also have wonderful, pro-social content; in fact, a growing area of game research focuses on the possible pro-social aspects of games. The answer really lies in the content. A "game" per se is neither pro- nor anti-social, just like any other instructional medium. Look at speeches: Stalin gave speeches filled with horrible content, and Martin Luther King, Jr. gave speeches filled with inspiring content. Yet we never seem to ask, "Are speeches inherently good or bad?"

Games, like other instructional media, have caveats that good instructional designers need to factor in when deciding whether a game is the right instructional intervention. Certainly time is a big factor: it takes time both to develop a game and to play a game, and that is a real downside. You need to weigh the impact you think the game will have on learner retention or knowledge against another instructional intervention. (I can tell you there are at least two meta-analyses indicating that games are more effective for learning than traditional, lecture-based instruction.) But the point is not to blindly choose a game over a lecture or discussion. The decision about the right instructional design needs to be thoughtful, and knowing the caveats should factor into the final design decision.

Another caveat is that games should not be "stand-alone." It's far better for a learning game to be included as part of a larger curriculum than to be developed without any sense of how it fits into the larger picture. Designers need to make sure they don't lose sight of the learning objective. If you are considering deploying a game within your organization, you have to make sure it's appropriate for your culture. Another big factor to consider is how losers are handled in the game. If a person is not successful at a game, what are the repercussions? What if she gets mad and shuts down? What if he walks away halfway through the experience because he is so frustrated? These types of contingencies need to be considered when developing a game. So, yes, there are downsides to games, as there are downsides to other types of instruction. Our job, as instructional designers, is to understand as many downsides and upsides as possible across many different design possibilities and make an informed, evidence-based decision.

 

Will (Question 5):

As you found in your research review, feedback is a critical element in gaming. I’ve anointed “feedback” as one of the most important learning factors in my Decisive Dozen – as feedback is critical in all learning. The feedback research doesn’t seem definitive in recommending immediate versus delayed feedback, but the wisdom I take from the research suggests that delayed feedback is beneficial in supporting long-term remembering, whereas immediate feedback is beneficial in helping people “get” or comprehend key learning points or contingencies. In some sense, learners have to build correct mental models before they can (or should) reinforce those understandings through repetitions, reinforcement, and retrieval practice.

Am I right that most games provide immediate feedback? If not, when is immediate feedback common in games, when is delayed feedback common? What missed opportunities are there in feedback design?

Karl:

You are right; most games provide immediate, corrective feedback. You know right away if you are performing the right action and, if not, the consequences of performing the wrong action. A number of games also provide delayed feedback in the form of after-action reviews, often seen in games using branching: at the end of the game, the player is given a description of the choices she made versus the correct choices. So delayed feedback is common in some types of games. As for what is missing, I think most learning games do a poor job of layering feedback. In a well-designed video game, at the first level of help a player receives a vague clue; if that doesn't work, or too much time passes, the game provides a more explicit clue; and finally, if that doesn't work, the player receives step-by-step instructions. Most learning games are too blunt. They tend to give the player the answer right away rather than layering choices or escalating the help. I think that is a huge missed opportunity.
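Karl's description of layered feedback (a vague clue first, then a more explicit clue, then step-by-step instructions) is essentially a small escalation policy that a game designer could implement directly. As a rough sketch only, here is one hypothetical way to model it in Python; the class name, hint texts, and attempts-per-level threshold are all invented for illustration, not drawn from any particular game engine:

```python
# Hypothetical sketch of "layered feedback": help escalates from a vague clue
# to an explicit clue to step-by-step instructions as the player keeps failing.
class HintLadder:
    def __init__(self, hints, attempts_per_level=2):
        # hints must be ordered from least to most explicit
        self.hints = hints
        self.attempts_per_level = attempts_per_level
        self.failed_attempts = 0

    def record_failure(self):
        # Call this each time the player performs the wrong action
        self.failed_attempts += 1

    def current_hint(self):
        # Escalate one hint level per N failed attempts, capped at the
        # most explicit hint available
        level = min(self.failed_attempts // self.attempts_per_level,
                    len(self.hints) - 1)
        return self.hints[level]

ladder = HintLadder([
    "Something near the door looks useful...",         # vague clue
    "Try using the key card on the reader.",           # explicit clue
    "Step 1: pick up the key card. Step 2: swipe it.", # step-by-step
])
```

A time-based trigger (escalating after too many seconds of inactivity, as Karl also mentions) could be layered on the same structure by tracking a timestamp alongside the failure count.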

 

Will (Question 6):

By the way, your book does a really nice job in describing the complexity and subtlety of feedback, including Robin Hunicke’s formulation for what makes feedback “juicy.” What subtleties around feedback do most of us instructional designers or instructors tend to miss?

Karl:

Our feedback in learning games, and even in elearning modules, is just too blunt. We need more subtlety. Hunicke describes the need for feedback to have many different attributes, including being tactile and coherent. Tactile feedback creates an experience in which the player can feel the feedback as it occurs on screen, so that it's not forced or unnatural within the game play. Instructional designers typically don't create feedback the player or learner feels; instead, they create feedback that is "in your face," such as "Nice job!" or "Sorry, try again." Coherent feedback stays within the context of the game: it is congruent with on-screen actions and activities as well as with the storyline unfolding as the interactions occur. Our learning games often fail to include both of these elements in their feedback. In general, our field needs to focus on feedback that occurs more naturally, within the flow of the learning.

 

Will (Question 7):

Do learners have to enjoy the game to learn from it? What are the benefits of game pleasure? Are there any drawbacks?

Karl:

Actually, research by Sitzmann (2011) indicates that a learner doesn't have to report being "entertained" to learn from a serious game, so fun should not be the standard by which we measure the success of a game. Instead, she found that what makes a game effective for learning is the level of engagement. Engagement should be the goal when designing a learning game. There are a number of studies indicating that games are motivational, although one meta-analysis on games indicated that motivation was not a factor, so I am not sure whether pleasure is a necessary ingredient for learning. Instead, I tend to focus more on building engagement and having learners make meaningful decisions, and less on learner enjoyment and fun. This tends to run counter to why most people want a learning game, but the reason we should want learning games is to encourage engagement and higher-order thinking, not simply to make boring learning fun. Engagement, mastery, and tough decision making might not always be fun but, as you indicated in your question about simulations, they can be engaging, and learning results from engagement and then from understanding the consequences of the actions taken during that engagement.

 

Will (Question 8):

As I was perusing research on games, one of my surprises was that games seem to be used for health-behavior change at least as much as for learning. What the heck's going on?

Karl:

Games are great tools for promoting health. We all know that we should focus on health and wellness, but we often let other parts of life get in the way. Making staying healthy a game provides, in many cases, that little bit of extra motivation to stay on course. I think games for health work so well because they capitalize on our existing knowledge that we need to stay healthy and then provide progress tracking, points, and other incentives that give us the extra boost to take the extra 100 steps needed to reach our 10,000 for the day. Ironically, I find games used in many life-and-death situations.

 

Will (Question 9):

In your book you have a whole chapter devoted to research on games. I really like your review. Of course, with all the recent research, maybe we’ve learned even more. Indeed, I just did a search of PsycINFO (a database of scientific research in the social sciences). When I searched for “games” in the title, I found 110 articles in peer-reviewed journals in this year (2016) alone. That’s a ton of research on games!!

Let's start with the finding in your book that the methodology of much of the research is not very rigorous. You found that concern voiced by more than one reviewer. Is that still true today (in 2016)? If the research base is not yet solid, what does that mean for us as practitioners? Should we trust the research results, should we be highly skeptical, or where in between those extremes should we land?

Karl:

The short answer, as with any body of research, is to be skeptical but not paralyzed. Research is a continually evolving process, and results rarely give a definitive answer; they only give us guidance. I am sure you remember when "research" indicated that eggs were horrible for you and then "research" revealed that eggs were the ultimate health food. We need to remember that research evolves and is not static. And we need to keep in mind that some research once indicated that smoking had health benefits, so I am always somewhat skeptical. Having said that, I don't let skepticism stop me from doing something. If the research seems to be pointing in a direction but I don't have all the answers, I'll still "try it out" to see for myself.

That said, the research on games, even research done today, could be much more rigorous. There are many flaws, including small sample sizes, no universal definition of games, and too much focus on comparing the outcomes of games with the outcomes of traditional instruction. One would think that argument would be pretty much over, but decade after decade we continue to compare "traditional instruction" with radio, television, games, and now mobile devices. After decades of research the findings are almost always the same: good design, regardless of the delivery medium, is the most crucial factor for learning. Where the research really needs to go, and it's starting to go in this direction, is toward comparing elements of games to see which elements lead to the most effective and deep learning outcomes. For example, is the use of a narrative more effective in a learning game than the use of a leaderboard? Is the use of characters more critical for learning than a strategy-based design? I think blanket comparisons are bad and, in many cases, misleading. Tic-Tac-Toe is a game, but so is Assassin's Creed IV; to say that all games teach pattern recognition because Tic-Tac-Toe teaches pattern recognition is not sound. As Clark Aldrich stated years ago, the research community needs some sort of taxonomy to identify different genres of games and then research the learning impact of each genre.

So, I am always skeptical of game research; I try to carefully describe the limitations of the research I conduct and to carefully review research conducted by others. I tend to like meta-analyses, which are one method of looking at the body of research in a field and drawing conclusions, but even those aren't perfect, as there are arguments about which studies were included and which were excluded.

At this point I think we have some general guidelines about the use of games in learning. We know that games are most effective in a curriculum when they are introduced and described to the learners, the learners then play the game, and afterward there is a debrief. I would like to focus more on what we know from the research on games and how to implement games effectively, rather than on the continuous and, in my opinion, pointless comparison of games to traditional instruction. Let's just focus on what works when games do provide positive learning outcomes.

 

Will (Question 10):

A recent review of serious games (Tsekleves, Cosmas, & Aggoun, 2016) concluded that their benefits were still not fully supported: "Despite the increased use of computer games and serious games in education, there are still very few empirical studies with conclusive results on the effectiveness of serious games in education." This seems a bit strong given other findings from recent meta-analyses, for example, the moderate effect sizes found in the meta-analysis by Wouters, van Nimwegen, van Oostendorp, and van der Spek (2013).

Can you give us a sense of the research? Are serious games generally better, sometimes better, or rarely better than conventional instruction? Or are they better in some circumstances, for some learners, for some topics, rather than others? How should we practitioners think about the research findings?

Karl:

Wouters et al. (2013) found that games are more effective than traditional instruction, as did Sitzmann (2011). But, as you indicated, other meta-analyses have not come to that conclusion. So, again, I think the real issue is that the term "games" is far too broad for easy comparisons, and we need to focus more on the elements of games and how the individual elements intermingle and combine to cause learning to occur. One major problem with research in the field of games is that, to conduct effective and definitive research, we often want to isolate one variable and keep all other variables the same. That process is extremely difficult with games. New research methods might need to be invented to effectively discover how game variables interact with one another. I even saw an article declaring that all games are situated learning and should be studied in that context rather than in an experimental context. I don't know the answer, but there are few simple solutions to game-based research and few definitive declarations of the effectiveness of games.

However, having said all that, here are some things we do know from the research related to using games for learning:

  • Games should be embedded in instructional programs. The best learning outcomes from using a game in the classroom occur when a three-step embedding process is followed. The teacher should first introduce the game and explain its learning objectives to the students. Then the students play the game. Finally, after the game is played, the teacher and students should debrief one another on what was learned and how the events of the game support the instructional objectives. This process helps ensure that learning occurs from playing the game (Hays, 2005; Sitzmann, 2011).
  • Ensure game objectives align with curriculum objectives. Ke (2009) found that the learning outcomes achieved through computer games depend largely on how educators align learning (i.e., learning subject areas and learning purposes), learner characteristics, and game-based pedagogy with the design of an instructional game. In other words, if the game objectives match the curriculum objectives, disjunctions between the game design and curricular goals are avoided (Schifter, 2013). The more closely curriculum goals and game goals are aligned, the more likely the learning outcomes of the game will match the desired learning outcomes for the student.
  • Games need to include instructional support. In games without instructional support, such as elaborative feedback, pedagogical agents, and multimodal information presentations, students tend to learn how to play the game rather than the domain-specific knowledge embedded in the game (Hays, 2005; Ke, 2009; Wouters et al., 2013). Instructional support that helps learners understand how to use the game increases its effectiveness by enabling learners to focus on its content rather than its operational rules.
  • Games do not need to be perceived as "entertaining" to be educationally effective. Although we may hope that learners find a game entertaining, research indicates that a student does not need to perceive a game as entertaining to receive learning benefits. In a meta-analysis of 65 game studies, Sitzmann (2011) found that although "most simulation game models and review articles propose that the entertainment value of the instruction is a key feature that influences instructional effectiveness," entertainment value did not impact learning (see also Garris et al., 2002; Tennyson & Jorczak, 2008; Wilson et al., 2009). Furthermore, what is entertaining to one student may not be entertaining to another. The fundamental criterion in selecting or creating a game should be the learner's active engagement with the content rather than simple entertainment (Dondlinger, 2007; Sitzmann, 2011).

 

Will (Question 11):

If the research results are still tentative, or are only strong in certain areas, how should we as learning designers think about serious games? Is there overall advice you would recommend?

Karl:

First of all, I'd point to the existing research indicating that lectures are not as effective for learning as some believe. Practitioners, faculty members, and others have defaulted to lectures and held them up as the "holy grail" of learning experiences, when the literature clearly doesn't support the lecture as the best method for teaching higher-level thinking skills. If one wants to be skeptical of learning designs, start with the lecture.

Second, I think the guidelines outlined above are a good start. We are learning more all the time, so keep checking for the latest findings. I publish research on my blog (karlkapp.com) and at the ATD Science of Learning blog; and, of course, the Will at Work Learning blog is a good place to look for all things learning research.

Third, we need to take more chances. Don't be paralyzed waiting for research to tell you what to do. Try something; if you fail, try something else. Sure, you can spend your career creating safe PowerPoint-based slide shows where you hit Next to continue, but that doesn't really move your career or the field forward. Take what is known from reading books and from vetted, trusted internet sources and make professionally informed decisions.

 

Will (Question 12):

Finally, if we decide to go ahead and develop or purchase a serious game, what are the five most important things people should know?

Karl:

  1. First, clearly define your goals. Why are you designing or purchasing a serious game, and what do you expect as the outcome? After the learners play the game, what should they be able to do? How should they think? What result do you desire? Without a clearly defined outcome, you will run into problems.
  2. Determine how the game fits into your overall learning curriculum. Games should not be stand-alone; they really should be an integral part of a larger instructional plan. Determine where the serious game fits into the bigger picture.
  3. Consider your corporate culture. Some cultures will allow a fanciful game with zombies or strange characters, and some will not. Know what your culture will tolerate in terms of game look and feel, and then work within those parameters.
  4. If the game is electronic, get your information technology (IT) folks involved early. You'll need to look at download speed, access, browser compatibility, and a host of other technical issues.
  5. Think carefully and deeply before you decide to develop a game internally. Developing good, effective serious games is tough. It’s not a two-week project. Partner with a vendor to obtain the desired result.
  6. (A bonus) Don’t neglect the power of card games or board games for teaching. If you have the opportunity to bring learners together, consider low-tech game solutions. Sometimes those are the most impactful.

 

Will (Question 13):

One of your key pieces of advice is for folks to play games to learn about their power and potential. What kind of games should we choose to play? How should we prioritize our game playing? What kind of games should we avoid because they’ll just be a waste of time or might give us bad ideas about games for learning?

Karl:

I think you should play all types of games. First, pick different types of games from a delivery perspective: card games, board games, casual games on your smartphone, and video games on a game console. Mix it up. Then play different genres, such as role-playing games, cooperative games, matching games, racing games, and games where you collect items (like Pokémon Go). The trick is not to play only games you like but to play a variety of games. You want to build a "vocabulary" of game knowledge. Once you've built that vocabulary, you will have a formidable knowledge base to draw on when you want to create a new learning game.

Also, you can’t just play the games. You need to play and critically evaluate them. Pay attention to what is engaging about the game, what is confusing, how the rules are crafted, what game mechanics are being employed, and so on. Play games with a critical eye. Of course, you run the danger of ruining the fun of games because you will dissect any game you are playing to determine what about it is good and what is bad, but that’s okay; you need that skill to help you design games. You want to think like a game designer because when you create a serious game, you are a game designer. Therefore, the greater the variety of games you play and dissect, the better game designer you will become.

 

Will (Question 14):

If folks are interested, where can they get your book?

Karl:

Amazon.com is a great place to purchase my book, as is the ATD web site. Also, if people have access to Lynda.com, I have several courses there, including “The Gamification of Learning.” And I have a new book coming out in January, co-authored by my friend Sharon Boller, called “Play to Learn,” where we walk readers through the entire serious game design process from conceptualization to implementation. We are really excited about that book because we think it will be very helpful for people who want to create learning games.

 

You can click on the images below to view Karl’s Gamification books on Amazon.com.

 

 

 

 

Research

Sitzmann, T. (2011). A meta-analytic examination of the instructional effectiveness of computer-based simulation games. Personnel Psychology, 64(2), 489–528.

Tsekleves, E., Cosmas, J., & Aggoun, A. (2016). Benefits, barriers and guideline recommendations for the implementation of serious games in education for stakeholders and policymakers. British Journal of Educational Technology, 47(1), 164–183. Available at: http://onlinelibrary.wiley.com/doi/10.1111/bjet.12223/pdf

Wouters, P., van Nimwegen, C., van Oostendorp, H., & van der Spek, E. D. (2013). A meta-analysis of the cognitive and motivational effects of serious games. Journal of Educational Psychology, 105(2), 249–265. http://dx.doi.org/10.1037/a0031311

In my ongoing research interviewing learning executives, I occasionally come across stories or ideas that just can't wait until the full set of data is collected.

This week, I interviewed a director of employee development and training at a mid-sized distribution company. She expressed many of the frustrations I've heard before in my consulting work with L&D (Learning and Development) leaders. For example:

  • Lack of good learning measurement, causing poor feedback to L&D stakeholders.
  • Too many task requirements to allow for strategic thinking in L&D.
  • While some SMEs are great trainers, too many deliver poorly designed sessions.
  • Lack of some sort of competency testing of learners.
  • Lack of follow-through after training, limiting likelihood of successful application to the job.

There were so many changes to make that it appeared overwhelming — as if making positive change was going to take forever.

Then she got an idea. Her organization had begun to train its customers (in addition to training its employees), and her team began to search for ways to demonstrate the value and credibility of the customer-focused courses.

What this director realized was that accreditation might serve multiple purposes if it provided a rigorous evaluation scheme, one that demanded living up to certain standards.

She found an accrediting agency that fit the bill. IACET, the International Association for Continuing Education and Training, would certify her organization, but the organization would have to prove that it engaged in certain practices.

This turned out to be a game changer. The requirements, more often than not, propelled her organization in directions she had hoped they would travel anyway. The accreditation process had become a powerful lever in the director's change-management efforts.

Some of the things that the accreditation required:

  • The L&D organization had to demonstrate training needs, not just take orders for courses.
  • They had to map learning evaluations back to learning objectives, ensuring relevance in evaluations.
  • They had to have objectives that tied into learning outcomes for each course.
  • Trainers had to be certified in training skills (aligned to research-based best practices).
  • Trainers had to be regularly trained to maintain their certifications.
  • Et cetera…

While before it was difficult for her to get some of her SMEs to take instructional design seriously, the accreditation constraints now propelled them in the right direction. Whereas before, SMEs balked at creating tests of competence, now the accreditation requirements demanded compliance. Whereas before, her SMEs could skip training on evidence-based learning practice, now they were compelled to take it seriously; otherwise they might lose their accreditation, and with it the differentiation their training provides to customers.

The accreditation process was a catalyst, but it wouldn't work on its own, and it's not a panacea. The director acknowledges that a full, long-term change-management effort is required, but accreditation has helped her move the needle toward better learning practices.

Connie Malamed is The eLearning Coach, an intriguing podcaster, and the author of two fantastic books on visual design. Here I interview her about her most recent book, Visual Design Solutions.

 

Here is the book:

Here is Connie:

1.

Will:
Connie, in your book, Visual Design Solutions: Principles and Creative Inspiration for Learning Professionals, your goal seems to be to help learning professionals utilize effective visuals to improve their learning outcomes. Indeed, you dedicate the book to “the hard-working creative learning professionals who want to make a difference.”

Tell me about your hopes for the book and the importance of visual design for learning professionals.

 

Connie:
My three goals in writing this book were to: 1) prove that it is possible to improve one’s visual design skills without being an artist, 2) demonstrate the benefits of using visuals to enhance and amplify learning, and 3) raise awareness about the importance of aesthetics in a learner’s experience.

For those with normal vision, the brain processes more sensory information from the eyes than from any other sense. So learning professionals should expect that the visual aspect of instructional materials would be of great importance to comprehension, retention and the user experience. The good news is that anyone can become more competent in visual design by learning, applying and practicing the foundation principles.

 

2.

Will:
Many learning professionals enter the field with little or no background or experience in graphic arts, visualization principles, or aesthetics—and yet you declare in your book that “you do not need drawing talent to work as a visual designer.” First, let me ask you, “Why not?” Second, let me ask you what key skills people do need to be effective at the visual aspects of learning design?

 

Connie:
Early in my career, I met an excellent designer who didn’t know how to draw. He told me he wasn’t an illustrator. I was shocked. Since then, I’ve met and read about many designers who do not illustrate. Visual design involves the arrangement of images and text in graphic space. To be able to do this, one doesn’t need to render with a pen or pencil. Of course, it’s always nice to brainstorm ideas with a pencil and sketchpad, but visual concepts can be communicated using geometric shapes and stick figures.

The skills that I think people need for visual design competence can be learned. Here is my list:

  • An understanding of how to think about and solve visual problems.
  • Foundation principles of design, such as using white space, establishing a visual hierarchy, and appreciating typography.
  • Awareness of design in the world around you to see how others have solved visual problems.

 

3.

Will:
I noticed that you begin your book with lots of research supporting the benefits of using visuals. But certainly visuals can also be used in ways that cause harm. What are some of the problems involved in using visuals? What are some of the most common mistakes learning professionals make?


Connie:

Right. Like anything else, it takes thoughtfulness to come up with an effective visual design. I think common problems are: cluttering a layout with too many flourishes, using irrelevant graphics that are distracting, and splitting attention so that the visuals and text or activity are not well integrated. One way to avoid common mistakes is to get your work critiqued by peers, sponsors and potential users.

 

4.

Will:
In the book, you emphasize white space, and yet I’ll bet this is one of the hardest things for us instructional designers to get. Just as we tend to cram content into our curriculums and don’t leave enough time for learning, I’ve often seen designs that cram content into visuals without thinking about white space. What is white space, and what are the top three things learning designers should realize about it?

 

Connie:
White space is also known as negative space. It’s the area in a visual that does not contain any images or text. It even includes the area in between letters. Here are three tips about working with white space.

  1. Think of white space as another element, just like text and images. These three building blocks of design have to work together to create a clear communication. Without white space, you can’t have form and without form, you can’t have white space.
  2. As you design, if you begin to focus on the shape of the white space, you begin to bring it into the foreground perceptually. Become conscious of the white space and make sure the shape is pleasing and that it’s not broken up into tiny little pieces.
  3. White space gives a viewer’s eyes a place to rest and allows a design to breathe. So don’t be stingy with your white space. Let your designs have some spaciousness.

 

 

 

5.

Will:
I put a ton of time into creating PPT slides for my workshops and presentations—and I’ve developed some beliefs over the years that may or may not be true. Would you comment and critique my visual-design prejudices?

  1. Never use clip art; photos are cheap, easily available, and convey more credibility.
  2. Using a transparency fade (for example when you take a photo but gradually fade one side of it into the background) is good looking, adds credibility, and enables room for pertinent text.
  3. It is better to have one major learning point per slide with a nice supporting visual, than to offer many ideas on the same slide.
  4. Using objects with some gradient is almost always preferred to no gradient.


Connie:

  1. It depends on what you mean by clip art. Although you want to avoid the silly smiling characters, there are wonderful collections of illustrated and simplified vector drawings that you can use to represent concepts and objects. You can make an entire presentation or eLearning course using this minimalist and distilled style.
  2. Adding a transparency fade is a good way to be able to add text. But it’s not the only way and if it looks very feathered, it could look dated. Other ways to add text are to use a large 1024×768 photo (the size of the entire slide) and then overlay a slightly transparent rectangle on part of the photo where you want the text. Place the text within that rectangle and ensure there is enough contrast to read it.
  3. I think you’re probably right about one point per slide. Don’t tell anyone but sometimes I might put three related points on a slide. Maybe I’m being lazy.
  4. A gradient can give an object a 3D appearance, but it’s not always necessary. The flat design trend has moved away from full gradients and you will now probably see more designs with flat or solid-looking objects. I don’t think there’s a wrong or a right way to fill in objects, but as trends change, a viewer’s idea of what is aesthetically pleasing will change.

 

6.

Will:
You mention perspective in your book. First, tell us what it is. Second, because you recommend it “to add realism to a story or scenario,” could you tell us if there is a secret to searching for photos with perspective in photo databases (so we don’t have to search through an endless array of photos)?

 

Connie:
Perspective is a way to trick the eyes into perceiving three dimensions on what is really a flat surface. I don’t know of any great way to find images with perspective other than to include that word in your search query, “street perspective,” for example.

7.

Will:
You mentioned distilled graphics. What are they and when should we use them?

Connie:
Distilled graphics are simplified, schematic or iconic visuals that represent objects or concepts. We perceive and understand them quickly, similar to the images on road signs. I think it’s a good idea to use these when you want to get an idea across quickly. Also, using a distilled graphic like a silhouette works like a visual suggestion of what it represents without getting into the detail. Another suggestion is to use distilled graphics instead of bullet points, placed near the text to represent the concept or fact. It’s tough to explain without a visual accompaniment!

 

8.

Will:
Connie, maybe you can help me. When I look for photos, sometimes I find myself spending half an hour or more just to find an image I deem acceptable. Am I nuts? Please help me! How long should I spend looking for an image?

Connie:
Will, I’m going to guess that you are a little nuts, but not because you take so long to find photos. But yes, searching for photos is one of the most time-consuming aspects of this career. Most stock photo sites that weren’t specifically made for eLearning seem to have an advertising/marketing focus. The photographers still have that mindset, where rather than showing people in realistic situations, they show people smiling at the camera or cheering. There’s not enough diversity in the image choices either. I got so frustrated one day, I sat down and wrote an article about this problem: 21 Reasons Why Stock Photo Sites Make Me Cry.

 

9.

Will:
I have a new favorite font, and while it’s in Microsoft Office, some of the online meeting tools replace my beautiful artsy font with a boring font, often of the wrong size, when they convert my slides. Is there any way to work around this? Can I search and replace fonts, for example?

Connie:
For those situations, you could make a second version of the presentation for online meeting tools and change the font to a similar but more common one in your template. Even though it might be boring, everything will line up the way you want it to. If the online meeting tool is hosted by a professional association or a company, you can see if they would be willing to install the font on the hosting computer.

As to replacing fonts, I’ve always replaced the font in the Slide Master and that usually works. You may have to choose the Master Layout again in your slide though.

 

10.

Will:
Connie, I love the section in your book on creating a visual hierarchy. As you describe it, visual hierarchies send unconscious signals to our brains that prompt us to look at certain parts of a visual before other parts. I didn’t really know this until I read your book. Thank you! I’m a big believer in using PowerPoint (or Keynote, etc.) to reveal aspects of our visuals one at a time, which is probably cheating my way to an effect similar to a hierarchy, but sometimes there would be a huge benefit in having a true visual hierarchy. Please educate us on how to create a visual hierarchy, and tell us why it’s so important.

 

Connie:
A visual hierarchy indicates where the viewer should look first, second and perhaps third. Make your most important element the first thing that people will look at. You can do this through contrast. Make it larger, place it in the upper left or top of the screen, make the element more colorful or brighter, or add movement (if appropriate to the learning). There are other ways, but that’s a good place to start.

 

11.

Will:
I notice you added humor to your book. I laughed out loud when you told me that to transform myself into an “expert designer” I’d need to wear all black. LOL. Your book is extremely helpful, and you don’t even have to change your wardrobe.

What’s your most important message for us learning professionals? Besides reading your book, what else can we do to be more effective? And, are there any methods you’ve seen for getting evaluation feedback on our visual designs?

Connie:
There are a lot of things learning professionals can do in addition to reading my book, dressing in all black and getting piercings in weird places:

  • Analyze the visual design in your environment and see what works and what doesn’t. Think about what the designer was trying to achieve. Notice the layout, typography, color palette and focal point. This means studying the design of websites, apps, magazines, brochures, posters, books, catalogs, packaging, billboards, subway ads, store interiors, videos, icons and junk mail. You get the idea.
  • Start an online collection (via Pinterest, bookmark sites, etc.) of designs that you like. Then use these for inspiration the next time you are stuck.
  • Read design books. Although most design books focus on advertisements and branding, they still offer a lot of sound principles and inspiration.

 

12.

Will:
Finally, what’s the best way for people to get your book?

Connie:
Thanks for asking. My book is available on Amazon, at Barnes and Noble, and in the ATD online bookstore, where it is discounted for members. I hope your readers understand that it contains around 130 color graphics, which makes it a little more expensive due to printing costs.

 

Will’s Note:
You can view the book on Amazon
by clicking the image below:

 

 

21st December 2015

Neon Elephant Award Announcement

Dr. Will Thalheimer of Work-Learning Research announces the winner of the 2015 Neon Elephant Award, given this year to Julie Dirksen for her book, Design for How People Learn, just recently released in its second edition. Julie does an incredible job bridging the gap between research and learning practice. Drawing on years of working with clients to build learning interventions, Julie uses her practical experience to draw wisdom from the learning research. Her book is wonderfully written and illustrated, applies research in a practical way, and covers the most critical leverage points for learning effectiveness. Julie speaks with a voice that is authentic and experienced, providing a soothing guidebook for those who dare to learn the truths and complexities of learning design.

Click here to learn more about the Neon Elephant Award…

 

2015 Award Winner – Julie Dirksen

Julie Dirksen is the principal at Usable Learning, providing consulting services in learning strategy and design. For almost two decades, Julie has been working in the workplace learning-and-performance field, playing roles such as instructional designer, elearning developer, university instructor, learning strategist, keynote speaker, and consultant. Julie is one of the leading voices in our field recommending research-based learning design and is one of the authors of the Serious eLearning Manifesto.

 

Why Honored?

Julie Dirksen is honored this year for her book, Design for How People Learn, and for her ongoing work bringing research wisdom to learning design. By creating this book, and updating it just this month, Julie has built a foundational platform to help people get a full and accurate view of learning design. Amazon reviewers speak warmly about how valuable and accessible they find the book.

Like last year’s award winners — Brown, Roediger, and McDaniel; authors of Make it Stick: The Science of Successful Learning — Dirksen excels in the difficult work of research translation. Julie’s unique value-add is that she speaks from years of experience as an instructional designer and learning strategist. When we read her book we feel led by a wise and experienced savant — someone who has an incredible depth of practical experience.

For her efforts sharing practical research-based insights on learning design, the workplace learning-and-performance field owes a grateful thanks to Julie Dirksen.

 

Some Key Links:

 

Click here to learn more about the Neon Elephant Award…

I had the great pleasure of being interviewed recently by Brent Schlenker, long-time elearning advocate. We not only had a ton of fun talking, but Brent steered us into some interesting discussions.

———-

He's created a three-part video series of our discussion:

———-

Brent is a great interviewer, and he gets some top-notch folks to join him. Check out his blog.

 

 

21st December 2014

Neon Elephant Award Announcement

Dr. Will Thalheimer of Work-Learning Research announces the winner of the 2014 Neon Elephant Award, given this year to Peter C. Brown, Henry L. Roediger III, and Mark A. McDaniel for their book, Make it Stick: The Science of Successful Learning—a book that brilliantly conveys scientific principles of learning in prose that is easy to digest, comprehensive and true in its recommendations, highly-credible, and impossible to ignore or forget.

Roediger and McDaniel are highly-respected learning researchers and Brown is an author and former management consultant. The book is singularly successful because it brings together researchers with a person who is highly skilled in conveying complex concepts to the public. Where too often important scientific research never leaves the darkened halls of the academy, Roediger and McDaniel demonstrate incredible wisdom and humility in collaborating with Peter C. Brown.

Click here to learn more about the Neon Elephant Award…

2014 Award Winners –
Peter C. Brown, Henry L. Roediger III, and Mark A. McDaniel

Peter C. Brown is an author and retired management consultant. He’s written non-fiction books and even a novel, which was reviewed favorably by many of the top media outlets. Indeed, the Washington Post said this: “Peter C. Brown’s sure and often lyrical evocation of the wild Alaskan coast speaks not only of knowledge but also of love.” His contribution to Make It Stick surely was in his skill in taking cold steely knowledge and bringing warmth and relevance to it.

Henry L. Roediger is the James S. McDonnell Distinguished University Professor of Psychology at Washington University in St. Louis. He’s had a long and distinguished career as a learning-and-memory researcher. His bio highlights his research background: “Roediger’s research has centered on human learning and memory and he has published on many different topics within this area. He has published over 200 articles and chapters on various aspects of memory.” Roediger has served as an editor on numerous scientific journals and helped found the journal Psychological Science in the Public Interest, which reviews research and makes it available and accessible to the public. He was President of the American Psychological Society (now the Association for Psychological Science), the largest psychological organization dedicated to scientific psychology. He’s held a Guggenheim fellowship. He has been named one of the most highly cited researchers in psychology.

Mark A. McDaniel is a Professor of Psychology at Washington University in St. Louis. He’s also had a long and distinguished career as a learning-and-memory researcher. As captured on his faculty webpage, “His most significant lines of work encompass several areas: prospective memory, encoding processes in enhancing memory, retrieval processes and mnemonic effects of retrieval, functional and intervening concept learning, and aging and memory. One unifying theme in this research is the investigation of factors and processes that lead to memory and learning failures. In much of this work, he has extended his theories and investigations to educationally relevant paradigms.” He has been a Fellow of the Society of Experimental Psychologists and President of the American Psychological Association, Division 3.

 

Why Honored?

Brown, Roediger, and McDaniel are being honored this year for their book, Make it Stick: The Science of Successful Learning. By creating this wonderful work, they have reached thousands and will continue to influence many teachers, professors, trainers, instructional designers, and elearning developers for years to come. Already, within the first year of publication, the book has over 100 Amazon reviews!

It is difficult work to synthesize research into digestible chunks for public consumption. Brown, Roediger, and McDaniel have done an absolutely superlative job in making the research relevant, in engaging the reader, in conveying deeply complex concepts in a manner that makes sense, and in motivating readers to feel urgency to make learning-design improvements.

I know they’ve already made a difference in the workplace learning-and-performance field because my clients have told me how valuable they’ve found Make It Stick. I’ve even seen senior managers (non-learning professionals) get a new religion for learning by reading Make It Stick. After seeing the gaps between ideal learning practices and his organization’s current practices, one senior military leader engaged his team in an intense learning audit to determine how well their current learning was aligned with the learning research. It’s only when research creates action like this that its full benefits are realized.

For bringing potent learning research to the public, the workplace learning-and-performance field owes a grateful thanks to Peter C. Brown, Henry L. Roediger III, and Mark A. McDaniel.

 

Some Key Links:

 

 

 

Click here to learn more about the Neon Elephant Award…