Those of us in the learning professions are naturally enamored with the power of learning. This is all fine and good (learning is necessary for human survival and for our most enlightened achievements), but too narrow a focus on learning misses a key responsibility. Indeed, learning without sustained behavior change is like feeding a man who's planning to jump off a bridge. Nice, but largely beside the point.

The bottom line is that we learning professionals must look not only to the science of learning, but also to the science of behavior change.

My friend and colleague Julie Dirksen has been thinking about behavior change for years. Here's a recent article she wrote:

Here is another recent resource on behavior change:

It's good to keep this all in perspective. Science often moves slowly and in fits and starts. There is great promise in the many and varied research areas under study. You can see this most fully in the health-behavior-change field, where a ton of research is being done. Here's a quick list of behavior-change research areas:

  • Cardiac health
  • Obesity
  • Clean cooking
  • Asthma
  • HIV and STD prevention
  • College-student drinking
  • Cancer prevention
  • Child survival
  • Recycling
  • Use of hotel towels
  • Young-driver distraction
  • Hand washing
  • Encouraging walking and cycling
  • Smoking cessation
  • Healthy pregnancy behaviors
  • Promoting physical activity

Okay, the list is almost endless.

One of the findings is not surprising: lasting behavior change is very difficult. Think how hard it is to lose weight and keep it off, or to stop an internet addiction. So it's great that researchers are looking into this.

In the learning field, we have our own version of behavior-change research. It's called transfer. We've already learned a lot about how to get people to transfer what they've learned back to their jobs or into their lives. We're not done learning, of course.

One thing we do know is that training by itself is rarely sufficient to produce lasting change. Sometimes our learners will take what they've learned, put it immediately into practice, deepen their own learning, and continue to learn, engage, and use what they've learned over time. Too often, though, they forget, they get distracted, or they get no support.

 

Summary

My four messages to you are these:

  1. Keep your eyes open for Behavior Change Research.
  2. Keep your eyes open for Transfer-of-Learning Research.
  3. Don't be a fool in thinking that training/learning is enough.
  4. We learning professionals have a responsibility to enable usable behavior change.

 

Dr. Karl Kapp is one of the learning field's best research-to-practice gurus! He is legendary for his generosity and indefatigable energy, and it is my pleasure to interview him for his wisdom on games, gamification, and their intersection.

His books on games and gamification are wonderful. You can click on the images below to view them on Amazon.com.

 

 

The following is a master class on games and learning:

 

Will (Question 1):

Karl, you’ve written a definitive exploration of Gamification in your book, The Gamification of Learning and Instruction: Game-Based Methods and Strategies for Training and Education. As I read your book I was struck by your insistence that Gamification “is not the superficial addition of points, rewards, and badges to learning experiences.” What the heck are you talking about? Everybody knows that gamification is all about leaderboards, or so the marketplace would make us believe… [WINK, WINK] What are you getting at in your repeated warning that gamification is more complex than we might think?

Karl:

If you examine why people play games, the reasons are many, but often players talk about the sense of mastery, the enjoyment of overcoming a challenge, the thrill of winning and the joy of exploring the environment. They talk about how they moved from one level to another or how they encountered a “boss level” and defeated the boss after several attempts or how they strategized a certain way to accomplish the goal of winning the game. Or they describe how they allocated resources so they could defeat a difficult opponent. Rarely, if ever, do people who play games talk about the thrill of earning a point or the joy of being number seven on the leaderboard or the excitement of earning a badge just for showing up.

The elements of points, badges and leaderboards (PBLs) are the least exciting and enticing elements of playing games. So there is no way we should lead with those items when gamifying instruction. Sure, PBLs play a role in making a game more understandable or in determining how far away a player is from the "best," but by themselves they do little to internally motivate players. Reliance solely on the PBL elements of games to drive learner engagement is not sustainable, and those elements are not even what makes games motivational or engaging. It's the wrong approach to learning and motivation. It's superficial; it's not deep enough to have lasting meaning.

Instead, we need to look at the more intrinsically motivating and deeper elements of games such as challenge, mystery, story, constructive feedback (meaningful consequences), strategy, socialization, and other elements that make games inherently engaging. We miss a large opportunity when we limit our "game thinking" to points, badges and leaderboards. We need to expand our thinking to include elements that truly engage a player and draw them into a game. These are the things that make games fun and frustrating and worth our investment in time.

 

Will (Question 2):

You wrote that "too many elements of reality and the game ceases to be engaging"—and I'm probably taking this out of context—but I wonder if that is true in all cases? For example, I can imagine a realistic flight simulator for fighter pilots that creates an almost perfect replica of the cockpit, g-forces, and more, that would be highly engaging… On the other hand, my 13-year-old daughter got me hooked on Tanki, an online tank shoot-em-up game, and there are very few elements of reality in the game—and I, unfortunately, find it very engaging. Is it different for novices and experts? Are the recommendations for perceptual fidelity different for different topic areas, different learning goals, et cetera?

Karl:

A while ago, I read a fake advertisement for a military game. It was a parody. The fake game description described how the "ultra-realistic" military game would be hours of fun because it was just like actually being in the military. The description told the player that he or she would have hours of fun walking to the mess hall, maintaining equipment, getting gasoline for the jeep, washing boots, patrolling, zigging, and cleaning latrines. None of these things is really fun; in fact, they are boring, but they are part of the life of being in the military. Military games don't include these mundane activities. Instead, you are always battling an enemy or strategizing what to do next. The actions that a military force performs 95% of the time are not included in the game because they are too boring.

If games were 100% realistic, they would not be fun. So, instead, games are an abstraction of reality; they focus on things within reality that can be made engaging or interesting. If a game reflected reality 100%, game play would be boring. Now certainly, games can be designed to "improve" reality and make it more fun. In the game The Sims, you wake up, get dressed, and go to work, which all seems pretty mundane. However, these realistic activities in The Sims are an abstraction of the tasks you actually perform. The layer of abstraction makes the game more exciting, engaging, and fun. But in either the military game case or The Sims, too much reality is not fun.

The flight simulator needs to be 100% realistic because it's not really a game (although people do play it as a game); the real purpose of a simulation is training and the perfection of skills. A flight simulator can be fun for some people to "play," but in a 100% realistic simulator, if you don't know what you are doing, it's boring because you keep crashing, at least for someone like me who doesn't know how to fly. If you made a World War II air battle game with 100% realistic controls for my airplane, it wouldn't be fun. In game design, we need to balance elements of reality with the learning goal and the element of engagement.

For some people, a simulator can be highly engaging because the learner is performing the task she would do on the job. So there needs to be a balance in games and simulations to have the right amount of reality for the goals you are trying to achieve.

 

Will (Question 3):

In developing a learning game, what should come first, the game or the goals (of learning)?

Karl:

Learning goals must come first and must remain at the forefront of the game design process. Too often I see the mistake of a design team becoming too focused on game elements and losing sight of the learning goals. In our field, we are paid to help people learn, not to entertain them. Learning first.

Having said that, you can't ignore the game elements or treat them as second-class citizens; you can't bolt on a points system and think you have now developed a fun game—you haven't. The best process involves simultaneously integrating game mechanics and learning elements. It's tricky, and not a lot of instructional designers have experience or training in this area, but it's critical to integrate game and learning elements; the two need to be designed together. Neither can be an afterthought.

 

Will (Question 4):

Later we'll talk about the research you've uncovered about the effectiveness of games. As I peruse the literature on games, the focus is mostly on the potential benefits of games. But what about drawbacks? I, for one, "waste" a ton of time playing games. Opportunity costs are certainly one issue, but maybe there are other drawbacks as well, including addiction to the endorphins and adrenaline, or a heightened state of engagement during gaming that may make other aspects of living (or learning) seem less interesting and engaging. What about learning bad ideas, being desensitized to violence, sexual predation, or other anti-social behaviors? Are there downsides to games? And, in your opinion, has the research to date done enough to examine negative consequences of games?

Karl:

Yes, games can have horrible, anti-social content. They can also have wonderful, pro-social content. In fact, a growing area of game research focuses on possible pro-social aspects of games. The answer really is the content. A "game" per se is neither pro- nor anti-social, like any other instructional medium. Look at speeches. Stalin gave speeches filled with horrible content and Martin Luther King, Jr. gave speeches filled with inspiring content. Yet we never seem to ask the question "Are speeches inherently good or bad?"

Games, like other instructional media, have caveats that good instructional designers need to factor in when deciding if a game is the right instructional intervention. Certainly time is a big factor. It takes time both to develop a game and to play a game. So this is a huge downside. You need to weigh the impact you think the game will have on learner retention or knowledge against another instructional intervention. Although, I can tell you there are at least two meta-analyses that indicate that games are more effective for learning than traditional, lecture-based instruction. But the point is not to blindly choose a game over lecture or discussion. The decision regarding the right instructional design needs to be thoughtful. Knowing the caveats should factor into the final design decision.

Another caveat is that games should not be "stand-alone." It's far better for a learning game to be included as part of a larger curriculum rather than developed without any sense of how it fits into the larger picture. Designers need to make sure they don't lose sight of the learning objective. If you are considering deploying a game within your organization, you have to make sure it's appropriate for your culture. Another big factor to consider is how losers are handled in the game. If a person is not successful at a game, what are the repercussions? What if she gets mad and shuts down? What if he walks away halfway through the experience because he is so frustrated? These types of contingencies need to be considered when developing a game. So, yes, there are downsides to games, as there are downsides to other types of instruction. Our job, as instructional designers, is to understand as many downsides and upsides as possible for many different design possibilities and make an informed, evidence-based decision.

 

Will (Question 5):

As you found in your research review, feedback is a critical element in gaming. I’ve anointed “feedback” as one of the most important learning factors in my Decisive Dozen – as feedback is critical in all learning. The feedback research doesn’t seem definitive in recommending immediate versus delayed feedback, but the wisdom I take from the research suggests that delayed feedback is beneficial in supporting long-term remembering, whereas immediate feedback is beneficial in helping people “get” or comprehend key learning points or contingencies. In some sense, learners have to build correct mental models before they can (or should) reinforce those understandings through repetitions, reinforcement, and retrieval practice.

Am I right that most games provide immediate feedback? If not, when is immediate feedback common in games, when is delayed feedback common? What missed opportunities are there in feedback design?

Karl:

You are right; most games provide immediate, corrective feedback. You know right away if you are performing the right action and, if not, the consequences of performing the wrong action. A number of games also provide delayed feedback in the form of after-action reviews. These are often seen in games using branching. At the end of the game, the player is given a description of the choices she made versus the correct choices. So, delayed feedback is common in some types of games. As for what's missing, I think most learning games do a poor job of layering feedback. In well-designed video games, at the first level of help, a player can receive a vague clue. If this doesn't work or too much time passes, the game provides a more explicit clue and finally, if that doesn't work, the player receives step-by-step instructions. Most learning games are too blunt. They tend to give the player the answer right away rather than layering clues or escalating the help. I think that is a huge missed opportunity.
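To make that layering concrete, here's a minimal sketch of escalating-hint logic (my own illustration, not from Karl; the hint text and the one-level-per-failure escalation rule are invented):

```python
# Layered feedback: vague clue -> explicit clue -> step-by-step instructions.
HINTS = [
    "Something in this room opens the door...",        # level 0: vague clue
    "Look closely at the bookshelf by the window.",    # level 1: explicit clue
    "Pull the red book, then turn the handle twice.",  # level 2: step-by-step
]

def hint_for(failed_attempts: int) -> str:
    """Escalate one hint level per failed attempt, capped at the last hint."""
    return HINTS[min(failed_attempts, len(HINTS) - 1)]

# Simulated play-through: the player fails three times in a row
for attempt in range(3):
    print(f"Attempt {attempt + 1} failed. Hint: {hint_for(attempt)}")
```

The design point is that the learner keeps doing as much of the cognitive work as possible before the game finally gives the answer away.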

 

Will (Question 6):

By the way, your book does a really nice job in describing the complexity and subtlety of feedback, including Robin Hunicke’s formulation for what makes feedback “juicy.” What subtleties around feedback do most of us instructional designers or instructors tend to miss?

Karl:

Our feedback in learning games and even elearning modules is just too blunt. We need more subtlety. Hunicke describes the need for feedback to have many different attributes, including the need for feedback to be tactile and coherent. She refers to tactile feedback as creating an experience where the player can feel the feedback as it is occurring on screen, so that it's not forced or unnatural within the game play. Instructional designers typically don't create feedback the player or learner feels; instead, they create feedback that is "in your face," such as "Nice job!" or "Sorry, try again." She describes coherent feedback as feedback that stays within the context of the game. It is congruent with on-screen actions and activities as well as with the storyline unfolding as the interactions occur. Our learning games seem to fail at including both of these elements in our feedback. In general, our field needs to focus on feedback that is more naturally occurring and within the flow of the learning.

 

Will (Question 7):

Do learners have to enjoy the game to learn from it? What are the benefits of game pleasure? Are there drawbacks at all?

Karl:

Actually, research by Traci Sitzmann (2011) indicates that a learner doesn't have to report that he or she was "entertained" to learn from a serious game. So fun should not be the standard by which we measure the success of a game. Instead, she found that what makes a game effective for learning is the level of engagement. Engagement should be the goal when designing a learning game. There are a number of studies indicating that games are motivational, although one meta-analysis on games indicated that motivation was not a factor. So, I am not sure pleasure is a necessary factor for learning. Instead, I tend to focus more on building engagement and having learners make meaningful decisions, and less on learner enjoyment and fun. This tends to run counter to why most people want a learning game, but the reason we should want learning games is to encourage engagement and higher-order thinking, not simply to make boring learning fun. Engagement, mastery, and tough decision making might not always be fun but, as you indicated in your questions about simulations, they can be engaging, and learning results from engagement and from understanding the consequences of actions taken during that engagement.

 

Will (Question 8):

As I was perusing research on games, one of my surprises was that games seemed to be used for health-behavior change at least as much as for learning. What the heck's going on?

Karl:

Games are great tools for promoting health. We all know that we should focus on health and wellness, but we often let other life elements get in the way. Making staying healthy a game provides, in many cases, that little bit of extra motivation to keep you on course. I think games for health work so well because they capitalize on our existing knowledge that we need to stay healthy and then provide progress tracking, points, and other incentives: that extra boost that makes us take the extra 100 steps needed to get our 10,000 for the day. Ironically, I find games used in many life-and-death situations.

 

Will (Question 9):

In your book you have a whole chapter devoted to research on games. I really like your review. Of course, with all the recent research, maybe we’ve learned even more. Indeed, I just did a search of PsycINFO (a database of scientific research in the social sciences). When I searched for “games” in the title, I found 110 articles in peer-reviewed journals in this year (2016) alone. That’s a ton of research on games!!

Let's start with the finding in your book that the methodology of much of the research is not very rigorous. You found that concern from more than one reviewer. Is that still true today (in 2016)? If the research base is not yet solid, what does that mean for us as practitioners? Should we trust the research results, should we be highly skeptical, or where in between these extremes should we be?

Karl:

The short answer, as with any body of research, is to be skeptical but not paralyzed. Research on games is continually evolving, and results are rarely a definitive answer; they only give us guidance. I am sure you remember when "research" indicated that eggs were horrible for you and then "research" revealed that eggs were the ultimate health food. We need to know that research evolves and is not static. And we need to keep in mind that some research once indicated that smoking had health benefits, so I am always somewhat skeptical. Having said that, I don't let skepticism stop me from doing something. If the research seems to be pointing in a direction but I don't have all the answers, I'll still "try it out" to see for myself.

That said, the research on games, even research done today, could be much more rigorous. There are many flaws, including small sample sizes, no universal definition of games, and too much focus on comparing the outcomes of games with the outcomes of traditional instruction. One would think that argument would be pretty much over, but decade after decade we continue to compare "traditional instruction" with radio, television, games, and now mobile devices. After decades of research the findings are almost always the same: good design, regardless of the delivery medium, is the most crucial aspect for learning. Where the research really needs to go, and it's starting to go in that direction, is toward comparing elements of games to see which elements lead to the most effective and deep learning outcomes. So, for example, is the use of a narrative more effective in a learning game than the use of a leaderboard, or is the use of characters more critical for learning than the use of a strategy-based design? I think the blanket comparisons are bad and, in many cases, misleading. For example, Tic-Tac-Toe is a game, but so is Assassin's Creed IV. To say that all games teach pattern recognition because Tic-Tac-Toe teaches pattern recognition is not sound. As Clark Aldrich stated years ago, the research community needs some sort of taxonomy to help identify different genres of games and then research the learning impact of those genres.

So, I am always skeptical of game research; I try to carefully describe the limitations of the research I conduct and to carefully review research that has been conducted by others. I tend to like meta-analyses, which are one method of looking at the body of research in the field and then drawing conclusions, but even those aren't perfect, as there are arguments about which studies were included and which were not.

At this point I think we have some general guidelines about the use of games in learning. We know that games are most effective in a curriculum when they are introduced and described to the learners, then the learners play the game and then there is a debrief. I would like to focus more on what we know from the research on games and how to implement games effectively rather than the continuous, and in my opinion, pointless comparison of games to traditional instruction. Let’s just focus on what works when games do provide positive learning outcomes.

 

Will (Question 10):

A recent review of serious games (Tsekleves, Cosmas, & Aggoun, 2016) concluded that their benefits were still not fully supported: "Despite the increased use of computer games and serious games in education, there are still very few empirical studies with conclusive results on the effectiveness of serious games in education." This seems a bit strong given other findings from recent meta-analyses, for example the moderate effect sizes found in a meta-analysis from Wouters, van Nimwegen, van Oostendorp, & van der Spek (2013).

Can you give us a sense of the research? Are serious games generally better, sometimes better, or rarely better than conventional instruction? Or are they better in some circumstances, for some learners, for some topics, rather than others? How should we practitioners think about the research findings?

Karl:

Wouters et al. (2013) found that games are more effective than traditional instruction, as did Sitzmann (2011). But, as you indicated, other meta-analyses have not come to that conclusion. So, again, I think the real issue is that the term "games" is way too broad for easy comparisons, and we need to focus more on the elements of games and how the individual elements intermingle and combine to cause learning to occur. One major problem with research in the field of games is that to conduct effective and definitive research we often want to isolate one variable and keep all other variables the same. That process is extremely difficult to do with games. New research methods might need to be invented to effectively discover how game variables interact with one another. I even saw an article that declared that all games are situational learning and should be studied in that context rather than in an experimental context. I don't know the answer, but there are few simple solutions to game-based research and few definitive declarations of the effectiveness of games.

However, having said all that, here are some things we do know from the research related to using games for learning:

  • Games should be embedded in instructional programs. The best learning outcomes from using a game in the classroom occur when a three-step embedding process is followed. The teacher should first introduce the game and explain its learning objectives to the students. Then the students play the game. Finally, after the game is played, the teacher and students should debrief one another on what was learned and how the events of the game support the instructional objectives. This process helps ensure that learning occurs from playing the game (Hays, 2005; Sitzmann, 2011).
  • Ensure game objectives align with curriculum objectives. Ke (2009) found that the learning outcomes achieved through computer games depend largely on how educators align learning (i.e., learning subject areas and learning purposes), learner characteristics, and game-based pedagogy with the design of an instructional game. In other words, if the game objectives match the curriculum objectives, disjunctions are avoided between the game design and curricular goals (Schifter, 2013). The more closely aligned curriculum goals and game goals, the more likely the learning outcomes of the game will match the desired learning outcomes of the student.
  • Games need to include instructional support. In games without instructional support such as elaborative feedback, pedagogical agents, and multimodal information presentations (Hays, 2005; Ke, 2009; Wouters et al., 2013), students tend to learn how to play the game rather than learn the domain-specific knowledge embedded in the game. Instructional support that helps learners understand how to use the game increases the effectiveness of the game by enabling learners to focus on its content rather than its operational rules.
  • Games do not need to be perceived as being "entertaining" to be educationally effective. Although we may hope that Maria finds the game entertaining, research indicates that a student does not need to perceive a game as entertaining to receive learning benefits. In a meta-analysis of 65 game studies, Sitzmann (2011) found that although "most simulation game models and review articles propose that the entertainment value of the instruction is a key feature that influences instructional effectiveness," entertainment is not a prerequisite for learning (see also Garris et al., 2002; Tennyson & Jorczak, 2008; Wilson et al., 2009). Furthermore, what is entertaining to one student may not be entertaining to another. The fundamental criterion in selecting or creating a game should be the learner's active engagement with the content rather than simple entertainment (Dondlinger, 2007; Sitzmann, 2011).

 

Will (Question 11):

If the research results are still tentative, or are only strong in certain areas, how should we as learning designers think about serious games? Is there overall advice you would recommend?

Karl:

First of all, I'd like to point to the existing research indicating that lectures are not as effective for learning as some believe. Practitioners, faculty members, and others have defaulted to lectures and held them up as the "holy grail" of learning experiences, when the literature clearly doesn't back up lectures as the best method for teaching higher-level thinking skills. If one wants to be skeptical of learning designs, start with the lecture.

Second, I think the guidelines outlined above are a good start. We are literally learning more all the time, so keep checking to see the latest. I try to publish research findings on my blog (karlkapp.com) and at the ATD Science of Learning blog; and, of course, the Will at Work Learning blog is a good place to look for all things learning research.

Third, we need to take more chances. Don't be paralyzed waiting for research to tell you what to do. Try something; if you fail, try something else. Sure, you can spend your career creating safe PowerPoint-based slide shows where you hit Next to continue, but that doesn't really move your career or the field forward. Take what is known from reading books and from vetted and trusted internet sources and make professionally informed decisions.

 

Will (Question 12):

Finally, if we decide to go ahead and develop or purchase a serious game, what are the five most important things people should know?

Karl:

  1. First, clearly define your goals. Why are you designing or purchasing a serious game, and what do you expect as the outcome? After the learners play the game, what should they be able to do? How should they think? What result do you desire? Without a clearly defined outcome, you will run into problems.
  2. Determine how the game fits into your overall learning curriculum. Games should not be stand-alone; they really should be an integral part of a larger instructional plan. Determine where the serious game fits into the bigger picture.
  3. Consider your corporate culture. Some cultures will allow a fanciful game with zombies or strange characters and some will not. Know what your culture will tolerate in terms of game look and feel and then work within those parameters.
  4. If the game is electronic, get your information technology (IT) folks involved early. You'll need to look at download speed, access, browser compatibility, and a host of other technical issues.
  5. Think carefully and deeply before you decide to develop a game internally. Developing good, effective serious games is tough. It’s not a two-week project. Partner with a vendor to obtain the desired result.
  6. (A bonus) Don’t neglect the power of card games or board games for teaching. If you have the opportunity to bring learners together, consider low-tech game solutions. Sometimes those are the most impactful.

 

Will (Question 13):

One of your key pieces of advice is for folks to play games to learn about their power and potential. What kind of games should we choose to play? How should we prioritize our game playing? What kind of games should we avoid because they’ll just be a waste of time or might give us bad ideas about games for learning?

Karl:

I think you should play all types of games. First, pick different types of games from a delivery perspective: some card games, board games, casual games on your smartphone, and video games on a game console. Mix it up. Then play different genres, such as role-play games, cooperative games, matching games, racing games, and games where you collect items (like Pokémon Go). The trick is not to just play games that you like but to play a variety of games. You want to build a "vocabulary" of game knowledge. Once you've built that vocabulary, you will have a formidable knowledge base on which to draw when you want to create a new learning game.

Also, you can't just play the games. You need to play and critically evaluate them. Pay attention to what is engaging about the game, what is confusing, how the rules are crafted, what game mechanics are being employed, and so on. Play games with a critical eye. Of course, you run the danger of ruining the fun of games because you will dissect any game you are playing to determine what about it is good and what is bad, but that's OK; you need that skill to help you design games. You want to think like a game designer, because when you create a serious game, you are a game designer. Therefore, the greater the variety of games you play and dissect, the better game designer you will become.

 

Will (Question 14):

If folks are interested, where can they get your book?

Karl:

Amazon.com and the ATD web site are great places to purchase my book. Also, if people have access to Lynda.com, I have several courses on Lynda, including "The Gamification of Learning." And I have a new book coming out in January, co-authored by my friend Sharon Boller, called "Play to Learn," in which we walk readers through the entire serious game design process from conceptualization to implementation. We are really excited about that book because we think it will be very helpful for people who want to create learning games.

 

You can click on the images below to view Karl’s Gamification books on Amazon.com.

 

 

 

 

Research

Sitzmann, T. (2011). A meta-analytic examination of the instructional effectiveness of computer-based simulation games. Personnel Psychology, 64(2), 489–528.

Tsekleves, E., Cosmas, J., & Aggoun, A. (2016). Benefits, barriers and guideline recommendations for the implementation of serious games in education for stakeholders and policymakers. British Journal of Educational Technology, 47(1), 164-183. Available at: http://onlinelibrary.wiley.com/doi/10.1111/bjet.12223/pdf

Wouters, P., van Nimwegen, C., van Oostendorp, H., & van der Spek, E. D. (2013). A meta-analysis of the cognitive and motivational effects of serious games. Journal of Educational Psychology, 105(2), 249-265. http://dx.doi.org/10.1037/a0031311

This post is for research geeks, and it's really just an introduction — maybe a gentle warning — as I don't have time or the statistical expertise to explore this deeply.

 

The Basics

When scientific experiments get done, researchers typically compare one experimental treatment to a second one (or to no treatment at all). So, for example, we might compare two versions of the same elearning program, one that utilizes spaced repetitions and a second that uses unspaced repetitions. When we do such comparisons, we need to know two things before we can draw conclusions:

  1. Statistical Significance:
    How likely is it that the experimental results were caused by random chance? Social scientists conventionally require that results as strong as those observed would occur by chance less than 5% of the time. In other words, if the treatment actually made no difference and we ran the same experiment 100 times, we would expect a result this strong fewer than 5 times.
  2. Effect Size:
    How different are the actual results? Are the differences large enough to be meaningful?

If we don't take effect sizes into account, we can have an experiment that is statistically significant but not practically significant. That is, we can have statistical significance, but not effect-size significance. Without looking at effect-size calculations, we can be fooled into thinking that an experimental result is meaningful when it actually shows no substantial advantage for one learning method compared with another.

So, for example, suppose that a new mobile-learning app improves learning by less than one-half of one percent, but costs $10,000 per learner…
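To make the distinction concrete, here's a minimal sketch (my own illustration with made-up numbers, not from any study discussed here) of how a huge sample can make a trivial difference "statistically significant" while the effect size, measured as Cohen's d, stays negligible:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000  # learners per group: a huge sample

# Hypothetical test scores: the new app adds a tiny 0.3 points to a mean of 70
control = rng.normal(70.0, 10.0, n)
treated = rng.normal(70.3, 10.0, n)

t, p = stats.ttest_ind(treated, control)

# Cohen's d: mean difference divided by the pooled standard deviation
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd

print(f"p-value: {p:.1e}")    # tiny, so "statistically significant"
print(f"Cohen's d: {d:.3f}")  # ~0.03, far below even the 0.2 "small" benchmark
```

The p-value passes any conventional threshold, yet the practical advantage is a fraction of a point on a 100-point test: statistical significance without practical significance.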

Meta-analyses are statistical studies that compile many scientific studies, looking at the whole of the results. Meta-analyses have been a potent source of wisdom because they take complicated and complex results over a range of studies and combine them in a way that helps us make sense of the overall trends. Meta-analyses rely on effect sizes to calculate the overall importance of the factors being studied.
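Mechanically, the simplest version of that combination is inverse-variance weighting: each study's effect size counts in proportion to its precision. Here's a toy sketch (my own, with hypothetical numbers) of fixed-effect pooling:

```python
import math

# (effect size d, variance of d) for three hypothetical studies
studies = [(0.45, 0.04), (0.20, 0.01), (0.60, 0.09)]

# Weight each study by the inverse of its variance: precise studies count more
weights = [1.0 / var for _, var in studies]
pooled_d = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled d = {pooled_d:.2f}, 95% CI ± {1.96 * pooled_se:.2f}")
```

Real meta-analyses add refinements (random-effects models, bias corrections), which is exactly where the subtleties below come in.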

 

Some Subtleties

As with all things in science, over time scientists make improvements and refinements in their work. Effect sizes are no different. Recently, researchers have found that meta-analyses have to be interpreted with wisdom; otherwise the results may not be what they seem. Of specific concern is the finding that published studies tend to report higher effect sizes than unpublished studies, and quasi-experimental designs report higher effect sizes than randomized controlled studies. Et cetera…

Here are some recommendations for researchers from Cheung and Slavin (2016), who are focused on educational research, but whose recommendations are widely applicable:

  • In doing a meta-analysis, don't just look at published studies. Instead, work diligently to gather all studies that have been done.
  • Researchers, in general, should utilize randomized trials whenever possible. Those doing meta-analyses should look at these separately because they are likely to have the least-biased data.
  • Policy makers and educators (and I, Will Thalheimer, would add all workplace learning professionals) should "insist on large, randomized evaluations to validate promising programs."

 

Some Research Articles of Relevance

Cheung, A. C. K. & Slavin, R. E. (2016). How methodological features affect effect sizes in education. Educational Researcher, 45(5), 283-292.
 
Ueno, T., Fastrich, G. M., & Murayama, K. (2016). Meta-analysis to integrate effect sizes within an article: Possible misuse and Type I error inflation. Journal of Experimental Psychology: General, 145(5), 643-654. http://dx.doi.org/10.1037/xge0000159
 
van Assen, M. A. L. M., van Aert, R. C. M., & Wicherts, J. M. (2015). Meta-analysis using effect size distributions of only statistically significant studies. Psychological Methods, 20(3), 293-309. http://dx.doi.org/10.1037/met0000025

The journal Nature reported today that a new map of the brain reveals 97 newly found regions, each specialized for certain functions.

In the article, entitled "A multi-modal parcellation of human cerebral cortex," the 12 scientists reported 97 new structures and validated the "83 areas previously reported using post-mortem microscopy," bringing the total number of structures per hemisphere to 180.

New Brain Map

As the scientists declare, such a map is critical to neuroscientists in evaluating neurological functioning.

"Understanding the amazingly complex human cerebral cortex requires a map (or parcellation) of its major subdivisions, known as cortical areas. Making an accurate areal map has been a century-old objective in neuroscience."

As should be obvious, we are still in the infancy of neuroscience. Any recommendations about learning supposedly based on neuroscience should be taken with extreme skepticism. See related article.

For a nice review of the findings, see the article in the New York Times.

 

New research just published (July 11, 2016) shows that the blood-glucose hypothesis about willpower is probably not true, at least not as evidenced by current scientific studies.

The idea — previously held — was that blood glucose mediated the propensity of people to exert willpower, with decreased blood glucose making it less likely that someone would persevere in a task.


I myself had read some of the previous research and shared the blood-glucose hypothesis. Apparently, current research doesn't back up this claim. Miguel A. Vadillo, Natalie Gold, and Magda Osman, writing in the journal Psychological Science, utilized a "new meta-analytic tool, p-curve analysis, to examine the reliability of the evidence from" 19 studies focusing on blood glucose and willpower. They found that overall there was not a reliable effect of glucose.
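For the curious, the intuition behind p-curve is simple: if studies are detecting a true effect, their statistically significant p-values should bunch up near zero (right-skewed); if there is no effect, significant p-values should be roughly uniform between 0 and .05. Here's a heavily simplified sketch of that logic (my own toy version with invented p-values, not the authors' analysis):

```python
from scipy import stats

# Invented significant p-values from a hypothetical literature
significant_ps = [0.012, 0.034, 0.041, 0.048, 0.022, 0.046, 0.039]

# Under "no true effect," a significant p-value is equally likely to land
# below or above .025. A pile-up below .025 would suggest a real effect.
low = sum(p < 0.025 for p in significant_ps)
test = stats.binomtest(low, n=len(significant_ps), p=0.5, alternative="greater")

print(f"{low} of {len(significant_ps)} p-values fall below .025")
print(f"binomial test p = {test.pvalue:.2f}")  # large: no sign of right skew
```

The published method uses more sensitive continuous tests, but the question it asks is the same: do the significant results look like a real effect or like noise that squeaked past the threshold?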

Here are two quotes from the article:

The findings from the present study are a surprise in the context of the wide acceptance of the glucose hypothesis in general scientific research and its popularity, as evidenced by the number of citations of Gailliot et al. (2007) in the literature and the continued influence of this hypothesis in recent reviews on ego depletion (e.g., Baumeister, 2014; Baumeister & Alghamdi, 2015). Moreover, the hypothesis has intuitive and seemingly practical appeal. If one accepts that a failure of self-control in regulating actions contributes to the many personal and societal problems that people face (Baumeister et al., 2000), then glucose supplements would provide a simple means to enhance willpower and ameliorate these problems (Baumeister & Tierney, 2011). In light of our results, it is doubtful that such a recommendation will work in the real world. This conclusion converges with recent evidence that glucose might have little or no impact on domain-general decision-making tasks (Orquin & Kurzban, 2016) and with an intriguing series of meta-analyses and preregistered replications suggesting that the ego-depletion effect itself might be less robust than previously thought (Carter, Kofler, Forster, & McCullough, 2015; Hagger et al., in press).

Our results suggest that, on average, these studies have little or no evidential value, but they do not allow us to determine whether the significant results are due to publication bias, selective reporting of outcomes or analyses, p-hacking, or all of these. It is not impossible that some of these studies are exploring small but true effects and that their evidential value may be diluted by the biases that pervade the rest of the studies. Perhaps future research will show that glucose does play a role in ego depletion effects, but our conclusions are based on the analysis of the extant literature in this area. Thus, our contribution must be seen as an additional piece of information in the wider context of attempts to verify the reliability of the glucose model of ego depletion.

 

Practical Ramifications

In the past, I used the glucose-depletion idea as a partial explanation for why day-long training sessions were difficult for learners. I also used it as a rationale for plying my workshop participants with treats in the afternoon. Well, at least I tried to make them somewhat healthy! As the meta-analysis above reveals, it's likely that some other mechanism is involved in the difficulties learners have during intensive learning sessions. As trainers and instructional designers, we still have to figure out a way to support learners during long learning sessions to prevent attention-zapping fatigue…

 

Research Reviewed

Vadillo, M. A., Gold, N., and Osman, M. (2016). The bitter truth about sugar and willpower: The limited evidential value of the glucose model of ego depletion. Psychological Science, Published Online July 11, 2016. Available at: http://pss.sagepub.com/content/early/2016/07/08/0956797616654911.full

 

For millennia, scholars and thinkers of all sorts, from scientists to men and women on the street, thought that memories simply faded with time.

Locke said:

"The memory of some men, it is true, is very tenacious, even to a miracle; but yet there seems to be a constant decay of all our ideas, even of those which are struck deepest, and in the minds of the most retentive; so that if they be not sometimes renewed by repeated exercise of the senses, or reflection on those kinds of objects which at first occasioned them, the print wears out, and at last there remains nothing to be seen."  John Locke quoted by William James in Principles of Psychology (p. 445, the 1952 Great Books edition, original 1891).

However, in the early and mid-1900s, research by McGeoch (1932), Underwood (1957), and others found that memories can fade when what is learned interferes with other things learned. Previous things learned can interfere with current learning (proactive interference), and current learning can be interfered with by subsequent learning (retroactive interference).

The debate between decay and interference went on for over a century! Indeed, it paralleled the debate in physics over the nature of light: is it a wave or a particle?

The first ever photograph of light as both a particle and wave

In physics, the debate was so important that Albert Einstein won the Nobel Prize for the solution. Einstein's solution was simple. Light was BOTH a wave and a particle. The picture above is reported by Phys.org to be the first photograph demonstrating light's dual properties.

Now in the psychological research, we have the first experimental evidence that forgetting may be caused by BOTH decay and interference.

In a clever experiment, published just this month, Talya Sadeh, Jason Ozubko, Gordon Winocur, and Morris Moscovitch found evidence for both interference and decay.

Their research appears to be inspired, at least partially, by neuroscience findings. Here's what the authors say:

"Two approaches have guided current thinking regarding the functional distinction between hippocampal and extrahippocampal memories. The first approach maintains that the hippocampus supports a mnemonic process termed recollection, whereas extrahippocampal structures, especially the perirhinal cortex, support a process termed familiarity… Recollection is a mnemonic process that involves reinstatement of memory traces within the context in which they were formed. Familiarity is a mnemonic process that manifests itself in the feeling that a studied item has been experienced, but without reinstating the original context." (p. 2)

To be clear, this was NOT a neuroscience experiment. They did not measure brain activity in any way. They measured behavioral findings only.

In their experiment, they had people engage in a word-recognition task and then gave them either (1) another word-learning task, (2) a short music task, or (3) a long music task. The first group's word-learning task was designed to create the most interference. The longer music task was designed to create the most decay (because it took longer).

The results of the experiment were consistent with the researchers' hypotheses. They claimed to have found evidence for both decay and interference.

Caveats

Every scientific experiment has caveats. Usually these are pointed out by the researchers themselves. Sometimes, though, it takes an outside set of eyes to provide them.

Did the researchers prove, beyond the shadow of a doubt, that forgetting has two causes? Short answer: No! Did they produce some interesting findings? Maybe!

My big worry from a research-design perspective is that their manipulation distinguishing between recollection and familiarity is somewhat dubious, seemingly splitting hairs in the questions they ask the learners. My big worry from a practical learning-design perspective is that they are using words as learning materials. First, most important learning situations utilize more complicated materials. Second, words are associative by their very nature — thus more likely to react to interference than typical learning materials. Third, the final "test" of learning was a recognition-memory task that involved learners determining whether they remembered seeing the words before — again, not very relevant to practical learning situations.

Practical Ramifications for Learning Professionals

Since there are potential experimental-design issues, particularly from a practical standpoint, it would be an extremely dubious enterprise to draw practical ramifications. Let me be dubious then (because it's fun, not because it's wise). If the researchers are correct that context-based memories are less likely to be subject to interference effects, we might want to follow the general recommendation, often made today by research-focused learning experts, to provide learners with realistic practice using stimuli that have contextual relevance. In short, teach "if situation–then action" rather than teaching isolated concepts. Of course, we didn't need this experiment to tell us that. There is a ton of relevant research to back this up. For example, see The Decisive Dozen research review.

Beyond the experimental results, the concepts of decay and interference are intriguing in and of themselves. We know people tend to slide down a forgetting curve, perhaps from interference, perhaps from decay. Indeed, as the authors say, "it is important to note that interference and decay are inherently confounded."
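For readers who like the forgetting curve made concrete, here's a common idealization (Ebbinghaus-style exponential decay; my illustration, not from the Sadeh et al. paper), in which both decay and interference can be pictured as attacking the memory's stability parameter:

```python
import math

def retention(hours: float, stability: float) -> float:
    """Idealized fraction of material still retrievable after `hours`."""
    return math.exp(-hours / stability)

# The stability value is hypothetical; practice and retrieval would raise it
for t in (1, 24, 168):  # an hour, a day, a week
    print(f"after {t:>3} hours: {retention(t, stability=48.0):.0%} retained")
```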

Research

The experiment:

Sadeh, T., Ozubko, J. D., Winocur, G., & Moscovitch, M. (2016). Forgetting patterns differentiate between two forms of memory representation. Psychological Science OnlineFirst, published on May 6, 2016, as doi:10.1177/0956797616638307.

The research review:

Sadeh, T., Ozubko, J. D., Winocur, G., & Moscovitch, M. (2014). How we forget may depend on how we remember. Trends in Cognitive Sciences, 18, 26–36.

 

 

In a recent research article, Tobias Wolbring and Patrick Riordan report the results of a study looking into the effects of instructor "beauty" on college course evaluations. What they found might surprise you, or worry you, depending on your views on the vagaries of fairness in life.

Before I reveal the results, let me say that this is one study (two experiments), and that the findings were very weak in the sense that the effects were small.

Their first study used a large data set involving university students. Given that the data was previously collected through routine evaluation procedures, the researchers could not be sure of the quality of the actual teaching, nor the true “beauty” of the instructors (they had to rely on online images).

The second study was a laboratory study where they could precisely vary the level of beauty of the instructor and their gender, while keeping the actual instructional materials consistent. Unfortunately, “the instruction” consisted of an 11-minute audio lecture taught by relatively young instructors (young adults), so it’s not clear whether their results would generalize to more realistic instructional situations.

In both studies they relied on beauty as represented by facial beauty. While previous research shows that facial beauty is the primary way we rate each other on attractiveness, body beauty has also been found to have effects.

Their most compelling results:

  1. Ratings of attractiveness are very consistent across raters. People seem to know who is attractive and who is not. This confirms the findings of many studies.
  2. Instructors who are more attractive get better smile-sheet ratings. Note that the effect was very small in both experiments. The researchers confirmed what many other studies have found, although their results were generally weaker than previous studies', probably due to the better controls utilized.
  3. Better-looking instructors engender less absenteeism. That is, students were more likely to show up for class when their instructor was attractive.
  4. The genders of the raters and instructors did not make a difference. It was hypothesized that female raters might respond differently to male and female instructors, and males would do the same. But this was not found. Previous studies have had mixed results.
  5. In the second experiment, where they actually gave learners a test of what they'd learned, attractive instructors engendered higher scores on a difficult test, but not an easy test. The researchers hypothesize that learners engage more fully when their instructors are attractive.
  6. In the second experiment, they asked learners to either (a) take a test first and then evaluate the course, or (b) do the evaluation first and then take the test. Did it matter? Yes! The researchers hypothesized that highly attractive instructors would be penalized for giving a hard test more than their less attractive colleagues. This prediction was confirmed. When the difficult test came before the evaluation, better-looking instructors were rated more poorly than less attractive instructors. Not much difference was found for the easy test.

Ramifications for Learning Professionals

First, let me caveat these thoughts with the reminder that this is just one study! Second, the study’s effects were relatively weak. Third, their results — even if valid — might not be relevant to your learners, your instructors, your organization, your situation, et cetera!

  1. If you're a trainer, instructor, teacher, professor — get beautiful! Obviously, you can't change your bone structure or symmetry, but you can do some things to make yourself more attractive. I drink raw spinach smoothies and climb telephone poles with my bare hands to strengthen my shoulders and give me that upside-down-triangle attractiveness, while wearing the most expensive suits I can afford ($199 at Men's Wearhouse), all with the purpose of pushing myself above the threshold of … I can't even say the word. You'll have to find what works for you.
  2. If you refuse to sell your soul or put in time at the gym, you can always become a behind-the-scenes instructional designer or a research translator. As Clint said, “A man’s got to know his limitations.”
  3. Okay, I'll be serious. We shouldn't discount attractiveness entirely. It may make a small difference. On the other hand, we have more important, more leverageable actions we can take. I like the research-based finding that we all get judged primarily on two dimensions: warmth/trust and competence. Be personable, authentically trustworthy, and work hard to do good work.
  4. The finding from the second experiment that better looking instructors might prompt more engagement and more learning — that I find intriguing. It may suggest, more generally, that the likability/attractiveness of our instructors or elearning narrators may be important in keeping our learners engaged. The research isn’t a slam dunk, but it may be suggestive.
  5. In terms of learning measurement, the results may suggest that evaluations should come before difficult performance tests. I don't know, though, how this relates to adults in workplace learning. They might be more thankful for instructional rigor if it helps them perform better in their jobs.
  6. More research is needed!

Research Reviewed

Wolbring, T., & Riordan, P. (2016). How beauty works. Theoretical mechanisms and two empirical applications on students' evaluation of teaching. Social Science Research, 57, 253-272.

Great Article: Burnout and the Brain by Alexandra Michel, writing in The Observer, a publication of The Association for Psychological Science.

Article link is here.

Major Findings:

  • Stress may cause changes in the brain.
  • Stress may cause problems with:
    • attention
    • memory
    • creativity
    • problem-solving
    • working memory in general

Will's Caveats:

  • The studies were mostly correlational, so it's not clear whether there is a cause-and-effect relationship.

Defining Stress:

  • Stress is NOT caused just by working long hours. As the article says:

"a comprehensive report on psychosocial stress in the workplace published by the World Health Organization identified consistent evidence that 'high job demands, low control, and effort–reward imbalance are risk factors for mental and physical health problems.' Ultimately, burnout results when the balance of deadlines, demands, working hours, and other stressors outstrips rewards, recognition, and relaxation."

Learning-and-Performance Ramifications

  • If we want our organization's employees to work at their best, we can't put them under long periods of stress.
  • We need to give them more control of their work, reward them appropriately (especially with recognition and status, not necessarily with money), promote periods of rest and relaxation, and give employees input into their job environment.

Updated on March 29, 2018.
Originally posted on January 5, 2016.
=====================

The world of learning and development is on the cusp of change. One of the most promising—and prominent—paradigms comes from neuroscience. Go to any conference today in the workplace learning field and there are numerous sessions on neuroscience and brain-based learning. Vendors sing praises to neuroscience. Articles abound. Blog posts proliferate.

But where are we on the science? Have we gone too far? Is this us, the field of workplace learning, once again speeding headlong into a field of fad and fantasy? Or are we spot-on to see incredible promise in bringing neuroscience wisdom to bear on learning practice? In this article, I will describe where we are with neuroscience and learning—answering that question as it relates to this point in time—in March of 2018.

What We Believe

I've started doing a session at conferences and local trade-association meetings that I call The Learning Research Quiz Show. It's a blast! I ask a series of questions and get audience members to vote on the answer choices. After each question, I briefly state the correct answer and cite research from top-tier scientific journals. Sometimes I hand out candy to those who are all alone in getting an answer correct, or all alone in being incorrect. It's a ton of fun! On the other hand, there's often discomfort in the room to go with the sweet morsels. Some people's eyes go wide, and some get troubled when their favorite learning approach gets deep-sixed.

The quiz show is a great way to convey a ton of important information, but audience responses are intriguing in and of themselves. The answers people give tell us about their thinking—and, by extension, when compiled over many audiences, people’s answers hint at the current thinking within the learning profession. Let me give you an example related to the topic of brain science.

Overwhelmingly, people in my audiences answer: “C. Research on brain-based learning and neuroscience.” In the workplace learning field, at this point in time, we are sold on neuroscience.

 

What do the Experts Say?

As you might expect, neuroscientists are generally optimistic about neuroscience. But when it comes to how neuroscience might help learning and education, scientists are more circumspect.

Noted author and neuroscientist John Medina, who happens to be a lovely gentleman as well, has said the following as recently as January 2018. I originally saw him say these things in June 2015:

  • “I don’t think brain science has anything to say for business practice.”
  • “We still don’t really know how the brain works.”
  • “The state of our knowledge [of the brain] is childlike.”

Dan Willingham, noted research psychologist, has been writing for many years about the poor track record of bringing neuroscience findings to learning practice.

In 2012 he wrote an article entitled: “Neuroscience Applied to Education: Mostly Unimpressive.” On the other hand, in 2014 he wrote a blog post where he said, “I’ve often written that it’s hard to bring neuroscientific data to bear on issues in education… Hard, but not impossible.” He then went on to discuss how a reading-disability issue related to deficits in the brain’s magnocellular system was informed by neuroscience.

In a 2015 scientific article in the journal Learning, Media and Technology, Harvard researchers Daniel Busso and Courtney Pollack reviewed the research on neuroscience and education and came to these conclusions:

  • “There is little doubt that our knowledge of the developing brain is poised to make important contributions to the lives of parents, educators and policymakers…”
  • “Some have voiced concerns about the viability of educational neuroscience, suggesting that neuroscience can inform education only indirectly…”
  • “Others insist that neuroscience is only one small component of a multi-pronged research strategy to address educational challenges, rather than a panacea…”

In a 2016 article in the world-renowned journal Psychological Review, neuroscientist and cognitive psychologist Jeffrey Bowers concluded the following: “There are no examples of novel and useful suggestions for teaching based on neuroscience thus far.” Critiquing Bowers’s conclusions, neuroscientists Paul Howard-Jones, Sashank Varma, Daniel Ansari, Brian Butterworth, Bert De Smedt, Usha Goswami, Diana Laurillard, and Michael S. C. Thomas wrote that “Behavioral and neural data can inform our understanding of learning and so, in turn, [inform] choices in educational practice and the design of educational contexts…” and “Educational Neuroscience does not espouse a direct link from neural measurement to classroom practice.” Neuroscientist John Gabrieli added: “Educational neuroscience may be especially pertinent for the many children with brain differences that make educational progress difficult in the standard curriculum…” and “It is less clear at present how educational neuroscience would translate for more typical students, with perhaps a contribution toward individualized learning.” In 2017, Gabrieli gave a keynote on how neuroscience is not ready for education.

Taken together, these conclusions are balanced between the promise of neuroscience and the healthy skepticism of scientists. Note, however, that when these researchers talk about the benefits of neuroscience for learning, they see neuroscience applications as happening in the future (perhaps the near future), and augmenting traditional sources of research knowledge (those not based in neuroscience). They do NOT claim that neuroscience has already created a body of knowledge that is applicable to learning and education.

Stanford University researchers Dan Schwartz, Kristen Blair, and Jessica Tsang wrote in 2012 that the most common approach in educational neuroscience tends “to focus on the tails of the distribution; namely, children (and adults) with clinical problems or exceptional abilities.” This work is generally not relevant to workplace learning professionals—as we tend to be more interested in learners with normal cognitive functioning.

Researchers Pedro De Bruyckere, Paul A. Kirschner, and Casper D. Hulshof in their book, Urban Myths about Learning and Education, concluded the following:

“In practice, at the moment it is only the insights of cognitive psychology [not neuropsychology] that can be effectively used in education, but even here care needs to be taken. Neurology has the potential to add value to education, but in general there are only two real conclusions we can make at present:

– For the time being, we do not really understand all that much about the brain.
– More importantly, it is difficult to generalize what we do know into a set of concrete precepts of behavior, never mind devise methods for influencing that behavior.”

The bottom line is that neuroscience does NOT, as of yet, have much guidance to provide for learning design in the workplace learning field. This may change in the future, but as of today, we cannot and should not rely on neuroscience claims to guide our learning designs!

 

Neuroscience Research Flawed

In 2016, researchers found a significant flaw in the software used in a large percentage of neuroscience studies, calling many of the field's findings into question (Eklund, Nichols, & Knutsson, 2016). Even as recently as February of 2018, it wasn’t clear whether neuroscience data were being properly processed (Han & Park, 2018).

Neuroscience research relies on imaging techniques like fMRI, PET, SPECT, and EEG. Functional Magnetic Resonance Imaging (fMRI) is by far the most common method. Basically, fMRI is like taking a series of photos of brain activity by looking at blood flow. Because there tends to be “noise” in these images (that is, false signals), software is used to ensure that brain activity is really in evidence where the signals say there is activity. Unfortunately, the software used before 2016 to differentiate between signal and noise was severely flawed, causing up to 70% false positives when 5% was expected (Eklund, Nichols, & Knutsson, 2016). As Wired Magazine wrote in a headline, “Bug in fMRI software calls 15 years of research into question.” Furthermore, it’s still not clear that corrective measures are being properly utilized (Han & Park, 2018).
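To make the statistical problem concrete, here is a minimal, illustrative simulation (my own sketch with made-up numbers, not the Eklund, Nichols, & Knutsson analysis). It shows why testing thousands of voxels without adequate correction all but guarantees spurious “activations”: with m independent tests at a p < .05 threshold, the chance of at least one false positive is 1 - (1 - .05)^m, which approaches 100% long before m reaches the voxel count of a real scan.

```python
# Illustrative sketch (hypothetical numbers): the multiple-comparisons
# problem behind inflated false-positive rates in brain imaging.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_voxels = 10_000    # hypothetical voxels tested per scan
n_subjects = 20      # hypothetical subjects per study
n_studies = 100      # simulated studies with NO true effect anywhere

studies_with_false_hits = 0
for _ in range(n_studies):
    # Pure noise: no voxel carries any real activation signal.
    data = rng.standard_normal((n_subjects, n_voxels))
    # One-sample t-test at every voxel against zero activation.
    _, p_values = stats.ttest_1samp(data, popmean=0.0, axis=0)
    # At an uncorrected per-voxel threshold of p < .05, at least one
    # spurious "active" voxel is almost certain somewhere in the scan.
    if (p_values < 0.05).any():
        studies_with_false_hits += 1

print(f"Null studies showing 'activation': "
      f"{100 * studies_with_false_hits / n_studies:.0f}%")  # ~100%, not 5%
```

Statistical corrections (family-wise error and false-discovery-rate control, for example) exist precisely to rein this in; what Eklund and colleagues showed is that a widely used cluster-level correction was itself miscalibrated.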

The problems with neuroscience imaging were most provocatively illustrated in a 2010 article in the Journal of Serendipitous and Unexpected Results that showed fMRI brain activation in a dead salmon, where none (obviously) would be expected (Bennett, Baird, Miller, & Wolford, 2010). This article was reviewed in a 2012 post on Scientific American.

 

Are We Drinking the Snake Oil?

Yes, many of us in the workplace learning field have already swallowed the neuroscience elixir. Some of us have gone further, washing down the snake oil with brain-science Kool-Aid—having become gullible adherents to the cult of neuroscience.

My Learning Research Quiz Show is just one piece of evidence of the pied-piper proliferation of brain-science messages. Conferences in the workplace learning field often have keynotes on neuroscience. Many have education sessions that focus on brain science. Articles, blog posts, and infographics balloon with neuroscience recommendations.

Here are some claims that have been made in the workplace learning field within the past few years:

  • “If you want people to learn, retain, and ultimately transfer knowledge to the workplace, it is essential that you understand the ergonomics of the brain.”
  • “The brain is our primary tool for learning. It’s seat of thought, memory, consciousness and emotion. So it only makes sense to match your eLearning design with how the learner’s brain functions.”
  • “Neuroscience changes everything. Neuroscience is exposing more and more about how our brains work. I find it fascinating, and exciting, because most of the theories our industry follows are based on the softer behavioral sciences. We now have researchers in the hard sciences uncovering the wonders of our neuroanatomy.”
  • “Neuroscience Facts You Need to Know: Human attention span – 8.25 seconds. Goldfish attention span – 9 seconds… Based on these facts (and a few others)… you can see why 25% of L&D professionals are integrating neuroscience.”

All of these claims are from vendors trying to get your business—and all of these claims were found near the top of a Google search. Fortunately for you, you’re probably not one of those who is susceptible to such hysterics.

Or are you?

Interestingly, researchers have studied whether people are susceptible to claims based on neuroscience. In 2008, two separate studies showed how neuroscience information could influence people’s perceptions and decision making. McCabe and Castel (2008) found that adding neuroscience images to articles prompted readers to rate the scientific reasoning in those articles more highly than if a bar chart or no image was added. Weisberg, Keil, Goodstein, Rawson, and Gray (2008) found that adding extraneous neuroscience information to poorly constructed explanations prompted novices and college students (in a neuroscience class) to rate the explanations as more satisfying than if no neuroscience information was included.

Over the years, the finding that neuroscience images lend credibility to learning materials has been called into question numerous times (Farah & Hook, 2013; Hook & Farah, 2013; Michael, Newman, Vuorre, Cumming, & Garry, 2013; Schweitzer, Baker, & Risko, 2013).

On the other hand, the finding that neuroscience information—in a written form—lends credibility has been supported many times (e.g., Rhodes, Rodriguez, & Shah, 2014; Weisberg, Taylor, & Hopkins, 2015; Fernandez-Duque, Evans, Christian, & Hodges, 2015).

In 2017, a research study found that adding both irrelevant neuroscience information and irrelevant brain images pushed learners to rate learning material as more credible (Im, Varma, & Varma, 2017).

As Busso and Pollack (2015) have concluded:

“Several highly cited studies have shown that superfluous neuroscience information may bias the judgement of non-experts…. However, the idea that neuroscience is uniquely persuasive has been met with little empirical support….”

Based on the research to date, it would appear that we as learning professionals are not likely to be influenced by extraneous neuroscience images on their own, but we are likely to be influenced by neuroscience information—or any information that appears to be scientific. When extraneous neuroscience info is added to written materials, we are more likely to find those materials credible than if no neuroscience information had been added.

 

If the Snake Oil Tastes Good, Does it Matter in Practice?

If we learning professionals are subject to the same human tendencies as our fellow citizens, we’re likely to be susceptible to neuroscience information embedded in persuasive messages. The question then becomes, does this matter in practice? If neuroscience claims influence us, is this beneficial, benign, or dangerous?

Here are some recent quotes from researchers:

  • “Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people’s abilities to critically consider the underlying logic of this explanation.” (Weisberg, Keil, Goodstein, Rawson, & Gray, 2008).
  • “Given the popularity of neuroimaging and the attention it receives in the press, it is important to understand how people are weighting this evidence and how it may or may not affect people’s decisions. While the effect of neuroscience is small in cases of subjective evaluations, its effect on the mechanistic understanding of a phenomenon is compelling.” (Rhodes, Rodriguez, & Shah, 2014)
  • “Since some individuals may use the presence of neuroscience information as a marker of a good explanation…it is imperative to find ways to increase general awareness of the proper role for neuroscience information in explanations of psychological phenomena.” (Weisberg, Taylor, & Hopkins, 2015)
  • “For several decades, myths about the brain — neuromyths — have persisted in schools and colleges, often being used to justify ineffective approaches to teaching. Many of these myths are biased distortions of scientific fact. Cultural conditions, such as differences in terminology and language, have contributed to a ‘gap’ between neuroscience and education that has shielded these distortions from scrutiny.” (Howard-Jones, 2014)
  • “Powerful, often self-interested, commercial forces serve as mediators between research and practice, and this raises some pressing questions for future work in the field: what does responsible [research-to-practice] translation look like?” (Busso & Pollack, 2015)

As these quotations make clear, researchers are concerned that neuroscience claims may push us to make poor learning-design decisions. And, they’re worried that unscrupulous people and enterprises may take advantage—and push poor learning approaches on the unsuspecting.

But is this concern warranted? Is there evidence that neuroscience claims are false, misleading, or irrelevant?

Yes! Neuroscience and brain-science claims are almost always deceptive in one way or another. Here’s a short list of issues:

  • Selling neuroscience and brain science as a panacea.
  • Selling neuroscience and brain science as proven and effective for learning.
  • Portraying standard learning research as neuroscience.
  • Cognitive psychologists portraying themselves as neuroscientists.
  • Portraying neuroscience as having already developed a long list of learning recommendations.
  • Portraying one’s products and/or services as based on neuroscience or brain-science.
  • Portraying personality diagnostics as based on neuroscience.
  • Portraying questionnaire data as diagnostic of neurophysiological functioning.

These neuroscience-for-learning deceptions lead to substantial problems:

  1. They push us away from more potent methods for learning design—methods that are actually proven by substantial scientific research.
  2. They make us believe that we are being effective, lessening our efforts to improve our learning interventions. This is an especially harmful problem in the learning field since rarely are we getting good feedback on our actual successes and failures.
  3. They encourage us to follow the recommendations of charlatans, increasing the likelihood that we are getting bad advice.
  4. They drive us to utilize “neurosciencey” diagnostics that are ineffective and unreliable.
  5. They enable vendors to provide us with poor learning designs—partly due to their own blind spots and partly due to intentional deceptions.

Here is a real-life example:

Over the past several years, a person with a cognitive psychology background has portrayed himself as a neuroscientist (which he is NOT). He has become very popular as a conference speaker—and offers his company’s product as the embodiment of neuropsychology principles. Unfortunately, the principles embodied in his product are NOT from neuroscience, but are from standard learning research. More importantly, the learning designs actually implemented with his product (even when designed by his own company) are ineffective and harmful—because they don’t take into account several other findings from the learning research.

Here is an example of one of the interactions from his company’s product:

This is very poor instructional design. It focuses on trivial information that is NOT related to the main learning points. Anybody who knows the learning research—even a little bit—should know that focusing on trivial information is (a) a waste of our learners’ limited attention, (b) a distraction away from the main points, and (c) potentially harmful in encouraging learners to process future learning material in a manner that guides their attention to details and away from more important ideas.

This is just one example of many that I might have used. Unfortunately, we in the learning field are seeing more and more misapplications of neuroscience.

 

Falsely Calling Learning Research Neuroscience

The biggest misappropriation of neuroscience in workplace learning is found in how vendors are relabeling standard learning research as neuroscience. The following graphic is a perfect example.

 

I’ve grayed out the detailed verbiage in the image above to avoid implicating the company who put this forward. My goal is not to finger one vendor, but to elucidate the broader problem. Indeed, this is just one example of hundreds that are easily available in our field.

Note how the vendor talks about brain science but then points to two research findings that were elucidated NOT by neuroscience, but by standard learning research. Both the spacing effect and the retrieval-practice effect have long been known, certainly well before neuroscience became widely researched.

Here is another example, also claiming that the spacing effect is a neuroscience finding:

Again, I’m not here to skewer the purveyors of these examples, although I do shake my head in dismay when such findings are portrayed as neuroscience. In general, they are not based on neuroscience; they are based on behavioral and cognitive research.

Below is a timeline that demonstrates that neuroscience was NOT the source for the findings related to the spacing effect or retrieval practice.

You’ll notice in the diagram that one of the key tools neuroscientists use to study the intersection of learning and the brain wasn’t even widely utilized until the early 2000s, whereas the research on retrieval practice and spacing was firmly established prior to 1990.

 

Conclusion

The field of workplace learning—and the wider education field—have fallen under the spell of neuroscience (aka brain-science) recommendations. Unfortunately, neuroscience has not yet created a body of proven recommendations. While offering great promise for the future, as of this writing—in March of 2018—most learning professionals would be better off relying on proven learning recommendations from sources like Brown, Roediger, and McDaniel’s book Make It Stick; Benedict Carey’s book How We Learn; and Julie Dirksen’s book Design for How People Learn.

As learning professionals, we must be more skeptical of neuroscience claims. As research and real-world experience have shown, such claims can steer us toward ineffective learning designs and unscrupulous vendors and consultants.

Our trade associations and industry thought leaders need to take a stand as well. Instead of promoting neuroscience claims, they ought to voice a healthy skepticism.

 

Post Script

This article took a substantial amount of time to research and write. It has been provided for free as a public service. If you’d like to support the author, please consider hiring him as a consultant or speaker. Dr. Will Thalheimer is available at info@worklearning.com and at 617-718-0767.

 


Research Citations

Bennett, C. M., Baird, A. A., Miller, M. B., & Wolford, G. L. (2010). Neural correlates of interspecies perspective taking in the post-mortem Atlantic salmon: An argument for multiple comparisons correction. Journal of Serendipitous and Unexpected Results, 1(1), 1-5.

Bjork, R. A. (1988). Retrieval practice and the maintenance of knowledge. In M. M. Gruneberg, P. E. Morris, & R. N. Sykes (Eds.), Practical aspects of memory: Current research and issues, Vol. 1. Memory in everyday life (pp. 396-401). Oxford, England: John Wiley.

Bowers, J. S. (2016). The practical and principled problems with educational neuroscience. Psychological Review, 123(5), 600-612.

Bruce, D., & Bahrick, H. P. (1992). Perceptions of past research. American Psychologist, 47(2), 319-328.

Busso, D. S., & Pollack, C. (2015). No brain left behind: Consequences of neuroscience discourse for education. Learning, Media and Technology, 40(2), 168-186.

Eklund, A., Nichols, T. E., & Knutsson, H. (2016). Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the National Academy of Sciences, 113, 7900-7905.

Farah, M. J., & Hook, C. J. (2013). The seductive allure of “seductive allure”. Perspectives on Psychological Science, 8(1), 88-90. http://dx.doi.org/10.1177/1745691612469035

Fernandez-Duque, D., Evans, J., Christian, C., & Hodges, S. D. (2015). Superfluous neuroscience information makes explanations of psychological phenomena more appealing. Journal of Cognitive Neuroscience, 27(5), 926-944. http://dx.doi.org/10.1162/jocn_a_00750

Gabrieli, J. D. E. (2016). The promise of educational neuroscience: Comment on Bowers (2016). Psychological Review, 123(5), 613-619.

Gordon, K. (1925). Class results with spaced and unspaced memorizing. Journal of Experimental Psychology, 8, 337-343.

Gotz, A., & Jacoby, L. L. (1974). Encoding and retrieval processes in long-term retention. Journal of Experimental Psychology, 102(2), 291-297.

Han, H., & Park, J. (2018). Using SPM 12’s second-level Bayesian inference procedure for fMRI analysis: Practical guidelines for end users. Frontiers in Neuroinformatics, 12, February 2.

Hook, C. J., & Farah, M. J. (2013). Look again: Effects of brain images and mind–brain dualism on lay evaluations of research. Journal of Cognitive Neuroscience, 25(9), 1397-1405. http://dx.doi.org/10.1162/jocn_a_00407

Howard-Jones, P.A. (2014). Neuroscience and education: myths and messages. Nature Reviews Neuroscience, 15, 817-824. Available at: http://www.nature.com/nrn/journal/v15/n12/full/nrn3817.html.

Howard-Jones, P. A., Varma, S., Ansari, D., Butterworth, B., De Smedt, B., Goswami, U., Laurillard, D., & Thomas, M. S. C. (2016). The principles and practices of educational neuroscience: Comment on Bowers (2016). Psychological Review, 123(5), 620-627.

Im, S.-H., Varma, K., & Varma, S. (2017). Extending the seductive allure of neuroscience explanations effect to popular articles about educational topics. British Journal of Educational Psychology, 87, 518-534.

Jones, H. E. (1923-1924). Experimental studies of college teaching: The effect of examination on permanence of learning. Archives of Psychology, 10, 1-70.

Michael, R. B., Newman, E. J., Vuorre, M., Cumming, G., & Garry, M. (2013). On the (non)persuasive power of a brain image. Psychonomic Bulletin & Review, 20(4), 720-725.

Rhodes, R. E., Rodriguez, F., & Shah, P. (2014). Explaining the alluring influence of neuroscience information on scientific reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(5), 1432-1440. http://dx.doi.org/10.1037/a0036844

Ruch, T. C. (1928). Factors influencing the relative economy of massed and distributed practice in learning. Psychological Review, 35, 19-45.

Schweitzer, N. J., Baker, D. A., & Risko, E. F. (2013). Fooled by the brain: Re-examining the influence of neuroimages. Cognition, 129(3), 501-511. http://dx.doi.org/10.1016/j.cognition.2013.08.009

Weisberg, D. S., Taylor, J. C. V., & Hopkins, E. J. (2015). Deconstructing the seductive allure of neuroscience explanations. Judgment and Decision Making, 10(5), 429-441.

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470-477.

Zhao, X., Wang, C., Liu, Q., Xiao, X., Jiang, T., Chen, C., & Xue, G. (2015). Neural mechanisms of the spacing effect in episodic memory: A parallel EEG and fMRI study. Cortex: A Journal Devoted to the Study of the Nervous System and Behavior, 69, 76-92. http://dx.doi.org/10.1016/j.cortex.2015.04.002

=====================

An article in the most recent issue of Psychological Science finds that people tend to perceive men as more creative than women.

Here's a quote from the article:

"Five studies provide converging evidence that lay conceptions of creative cognition (i.e., beliefs regarding what it takes to “think creatively”) overlap substantially with the unique content of male stereotypes, engendering systematic bias in the way that men’s and women’s creativity is evaluated. We found that creativity is strongly associated with stereotypically masculine-agentic qualities (Study 1), and both experimental and archival data indicated that men are judged as more creative than women (Studies 2–4). Finally, we found that attributions of agency mediate differential judgments of men’s and women’s creativity (Study 5)."  (page 1759)

They did NOT find that men are more creative than women. Indeed, they cite other research that has found no difference in creativity between the sexes.

This research adds further evidence of gender bias, highlighting that creativity may be judged inaccurately because of stereotypes that operate at an unconscious level. We must all beware of bias in hiring, recruiting, promotions, rewards, and task assignments.