Learning Styles Notion Still Prevalent on Google


Two and a half years ago, while writing a blog post on learning styles, I did a Google search on the words “learning styles.” I found that the top 17 search results all advocated for learning styles, even though there was clear evidence that learning-styles approaches DO NOT WORK.

Today, I replicated that search and found the following among the top 17 results:

  • 13 advocated/supported the learning-styles idea.
  • 4 debunked it.

That’s progress, but clearly Google is not up to the task of providing valid information on learning styles.

Scientific Research That Clearly Debunks the Learning-Styles Notion:

  • Kirschner, P. A. (2017). Stop propagating the learning styles myth. Computers & Education, 106, 166-171.
  • Willingham, D. T., Hughes, E. M., & Dobolyi, D. G. (2015). The scientific status of learning styles theories. Teaching of Psychology, 42(3), 266-271.
  • Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105-119.
  • Rohrer, D., & Pashler, H. (2012). Learning styles: Where’s the evidence? Medical Education, 46(7), 634-635.

Follow the Money

  • Still, no one has come forward to prove the benefits of learning styles, even though it’s been over 10 years since $1,000 was offered and 3 years since $5,000 was offered.

Purpose of Workplace Learning and Development: Survey Inquiry


Grovo Attempts to Trademark the Word “Microlearning”

A few days ago, CLO Magazine published a provocative article describing how elearning provider Grovo has tried to trademark the word “microlearning,” applying for registration in October 2016. The article, and the comments, are a fascinating read.

This is very interesting, and a bad play by Grovo. Many of us in the learning industry have used the term “microlearning,” and I’ll bet that a great many are irked by Grovo’s shameless attempt to restrict its use for their own commercial benefit.

Here is evidence to put a dagger in any claim that “microlearning” is a specific product attributable to Grovo. On April 9, 2015 (long before Grovo’s original trademark application), numerous people in the learning industry met in a Twitter chat and discussed their perceptions of what microlearning is (my synopsis of the results is available here, along with a link to the actual tweets: https://www.worklearning.com/2015/04/10/twitter-chat-on-microlearning/). Interestingly, Grovo’s name was NEVER mentioned. This was one and a half years before Grovo applied for the trademark.

Here is an even earlier 2015 communication about microlearning, in a blog post from Tom Spiglanin, again with no mention of Grovo.

Here is an even earlier blog post from 2014 on microlearning by Learnnovators, again with no mention of Grovo.

Here is another piece of evidence showing that Grovo itself treated microlearning as a general concept, not as proprietary: an article by their top learning professional, Alex Khurgin, published on August 25, 2015, clearly shows what Grovo thought of microlearning. “The broadest and most useful definition of microlearning is ‘learning, and applying what one has learned, in small, focused steps.'” This was more than a year before Grovo applied for the trademark.


Full disclosure: I have authored my own definition of microlearning (https://www.worklearning.com/2017/01/13/definition-of-microlearning/). Several years ago, Grovo paid me for a couple of hours of consulting. Grovo management and I once talked about me working for them. I have referred to Grovo previously on my subscription learning blog (here and here).


Let me add that others are more than welcome to use my definition of microlearning, modify it, or ignore it.

Seek Research-to-Practice Experts as Your Trusted Advisors


I added these words to the sidebar of my blog, and I like them so much that I’m sharing them here as a blog post in their own right.

Please seek wisdom from research-to-practice experts — the dedicated professionals who spend time in two worlds to bring the learning field insights based on science. These folks are my heroes, given their often quixotic efforts to navigate through an incomprehensible jungle of business and research obstacles.

These research-to-practice professionals should be your heroes as well. Not mythological heroes, not heroes etched into the walls of faraway mountains. These heroes should be sought out as our partners, our fellow travelers in learning, as people we hire as trusted advisors to bring us fresh research-based insights.

The business case is clear. Research-to-practice experts not only enlighten and challenge us with ideas we might not have considered — ideas that make our learning efforts more effective in producing business results — they also prevent us from engaging in wasted effort, saving our organizations time and money while enabling us to focus more productively on the learning factors that actually matter.

Another Reason to Learn About Performance-Focused Smile Sheets


This has been a great year for the Performance-Focused Smile Sheet approach. Not only did the book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form, win a prestigious Award of Excellence from the International Society of Performance Improvement, but people are flocking to workshops, conference sessions, and webinars to learn about this revolutionary new method of gathering learner feedback.

Now there’s even more reason to learn about this method. The July 2017 issue of TD (Talent Development) reported that the Human Capital Institute (HCI) had issued a report naming measurement/evaluation the top skill needed by learning and development professionals!

Go to SmileSheets.com to get the book.

Neon Elephant Award 2016



December 21, 2016

Neon Elephant Award Announcement

Dr. Will Thalheimer of Work-Learning Research announces the winner of the 2016 Neon Elephant Award, given this year to Pedro De Bruyckere, Paul A. Kirschner, and Casper D. Hulshof for their book, Urban Myths about Learning and Education. Pedro, Paul, and Casper provide a research-based reality check on the myths and misinformation that float around the learning field. Their incisive analysis takes on such myths as learning styles, multitasking, discovery learning, and various and sundry neuromyths.

Urban Myths about Learning and Education is a powerful salve on the wounds engendered by the weak and lazy thinking that abounds too often in the learning field — whether on the education side or the workplace learning side. Indeed, in a larger sense, De Bruyckere, Kirschner, and Hulshof are doing important work illuminating key truths in a worldwide era of post-truth communication and thought. Now, more than ever, we need to celebrate the truth-tellers!

Click here to learn more about the Neon Elephant Award…

2016 Award Winners – Pedro De Bruyckere, Paul Kirschner, and Casper Hulshof

Pedro De Bruyckere (1974) has been an educational scientist at Arteveldehogeschool in Ghent since 2001. He co-wrote two books with Bert Smits in which they debunk popular myths about Gen Y and Gen Z, education, and pop culture. He co-wrote a book on girls’ culture with Linda Duits. And, of course, he co-wrote the book for which he and his co-authors are being honored, Urban Myths about Learning and Education. Pedro is a sought-after public speaker; one of his strongest points is that he “is funny in explaining serious stuff.”

Paul A. Kirschner (1951) is University Distinguished Professor at the Open University of the Netherlands as well as Visiting Professor of Education, with a special emphasis on Learning and Interaction in Teacher Education, at the University of Oulu, Finland. He is an internationally recognized expert in learning and educational research, with many classic studies to his name. He has served as President of the International Society of the Learning Sciences and is an AERA (American Educational Research Association) Research Fellow (the first European to receive this honor). He is chief editor of the Journal of Computer Assisted Learning, associate editor of Computers in Human Behavior, and has published two very successful books: Ten Steps to Complex Learning and Urban Myths about Learning and Education. His co-author on the Ten Steps book, Jeroen van Merriënboer, won the Neon Elephant Award in 2011.

Casper D. Hulshof is a teacher (assistant professor) at Utrecht University, where he supervises bachelor’s and master’s students. He teaches psychological topics and is especially intrigued by the intersection of psychology with philosophy, mathematics, biology, and informatics. He uses his experience in experimental research (mostly quantitative work in the areas of educational technology and psychology) to inform his teaching and writing. More than once he has been awarded teaching honors.

Why Honored?

Pedro De Bruyckere, Paul Kirschner, and Casper Hulshof are honored this year for their book Urban Myths about Learning and Education, a research-based reality check on the myths and misinformation that float around the learning field. With their research-based recommendations, they are helping practitioners in the education and workplace-learning fields make better decisions, create more effective learning interventions, and avoid the most dangerous myths about learning.

For their efforts in sharing practical research-based insights on learning design, the workplace learning-and-performance field owes a grateful thanks to Pedro De Bruyckere, Paul Kirschner, and Casper Hulshof.


Click here to learn more about the Neon Elephant Award…

Jobs in Learning Measurement — Let’s Try Something


Dear Readers,

Many of you are now following me and my social-media presence because you’re interested in LEARNING MEASUREMENT, probably because of my recent book on Performance-Focused Smile Sheets (which you can learn about at the book’s website, SmileSheets.com).

More and more, I’m meeting people who have jobs that focus on learning measurement. For some, that’s their primary focus. For most, it’s just a part of their job.

Today, I got an email from a guy looking for a job in learning measurement and analytics. He’s a good guy, smart and passionate, and he ought to be able to find a good job where he can really help. So here’s what I’m thinking. You, my readers, are some of the best and brightest in the industry — you care about our work and you look to the scientific research as a source of guidance. You are also, many of you, enlightened employers, looking to recruit and hire the best and brightest. So it seems obvious that I should try to connect you…

So here’s what we’ll try. If you’ve got a job in learning measurement, let me know about it. I’ll post it here on my blog. This will be an experiment to see what happens. Maybe nothing… but it’s worth a try.

Now, I know many of you are also loyal readers because of things BESIDES learning measurement, for example, learning research briefs, research-based insights, elearning, subscription learning, learning audits, and great jokes… but let’s keep this experiment to LEARNING MEASUREMENT JOBS at first.

BOTTOM LINE: If you know of a learning-measurement job, let me know. Email me here…

Sweet Mother of Deception — Sugar Industry Lessons for the Learning Professional


OMG. If you haven’t heard, the sugar industry engaged in a sophisticated plot to manipulate the public into believing that saturated fat, NOT sugar, was the enemy of good health. The industry was so successful in capturing the narrative that it likely caused millions of deaths, as well as complications from diabetes, obesity-related illnesses, cancer, heart disease, and more.

For articles on this blatantly immoral activity, see the New York Times and Stat (Frontiers of Health and Medicine).

Of course, this kind of behavior happens over and over.

  • The Great Recession revealed malfeasance within the banking industry.
  • The tobacco industry famously tried to claim smoking wasn’t harmful.
  • The oil and gas industry has fought global warming warnings for years.

In the learning field, are we naive enough to believe that everything is pure?

I’ve seen things that make me mad — real mad! For example, some of the top-10 and top-20 lists are basically bought and paid for… The neuroscience hype used to sell products… Well-known “gurus” who don’t mention improved practices because they are protecting their legacy offerings… Trade associations that provide members with a diet of information tilted toward their vendor benefactors…

What have you seen?

What’s your motto? Regrettably, mine is, “I’m mad as hell, and I’m still taking it…”

I’m dying to know, though: Have you seen malfeasance? Slippery slopes? Money changing hands?

Sadder still, given all the great, passionate, caring, and honest folks in our field.

Connect with me privately if you need to… (info a.t work-learning d.o.t com)

Sunshine as disinfectant and all that…

The Last Two Decades of Neuroscience Research (via fMRI) Called Into Question!


Updated July 11, 2016. An earlier version was more apocalyptic.

==============================

THIS IS HUGE. A large number of studies from the last 15 years of neuroscience research (via fMRI) could be INVALID!

A recent study in the journal PNAS examined the three software packages most commonly used to analyze fMRI data. Where the researchers expected a nominal familywise error rate of 5%, they found error rates of up to 70%.

Here’s what the authors wrote:

“Using mass empirical analyses with task-free fMRI data, we have found that the parametric statistical methods used for group fMRI analysis with the packages SPM, FSL, and AFNI can produce FWE-corrected cluster P values that are erroneous, being spuriously low and inflating statistical significance. This calls into question the validity of countless published fMRI studies based on parametric clusterwise inference. It is important to stress that we have focused on inferences corrected for multiple comparisons in each group analysis, yet some 40% of a sample of 241 recent fMRI papers did not report correcting for multiple comparisons (26), meaning that many group results in the fMRI literature suffer even worse false-positive rates than found here (37).”

In a follow-up blog post, the authors estimated that up to 3,500 scientific studies may be affected, which is down from their initial published estimate of 40,000. The discrepancy results because only studies at the edge of statistical reliability are likely to have results that might be affected. For an easy-to-read review of their walk-back, Wired has a nice piece.

The authors also point out that there is more to worry about than those 3,500 studies. An additional 13,000 studies don’t use any statistical correction at all (so they’re not affected by the software glitch reported in the scientific paper). However, these 13,000 studies use an approach that “has familywise error rates well in excess of 50%.” (cited from the blog post)
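
To make that inflation concrete, here is a minimal simulation sketch. It is my own illustration, not anything from the PNAS paper, and it assumes a toy setup of 20 independent tests per study (real fMRI analyses involve many thousands of spatially correlated voxel tests, which is exactly why the packages use cluster-based corrections):

```python
import numpy as np

# Toy illustration (not the PNAS authors' method): how the familywise
# error rate (FWER) inflates when many tests are run with no correction.
rng = np.random.default_rng(42)

alpha = 0.05          # nominal per-test false-positive rate
n_tests = 20          # hypothetical number of tests per study "family"
n_studies = 10_000    # simulated studies, all with NO true effects

# Under the null hypothesis, p-values are uniformly distributed on [0, 1].
p_values = rng.uniform(size=(n_studies, n_tests))

# A familywise error occurs if ANY test in a study falls below alpha.
fwer_uncorrected = np.mean((p_values < alpha).any(axis=1))

# Bonferroni correction: compare each p-value against alpha / n_tests.
fwer_bonferroni = np.mean((p_values < alpha / n_tests).any(axis=1))

print(f"Uncorrected FWER: {fwer_uncorrected:.2f}")  # ~0.64, far above 0.05
print(f"Bonferroni FWER:  {fwer_bonferroni:.2f}")   # ~0.05, as intended
```

With just 20 uncorrected tests, roughly 64% of these pure-noise “studies” yield at least one “significant” result, consistent with the “well in excess of 50%” figure quoted above.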

Here’s what the authors say in their walk-back:

“So, are we saying 3,500 papers are “wrong”? It depends. Our results suggest CDT P=0.01 results have inflated P-values, but each study must be examined… if the effects are really strong, it likely doesn’t matter if the P-values are biased, and the scientific inference will remain unchanged. But if the effects are really weak, then the results might indeed be consistent with noise. And, what about those 13,000 papers with no correction, especially common in the earlier literature? No, they shouldn’t be discarded out of hand either, but a particularly jaded eye is needed for those works, especially when comparing them to new references with improved methodological standards.”


Some Perspective

Let’s take a deep breath here. Science works slowly, and we need to see what other experts have to say in the coming months.

The authors reported that about 40,000 fMRI studies were published in the last 15 years. Of these, at most 3,500 + 13,000 = 16,500 are affected. That’s about 41% of the published articles (16,500 of 40,000) with the potential to have invalid results.

But, of course, in the learning field, we don’t care about all these studies, as most of them have very little to do with learning or memory. Indeed, a search of the whole history of PsycINFO (a social-science database) finds a total of 22,347 articles mentioning fMRI at all. Searching for articles that have a learning or memory aspect culls this number down to 7,056. This is a very rough accounting, but it does put the overall findings in some perspective.

As the authors warn, it’s not appropriate to dismiss the validity of all the research articles, even those in one of the suspect groups of studies. Instead, each potentially invalidated article has to be examined individually to determine whether it has problems.

Despite these comforting caveats, the findings by the scientists have implications for many neuroscience research studies over the past 15 years (when the bulk of neuroscience research has been done).

On the other hand, there truly haven’t been many neuroscience findings with much practical relevance to the learning field as of yet. See my review for a critique of overblown claims about neuroscience and learning. Indeed, as I’ve argued elsewhere, neuroscience’s potential to aid learning professionals probably rests in the future. So, being optimistic, maybe these statistical glitches will end up being a good thing. First, perhaps they’ll prompt greater scrutiny of research methodologies, improving future neuroscience research. Second, perhaps they’ll put the brakes on the myth-creating industrial complex around neuroscience until we have better data to report and utilize.

Still, a dark cloud of low credibility may settle over the whole neuroscience field itself, hampering researchers from getting funding, and making future research results difficult for practitioners to embrace. Time will tell.


Popular Press Articles Citing the Original Article (Published Before the Walk-Backs)

Here are some articles from the scientific press pointing out the potential danger:

  • http://arstechnica.com/science/2016/07/algorithms-used-to-study-brain-activity-may-be-exaggerating-results/
  • http://cacm.acm.org/news/204439-a-bug-in-fmri-software-could-invalidate-15-years-of-brain-research/fulltext
  • http://www.wired.co.uk/article/fmri-bug-brain-scans-results
  • http://www.zmescience.com/medicine/brain-imageflaw-57869/

==============================

Notes:

From Wikipedia (July 11, 2016): “In statistics, family-wise error rate (FWER) is the probability of making one or more false discoveries, or type I errors, among all the hypotheses when performing multiple hypotheses tests.”
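
As a worked illustration of that definition (a simplification that assumes independent tests, which spatially correlated fMRI voxels are not): with m independent tests each run at significance level α, the probability of at least one false discovery is

```latex
\mathrm{FWER} = 1 - (1 - \alpha)^{m},
\qquad \text{e.g.} \qquad
1 - (1 - 0.05)^{20} \approx 0.64 .
```

This is the same inflation demonstrated in the simulation sketch earlier in this post, and it is why per-test significance levels must be corrected whenever many comparisons are made.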

Talking Smile Sheets at ATD’s 2016 Annual Conference


Next week, I'm headed to Denver, Colorado for ATD's Annual Conference for 2016. The largest conference in the workplace learning and development field, it brings together all kinds of folks for a wondrous bacchanal of learning.

I’ll be talking about smile sheets (learner response forms) on Tuesday, May 24, 4:30–5:30 pm:
TU420 – Utilizing Radically Improved Smile Sheets to Improve Learning Results, in Room 708/710.


I’ll also be joining a “Science of Learning” panel on Monday, May 23, 1:00–2:00 pm:
M1CE – Community Express: Science of Learning Fast Track,
along with Sebastian Bailey, Justin Brusino, Paul Zak, and Patti Shank, in Room Mile High 1c.


If you're there at ATD's ICE — and you want to meet to discuss your organization's needs for a practical research-based approach to learning or evaluation design — send me a note at info@work-learning.com.