
Will’s Note: ONE DAY after publishing this first draft, I’ve decided that I mucked this up, mashing up what researchers, research translators, and learning professionals should focus on. Within the next week, I will update this to a second draft. You can still read the original below (for now):


Some evidence is better than other evidence. We naturally trust ten well-designed research studies more than one. We trust a well-controlled scientific study more than a poorly controlled one. We trust scientific research more than opinion research, unless all we care about is people’s opinions.

Scientific journal editors have to decide which research articles to accept for publication and which to reject. Practitioners have to decide which research to trust and which to ignore. Politicians have to know which lies to tell and which to withhold (kidding, sort of).

To help themselves make decisions, journal editors regularly rank each article on a continuum from strong research methodology to weak. The medical field routinely uses a level-of-evidence approach when making medical recommendations.

There are many taxonomies for “levels of evidence,” or the “hierarchy of evidence,” as the concept is commonly called. Wikipedia offers a nice review of the hierarchy-of-evidence concept, including some important criticisms.

Hierarchy of Evidence for Learning Practitioners

The suggested models for levels of evidence were created by and for researchers, so they are not directly applicable to learning professionals. Still, it’s helpful for us to have our own hierarchy of evidence, one that we might actually be able to use. For that reason, I’ve created one, adding the practical evidence that is missing from the research-focused taxonomies. As in the research versions, Level 1 is the strongest.

  • Level 1 — Evidence from systematic research reviews and/or meta-analyses of all relevant randomized controlled trials (RCTs) that have ALSO been utilized by practitioners and found both beneficial and practical from a cost-time-effort perspective.
  • Level 2 — Same evidence as Level 1, but NOT systematically or sufficiently utilized by practitioners to confirm benefits and practicality.
  • Level 3 — Consistent evidence from a number of RCTs using different contexts and situations and learners; and conducted by different researchers.
  • Level 4 — Evidence from one or more RCTs that utilize the same research context.
  • Level 5 — Evidence from one or more well-designed controlled trials without randomization of learners to different learning factors.
  • Level 6 — Evidence from well-designed cohort or case-control studies.
  • Level 7 — Evidence from descriptive and/or qualitative studies.
  • Level 8 — Evidence from research-to-practice experts.
  • Level 9 — Evidence from the opinion of other authorities, expert committees, etc.
  • Level 10 — Evidence from the opinion of practitioners surveyed, interviewed, focus-grouped, etc.
  • Level 11 — Evidence from the opinion of learners surveyed, interviewed, focus-grouped, etc.
  • Level 12 — Evidence curated from the internet.

Let me consider this Version 1 until I get feedback from you and others!
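If it helps to make the hierarchy concrete, here is a minimal sketch (in Python) of how a team might tag the evidence behind its design decisions with these levels and then favor the strongest available support. The level numbers mirror the draft list above, but the data structure, names, and example claims are purely my own illustration, not part of any standard.

    # Hypothetical sketch only: level numbers follow the draft hierarchy above;
    # the data structure, names, and example claims are illustrative.

    EVIDENCE_LEVELS = {
        1: "Systematic reviews / meta-analyses of RCTs, confirmed beneficial and practical in practice",
        2: "Systematic reviews / meta-analyses of RCTs, not yet confirmed in practice",
        3: "Consistent RCTs across contexts, learners, and researchers",
        4: "One or more RCTs within a single research context",
        5: "Well-designed controlled trials without randomization",
        6: "Well-designed cohort or case-control studies",
        7: "Descriptive and/or qualitative studies",
        8: "Research-to-practice experts",
        9: "Opinions of other authorities or expert committees",
        10: "Opinions of practitioners (surveys, interviews, focus groups)",
        11: "Opinions of learners (surveys, interviews, focus groups)",
        12: "Information curated from the internet",
    }

    def strongest(claims):
        """Return the claim backed by the strongest (lowest-numbered) evidence level."""
        return min(claims, key=lambda claim: claim["level"])

    # Example: two pieces of evidence bearing on the same design decision
    claims = [
        {"practice": "spaced practice", "level": 3, "source": "multiple RCTs"},
        {"practice": "spaced practice", "level": 10, "source": "practitioner survey"},
    ]
    best = strongest(claims)
    print(best["source"], "-", EVIDENCE_LEVELS[best["level"]])

The point of the sketch is simply that, when two sources disagree, the level numbers give you a defensible tie-breaker.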

Critical Considerations

  1. Some evidence is better than other evidence.
  2. If you’re not an expert in evaluating evidence, get insights from those who are; research-to-practice experts (those with considerable experience translating research into practical recommendations) are particularly valuable.
  3. Opinion research in the learning field is especially problematic, because the learning field comprises both strong and poor conceptions of what works.
  4. Learner opinions are problematic as well, because learners often have poor intuitions about what supports their learning.
  5. Curating information from the internet is especially problematic because it’s difficult to distinguish between good and poor sources.

Trusted Research-to-Practice Experts

(in no particular order, they’re all great!)

  • (Me) Will Thalheimer
  • Patti Shank
  • Julie Dirksen
  • Clark Quinn
  • Mirjam Neelen
  • Ruth Clark
  • Donald Clark
  • Karl Kapp
  • Jane Bozarth
  • Ulrich Boser

Robert Slavin, Director of the Center for Research and Reform in Education at Johns Hopkins University, recently wrote the following:

"Sooner or later, schools throughout the U.S. and other countries will be making informed choices among proven programs and practices, implementing them with care and fidelity, and thereby improving outcomes for their children. Because of this, government, foundations, and for-profit organizations will be creating, evaluating, and disseminating proven programs to meet high standards of evidence required by schools and their funders. The consequences of this shift to evidence-based reform will be profound immediately and even more profound over time, as larger numbers of schools and districts come to embrace evidence-based reform and as more proven programs are created and disseminated."

To summarize, Slavin says (1) that schools and other education providers will be using research-based criteria to make decisions, (2) that this change will have profound effects, significantly improving learning results, and (3) that many stakeholders and institutions within the education field will be making radical changes, including holding themselves and others to account for these improvements.

In Workplace Learning and Performance

But what about us? What about us workplace learning-and-performance professionals? What about our institutions? Will we be left behind? Are we moving toward evidence-based practices ourselves?

My career over the last 16 years has been devoted to helping the field bridge the gap between research and practice, so you might imagine that I have a perspective on this. Here it is, in brief:

Some of our field is moving toward research-based practices, but we have lots of roadblocks and gatekeepers stalling the journey for the large majority of the industry. In working on the Serious eLearning Manifesto, I've been pleasantly surprised by the large number of people who are already using research-based practices; as a whole, though, we are still stalled.

Of course, I'm still a believer. I think we'll get there eventually. In the meantime, I want to work with those who are marching ahead, using research wisely, creating better learning for their learners. There are research translators whom we can follow, folks like Ruth Clark, Rich Mayer, K. Anders Ericsson, Jeroen van Merriënboer, Richard E. Clark, Julie Dirksen, Clark Quinn, Gary Klein, and dozens more. There are practitioners whom we can emulate, because they are already aligning themselves with the research: Marty Rosenheck, Eric Blumthal, Michael Allen, Cal Wick, Roy Pollock, Andy Jefferson, JC Kinnamon, and thousands of others.

Here's the key question for those of you reading this: "How fast do you want to begin using research-based recommendations?"

And do you really want to wait for our sister profession to perfect this before taking action?