Guest Post from Robert O. Brinkerhoff: 70-20-10: The Good, the Bad, and the Ugly


This is a guest post by Robert O. Brinkerhoff (www.BrinkerhoffEvaluationInstitute.com).

Rob is a renowned expert on learning evaluation and performance improvement. His books, Telling Training’s Story and Courageous Training, are classics.

______________________________

70-20-10: The Good, the Bad, and the Ugly

The 70-20-10 framework may not have much, if any, research basis, but it is still a good reminder to all of us in the L&D and performance improvement professions that the workplace is a powerful teacher, offering many opportunities for practice, feedback, and improvement.

But we must also recognize that a lot of the learning that takes place on the job may not be for the good. I have held jobs in agencies, corporations, and the military where I learned many things that were counter to what the organization wanted me to learn: how to fudge records, how to take unfair advantage of reimbursement policies, how to extend coffee breaks well beyond their prescribed limits, how to stretch sick leave, and so forth.

These were relatively benign instances. Consider this: Where did VW engineers learn how to falsify engine emissions results? Where did Wells Fargo staff learn how to create and sell fake accounts to their unwitting customers?

Besides these egregiously ugly examples, we also have to recognize that when L&D programming is intended to support new strategic and other change initiatives, the last thing the organization needs is more people learning how to do their jobs in the old way. AT&T, for example, worked very hard to drive new beliefs and actions to enable the business to shift from landline technologies to wireless; on-the-job learning dragged them backwards and creates problems still today. As Allstate Insurance tries to shift its sales focus away from casualty policies to financial planning services, the old guard teaches the opposite actions, as they continue to harvest the financial benefits of policy renewals. Any organization that has to make wholesale and fundamental shifts to execute new strategies will have to cope with the negative effects of years of on-the-job learning.

When strategy is new, there are few if any on-the-job pockets of expertise and role models. Training new employees for existing jobs is a different story. Here, obviously, the on-job space is an entirely appropriate learning resource.

In short, we have to recognize that not all on-the-job learning is learning that we want. Yet on-the-job learning remains an inexorable force that we in L&D must learn how to understand, leverage, guide, and manage.

Purpose of Workplace Learning and Development: Survey Inquiry


Seek Research-to-Practice Experts as Your Trusted Advisors


I added these words to the sidebar of my blog, and I like them so much that I’m sharing them here as a blog post as well.

Please seek wisdom from research-to-practice experts — the dedicated professionals who spend time in two worlds to bring the learning field insights based on science. These folks are my heroes, given their often quixotic efforts to navigate through an incomprehensible jungle of business and research obstacles.

These research-to-practice professionals should be your heroes as well. Not mythological heroes, not heroes etched into the walls of faraway mountains. These heroes should be sought out as our partners, our fellow travelers in learning, as people we hire as trusted advisors to bring us fresh research-based insights.

The business case is clear. Research-to-practice experts not only enlighten and challenge us with ideas we might not have considered — ideas that make our learning efforts more effective in producing business results — they also prevent us from engaging in wasted efforts, saving our organizations time and money while enabling us to focus more productively on the learning factors that actually matter.

Will Thalheimer Interviewed on HRD TV in Belgium


I had the great pleasure of being invited to provide a keynote at the VOV Congress in Leuven, Belgium. There were two highlights in my day: my keynote and being interviewed on HRD TV by Sandra De Milliano.

Audience members asked me questions and I did my best to answer them. We had questions about elearning, microlearning, PowerPoint, learning styles, rewards for learners, the spacing effect, remembering and application, and more.

Take a look…


Getting Better Responses on Your Smile Sheets


One of the most common questions I get when I speak about the Performance-Focused Smile-Sheet approach (see the book’s website at SmileSheets.com) is “What can be done to get higher response rates from my smile sheets?”

Of course, people also refer to smile sheets as evals, level 1s, happy sheets, hot or warm evaluations, response forms, reaction forms, etc. The term covers both paper-and-pencil forms and online surveys. Indeed, as smile sheets go online, more and more people are finding that online surveys get a much lower response rate than in-classroom paper surveys.

Before I give you my list for how to get a higher response rate, let me blow this up a bit. The thing is, while we want high response rates, there’s something much more important: response relevance and precision. We want the questions to relate to learning effectiveness, not just learning reputation and learner satisfaction. We also want learners to be able to answer the questions knowledgeably and to give our questions their full attention.

If we have bad questions — ones that use Likert-like or numeric scales, for example — it won’t matter that we have high response rates. In this post, I’m NOT going to focus on how to write better questions. Instead, I’m just tackling how we can motivate our learners to give our questions more of their full attention, thus increasing the precision of their responses while also increasing our response rates.

How to Get Better Responses and Higher Response Rates

  1. Ask with enthusiasm, while also explaining the benefits.
  2. Have a trusted person make the request (often an instructor who our learners have bonded with).
  3. Mention the coming smile sheet early in the learning (and more than once) so that learners know it is an integral part of the learning, not just an add-on.
  4. While mentioning the smile sheet, let folks know what you’ve learned from previous smile sheets and what you’ve changed based on the feedback.
  5. Tell learners what you’ll do with the data, and how you’ll let them know the results of their feedback.
  6. Highlight the benefits to the instructor, to the instructional designers, and to the organization. Those who ask can mention how they’ve benefited in the past from smile sheet results.
  7. Acknowledge the effort that they — your learners — will be making, maybe even commiserating with them that you know how hard it can be to give their full attention when it’s the end of the day or when they are back to work.
  8. Put the time devoted to the survey in perspective, for example, “We spent 7 hours today in learning, that’s 420 minutes, and now we’re asking you for 10 more minutes.”
  9. Assure your learners that the data will be confidential and that responses are aggregated so that an individual’s answers are never shared.
  10. Let your learners know the percentage of people like them who typically complete the survey (caveat: if it’s relatively high).
  11. Use more distinctive answer choices. Avoid Likert-like answer choices and numerical scales — because learners instinctively know they aren’t that useful. Offer concrete choices instead, for example ones that describe different levels of ability to apply what was learned.
  12. Ask more meaningful questions. Use questions that learners can answer with confidence. Ask questions that focus on meaningful information. Avoid obviously biased questions — as these may alienate your learners.

How to Get Better Responses and Higher Response Rates on DELAYED SMILE SHEETS

Sometimes we’ll want to survey our learners well after a learning event — for example, three to five weeks later. Delayed smile sheets are perfectly positioned to find out more about how the learning is relevant to the actual work or to our learners’ post-learning application efforts. Unfortunately, prompting action — that is, getting learners to engage with our delayed smile sheets — can be particularly difficult when asking for this favor well after learning. Still, there are some things we can do — in addition to the list above — that can make a difference.

  1. Tell learners what you learned from the end-of-learning smile sheet they previously completed.
  2. Ask the instructor who bonded with them to send the request (instead of an unknown person from the learning unit).
  3. Send multiple requests, preferably using a mechanism that only sends these requests to those who still need to complete the survey (see the sketch after this list).
  4. Have the course officially end sometime AFTER the delayed smile sheet is completed, even if that is largely just a perception. Note that multiple-event learning experiences lend themselves to this approach, whereas single-event learning experiences do not.
  5. Share with your learners a small portion of the preliminary data from the delayed smile sheet. “Already, 46% of your fellow learners have completed the survey, with some intriguing tentative results. Indeed, it looks like the most relevant topic as rated by your fellow learners is… and the least relevant is…”
  6. Share with them the names or job titles of some of the people who have completed the survey already.
  7. Share with them the percentage of folks from their unit who have responded already, or share a comparison across units.
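
For point 3 above, here’s a minimal sketch of the “remind only non-completers” logic, assuming your survey tool can export both a roster and a list of completers. All the names here — remind_non_completers, send_reminder, and the toy email addresses — are hypothetical placeholders, not references to any particular survey product.

```python
# Minimal sketch: remind only learners who have not yet responded.
# remind_non_completers, send_reminder, and the toy data below are
# hypothetical placeholders -- adapt them to your survey tool's export.

def remind_non_completers(roster, completers, send_reminder):
    """Send a reminder to each learner who has not yet responded."""
    done = set(completers)
    pending = [email for email in roster if email not in done]
    for email in pending:
        send_reminder(email)
    return pending

# Toy usage: print() stands in for a real email-sending function.
roster = ["ana@example.com", "ben@example.com", "cai@example.com"]
completers = ["ben@example.com"]
pending = remind_non_completers(roster, completers, print)
print(f"{len(pending)} of {len(roster)} learners still need a reminder")
```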

What about INCENTIVES?

When I ask audiences for their ideas for improving responses and increasing response rates, they often mention some sort of incentive, usually based on a lottery or raffle: “If you complete the survey, your name will be submitted to have a chance to win the latest tech gadget, a book, time off, lunch with an executive, etc.”

I’m a skeptic. I’m open to being wrong, but I’m still skeptical about the cost/benefit calculation. Certainly, for some audiences an incentive will increase rates of completion. And for some audiences, the benefits may even outweigh the harms that come with incentives.

What harms, you might ask? When we provide an external incentive, we might be sending a message to some learners that we know the task has no redeeming value or is tedious or difficult. People who see their own motivation as caused by the external incentive are potentially less likely to seriously engage with our questions, producing bad data. We’re also not just having an effect on the current smile sheet: when we incentivize people today, they may be less willing next time to engage in answering our questions. They may also be pushed into believing that smile sheets are difficult, worthless, or worse.

Ideally, we’d like our learners to want to provide us with data, to see answering our questions as a worthy and helpful exercise, one that is valuable to them, to us, and to our organization. Incentives push against this vision.

 

Doing Research On Our Learning Products


The learning profession has been blessed in recent years with a steady stream of scientific research that points to practical recommendations for designers of learning. If you or your organization are NOT hooked into the learning research, find yourself a research translator to help you! Call me, for example!

That’s the good news, but I have bad news for you too. In the old days, it wasn’t hard to create a competitive advantage for your company by staying abreast of the research and using it to design your learning products and services. Pretty soon, that won’t be enough. As the research becomes more widely known, you’ll have to do more to gain a competitive advantage. Vendors especially will have to differentiate their products — NOT just by basing them on the research — but also by conducting research (A-B testing at a minimum) on their own products.
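
To make the A-B testing suggestion concrete, here is a minimal sketch — standard-library Python only, with invented counts — comparing two versions of a learning product on a binary outcome, such as the proportion of learners who pass a delayed post-test. The two-proportion z-test shown is one common choice, not the only defensible analysis.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / std_err
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Invented example: version A (current design) vs. version B (redesign).
p_a, p_b, z, p = two_proportion_z_test(62, 100, 74, 100)
print(f"A: {p_a:.0%}  B: {p_b:.0%}  z = {z:.2f}  p = {p:.3f}")
```

Even a rough comparison like this moves a vendor from “our product is based on research” to “our product has been tested” — a much stronger claim.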

I know of at least a few companies right now that are conducting research on their own products. They aren’t advertising their research, because they want to get a jumpstart on the competition. But eventually, they’ll begin sharing what they’ve done.

Do you need an example of a company that has had its product tested? Check out this page. Scroll down to the bottom and look at the 20 or so research studies that have been done using the product. Looks pretty impressive, right?

To summarize, there are at least five benefits to doing research on your own products:

  1. Gain a competitive advantage by learning to make your product better.
  2. Gain a competitive advantage by supporting a high-quality brand image.
  3. Gain a competitive advantage by enabling the creation of unique and potent content marketing.
  4. Gain a competitive advantage by supporting creativity and innovation within your team.
  5. Gain a competitive advantage by creating an engaging and learning-oriented team environment.

Definition of MicroLearning


I’ve looked for a good definition of microlearning, but because I couldn’t find one, I’ve created my own.

Microlearning involves the use of:

“Relatively short engagements in learning-related activities—typically ranging from a few seconds up to 20 minutes (or up to an hour in some cases)—that may provide any combination of content presentation, review, practice, reflection, behavioral prompting, performance support, goal reminding, persuasive messaging, task assignments, social interaction, diagnosis, coaching, management interaction, or other learning-related methodologies.”

Microlearning has five utilization cases:

  1. Course Replacement
    Provides training content and learning support, often as a replacement for classroom training or long-form elearning.
  2. Course Augmentation
    Provides after-course or within-course streams of short learning interactions to reinforce, strengthen, or deepen learning.
  3. Retrieval Support
    Provides retrieval practice, spaced repetitions, and reminding to ensure knowledge and skills can be remembered when needed.
  4. Just-In-Time (Moment-of-Need) Learning
    Provides information when learners need it to perform a task they are working on.
  5. Behavioral Prompts
    Provides action nudges, task assignments, or performance support to directly prompt and support behavior.

As is probably obvious, these five use cases overlap, and a single microlearning thread may utilize more than one of the methodologies suggested. For example, when using microlearning as a replacement for a standard elearning course, you might also build retrieval support and behavioral prompts into your full learning design.
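
To illustrate just one of these use cases — Retrieval Support — here is a minimal sketch of a scheduler that spaces short retrieval-practice prompts over expanding intervals after a course ends. The interval values are my own illustrative assumptions, not research-prescribed numbers.

```python
from datetime import date, timedelta

def schedule_retrieval_prompts(course_end, intervals_in_days=(2, 7, 16, 35)):
    """Return the dates on which to send spaced retrieval-practice prompts.

    The expanding intervals are illustrative only; tune them to your
    content and to how long learners need to retain it.
    """
    return [course_end + timedelta(days=d) for d in intervals_in_days]

# Toy usage: a course that ends March 1, 2017.
for send_date in schedule_retrieval_prompts(date(2017, 3, 1)):
    print(send_date.isoformat())
```

The same skeleton could just as easily drive course-augmentation streams or behavioral prompts; only the payload changes.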

Will’s History of Spacing Out on Learning


As you know, if you’ve dabbled in my work over the years, I’ve closely followed the research on the spacing effect, both from a research perspective and a practical perspective. Indeed, I was one of the first in the workplace learning field to recognize its practical significance, which I wrote about as early as 2002. In 2006 I published the research-to-practice report entitled Spacing Learning Events Over Time, which should — if there were justice in the world (LOL) or viable trade organizations (OUCH) — be enshrined in the Learning and Development Hall of Fame. Snickers are welcome. Taza Chocolate even better.

A few years ago, still wanting to advocate for the practical use of the spacing effect, I began speaking about Subscription Learning at conferences and developed a website (SubscriptionLearning.com) to encourage folks in the learning field to utilize the spacing effect in their learning designs. SubscriptionLearning.com is being retired in 2017, as it is no longer needed; blog posts from that website have been incorporated into this blog.

I am grateful to the enlightened organizations that have supported my work over the years, and specifically to the individuals who continue to encourage the reading of the 2006 research report. Feel free to share it yourself.

Now in 2017, I am grateful to another organization, Learning Technologies (in the UK), which is sponsoring me to speak on the spacing effect at their conference in a few weeks. As part of my efforts, I am developing a new presentation and updating my research compilation on the spacing effect. Stay tuned to this blog, as I’m likely to share a few of my findings as I dig into the research.

Indeed, the research on spacing is some of the most interesting I’ve studied over the years. The first thing that fascinates is that there is so much damn research on the spacing effect, also referred to as spaced learning, distributed practice, and interleaving. In 1992, Bruce and Bahrick counted up the number of scientific studies on spacing and found over 300 articles at that time. Every year, there are more and more scientific articles published on spacing. By my rough count of journal articles cited on PsycINFO (a primary social-science database), over the last three years there have been 31 new articles published on the spacing effect (7 in 2014, 14 in 2015, and 10 in 2016).

One of the main reasons that so many research articles are published on the spacing effect is that the phenomenon is so intriguing. Why would spacing repetitions over time produce so much more remembering than giving the learners the exact same repetitions but simply massing them all at once or spacing them with less time in between? Freakin’ fascinating! So researchers keep digging into the complexities.

Harry Bahrick and Lynda Hall wrote in 2005 that “The spacing effect is one of the oldest and best documented phenomena in the history of learning and memory research.” And just last year, in a scientific review article, Geoffrey Maddox wrote, “Because of its robustness, the spacing effect has the potential to be applied across a variety of contexts as a way of improving learning and memory.”

Stay tuned, as I hope to be spacing my research compilations over time…

Research

Bahrick, H. P., & Hall, L. K. (2005). The importance of retrieval failures to long-term retention: A metacognitive explanation of the spacing effect. Journal of Memory and Language, 52, 566-577.

Maddox, G. B. (2016). Understanding the underlying mechanism of the spacing effect in verbal learning: A case for encoding variability and study-phase retrieval. Journal of Cognitive Psychology, 28(6), 684-706.

Thalheimer, W. (2006, February). Spacing Learning Events Over Time: What the Research Says. Available at: http://work-learning.com/catalog.html.

Stop the Madness — Animated Screen Writing in eLearning


Is anyone else getting completely annoyed watching someone’s hand draw and write on videos and elearning?

OMG! It’s beginning to drive me nuts! What the hell is wrong with us?

Here’s the thing. When this was new, it was engaging. Now it’s cliché! Now most people are habituated to it. What we’re doing now is taking one of our tools and completely overusing it.

Let’s be smarter.

Testing for Instructional Designers — A Common Mistake


Somebody sent me a link to a YouTube video today — a video created to explain to laypeople what instructional design is. Most of it was reasonable, until it gave the following example, narrated as follows:

“… and testing is created to clear up confusion and make sure learners got it right.”


Something is obviously wrong here — something an instructional designer ought to know. What is it?

Scroll down for the answer…

But before you scroll down, come up with your own answer…

.

.

.

.

.

.

.

.

.

.

.

.

Answer: 

The test question is devoid of real-world context. Instead of asking a text-based question, we could provide an image and ask learners to point to the access panel.

Better yet, we could have them work on a simulated real-world task and follow steps that would enable them to complete the simulated task only if they used the access panel as part of their task completion.

Better yet, we could have them work on an actual real-world task… et cetera…

Better yet, we might first ask ourselves whether anybody really needs to “LEARN” where the access panel is — or would they just find it on their own without being trained or tested on it?

Better yet, we might first ask ourselves whether we really need a course in the first place. Maybe we’d be better off creating a performance-support tool that takes them through troubleshooting steps — with zero or very little training required.

Better yet, we might first ask ourselves whether we could design our equipment so that technicians don’t need training or performance support.

.

.

.

Or we could ask ourselves existential questions about the meaning and potency of instructional design, about whether a career devoted to helping people learn work skills is worthy to be our life’s work…

Or we could just get back to work and crank out that test…

SMILE…