Note: Pilot is Over… Post kept for historical reasons only…


Wanted: Organizations to Pilot Leadership-Development Subscription Learning!

I am looking for organizations that are interested in piloting subscription learning as a tool to aid in developing their managers and energizing their senior management's strategic initiatives.

To read more about the benefits and possibilities for subscription learning and leadership development, read my article posted on the ATD (Association for Talent Development) website.

Potential Benefits

  • Reinforce concepts learned to ensure remembering and application.
  • Drive management behaviors through ongoing communications.
  • Utilize the scientifically verified spacing effect to boost learning.
  • Enable dialogue between your senior leaders and your developing managers.
  • Inculcate organizational values through scenario-based reflection.
  • Prompt organizational initiatives through your management cadre.
  • Engage in organizational learning, promoting cycles of reinforcement.
  • Utilize and pilot test new technologies, boosting motivation.
  • Utilize the power of subscription learning before your competitors do.

Potential Difficulties

  • Pilot efforts may face technical difficulties and unforeseen obstacles.

Why Will Thalheimer and Work-Learning Research, Inc.?

  • Experienced leadership-development trainer
  • Previously ran leadership-development product line (Leading for Business Results)
  • Leader in the use of scenario-based questions
  • Experienced in using subscription learning
  • Devoted to evidence-based practices
  • Extensive experience in practical use of learning research

Why Now?

  • Subscription-learning tools are available.
  • Mobile learning is gaining traction.
  • Substantial discounts for pilot organizations.

Next Steps!!

  • Sorry, the pilot is over…


Hello everyone! Next week, I'm heading to San Antonio's Riverwalk to learn from some of the most passionate and research-minded folks in our industry at ISPI's Performance Improvement Conference.

I'll also be sharing some of my work. Come join me at the following sessions:

  • Monday April 27 from 3:30 to 5:00
    TITLE:  Your Smile Sheets Suck!
    FORMAT:  PechaKucha (Performance Art, 20 slides, 20 seconds each).
  • Tuesday April 28 from 11:15 to 12:30
    TITLE:  Performance-Focused Smile Sheets: A Radical Rethinking
    FORMAT:  75-minute Conference Session
  • Tuesday April 28 starting at 12:45
    TITLE:  How and Why to Conduct Learning Audits
    FORMAT:  Roundtable Discussion

If you'd like to download or check out my slides for these sessions, click here (through April only).

Also, if you've got a business issue you'd like to discuss while there, let me know.

Contact me at: info at work-learning dot com.

I had the great pleasure of being interviewed recently by Brent Schlenker, long-time elearning advocate. We not only had a ton of fun talking, but Brent steered us into some interesting discussions.

He's created a three-part video series of our discussion.

Brent is a great interviewer, and he gets some top-notch folks to join him. Check out his blog.


The world’s best writers use editors to improve their work. Architectural engineers have their calculations reviewed to ensure structural integrity. Doctors seek second opinions on complex cases.

Unfortunately, we in the workplace learning field (and often in the education field) are left in the dark about the strengths and weaknesses of our learning interventions. Mostly we use smile sheets (learner response forms) to get feedback, even though hundreds of research studies show that smile-sheet ratings are not correlated with learning results. When we do measure learning, we often do it in a way that incorporates severe forms of bias, providing ourselves with false data that pushes us into faulty decision-making.

Learning Audits enable learning professionals to get valid feedback about the strengths and weaknesses of their learning interventions. With the feedback they produce, learning audits help organizations maximize the benefits of their learning.


What is a Learning Audit?

“A learning audit is a systematic review of a learning program to determine the program’s strengths and weaknesses—with the aim to guide subsequent improvements of that learning program and/or other learning programs. Learning audits are conducted in a high-integrity manner to ensure validity and limit bias.”

Learning audits can examine classroom training, elearning, mobile learning, on-the-job learning, self-initiated learning, and academic learning. They can be relatively quick-and-dirty or they can be highly exhaustive. They can cost a little or cost a lot.

Learning audits can utilize the following data-gathering techniques:

  1. Interview learners, learners’ supervisors, learning designers, learning developers, learning deliverers, and other organizational stakeholders.
  2. Focus-group learning stakeholders—most likely in groups of similar individuals.
  3. Survey learning stakeholders.
  4. Job-shadow people as they learn and work on the job.
  5. Research-benchmark the learning program or prototype (or design intentions) based on a validated list of key learning factors (such as the Decisive Dozen).
  6. Analyze organizational artifacts (like company newsletters and bulletin boards) and other communication devices from a learning perspective.
  7. Create a list of the learning media that have been utilized.
  8. Create a list of the available learning media.
  9. Analyze the learning measurement approaches utilized.
  10. Review smile sheet results.
  11. Review results of learning assessments—especially scenario-based decisions, case studies, simulations, and realistic hands-on exercises.
  12. In addition to reviewing results that are assessed during or immediately after learning, seek to review results that assess learning after a delay of a week or more.
  13. Review on-the-job performance results that are routinely captured.
  14. Review business results, especially those that are linked to learning.
  15. Review the quality and use of prompting mechanisms (like job aids).
  16. Review the quality and use of on-the-job learning affordances, including coaching, social media, knowledge-management systems, team learning, etc.
  17. Review the supports in place for creativity-based insight learning.
  18. Review the supports in place for after-learning application.
  19. Develop and deploy improved smile sheets, learning assessments, and performance assessments.
  20. Conduct A-B testing on different versions of the same learning program.
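Technique 20, A-B testing, can be as simple as comparing delayed-assessment scores between two versions of the same program. The following Python snippet is a minimal sketch of that comparison; the scores and the `welch_t` helper are hypothetical, invented purely for illustration, not drawn from any actual audit.

```python
# Minimal sketch of an A-B comparison between two versions of a learning
# program, using hypothetical delayed-assessment scores (0-100 scale).
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two independent samples
    (uses sample variance, and tolerates unequal variances)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

version_a = [72, 68, 75, 80, 66, 74, 71, 78]   # scores, program Version A
version_b = [81, 77, 85, 79, 88, 76, 83, 80]   # scores, program Version B

t = welch_t(version_b, version_a)
print(f"Mean A = {mean(version_a):.1f}, Mean B = {mean(version_b):.1f}, t = {t:.2f}")
```

In practice you would compare the t-statistic against a critical value (or compute a p-value), and, just as importantly, judge whether the size of the score difference is meaningful for the organization.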


What is Research Benchmarking?

Research Benchmarking is the process by which your learning interventions are benchmarked against research-based best practices.

Work-Learning Research excels in research benchmarking because of the exhaustive work we’ve done over more than a decade compiling scientific research on learning, memory, and instruction. By combining research-based knowledge and practical wisdom, our learning audits lead the world in providing actionable recommendations for improvement.

Your learning programs will be benchmarked against the Decisive Dozen, the 12 most important learning factors. In addition, your learning ecosystem will be reviewed to look for learning and application support, learning measurement issues, and business or organizational considerations.


Want to Know More?

If you’d like to discuss learning audits further, contact me, Dr. Will Thalheimer, at 1-617-718-0767 or email me by clicking here.

If you’d like to consider conducting your own learning audits, I encourage you to view this web page.


Last week I launched the website LearningAudit.com to promote the practice of learning audits.


It is my passionate belief that our learning interventions would be tremendously improved if we took a research-based systematic approach to reviewing them. LearningAudit.com is dedicated to the proposition that we can all do this.

On the site, there is a research-to-practice report, "How to Conduct a Learning Audit," and a job aid to support the learning-audit process.

I'm a bad blogger. I don't analyze my site traffic. I don't drive my readers down a purchase funnel. I don't sell advertising on my blog site. I'm a bad, bad blogger.

Two months ago, I set up Google Analytics to capture my blog's traffic. Holy heck, Batman! I found out something amazing, and I'm not really sure what to make of it.

Over April and May 2014, my most popular blog post (that is, the one most visited) was a post I published in 2006. How popular was this 2006 blog post? It accounted for 50% of all my blog traffic! Fifty freakin' percent! And April and May have been relatively busy blogging months for me, so it wasn't like I wasn't creating new traffic.

What blog post was the envy of all the others?

It was this one, on one of the biggest myths in the learning field.

I guess this makes sense: (1) the topic is important, (2) the myth keeps resurfacing, and (3) by now the link has been posted in hundreds of places.

If I die today, at least I will have made a bit of a difference, if only through this one blog post.

I'm a bad, bad blogger.  <<<<<WINK>>>>>

If you're going to the eLearning Guild's Learning Solutions Conference this coming week in Orlando, come join me, and say hello!

I'll be speaking in three sessions:

Featured Session (F2)
Subscription Learning: A Fundamentally Different Form of eLearning

Time: Wednesday March 19, 10:45AM

Details on the session

Slides for the session

Over 300 people are expected to attend. Get there early for a good seat!

Concurrent Session (105)
Serious eLearning Manifesto (Also with Clark Quinn and Michael Allen)

Time: Wednesday March 19, 1:00PM

Details on the session

We will hand out paper copies of the Manifesto at the session (there are no slides).

Morning Buzz (MB31)

Time: Thursday March 20, 7:15AM

A casual conversation about the eLearning Manifesto and Instructional Design

Note: Look for Clark Quinn's or Michael Allen's name (mine is not listed), but I'll be there!


Sign up today for the eLearning Guild's Thought-Leaders series, in which they've asked me to reflect on my 15 years of bridging the gap between research and practice. It's not until September, but sign-up is now open.

Click here to view details and to sign up…

The description begins this way:

As workplace learning-and-performance professionals, we live in a world of shiny toys, blinding clouds of floating ash, and darkness. While we have passion and good intentions, we are unable to maximize performance because we are infected with misinformation about how learning really works.

Should be fun!

When searching for "learning research," the Work-Learning Research website is ranked as follows:

  • #4 on Google
  • #4 on Bing
  • #7 on Yahoo

Interestingly, we hardly ever get paid to do research. Mostly we get paid to use research wisdom to make practical recommendations, for example in the following areas:

  1. Learning Design
  2. E-Learning
  3. Training
  4. Onboarding
  5. Safety
  6. Learning Evaluation
  7. Organizational Change
  8. Leadership Development
  9. Improving the Learning Department's Results
  10. Making Fundamental Change in Your Organization's Learning Practices

Research, for me, is a labor of love and also a way to help clients cut through opinions and get to practical truths that will actually make a difference.

But still, we are happy that the world-according-to-search-engines (WOTSE) values the research perspective we offer.

And here's a secret. We don't pay any search-optimizer companies, nor do we do anything intentional to raise our search profile (who has time or money for that?). Must be our dance moves or something…

Research Benchmarking

Research Benchmarking is the process by which your learning interventions are benchmarked against research-based best practices.

While random-assignment between-group research is likely to be too expensive and time-consuming for most of us, and benchmarking our work against other industry players is likely to push us toward mediocrity, Research Benchmarking offers a potent alternative.

Learning programs are examined to determine how well they (a) create understanding, (b) support long-term remembering (that is, minimize forgetting), and (c) motivate on-the-job performance. They are research-benchmarked against the 12 most decisive factors in learning design.

If you’d like to discuss research benchmarking further, contact me, Dr. Will Thalheimer, at 1-888-579-9814 or email me.