CEOs are calling for their companies to be more innovative in an ever-accelerating competitive landscape! Creativity is the key leverage point for innovation. Research I’ve compiled from the science on creativity shows that unique and valuable ideas are generated when people and teams look beyond their inner circle to those in their peripheral networks. Given this, a smart company will seed itself with outside influencers who are working with new ideas.

But what are the vast majority of big companies doing that kills their own creativity? They are making it difficult or virtually impossible for their front-line departments to hire small businesses and consultants. Hiring them is allowed, but massive walls are being built, and those walls have grown higher over the last five to ten years:

  1. Only fully vetted companies can be hired, forcing small, lean companies to waste time on compliance or turn away in frustration. This also causes large-company managers to favor the vetted companies, even if a small business or consultant would provide better value or more pertinent products or services.
  2. Master Service Agreements are required (pushing small companies away due to time and legal fees).
  3. Astronomical amounts of insurance are required. Why the hell do consultants need $2 million in insurance, even when they are consulting on non-safety-related issues? Why do they need any insurance at all if they are not impacting critical safety factors?
  4. Companies can’t be hired unless they’ve been in business for 5 or 10 or 15 years, completely eliminating the most unique and innovative small businesses or consultants—those who recently set up shop.
  5. Minimum company revenues are required, often in the millions of dollars.

These barriers, of course, aren’t the only forces pushing large organizations away from small businesses and consultants. Small companies often can’t afford sales forces or marketing budgets, so they are less likely to capture large companies’ attention. Small companies aren’t seen as safe bets because they don’t have a name, or their website is not as beautiful, or they haven’t yet worked with other big-name companies, or they don’t speak the corporate language. Given these surface characteristics, only the bravest, most visionary frontline managers will take the risk of making the creative hire. And even then, their companies are making it increasingly hard for them to follow through.

Don’t be fooled by the high-visibility anecdotes that show a CEO hiring a book author or someone featured in Wired, HBR, or on some podcast. Yes, CEOs and senior managers can easily find ways to hire innovators, and the resulting top-down creativity infusion can be helpful. But it can be harmful as well! Too often, senior managers are too far removed from knowing what works and what’s needed on the front lines. They push things innocently, not knowing that they are distracting the troops from what’s most important, or, worse, pushing the frontline teams to do stupid stuff against their best judgment.

Even more troubling, these anecdotes of top-down innovation are too few and far between. There may be ten senior managers who can hire innovation seeds, but there are dozens, hundreds, or thousands of folks who might be doing so but can’t.

A little digression: it’s the frontline managers who know what’s needed, or, perhaps more importantly, the “leveraging managers,” if I can coin a term. These are the managers who are deeply experienced and wise in the work that is getting done, but high enough in the organization to see the business-case big picture. I will specifically exclude “bottle-cap managers,” who have little or no experience in a work area but were placed there because they have business experience. Research shows these kinds of hires are particularly counterproductive for innovation.

Let me summarize.

I’m not selling anything here. I’m in the training, talent-development, and learning-evaluation business as a consultant; I’m not an innovation consultant! I’m just sharing this out of my own frustration with these stupid, counterproductive barriers that I and my friends in small businesses and consultancies have experienced. I’m also venting to issue a call to action for large organizations: wake the hell up to the harm you are inflicting on yourselves and on the economy in general. By not supporting the most innovative small companies and consultants, you are dumbing down the workforce for years to come!

Alright! I suppose I should offer to help instead of just gripe! I have done extensive research on creativity. But I don’t have a workshop developed, the research is not yet in publishable form, and it’s not really what I’m focused on right now. I’m focused on innovating in learning evaluation (see my new learning-evaluation model and my new method for capturing valid and meaningful data from learners). These are two of the most important innovations in learning evaluation in the past few years!

However, a good friend of mine did, just last month, suggest that the world should see the research on creativity that I’ve compiled (thanks Mirjam!). Given the right organization, situation, and requirements—and the right amount of money—I might be willing to take a break from my learning-evaluation work and bring this research to your organization. Contact me to try and twist my arm!

I’m serious, I really don’t want to do this right now, but if I can capture funds to reinvest in my learning-evaluation innovations, I just might be persuaded. On the contact-me link, you can set up an appointment with me. I’d love to talk with you if you want to talk innovation or learning evaluation.

This is a guest post by Annette Wisniewski, Learning Strategist at Judge Learning Solutions. In this post she shares an experience building a better smile sheet for a client.

She also does a nice job showing how to improve questions by getting rid of Likert-like scales and replacing them with more concrete answer choices.

______________________________

Using a “Performance-Focused Smile Sheets” Approach for Evaluating a Training Program

Recently, one of our clients experienced an alarming drop in customer confidence, so they hired us, Judge Learning Solutions, to evaluate the effectiveness of their customer support training program. I was the learning strategist assigned to the project. Since training never works in isolation, I convinced the client to let me evaluate both the training program and the work environment.

I wanted to create the best survey possible to gauge the effectiveness of the training program as well as evaluate the learners’ work environment, including relevant tools, processes, feedback, support, and incentives. I also wanted to create a report that included actionable recommendations on how to improve both the training program and workforce performance.

I had recently finished reading Will’s book, Performance-Focused Smile Sheets, so I knew that traditional Likert-based questions are problematic. They are very subjective, don’t draw clear distinctions between answer choices, and limit respondents to one, sometimes insufficient, option.

For example, most smile sheets ask learners to evaluate their instructor. A traditional smile sheet question might ask learners to rate the instructor using a Likert scale:

   How would you rate your course instructor?

  1. Very ineffective
  2. Somewhat ineffective
  3. Somewhat effective
  4. Very effective

But the question leaves too much open to interpretation. What does “ineffective” mean? What does “effective” mean? One learner might have completely different criteria for an “effective” instructor than another. What is the difference between “somewhat ineffective” and “somewhat effective”? Could it be the snacks the instructor brought in mid-afternoon? It’s hard to tell. Also, how can the instructor use this feedback to improve next time? There’s just not enough information in this question to make it very useful.

For my evaluation project, I wrote the survey question using Will’s guidelines to provide distinct, meaningful options, and then allowed learners to select as many responses as they wanted.

   What statements are true about your course instructor? Select all that apply.

  1. Was OFTEN UNCLEAR or DISORGANIZED.
  2. Was OFTEN SOCIALLY AWKWARD or INAPPROPRIATE.
  3. Exhibited UNACCEPTABLE LACK OF KNOWLEDGE.
  4. Exhibited LACK OF REAL-WORLD EXPERIENCE.
  5. Generally PERFORMED COMPETENTLY AS A TRAINER.
  6. Showed DEEP SUBJECT-MATTER KNOWLEDGE.
  7. Demonstrated HIGH LEVELS OF REAL-WORLD EXPERIENCE.
  8. MOTIVATED ME to ENGAGE DEEPLY IN LEARNING the concepts.
  9. Is a PERSON I CAME TO TRUST.

It’s still just one question, but in this case, the learner was able to provide more useful feedback to both the instructor and to the course sponsors. As Will recommended, I added proposed standards and then tracked percentages of each response to include in my report.
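A tabulation like the one described above takes only a few lines to compute. The sketch below is purely illustrative: the option labels, responses, and resulting percentages are invented, not the article’s actual survey data.

```python
# Hypothetical sketch: tally a select-all-that-apply instructor question
# and report the percentage of respondents who chose each option.
from collections import Counter

OPTIONS = [
    "Often unclear or disorganized",
    "Performed competently as a trainer",
    "Showed deep subject-matter knowledge",
    "Motivated me to engage deeply",
]

# Each respondent's answer is the set of options they selected.
responses = [
    {"Performed competently as a trainer", "Showed deep subject-matter knowledge"},
    {"Performed competently as a trainer"},
    {"Often unclear or disorganized"},
    {"Performed competently as a trainer", "Motivated me to engage deeply"},
]

def percentages(responses, options):
    """Percent of respondents who selected each option."""
    n = len(responses)
    counts = Counter(opt for resp in responses for opt in resp)
    return {opt: round(100 * counts[opt] / n) for opt in options}

pcts = percentages(responses, OPTIONS)
# e.g. 3 of 4 respondents chose "Performed competently as a trainer" -> 75
```

Percentages per option (rather than a single average score) are what make it easy to compare each result against a proposed standard.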

I used this same approach when asking learners about the course learning objectives.

Instead of asking a question using a typical Likert scale:

   After taking the course, I am now able to navigate the system.

  1. Strongly agree
  2. Agree
  3. Neither agree nor disagree
  4. Disagree
  5. Strongly disagree

I created a more robust question that provided better information about how well the learner was able to navigate the system and what the learner felt he or she needed to become more proficient. I formatted the question as a matrix, so I could ask about all of the learning objectives at once. The learner perceived this to be one question, but I gleaned nine questions’ worth of data out of it. Here’s a redacted excerpt of that question as it appeared in my report, shortened to the first four learning objectives.

The questions took a little more time to write, but the same amount of time for respondents to answer. At first, the client was hesitant to use this new approach to survey questions, but it didn’t take them long to see how I would be able to gather much more valuable data.
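The matrix approach described above can be unpacked mechanically: each row of the matrix is one learning objective, and each respondent picks one competence level per row. The sketch below is a hypothetical illustration; the objective names, answer levels, and responses are invented, not the client’s data.

```python
# Hypothetical sketch: unpack a matrix question (one row per learning
# objective, one competence level chosen per row) into per-objective tallies.
from collections import Counter, defaultdict

LEVELS = [
    "Cannot do this yet",
    "Can do this with help",
    "Can do this on my own",
    "Can teach this to others",
]

# Each respondent answers every row of the matrix.
responses = [
    {"Navigate the system": "Can do this on my own",
     "Log a support ticket": "Can do this with help"},
    {"Navigate the system": "Can do this with help",
     "Log a support ticket": "Can do this on my own"},
    {"Navigate the system": "Can do this on my own",
     "Log a support ticket": "Cannot do this yet"},
]

def tally_matrix(responses):
    """Count how many respondents chose each level, per objective."""
    tallies = defaultdict(Counter)
    for resp in responses:
        for objective, level in resp.items():
            tallies[objective][level] += 1
    return tallies

tallies = tally_matrix(responses)
```

Each objective then gets its own distribution, which is what lets one perceived question yield many questions’ worth of data.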

The descriptive answer choices of the survey, combined with interviews and extant data reviews, allowed me to provide my client with a very thorough evaluation report. The report not only included a clear picture of the current training program, but also provided detailed and prioritized recommendations on how to improve both the training program and the work environment.

The client was thrilled. I had given them not only actionable recommendations but also the evidence they needed to procure funding to make the improvements. When my colleague checked back with them several months later, they had already implemented several of my recommendations and were in the process of implementing more.

I was amazed at how easy it was to improve the quality of the data I gathered, and it certainly impressed my client. I will never write evaluation questions again any other way.

If you plan on conducting a survey, try using Will’s approach to writing performance-focused questions. Whether you are evaluating a training program or looking for insights on improving workforce performance, you will be happy you did!

I’m thrilled to announce that my Gold-Certification Workshop on Performance-Focused Smile Sheets is now open for registration, with access available in about a week, on Tuesday, May 14, 2019.

This certification workshop is the culmination of years of work and practice. First there was my work with clients on evaluation. Then there was the book. Then I gained extensive experience building and piloting smile sheets with a variety of organizations. I taught classroom and webinar workshops. I spoke at conferences and gave keynotes. And of course, I developed and launched LTEM (The Learning-Transfer Evaluation Model), which is revolutionizing the practice of workplace learning—and providing the first serious alternative to the Kirkpatrick-Katzell Four-Level Model.

Over the last year, I’ve been building an online, asynchronous workshop that is rigorous, comprehensive, and challenging enough to merit a certification. It’s now ready to go!

I’d love it if you would enroll and join me and others in learning!

You can learn more about this Gold-Certification Workshop by clicking here.

 

Congratulations to Steve Semler who has become Work-Learning Research’s first certification earner by successfully completing the Work-Learning Academy course on Performance-Focused Smile Sheets!

The certification workshop is not yet available to the public but Steve generously agreed to take the program before its release. Note that certification verification can be viewed here.

Those who want to be notified of the upcoming release date can do that here.

 


I’ve had the distinct honor of being invited to speak at the Learning Technologies conference in London for three years in a row. This year, I talked about two learning innovations:

  • Performance-Focused Learner Surveys
  • LTEM (The Learning-Transfer Evaluation Model)

It was a hand-raising experience!

Most importantly, they have done a great job capturing my talk on YouTube.

Indeed, although I’ve made some recent improvements in the way I talk about these two learning innovations, the video does an excellent job of capturing some of the main points I’ve been making about the state of learning evaluation and two innovations that are tearing down some of the obstacles that have held us back from doing good evaluation.

Thanks to Stella Collins at Stellar Learning for organizing and facilitating my session!

Special thanks to the brilliant conference organizer and learning-industry influencer Robert Taylor for inviting and supporting me and my work.

Again, click here to see the video of my presentation at Learning Technologies London 2019.

While I was in London a few months ago to talk about learning evaluation, I was interviewed by LearningNews on that very topic.

Some of what I said:

  • “Most of us have been doing the same damn thing we’ve always done [in learning evaluation]. On the other hand, there is a breaking of the logjam.”
  • “A lot of us are defaulting to happy sheets, and happy sheets that aren’t effective.”
  • “Do we in L&D have the skills to be able to do evaluation in the first place?…. My short answer is NO WAY!”
  • “We can’t upskill ourselves fast enough [in terms of learning evaluation].”

It was a fun interview and LearningNews did a nice job in editing it. Special thanks to Rob Clarke for the interview, organizing, and video work (along with his great team)!!

Click here to see the interview.

Dani Johnson at RedThread Research has just released a wonderful synopsis of learning evaluation models. Comprehensive, thoughtful, well-researched! It also suggests articles to read!

This work is part of an ongoing effort to research the learning-evaluation space. With research sponsored by the folks at the adroit learning-evaluation company forMetris, RedThread is looking to uncover new insights about the way we do workplace learning evaluation.

Here’s what Dani says in her summary:

“What we hoped to see in the literature were new ideas – different ways of defining impact for the different conditions we find ourselves in. And while we did see some, the majority of what we read can be described as same. Same trends and themes based on the same models with little variation.”

 

“While we do not disparage any of the great work that has been done in the area of learning measurement and evaluation, many of the models and constructs are over 50 years old, and many of the ideas are equally as old.

On the whole, the literature on learning measurement and evaluation failed to take into account that the world has shifted – from the attitudes of our employees to the tools available to develop them to the opportunities we have to measure. Many articles focused on shoe-horning many of the new challenges L&D functions face into old constructs and models.”

 

“Of the literature we reviewed, several pieces stood out to us. Each of the following authors [detailed in the summary] and their work contained information that we found useful and mind-changing. We learned from their perspectives and encourage you to do the same.”

 

I also encourage you to look at this great review! You can see the summary here.

 

 

As I preach in my workshops on how to create better learner-survey questions (for example, my Gold-Certification workshop on Performance-Focused Smile Sheets), open-ended comment questions are very powerful. Indeed, they are critical in our attempts to truly understand our learners’ perspectives.

Unfortunately, to get the most benefit from comment questions, we have to take time to read every response and reflect on the meaning of all the comments taken together. Someday AI may be able to help us parse comment-question data, but currently the technology is not ready to give us a full understanding. Nor are word clouds or other basic text-processing algorithms useful enough to provide valid insights into our data.

It’s good to take the time to analyze our comment-question data, but if there were a way to quickly get a sense of comment data, wouldn’t we consider using it? Of course!

As most of you know, I’ve been focusing a lot of my attention on learning evaluation over the last few years. While I’ve learned a lot, have been lauded by others as an evaluation thought leader, and have even created some useful innovations like LTEM, I’m still learning. Today, by filling out a survey after going to a CVS MinuteClinic to get a vaccine shot, I learned something pretty cool. Take a look.

This is a question on their survey, delivered to me right after I’d answered a comment question. It gives the survey analyzers a way to quickly categorize the comments. It DOES NOT REPLACE, and should not replace, a deeper look at the comments (for example, my comment was very specific and useful, I hope), but it does enable us to ascribe some overall meaning to the results.

Note that this is similar to what I’ve been calling a hybrid question, where we first give people a forced-choice question and then give them a comment question. The forced-choice question drives clarity, whereas the follow-up comment question enables more specificity and richness.

One warning! Adding a forced-choice question after a comment question should be seen as one tool in our toolbox. Let’s not overuse it. More pointedly, let’s use it when it is particularly appropriate.

If we’ve asked two open-ended comment questions—one asking for positive feedback and one asking for constructive criticism—we might not need a follow-up forced choice question, because we’ve already prompted respondents to give us the good and the bad.

The bottom line is that we now have two types of hybrid questions to add to our toolbox:

  1. A forced-choice question followed by a clarifying comment question.
  2. A comment question followed by a categorizing forced-choice question.
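The second hybrid type described above is easy to sketch in data terms: each response pairs a free-text comment with the respondent’s own category choice, so analysts can scan category counts before reading every comment. The categories and comments below are invented for illustration, not from the CVS survey.

```python
# Hypothetical sketch of hybrid type 2: an open comment followed by a
# forced-choice question that categorizes the comment.
from collections import defaultdict

CATEGORIES = ["Compliment", "Complaint", "Suggestion", "Other"]

# (comment, self-reported category) pairs from respondents.
responses = [
    ("The nurse explained the shot clearly.", "Compliment"),
    ("The waiting room was crowded.", "Complaint"),
    ("Offer evening appointments.", "Suggestion"),
    ("The nurse was great.", "Compliment"),
]

def group_comments(responses):
    """Bucket each comment under its self-reported category so analysts
    can see the overall pattern before reading every comment in full."""
    grouped = defaultdict(list)
    for comment, category in responses:
        grouped[category].append(comment)
    return grouped

grouped = group_comments(responses)
# Quick sense of the data: counts per category.
summary = {cat: len(grouped.get(cat, [])) for cat in CATEGORIES}
```

The counts give the quick overall read; the grouped comments themselves still get the deeper reading the post calls for.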

Freakin’ Awesome!

 

Donald Taylor, learning-industry visionary, has just come out with his annual Global Sentiment Survey asking practitioners in the field what topics are the most important right now. The thing that struck me is that the results show that data is becoming more and more important to people, especially as represented in adaptive learning through personalization, artificial intelligence, and learning analytics.

Learning analytics was the most important category for the opinion leaders represented in social media. This seems right to me, as someone who will be focused mostly on learning evaluation in 2019.

As Don said in the GoodPractice podcast with Ross Dickie and Owen Ferguson, “We don’t have to prove. We have to improve through learning analytics.”

What I love about Don Taylor’s work here is that he’s clear as sunshine about the strengths and limitations of this survey—and, most importantly, that he takes the time to explain what things mean without over-hyping or sleight of hand. It’s a really simple survey, but the results are fascinating—not necessarily about what we should be doing, but about what people in our field think we should be paying attention to. This kind of information is critical to all of us who might need to persuade our teams and stakeholders on how we can be most effective in our learning interventions.

Other findings:

  • Businessy-stuff fell in rated importance, for example, “consulting more deeply in the business,” “showing value,” and “developing the L&D function.”
  • Neuroscience/cognitive science fell in importance (most likely, I think, because some folks have been debunking the neuroscience-and-learning connections). Note: these really should not be one category, given that cognitive science, or more generally learning research, has proven value; neuroscience, not so much.
  • Mobile delivery and artificial intelligence were the two biggest gainers in terms of popularity.
  • It is very intriguing that people active on social media (perhaps thought leaders, perhaps the opinionated mob) have different views than the more general population of workplace learning professionals. There is an interesting analysis in the book and a nice discussion in the podcast mentioned above.

For those interested in Don Taylor’s work, check out his website.