
One of the most common questions I get when I speak about the Performance-Focused Smile-Sheet approach (see the book’s website at SmileSheets.com) is “What can be done to get higher response rates from my smile sheets?”

Of course, people also refer to smile sheets as evals, level 1’s, happy sheets, hot or warm evaluations, response forms, reaction forms, etc. The term covers both paper-and-pencil forms and online surveys. Indeed, as smile sheets move online, more and more people are finding that online surveys get a much lower response rate than in-classroom paper surveys.

Before I give you my list for how to get a higher response rate, let me blow this up a bit. The thing is, while we want high response rates, there’s something much more important: response relevance and precision. We want questions that relate to learning effectiveness, not just learning reputation and learner satisfaction. We also want learners to be able to answer the questions knowledgeably and to give our questions their full attention.

If we have bad questions (ones that use Likert-like or numeric scales, for example), it won’t matter that we have high response rates. In this post, I’m NOT going to focus on how to write better questions. Instead, I’m tackling how we can motivate our learners to give our questions more of their full attention, thus increasing the precision of their responses while also increasing our response rates.

How to Get Better Responses and Higher Response Rates

  1. Ask with enthusiasm, while also explaining the benefits.
  2. Have a trusted person make the request (often an instructor who our learners have bonded with).
  3. Mention the coming smile sheet early in the learning (and more than once) so that learners know it is an integral part of the learning, not just an add-on.
  4. While mentioning the smile sheet, let folks know what you’ve learned from previous smile sheets and what you’ve changed based on the feedback.
  5. Tell learners what you’ll do with the data, and how you’ll let them know the results of their feedback.
  6. Highlight the benefits to the instructor, to the instructional designers, and to the organization. Those who ask can mention how they’ve benefited in the past from smile sheet results.
  7. Acknowledge the effort that they — your learners — will be making, maybe even commiserating with them that you know how hard it can be to give their full attention when it’s the end of the day or when they are back to work.
  8. Put the time devoted to the survey in perspective, for example, “We spent 7 hours today in learning, that’s 420 minutes, and now we’re asking you for 10 more minutes.”
  9. Assure your learners that the data will be confidential and that responses are aggregated so that an individual’s answers are never shared.
  10. Let your learners know the percentage of people like them who typically complete the survey (caveat: only do this if that percentage is relatively high).
  11. Use more distinctive answer choices. Avoid Likert-like answer choices and numerical scales — because learners instinctively know they aren’t that useful.
  12. Ask more meaningful questions. Use questions that learners can answer with confidence. Ask questions that focus on meaningful information. Avoid obviously biased questions — as these may alienate your learners.

How to Get Better Responses and Higher Response Rates on DELAYED SMILE SHEETS

Sometimes we’ll want to survey our learners well after a learning event, for example three to five weeks later. Delayed smile sheets are perfectly positioned to find out how relevant the learning is to the actual work and to our learners’ post-learning application efforts. Unfortunately, prompting action (that is, getting learners to engage with our delayed smile sheets) can be particularly difficult when we ask for this favor well after the learning. Still, there are some things we can do, in addition to the list above, that can make a difference.

  1. Tell learners what you learned from the end-of-learning smile sheet they previously completed.
  2. Ask the instructor who bonded with them to send the request (instead of an unknown person from the learning unit).
  3. Send multiple requests, preferably using a mechanism that sends them only to those who still need to complete the survey (see the sketch after this list).
  4. Have the course officially end sometime AFTER the delayed smile sheet is completed, even if that is largely just a perception. Note that multiple-event learning experiences lend themselves to this approach, whereas single-event learning experiences do not.
  5. Share with your learners a small portion of the preliminary data from the delayed smile sheet. “Already, 46% of your fellow learners have completed the survey, with some intriguing tentative results. Indeed, it looks like the most relevant topic as rated by your fellow learners is… and the least relevant is…”
  6. Share with them the names or job titles of some of the people who have completed the survey already.
  7. Share with them the percentage of folks from their own unit who have responded already, or share a comparison across units.
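
As a concrete illustration of item 3, here is a minimal sketch of such a targeted reminder, assuming a hypothetical setup in which your survey tool can report who has already responded. The addresses and the send_reminder() helper are invented for illustration.

```python
# Hypothetical example: remind only learners who haven't yet responded.
invitees = {"ana@example.com", "ben@example.com", "carla@example.com"}
completed = {"ben@example.com"}  # pulled from the survey tool's results

def send_reminder(email: str) -> None:
    # Placeholder for your mail or LMS integration.
    print(f"Reminder sent to {email}")

# Nudge only the learners who still need to complete the delayed smile sheet.
for email in sorted(invitees - completed):
    send_reminder(email)
```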

What about INCENTIVES?

When I ask audiences for their ideas for improving responses and increasing response rates, they often mention some sort of incentive, usually based on a lottery or raffle. “If you complete the survey, your name will be entered for a chance to win the latest tech gadget, a book, time off, lunch with an executive, etc.”

I’m a skeptic. I’m open to being wrong, but I’m still skeptical about the cost/benefit calculation. Certainly, for some audiences an incentive will increase completion rates. And for some audiences, the harms that come with incentives may be worth bearing.

What harms, you might ask? When we provide an external incentive, we may be sending a message to some learners that we know the task has no redeeming value or is tedious or difficult. People who see their own motivation as caused by the external incentive are potentially less likely to seriously engage with our questions, producing bad data. And the effect isn’t limited to the current smile sheet: when we incentivize people today, they may be less willing to answer our questions next time. They may also be pushed into believing that smile sheets are difficult, worthless, or worse.

Ideally, we’d like our learners to want to provide us with data, to see answering our questions as a worthy and helpful exercise, one that is valuable to them, to us, and to our organization. Incentives push against this vision.

 

A recent research review (by Paul L. Morgan, George Farkas, and Steve Maczuga) finds that teacher-directed mathematics instruction in first grade is superior to other methods for students with “math difficulties.” Specifically, routine practice and drill was more effective than the use of manipulatives, calculators, music, or movement for students with math difficulties.

For students without math difficulties, teacher-directed and student-centered approaches performed about the same.

In the words of the researchers:

In sum, teacher-directed activities were associated with greater achievement by both MD and non-MD students, and student-centered activities were associated with greater achievement only by non-MD students. Activities emphasizing manipulatives/calculators or movement/music to learn mathematics had no observed positive association with mathematics achievement.

For students without MD, more frequent use of either teacher-directed or student-centered instructional practices was associated with achievement gains. In contrast, more frequent use of manipulatives/calculator or movement/music activities was not associated with significant gains for any of the groups.

Interestingly, classes with higher proportions of students with math difficulties were actually less likely to be taught with teacher-directed methods — the very methods that would be most helpful!

Will’s Reflection (for both Education and Training)

These findings fit with a substantial body of research showing that learners who are novices in a topic area benefit most from highly directed instructional activities. They will NOT benefit from discovery learning, problem-based learning, and similar non-directive learning events.

See for example:

  • Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75-86.
  • Mayer, R. E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? American Psychologist, 59(1), 14-19.

As a research translator, I look for ways to make complicated research findings usable for practitioners. One model that seems to be helpful is to divide learning activities into two phases:

  1. Early in Learning (When learners are new to a topic, or the topic is very complex)
    The goal here is to help the learners UNDERSTAND the content. Here we provide lots of learning support, including repetitions, useful metaphors, worked examples, and immediate feedback.
  2. Later in Learning (When learners are experienced with a topic, or when the topic is simple)
    The goal here is to help the learners REMEMBER the content or DEEPEN their learning. To support remembering, we provide lots of retrieval practice, preferably set in realistic situations the learners will likely encounter, where they can use what they learned. We provide delayed feedback. We space repetitions over time, varying the background context while keeping the learning nugget the same. To deepen learning, we engage contingencies, we enable learners to explore the topic space on their own, and we add additional knowledge.

What Elementary Mathematics Teachers Should Stop Doing

Elementary-school teachers should stop assuming that drill-and-practice is counterproductive. They should create lesson plans that guide their learners in understanding the concepts to be learned. They should limit the use of manipulatives, calculators, music, and movement. Ideas about “arts integration” should be pushed to the back burner. This doesn’t mean that teachers should NEVER use these other methods, but such methods should be reserved for occasional, short moments of variety. Spending hours using manipulatives, for example, is certainly harmful in comparison with more teacher-directed activities.

 

Here is the comment I sent to the NY Times in response to their focus on a supposed research study that purported to show that gifted kids are being underserved.

My comments are a little over the top, but I still think this is worth printing because it demonstrates the need for good research savvy and shows that even the most respected news organizations can make serious research-assessment mistakes.

Egads!! Why is the New York Times giving so much "above-the-fold" visibility to a poorly-conceived research study funded by a conservative think tank with obvious biases?

Why isn't at least one of your contributors a research-savvy person who could comment on the soundness of the research? Instead, your contributors assume the research is sound.

Did you notice that the references in the original research report were not from top-tier refereed scientific journals?

In the original article from the Thomas Fordham Institute (a conservative-funded enterprise), the authors try to wash away criticisms about regression to the mean and test variability, but this token gesture toward the obvious, most damaging, and most valid criticisms is not good enough.

If you took the top 10% of players in the baseball draft, the football draft, any company's onboarding class, or any randomly selected group of maple trees, a large percentage of the top performers would not be top performers a year or two later. Damn, ask any baseball scout whether picking out the best prospects is a sure thing. It's not!

And, in the cases I mentioned above, the measures are more objective than an educational test. An educational test has much higher variability, which would make even more top performers leak out of the top ranks.
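
To see how much churn regression to the mean alone produces, here is a minimal Monte Carlo sketch in Python. It assumes a simple model (observed score = stable ability + independent test noise), and its numbers are illustrative, not drawn from the Fordham report.

```python
# Minimal sketch: regression to the mean in "top 10%" selection.
# Assumed model: observed score = stable ability + fresh test noise.
import random

random.seed(42)

N = 10_000        # simulated students
NOISE_SD = 0.5    # test noise, relative to an ability SD of 1.0

abilities = [random.gauss(0, 1) for _ in range(N)]

def administer_test(noise_sd):
    """One administration of the test: ability plus fresh noise."""
    return [a + random.gauss(0, noise_sd) for a in abilities]

def top_decile(scores):
    """Indices of the students scoring in the top 10%."""
    cutoff = sorted(scores, reverse=True)[N // 10 - 1]
    return {i for i, s in enumerate(scores) if s >= cutoff}

year1 = top_decile(administer_test(NOISE_SD))
year2 = top_decile(administer_test(NOISE_SD))

overlap = len(year1 & year2) / len(year1)
print(f"Year-1 top scorers still in the top 10% a year later: {overlap:.0%}")
```

With these illustrative settings, only about 55–60% of the year-1 "top" students remain in the top decile even though no one's underlying ability changed; a noisier test (a larger NOISE_SD) pushes that share lower still.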

NY Times, you should be embarrassed to have published these responses to this non-study. Seriously, don't you have any research-savvy people left on your staff?

We have scientific journals because the research is vetted by experts.