I work in the learning-and-performance field. I watched the first two presidential debates. They were problematic to say the least. So, I thought to myself, “Hey Will, you’re a learning-and-performance professional. What might be done to improve them?” Actually, it was more like, “Damn, somebody’s got to make these debates better. They’re not creating the best outcomes. They’re not really educating us on the important issues for the presidential election. Our debates are not helping the democratic process. Indeed, they’re probably creating harm.”

Picture above by Donkey Hotey, 2016, Creative Commons on Flickr

In looking at the debate process, there are several leverage points, including the following:

  1. The debate format.
  2. The questions asked.
  3. The candidates’ responses.
  4. The citizenry’s cognitive processing of the debate.
  5. The news media’s and social media’s messaging about the debate.
  6. The citizenry’s cognitive processing of the media messaging.

For most of these, we — the citizenry — have very little direct leverage to influence the process. Together, we can influence the social-media messages, but as individuals, we have very little influence there. Tears in the ocean.

But certainly the media and the candidates are acting with intentionality to influence the citizenry. Candidates act bold and confident because they know that people, in general, are suckers for those who display confidence. Candidates use certain words to influence — even if those words are too general to be meaningful (words like freedom, equality, strength, diversity). Candidates avoid talking about complicated issues, because they think we can’t understand. Candidates speak in terms of black and white, good and bad, either this or that — though the world is shaded in sepia tones — because they think we need certainty.

They act because they think we as citizens will react in predictable ways. But what if we changed and improved our responses to the debates? What if we as citizens improved our cognitive processing and instead of having knee-jerk reactions, we improved our thinking? What if instead of being persuaded by irrelevancies, we were persuaded by a clear understanding of the issues? Is there anything we can do to improve and deepen our thinking about the debates?

Yes! We can do a better job! Not a perfect job, not a brilliant job, but we can improve and deepen our thinking if we take some time to think about what we care about and what matters.

I’m sure my efforts here will be inadequate, but I offer a one-page checklist to spur your thinking. Take a look, try it out at Wednesday’s final presidential debate, and let me know how you’d improve it. Tell me what works and what doesn’t. Better yet, let me know what else we can do to improve the results of the debates.

 

Try the Presidential Debates Insight Checklist

 

Read My Article on LinkedIn

 

 

WHAT ELSE? WHAT DO YOU THINK?

Okay, I really feel like this effort is too inconsequential and too unlikely to make a difference, so I’d love to hear your ideas:

  • What’s wrong with this approach?
  • What could make it better?
  • What else could be done to improve the outcome of the debates — that is, to create a better-informed citizenry?
  • Are there other tools in our learning-and-performance toolbox we might use?
  • What could make our debates great again? (sorry, couldn’t resist)

I was asked the following question today by a learning professional at a large company:

It will come as no surprise that we create a great deal of mandatory/regulatory-required eLearning here. All of these eLearning interventions have a final assessment that the learner must pass at 80%, in addition to viewing all the course content, to be marked as complete. The question is around feedback for those assessment questions.

  • One faction says no feedback at all, just a score at the end and the opportunity to revisit any section of the course before retaking the assessment.
  • Another faction says to tell them correct or incorrect after they submit their answer for each question.
  • And a third faction argues that we should give them detailed feedback beyond just correct/incorrect for each question. 

Which approach do you recommend? 

 

 

Here is what I wrote in response:

It all depends on what you’re trying to accomplish…

If this is a high-stakes assessment, you may want to protect the integrity of your questions. In such a case, you’d have a large pool of questions and you’d protect the answer choices by not divulging them. You might even proctor the assessments, for example, by having respondents turn on their web cameras and submit their video image along with the test results. Also, you wouldn’t give feedback, because you’d be concerned that students would share the questions and answers.

If this is largely a test to give feedback to the learners—and to support them in remembering and performance—you’d not only give them detailed feedback, but you’d also retest them after a few days or more to reinforce their learning. You might even follow up to see how well they’ve been able to apply what they’ve learned on the job.

We can imagine a continuum between these two points where you might seek a balance between a focus on learning and a focus on assessment.

This may be a question for the lawyers, not just for us as learning professionals. If these courses are being provided to meet certain legal requirements, it may be most important to consider what might happen in the legal domain. Personally, I think the law may lag behind the learning science. Based on talking with clients over many years, it seems that lawyers and regulators often recommend learning designs and assessments that do NOT make sense from a learning standpoint. For example, lawyers tell companies that teaching a compliance topic once a year is sufficient — when we know that people forget and may need to be reminded.

In the learning-assessment domain, lawyers and regulators may say that it is acceptable to provide a quiz with no feedback. They are focused on having a defensible assessment. This may be the advice you should follow given current laws and regulations. However, it seems ultimately indefensible from a learning standpoint. Couldn’t a litigant argue that the organization did NOT do everything it could to support employees in learning — if the organization didn’t provide feedback on quiz questions? This seems a pretty straightforward argument — and one that I would testify to in a court of law (if I were asked).

By the way, how do you know 80% is the right cutoff point? Most people use an arbitrary cutoff point, but then you don’t really know what a passing score actually means.

Also, are your questions good questions? Do they ask people to make decisions set in realistic scenarios? Do they provide plausible answer choices (even for incorrect choices)? Are they focused on high-priority information?

Do the questions and the cutoff point truly differentiate between competence and lack of competence?

Are the questions asked after a substantial delay — so that you know you are measuring the learners’ ability to remember?

Bottom line: Decision-making around learning assessments is more complicated than it looks.

Note: I am available to help organizations sort this out… yet, as one may ascertain from my answer here, there are no clear recipes. It comes down to judgment and goals.

If your goal is learning, you probably should provide feedback and provide a delayed follow-up test. You should also use realistic scenario-based questions, not low-level knowledge questions.

If your goal is assessment, you probably should create a large pool of questions, proctor the testing, and withhold feedback.
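For readers who like to see the two ends of that continuum laid out concretely, here is a minimal, hypothetical sketch in Python. The AssessmentConfig class and its field names are my own illustrative assumptions — they don’t refer to any real LMS, authoring tool, or standard — and the numbers are placeholders, not recommendations.

```python
# Hypothetical sketch: the two ends of the assessment continuum described above.
# AssessmentConfig and all field names are invented for illustration only;
# they do not correspond to any real LMS, authoring tool, or standard.
from dataclasses import dataclass

@dataclass
class AssessmentConfig:
    question_pool_size: int    # larger pools make shared answers less damaging
    feedback: str              # "none", "correct_incorrect", or "detailed"
    proctored: bool            # e.g., webcam verification for high-stakes testing
    delayed_retest_days: int   # 0 = no follow-up test
    scenario_based: bool       # realistic decision-making items vs. recall items

# Assessment-focused end: protect the integrity of the questions.
assessment_focused = AssessmentConfig(
    question_pool_size=200,
    feedback="none",
    proctored=True,
    delayed_retest_days=0,
    scenario_based=True,
)

# Learning-focused end: give detailed feedback and retest after a delay.
learning_focused = AssessmentConfig(
    question_pool_size=40,
    feedback="detailed",
    proctored=False,
    delayed_retest_days=14,    # placeholder for "a few days or more"
    scenario_based=True,
)
```

Most real programs will land somewhere between these two configurations, which is exactly why the decision comes down to judgment about your goals rather than a fixed recipe.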