I’m a strong advocate for assessment. If we don’t assess our learning interventions, (a) we as instructional designers don’t learn ourselves, (b) we don’t have valid data to give ourselves feedback, and (c) we can’t possibly improve our learning designs.
If we’re going to assess our learning interventions validly, we have to understand human learning and guard against biasing our results. Here are some of the pitfalls to watch out for:
- Testing our learners only when information is top-of-mind, rather than after a realistic delay.
- Testing our learners only in the learning context, rather than in the context where they’ll actually perform.
- Testing our learners unfairly with biasing pretests.
- Testing our learners with stupid, irrelevant questions.
- Using Level 1 smile-sheet data exclusively.
- Measuring with post-hoc metrics.
To help folks in the field avoid some of these pitfalls, I’ve developed the Fair-Assessment Quick-Audit courtesy of LearningAudit.com and Work-Learning Research.
Some of these criteria are critical, especially because we in the field tend to do the exact opposite of what is fair and valid.
See this recent post, which describes the mistakes we’re making in assessment.