I’ve just completed a new research-to-practice white paper. As far as I can tell, it is the first work on learning measurement (assessment and evaluation) that actually takes human learning into consideration. I’d like to thank Questionmark for agreeing to support this work.
Words from the paper’s introduction:
In writing this report on using fundamental learning research to inform assessment design, I am combining two of my passions—learning and the measurement of learning. As an experienced learner and learning designer, I have come to believe that those of us responsible for designing, developing, and delivering learning interventions are often left in the dark about our own successes and failures. The measurement techniques we use simply do not provide us with valid feedback about our own performance.
The traditional model of assessment relies on end-of-learning assessments given to learners in the same context in which they learned. This model is seriously flawed, especially because it tells us little about how well our learning interventions prepare learners to retrieve information in future situations, which is the ultimate goal of training and education. By failing to measure our performance in this regard, we miss opportunities to give ourselves valid feedback. We are also likely failing our institutions and our learners, because without such feedback we cannot build a practice of continuous improvement that maximizes learning outcomes.
This report is designed to help you improve your assessments in these respects. I certainly won’t claim to have all the answers, nor do I think it is easy to create the perfect assessment, but I do believe very strongly that all of us can improve our assessments substantially, and by doing so improve the practice of education and training.