The eLearning Guild has asked me to lead a discussion at their upcoming conference (Guild Annual Gathering) on how we might think about evaluating Learning 2.0 interventions.
I’d welcome your examples and insights.
For those who don’t know what "Learning 2.0" means, I’ll forgo my cynical answer. Others describe Learning 2.0 as learning that enables learners to create information themselves, in contrast to the stereotypical traditional model in which the teacher teaches and the learner absorbs the information (or the e-learning delivers content and the learner absorbs it).
So, Learning 2.0 is said to include such things as Wikis, Blogs, Learner Portfolios, Media Development and Sharing by Learners, Informal Learning, etc.
Here are a few things I’m contemplating:
- Traditional metrics are certainly appropriate, because, bottom line, we want to know whether Learning 2.0 interventions produce learning, enable on-the-job performance, and produce desirable individual and organizational outcomes.
- Comparisons to other methods of learning are also needed. It is especially important to see whether Learning 2.0 methods (on the positive side) create more elaborate mental models, produce more satisfaction, and so on, or (on the negative side) waste time, create unproductive distractions, or communicate incorrect or inappropriate information.
- We need to measure not only what HAS been learned, but also what MAY BE LEARNED IN THE FUTURE. It could be, for instance, that Learning 2.0 is inefficient for learning anything specific, but enables faster future learning in the same area of inquiry.
Here’s where I can use your help. Let me know if you know of either of the following:
- Rigorous research studies on Learning 2.0 interventions
- Anecdotal evidence on Learning 2.0 interventions
Better yet, join me at the eLearning Guild’s Annual Conference, specifically the Learning Management Colloquium, and discuss this in real time.