Recently, the New York Times Public Editor wrote an article on polling. The article skewered many current practices. It was also educational for any of us who want to truly understand the polling data we hear in the news.

Sadly, the article made me think of the pathetic data that runs around our field—the training, learning, development, e-learning field. I’ve already mentioned some of the data being passed off as learning research.

But there is another wide swath of data that we should be very skeptical about—the data some "research" firms and trade organizations are peddling as industry data. The data are typically gathered by sending surveys to an unrepresentative sample of companies, by having only a fraction of respondents complete the survey, by gathering opinions rather than measurements, and by boisterously proclaiming that the data tell us what the industry best practices are. Here are some of the problems with this farce:

  1. Biased sampling of organizations.
  2. No control for the biasing effects of non-respondents.
  3. Assuming that opinion is fact.
  4. Relying on the averaging of opinions.
  5. Assuming that the average respondents have the best insights.
  6. The arrogance and lack of caveats in the reporting.
  7. Year-to-year comparisons with different companies in each year’s sample.
  8. Additional biasing due to fraud and corruption, as when these "research" organizations tilt the best-practice results to their paid customers.
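To see why the first two problems matter, here is a small illustrative simulation (all numbers are hypothetical, not real industry data) showing how non-response bias alone can inflate a reported average, assuming companies with more to brag about are more likely to answer the survey:

```python
import random

random.seed(0)

# Hypothetical population: 10,000 companies with a true mean
# training budget of about $1,000 per employee.
population = [random.gauss(1000, 300) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# Non-response bias: suppose companies with bigger budgets are
# prouder of them, and thus more likely to return the survey.
respondents = [b for b in population if random.random() < min(1.0, b / 2000)]
survey_mean = sum(respondents) / len(respondents)

print(f"True mean budget:     ${true_mean:,.0f}")
print(f"Surveyed mean budget: ${survey_mean:,.0f}")  # inflated above the truth
```

The survey never lies about any single respondent, yet its average overstates the truth, because who answers is not random. No amount of averaging over a biased sample fixes that.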