I’ve dabbled a little with crowdsourcing for my own projects, but never used it as a primary research tool. It isn’t hard to see how major crowdsourcing platforms like Mechanical Turk could be used for quick, cost-effective behavioural research (potential for bias notwithstanding!). So the following study by crowdsourcing firm CrowdFlower, conducted on its own worker base, was interesting in itself. That it touched on another interest of mine, human bias, made it even more intriguing :)
The key takeaway: over 75% of contributors overestimated their ability to answer multiple-choice questions correctly. The Dunning-Kruger effect is alive and well!