Crowdsourcing & Work Quality
- Definition: Although crowdsourcing can have many meanings, we define it here to mean breaking down large tasks into small ones that can be performed asynchronously.
- The Best Practices for crowdwork, developed last year and reposted on Class 3, classify crowdwork three ways:
First, a large group of workers may do microtasks to complete a whole project; the best-known platform in this arena is Amazon Mechanical Turk. Second, companies may use cloudwork platforms to connect with individual workers, or a small group of workers, who then complete larger jobs (e.g., Elance and oDesk). Finally, a company may run “contests,” where numerous workers complete a task and only the speediest or best worker is paid (e.g., InnoCentive and Worth1000). In some contests, the company commits to picking at least one winner; in others, there is no such guarantee.
We are focusing on the quality of crowdsourced work.
1. How do concerns of reputation and identity play into crowdsourced work quality?
- some tasks where workers want their reputation known, others where they do not
- a phone card/coupon system (one possible reading is sketched below)
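One way reputation and anonymity can coexist, and one possible reading of the phone card/coupon idea above, is a persistent pseudonymous token that carries a public track record without revealing who the worker is. A toy sketch, with all names and data structures assumed for illustration:

 import secrets
 class PseudonymousReputation:
     """Toy registry: workers are known only by random tokens,
     but each token carries a public approval record."""
     def __init__(self):
         self._records = {}  # token -> [approved, total]
     def new_worker(self):
         token = secrets.token_hex(8)  # stands in for a phone-card/coupon-like ID
         self._records[token] = [0, 0]
         return token
     def record(self, token, approved):
         rec = self._records[token]
         rec[0] += int(approved)
         rec[1] += 1
     def score(self, token):
         approved, total = self._records[token]
         return approved / total if total else None
 # Example: a worker builds a visible track record under an anonymous token.
 reg = PseudonymousReputation()
 w = reg.new_worker()
 reg.record(w, True); reg.record(w, True); reg.record(w, False)
 print(round(reg.score(w), 2))  # -> 0.67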
2. Can we ensure work quality using automated mechanisms?
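One common automated mechanism is redundancy: give the same microtask to several workers and accept an answer only when enough of them agree. A minimal sketch, with the function name, threshold, and data assumed for illustration rather than drawn from any platform's API:

 from collections import Counter
 def majority_label(answers, min_agreement=0.6):
     """Aggregate redundant answers to one microtask.
     Returns the majority answer if enough workers agree,
     otherwise None so the task can be flagged for review."""
     if not answers:
         return None
     answer, votes = Counter(answers).most_common(1)[0]
     return answer if votes / len(answers) >= min_agreement else None
 # Example: five workers label the same item.
 print(majority_label(["cat", "cat", "dog", "cat", "cat"]))  # -> "cat"
 print(majority_label(["cat", "dog"]))                       # -> None (no consensus)

Gold-standard questions (items with known answers mixed into the task stream) rely on the same idea of checking agreement automatically rather than reviewing every answer by hand.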
3. Can we enhance work quality using a targeting system?
- Amazon-style recommendations, eBay-style feedback, Mechanical Turk?, differentiated tasks? (sketched below)
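A minimal sketch of what an Amazon/eBay-style targeting system might do for crowdwork: route open tasks toward workers with a strong track record on similar tasks. The data structures and accuracy threshold here are assumptions for illustration, not any platform's actual API:

 def recommend_tasks(worker_history, open_tasks, min_accuracy=0.8):
     """Suggest open tasks in categories the worker has done well on.
     worker_history: dict of category -> (approved, total) counts
     open_tasks:     list of dicts, each with a "category" key"""
     strong = {
         category
         for category, (approved, total) in worker_history.items()
         if total and approved / total >= min_accuracy
     }
     return [task for task in open_tasks if task["category"] in strong]
 # Example: a worker with a strong transcription record is shown only those tasks.
 history = {"transcription": (18, 20), "translation": (3, 10)}
 tasks = [{"id": 1, "category": "transcription"}, {"id": 2, "category": "translation"}]
 print(recommend_tasks(history, tasks))  # -> [{'id': 1, 'category': 'transcription'}]

Differentiating tasks could build on the same idea, for example by reserving harder tasks for workers whose history clears a higher accuracy bar.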