CrowdConf Brainstorm page

Revision as of 23:02, 30 September 2010

Use this page to discuss the Best Practices reading we did not have time for in class, and to brainstorm questions and topics that we might present as a class at the CrowdConf Future of Work Conference next week.

  • Preserving Confidentiality in Complex Tasks. As the Best Practices document notes, some tasks require worker exposure to proprietary information. The Best Practices document mentions contracts as a way of dealing with this issue. Do we think that contractual relationships can assuage companies' fears of workers disclosing proprietary information? Does the sheer volume of workers on a given task make enforcing such an agreement impossible?
    • Could the problem potentially be solved by tying specific tasks to specific information, the disclosure of which would make the individual who divulged the info identifiable? (See the marking sketch after this list.)
    • What are the costs of drafting such complex contracts?
    • Is there a way the technology can account for this problem?
  • Worker Fairness. The Best Practices document suggests that the crowd-sourcing platform should facilitate easy payment and provide a forum for dispute resolution.
    • Could the platform have a rating system that suggested a fair rate based on the type of tasks requested? There could be a "survey" that each employer fills out before submitting the task, which would calculate a suggested rate. Perhaps it could be based on past rates, as tracked by the platform operator? (Does Amazon's "recommended" technology already do this in a different form? See the rate-suggestion sketch after this list.)
    • Could the technology facilitate a cyber dispute-resolution forum? (What if the dispute-resolution process was, in turn, crowd-sourced?!)
    • Have platforms set up features to facilitate the creation of online worker unions?
  • Feedback Loops. The Best Practices document suggests that workers and companies use a feedback mechanism in good faith.
    • Is there any way to use technology to prevent abuse of feedback systems, or at least encourage people to use the feedback system in good faith? (See the rating-smoothing sketch after this list.)
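
On the confidentiality point above, one way to make a divulger identifiable is to give each worker a uniquely marked copy of the sensitive material and match any leaked text back to its recipient. The sketch below is only illustrative: it assumes a platform-held secret key, and the function names (canary_for, mark_document, trace_leak) are invented for this example. A real scheme would hide the marker in wording or formatting rather than appending it visibly.

```python
import hashlib
import hmac
import secrets

# Hypothetical secret key held by the platform operator.
PLATFORM_KEY = secrets.token_bytes(32)

def canary_for(worker_id: str, task_id: str) -> str:
    """Derive a unique marker for one worker's copy of a task document."""
    digest = hmac.new(PLATFORM_KEY, f"{worker_id}:{task_id}".encode(), hashlib.sha256)
    return digest.hexdigest()[:8]  # short token embedded in the document

def mark_document(text: str, worker_id: str, task_id: str) -> str:
    """Append the worker-specific marker (crudely, as a visible reference
    number; a real scheme would hide it in wording or layout)."""
    return f"{text}\n[ref: {canary_for(worker_id, task_id)}]"

def trace_leak(leaked_marker: str, worker_ids: list, task_id: str):
    """If marked text leaks, recompute each worker's marker to find the source."""
    for worker_id in worker_ids:
        if canary_for(worker_id, task_id) == leaked_marker:
            return worker_id
    return None
```

Because each marker is a keyed hash of the worker and task IDs, workers cannot compute or forge one another's markers, so a recovered marker points to exactly one copy.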
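
On the suggested-rate idea, here is a minimal sketch of how a platform might compute a recommendation from past rates it has tracked: the median rate for the same task type, with a platform-wide floor. The task types, rates, suggest_rate function, and floor value are all invented for illustration, and nothing here reflects how Amazon's actual "recommended" feature works.

```python
from statistics import median

# Hypothetical history of past postings: (task_type, rate_per_task_in_dollars).
past_rates = [
    ("image_tagging", 0.05), ("image_tagging", 0.04),
    ("transcription", 0.35), ("transcription", 0.40),
    ("image_tagging", 0.06),
]

def suggest_rate(task_type, floor=0.02):
    """Suggest a per-task rate: the median of past rates for the same
    task type, never below a platform-wide floor."""
    matching = [rate for t, rate in past_rates if t == task_type]
    if not matching:
        return floor  # no history yet for this task type; fall back to the floor
    return max(median(matching), floor)

print(suggest_rate("image_tagging"))  # 0.05
print(suggest_rate("transcription"))  # 0.375
```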
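
On feedback abuse, one common dampening technique (our suggestion, not something the Best Practices document prescribes) is to smooth each score toward a neutral prior, so that a burst of retaliatory ratings moves a reputation only slowly. A minimal sketch, with invented parameter values:

```python
def smoothed_score(ratings, prior_mean=3.0, prior_weight=10):
    """Average a worker's (or employer's) 1-5 star ratings, smoothed
    toward a neutral prior. With few ratings, the prior dominates, so
    a handful of bad-faith 1-star ratings moves the score far less
    than a raw average would."""
    return (sum(ratings) + prior_mean * prior_weight) / (len(ratings) + prior_weight)

# Three retaliatory 1-star ratings: the raw average would be 1.0,
# but the smoothed score stays near the neutral prior.
print(round(smoothed_score([1, 1, 1]), 2))  # 2.54
```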