CrowdConf Brainstorm page
Use this page to discuss the best practices reading we did not have time for in class, and brainstorm questions and topics that we might present as a class at the CrowdConf Future of Work Conference next week.
Crowdsourcing (Human Computing) is one of the most promising technologies. It has already been applied successfully in many different areas (examples: X-Prize, 99designs, Amazon Mechanical Turk), and we believe it has huge potential for the future: it could significantly shape and change the way the labor market works. That said, it also creates challenges that need to be addressed. We would love to hear your thoughts on how technology could be leveraged to solve some of these challenges:
- (1) Preserving Confidentiality in Complex Tasks. As the best practices document notes, some tasks require worker exposure to proprietary information. The Best Practices mention contracts as a way of dealing with this issue. Do we think that contractual relationships can assuage companies' fears of workers disclosing proprietary information? Does the sheer volume (and potential geographical spread) of workers on a given task make enforcing such an agreement impossible?
- Is there a way the technology can account for this problem?
- Could the problem potentially be solved by tying specific tasks to specific pieces of information, so that any disclosure would make the individual who divulged the info identifiable? (See the sketch after this list.)
- What are the costs of drafting such complex contracts?
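To make the "traceable disclosure" idea above concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration (the function names, the platform-side secret, and the idea that the platform records which worker received which copy); it is not a description of how any existing platform works. The platform gives each worker an individually watermarked copy of the confidential material, so that if text leaks, the embedded token points back to whoever received that copy.

```python
# Hypothetical sketch only -- names, the secret, and the platform behavior are assumptions.
import hashlib
import hmac

PLATFORM_SECRET = b"held-only-by-the-platform-operator"

def watermark_for(worker_id: str, task_id: str) -> str:
    """Derive a short, worker-specific token to embed in that worker's copy."""
    message = f"{worker_id}:{task_id}".encode()
    return hmac.new(PLATFORM_SECRET, message, hashlib.sha256).hexdigest()[:12]

def assign_task(worker_id: str, task_id: str, document: str) -> str:
    """Return the worker's individually watermarked copy of the confidential document."""
    return f"{document}\n[ref:{watermark_for(worker_id, task_id)}]"

def identify_leaker(leaked_token: str, task_id: str, worker_ids: list[str]) -> str | None:
    """If a watermarked copy surfaces publicly, match its token back to a worker."""
    for worker_id in worker_ids:
        if watermark_for(worker_id, task_id) == leaked_token:
            return worker_id
    return None
```

The obvious cost is that every worker's copy of the material has to differ in some recoverable way, which may not be feasible for every kind of task, and it deters rather than prevents disclosure.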
- (2) Feedback Mechanisms and Dispute Resolution. When there is little or no direct contact between employers and employees and when tasks are completed on a one-off basis, it can be tough to encourage fair feedback or to verify a potential worker's competence in advance. Workers themselves face portability problems; a good rating on Mechanical Turk doesn't necessarily carry over to other crowdsourcing sites or offline careers.
- Could the technology facilitate a cyber dispute-resolution forum? (What if the dispute-resolution process were, in turn, crowdsourced?!)
- Could the platform have a rating system that suggested a fair rate based on the type of task requested? There could be a "survey" that each employer fills out before submitting the task, which would calculate a suggested rate. Perhaps it could be based on past rates, as tracked by the platform operator? (Does Amazon's "recommended" technology already do this in a different form?) See the sketch after this list.
- Is there any way to use technology to prevent abuse of feedback systems, or at least encourage people to use the feedback system in good faith?
- Have platforms set up features to facilitate the creation of online worker unions? (See SECTION BELOW for more questions on Online Worker Unions)
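On the suggested-rate idea above, here is a minimal sketch in Python of what the employer-facing "survey" could compute, assuming (hypothetically) that the platform already records what past tasks of each type were paid. The task types, rates, and function names are all made up for illustration.

```python
# Hypothetical sketch: suggest a budget from the platform's own record of past rates.
from statistics import median

# Assumed platform-tracked history: (task_type, hours_estimate, paid_rate_per_hour)
PAST_TASKS = [
    ("transcription", 2.0, 9.50),
    ("transcription", 1.5, 10.00),
    ("image_tagging", 0.5, 8.00),
    ("translation", 3.0, 14.00),
]

def suggested_rate(task_type: str, default_rate: float = 10.0) -> float:
    """Median hourly rate paid for past tasks of this type, or a default if none exist."""
    rates = [rate for t, _, rate in PAST_TASKS if t == task_type]
    return median(rates) if rates else default_rate

def suggested_budget(task_type: str, hours_estimate: float) -> float:
    """Budget suggestion shown to an employer filling out the intake survey."""
    return round(suggested_rate(task_type) * hours_estimate, 2)

print(suggested_budget("transcription", 2.5))  # 24.38 with the sample data above
```

A median of past rates is just one possible anchor; the interesting design question is whether the suggestion nudges rates up toward fairness or simply entrenches whatever low rates the market has already settled on.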
- (3) Disclosure. The anonymity of cyberspace, and the possibility of dividing a large project into a large number of small tasks so that the ultimate product is unidentifiable, raise a number of ethical concerns. Have companies, clients, and platforms alike explored setting up or mandating an ethics commission to investigate these concerns? What about a voluntary code of conduct, created and agreed on by the industry as a quality management system, to prevent black sheep from ruining the reputation of the entire industry in case of misconduct and to serve as a preemptive action toward governmental regulation? How do you prevent a privately run "Manhattan Project" implemented through crowdsourcing and sold to the highest bidder?
Follow-Up Questions / Further Discussion Points
- Online Worker Unions. Crowdsourcing's success is dependent on finding ways to engage its labor pool, whether it be through offering money or gamesque points. However, as mentioned in class and in the best practices document, there are many ways for these laborers to become dissatisfied with their work, whether it be through a lack of transparency, stress, low wages, etc. Is there a potential for a crowdsourcing labor movement in response to these dissatisfactions? As an inherently digital workforce, these individuals' attempts to share discontents and act upon them are facilitated by their familiarity with and access to online communities. However, how far will this unity go? Do you feel that workers will only offer critiques of certain employers to others or could there be the formation of unions and similar entities in the crowdsourcing world?
- Overlap of legal frameworks. Some countries have a state pension fund that is financed by a tax deducted from a worker's salary. How are these legal requirements adhered to in the realm of crowdsourcing? How is the location/jurisdiction of the worker determined? If the company's location is chosen, what measures are taken to ensure the worker has access to that legal system?
- Compensation. Crowdsourcing appears to rely on monetary compensation, a gamelike points system, or personal gratification to motivate people to participate in these tasks. Which of these compensation forms is the most effective in ensuring a large labor pool and the best results for employers? Which (if any) of these forms will be the most prominent system of compensation in the future and which do you think would be the most ideal compensation structure for crowdsourcing in the future?
- Mobile Online Devices. Mobile and closed platforms with constant connections to the Internet have been supplanting sedentary workplaces in popularity. How has crowdsourcing taken advantage of this change, or has it struggled to do so? What advantages/challenges do these mobile devices offer workers, employers, and crowdsourcing agencies?
- Recommendation System / Performance Tracking. Sharing information about workers as suggested in the Best Practices document seems a bit invasive for my taste, and perhaps something would have to be written into workers' contracts to explicitly allow this type of information sharing? (I'm not exactly sure what law this would implicate, but I seem to remember that when a potential employer calls a jobseeker's former employers, those employers can only confirm that the person worked there, but can't reveal performance evaluations, etc.) Perhaps it's just me, but I'd feel more comfortable if companies enabled performance-tracking software but didn't go so far as to share the data with all other similar companies. (Question from Jenny)
(Sorry, forgot to log in, this is Erin) OK, so to keep in mind what our goal is: we're supposed to put together some sort of agenda to talk about with the people who think about this day-in and day-out, and we have about half an hour? So it seems like we should try to focus in on a particularly compelling angle. The list above is good, but can we prioritize? I really liked the point made in class last week that if we can identify some way for interesting technology to "fix" the "problems" we see arising out of crowdsourcing, we'll have a much more receptive audience. Nothing jumps out at me from any particular section of the Best Practices document, but if we combine some of it, maybe we can come up with something interesting?
- Maybe combining some of the aspects of portability and reviews with the identity movement more generally would be interesting?
- Frankly a lot of the best practices aren't super interesting in terms of the required technology — is there some other way to get them excited about a particular angle on something?
- Is there a way to frame a problem that we're particularly concerned about that will speak to them? Don't mean to make this an "us-against-them" thing — but the way that technologists think about technology is a little different from the way that lawyers do, so we want to be able to frame the issue in a way that will resonate with the audience...
- What about praising the technology (maybe have a few specific examples), but then asking them if they've met any resistance from, or thought about, any of the potential actors who may block or alter the technology's use? Have you (the technologist) talked with local/state/federal government? Have you discussed potential roadblocks with companies that will implement the technology?
Jenny here: I've been reading some blog posts on crowdsourcing, and one comment from a scientist (found in the comments section here: http://money.usnews.com/money/blogs/outside-voices-small-business/2009/01/27/using-social-media-and-crowd-sourcing-for-quick-and-simple-market-research ) got me thinking that scientific research and development could suffer if companies move from hiring a dedicated team of scientists to farming all of their scientific problems out to a crowdsourcing lottery payment system (ie, first one to do this gets all of this money; the others get nothing). Honestly, worst case scenario, we'd have even fewer people going into sophisticated scientific fields than we do now, because there wouldn't be any guarantee of a stable living, and I wonder if this could really hinder the development of solutions to scientific problems or if it would limit the scientific fields to scientists who are business-savvy enough to be connected to venture capitalists, etc. Either way, the outcome could be scary.
I'd be interested to hear from crowdsourcing experts about how they think crowdsourcing scientific problems affects the quality of scientific research, and if there could be any safeguards implemented to prevent the aforementioned problems from occurring (ie -- could the crowdsourcing community fund a dedicated pool of scientists, with extra prizes going to those who successfully complete R&D tasks, or would this go against the very core of the crowdsourcing movement?)
(Heather): I'm not sure this is a novel problem in the crowdsourcing context. I've never worked in a lab or been involved in research, but this concern about stable living and a consistent funding source seems to be pretty common (at least in the academic context, where it seems like a lot of time and energy is spent chasing down grants and competing for funding). Obviously the decentralization of oDesk and similar websites exacerbates the problem by breaking down competition into individual tasks and individual workers instead of across entire projects or teams, but it doesn't seem to me to create an entirely new economic incentive structure. Then again, I'm not an expert on this and we'll be talking to people who are.
(Davis): I question whether crowdsourcing has the potential to displace much scientific research. Most commercially viable research projects (such as pharmaceuticals) require significant capital investments in sophisticated experimental equipment, or access to tightly regulated materials (such as dangerous chemicals or radioactive sources). There is simply no way to crowdsource around the need for a spectrometer or a chromatograph. The types of scientific problems that are readily solved through crowdsourcing will tend to be idea-based (rather than experiment-based), and correct solutions must be easily verifiable. These criteria alone suffice to tightly restrict the class of problems that are amenable to solution through, say, Innocentive. Moreover, companies will need to employ scientific experts simply to know what questions to ask (and how to divide larger problems into smaller ones), and so a significant amount of centralization will still be necessary even with distributable projects.
And of course much scientific research is basic research, and therefore not (immediately) commercially viable. Thus we're unlikely to see a large category of research go the way of Innocentive. Something like the Large Hadron Collider is the very antithesis of crowdsourcing; such large collaborative projects seem to be the direction physics research will be headed for some time to come. The same will probably become true for other scientific fields as they mature.
(As a side note, I also feel the need to add that the concern we often hear about having an insufficient number of people pursuing the hard sciences is overblown. In most fields, there simply aren't enough jobs out there to absorb the quantity of science PhDs we produce.)
(Rachel): As an offshoot of what Davis said, I'm curious as to whether we can discern general principles of when crowdsourcing is or is not viable -- not so much in terms of Rene's question about public acceptance of crowdsourcing, but rather in terms of when it can actually be done or not, as in Davis' Large Hadron Collider example. Or what about litigation? Doc review is being outsourced more and more to contract lawyers working as independent contractors, both within the U.S. and abroad, so that seems like fair game for crowdsourcing (assuming we can get 'specialized' crowds), but it does not seem like the same could be said for actual trial or appellate practice in court. Is it merely a question of skill and experience? Given projects like Innocentive, where scientific issues requiring a lot of skill and experience are crowdsourced, that does not seem to be the case, and yet what is it about things like Large Hadron Colliders and trials that appear to be resistant to crowdsourcing?
Case Study: oDesk (Rene)
- I have used oDesk many times over the summer to outsource smaller programming projects for my startup to developers, mainly in India and Southeast Asia. For those who haven't used oDesk: you post a job with a budget, oDesk workers apply for the job, you can interview them and then hire one; a small portion of the overall payment might be paid upfront, and the rest is paid at completion of the project at the discretion of the employer. oDesk has standard terms (NDA, etc.) to facilitate the transactions, but I have asked the developers I hired to sign additional documentation. The biggest issue is quality control; despite the fact that there is a rating system, it is quite difficult to evaluate whether someone is able to get a certain job done or not. I really like Jenny's question about recommendation systems / quality control as an extension of point (2) above and would like to hear what technologists have in mind to address this important challenge.
I would like to hear a discussion about the general public's acceptance of crowdsourcing. As mentioned in class, our knowledge and opinions of crowdsourcing represent very much a minority viewpoint. Although to us it presents a really novel and theoretically interesting development, I imagine different entities (investors, crowdsourcing employees, workers outside of the field) would view this new practice through the lens of their own interests. I would like to hear these crowdsourcing leaders discuss their interactions with these groups, either through an open question or a directed one.
What if we approach the best practices document with a view to Lessig's four modes of regulation, and frame our discussion of crowdsourcing in terms of which combination of modes could best achieve the desired outcomes? For example, assume a crowdsourcing application that has an architecture in place forcing disclosure pursuant to the best practices model. With such a system in place, norms may then provide the best solution to the fairness problem: workers would share information about employers who are known to violate users' sense of fairness in worker forums, and discourage others from doing the work. Or workers could "strike" by making a concerted effort to accept all that employer's tasks and intentionally perform poorly, thereby obstructing completion of the disfavored company's assignments (sort of like 4Chan meets Mechanical Turk).
On a related note, could a crowdsourcing approach solve any of the crowdsourcing best practices problems itself? For example, is there a way to implement a feedback and monitoring system whereby the quality of a submission is judged by crowd workers?
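One very rough sketch of what that could look like, in Python: each submission is rated by several crowd workers, and it is accepted only if the reputation-weighted average rating clears a threshold. The reviewer-reputation weights, the threshold, and the function are all hypothetical assumptions for illustration, not a description of any existing platform's mechanism.

```python
# Hypothetical sketch: accept a submission only if crowd reviewers collectively approve it.
def crowd_verdict(ratings, threshold=3.5):
    """ratings: list of (score_1_to_5, reviewer_reputation_0_to_1) tuples."""
    total_weight = sum(rep for _, rep in ratings)
    if total_weight == 0:
        return False  # no trusted reviewers; route to manual review instead
    weighted_avg = sum(score * rep for score, rep in ratings) / total_weight
    return weighted_avg >= threshold

# Example: three reviewers; the more reputable ones liked the submission.
print(crowd_verdict([(4, 0.9), (5, 0.8), (2, 0.3)]))  # True (weighted average 4.1)
```

Of course this just pushes the quality problem back one level, onto the reviewers, which is presumably where reputation weights, spot checks, or gold-standard questions would have to come in.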