Crowdsourcing


Crowdsourcing: Background and Working Definitions

Definitions

At present there is no generally agreed definition of crowdsourcing, and commentators have used the term with different meanings. We therefore believe an overview of definitions is helpful before discussing the different types of crowdsourcing.

The most widely accepted definition of crowdsourcing comes from Jeff Howe, who defined it as

 "the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call."[1]

He further clarified that crowdsourcing could take the form of either peer production (when co-workers interact among themselves) or work by sole individuals (when co-workers, if any, are isolated). Under Howe's definition, the employer must be an organization (in most cases, a corporation), because he was considering crowdsourcing as a new type of corporate business model, by which corporations may raise productivity or establish new businesses that would have been infeasible before. Nevertheless, we do not think that the employer in the process of crowdsourcing, as a matter of definition, must be an organization; individuals can certainly outsource a task to an online crowd.

Kleemann and Voß (2008) argue that

 "central to the concept of crowdsourcing is the idea that a crowd of people, collaboratively (or at least simultaneously) contribute to an aspect of the production process or to the solution of a design issue or other problems."[2]

Although we agree that simultaneous or collaborative work is a significant type of crowdsourcing, it is not the only one. The Best Practices entry for crowdwork, developed last year and reposted on Class 3, classifies crowdwork three ways: "First, a large group of workers may do microtasks to complete a whole project; the best-known platform in this arena is Amazon Mechanical Turk. Second, companies may use cloudwork platforms to connect with individual workers, or a small group of workers, who then complete larger jobs (e.g., Elance and oDesk). Finally, a company may run 'contests,' where numerous workers complete a task and only the speediest or best worker is paid (e.g., InnoCentive and Worth1000). In some contests, the company commits to picking at least one winner; in others, there is no such guarantee." It is clear that when crowdsourcing takes the form of competitive bidding, not every participant works on an aspect of the task; each of them works on the whole task, and they do not have to work at the same time. Only the final winner is compensated. It is even possible that only one individual or organization joins the bidding process, so that no competing parties are involved.

Based on the above discussion, we believe there are two core elements that constitute crowdsourcing, both of which are facilitated by an online platform (such as Amazon Mechanical Turk[3]):

 1. The task is outsourced through an open call from the employer;
 2. The recipients of the call (not necessarily those who ultimately decide to participate) are a large, undefined crowd.

General Information on Crowdsourcing

  • General Information
    • For a quick overview by Jeff Howe, author of Crowdsourcing,[4] take a look at this YouTube clip.[5]
    • Northwestern University Professor Kris Hammond also explains crowdsourcing, but argues that its downsides are worker rewards and quality.[6]
    • Our very own Jonathan Zittrain discusses crowdsourcing in his talk, Minds for Sale.[7]
    • Several individuals gathered to discuss crowdsourcing in a panel moderated by New York Times correspondent Brad Stone.[8]
  • In the News.
    • The New York Times recently ran an article on crowdsourcing featuring two crowdsourcing companies:[9] Microtask[10] and CloudCrowd.[11]
    • It's interesting to note that these companies are attempting to monetize crowdsourcing in exactly the way in which Howe says it cannot be monetized successfully.
  • Examples of crowdsourcing.
    • Take a look at Wikipedia's compilations.[12]

Crowdsourcing Literature

General Overview

Although the idea of crowdsourcing--if not the word itself--has been around for many years, the Internet has made it much easier, cheaper, and more efficient to harness the power of crowds. That power was popularized in 2004, when James Surowiecki published a book entitled The Wisdom of Crowds.[13] The book purported to show how large groups of people can, in many cases, be more effective at solving problems than specialists. According to Surowiecki (2004: xiii), "under the right circumstances, groups are remarkably intelligent, and are often smarter than the smartest people in them." Two years later, journalist Jeff Howe coined the term "crowdsourcing" to refer to work performed by the "masses" online.[14] Since Howe's article was published in 2006, numerous authors have written books on crowdsourcing, each choosing to focus on different aspects of the topic. Howe himself took up the topic in 2008, proclaiming crowdsourcing to be a panacea--a place where a perfect meritocracy could thrive.[15] Howe examined crowdsourcing from a variety of perspectives: what benefits it can provide, what kinds of tasks it can accomplish, and the potential changes it may bring about. His prognosis for crowdsourcing was positive--in it he saw many potential solutions and few potential problems. Others have followed Howe's lead in describing the benefits of crowdsourced work. Clay Shirky has published two books--Here Comes Everybody (2008)[16] and Cognitive Surplus (2010)[17]--in which he describes how technology does more than enable new tools: it also enables consumers to become collaborators and producers. Although Shirky's books are not expressly about crowdsourcing, they mirror the optimism Howe expresses, both in terms of collaborative enterprises and the Internet's power to enable them.

These books have provoked an academic interest in finding out who the crowd is, and why it moves the way it does. Some have looked at scientific crowdsourcing, asking what characteristics make someone a successful crowdworker/problem-solver.[18] Part of answering that question, it turns out, is asking why people attempt to be part of the innovating crowd in the first place. The authors of this study found that the crowd was highly educated; they also found heterogeneity in scientific interests, as well as monetary and intrinsic motivations, to be important drivers of "good" problem-solvers. Others have examined non-scientific endeavors and asked similar questions.[19] That report likewise found that most iStock crowdworkers producing photographs were highly educated and motivated primarily by money. Jeff Howe, however, takes a different perspective on crowdworkers' motivation: "There are...two shared attributes among almost all crowdsourcing projects: participants are not primarily motivated by money, and they're donating their leisure hours to the cause. That is, they're contributing their excess capacity, or 'spare cycles,' to indulge in something they love to do." (Crowdsourcing, pgs. 28-29.)


While some focused on the potential consumer revolution or the composition of the crowd, others examined the business-related aspects of crowdsourcing. Identifying attributes of successful crowd innovators also has a business dimension. One researcher suggests that having experience spanning a variety of communities or disciplines makes one more likely to be considered 'innovative.'[20] Others focus more broadly on how to use the crowd to maintain or bolster a business or brand. In Groundswell (2008),[21] Charlene Li and Josh Bernoff focus on how to use crowdsourcing most effectively to benefit businesses. The authors highlight how the user base of a product can undermine that product or brand.[22] As a result, they propose that businesses use the "groundswell" to their advantage, fostering communities that can provide valuable feedback and economic payoffs. Marion K. Poetz and Martin Schreier have also taken a business perspective on crowdsourcing,[23] arguing that the crowd is capable of producing valuable (but not always viable) business ideas at a low cost. Other researchers have found that young entrepreneurs attempting to start businesses frequently belonged to these kinds of communities.[24] For a related discussion of user innovation and user communities, see Eric Von Hippel's books[25] and William Fisher's article in the Minnesota Law Review.[26]


Other authors have pointed out some of the problems with crowdsourcing. Dr. Mathieu O'Neil has argued that, despite its benefits, crowdsourcing can have inconsistent quality, can lack the diversity needed to draw on the "wisdom of the crowd," and can contain many irresponsible actors.[27] Miriam Cherry has argued that some crowdwork can be exploitative, sometimes forcing people to work for absurdly low wages.[28] She argues that we need a legal framework for addressing low wages, proposing that we apply the Fair Labor Standards Act (FLSA) to crowdsourced work like that found on Mechanical Turk. In a forthcoming article, she takes a more systematic (but still legal) approach, suggesting solutions for the problems faced by different kinds of virtual work.[29] Cherry appears to be the only law professor to have addressed crowdsourcing from a doctrinal perspective.


Much of the other literature on the subject concerns the problem of quality. Soylent--essentially a crowdsourced editing program--is a prime example of how a lack of quality can limit the commercialization of an innovative and useful crowdsourcing product.[30] Cheat detection--the ability to filter out individuals who complete tasks without actually reading them, in the hope of receiving money without doing the work--has recently drawn attention. Indeed, a crowdsourced solution to cheating has been proposed for sentence translation, relying on principles such as crowd consensus and logical parallelism in sentence structure and word choice.[31] Others have attempted to increase the quality of traditionally automated translation by crowdsourcing translation tasks.[32] In addition to simple crowdsourcing, one set of authors suggests combining human crowdwork with machine work. Under this process, according to the authors, the system can target a specific "speed-cost-quality tradeoff" based on how tasks are allocated between computers and humans.[33] John J. Horton, David Rand, and Richard Zeckhauser have addressed using the online crowd for quality experimental research.[34]
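
The consensus principle invoked in this cheat-detection work can be made concrete. Below is a minimal Python sketch that assigns each task to several workers, takes the majority answer as the consensus, and flags workers who disagree with the consensus too often. It illustrates the general idea only; the function name and the agreement threshold are hypothetical choices of ours, not taken from the cited papers.

 # Minimal sketch of consensus-based cheat detection (hypothetical).
 from collections import Counter, defaultdict
 def flag_suspect_workers(answers, agreement_threshold=0.5):
     """answers: list of (worker_id, task_id, answer) tuples, where
     each task was given redundantly to several workers."""
     # 1. Take the majority answer for each task as the consensus.
     by_task = defaultdict(list)
     for worker, task, answer in answers:
         by_task[task].append(answer)
     consensus = {task: Counter(given).most_common(1)[0][0]
                  for task, given in by_task.items()}
     # 2. Count how often each worker matches the consensus.
     agree, total = Counter(), Counter()
     for worker, task, answer in answers:
         total[worker] += 1
         if answer == consensus[task]:
             agree[worker] += 1
     # 3. Flag workers whose agreement rate falls below the threshold.
     return [w for w in total
             if agree[w] / total[w] < agreement_threshold]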

Other Problems

The literature on crowdsourcing tends to discuss either broad or quite specific issues. Books tend to have an overall argument about the value of crowdsourcing, its core attributes, and how it needs to be structured. Articles, conversely, tend to describe specific studies or problems within a particular community. There is little room for systematically addressing common crowdsourcing problems. Instead, the platforms offering crowdsourcing address these problems internally. 99designs--a website that allows people to solicit creative logo designs--has several policies regulating the behavior of those who request[35] and perform[36] work. Most crowdsourcing services have similar policies or recommendations. In January 2010, a small group of students from Harvard Law School and Stanford Law School gathered in Palo Alto for three weeks to talk about these more general problems. They produced a document of Best Practices (Class 3), which sought to identify and propose a framework to address problems endemic to crowdsourcing. That document identified six major issues that needed to be addressed in crowdwork:

1. Disclosure: Workers want to know the identity of the employer, so disclosure should be the default preference.

2. Fairness: Employers sometimes underpay, pay late, or don't pay at all, so employers should pay fair and just wages on time.

3. Feedback and Monitoring: Judging the worker, task, or company is difficult for each player, so platforms should work to enable better feedback and monitoring systems.

4. Healthy Work Environment: Workers face the risks of stress from repetition, alienation and isolation, and addiction, so platforms should explain risks and companies should implement strategies to reduce risks.

5. Reputation and Portability: Workers who do good (or bad) work cannot capitalize on (and employers cannot avoid) their work, so platforms and companies should work to keep records of worker information and use it to track performance and confirm identities.

6. Privacy Protection: Workers are concerned with employers sharing their (potentially sensitive) information, so platforms should protect information and not release it.

The Best Practices document provides a nice starting point because it identifies several major issues common to all crowdsourcing problems. It does not, however, capture all potential problems. Additionally, it tends to focus its concerns only on workers, though platforms and companies face similar problems. Finally, because the document is meant as a general framework, it is hard to get a sense of whether it could be effectively implemented across the board. There is room, then, to explore problems that are broad enough to have implications for a variety of actors, but specific enough to merit context-specific solutions.

Our Addition: Identifying Areas, Exploring the Problems

Given the body of literature and the Best Practices document, we found the idea of addressing systemic problems both attractive and difficult. Instead of replicating the Best Practices, or simply writing an overview of crowdsourcing, we decided to take a different angle. Unlike the Best Practices document, which classified problems generally and then worked downward to devise specific solutions by applying them to different types of crowdwork, we worked from the ground up. We identified three types of crowdwork that suggested a variety of important, but (context-)specific problems. At the beginning stages, we had only our intuition to guide our "sense" of the problems. As we delved further into them, however, they crystallized. From our discussions we identified three types of crowdwork in which specific problems arise, some of which are systemic problems with crowdsourcing that the Best Practices document does not address. Nevertheless, we wanted to draw on the Best Practices document to determine whether some of its strategies seemed workable or needed to be expanded, refined, or discarded. To accomplish this goal, we attempted to integrate the Best Practices approaches into our framing of both the problems and the solutions we discussed.


An Introduction to Our Approach

Our discussion of various crowdsourcing environments suggested a variety of ways to slice the pie. In the end, we settled on three areas of crowdsourcing, reaching a rough classification based on the type of work performed. In that sense, our division followed the Best Practices division of work into microtasks, connective tasks, and contest tasks. But there was an important difference: our classification of work also depended upon the purpose to which the work was being put, focusing on a specific case study for each. In other words, it mattered to us that one task was framed as a "game" versus a "survey." We cared not just about the framing, but about the motives of the employer and the worker. We asked questions like, "For what purpose is the employer requesting this task?" and "Why does the worker choose to perform the task?" Our aim was not to analyze every kind of crowdwork using motive and purpose; rather, these questions provided a general framing for dividing crowdwork into analytical categories--places where we could identify specific problems that may differ depending on the answers to these questions. After significant discussion, we settled on three types of tasks, choosing a case study to explore each one:

1. Microtasks: Amazon's Mechanical Turk[37];

2. Tasks requiring "professional" skills: 99designs[38] and InnoCentive[39]; and

3. "Game" tasks: Gwap[40].

For each of these tasks, we attempted to identify salient "problems": issues that cause concern for workers, employers, platforms, businesses, or society generally. In identifying problems, we had two goals. The first was to provide a set of new issues for others to build upon in future work. The second was to explore a small number of issues and propose our own context-specific solutions. In this sense, it was an exercise both in applying the Best Practices and in inventing new solutions to problems that context or framing prevented the Best Practices from addressing. In what follows, we explain each topic, the problems it presents, and specific solutions to selected problems. Although we think the solutions we propose have some teeth, they are not meant to be final. Indeed, our goal in presenting these solutions and problems is to provide a base from which others can build.

The 3 Crowdsourcing Environments and Problems

Microtasks: Amazon's Mechanical Turk; Microtask.com

An introductory video of Microtask.com: [41]

"Professional-Grade" Tasks: 99designs & InnoCentive

99designs is a website that allows individuals or companies that need a design to solicit it through crowdsourcing.[42] InnoCentive is a company that allows individuals and entities to post scientific problems that anyone can attempt to solve.[43] These companies are a particularly interesting form of crowdsourcing because they enable the crowd to perform work typically performed by "professionals." Although services like Mechanical Turk also "deprofessionalize" work, 99designs and InnoCentive do so much more directly. Typically, designs are created (at 99designs) or problems are solved (at InnoCentive) by professional companies, whose employees typically have some formal training. These platforms raise a variety of concerns.

"Deprofessionalization"

In some sense, graphic design and other industries such as science are "professionalized": they are businesses occupied by individuals with formal training (and often formal education). Many types of work qualify as "professions" under this definition. Traditional occupations like lawyer, doctor, and clergy certainly fall within it; but so too do other kinds of work, such as graphic design. For the past several years, crowdsourcing has crept into these "professionalized" areas without much fanfare. In science, for example, InnoCentive has provided a platform for corporate employers to crowdsource complex science problems. In the "creative" space, 99designs performs a similar function: it enables companies to crowdsource graphic design work. In some sense, professionalization is a gatekeeping mechanism--it vets people before they can perform certain work. In other cases, some argue, it merely reinforces existing structures that disadvantage certain individuals. Professional crowdsourcing platforms reduce the role of industry or profession as gatekeeper--and have the potential to eliminate it entirely. If that is the risk, then several problems result.

Specific Problems

  • 1. Cannibalization/Wage Reduction. If 99designs or InnoCentive lowers entry barriers and costs, it could stimulate a race to the bottom. In this world, professionals cannot earn a living because "amateurs" or unemployed professionals drive down prices for crowdsourced design work. With low prices and an abundance of crowdworkers, companies may shed their traditional means of acquiring designs. So crowdsourcing, which started off as a way to lower specific business costs or solve thorny problems, becomes the sole means of (research &) design work. In this environment, the professional industry collapses because wages are too low. Alternatively, a new web-based professionalization occurs. That, of course, depends on a variety of factors, including the ability of crowdworkers to maintain a coherent identity/reputation online.
  • 2. Devaluing of Education. If deprofessionalization occurs and an industry cannibalizes itself, there will be a concomitant and precipitous drop in the market value of education. In scientific areas, many crowdworkers have advanced degrees, or are at least highly educated. Their ability to perform sophisticated or professional crowdwork is therefore partly dependent upon their education. But the crowdwork market undervalues the educational experience of the worker because, at present, most workers are highly educated. In this situation wages fall and crowdwork replaces professional work, but the cost of education remains constant. This means that sophisticated crowdworkers will pay the same amount for school but will be full-time, instead of part-time, crowdworkers. Given the low wages, people will not be making a return on their educational investment. Several possibilities then result--but we focus on the most dire here. Knowing they cannot generate a return on their investment, individuals could be deterred from higher education. This, in turn, will cause a brain drain, where fewer and fewer people obtain advanced degrees. As a result, the market for professional crowdworkers shrinks. Given the technological growth rate, the demand for crowdworkers will continue to grow. The short supply and high prices will mean the end of crowdsourced professional work for two reasons. First, costs will rise to the point where crowdsourcing is no longer more economical than traditional professional services. Second, and more importantly, the lack of workers means that, given the growing technological and cultural demands our society makes, tasks simply cannot be crowdsourced effectively.

Solutions.

One might wonder whether concerns over deprofessionalization are overstated. The criticism is that we are worried simply about "amateurs" displacing "professionals." We think the two issues just outlined illustrate that the problem is greater than professionals losing their privileged status. There are doubtless many potential solutions to these problems. Here are a few.

  • 1. Wage Scale. Industries and professionals could collaborate to set wages they think are reasonable. The Best Practices document recommends "fair" wages, but in this context it might be wise to let interested parties collaborate, rather than rely only on employers to set a fair wage. One could see such a wage scale being set by various stakeholders, and perhaps some "nonstakeholders."
  • 2. Wage Determinants. Wages could be set according to one credential measure or a series of them. This could take several different forms, which could be combined.
    • 2a. Workers with a degree or work experience in the relevant profession or a related professional field may be entitled to a higher wage than a worker with no higher education. The problem with this approach is that it reduces the "meritocracy" aspect of crowdsourcing. It also seems to devalue the diversity on which crowdsourcing thrives.
    • 2b. If the platform implements an effective reputation or rating system (detailed or simple), workers could use their reputations to generate more work. This solution faces some technical and privacy problems, but seems like at least one plausible way of ensuring favorable wages for better performers. It also has the benefit of showing which workers are repeatedly good at performing tasks. This is important because many InnoCentive solvers, for example, never solve more than one problem.[44] One negative of this method is that it diminishes the "perfect meritocracy" that some seem to think can persist forever (if it exists at all).
    • 2c. Workers could be paid according to the contributions they make. For this system to work, a platform and employer would have to work together. They would have to create a framework that allowed an initial screening for those who held the requisite qualifications or ratings, and then paid them according to the amount of time worked and the contributions made. (Here some kind of allocation algorithm might be useful; see the sketch after this list.)
  • 3. Educational Reform. Change educational components to provide skills that crowdsourcing cannot cover. This could include non-compartmentalizable tasks or exposure to a broader range of subjects. There are many problems with reforming the educational system. Aside from actually doing it, here are two specific ones. First, it may be difficult to identify in advance which problems are crowdsource-able, as the capacity to crowdsource work is likely to change in the future. Second, because we don't yet know enough about crowdsourcing, it's difficult to say what skills or exposure to ideas one needs to perform certain tasks well--or to outperform crowdsourcing.
  • 4. Discourage wage reductions/crowdsourcing by having platforms require disclosure when crowdsourcing. This, however, may discourage crowdsourcing generally. The goal should be to reap crowdsourcing's benefits while minimizing its potential downside. Still, this solution could be used to less extreme degrees to pressure companies to offer competitive wages for crowdsourced tasks.
  • 5. Implement incentives to increase pay? Could the patent system effect this?
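
As a rough illustration of the allocation algorithm contemplated in item 2c above, here is a minimal Python sketch that screens contributors by rating and then splits a task budget in proportion to time worked and contributions made. The field names, the screening rule, and the weighting are hypothetical assumptions, not a description of any existing platform.

 # Minimal sketch of contribution-proportional payment (hypothetical).
 def allocate_payment(budget, contributors, min_rating=3.0):
     """contributors: dicts with 'id', 'rating', 'hours', and
     'units' (e.g., accepted design revisions)."""
     # 1. Screen for workers who meet the qualification threshold.
     qualified = [c for c in contributors if c["rating"] >= min_rating]
     # 2. Weight each worker by a blend of time worked and output.
     weights = {c["id"]: c["hours"] + 2 * c["units"] for c in qualified}
     total = sum(weights.values())
     # 3. Split the budget in proportion to each worker's weight.
     return {wid: round(budget * w / total, 2)
             for wid, w in weights.items()} if total else {}
 # Example: a $300 task split between the two screened-in workers.
 pay = allocate_payment(300.0, [
     {"id": "w1", "rating": 4.5, "hours": 3, "units": 2},
     {"id": "w2", "rating": 3.2, "hours": 5, "units": 0},
     {"id": "w3", "rating": 2.1, "hours": 4, "units": 1},  # screened out
 ])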


Reputation

Reputation is an issue for all parties involved in professional crowdsourcing: the worker, the employer, and the platform. The Best Practices document focused on some of these issues in the microtasking environment, where concerns centered on speed, efficiency, and work quality. The Best Practices also focused solely on the relationship between individual workers and employers. These concerns also exist in the professional environment. There are, however, other problems to confront. Specifically, worker quality may become more important for professional work because the work done is more time-, labor-, and skill-intensive. Additionally, because tasks are fewer in number and take longer, each contest or task completed has greater influence on, and importance for, worker reputation. Finally, if professional work pays more than microtasks, the reputational stakes are higher.

Reputation Concerns for Workers, Employers, and Platforms

  • Workers. As crowdsourcing increases and professionalized crowdworkers have success, they will want to signal to potential employers that they have done good work in the past. Workers may also want to list their professional experience, education, or training (as one can do on Gerson Lehrman Group[45]). Workers also will seek to ensure that antisocial behavior among workers counts against the offender's reputation. The literature on crowdsourcing emphasizes the community that exists in crowdwork environments, and ensuring that the community functions well is important. 99designs has seen numerous instances in which one crowdworker accuses another of "stealing" a design. Workers also will want to know the reputation of an employer. This could include factors like the amount paid, the quickness of payment, and the type of work offered.
  • Employers. Reputation also would be valuable for employers--they'd prefer work from people with high reputations, not only to ensure good work, but also to ensure the work was not copied from somewhere else. While copying from the worker's perspective is important as a social matter, from an employer's perspective copying is important as a legal matter. Employers want to avoid copyright infringement or any claims of copyright infringement. Employers also want to broadcast their "good" reputations.
  • Platforms. Platforms have an incentive to ensure that workers' and employers' desire for a reputation system is met. It will, in many cases, be up to the platform to institute a system that can deal effectively with reputation, antisocial behavior, and legal issues. 99designs has implemented a system that seeks to deal with the latter two, but does not address reputation.

Problems.

  • 1. Portability. As the Best Practices document notes, workers and employers may want to have coherent identities across a variety of platforms. In doing so, they may want to keep their identities secret but their reputations public. If a worker moves from 99designs to iStockphoto, for example, she may have to create a totally new profile and build up a reputation from scratch, even though some aspects of her previous reputation may be relevant.
  • 2. Reliability. Any feedback-reputation system needs to be reliable. That is, it needs to accurately reflect the quality and quantity of work performed. One way to ensure diversity in feedback, and therefore reliability, is to have a mediated reputation system, where workers vote both on each other (where applicable) and on employers, and employers do the same (voting on workers and on each other). The system could then be mediated by the platform. (A sketch of such a cross-voting tally appears after this list.)
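
Below is a minimal Python sketch of the mediated cross-voting system described in item 2. The vote format and the equal weighting of worker and employer votes are hypothetical choices; a real platform would tune both, and the "mediation" here is reduced to discarding malformed votes.

 # Minimal sketch of platform-mediated cross-voting (hypothetical).
 from collections import defaultdict
 def aggregate_reputation(votes):
     """votes: list of (rater_role, ratee_id, score) tuples, where
     rater_role is 'worker' or 'employer' and score runs 1-5."""
     by_ratee = defaultdict(list)
     for rater_role, ratee, score in votes:
         # Mediation step: the platform discards out-of-range votes.
         if 1 <= score <= 5:
             by_ratee[ratee].append((rater_role, score))
     reputation = {}
     for ratee, scored in by_ratee.items():
         # Average each source separately, then blend the averages so
         # neither workers nor employers dominate the final score.
         worker_votes = [s for role, s in scored if role == "worker"]
         employer_votes = [s for role, s in scored if role == "employer"]
         parts = [sum(v) / len(v)
                  for v in (worker_votes, employer_votes) if v]
         reputation[ratee] = sum(parts) / len(parts)
     return reputation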

Solutions

  • 1. Platform-specific solution. The Best Practices document suggests that each crowdsourcing platform implement its own measures to track reputation. This is a reasonable solution. 99designs and InnoCentive do not yet have such a system. This kind of solution would allow each platform to identify the reputational characteristics necessary for the kind of work offered. 99designs, for example, could track worker reputation based on cooperation among workers, number of contests won, and portfolio ratings from other users. It could also track employer reputation based on payment amount and speed, and copyright licensing terms. InnoCentive, by contrast, could focus on credential characteristics, such as degrees obtained or relevant work experience. It could track employers based on the potential for future projects with the same company. The Best Practices document also suggests that workers have access to their data, and presumably envisions a way to use reputation information from one provider at another. Here, the solution could be for each platform to issue a "portable reputation," which would explain the reputation of the worker on that site, the work performed, and the average/median rating of a worker on the site (with similar work performed). In this scenario, platforms would have to have some interoperability. Each platform, for example, could agree to host the reputational information provided by another. In such a situation, the worker could 'import' his or her data from, say, 99designs to iStockphoto. When that worker completes a task or uploads a photo, an employer could click on the worker's profile to view various bits of reputational information. In one iteration, this could include a "Reputation Homepage" that displayed the logos of the various platforms for which the worker had completed tasks. Scrolling the mouse over a logo might reveal some general reputational information, such as a reputation score (with the appropriate scale for the platform). There are a variety of options, but they all require some kind of cooperation among platforms. (A sketch of a portable reputation record appears after this list.)
  • 2. Uniform Reputation. Another approach to reputation would rely on a centralized mechanism or entity that managed worker/employer reputation. This entity could customize by type of worker (think science versus creative), and would allow all employers and workers to interact with it through all crowdsourcing platforms. A central reputation authority would allow people to take their work reputations with them and stay "anonymous"--they need not disclose their real identities. One can imagine a universal reputation platform that draws information from various sites and aggregates it--perhaps through agreements with all the platforms. Employers and workers could then consult this platform to determine reputations. The main problem is that there is no way to reliably track reputation without reference to the work done at a particular platform. Because platforms would automatically submit worker reputation information to the site, there are essentially two ways the site could work. The first resembles the solution described above: a visitor is taken to a reputation homepage that displays a worker's or employer's reputation from different platforms, and provides information about each. One advantage of this accrues to the platforms: they don't have to build interoperability--they can delegate reputation aggregation to one entity, which could probably collect and display the information more effectively. A second method would be for the reputation site to create its own "general" reputation for each worker based on the information it receives from platforms. We can imagine a situation in which workers or employers can ask the reputation site to rank workers based on specific characteristics, general ones, or a combination. This would allow employers to sort workers by the reputational characteristics they deem important to a particular type of work.
  • 3. Reliability for Platform-Specific Versus Uniform.
    • Although there are benefits to the uniform approach, there are also several problems. The technical problem is enabling compatibility and communication among platforms. There are also privacy issues with consolidating reputation in one place.
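
To make the portability discussion concrete, here is a minimal Python sketch of a "portable reputation" record that either the platform-specific or the uniform approach could exchange. The field names and the score normalization are hypothetical; a real scheme would also need authentication (e.g., signed records) so that reputations could not be forged, which this sketch omits.

 # Minimal sketch of a portable reputation record (hypothetical).
 import json
 def export_reputation(platform, worker_pseudonym, stats):
     """Bundle one platform's view of a worker into a record that
     another platform or a central aggregator could import."""
     return json.dumps({
         "platform": platform,            # e.g., "99designs"
         "worker": worker_pseudonym,      # pseudonym, not a legal name
         "tasks_completed": stats["tasks"],
         "score": stats["score"],         # on the platform's own scale
         "scale": stats["scale"],         # lets others compare scores
     })
 def import_reputation(records):
     """Aggregate records from several platforms, normalizing each
     score to the 0-1 range so different scales sit side by side."""
     imported = []
     for raw in records:
         record = json.loads(raw)
         record["normalized_score"] = record["score"] / record["scale"]
         imported.append(record)
     return imported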


Disclosure

Because companies can use services like InnoCentive and 99designs anonymously or semi-anonymously, questions about disclosure arise. The Best Practices document deals primarily with disclosure in the microtask environment, identifying concerns such as "search engine optimization." In the professional environment, the problems could be much greater. Imagine, for example, that Blackwater is seeking to develop a chemical compound that destroys an enzyme that breaks down oxygen, and plans to use this compound as a dispersant during combat. If Blackwater uses InnoCentive to accomplish this goal, the worker may know only that she developed compound X. The problem here seems more salient than in microtasks (though that can be disputed). There are several problems related to disclosure.

Problems

  • 1. Objectionable Tasks. Workers may not want to perform tasks that have what they perceive to be nefarious ends.
  • 2. Worker Confidentiality.
  • 3. Work Quality.

Solutions

  • 1. Create Categories. If tasks or contests were divided by task subject category, workers could better identify those that they would likely consider appropriate. Additionally, there could be a separate category for "potentially objectionable" tasks, such as those for military companies.
  • 2. Platform-enabled partial disclosure & worker-employer disclosure matching. The Best Practices document suggests some form of limited disclosure protected by contract. A platform also could provide a matching system, where both workers and employers select various preferences, such as the type of work or the willingness to work without knowing the employer or project. The platform could then use a computerized process to match employers to workers with compatible preferences. If a conflict developed, or either party had questions for the other, they could communicate directly or through a third party. Contracts could be used at various points to limit disclosure. (A sketch of such a matching process appears after this list.)
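
Below is a minimal Python sketch of the preference-matching process just described: employers and workers each state a work category and a disclosure preference, and the platform pairs compatible parties. The preference fields are hypothetical illustrations of what a platform might collect.

 # Minimal sketch of worker-employer disclosure matching (hypothetical).
 def match(employers, workers):
     """employers: dicts with 'id', 'category', 'anonymous';
     workers: dicts with 'id', 'category', 'blind_ok'."""
     matches = []
     for e in employers:
         for w in workers:
             same_category = e["category"] == w["category"]
             # An anonymous employer needs a worker willing to work
             # without knowing whom the work is for.
             disclosure_ok = (not e["anonymous"]) or w["blind_ok"]
             if same_category and disclosure_ok:
                 matches.append((e["id"], w["id"]))
     return matches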

"Game" Tasks: Gwap

Summary