Challenges and Critiques

From Identifying Difficult Problems in Cyberlaw

This section of the wiki acknowledges core criticisms of the Pharos project and recommends future work that can be done to address some of these concerns. It addresses foundational critiques of the project, practical concerns, and potential unintended consequences that may result from the project.

Foundational Considerations: Defining the Problem

The Pharos Project is premised on the idea that, despite the rise of low-cost video technologies, a dearth of video footage documenting human rights abuses currently exists.[1] Assuming for the moment that such a dearth truly exists, it is unclear what has stymied the use of video technologies to document and report human rights abuses. It is possible that few human rights videos have circulated for reasons independent of any need for a new technological system that sanitizes videos and serves as an end-to-end upload-and-hosting service for them.

For example, internet penetration in Africa remains low: one 2010 study estimates that only about 10.9 percent of the continent's population has internet access. The lack of internet access could explain why human rights activists make limited use of video technologies to upload human rights content to the web. Social norms may also shape whether and how technologies are used to further human rights. In addition, some activists might choose not to film and circulate videos documenting human rights abuses because increased international exposure of a particular kind of government abuse may not stop it. The “range of human rights abuses” is wide, and human rights advocacy taking the form of “pursuing reform through formal judicial routes or diplomatic channels” might be more effective in many situations. The solution offered by Pharos may be ill-suited to addressing first-order issues like network connectivity and the practices surrounding technology use.

More precision is needed in defining the problems Pharos hopes to solve. In internal discussions, the Pharos team proposed several hypotheses, each of which should be critically examined:

Human rights videos are not widely disseminated on the internet because:

  • There is no tool to sanitize human rights videos (i.e., remove identifying information), so human rights abuses are not being recorded on video or uploaded to the internet for fear of jeopardizing the physical security of the individuals featured in the videos or of the uploader.
  • Human rights activists do not have the technological knowledge to use existing “liberation technologies” like Tor that can provide some measure of anonymity and security to the human rights activist.
  • Current technologies (like Tor) that can be used by human rights activists are being underleveraged because human rights activists do not know about their existence.
  • For-profit services, such as YouTube, are particularly susceptible to government requests for takedown; as a result, human rights videos are not being posted to these for-profit hosting sites.

Some of the problems above can be addressed without creating a new organization or designing a new technological system. For example, to address human rights activists’ lack of technical expertise, some researchers have run technology training sessions. Patrick Meier, a researcher studying the use of technologies in civil resistance against repressive regimes, has led training sessions teaching individuals how to use technologies like Ushahidi, a tool that uses crowdsourced reports to map natural disasters and government abuses against citizens.

If existing technologies are not being leveraged to the extent possible by human rights activists because they do not know of these tools’ existence, training sessions or branding efforts may similarly be used to inform human rights activists of the tools at their disposal. Low-cost video editing software that strips metatags from the video and enables face blurring could provide a similarly narrow solution to address the problem of government efforts to identify and jail dissidents who upload or are featured in human rights videos.
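The narrow sanitization step described above is approximable with existing utilities. As a hedged illustration only (the helper function and file names are hypothetical, and actually running the command assumes ffmpeg is installed), stripping metatags from a video might look like:

```python
# Hypothetical sketch of a metadata-stripping step, not part of any
# actual Pharos design. It only builds an ffmpeg argument list; running
# it (e.g. via subprocess.run) assumes ffmpeg is installed. Face
# blurring would require separate video-processing tools.

def build_sanitize_cmd(src: str, dst: str) -> list[str]:
    """Return an ffmpeg command that drops container metadata
    (author, GPS coordinates, device tags) while copying the
    audio/video streams without re-encoding."""
    return [
        "ffmpeg",
        "-i", src,
        "-map_metadata", "-1",  # discard global metadata
        "-c", "copy",           # copy streams as-is (fast, lossless)
        dst,
    ]

cmd = build_sanitize_cmd("protest.mp4", "sanitized.mp4")
print(" ".join(cmd))
```

Note that this removes container-level tags only; stream-level metadata and visual identifiers (faces, landmarks) would still need separate handling.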

Going forward, more evidence bearing on the hypotheses debated within the team is needed to determine whether a new technological system is needed and, if so, which features of such a system would most aid the dissemination of human rights media.

Practical Concerns

The success of Pharos’s mission depends on acquiring adequate funding; developing a clear policy to meet its objectives of enabling, protecting, and promoting human rights; and making video upload simple enough to encourage activists to use video technologies more frequently to capture and circulate content on human rights abuses.

Costs and Funding

The costs of running a non-profit organization like Pharos would be significant. Resources would be needed to train a sophisticated technology team to keep Pharos’s servers secure. Regular staff would be needed to blur faces and remove metatags and other identifying information from the videos. Educating human rights organizations about Pharos, and branding and disseminating information about the project, would also be expensive because it would require visiting many NGOs around the world.

Finding sources of funding might be problematic because of the highly political nature of Pharos’s mission. All of the potential sources of funding—governments, venture capitalists, and philanthropists—would likely have concerns that limit their willingness to donate.

Most governments would fear the destabilizing effects of videos posted by their citizens or the geopolitical consequences of funding such an effort. Most venture capitalists and other profit-seeking investors would not be interested in funding Pharos because it would not generate profit.

Even though many philanthropists might initially be attracted to Pharos’s broad goals, some may object to the organization’s definition of acceptable content. For example, an American philanthropist who generally supports ending human rights abuses may object to Pharos’s willingness to release video content exposing American troops firing on Iraqi or Afghan civilians because of the video’s potential to deepen anti-American sentiment in the Middle East.

Governance and Defining Acceptable Content

Extremist groups might submit videos to Pharos that are intended to terrorize a population. Determining whether it is appropriate to disseminate a video showing the capture or killing of a civilian would be difficult. When the video of journalist Daniel Pearl’s beheading circulated on the internet, many video hosting services decided to continue hosting it even though some individuals, including Senator Joe Lieberman, asked YouTube to remove graphic and violent videos that seemed to further the ends of terrorists. Like Lieberman, some philanthropists might object to content distributed to further the ends of a terrorist or extremist organization. YouTube’s Community Guidelines advise uploaders not to post content that contains “graphic or gratuitous violence,” and yet YouTube decided that First Amendment rights and the public’s right to know what had happened to Daniel Pearl trumped other concerns. In a similar situation, what position would Pharos take? Should the board remove the video to avoid satisfying the whims of a terrorist organization? Or should a larger interest in human rights, or in circulating the truth, trump other considerations? What positions would the former political leaders on the Pharos board take, and how would those views influence Pharos’s decision?

Defining the scope of acceptable content presents a particularly thorny problem for Pharos’s advisory board even though Pharos could skirt some issues by abiding by a policy that is as inclusive and as politically neutral as possible.

One set of policy guidelines the Pharos team intended to model was the CNN iReport Community Guidelines. CNN iReport asks individuals to not post:

  • Content that infringes someone's copyright.
  • Content that you know to be untrue.
  • Spam, or repeated uploads that flood the site with duplicate versions of the same or similar content.
  • Pornography/sexually explicit content.
  • Obscene/lewd content.
  • Content that advocates violent behavior.
  • Content that contains violent images of killing or physical abuse that appear to have been captured solely, or principally, for exploitive, prurient or gratuitous purposes.
  • Content that advocates dangerous, illegal or predatory acts or poses a reasonable threat to personal or public safety.
  • Hate Speech/Racially or ethnically offensive content.

If adopted, several of the guidelines listed above would be difficult to interpret. As mentioned before, would video content showing the capture of a civilian journalist and his beheading violate the guideline that bars content “contain[ing] violent images of killing or physical abuse . . . captured solely, or principally, for exploitive . . . or gratuitous purposes”?

Furthermore, what is a human rights violation in one country might very well be a venerated tradition in another society, and determining the right level of deference to pay a particular society will be challenging. The Satere-Mawe, an indigenous people in Brazil, perform a male adulthood initiation ritual in which boys place their hands in gloves filled with bullet ants, whose stings inject venom that causes severe pain, temporary paralysis in the arms, and uncontrollable shaking that may last up to twenty-four hours. In other societies, un-anaesthetized male circumcision is performed on teenage boys without sterilized medical instruments as part of a religious ritual, and botched circumcisions routinely lead to death. Do these rituals constitute human rights abuses?

A board that consists of former political leaders, philanthropists, and academics is also likely to introduce disagreements that might stall the creation of, or adherence to, a clear policy regarding acceptable content.

Other Challenges

Unintended Consequences

Threats to Human Security

Government Crackdowns on Internet Access

Worsening of Human Rights Conditions


  1. Contra E-mail from Patrick Meier, Director of Crisis Mapping, Ushahidi, and Doctoral Research Fellow, Harvard Humanitarian Initiative, Harvard University (Feb. 1, 2011, 18:21 PDT) (on file with author) (stating that there is currently no dearth of footage about human rights abuses circulating on the internet, and pointing specifically to YouTube videos documenting government-endorsed brutality against civilian protesters in Egypt as Mubarak struggles to stay in power).