SECOND INTERNATIONAL HARVARD CONFERENCE ON INTERNET & SOCIETY  May 26-29, 1998
 
Internet Filtration: Rights to Listen, Rights to Speak, Rights to Tune Out

by David Melaugh

The Internet is a wide-open, new medium of information. By the millions, teenagers, political groups, used-car salespeople, and pornographers surf, contribute, download, and do research on the Web. With access to this virtual fire hose of information, it is natural for users to want a method to sort through the information, weeding out the useless or offensive bits.

It is useful first to understand terms such as "filtration" or "rating." These terms refer to a wide range of techniques for blocking access to a certain class of material. These methods include keyword filtering (blocking material that contains certain keywords), site blocking (blocking sites based on some predetermined list), and rating systems (blocking sites according to the rating they are given, either by the site author or by a third party).
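To make the first two techniques concrete, here is a minimal sketch in Python; the keyword and site lists are invented purely for illustration, and real filtering products are considerably more elaborate.

    # A minimal sketch of keyword filtering and site blocking.
    # The keyword and site lists below are invented for illustration only.

    BLOCKED_KEYWORDS = {"gambling", "xxx"}       # hypothetical keyword list
    BLOCKED_SITES = {"badsite.example.com"}      # hypothetical blocklist

    def is_blocked(url, page_text):
        """Return True if access to the page should be blocked."""
        # Site blocking: compare the host against a predetermined list.
        host = url.split("/")[2] if "://" in url else url
        if host in BLOCKED_SITES:
            return True
        # Keyword filtering: block material containing certain keywords.
        words = set(page_text.lower().split())
        return bool(words & BLOCKED_KEYWORDS)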

So, what is meant by a "certain class of material"? The answer to this question could occupy a whole conference of its own. Discussions of filtering typically arise in the context of protecting minors from "objectionable" material containing pornographic or violent content. This is certainly not the only context (see ADL, below), but restricting minors' access to pornography takes up much of the current focus on this issue. In addition, most commercial filtration providers also block sites that depict violence, offer access to online gambling, or contain otherwise offensive content.

What, if anything, is wrong with such techniques of limiting access to sites? While this is meant to be an impartial presentation, it may be simplest, organizationally, to move through the objections to filtering. Objections to filtering technology typically take one of two forms (note that the two are, in a way, contradictory):

  1. The technology doesn't work well (and may never work well).
  2. The better the technology gets, the more dangerous to free speech it becomes.

Objection 1: Filtering Doesn't Work, and May Never Work

Imagine that the parents of a thirteen-year-old wish to limit her access to the Internet, much in the same way they might try to prevent her from renting certain movies, reading certain magazines, or watching certain TV shows. They would likely turn to a commercial provider of Internet filtering software (see below for a list of publishers). This software uses several techniques to block access to objectionable content. Most programs come with a list of sites that the company has determined contain questionable content. Some programs also use keyword filtering, which blocks access to pages that contain a certain combination of keywords.
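As a rough sketch of the "combination of keywords" approach (the watch list and threshold below are invented, not drawn from any actual product):

    # Sketch of combination-based keyword filtering: a page is blocked
    # only when enough terms from a watch list appear together.
    # The watch list and threshold are invented for illustration.

    WATCH_LIST = {"adult", "explicit", "uncensored"}
    THRESHOLD = 2   # hypothetical: block when two or more terms co-occur

    def blocked_by_combination(page_text):
        hits = sum(1 for word in page_text.lower().split() if word in WATCH_LIST)
        return hits >= THRESHOLD

Requiring a combination of terms rather than a single term is one way a vendor might try to avoid blocking, say, a medical page that uses one flagged word in isolation.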

Organizations such as Peacefire object to filtering software primarily because, they argue, it is ineffective: it blocks too much (see example). The Electronic Privacy Information Center makes a similar argument in a recent report, Faulty Filters, as does Netly News. Manufacturers of filtering software respond that this is still an infant industry, and some patience is required. Several manufacturers have set up complaint procedures for unblocking a site. Further, the industry argues that many of the examples objectors cite are outdated and do not properly reflect the current state of the technology. (Filtering Facts makes such an argument.)

For manufacturers of filtering technology, see:

Computer Professionals for Social Responsibility has a fairly even-handed FAQ about filtering (as well as a policy statement).



Objection 2: Effective Filter/Rating Poses a Danger to Free Speech

At the other end of the spectrum, objectors argue that effective filtering may pose an even greater danger to free speech. Rating technology holds open the promise of web-wide "meta" information describing a site to potential viewers before a viewer even arrives at the site. The W3C, a standards-setting group, has promulgated PICS (the Platform for Internet Content Selection), a method by which site authors can embed rating labels in their site coding, and is currently promulgating PICSRules, a companion language for expressing filtering preferences. This opens the way for groups like the Recreational Software Advisory Council and SafeSurf to define rating schemata and authorize sites to incorporate them.
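To illustrate, here is a hedged Python sketch of how a user-side filter might read an RSACi-style PICS label and compare it against a user's chosen limits. The label text is simplified, and the limits are a hypothetical example; consult the PICS specification itself for the authoritative syntax.

    # Sketch: how a user-side filter might read a PICS-style rating label
    # embedded by a site author and compare it against a user's limits.
    # The label below is simplified; see http://www.w3.org/PICS/ for the
    # actual syntax.
    import re

    # An RSACi-style label, of the kind a site author might embed via
    # <META http-equiv="PICS-Label" content='...'> in a page's HTML.
    label = '(PICS-1.1 "http://www.rsac.org/ratingsv01.html" l r (n 0 s 0 v 2 l 1))'

    # The user's maximum acceptable level per category (hypothetical):
    # n = nudity, s = sex, v = violence, l = language.
    limits = {"n": 0, "s": 0, "v": 1, "l": 2}

    ratings = {k: int(v) for k, v in re.findall(r"([nsvl]) (\d)", label)}
    blocked = any(ratings.get(cat, 0) > limit for cat, limit in limits.items())
    print("blocked" if blocked else "allowed")  # here v 2 exceeds the limit of 1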

What these detractors fear is that effective rating technology will chill speech that would otherwise reach the casual (or even determined) viewer. The web has the potential to be many things, one of which is an online analog to the classic "public forum," a place of some visibility that allows citizens to speak their minds and be heard. Objectors hold that effective ratings technology has the potential to allow a viewer to say, for instance, "I would prefer never to come across a left-wing political site." The Anti-Defamation League has already proposed doing just this: the ADL is teaming up with CyberPatrol to block hate-speech sites. (See ADL Press Release; CyberPatrol Press Release.)

In some sense, this debate mirrors the classic and often-repeated question, "Is new technology neutral?" Rating supporters hold that the technology is inherently neutral and will merely allow viewers to surf more efficiently (see, for example, Paul Resnick's Ratings FAQ). Objectors respond that technology is often not neutral and could be used to quash free speech (see the ACLU's "Fahrenheit 451.2: Is Cyberspace Burning?", the Global Internet Liberty Campaign's Submission on PICSRules, and the Electronic Frontier Foundation's Ratings Archive).


Resources:

Related Issues:

Age verification industry

Filtering in Schools/Libraries


Contributed to the Harvard Conference web site by David E. Melaugh, Harvard Law School, Class of 1998, where he is Assistant Editor of the Journal of Law & Technology and an intern at the Berkman Center. Melaugh graduated from Dartmouth College, Class of 1997, with a double major in Religion and Government. He is originally from the San Francisco Bay Area and will be working for the Department of Justice, Computer Crimes and Intellectual Property Section, in the summer of 1998.