
From Yochai Benkler, The Wealth of Networks

Chapter 11 The Battle Over the Institutional Ecology of the Digital Environment



The decade straddling the turn of the twenty-first century has seen high levels of legislative and policy activity in the domains of information and communications.

Between 1995 and 1998, the United States completely overhauled its telecommunications law for the first time in sixty years, departed drastically from decades of practice on wireless regulation, revolutionized the scope and focus of trademark law, lengthened the term of copyright, criminalized individual user infringement, and created new paracopyright powers for rights holders that were so complex that the 1998 Digital Millennium Copyright Act (DMCA) that enacted them was longer than the entire Copyright Act.

Europe covered similar ground on telecommunications, and added a new exclusive right in raw facts in databases.

Both the United States and the European Union drove for internationalization of the norms they adopted, through the new World Intellectual Property Organization (WIPO) treaties and, more important, through the inclusion of intellectual property concerns in the international trade regime.

In the seven years since then, legal battles have raged over the meaning of these changes, as well as over efforts to extend them in other directions.

From telecommunications law to copyrights, from domain name assignment to trespass to server, we have seen a broad range of distinct regulatory moves surrounding the question of control over the basic resources needed to create, encode, transmit, and receive information, knowledge, and culture in the digital environment.

As we telescope up from the details of sundry regulatory skirmishes, we begin to see a broad pattern of conflict over the way that access to these core resources will be controlled.



Much of the formal regulatory drive has been to increase the degree to which private, commercial parties can gain and assert exclusivity in core resources necessary for information production and exchange.

At the physical layer, the shift to broadband Internet has been accompanied by less competitive pressure and greater legal freedom for providers to exclude competitors from, and shape the use of, their networks.

That freedom from both legal and market constraints on exercising control has been complemented by increasing pressures from copyright industries to require that providers exercise greater control over the information flows in their networks in order to enforce copyrights.

At the logical layer, anticircumvention provisions and the efforts to squelch peer-to-peer sharing have created institutional pressures on software and protocols to offer a more controlled and controllable environment.

At the content layer, we have seen a steady series of institutional changes aimed at tightening exclusivity.



At each of these layers, however, we have also seen countervailing forces.

At the physical layer, the Federal Communications Commission's (FCC's) move to permit the development of wireless devices capable of self-configuring as user-owned networks offers an important avenue for a commons-based last mile.

The open standards used for personal computer design have provided an open platform.

The concerted resistance against efforts to require computers to be designed so they can more reliably enforce copyrights against their users has, to this point, prevented extension of the DMCA approach to hardware design.

At the logical layer, the continued centrality of open standard-setting processes and the emergence of free software as a primary modality of producing mission-critical software provide significant resistance to efforts to enclose the logical layer.

At the content layer, where law has been perhaps most systematically one-sided in its efforts to enclose, the cultural movements and the technical affordances that form the foundation of the transformation described throughout this book stand as the most significant barrier to enclosure.



It is difficult to tell how much is really at stake, from the long-term perspective, in all these legal battles.

From one point of view, law would have to achieve a great deal in order to replicate the twentieth-century model of industrial information economy in the new technical-social context.

It would have to curtail some of the most fundamental technical characteristics of computer networks and extinguish some of our most fundamental human motivations and practices of sharing and cooperation.

It would have to shift the market away from developing ever-cheaper general-purpose computers whose value to users is precisely their on-the-fly configurability over time, toward more controllable and predictable devices.

It would have to squelch the emerging technologies in wireless, storage, and computation that are permitting users to share their excess resources ever more efficiently.

It would have to dampen the influence of free software, and prevent people, young and old, from doing the age-old human thing: saying to each other, "here, why don't you take this, you'll like it," with things they can trivially part with and share socially.

It is far from obvious that law can, in fact, achieve such basic changes.

From another viewpoint, there may be no need to completely squelch all these things.

Lessig called this the principle of bovinity: a small number of rules, consistently applied, suffice to control a herd of large animals.

There is no need to assure that all people in all contexts continue to behave as couch potatoes for the true scope of the networked information economy to be constrained.

It is enough that the core enabling technologies and the core cultural practices are confined to small groups-some teenagers, some countercultural activists.

There have been places like the East Village or the Left Bank throughout the period of the industrial information economy.

For the gains in autonomy, democracy, justice, and a critical culture that are described in part II to materialize, the practices of nonmarket information production, individually free creation, and cooperative peer production must become more than fringe practices.

They must become a part of life for substantial portions of the networked population.

The battle over the institutional ecology of the digitally networked environment is waged precisely over how many individual users will continue to participate in making the networked information environment, and how much of the population of consumers will continue to sit on the couch and passively receive the finished goods of industrial information producers.



Institutional Ecology and Path Dependence



The century-old pragmatist turn in American legal thought has led to the development of a large and rich literature about the relationship of law to society and economy.

It has both Right and Left versions, and has disciplinary roots in history, economics, sociology, psychology, and critical theory.

Explanations are many: some simple, some complex; some analytically tractable, many not.

I do not make a substantive contribution to that debate here, but rather build on some of its strains to suggest that the process is complex, and particularly, that the relationship of law to social relations is one of punctuated equilibrium-there are periods of stability followed by periods of upheaval, and then adaptation and stabilization anew, until the next cycle.

Hopefully, the preceding ten chapters have provided sufficient reason to think that we are going through a moment of social-economic transformation today, rooted in a technological shock to our basic modes of information, knowledge, and cultural production.

Most of this chapter offers a sufficient description of the legislative and judicial battles of the past few years to make the case that we are in the midst of a significant perturbation of some sort.

I suggest that the heightened activity is, in fact, a battle, in the domain of law and policy, over the shape of the social settlement that will emerge around the digital computation and communications revolution.



The basic claim is made up of fairly simple components.

First, law affects human behavior on a micromotivational level and on a macro-social-organizational level.

This is in contradistinction to, on the one hand, the classical Marxist claim that law is epiphenomenal, and, on the other hand, the increasingly rare simple economic models that ignore transaction costs and institutional barriers and simply assume that people will act in order to maximize their welfare, irrespective of institutional arrangements.

Second, the causal relationship between law and human behavior is complex.

Simple deterministic models of the form "if law X, then behavior Y" have been used as assumptions, but these are widely understood as, and criticized for being, oversimplifications for methodological purposes.

Laws do affect human behavior by changing the payoffs to regulated actions directly.

However, they also shape social norms with regard to behaviors, psychological attitudes toward various behaviors, the cultural understanding of actions, and the politics of claims about behaviors and practices.

These effects are not all linearly additive.

Some push back and nullify the law, some amplify its effects; it is not always predictable which of these any legal change will be.

Decreasing the length of a "Walk" signal to assure that pedestrians are not hit by cars may trigger wider adoption of jaywalking as a norm, affecting ultimate behavior in exactly the opposite direction of what was intended.

This change may, in turn, affect enforcement regarding jaywalking, or the length of the signals set for cars, because the risks involved in different signal lengths change as actual expected behavior changes, which again may feed back on driving and walking practices.

Third, and as part of the complexity of the causal relation, the effects of law differ in different material, social, and cultural contexts.

The same law introduced in different societies or at different times will have different effects.

It may enable and disable a different set of practices, and trigger a different cascade of feedback and countereffects.

This is because human beings are diverse in their motivational structure and their cultural frames of meaning for behavior, for law, or for outcomes.

Fourth, the process of lawmaking is not exogenous to the effects of law on social relations and human behavior.

One can look at positive political theory or at the history of social movements to see that the shape of law itself is contested in society because it makes (through its complex causal mechanisms) some behaviors less attractive, valuable, or permissible, and others more so.

The "winners" and the "losers" battle each other to tweak the institutional playing field to fit their needs.

As a consequence of these, there is relatively widespread acceptance that there is path dependence in institutions and social organization.

That is, the actual organization of human affairs and legal systems is not converging through a process of either Marxist determinism or its neoclassical economics mirror image, "the most efficient institutions win out in the end."

Different societies will differ in initial conditions and their historically contingent first moves in response to similar perturbations, and variances will emerge in their actual practices and institutional arrangements that persist over time-irrespective of their relative inefficiency or injustice.



The term "institutional ecology" refers to this context-dependent, causally complex, feedback-ridden, path-dependent process.

An example of this interaction in the area of communications practices is the description in chapter 6 of how the introduction of radio was received and embedded in different legal and economic systems early in the twentieth century.

A series of organizational and institutional choices converged in all nations on a broadcast model, but the American broadcast model, the BBC model, and the state-run monopoly radio models created very different journalistic styles, consumption expectations and styles, and funding mechanisms in these various systems.

These differences, rooted in a series of choices made during a short period in the 1920s, persisted for decades in each of the respective systems.

Paul Starr has argued in The Creation of the Media that basic institutional choices-from postage pricing to freedom of the press-interacted with cultural practices and political culture to underwrite substantial differences in the print media of the United States, Britain, and much of the European continent in the late eighteenth and throughout much of the nineteenth centuries.1

Again, the basic institutional and cultural practices were put in place around the time of the American Revolution, and were later overlaid with the introduction of mass-circulation presses and the telegraph in the mid-1800s.

Ithiel de Sola Pool's Technologies of Freedom describes the battle between newspapers and telegraph operators in the United States and Britain over control of telegraphed news flows.

In Britain, this resulted in the nationalization of telegraph and the continued dominance of London and The Times.

In the United States, it resolved into the pooling model of the Associated Press, based on private lines for news delivery and sharing-the prototype for newspaper chains and later network-television models of mass media.2

The possibility of multiple stable equilibria existing alongside each other, evoked by the stories of radio and print media, is common to both ecological models and analytically tractable models of path dependence.

Both methodological approaches depend on feedback effects and therefore suggest that for any given path divergence, there is a point in time where early actions that trigger feedbacks can cause large and sustained differences over time.



Systems that exhibit path dependencies are characterized by periods of relative pliability followed by periods of relative stability.

Institutions and social practices coevolve through a series of adaptations-feedback effects from the institutional system to social, cultural, and psychological frameworks; responses into the institutional system; and success and failure of various behavioral patterns and belief systems-until a society reaches a stage of relative stability.

It can then be shaken out of that stability by external shocks-like Commodore Perry's arrival in Japan-or internal buildup of pressure to a point of phase transition, as in the case of slavery in the United States.

Of course, not all shocks can so neatly be categorized as external or internal-as in the case of the Depression and the New Deal.

To say that there are periods of stability is not to say that in such periods, everything is just dandy for everyone.

It is only to say that the political, social, and economic settlement is too comfortable for, and too widely accepted or acquiesced in by, the agents in that society who have the power to change practices, for institutional change to have substantial effects on the range of lived human practices.



The first two parts of this book explained why the introduction of digital computer-communications networks presents a perturbation of transformative potential for the basic model of information production and exchange in modern complex societies.

They focused on the technological, economic, and social patterns that are emerging, and how they differ from the industrial information economy that preceded them.

This chapter offers a fairly detailed map of how law and policy are being tugged and pulled in response to these changes.

Digital computers and networked communications as a broad category will not be rolled back by these laws.

Instead, we are seeing a battle-often but not always self-conscious-over the precise shape of these technologies.

More important, we are observing a series of efforts to shape the social and economic practices as they develop to take advantage of these new technologies.



A Framework for Mapping the Institutional Ecology



Two specific examples will illustrate the various levels at which law can operate to shape the use of information and its production and exchange.

The first example builds on the story from chapter 7 of how embarrassing internal e-mails from Diebold, the electronic voting machine maker, were exposed by investigative journalism conducted on a nonmarket and peer-production model.

After students at Swarthmore College posted the files, Diebold made a demand under the DMCA that the college remove the materials or face suit for contributory copyright infringement.

The students were therefore forced to remove the materials.

However, in order to keep the materials available, the students asked students at other institutions to mirror the files, and injected them into the eDonkey, BitTorrent, and FreeNet file-sharing and publication networks.

Ultimately, a court held that the unauthorized publication of files that were not intended for sale and carried such high public value was a fair use.

This meant that the underlying publication of the files was not itself a violation, and therefore the Internet service provider was not liable for providing a conduit.

However, the case was decided on September 30, 2004-long after the information would have been relevant to the voting equipment certification process in California.

What kept the information available for public review was not the ultimate vindication of the students' publication.

It was the fact that the materials were kept in the public sphere even under threat of litigation.

Recall also that at least some of the earlier set of Diebold files that were uncovered by the activist who had started the whole process in early 2003 were zipped, or perhaps encrypted in some form.

Scoop, the Web site that published the revelation of the initial files, published-along with its challenge to the Internet community to scour the files and find holes in the system-links to locations in which utilities necessary for reading the files could be found.



There are four primary potential points of failure in this story that could have conspired to prevent the revelation of the Diebold files, or at least to suppress the peer-produced journalistic mode that made them available.

First, if the service provider-the college, in this case-had been a sole provider with no alternative physical transmission systems, its decision to block the materials under threat of suit would have prevented publication of the materials throughout the relevant period.

Second, the existence of peer-to-peer networks that overlay the physical networks and were used to distribute the materials made expunging them from the Internet practically impossible.

There was no single point of storage that could be locked down.

This made the prospect of threatening other universities futile.

Third, those of the original files that were not in plain text were readable with software utilities that were freely available on the Internet, and to which Scoop pointed its readers.

This made the files readable to many more critical eyes than they otherwise would have been.

Fourth, and finally, the fact that access to the raw materials-the e-mails-was ultimately found to be privileged under the fair-use doctrine in copyright law allowed all the acts that had been performed in the preceding period under a shadow of legal liability to proceed in the light of legality.



The second example does not involve litigation, but highlights more of the levers open to legal manipulation.

In the weeks preceding the American-led invasion of Iraq, a Swedish video artist produced a short video that set news footage of U.S. president George Bush and British prime minister Tony Blair to Diana Ross and Lionel Richie's love ballad, "Endless Love."

By carefully synchronizing the lip movements from the various news clips, the video produced the effect of Bush "singing" Richie's part, and Blair "singing" Ross's, serenading each other with an eternal love ballad.

No legal action with regard to the release of this short video has been reported.

However, the story adds two components that were not present in the context of the Diebold files.

First, it highlights that quotation from video and music requires actual copying of the digital file.

Unlike text, you cannot simply transcribe the images or the sound.

This means that access to the unencrypted bits is more important than in the case of text.

Second, it is not at all clear that using the entire song, unmodified, is a "fair use."

While it is true that the Swedish video is unlikely to cut into the market for the original song, there is nothing in the video that is a parody either of the song itself or of the news footage.

The video uses "found materials," that is, materials produced by others, and mixes them in a way that is surprising and creative and that makes a genuinely new statement.

However, its use of the song is much more complete than the minimalist uses of digital sampling in recorded music, where using a mere two-second, three-note riff from another's song has been found to be a violation unless done with a negotiated license.3



Combined, the two stories suggest that we can map the resources necessary for a creative communication, whether produced on a market model or a nonmarket model, as including a number of discrete elements.

First, there is the universe of "content" itself: existing information, cultural artifacts and communications, and knowledge structures.

These include the song and video footage, or the e-mail files, in the two stories.

Second, there is the cluster of machinery that goes into capturing, manipulating, fixing and communicating the new cultural utterances or communications made of these inputs, mixed with the creativity, knowledge, information, or communications capacities of the creator of the new statement or communication.

These include the physical devices-the computers used by the students and the video artist, as well as by their readers or viewers-and the physical transmission mechanisms used to send the information or communications from one place to another.

In the Diebold case, the firm tried to use the Internet service provider liability regime of the DMCA to cut off the machine storage and mechanical communications capacity provided to the students by the university.

However, the "machinery" also includes the logical components-the software necessary to capture, read or listen to, cut, paste, and remake the texts or music; the software and protocols necessary to store, retrieve, search, and communicate the information across the Internet.



As these stories suggest, freedom to create and communicate requires use of diverse things and relationships-mechanical devices and protocols, information, cultural materials, and so forth.

Because of this diversity of components and relationships, the institutional ecology of information production and exchange is a complex one.

It includes regulatory and policy elements that affect different industries, draw on various legal doctrines and traditions, and rely on diverse economic and political theories and practices.

It includes social norms of sharing and consumption of things conceived of as quite different-bandwidth, computers, and entertainment materials.

To make these cohere into a single problem, for several years I have been using a very simple, three-layered representation of the basic functions involved in mediated human communications.

These are intended to map how different institutional components interact to affect the answer to the basic questions that define the normative characteristics of a communications system-who gets to say what, to whom, and who decides?4



These are the physical, logical, and content layers.

The physical layer refers to the material things used to connect human beings to each other.

These include the computers, phones, handhelds, wires, wireless links, and the like.

The content layer is the set of humanly meaningful statements that human beings utter to and with one another.

It includes both the actual utterances and the mechanisms, to the extent that they are based on human communication rather than mechanical processing, for filtering, accreditation, and interpretation.

The logical layer represents the algorithms, standards, ways of translating human meaning into something that machines can transmit, store, or compute, and something that machines process into communications meaningful to human beings.

These include standards, protocols, and software-both general enabling platforms like operating systems, and more specific applications.

A mediated human communication must use all three layers, and each layer therefore represents a resource or a pathway that the communication must use or traverse in order to reach its intended destination.
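
As a purely illustrative aside, not drawn from the book, this layered map can be summarized in a few lines of code; the resource names and the open/closed labels are invented, and the point is only that a closed resource at any single layer can veto the communication:

    # Illustrative sketch only, not drawn from the book: a toy representation of
    # the three layers a mediated communication must traverse. Resource names
    # and the open/closed labels below are invented for the example.
    from dataclasses import dataclass

    @dataclass
    class LayerResource:
        name: str          # e.g., "cable last mile", "operating system", "song"
        layer: str         # "physical", "logical", or "content"
        open_access: bool  # True if usable without an owner's permission

    def communication_possible(resources):
        # A communication goes through only if every layer it must traverse is
        # open or affirmatively permitted; any single closed layer is a veto point.
        return all(r.open_access for r in resources)

    path = [
        LayerResource("user-owned wireless mesh", "physical", True),
        LayerResource("free software stack and open protocols", "logical", True),
        LayerResource("copyrighted song quoted in a video", "content", False),
    ]

    print(communication_possible(path))  # False: the closed content layer blocks it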

In each and every one of these layers, we have seen the emergence of technical and practical capabilities for using that layer on a nonproprietary model that would make access cheaper, less susceptible to control by any single party or class of parties, or both.

In each and every layer, we have seen significant policy battles over whether these nonproprietary or open-platform practices will be facilitated or even permitted.

Looking at the aggregate effect, we see that at all these layers, a series of battles is being fought over the degree to which some minimal set of basic resources and capabilities necessary to use and participate in constructing the information environment will be available for use on a nonproprietary, nonmarket basis.



In each layer, the policy debate is almost always carried out in local, specific terms.

We ask questions like, Will this policy optimize "spectrum management" in these frequencies, or, Will this decrease the number of CDs sold?

However, the basic, overarching question that we must learn to ask in all these debates is: Are we leaving enough institutional space for the social-economic practices of networked information production to emerge?

The networked information economy requires access to a core set of capabilities-existing information and culture, mechanical means to process, store, and communicate new contributions and mixes, and the logical systems necessary to connect them to each other.

What nonmarket forms of production need is a core common infrastructure that anyone can use, irrespective of whether their production model is market-based or not, proprietary or not.

In almost all these dimensions, the current trajectory of technological-economic-social trends is indeed leading to the emergence of such a core common infrastructure, and the practices that make up the networked information economy are taking advantage of open resources.

Wireless equipment manufacturers are producing devices that let users build their own networks, even if these are now at a primitive stage.

The open-innovation ethos of the programmer and Internet engineering community produces both free software and proprietary software that rely on open standards for providing an open logical layer.

The emerging practices of free sharing of information, knowledge, and culture that occupy most of the discussion in this book are producing an ever-growing stream of freely and openly accessible content resources.

The core common infrastructure appears to be emerging without need for help from a guiding regulatory hand.

This may or may not be a stable pattern.

It is possible that by some happenstance one or two firms, using one or two critical technologies, will be able to capture and control a bottleneck.

At that point, perhaps regulatory intervention will be required.

However, from the beginning of legal responses to the Internet and up to this writing in the middle of 2005, the primary role of law has been reactive and reactionary.

It has functioned as a point of resistance to the emergence of the networked information economy.

It has been used by incumbents from the industrial information economies to contain the risks posed by the emerging capabilities of the networked information environment.

What the emerging networked information economy therefore needs, in almost all cases, is not regulatory protection, but regulatory abstinence.



The remainder of this chapter provides a more or less detailed presentation of the decisions being made at each layer, and how they relate to the freedom to create, individually and with others, without having to go through proprietary, market-based transactional frameworks.

Because so many components are involved, and so much has happened since the mid-1990s, the discussion is of necessity both long in the aggregate and truncated in each particular category.

To overcome this expositional problem, I have collected the various institutional changes in table 11.1.

For readers interested only in the overarching claim of this chapter-that is, that there is, in fact, a battle over the institutional environment, and that many present choices interact to increase or decrease the availability of basic resources for information production and exchange-table 11.1 may provide sufficient detail.

For those interested in a case study of the complex relationship between law, technology, social behavior, and market structure, the discussion of peer-to-peer networks may be particularly interesting to pursue.

A quick look at table 11.1 reveals that there is a diverse set of sources of openness.

A few of these are legal.

Mostly, they are based on technological and social practices, including resistance to legal and regulatory drives toward enclosure.

Examples of policy interventions that support an open core common infrastructure are the FCC's increased permission to deploy open wireless networks and the various municipal broadband initiatives.

The former is a regulatory intervention, but its form is largely removal of past prohibitions on an entire engineering approach to building wireless systems.

Municipal efforts to produce open broadband networks are being resisted at the state legislation level, with statutes that remove the power to provision broadband from the home rule powers of municipalities.

For the most part, the drive for openness is based on individual and voluntary cooperative action, not law.

The social practices of openness take on a quasi-normative face when practiced in standard-setting bodies like the Internet Engineering Task Force (IETF) or the World Wide Web Consortium (W3C).

However, none of these have the force of law.

Legal devices also support openness when used in voluntaristic models like free software licensing and Creative Commons-type licensing.

However, most often when law has intervened in its regulatory force, as opposed to its contractual-enablement force, it has done so almost entirely on the side of proprietary enclosure.



Another characteristic of the social-economic-institutional struggle is an alliance between a large number of commercial actors and the social sharing culture.

We see this in the way that wireless equipment manufacturers are selling into a market of users of WiFi and similar unlicensed wireless devices.

We see this in the way that personal computer manufacturers are competing over decreasing margins by producing the most general-purpose machines that would be most flexible for their users, rather than machines that would most effectively implement the interests of Hollywood and the recording industry.

We see this in the way that service and equipment-based firms, like IBM and Hewlett-Packard (HP), support open-source and free software.

The alliance between the diffuse users and the companies that are adapting their business models to serve them as users, instead of as passive consumers, affects the political economy of this institutional battle in favor of openness.

On the other hand, security consciousness in the United States has led to some efforts to tip the balance in favor of closed proprietary systems, apparently because these are currently perceived as more secure, or at least more amenable to government control.

While orthogonal in its political origins to the battle between proprietary and commons-based strategies for information production, this drive does tilt the field in favor of enclosure, at least at the time of this writing in 2005.



Table 11.1: Overview of the Institutional Ecology





Over the past few years, we have also seen that the global character of the Internet is a major limit on effective enclosure, when openness is a function of technical and social practices, and enclosure is a function of law.5

When Napster was shut down in the United States, for example, KaZaa emerged in the Netherlands, from where it later moved to Australia.

This force is meeting the countervailing force of international harmonization-a series of bilateral and multilateral efforts to "harmonize" exclusive rights regimes internationally and efforts to coordinate international enforcement.

It is difficult at this stage to predict which of these forces will ultimately have the upper hand.

It is not too early to map in which direction each is pushing.

And it is therefore not too early to characterize the normative implications of the success or failure of these institutional efforts.



The Physical Layer



The physical layer encompasses both transmission channels and devices for producing and communicating information.

In the broadcast and telephone era, devices were starkly differentiated.

Consumers owned dumb terminals.

Providers owned sophisticated networks and equipment: transmitters and switches.

Consumers could therefore consume whatever the providers could produce most efficiently and believed consumers would pay for.

Central to the emergence of the freedom of users in the networked environment is an erosion of the differentiation between consumer and provider equipment.

Consumers came to use general-purpose computers that could do whatever their owners wanted, instead of special-purpose terminals that could only do what their vendors designed them to do.

These devices were initially connected over a transmission network-the public phone system-that was regulated as a common carrier.

Common carriage required the network owners to carry all communications without differentiating by type or content.

The network was neutral as among communications.

The transition to broadband networks, and to a lesser extent the emergence of Internet services on mobile phones, are threatening to undermine that neutrality and nudge the network away from its end-to-end, user-centric model to one designed more like a five-thousand-channel broadcast model.

At the same time, Hollywood and the recording industry are pressuring the U.S. Congress to impose regulatory requirements on the design of personal computers so that they can be relied on not to copy music and movies without permission.

In the process, the law seeks to nudge personal computers away from being purely general-purpose computation devices toward being devices with factory-defined behaviors vis-à-vis predicted-use patterns, like glorified televisions and CD players.

The emergence of the networked information economy as described in this book depends on the continued existence of an open transport network connecting general-purpose computers.

It therefore also depends on the failure of the efforts to restructure the network on the model of proprietary networks connecting terminals with sufficiently controlled capabilities to be predictable and well behaved from the perspective of incumbent production models.



Transport: Wires and Wireless



Recall the Cisco white paper quoted in chapter 5.

In it, Cisco touted the value of its then new router, which would allow a broadband provider to differentiate streams of information going to and from the home at the packet level.

If the packet came from a competitor, or someone the user wanted to see or hear but the owner preferred that the user did not, the packet could be slowed down or dropped.

If it came from the owner or an affiliate, it could be speeded up.
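
To make the capability concrete, here is a minimal sketch, reduced to toy logic and not drawn from Cisco's documentation or from the book, of what such packet discrimination amounts to; the host names and traffic classes are invented:

    # Illustrative sketch only: the kind of packet-level discrimination described
    # above, reduced to toy logic. This is not Cisco's actual router software;
    # the host names and traffic classes are hypothetical.
    AFFILIATED_SOURCES = {"video.affiliate.example.net"}
    DISFAVORED_SOURCES = {"stream.competitor.example.com"}

    def forwarding_decision(source_host):
        # The provider-owned router sorts packets by where they come from,
        # not by what the user at the edge has asked for.
        if source_host in AFFILIATED_SOURCES:
            return "high priority"     # speeded up
        if source_host in DISFAVORED_SOURCES:
            return "throttle or drop"  # slowed down or dropped
        return "best effort"           # ordinary, neutral treatment

    for host in ["video.affiliate.example.net",
                 "stream.competitor.example.com",
                 "blog.example.org"]:
        print(host, "->", forwarding_decision(host))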

The purpose of the router was not to enable evil control over users.

It was to provide better-functioning networks.

America Online (AOL), for example, has been reported as blocking its users from reaching Web sites that have been advertised in spam e-mails.

The theory is that if spammers know their Web site will be inaccessible to AOL customers, they will stop.6

The ability of service providers to block sites or packets from certain senders and promote packets from others may indeed be used to improve the network.

However, whether this ability will in fact be used to improve service depends on the extent to which the interests of all users, and particularly those concerned with productive uses of the network, are aligned with the interests of the service providers.

Clearly, when in 2005 Telus, Canada's second largest telecommunications company, blocked access to the Web site of the Telecommunications Workers Union for all of its own clients and those of Internet service providers that relied on its backbone network, it was not seeking to improve service for those customers' benefit, but to control a conversation in which it had an intense interest.

When there is a misalignment, the question is what, if anything, disciplines the service providers' use of the technological capabilities they possess?

One source of discipline would be a genuinely competitive market.

The transition to broadband has, however, severely constrained the degree of competition in Internet access services.

Another would be regulation: requiring owners to treat all packets equally.

This solution, while simple to describe, remains highly controversial in the policy world.

It has strong supporters and strong opposition from the incumbent broadband providers, and has, as a practical matter, been rejected for the time being by the FCC.

The third type of solution would be both more radical and less "interventionist" from the perspective of regulation.

It would involve eliminating contemporary regulatory barriers to the emergence of a user-owned wireless infrastructure.

It would allow users to deploy their own equipment, share their wireless capacity, and create a "last mile" owned by all users in common, and controlled by none.

This would, in effect, put equipment manufacturers in competition to construct the "last mile" of broadband networks, and thereby open up the market in "middle-mile" Internet connection services.



Since the early 1990s, when the Clinton administration announced its "Agenda for Action" for what was then called "the information superhighway," it has been the policy of the United States to "let the private sector lead" in deployment of the Internet.

To a greater or lesser degree, this commitment to private provisioning was adopted in most other advanced economies in the world.

In the first few years, this meant that investment in the backbone of the Internet was private, and heavily funded by the stock bubble of the late 1990s.

It also meant that the last distribution bottleneck-the "last mile"-was privately owned.

Until the end of the 1990s, the last mile was made mostly of dial-up connections over the copper wires of the incumbent local exchange carriers.

This meant that the physical layer was not only proprietary, but that it was, for all practical purposes, monopolistically owned.

Why, then, did the early Internet nonetheless develop into a robust, end-to-end neutral network?

As Lessig showed, this was because the telephone carriers were regulated as common carriers.

They were required to carry all traffic without discrimination.

Whether a bit stream came from Cable News Network (CNN) or from an individual blog, all streams-upstream from the user and downstream to the user-were treated neutrally.



Broadband Regulation


The end of the 1990s saw the emergence of broadband networks.

In the United States, cable systems, using hybrid fiber-coaxial systems, moved first, and became the primary providers.

The incumbent local telephone carriers have been playing catch-up ever since, using digital subscriber line (DSL) techniques to squeeze sufficient speed out of their copper infrastructure to remain competitive, while slowly rolling out fiber infrastructure closer to the home.

As of 2003, the incumbent cable carriers and the incumbent local telephone companies accounted for roughly 96 percent of all broadband access to homes and small offices.7

In 1999-2000, as cable was beginning to move into a more prominent position, academic critique began to emerge, stating that the cable broadband architecture could be manipulated to deviate from the neutral, end-to-end architecture of the Internet.

One such paper was written by Jerome Saltzer, one of the authors of the paper that originally defined the "end-to-end" design principle of the Internet in the early 1980s, and Lessig and Mark Lemley wrote another.

These papers began to emphasize that cable broadband providers technically could, and had commercial incentive to, stop treating all communications neutrally.

They could begin to move from a network where almost all functions are performed by user-owned computers at the ends of the network to one where more is done by provider equipment at the core.

The introduction of the Cisco policy router was seen as a stark marker of how things could change.



The following two years saw significant regulatory battles over whether the cable providers would be required to behave as common carriers.

In particular, the question was whether they would be required to offer competitors nondiscriminatory access to their networks, so that these competitors could compete in Internet services.

The theory was that competition would discipline the incumbents from skewing their networks too far away from what users valued as an open Internet.

The first round of battles occurred at the municipal level.

Local franchising authorities tried to use their power over cable licenses to require cable operators to offer open access to their competitors if they chose to offer cable broadband.

The cable providers challenged these regulations in the courts.

The most prominent decision came out of Portland, Oregon, where the Federal Court of Appeals for the Ninth Circuit held that broadband was part information service and part telecommunications service, but not a cable service.

The FCC, not the cable franchising authority, had power to regulate it.8

At the same time, as part of the approval of the AOL-Time Warner merger, the Federal Trade Commission (FTC) required the new company to give at least three competitors open access to its broadband facilities if it offered AOL broadband service over Time Warner cable.



The AOL-Time Warner merger requirements, along with the Ninth Circuit's finding that cable broadband included a telecommunications component, seemed to indicate that cable broadband transport would come to be treated as a common carrier.

This was not to be.

In late 2001 and the middle of 2002, the FCC issued a series of reports that would reach the exact opposite result.

Cable broadband, the commission held, was an information service, not a telecommunications service.

This created an imbalance with the telecommunications status of broadband over telephone infrastructure, which at the time was treated as a telecommunications service.

The commission dealt with this imbalance by holding that broadband over telephone infrastructure, like broadband over cable, was now to be treated as an information service.

Adopting this definition was perhaps admissible as a matter of legal reasoning, but it certainly was not required by either sound legal reasoning or policy.

The FCC's reasoning effectively took the business model that cable operators had successfully used to capture two-thirds of the market in broadband-bundling two discrete functionalities, transport (carrying bits) and higher-level services (like e-mail and Web hosting)-and treated it as though it described the intrinsic nature of "broadband cable" as a service.

Because that service included more than just carriage of bits, it could be called an information service.

Of course, it would have been as legally admissible, and more technically accurate, to do as the Ninth Circuit had done.

That is, to say that cable broadband bundles two distinct services: carriage and information-use tools.

The former is a telecommunications service.

In June of 2005, the Supreme Court in the Brand X case upheld the FCC's authority to make this legally admissible policy error, deferring to the expert agency's judgment that cable broadband services should be treated as information services.9

As a matter of policy, the designation of broadband services as "information services" more or less locked the FCC into a "no regulation" approach.

As information services, broadband providers obtained the legal power to "edit" their programming, just like any operator of an information service, like a Web site.

Indeed, this new designation has placed a serious question mark over whether future efforts to regulate carriage decisions would be considered constitutional, or would instead be treated as violations of the carriers' "free speech" rights as a provider of information.

Over the course of the 1990s, there were a number of instances where carriers-particularly cable, but also telephone companies-were required by law to carry some signals from competitors.

In particular, cable providers were required to carry over-the-air broadcast television; telephone carriers, in FCC rules called "video dialtone," were required to offer video on a common carriage basis; and cable providers that chose to offer broadband were required to make their infrastructure available to competitors on a common carrier model.

In each of these cases, the carriage requirements were subjected to First Amendment scrutiny by courts.

In the case of cable carriage of broadcast television, the carriage requirements were only upheld after six years of litigation.10

In cases involving video common carriage requirements applied to telephone companies and cable broadband, lower courts struck down the carriage requirements as violating the telephone and cable companies' free-speech rights.11

To a large extent, then, the FCC's regulatory definition left the incumbent cable and telephone providers-who control 96 percent of broadband connections to home and small offices-unregulated, and potentially constitutionally immune to access regulation and carriage requirements.



Since 2003 the cable access debate-over whether competitors should get access to the transport networks of incumbent broadband carriers-has been replaced with an effort to seek behavioral regulation in the form of "network neutrality."

This regulatory concept would require broadband providers to treat all packets equally, without forcing them to open their network up to competitors or impose any other of the commitments associated with common carriage.

The concept has the backing of some very powerful actors, including Microsoft, and more recently MCI, which still owns much of the Internet backbone, though not the last mile.

For this reason, if for no other, it remains as of this writing a viable path for institutional reform that would balance the basic structural shift of Internet infrastructure from a common-carriage to a privately controlled model.

Even if successful, the drive to network neutrality would keep the physical infrastructure a technical bottleneck, owned by a small number of firms facing very limited competition, with wide legal latitude for using that control to affect the flow of information over their networks.



Open Wireless Networks


A more basic and structural opportunity to create an open broadband infrastructure is, however, emerging in the wireless domain.

To see how, we must first recognize that opportunities to control the broadband infrastructure in general are not evenly distributed throughout the networked infrastructure.

The long-haul portions of the network have multiple redundant paths with no clear choke points.

The primary choke point over the physical transport of bits across the Internet is in the last mile of all but the most highly connected districts.

That is, the primary bottleneck is the wire or cable connecting the home and small office to the network.

It is here that cable and local telephone incumbents control the market.

It is here that the high costs of digging trenches, pulling fiber, and getting wires through and into walls pose a prohibitive barrier to competition.

And it is here, in the last mile, that unlicensed wireless approaches now offer the greatest promise to deliver a common physical infrastructure of first and last resort, owned by its users, shared as a commons, and offering no entity a bottleneck from which to control who gets to say what to whom.



As discussed in chapter 6, from the end of World War I and through the mid-twenties, improvements in the capacity of expensive transmitters and a series of strategic moves by the owners of the core patents in radio transmission led to the emergence of the industrial model of radio communications that typified the twentieth century.

Radio came to be dominated by a small number of professional, commercial networks, based on high-capital-cost transmitters.

These were supported by a regulatory framework tailored to making the primary model of radio utilization for most Americans passive reception, with simple receivers, of commercial programming delivered with high-powered transmitters.

This industrial model, which assumed large-scale capital investment in the core of the network and small-scale investments at the edges optimized for receiving what is generated at the core, was imprinted on wireless communications systems both at the level of design and at the level of regulation.

When mobile telephony came along, it replicated the same model, using relatively cheap handsets oriented toward an infrastructure-centric deployment of towers.

The regulatory model followed Hoover's initial pattern and perfected it.

A government agency strictly controlled who may place a transmitter, where, with what antenna height, and using what power.

The justification was avoidance of interference.

The presence of strict licensing was used as the basic assumption in the engineering of wireless systems throughout this period.

Since 1959, economic analysis of wireless regulation has criticized this approach, but only on the basis that it inefficiently regulated the legal right to construct a wireless system by using strictly regulated spectrum licenses, instead of creating a market in "spectrum use" rights.12

This critique kept the basic engineering assumptions stable-for radio to be useful, a high-powered transmitter must be received by simple receivers.

Given this engineering assumption, someone had to control the right to emit energy in any range of radio frequencies.

The economists wanted the controller to be a property owner with a flexible, transferable right.

The regulators wanted it to be a licensee subject to regulatory oversight and approval by the FCC.



As chapter 3 explained, by the time that legislatures in the United States and around the world had begun to accede to the wisdom of the economists' critique, it had been rendered obsolete by technology.

In particular, it had been rendered obsolete by the fact that the declining cost of computation and the increasing sophistication of communications protocols among end-user devices in a network made possible new, sharing-based solutions to the problem of how to allow users to communicate without wires.

Instead of having a regulation-determined exclusive right to transmit, which may or may not be subject to market reallocation, it is possible to have a market in smart radio equipment owned by individuals.

These devices have the technical ability to share capacity and cooperate in the creation of wireless carriage capacity.

These radios can, for example, cooperate by relaying each other's messages or temporarily "lending" their antennae to neighbors to help them decipher messages of senders, without anyone having exclusive use of the spectrum.

Just as PCs can cooperate to create a supercomputer in SETI@Home by sharing their computation, and a global-scale, peer-to-peer data-storage and retrieval system by sharing their hard drives, computationally intensive radios can share their capacity to produce a local wireless broadband infrastructure.

Open wireless networks allow users to install their own wireless device-much like the WiFi devices that have become popular.

These devices then search automatically for neighbors with similar capabilities, and self-configure into a high-speed wireless data network.
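
A minimal sketch, not drawn from the book and not a real protocol, of the cooperative relaying such devices perform; node names, positions, and the radio range are invented for the example:

    # Illustrative sketch only, not a real protocol: user-owned radios that can
    # reach only their immediate neighbors relay traffic hop by hop to an
    # Internet point of presence ("pop"). Node names, positions, and the radio
    # range are invented for the example.
    from collections import deque

    RADIO_RANGE = 1.0  # toy range, arbitrary units

    nodes = {"alice": (0.0, 0.0), "bob": (0.8, 0.0),
             "carol": (1.6, 0.0), "pop": (2.4, 0.0)}

    def neighbors(name):
        x, y = nodes[name]
        return [n for n, (nx, ny) in nodes.items()
                if n != name and ((nx - x) ** 2 + (ny - y) ** 2) ** 0.5 <= RADIO_RANGE]

    def route(src, dst):
        # Breadth-first search over radio adjacency: each hop is a neighbor
        # agreeing to carry the next household's bits.
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for n in neighbors(path[-1]):
                if n not in seen:
                    seen.add(n)
                    queue.append(path + [n])
        return None

    print(route("alice", "pop"))  # ['alice', 'bob', 'carol', 'pop']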

Reaching this goal does not, at this point, require significant technological innovation.

The technology is there, though it does require substantial engineering effort to implement.

The economic incentives to develop such devices are fairly straightforward.

Users already require wireless local networks.

They will gain added utility from extending the range of those networks for themselves, coupled with the possibility of sharing with others to provide significant wide-area network capacity whose availability does not depend on any particular provider.

Ultimately, it would be a way for users to circumvent the monopoly last mile and recapture some of the rents they currently pay.

Equipment manufacturers obviously have an incentive to try to cut into the rents captured by the broadband monopoly/oligopoly by offering an equipment-embedded alternative.



My point here is not to consider the comparative efficiency of a market in wireless licenses and a market in end-user equipment designed for sharing channels that no one owns.

It is to highlight the implications of the emergence of a last mile that is owned by no one in particular, and is the product of cooperation among neighbors in the form of, "I'll carry your bits if you carry mine."

At the simplest level, neighbors could access locally relevant information directly, over a wide-area network.

More significant, the fact that users in a locality coproduced their own last-mile infrastructure would allow commercial Internet providers to set up Internet points of presence anywhere within the "cloud" of the locale.

The last mile would be provided not by these competing Internet service providers, but by the cooperative efforts of the residents of local neighborhoods.

Competitors in providing the "middle mile"-the connection from the last mile to the Internet cloud-could emerge, in a way that they cannot if they must first lay their own last mile all the way to each home.

The users, rather than the middle-mile providers, would have paid the capital cost of producing the local transmission system-their own cooperative radios.

The presence of a commons-based, coproduced last mile alongside the proprietary broadband network eliminates the last mile as a bottleneck for control over who speaks, with what degree of ease, and with what types of production values and interactivity.



The development of open wireless networks, owned by their users and focused on sophisticated general-purpose devices at their edges, also offers a counterpoint to the emerging trend among mobile telephony providers to offer a relatively limited and controlled version of the Internet over the phones they sell.

Some wireless providers are simply offering mobile Internet connections throughout their networks, for laptops.

Others, however, are using their networks to allow customers to use their ever-more-sophisticated phones to surf portions of the Web.

These latter services diverge in their styles.

Some tend to be limited, offering only a set of affiliated Web sites rather than genuine connectivity to the Internet itself with a general-purpose device.

Sprint's "News" offerings, for example, connect users to CNNtoGo, ABCNews.com, and the like, but will not enable a user to reach the blogosphere to upload a photo of protesters being manhandled.

So while mobility in principle increases the power of the Web, and text messaging puts e-mail-like capabilities everywhere, the effect of the implementations of the Web on phones is more ambiguous.

It could be more like a Web-enabled reception device than a genuinely active node in a multidirectional network.

Widespread adoption of open wireless networks would give mobile phone manufacturers a new option.

They could build into the mobile telephones the ability to tap into open wireless networks, and use them as general-purpose access points to the Internet.

The extent to which this will be a viable option for the mobile telephone manufacturers depends on how much the incumbent mobile telephone service providers, those who purchased their licenses at high-priced auctions, will resist this move.

Most users buy their phones from their providers, not from general electronic equipment stores.

Phones are often tied to specific providers in ways that users are not able to change for themselves.

In these conditions, it is likely that mobile providers will resist the competition from free open wireless systems for "data minutes" by refusing to sell dual-purpose equipment.

Worse, they may boycott manufacturers who make mobile phones that are also general-purpose Web-surfing devices over open wireless networks.

How that conflict will go, and whether users would be willing to carry a separate small device to enable them to have open Internet access alongside their mobile phone, will determine the extent to which the benefits of open wireless networks will be transposed into the mobile domain.

Normatively, that outcome has significant implications.

From the perspective of the citizen watchdog function, ubiquitous availability of capture, rendering, and communication capabilities is important.

From the perspective of personal autonomy as informed action in context, extending openness to mobile units would provide significant advantages to allow individuals to construct their own information environment on the go, as they are confronting decisions and points of action in their daily lives.



Municipal Broadband Initiatives


One alternative path for the emergence of basic physical information transport infrastructure on a nonmarket model is the drive to establish municipal systems.

These proposed systems would not be commons-based in the sense that they would not be created by the cooperative actions of individuals without formal structure.

They would be public, like highways, sidewalks, parks, and sewage systems.

Whether or not they ultimately perform as commons would depend on how they are regulated.

In the United States, given the First Amendment constraints on government preferring some speech to other speech in public fora, it is likely that municipal systems would be managed as commons.

In this regard, they would have parallel beneficial characteristics to those of open wireless systems.

The basic thesis underlying municipal broadband initiatives is similar to that which has led some municipalities to create municipal utilities or transportation hubs.

Connectivity has strong positive externalities.

It makes a city's residents more available for the information economy and the city itself a more attractive locale for businesses.

Most of the efforts have indeed been phrased in these instrumental terms.

The initial drive has been the creation of municipal fiber-to-the-home networks.

The town of Bristol, Virginia, is an example.

It has a population of slightly more than seventeen thousand.

Median household income is 68 percent of the national median.

These statistics made it an unattractive locus for early broadband rollout by incumbent providers.

However, in 2003, Bristol residents had one of the most advanced residential fiber-to-the-home networks in the country, available for less than forty dollars a month.