PRIVACY IN CYBERSPACE


Open Code

End-to-End Design

The end-to-end design of the Internet is key to understanding both the privacy-invasive and privacy-enhancing possibilities of the medium. As the universe of Internet stakeholders expands, this core architectural feature is changing rapidly. The self-help opportunities available to individuals depend significantly on these architectural decisions. An introduction to the history and current issues in the end-to-end debate is a necessary foundation for the discussion of free software and self-help techniques. Peer-to-peer concepts are also intrinsically linked to end-to-end questions, and changes in the end-to-end architecture are likely to significantly impact peer-to-peer networks and individuals’ ability to disseminate information anonymously.

Many attribute the explosive and democratic growth of the Internet to the end-to-end design principle adopted by early Internet architects. According to the end-to-end principle, intelligence should exist as close as possible to the edge of the network, while the network itself should act as a simple common carrier. Complex processing operations, such as serving and rendering web pages or sending and decoding email, occur on the end user’s computer, while the network itself blindly routes packets toward their destination, with no awareness of the contents or importance of those packets. There is no one central server through which information must pass; instead, each packet contains the addressing information necessary for it to reach its destination. This design is resilient to physical attack (one of the military design goals for the Internet), since there is no single target whose destruction could disable the entire network.
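
A minimal sketch in Python, using placeholder addresses, makes the division of labor concrete: the sending endpoint supplies all of the application logic and stamps a destination on each datagram, while the routers in between forward the packet on that address alone.

    import socket

    # End-to-end sketch (hypothetical host and port): all application logic
    # lives at the two endpoints; the network only forwards addressed packets.
    DEST = ("198.51.100.7", 9999)   # placeholder address from the documentation range

    # Sender: the endpoint encodes the message and stamps the destination on
    # the datagram itself. Routers along the path forward it based on that
    # address alone, with no knowledge of what the payload means.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto("hello from the edge".encode("utf-8"), DEST)
    sender.close()

    # Receiver (run on the destination host): the other endpoint supplies the
    # intelligence needed to interpret the bytes.
    # receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # receiver.bind(("", 9999))
    # data, addr = receiver.recvfrom(4096)
    # print(addr, data.decode("utf-8"))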

By pushing the intelligence to the edges of the network, the Internet architects made it difficult for centralized control structures to develop. As packets are passed from node to node on the Internet, the network does not provide a means for charging costs to the original sender or final recipient, nor does it discriminate against certain types of content over others. Experimentation at the edges was thus encouraged. Anyone was free to start a new network project, without having to get prior permission from any central authority. The result is the wide spectrum of Internet uses we enjoy today.

The fact that anyone could experiment and create new applications also ignited the free software/open source movement. The most successful free software projects, including GNU/Linux, Apache, and Mozilla (discussed further in the next section), all take advantage of the ability to join the network without special permission.

This architecture enables anonymous communication and dissemination of ideas. Anyone can send an email from any entry point into the network, relying on the network to route the email to its destination, even if the intermediate points have no information about the sender or recipient (while a “return address” is customary, the email will get through fine even if the “return address” is inaccurate). No special credentials are needed to plug in. Although certain geographic information might be revealed based on one’s IP address, the network rarely requires users to disclose identity in order to communicate. Furthermore, even geographic information can be obscured by routing information through proxy servers. Anyone can post to one of the thousands of newsgroups that make up Usenet, and their message will rapidly propagate across the entire network. It remains quite difficult to identify the original poster of a message on Usenet, and virtually impossible to remove all traces of information once it has been posted.
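
A short sketch suggests why an inaccurate return address is no obstacle to delivery: SMTP, the protocol used to hand email to a mail server, takes the sender line on faith and routes on the recipient's address alone. The server name and addresses below are hypothetical stand-ins.

    import smtplib
    from email.message import EmailMessage

    # Sketch only, assuming a mail server is reachable at the hypothetical
    # host "mail.example.com". The "From" line is simply whatever the sending
    # endpoint claims; delivery depends only on the recipient's address.
    msg = EmailMessage()
    msg["From"] = "nobody@example.org"       # inaccurate return address
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "delivered regardless of the return address"
    msg.set_content("The network routes this on the recipient's address alone.")

    with smtplib.SMTP("mail.example.com", 25) as server:
        server.send_message(msg)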

This architecture assumes a fair amount of trust and cooperation among users of the network, and in the absence of that trust it can enable certain invasions of privacy. Most communications are sent as “clear text,” which means that any intermediate point along the way can view their contents. It is relatively easy to intercept and modify packets, or to interpose between two parties and impersonate each to the other (a “man-in-the-middle” attack). Packets pass through many untrusted nodes on the way to their destination. It is precisely this sort of openness that is exploited by surveillance systems like Carnivore.
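
A toy relay, again with placeholder addresses, sketches what any untrusted intermediate node can do with clear-text traffic: because nothing is encrypted, the node in the middle can read, and even rewrite, every byte it forwards.

    import socket
    import threading

    # A stand-in for an untrusted intermediate node: a tiny TCP relay that
    # forwards traffic between a client and a hypothetical upstream server,
    # printing everything that passes through in the clear.
    LISTEN = ("127.0.0.1", 8080)          # where the client connects
    UPSTREAM = ("example.com", 80)        # the real destination (placeholder)

    def pipe(src, dst, label):
        # Copy bytes in one direction, printing what the "middle" sees.
        while True:
            data = src.recv(4096)
            if not data:
                break
            print(label, data[:200])      # the intermediary reads the clear text
            dst.sendall(data)

    relay = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    relay.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    relay.bind(LISTEN)
    relay.listen(1)
    client, _ = relay.accept()

    upstream = socket.create_connection(UPSTREAM)

    t = threading.Thread(target=pipe, args=(client, upstream, "client -> server:"))
    t.start()
    pipe(upstream, client, "server -> client:")
    t.join()

Pointing a browser or a plain HTTP client at 127.0.0.1:8080 would show the full request and response in the relay's output; encrypting the traffic end to end is what denies the intermediary this view.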

Next: Changing Architecture...

