Online conversations today exist primarily in the realm of social media and blogging platforms, most of which are owned by private companies. Such privately owned platforms now play a significant role in the public sphere, as places in which ideas and information are exchanged and debated by people from every corner of the world. Instead of an unregulated, decentralized Internet, we have centralized platforms serving as public spaces: a quasi-public sphere. This quasi-public sphere is subject to both public and private content controls spanning multiple jurisdictions and differing social mores.
But as private companies increasingly take on roles in the public sphere, the rules users must follow become increasingly complex. In some cases this can be positive; for example, a user in a repressive society may turn to a platform hosted by a company abroad, one potentially bound by more liberal, Western laws than those to which he is subject in his home country. Such platforms may also allow a user to take advantage of anonymous or pseudonymous speech, offering him a place to discuss taboo topics.
At the same time, companies set their own standards, which often means navigating tricky terrain: companies want to keep users happy but must also operate within a viable business model, all the while working to keep their services available in as many countries as possible by avoiding government censorship. Online service providers also have an incentive not to host content that might provoke a DDoS attack or raise costly legal issues.1 Negotiating this terrain often means compromising on one or more of these fronts, sometimes at the expense of users. This paper will examine the practices of five platforms—Facebook, YouTube, Flickr, Twitter, and Blogger—with regard to terms of service (TOS) enforcement and account deactivations. It will highlight each company’s user policies, as well as examples of each company’s procedures for policing content.