The Impact of Incentives on Notice and Take-down

From Cybersecurity Wiki
*Issues: [[Economics of Cybersecurity]]; [[Incentives]]; [[Information Sharing/Disclosure]]

Revision as of 16:40, 24 June 2010

Full Title of Reference

The Impact of Incentives on Notice and Take-down

Full Citation

Tyler Moore and Richard Clayton, "The Impact of Incentives on Notice and Take-down," in Managing Information Risk and the Economics of Security (M. Eric Johnson ed., 2009). Web



Key Words

Blacklist, Credit Card Fraud, Cyber Crime, Disclosure Policy, Identity Fraud/Theft, Notice and Take-down, Phishing, Spam


From the paper's Introduction:

Almost all schemes for the removal of undesirable content from the Internet are described as being 'notice and take-down' (NTD) regimes, although their actual details vary considerably. In this paper we show that the effectiveness of removal depends rather more on the incentives for this to happen than on narrow issues such as the legal basis or the type of material involved.

It is impractical for Internet Service Providers (ISPs) to police the entirety of the content that their users place upon the Internet, so it is generally seen as unjust for ISPs to bear strict liability, viz: that they become legally liable for the mere presence of unlawful content. However, the ISPs are in an unrivalled position to suppress content held on their systems by removing access to resources – web-space, connectivity, file access permissions, etc. – from their customers. Hence many content removal regimes make ISPs liable for content once they have been informed of its existence, viz: once they have been put on 'notice'. If they fail to 'take-down' the material then sanctions against them may proceed.

The ISP is often the only entity that can identify customers in the real world, and so they must necessarily become involved before the true originator can be held accountable for the presence of unlawful content. This gives rise to various complexities because the ISP may be bound by data protection legislation, or by common law notions of confidentiality, from disclosing the information haphazardly. Equally, ISPs are reluctant to be drawn into acting as the plaintiffs' agent against their own customers – and at the very least demand recompense for their efforts, along with immunities when errors are made. Nevertheless, some benefits do accrue from including the ISP in the process. They may be more familiar with the process than their customers, allowing them to reject flawed requests and assist in dealing with vexatious claims. The ISP's experience, along with their assessment of the standing of their customer, will enable them to assess the merits of the case, and perhaps advise their customer that the claim should be ignored. An ISP does not have any incentive to annoy a major commercial customer by suspending their website merely because of a dubious claim of copyright in a photograph it displays.

In fact, when we examine NTD regimes, we find that incentives are at the heart of the effectiveness of every process, outweighing the nature of the material or the legal framework for removal. Where complainants are highly motivated, and hence persistent, content is promptly removed. Where the incentives are weak, or third parties become involved with far less of an incentive to act, then removal is slow or almost non-existent.

In this paper we examine a number of notice and take-down regimes, presenting data on the speed of removal. We start by considering defamation in Section 2, which has an implicit NTD regime. In Section 3 we look at copyright which has, particularly in the United States, a very formalised NTD mechanism. In Section 4 we consider the removal of child sexual abuse images and show how slow this removal can be in practice. In Section 5 we contrast this with the removal of many different categories of 'phishing' websites, where we are able to present extensive data that illustrates many practical difficulties that arise depending upon how the criminals have chosen to create their fake websites. In Section 6 we consider a range of other criminal websites and show that their removal is extremely slow in comparison with phishing websites, and we offer some insights into why this should be so. In Section 7 we consider the issues surrounding the removal of malware from websites and from end-user machines, and discuss the incentives, such as they are, for ISPs to act to force their users to clean up their machines – and in particular to stop them from inadvertently sending out email 'spam'. Finally, in Section 8 we draw the various threads together to compare and contrast the various NTD regimes.

Additional Notes and Highlights