The Impact of Incentives on Notice and Take-down


==Categorization==
 
* Threats and Actors: [[Criminals and Criminal Organizations]]; [[Financial Institutions and Networks]]
* Issues: [[Cybercrime]]; [[Economics of Cybersecurity]]; [[Incentives]]; [[Information Sharing/Disclosure]]; [[Market Failure]]; [[Metrics]]
* Approaches: [[International Cooperation]]; [[Private Efforts/Organizations]]


==Key Words==
 
[[Keyword_Index_and_Glossary_of_Core_Ideas#Blacklist | Blacklist]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Credit_Card_Fraud | Credit Card Fraud]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Cyber_Crime | Cyber Crime]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Cyber_Security_as_an_Externality | Cyber Security as an Externality]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Disclosure_Policy | Disclosure Policy]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Identity_Fraud/Theft | Identity Fraud/Theft]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Internet_Service_Providers | Internet Service Providers]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Notice_and_Take-down | Notice and Take-down]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Phishing | Phishing]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#SPAM | SPAM]]


==Synopsis==
''From the paper's Introduction:''


Almost all schemes for the removal of undesirable content from the Internet are described as being a ‘notice and take-down’ (NTD) regime, although their actual details vary considerably. In this paper we show that the effectiveness of removal depends rather more on the incentives for this to happen, than on narrow issues such as the legal basis or the type of material involved.

It is impractical for Internet Service Providers (ISPs) to police the entirety of the content that their users place upon the Internet, so it is generally seen as unjust for ISPs to bear strict liability, viz: that they become legally liable for the mere presence of unlawful content. However, the ISPs are in an unrivalled position to suppress content held on their systems by removing access to resources – web space, connectivity, file access permissions, etc. – from their customers. Hence many content removal regimes make ISPs liable for content once they have been informed of its existence, viz: once they have been put on ‘notice’. If they fail to ‘take-down’ the material then sanctions against them may then proceed.
The ISP is often the only entity that can identify customers in the real world, and so they must necessarily become involved before the true originator can be held accountable for the presence of unlawful content. This gives rise to various complexities because the ISP may be bound by data protection legislation, or by common law notions of confidentiality, from disclosing the information haphazardly. Equally, ISPs are reluctant to be drawn into acting as the plaintiffs’ agent against their own customers – and at the very least demand recompense for their efforts, along with immunities when errors are made. Nevertheless, some benefits do accrue from including the ISP in the process. They may be more familiar with the process than their customers, allowing them to reject flawed requests and assist in dealing with vexatious claims. The ISP’s experience, along with their assessment of the standing of their customer, will enable them to assess the merits of the case, and perhaps advise their customer that the claim should be ignored. An ISP does not have any incentive to annoy a major commercial customer by suspending their website merely because of a dubious claim of copyright in a photograph it displays.
 
In fact, when we examine NTD regimes, we find that incentives are at the heart of the effectiveness of every process, outweighing the nature of the material or the legal framework for removal. Where complainants are highly motivated, and hence persistent, content is promptly removed. Where the incentives are weak, or third parties become involved with far less of an incentive to act, then removal is slow or almost non-existent.
 
In this paper we examine a number of notice and take-down regimes, presenting data on the speed of removal. We start by considering defamation in Section 2, which has an implicit NTD regime. In Section 3 we look at copyright which has, particularly in the United States, a very formalised NTD mechanism. In Section 4 we consider the removal of child sexual abuse images and show how slow this removal can be in practice. In Section 5 we contrast this with the removal of many different categories of ‘phishing’ websites, where we are able to present extensive data that illustrates many practical difficulties that arise depending upon how the criminals have chosen to create their fake websites. In Section 6 we consider a range of other criminal websites and show that their removal is extremely slow in comparison with phishing websites, and we offer some insights into why this should be so. In Section 7 we consider the issues surrounding the removal of malware from websites and from end-user machines, and discuss the incentives, such as they are, for ISPs to act to force their users to clean up their machines – and in particular to stop them from inadvertently sending out email ‘spam’. Finally, in Section 8 we draw the various threads together to compare and contrast the various NTD regimes.
 
===Ordinary Phishing Sites===
We examined the attacks on two very well-known brands by phishing websites that were hosted on compromised machines in January 2008 and found 193 websites with an average lifetime of 49.2 hours and a 0 hour median, which is very similar to the lifetimes we measured for free web-hosting sites. The similarities between compromised machines and free web hosts continue once we break down the lifetimes according to whether the brand owner was aware of the website. The 105 phishing websites hosted on compromised machines known to the company are removed within 3.5 hours on average (0 hour median). The 88 websites missed by the company remain for 103 hours on average, with a median of 10 hours. Thus, for ordinary phishing websites, the main differentiator appears to be whether the organisation responsible for the take-down is aware of the site’s existence. Free web-hosting companies and the administrators of compromised machines both appear to comply promptly with the take-down requests they receive. However, the website administrators do need to be notified of the problem – phishing websites that the brand owner did not know about, and so did not issue any notices for, remain up for considerably longer.
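The pairing of a sizeable mean with a zero median recurs throughout these measurements. A minimal Python sketch, using made-up lifetimes rather than the paper's data, shows how a long-lived minority of missed sites produces exactly this pattern:

```python
from statistics import mean, median

# Hypothetical lifetimes in whole hours for 13 monitored phishing sites:
# most are removed within the first hour (recorded as 0), while a few
# missed sites linger for days and drag the mean far above the median.
lifetimes = [0, 0, 0, 0, 0, 0, 0, 2, 5, 10, 48, 120, 400]

print(mean(lifetimes))    # 45 -- dominated by the long-lived outliers
print(median(lifetimes))  # 0  -- over half were gone within the hour
```

This is why the paper reports both statistics: the median captures how quickly the typical notified site comes down, while the mean is driven by the sites nobody noticed.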
 
===Rock-phish and Fast-flux Attacks===
The ‘rock-phish’ gang operate in a completely different manner from the ordinary phishing attacks just described. This group of criminals perpetrates phishing attacks on a massive scale (McMillan 2006). The gang purchases a number of domains with meaningless names such as lof80.info. Their spoof emails contain a long URL of the form http://www.bank.com.id123.lof80.info/vr. Although the URL contains a unique identifier (to evade spam filters), all variants are resolved to a single IP address using ‘wildcard DNS’. The IP address is of a machine that acts as a proxy, relaying web traffic to and from a hidden ‘mothership’ machine. If the proxy is removed, the DNS is adjusted to use another proxy, and so the only practical way to remove the website is to get the appropriate registrar to remove the domain name from the DNS.
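To make the URL structure concrete, here is a short illustrative Python sketch (the URLs are hypothetical, patterned on the lof80.info example) showing why the endlessly varying spam URLs all collapse onto a single registered domain, which is why registrar-level suspension is the practical remedy:

```python
from urllib.parse import urlparse

# Hypothetical rock-phish-style URLs: each embeds a unique identifier to
# dodge spam filters, but wildcard DNS means every variant resolves via
# the same registered domain.
urls = [
    "http://www.bank.com.id123.lof80.info/vr",
    "http://www.bank.com.id124.lof80.info/vr",
    "http://www.otherbank.com.id9.lof80.info/vr",
]

def registered_domain(url: str) -> str:
    # Naive heuristic: take the last two labels of the hostname. Real
    # take-down tooling would consult the Public Suffix List instead.
    host = urlparse(url).hostname
    return ".".join(host.split(".")[-2:])

print({registered_domain(u) for u in urls})  # {'lof80.info'}
```

Blocking individual URLs or the proxy's IP address leaves the other variants untouched; removing lof80.info from the DNS kills them all at once.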
 
A related form of attack is dubbed ‘fast-flux’. The mechanism is similar to the one employed by the rock-phish gang, except that the domain name is resolved to many IP addresses in parallel (typically 5 or 10) and the IP addresses used are rapidly changed (sometimes every 20 minutes). For these attacks the only practical approach is to have the domain name suspended.
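The rotation can be sketched in a few lines of Python (the pool size, rotation interval and addresses are all invented for illustration); the point is that any snapshot of the domain's DNS answer is stale within minutes:

```python
# Sketch of the fast-flux idea: the phishing domain resolves to a small,
# constantly rotating subset of a large pool of compromised machines, so
# IP-level blocking never keeps up and only suspending the domain works.

proxy_pool = [f"203.0.113.{i}" for i in range(50)]  # hypothetical proxy IPs

def dns_answer(pool, minute, n=5, rotate_every=20):
    """IPs advertised for the domain at a given minute since first seen."""
    start = (minute // rotate_every) * n % len(pool)
    return [pool[(start + i) % len(pool)] for i in range(n)]

print(dns_answer(proxy_pool, 0))   # the first 5 proxies in the pool
print(dns_answer(proxy_pool, 20))  # 20 minutes on: a different 5
print(dns_answer(proxy_pool, 0) == dns_answer(proxy_pool, 20))  # False
```

A real fast-flux network picks proxies and TTLs far less predictably than this deterministic rotation, but the defender's problem is the same: the only stable element is the domain name itself.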
 
The lifetime of the 821 rock-phish domains we monitored in January 2008 reflects the added difficulty faced during take-down procedures. The domains lasted 70.3 hours on average (median 33 hours), despite the additional attention rock-phish domains attract by impersonating many banks simultaneously. The lifetimes for the 314 fast-flux domains were similar, lasting 96.1 hours on average with a 25.5 hour median.
 
===Take-Down Effectiveness===
It is apparent that the presence of incentives to remove offending material has the greatest impact on website lifetimes. By far, phishing websites are removed fastest. Banks are highly motivated to remove any impersonating website because their continued appearance increases losses due to fraud and erodes customers’ trust in online banking. Solid legal frameworks do not seem to matter as much. Courts almost never get involved in issuing orders to remove phishing websites. By contrast, other clearly illegal activities such as online pharmacies do not appear to be removed at all.
 
However, the banks’ incentives are not perfectly aligned. Most banks remain narrowly focused on actively removing only those websites that directly attack their brand. Another key component of the phishing supply chain, mule recruitment websites, is completely ignored and left to volunteers. Removing mule-recruitment websites is a collective-action problem: many banks are harmed by these websites, yet none takes action because they cannot be sure whether removing them will help themselves or their competitors.


===Lifetimes of Child Sexual Abuse Web Sites===
The long lifetimes of websites hosting child sexual abuse images are particularly striking. In spite of a robust legal framework and a global consensus on the content’s repulsiveness, these websites are removed much more slowly than any other type of content being actively taken down for which we have gathered data. An average lifetime of 719 hours is over 150 times slower than phishing websites hosted on free web-hosting and compromised machines.


===Conclusion===
In this paper we have examined a range of notice and take-down regimes. We have developed insights by comparing differing outcomes where underlying commonalities exist. The banks have adopted a narrow focus on phishing while overlooking mule recruitment. The evasive techniques of fast-flux networks appear unimportant, given that seemingly permanent online pharmacies and short-lived phishing websites use the same scheme.


The Internet is multi-national. Almost everyone who wants content removed issues requests to ISPs or website owners throughout the world, believing – not always correctly – that the material must be just as illegal ‘there’ as ‘here’. Unexpectedly, in the one case where the material is undoubtedly illegal everywhere, the removal of child sexual abuse image websites is dealt with in a rather different manner. The responsibility for removing material has been divided up on a national basis, and this appears to lead directly to very long website lifetimes.


In sum, the evidence we have presented highlights the limited impact of legal frameworks, content types and attack methods on take-down speed. Instead, take-down effectiveness depends on how the responsibility for issuing requests is distributed, and the incentives on the organisations involved to devote appropriate resources to pursue the removal of unwanted content from the Internet.


==Additional Notes and Highlights==
Expertise Required:  Technical - Low/Moderate; Law - Low

==Full Citation==
Tyler Moore and Richard Clayton, "The Impact of Incentives on Notice and Take-down," in ''Managing Information Risk and the Economics of Security'' (M. Eric Johnson ed., 2009).
