==Full Title of Reference==
Why Information Security is Hard -- An Economic Perspective


==Full Citation==


Ross Anderson, ''Why Information Security is Hard -- An Economic Perspective'', 17th Annual Computer Security Applications Conference (ACSAC'01), IEEE Computer Society, December 2001. [http://www.acsac.org/2001/papers/110.pdf ''Web''] [http://www.cl.cam.ac.uk/~rja14/Papers/econ.pdf ''AltWeb'']


[http://cyber.law.harvard.edu/cybersecurity/Special:Bibliography?f=wikibiblio.bib&title=Special:Bibliography&view=detailed&action=&keyword=Anderson_R:2001 ''BibTeX'']


==Categorization==


* Issues: [[Economics of Cybersecurity]]; [[Incentives]]; [[Risk Management and Investment]]
* Approaches: [[Regulation/Liability]]


==Key Words==
[[Keyword_Index_and_Glossary_of_Core_Ideas#Black_Hat | Black Hat]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Botnet | Botnet]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#DDoS_Attack | DDoS Attack]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Information_Asymmetries | Information Asymmetries]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Tragedy_of_Commons | Tragedy of Commons]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#White_Hat | White Hat]]
==Synopsis==
===Current Market Practices Undermine Cybersecurity===
According to one common view, information security comes down to technical measures. Given better access control policy models, formal proofs of cryptographic protocols, approved firewalls, better ways of detecting intrusions and malicious code, and better tools for system evaluation and assurance, the problems can be solved. In this note, I put forward a contrary view: information insecurity is at least as much due to perverse incentives. Many of the problems can be explained more clearly and convincingly using the language of microeconomics: network externalities, asymmetric information, moral hazard, adverse selection, liability dumping and the tragedy of the commons.

This paper examines the winner-take-all structure of IT markets and the strategic pursuit of market power that enables poor management decisions and widespread security failures. The focus is on the constant competitive struggle to entrench or undermine monopolies and to segment and control markets, which de facto determines many of the environmental conditions that make the security engineer's work harder. The paper also suggests that, over time, government intervention in information security standards is likely to be motivated by broader competition issues as well as by narrower concerns about the effectiveness of information security product markets.
===Offensive Operations Are Easier Than Defensive Ones===
The paper also examines information warfare from a theoretical perspective to show why information security is structurally weak and why offensive operations are more attractive than defensive ones:

Let us suppose a large, complex product such as Windows 2000 has 1,000,000 bugs, each with an MTBF of 1,000,000,000 hours. Suppose that Paddy works for the Irish Republican Army, and his job is to break into the British Army's computer to get the list of informers in Belfast; while Brian is the army assurance guy whose job is to stop Paddy. So he must learn of the bugs before Paddy does. Paddy has a day job, so he can only do 1000 hours of testing a year. Brian has full Windows source code, dozens of PhDs, control of the commercial evaluation labs, an inside track on CERT, an information sharing deal with other UKUSA member states – and he also runs the government's scheme to send round consultants to critical industries such as power and telecomms to advise them how to protect their systems. Suppose that Brian benefits from 10,000,000 hours a year worth of testing.

After a year, Paddy finds a bug, while Brian has found 100,000. But the probability that Brian has found Paddy's bug is only 10%. After ten years he will find it – but by then Paddy will have found nine more, and it's unlikely that Brian will know of all of them. Worse, Brian's bug reports will have become such a firehose that Microsoft will have killfiled him. In other words, Paddy has thermodynamics on his side. Even a very moderately resourced attacker can break anything that's at all large and complex. There is nothing that can be done to stop this, so long as there are enough different security vulnerabilities to do statistics: different testers find different bugs. There are various ways in which one might hope to escape this statistical trap.


• First, although it's reasonable to expect a 35,000,000 line program like Windows 2000 to have 1,000,000 bugs, perhaps only 1% of them are security-critical. This changes the game slightly, but not much; Paddy now needs to recruit 100 volunteers to help him (or, more realistically, swap information in a grey market with other subversive elements). The effort required of the attacker is still much less than that needed for effective defense.


• Second, there may be a single fix for a large number of the security-critical bugs. For example, if half of them are stack overflows, then perhaps these can all be removed by a new compiler.

• Third, you can make the security-critical part of the system small enough that the bugs can be found. This was understood, in an empirical way, by the early 1970s. However, the discussion in the above section should have made clear that a minimal TCB is unlikely to be available anytime soon, as it would make applications harder to develop and thus impair the platform vendors' appeal to developers.
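
A short Python sketch can make the overlap arithmetic behind this statistical trap concrete. This is an illustrative model only, not anything in the paper: it assumes each bug is found independently at a constant rate, and it reads Brian's roughly 10% first-year per-bug coverage off the figures quoted above (100,000 bugs found out of 1,000,000); the names and rates are ours.

<pre>
import math

# Sketch of the statistical trap: the defender's testing covers each bug
# independently, at about 10% per-bug coverage per year (from the quoted
# figure of 100,000 bugs found out of 1,000,000 in the first year).
BRIAN_RATE = 0.10          # assumed per-bug discovery rate per year of testing
PADDY_FINDS_PER_YEAR = 1   # Paddy's 1000 hours buy him one fresh bug a year

for years in (1, 5, 10):
    # Chance that Brian's testing has turned up any one given bug by now.
    coverage = 1 - math.exp(-BRIAN_RATE * years)
    # Brian only neutralises Paddy if he knows *every* bug Paddy holds.
    p_knows_all = coverage ** (PADDY_FINDS_PER_YEAR * years)
    print(f"year {years:2d}: per-bug coverage {coverage:5.1%}, "
          f"P(Brian knows all of Paddy's bugs) = {p_knows_all:5.1%}")
</pre>

On this reading, after ten years Brian has probably found Paddy's first bug (coverage is about 63%), but the chance that he knows all ten bugs Paddy has accumulated is only about 1%: the defender's enormous resource advantage never translates into covering the attacker's small, private stock of bugs.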


Information warfare looks rather like air warfare looked in the 1920s and 1930s. Attack is simply easier than defense. Defending a modern information system could also be likened to defending a large, thinly-populated territory like the nineteenth-century Wild West: the men in black hats can strike anywhere, while the men in white hats have to defend everywhere. Another possibly relevant analogy is the use of piracy on the high seas as an instrument of state policy by many European powers in the sixteenth and seventeenth centuries. Until the great powers agreed to deny pirates safe haven, piracy was just too easy.

The technical bias in favor of attack is made even worse by asymmetric information. Suppose that you head up a U.S. agency with an economic intelligence mission, and a computer scientist working for you has just discovered a beautiful new exploit on Windows 2000. If you report this to Microsoft, you will protect 250 million Americans; if you keep quiet, you will be able to conduct operations against 400 million Europeans and 100 million Japanese. What's more, you will get credit for operations you conduct successfully against foreigners, while the odds are that any operations that they conduct successfully against U.S. targets will remain unknown to your superiors. This further emphasizes the motive for attack rather than defense. Finally – and this appears to be less widely realized – the balance in favor of attack rather than defense is still more pronounced in smaller countries. They have proportionally fewer citizens to defend, and more foreigners to attack.
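
The disclosure incentive in this passage can also be put as a toy payoff comparison. This is a sketch under stated assumptions: the population figures are the paper's, but the "visibility" weights and the scoring itself are illustrative inventions, not anything the paper quantifies.

<pre>
# Toy model of the agency head's choice, using the populations quoted above.
# The structural assumption, taken from the passage, is that operations you
# run earn fully visible credit, while losses inflicted on U.S. targets
# mostly go unseen by your superiors. The weights below are illustrative.
CITIZENS_PROTECTED = 250e6        # Americans protected if the bug is reported
FOREIGN_TARGETS = 400e6 + 100e6   # Europeans and Japanese open to operations

CREDIT_VISIBILITY = 1.0   # successful foreign operations are fully visible
LOSS_VISIBILITY = 0.1     # assumed: most losses to U.S. targets go unnoticed

visible_payoff_report = CITIZENS_PROTECTED * LOSS_VISIBILITY
visible_payoff_hoard = FOREIGN_TARGETS * CREDIT_VISIBILITY

print(f"report the bug: {visible_payoff_report / 1e6:5.0f}M visible payoff")
print(f"keep it quiet:  {visible_payoff_hoard / 1e6:5.0f}M visible payoff")
</pre>

Under any weights of this shape, hoarding dominates; and a smaller country, with proportionally fewer citizens to defend and more foreigners to attack, tilts the comparison even further toward attack.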


==Additional Notes and Highlights==
Expertise Required: None


[http://www.cl.cam.ac.uk/~rja14/ Ross Anderson's home page]
