Why Information Security is Hard
Full Title of Reference
Why Information Security is Hard -- An Economic Perspective
Current Market Practices Undermine Cybersecurity
According to one common view, information security comes down to technical measures. Given better access control policy models, formal proofs of cryptographic protocols, approved firewalls, better ways of detecting intrusions and malicious code, and better tools for system evaluation and assurance, the problems can be solved. In this note, I put forward a contrary view: information insecurity is at least as much due to perverse incentives. Many of the problems can be explained more clearly and convincingly using the language of microeconomics: network externalities, asymmetric information, moral hazard, adverse selection, liability dumping and the tragedy of the commons.
This paper examines the winner-take-all structure of IT markets and the strategic pursuit of market power that enables poor management decisions and widespread security failures. The focus is on the constant competitive struggles to entrench or undermine monopolies and to segment and control markets, which de facto determine many of the environmental conditions that make the security engineer’s work harder. The paper also suggests that, over time, government intervention in information security standards is likely to be motivated by broader competition issues as well as by the narrower question of whether information security product markets work effectively.
Offensive Operations Are Easier Than Defensive Ones
The paper also examines information warfare from a theoretical perspective to show the weakness of information security, and why offensive operations are more attractive than defensive ones:
Let us suppose a large, complex product such as Windows 2000 has 1,000,000 bugs, each with an MTBF of 1,000,000,000 hours. Suppose that Paddy works for the Irish Republican Army, and his job is to break into the British Army’s computer to get the list of informers in Belfast, while Brian is the army assurance guy whose job is to stop Paddy. Brian must therefore learn of the bugs before Paddy does. Paddy has a day job, so he can only do 1000 hours of testing a year. Brian has the full Windows source code, dozens of PhDs, control of the commercial evaluation labs, an inside track on CERT, an information-sharing deal with other UKUSA member states – and he also runs the government’s scheme to send consultants round to critical industries such as power and telecomms to advise them how to protect their systems. Suppose that Brian benefits from 100,000,000 hours a year worth of testing.
After a year, Paddy finds a bug, while Brian has found 100,000. But the probability that Brian has found Paddy’s bug is only 10%. After ten years he will find it – but by then Paddy will have found nine more, and it’s unlikely that Brian will know of all of them. Worse, Brian’s bug reports will have become such a firehose that Microsoft will have killfiled him. In other words, Paddy has thermodynamics on his side. Even a very moderately resourced attacker can break anything that’s at all large and complex. There is nothing that can be done to stop this, so long as there are enough different security vulnerabilities to do statistics: different testers find different bugs. There are various ways in which one might hope to escape this statistical trap.
• First, although it’s reasonable to expect a 35,000,000-line program like Windows 2000 to have 1,000,000 bugs, perhaps only 1% of them are security-critical. This changes the game slightly, but not much; Paddy now needs to recruit 100 volunteers to help him (or, more realistically, swap information in a grey market with other subversive elements). Even so, the effort required of the attacker is still much less than that needed for effective defense.
• Second, there may be a single fix for a large number of the security-critical bugs. For example, if half of them are stack overflows, then perhaps these can all be removed by a new compiler.
• Third, you can make the security-critical part of the system small enough that the bugs can be found. This was understood, in an empirical way, by the early 1970s. However, the discussion in the above section should have made clear that a minimal TCB (trusted computing base) is unlikely to be available anytime soon, as it would make applications harder to develop and thus impair the platform vendors’ appeal to developers.
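The statistical trap can be made concrete with a small model. Below is a rough sketch, assuming each bug’s discovery time is independent and exponentially distributed with the stated MTBF; the bug count, MTBF, and testing budgets are the paper’s illustrative figures (with roughly 10^8 defender tester-hours per year, which is what reproduces the 100,000-bugs and 10% figures under this model), not measurements of any real product:

```python
import math

N_BUGS = 1_000_000        # assumed number of bugs in the product
MTBF = 1_000_000_000.0    # hours; mean time for testing to trigger one given bug

def expected_bugs_found(hours):
    """Expected number of distinct bugs found after `hours` of testing,
    assuming each bug's discovery time is independent and exponential."""
    return N_BUGS * (1 - math.exp(-hours / MTBF))

def p_found_specific_bug(hours):
    """Probability that one particular bug (e.g. Paddy's) has been found."""
    return 1 - math.exp(-hours / MTBF)

paddy_hours_per_year = 1_000
brian_hours_per_year = 100_000_000

print(expected_bugs_found(paddy_hours_per_year))   # ~1 bug per year
print(expected_bugs_found(brian_hours_per_year))   # ~95,000 bugs per year

# Even with a 100,000x larger testing budget, the defender has only
# about a 10% chance per year of stumbling on the attacker's one bug:
print(p_found_specific_bug(brian_hours_per_year))  # ~0.095
```

The asymmetry falls straight out of the arithmetic: the attacker only needs any one bug (expected cost ~MTBF/N_BUGS = 1000 hours), while the defender needs the attacker’s particular bug (expected cost ~MTBF hours, a million times more).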
Information warfare looks rather like air warfare looked in the 1920s and 1930s: attack is simply easier than defense. Defending a modern information system could also be likened to defending a large, thinly populated territory like the nineteenth-century Wild West: the men in black hats can strike anywhere, while the men in white hats have to defend everywhere. Another relevant analogy is the use of piracy on the high seas as an instrument of state policy by many European powers in the sixteenth and seventeenth centuries: until the great powers agreed to deny pirates safe haven, piracy was just too easy.

The technical bias in favor of attack is made even worse by asymmetric information. Suppose that you head up a U.S. agency with an economic intelligence mission, and a computer scientist working for you has just discovered a beautiful new exploit on Windows 2000. If you report this to Microsoft, you will protect 250 million Americans; if you keep quiet, you will be able to conduct operations against 400 million Europeans and 100 million Japanese. What’s more, you will get credit for operations you conduct successfully against foreigners, while the odds are that any operations they conduct successfully against U.S. targets will remain unknown to your superiors. This further tilts the incentive toward attack rather than defense.

Finally – and this appears to be less widely realized – the balance in favor of attack over defense is still more pronounced in smaller countries: they have proportionally fewer citizens to defend, and more foreigners to attack.