Revision as of 18:27, 21 June 2010
The Economics of Information Security
Ross Anderson and Tyler Moore, The Economics of Information Security, 314 Sci. 610 (2006).
The economics of information security has recently become a thriving and fast-moving discipline. As distributed systems are assembled from machines belonging to principals with divergent interests, we find that incentives are becoming as important as technical design in achieving dependability. The new field provides valuable insights not just into "security" topics (such as bugs, spam, phishing, and law enforcement strategy) but into more general areas such as the design of peer-to-peer systems, the optimal balance of effort by programmers and testers, why privacy gets eroded, and the politics of digital rights management.
Over the past six years, people have realized that security failure is caused at least as often by bad incentives as by bad design. Systems are particularly prone to failure when the person guarding them is not the person who suffers when they fail. The growing use of security mechanisms to enable one system user to exert power over another user, rather than simply to exclude people who should not be users at all, introduces many strategic and policy issues. The tools and concepts of game theory and microeconomic theory are becoming just as important as the mathematics of cryptography to the security engineer.
The paper explores where security practice and market incentives align, and where they pull apart. An example:
- Platform vendors commonly ignore security at first, while they are building their market position; later, once they have captured a lucrative market, they add excessive security in order to lock their customers in tightly.
The paper maps common patterns of system security and reliability failure onto models from game theory and microeconomics, and uses those models to explain why security fails. Consider this scenario:
- Consider a medieval city. If the main threat is a siege, and each family is responsible for maintaining and guarding one stretch of the wall, then the city’s security will depend on the efforts of the laziest and most cowardly family. If, however, disputes are settled by single combat between champions, then its security depends on the strength and courage of its most valiant knight. But if wars are a matter of attrition, then it is the sum of all the citizens’ efforts that matters. System reliability is no different; it can depend on the sum of individual efforts, the minimum effort anyone makes, or the maximum effort anyone makes. Program correctness can depend on minimum effort (the most careless programmer introducing a vulnerability), whereas software validation and vulnerability testing might depend on the sum of everyone’s efforts.
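The three ways reliability can aggregate individual effort can be sketched directly. This is an illustration of mine, not from the paper, and the effort values are hypothetical:

```python
# Hypothetical effort levels for three defenders (arbitrary units).
efforts = [9, 7, 2]

# War of attrition: defence is the sum of everyone's efforts.
total_effort = sum(efforts)   # 18

# City wall under siege: the laziest family sets the defence level.
weakest_link = min(efforts)   # 2

# Single combat: the most valiant knight decides the outcome.
best_shot = max(efforts)      # 9
```

Which aggregation rule applies is a property of the system rather than a choice of the defenders, and it determines whose incentives matter most.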
What are the consequences?
- In the minimum-effort case, the agent with the lowest benefit-cost ratio dominates. As more agents are added, systems become increasingly reliable in the total-effort case but increasingly unreliable in the weakest-link case. What are the implications? One is that software companies should hire fewer (but more competent) programmers, since program correctness is a weakest-link problem, and more software testers, since testing benefits from the sum of everyone's efforts.
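The diverging effect of adding agents can be seen in a small simulation. This is my own illustration, assuming each agent's effort is drawn uniformly at random from (0, 1), an assumption not made in the paper: the expected total effort grows with every agent, while the expected weakest link falls, because each newcomer is another chance of a careless agent lowering the minimum.

```python
import random

random.seed(0)  # fixed seed for reproducibility (arbitrary choice)

def simulate(n, trials=10_000):
    """Average total-effort and weakest-link defence levels for n agents."""
    totals, minima = 0.0, 0.0
    for _ in range(trials):
        efforts = [random.random() for _ in range(n)]
        totals += sum(efforts)
        minima += min(efforts)
    return totals / trials, minima / trials

for n in (2, 5, 20):
    total, weakest = simulate(n)
    print(f"n={n:2d}  total-effort~{total:.2f}  weakest-link~{weakest:.3f}")
```

With uniform efforts the total averages about n/2 while the minimum averages about 1/(n+1), so reliability improves with scale in the total-effort case and degrades in the weakest-link case.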
Additional Notes and Highlights
* Outline key points of interest