National Cyber Leap Year Summit 2009, Co-Chairs' Report
Full Title of Reference
National Cyber Leap Year Summit 2009, Co-Chairs' Report
Networking and Information Technology Research and Development Program, National Cyber Leap Year Summit 2009: Co-Chairs' Report (2009). Web.
- Resource by Type: US Government Reports and Documents
- Issues: Identity Management; Information Sharing/Disclosure; Insurance
- Approaches: Government Organizations; Private Efforts/Organizations; Technology
The Nation’s economic progress and social well-being now depend as heavily on cyberspace assets as on interest rates, roads, and power plants, yet our digital infrastructure and its foundations are still far from providing the guarantees that can justify our reliance on them. The inadequacy of today’s cyberspace mechanisms to support the core values underpinning our way of life has become a national problem. To respond to the President’s call to secure our nation’s cyber infrastructure, the White House Office of Science and Technology Policy (OSTP) and the agencies of the Federal Networking and Information Technology Research and Development (NITRD) Program have developed the Leap-Ahead Initiative. (NITRD agencies include AHRQ, DARPA, DOE, EPA, NARA, NASA, NIH, NIST, NOAA, NSA, NSF, OSD, and the DOD research labs.) As part of this initiative, the Government in October 2008 launched a National Cyber Leap Year to address the vulnerabilities of the digital infrastructure. That effort has proceeded on the premise that, while some progress on cybersecurity will be made by finding better solutions for today’s problems, some of those problems may prove to be too difficult. The Leap Year has pursued a complementary approach: a search for ways to avoid having to solve the intractable problems. We call this approach changing the game, as in “if you are playing a game you cannot win, change the game!” During the Leap Year, via a Request for Information (RFI) process coordinated by the NITRD Program, the technical community had an opportunity to submit ideas for changing the cyber game, for example, by:
- Morphing the board: changing the defensive terrain (permanently or adaptively) to make it harder for the attacker to maneuver and achieve his goals, or
- Changing the rules: laying the foundation for cyber civilization by changing norms to favor our society’s values, or
- Raising the stakes: making the game less advantageous to the attacker by raising risk, lowering value, etc.
The 238 RFI responses were synthesized by the NITRD Senior Steering Group for Cybersecurity R&D, and five new games were identified. These new games were chosen both because each change shifts our focus to new problems, and because there appear to be technologies and/or business cases on the horizon that would promote the change:
Basing trust decisions on verified assertions (Digital Provenance)
The Cyberspace Policy Review calls for building a cybersecurity-based identity management vision and strategy that addresses privacy and civil liberties interests, leveraging privacy-enhancing technologies for the Nation.
Digital Provenance (DP) is a set of technologies, incentives, and policies which, in combination, provide an appropriate level of attribution to users of -- and/or resources accessible via -- the Internet, allowing for trust decisions to be based on verified identity assertions.
Identity is a unique reference to a distinct (possibly composite) entity. It is a recursive concept based on the context; any attribute of an entity may be considered an identity. Provenance of an object is the set of identities, labels, and events associated with the object.
We envision an end state in which DP enables identification, authentication, and reputation for entities and objects with appropriate granularity at many layers of the protocol hierarchy. For example, networked entities will be capable of authenticating the origin(s) and integrity of communications traffic. Also, DP will enable users to identify and authenticate the origins of data objects. This mitigates spoofing, phishing, denial of service (DoS), and impersonation attacks.
Introduction of DP can result in loss of anonymity, i.e., the increase in trust would be undercut by the decrease in privacy. Government surveillance of individuals, whether for law enforcement or intelligence purposes, would become easier and more comprehensive. Industry would be able to better market products because of the tremendous insight into the most personal aspects of individuals' lives. At the same time, individuals and their actions would be significantly more exposed than they are today.
To protect the individual from the transparency resulting from DP, strong information governance (privacy) constraints must be established. Further, it may be advisable to place retention periods on information associated with individual provenance, in order to limit individual exposure. Such constraints and restrictions may be less important if DP is limited to business-to-business or intra-/inter-government transactions. Also, different levels of DP granularity (e.g., organization, job function, age) can be used to mitigate exposure of the individual and obtain certain measures of privacy.
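The report defines the provenance of an object as the set of identities, labels, and events associated with it. As a minimal illustration (not drawn from the report itself), one common way to make such a record tamper-evident is a hash chain, in which each provenance event commits to the hash of the previous entry; all identities and field names below are hypothetical:

```python
import hashlib
import json


def add_event(chain, identity, label, event):
    """Append a provenance event linked to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"identity": identity, "label": label,
              "event": event, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append({**record, "hash": digest})
    return chain


def verify(chain):
    """Recompute every link; any edit to past entries breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        record = {k: entry[k] for k in ("identity", "label", "event", "prev")}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True


chain = []
add_event(chain, "alice@example.gov", "author", "created document")
add_event(chain, "bob@example.gov", "editor", "revised section 2")
assert verify(chain)
chain[0]["event"] = "forged"   # tampering with history is detected
assert not verify(chain)
```

A full DP scheme would of course bind each entry to a verified identity assertion (e.g., a digital signature) rather than a bare string; the chain only shows the event-linking idea.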
Attacks only work once if at all (Moving-target Defense)
In the current game, our systems are built to operate in a relatively static configuration. For example, addresses, names, software stacks, networks, and various configuration parameters remain relatively static over relatively long periods of time. This static approach is a legacy of information technology system design for simplicity and elegance in a time when malicious exploitation of system vulnerabilities was not a concern.
In order to be effective, adversaries must know a particular vulnerability of a system. The longer the vulnerability of a system exists, the more likely it is to be discovered and then exploited. Many system vulnerabilities are published by researchers and software vendors in order for system owners to patch those vulnerabilities. A system that remains unpatched is vulnerable to exploitation. Vulnerabilities that are not publicly disclosed are called zero-day vulnerabilities, and are known to a limited set of people. Zero-day vulnerabilities present a large risk to system owners because without knowledge of the vulnerability, they have no way to patch it.
It is now clear that static systems present a substantial advantage to attackers. Attackers can observe the operation of key IT systems over long periods of time and plan attacks at their leisure, having mapped out an inventory of assets, vulnerabilities, and exploits. Additionally, attackers can anticipate likely responses and deploy attacks that escalate in sophistication as defenders deploy better defenses. Attackers can afford to invest significant resources in developing attacks since they can often be used repeatedly from one system to another.
Current approaches to addressing this problem are to remove bugs from software at the source, patch software as rapidly and uniformly as possible, and identify malicious attacks against software. The first approach, perfect software development, does not scale to complete protection because the complexity of software precludes perfection. The second approach, patch distribution, is now standard practice in large enterprises but has proven difficult to keep ahead of the threat; it also provides no protection against zero-day attacks. The last approach is predicated on having a signature or definition of the malicious attack in order to find it and potentially block it. However, the speed and agility of adversaries, as well as simple polymorphic mechanisms that continuously change the signatures of attacks, render signature-based approaches largely ineffective.

The magnitude of this problem suggests that we need a radically new approach, or “game change,” for IT system defense. To visualize the elements of the new game, observe that for attackers to exploit a system today, they must learn a vulnerability and hope that it is present long enough to exploit. For defenders to defeat attacks today, they must develop a signature of malware or attacks and hope it is static long enough to block that attack. Malware writers developed mechanisms to rapidly change malware in order to defeat detection mechanisms. We, as defenders, should learn from this approach and build systems that rapidly change, never allowing the exploitation of a particular vulnerability to impair a system's ability to perform its mission or function, and never allowing a vulnerability exploited once to be exploited again. If done correctly, this “moving target” defense can present a formidable obstacle to attackers, since they depend on knowing a system's vulnerabilities a priori.
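The weakness of exact-match signatures described above can be seen in a toy example (not from the report; the "payload" is a benign placeholder string): a single junk byte appended by a trivial polymorphic step changes the hash-based signature completely while leaving the payload functionally identical.

```python
import hashlib

# A benign stand-in for a malicious payload; real malware is far larger.
payload = bytes(b"do_something_nasty()")

# An exact-match "signature" of the known sample.
signature = hashlib.sha256(payload).hexdigest()

# A trivial "polymorphic" step: append a junk byte that never executes.
mutated = payload + b"\x90"

# The original sample still matches, but the functionally identical
# mutant evades the signature entirely.
assert hashlib.sha256(payload).hexdigest() == signature
assert hashlib.sha256(mutated).hexdigest() != signature
```

Real detectors use richer signatures than a whole-file hash, but the same arms race applies: any static pattern can be mutated away faster than new patterns can be distributed.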
Therefore, a game-changing approach to building self-defending systems can and must be developed. Protecting systems (thus avoiding exposed vulnerabilities) to the greatest extent possible should still be the first goal. However, recognizing that absolute perfection in software or hardware is untenable, we propose an alternate strategy that continuously shifts the attack surface of the system.
We call this game-changing approach "Moving Target Defense." An important benefit of moving target defense is to decrease the known attack surface area of our systems to adversaries while simultaneously shifting it; a key challenge of moving target defense is to ensure that our systems remain dependable to their users and maintainable by their owners. By making the attack surface of software appear chaotic to adversaries, we force them to significantly increase the work effort to exploit vulnerabilities for every desired target. For instance, by the time an adversary discovers a vulnerability in a service, the service will have changed its attack surface area so that another exploit against that vulnerability will be ineffective.
The Moving Target approach will enable an "end state" in which systems can actively evade attacks, becoming substantially more secure even if they have vulnerabilities. It will result in systems in which fewer attacks will be successful. Those that are successful will be less likely to negatively impact a system's mission/function and less likely to be successful again, while other systems will automatically reconfigure themselves to be resilient to the same attack vector.
Moving target strategies employ architectures where one or more system attributes are automatically changed in a way that make the system attack surface area appear unpredictable to attackers. These strategies are beneficial at both the level of individual, high-value systems as well as large, national scale systems that may employ them collectively in a coordinated manner. They first make it much harder for attackers to identify vulnerabilities in targets and second, prevent them from repeating the attack on the same system or other similar systems.
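One simple moving-target mechanism of the kind described above is time-windowed port hopping: legitimate endpoints derive the current service port from a pre-shared secret and the clock, so an attacker's reconnaissance goes stale within one window. This is a minimal sketch, not a technique the report prescribes; the secret, window length, and port range are all illustrative assumptions:

```python
import hashlib
import hmac
import struct
import time

SECRET = b"shared-secret-example"   # hypothetical pre-shared key
WINDOW = 60                         # rotate the port every 60 seconds
PORT_RANGE = (20000, 60000)


def current_port(secret=SECRET, now=None):
    """Derive the service port for the current time window.

    Clients and servers holding the secret compute the same port;
    without the secret, the port appears to move unpredictably.
    """
    if now is None:
        now = time.time()
    counter = struct.pack(">Q", int(now) // WINDOW)
    digest = hmac.new(secret, counter, hashlib.sha256).digest()
    lo, hi = PORT_RANGE
    return lo + int.from_bytes(digest[:4], "big") % (hi - lo)


# Endpoints agree within a window; the surface shifts at the next one.
assert current_port(now=1200) == current_port(now=1230)   # same window
p1, p2 = current_port(now=1200), current_port(now=1260)   # adjacent windows
assert PORT_RANGE[0] <= p1 < PORT_RANGE[1]
assert PORT_RANGE[0] <= p2 < PORT_RANGE[1]
```

The same HMAC-of-a-counter pattern generalizes to rotating addresses, names, or configuration parameters in a coordinated way across a fleet of systems.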
Knowing when we have been had (Hardware-enabled Trust)
Hardware can be the final sanctuary and foundation of trust in the computing environment, based on the technologies that can be developed in the area of hardware-enabled trust and security. With cyber threats steadily increasing in sophistication, hardware can provide a game-changing foundation upon which to build tomorrow’s cyber infrastructure. But today’s hardware still provides limited support for security, and the capabilities that do exist are often not fully utilized by software. The hardware of the future also must exhibit greater resilience to function effectively under attack.
Within ten years, based on game-changing research:
- We will build a computer that will not execute malware, just as the human body can harbor certain viruses without ill effect.
- We will build hardware that is itself more trustworthy.
- We will be able to determine, by technical means, whether to trust a device, a software package or a network based on dynamically acquired trust information rooted in hardware and user-defined security policies.
- We will build a computer that functions even under attack, through built-in resiliency that guarantees critical services in the face of compromise.
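Hardware-rooted trust decisions of the kind the bullets envision are often built on measured boot: each boot component is folded into a running hash before it executes, and a verifier compares the final measurement against an expected value. The sketch below mimics a TPM-style PCR extend operation in plain Python; the component names and the "golden" value are illustrative assumptions, not details from the report:

```python
import hashlib


def extend(pcr, measurement):
    """Simplified TPM-style PCR extend: pcr' = H(pcr || H(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()


def measure_boot(components):
    """Fold each boot component, in order, into one platform measurement."""
    pcr = b"\x00" * 32
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr


boot_chain = [b"firmware-v1", b"bootloader-v3", b"kernel-5.10"]
golden = measure_boot(boot_chain)

# A verifier trusts the platform only if the reported measurement
# matches the expected golden value; any swapped component changes it.
assert measure_boot(boot_chain) == golden
tampered = [b"firmware-v1", b"evil-bootloader", b"kernel-5.10"]
assert measure_boot(tampered) != golden
```

Because extend is one-way and order-sensitive, malicious code cannot rewrite the measurement after the fact, which is what lets trust be "rooted in hardware" rather than in the (possibly compromised) software being measured.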
Move from forensics to real-time diagnosis (Nature-inspired Cyber Health)
Our working group focused on Nature-inspired approaches to cybersecurity because we believe that one of the best ways to generate novel ideas is to look to natural systems for inspiration: these systems evolved to face specific threats and have undergone millions of years of evolutionary selection. There are many natural systems that are far more complex than our cyber-systems but are nonetheless extremely robust, resilient, and effective. One notable example is the biological immune system that many organisms use to defend against invaders. Such systems function remarkably well in distributed, complex, and ever-changing environments, even when subject to a continuous barrage of attacks. They exhibit a wealth of interesting mechanisms that could inspire many new methods for securing cyber-systems.
Although most of the currently active research into cybersecurity that is inspired by nature focuses on the immune system, there are many other natural systems that could serve as inspiration for cybersecurity. In this report, we present two novel concepts, both inspired by a study of natural systems. The first concept involves developing a national information-sharing and warning system for cybersecurity, using the Centers for Disease Control (CDC) as a model. The second concept is a controversial one that involves using attack vectors to secure vulnerable computers. In particular, we suggest new approaches to mitigate the effect of cyber-worms by emulating biological concepts concerning ‘phage therapy’, which uses viruses to attack bacterial pathogens; ‘oncolytic viral therapy’, which uses viruses to attack cancerous tumors; and ‘interfering particle therapy’, which uses sub-viruses to attack pathogenic viruses.
One of the most important recommendations to emerge from this working group is that cross-disciplinary research has the potential to truly change the game for cybersecurity. We believe it is vital to promote such research through the establishment of communities, research programs, and even multi-disciplinary institutes focused on cybersecurity.
Crime does not pay (Cyber Economics)
The economics of cybersecurity reflects the recognition that information security problems are, fundamentally, issues of misaligned incentives and misallocated resources - and therefore economic problems that require economic, more than merely technical, game-changing solutions. Accordingly, the Cyber-Economics group at the 2009 National Cyber Leap Year Summit identified four economic strategies through which research and policy efforts may spur game changes in cybersecurity:
- MITIGATING INCOMPLETE INFORMATION: Mitigate incomplete and asymmetric information barriers that hamper efficient security decision-making at the individual and organizational levels.
- INCENTIVES AND LIABILITIES: Leverage incentives and impose or redistribute liabilities to promote secure behavior and decision making among stakeholders.
- REDUCING ATTACKERS’ PROFITABILITY: Promote legal, technical, and social changes that reduce attackers’ revenues or increase their costs, thus lowering the overall profitability (and attractiveness) of cybercrime.
- MARKET ENFORCEABILITY: Ensure that proposed changes are enforceable with market mechanisms.
Additional Notes and Highlights
Expertise Required: None