==Full Title of Reference==

Toward a Safer and More Secure Cyberspace

==Full Citation==


Nat'l Research Council, ''Toward a Safer and More Secure Cyberspace'' (2007).  [http://www.cyber.st.dhs.gov/docs/Toward_a_Safer_and_More_Secure_Cyberspace-Full_report.pdf  ''Web''] [http://www.nap.edu/catalog.php?record_id=11925 ''AltWeb'']


[http://cyber.law.harvard.edu/cybersecurity/Special:Bibliography?f=wikibiblio.bib&title=Special:Bibliography&view=&action=&keyword=Goodman_Lin:2007 ''BibTeX'']


==Categorization==
 
* Resource by Type: [[Independent Reports]]
* Threats and Actors: [[Electricity, Oil and Natural Gas]]
* Issues: [[Attribution]]; [[Cybercrime]]; [[Incentives]]; [[Metrics]]; [[Risk Management and Investment]]
* Approaches: [[Deterrence]]; [[Regulation/Liability]]


==Key Words==
[[Keyword_Index_and_Glossary_of_Core_Ideas#Botnet | Botnet]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#DDoS_Attack | DDoS Attack]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Digital_Pearl_Harbor | Digital Pearl Harbor]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Honeypot | Honeypot]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Research_&_Development | Research & Development]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Risk_Modeling | Risk Modeling]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#SCADA_Systems | SCADA Systems]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Software_Vulnerability | Software Vulnerability]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#SPAM | SPAM]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Worm | Worm]]


==Synopsis==
This report was prepared by the Committee on Improving Cybersecurity Research, established by the National Research Council of the National Academies in response to a congressional request and with the financial support of NSF, DARPA, NIST, DHS, the National Academy of Engineering, and F. Thomas and Bonnie Berger Leighton. The basic premise underlying the committee’s task is that research can produce a better understanding of why cyberspace is as vulnerable as it is and that it can lead to new technologies and policies and their effective implementation to make things better.
===Purpose===


Given the growing importance of cyberspace to nearly all aspects of national life, a secure cyberspace is vitally important to the nation, but cyberspace is far from secure today. The United States faces the real risk that adversaries will exploit vulnerabilities in the nation's critical information systems, thereby causing considerable suffering and damage. Online e-commerce business, government agency files, and identity records are all potential security targets.  "Toward a Safer and More Secure Cyberspace" examines these Internet security vulnerabilities and offers a strategy for future research aimed at countering cyber attacks. It also explores the nature of online threats and some of the reasons why past research for improving cybersecurity has had less impact than anticipated, and considers the human resource base needed to advance the cybersecurity research agenda. The target audience of this work is Internet security professionals, information technologists, policy makers, data stewards, e-commerce providers, consumer protection advocates, and others interested in digital security and safety.
===A Cyberspace Bill of Rights===


The committee addressed the question: What would a safer and more secure cyberspace look like?  In response, the committee has formulated a Cyberspace Bill of Rights (CBoR). It consists of 10 basic provisions that the committee believes users should have as reasonable expectations for their online safety and security.  The CBoR articulated in this report is distinctly user-centric, enabling individuals to draw for themselves the contrast between that vision and their own personal cyberspace experiences.


'''The first three provisions relate to properties of holistic systems, including availability, recoverability, and control of systems:'''
* I. ''Availability of system and network resources to legitimate users.''<br />Users of information technology systems (from individuals to groups to society, and including programs and applications) should be able to use the computational resources to which they are entitled and systems that depend on those resources. Attacks intended to deny, seriously degrade, or reduce the timeliness of information technology-based services should not succeed.<br /><br />
* II. ''Easy and convenient recovery from successful attacks.''<br />Because cybersecurity measures will sometimes fail, recovery from a security compromise will be necessary from time to time.  When necessary, such recovery should be easy and convenient for individual users, systems administrators, and other operators. Recovery is also an essential element of survivability and fault tolerance. Recovery should be construed broadly to include issues related to long-term availability in the face of “bit rot” and incompatible upgrades.<br /><br />
* III. ''Control over and knowledge of one’s own computing environment.''<br />Users expect to be in control of events and actions in their own immediate environment, where control refers to taking actions that influence what happens in that environment. Knowledge refers to knowing how things that are happening compare to user expectations about what is happening. To the extent that events and actions are occurring that are not initiated by the user, a breach in security may be occurring.
 
'''The next three provisions relate to the traditional security properties of confidentiality, authentication (and its extension, provenance), and authorization:'''
* IV. ''Confidentiality of stored information and information exchange.''<br />One central function of information technology is the communication and storage of information. Just as most people engage in telephone conversations and store paper files with some reasonable assurance that the content will remain private even without their taking explicit action, users should expect electronic systems to communicate and store information in accordance with clear confidentiality policies and with reasonable and comprehensible default behavior. Systems for application in a particular problem domain should be able to support the range of privacy policies relevant to that domain.<br /><br />
* V. ''Authentication and provenance.''<br />Mutual authentication of the senders and receivers involved in an information exchange is an essential part of maintaining confidentiality, since passing information to the wrong party or device is an obvious way in which confidentiality might be violated. As an extension of traditional authentication, users should have access to reliable and relevant provenance (that is, knowledge of the responsible parties) for any electronic information or electronic event, commensurate with their need for security and assurance. A minimal illustrative sketch of mutual authentication appears after this list of provisions.<br /><br />
* VI. ''The technological capability to exercise fine-grained control over the flow of information in and through systems.''<br />Authorized parties should be technically able to exercise fine-grained control over flows of information. For example, it should be technologically possible for an individual to conduct certain online transactions with technologically guaranteed anonymity, and for putative partners in such transactions to decline to participate if anonymity is offered. It should also be technologically possible for individuals to know who collects what information about them. And, they should have the technical ability to restrict the types, amounts, and recipients of personal information.
 
'''The next three provisions relate to crosscutting properties of systems:'''
* VII. ''Security in using computing directly or indirectly in important applications, including financial, health care, and electoral transactions and real-time remote control of devices that interact with physical processes.''<br />Security is especially important in certain kinds of transactions, such as those involving financial, medical, or electoral matters. Further, computational devices increasingly control physical processes as well as information processes, and such devices may have the potential to act dangerously in the physical world. It is thus especially important that cyberattackers be precluded from impairing the safe operation of physical devices. In this context, security refers to the availability, integrity, appropriate privacy controls on information, sufficient guarantees about the identities of involved parties to prevent masquerading and other attacks, and nonrepudiation guarantees so that parties can be assured of their interactions.<br /><br />
* VIII. ''The ability to access any source of information (e.g., e-mail, Web page, file) safely.''<br />Today, many security vulnerabilities are exploited as the result of some user action in accessing some source of information. In this context, safe access means that nothing unexpected happens and that nothing happens to compromise the expected confidentiality, integrity, and availability of the user’s information or computational resources. Safety cannot be assured with 100 percent certainty under any circumstances (for example, a user may take an allowed but unintended action that results in compromised confidentiality), but with proper attention to technology and to usability, the accessing of information can be made much less risky than it is today.<br /><br />
* IX. ''Awareness of what security is actually being delivered by a system or component.''<br />Users generally have expectations about the security-relevant behavior of a system, even if these expectations are implicit, unstated, or unfounded. System behavior that violates these expectations is often responsible for security problems. Thus, users have a right to know what security policies and assurances are actually being delivered by a system or component so that they can adjust their own expectations and subsequent behavior accordingly. As an illustration, nonexpert users need to know how security settings map onto policies being enforced, as well as how settings need to be specified in order to achieve a particular policy. Such awareness also implies the ability to make informed judgments about the degree of security that different systems provide. If individuals and organizations are to improve their cybersecurity postures, they need to know how to compare the security of different systems and the impact of changes on those systems. To a great degree, quantitative risk assessments, rational investment strategies, and cybersecurity insurance all depend on the ability to characterize the security of systems.
 
'''The last provision relates to justice:'''
* X. ''Justice for security problems caused by another party.''<br />In most of society, there is an expectation that victims of harm are entitled to some kind of justice—such as appropriate punishment of the perpetrator of harm. But today in cyberspace, there is no such expectation owing largely to the difficulty of identifying perpetrators and the lack of a legal structure for pursuing perpetrators. In addition, individuals who are victimized or improperly implicated because of cybersecurity problems should have access to due process that would make them whole. Society in its entirety should also have the ability to impose legal penalties on cyberattackers regardless of where they are located.
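
The following sketch is not from the report; it is a minimal illustration of the mutual-authentication idea behind provision V, in which each party proves knowledge of a shared secret by answering the other's random challenge with a keyed hash. The shared key, party names, and message flow are hypothetical simplifications; a real deployment would use an established protocol such as mutual TLS rather than this toy scheme.

<pre>
# Minimal mutual challenge-response authentication sketch (illustrative only).
# Uses only the Python standard library; SHARED_KEY is assumed to have been
# provisioned out of band, which a real system would handle very differently.
import hmac
import hashlib
import secrets

SHARED_KEY = secrets.token_bytes(32)  # hypothetical pre-shared secret

def respond(key: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the shared key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Check a response using a constant-time comparison."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Each side challenges the other, so sender and receiver are both authenticated.
alice_challenge = secrets.token_bytes(16)
bob_challenge = secrets.token_bytes(16)

bob_response = respond(SHARED_KEY, alice_challenge)    # Bob answers Alice
alice_response = respond(SHARED_KEY, bob_challenge)    # Alice answers Bob

assert verify(SHARED_KEY, alice_challenge, bob_response)   # Alice authenticates Bob
assert verify(SHARED_KEY, bob_challenge, alice_response)   # Bob authenticates Alice
print("mutual authentication succeeded")
</pre>

Because both sides issue challenges, each party gains assurance about the identity of the other, which is the expectation provision V says users should be able to hold for any information exchange.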
 
===Roadblocks to Providing a CBoR===
 
However, providing these "rights" to users will be difficult. Even assuming that everything known about cybersecurity technologies and practices today were immediately put into practice, the resulting cybersecurity posture — though it would be stronger and more resilient than it is now — would still be inadequate against today’s threat, let alone tomorrow’s. Research is needed both to develop new knowledge and to make such knowledge more usable and transferable to the field. Furthermore, cybersecurity will be a continuing issue: threats evolve (both on their own and as defenses against them are discovered), and new vulnerabilities often emerge as innovation changes underlying system architectures, implementation, or basic assumptions.
 
===Proposed Research Agenda===
 
The recommended research agenda to make progress toward the vision embedded in the Cyberspace Bill of Rights has six broad areas of focus:
 
# ''Blocking and limiting the impact of compromise''. This category includes secure information systems and networks that resist technical compromise; convenient and ubiquitous encryption that can prevent unauthorized parties from obtaining sensitive or confidential data; containment, backup, mitigation, and recovery; and system lockdowns under attack. One illustrative example of research in this category is secure design, development, and testing. Research is needed that will facilitate the design of systems that are “secure by design.” Research is also needed for security evaluation, for good implementation practices and tools that reduce the likelihood of program flaws (bugs) and make it easier for developers to  implement secure systems, and for improved testing and evaluation for functionality that has not been included in the specification of a system’s requirements and that may result in security vulnerabilities.<br /><br />
# ''Enabling accountability.'' This category includes matters such as remote authentication, access control and policy management, auditing and traceability, maintenance of provenance, secure associations between system components, intrusion detection, and so on. In general, the objective is to hold anyone or anything that has access to a system component—a computing device, a sensor, an actuator, a network—accountable for the results of such access. One illustrative example of research in this category is attribution. Anonymous attackers cannot be held responsible for their actions and do not suffer any consequences for the harmful actions that they may initiate. But many computer operations are inherently anonymous, which means that associating actors with actions must be done explicitly. Attribution technology enables such associations to be easily ascertained, captured, and preserved. At the same time, attribution mechanisms do not solve the important problem of the unwittingly compromised or duped user, although these mechanisms may be necessary in conducting forensic investigations that lead to such a user. A minimal illustrative sketch of a tamper-evident audit log, one building block for such accountability, appears after this list.<br /><br />
# ''Promoting deployment.'' This category is focused on ensuring that the technologies and procedures in Categories 1 and 2 are actually used to promote and enhance security. Category 3 includes technologies that facilitate ease of use by both end users and system implementers, incentives that promote the use of security technologies in the relevant contexts, and the removal of barriers that impede the use of security technologies. One illustrative example of research in this category is usable security. Security functionality is often turned off, disabled, bypassed, and not deployed because it is too complex for individuals and enterprise organizations to manage effectively or to use conveniently. Thus, an effort to develop more usable security mechanisms and approaches would have substantial payoff. Usable security has social and organizational dimensions as well as technological and psychological ones.<br /><br />
# ''Deterring would-be attackers and penalizing attackers.'' This category includes legal and policy measures that could be employed to penalize or impose consequences on  cyberattackers, and technologies that support such measures. In principle, this category could also include technical measures to retaliate against a cyberattacker. One illustrative example of research in this category is work to facilitate the prosecution of cybercriminals across international borders. Many cybercrime perpetrators are outside of U.S. jurisdiction, and the applicable laws may not criminalize the particulars of the crime perpetrated. Even if they do, logistical difficulties in identifying a perpetrator across national boundaries may render him or her practically immune to prosecution. Research is needed to further harmonize laws across many national boundaries to enable international prosecutions and to reduce the logistical difficulties involved in such activities.<br /><br />
# ''Illustrative crosscutting problem-focused research areas.'' This category focuses elements of research in Categories 1 through 4 onto specific important problems in cybersecurity. These include security for legacy systems, the role of secrecy in cyberdefense, coping with the insider threat, and security for new computing environments and in application domains.<br /><br />
# ''Speculative research.'' This category focuses on admittedly speculative approaches to cybersecurity that are unorthodox, “out-of-the-box,” and also that arguably have some potential for revolutionary and nonincremental gains in cybersecurity. The areas described in this report are merely illustrative of such ideas—of primary importance is the idea that speculative ideas are worth some investment in any broad research portfolio.
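
As one concrete illustration of the accountability category (item 2 above), the sketch below shows a tamper-evident audit log in which each record of who did what is chained to the hash of the previous record, so later alteration or deletion of an entry is detectable. This sketch does not appear in the report; the record fields and actor names are hypothetical.

<pre>
# Minimal tamper-evident (hash-chained) audit log sketch (illustrative only).
# Each entry stores an actor, an action, a timestamp, and the hash of the
# previous entry; verify_log() recomputes the chain to detect tampering.
import hashlib
import json
import time

def append_entry(log: list, actor: str, action: str) -> None:
    """Append an audit record linked to the hash of the previous record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"actor": actor, "action": action,
              "timestamp": time.time(), "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)

def verify_log(log: list) -> bool:
    """Recompute every hash; an edited or deleted entry breaks the chain."""
    prev_hash = "0" * 64
    for record in log:
        if record["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in record.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

log: list = []
append_entry(log, "alice", "opened patient record 1234")   # hypothetical events
append_entry(log, "bob", "changed firewall rule 17")
print(verify_log(log))            # True: chain intact
log[0]["action"] = "did nothing"  # simulate after-the-fact tampering
print(verify_log(log))            # False: tampering is detected
</pre>

A log of this kind does not by itself identify a duped or compromised user, but it preserves the associations between actors and actions that the report describes as the raw material for attribution and forensic investigation.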


===Priorities for Immediate Action===


Finally, the report outlines its recommended priorities for immediate action:


* ''Create a sense of urgency about the cybersecurity problem.'' One element will be to provide as much information as possible about the scope and nature of the threat. A second element will be to change the decision-making calculus that excessively focuses vendor and end-user attention on the short-term costs of improving their cybersecurity postures.<br /><br />
* ''Commensurate with a rapidly growing cybersecurity threat, support a broad, robust, and sustained research agenda at levels that ensure that a large fraction of good ideas for cybersecurity research can be explored.''  Discretionary budgets for the foreseeable future will be very tight, but even in such times, program growth is possible if the political will is present to designate these directions as priorities. Both the scope and scale of federally funded cybersecurity research are seriously inadequate. To execute fully the broad strategy articulated in this report, a substantial increase in federal budgetary resources devoted to cybersecurity research will be needed. Nor should cybersecurity research remain in the computer science domain alone; additional funding might well be used to support the pursuit of cybersecurity considerations in other closely related research endeavors, such as those related to creating high-assurance systems and the engineering of secure systems and software across entire system life cycles.<br /><br />
* ''Establish a mechanism for continuing follow-up on a research agenda.'' Today, the scope and nature of cybersecurity research across the federal government are not well understood, least of all by government decision makers. An important first step would be for the government to build on the efforts of the National Coordination Office for Networking and Information Technology Research and Development to develop a reasonably complete picture of the cybersecurity research efforts that the government supports from year to year. To the best of the committee’s knowledge, no such coordinated picture exists.<br /><br />
* ''Support research infrastructure.'' Making progress on any cybersecurity research agenda requires substantial attention to infrastructural issues. In this context, a cybersecurity research infrastructure refers to the collection of open testbeds, tools, data sets, and other things that enable research to progress and which allow research results to be implemented in actual IT products and services. Without an adequate research infrastructure, there is little hope for realizing the full potential of any research agenda.<br /><br />
* ''Sustain and grow the human resource base.'' When new ideas are needed, human capital is particularly important. For the pool of cybersecurity researchers to expand sufficiently, would-be researchers must believe that there is a future to working in this field, a point suggesting the importance of adequate and stable research support for the field. Increasing the number of researchers in a field necessarily entails increased support for that field, since no amount of prioritization within a fixed budget will result in significantly more researchers. In addition, potential graduate students see stable or growing levels of funding as a signal about the importance of the field and the potential for professional advancement.<br /><br />


==Additional Notes and Highlights==


Expertise Required: Research Processes - Low; Technology - Low
