Four Grand Challenges in Trustworthy Computing

==Full Title of Reference==
Four Grand Challenges in Trustworthy Computing: Second in a Series of Conferences on Grand Research Challenges in Computer Science and Engineering

==Full Citation==
Computing Research Assoc., Four Grand Challenges in Trustworthy Computing: Second in a Series of Conferences on Grand Research Challenges in Computer Science and Engineering (2003). Web

==BibTeX==

==Categorization==
* Resource by Type: [[Independent Reports]]
* Issues: [[Identity Management]]; [[Information Sharing/Disclosure]]; [[Risk Management and Investment]]; [[Usability/Human Factors]]


==Key Words==
[[Keyword_Index_and_Glossary_of_Core_Ideas#Antivirus | Antivirus]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Botnet | Botnet]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Cyber_Crime | Cyber Crime]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Cyber_Security_as_a_Public_Good | Cyber Security as a Public Good]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#DDoS_Attack | DDoS Attack]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Malware | Malware]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Patching | Patching]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Research_&_Development | Research & Development]],
[[Keyword_Index_and_Glossary_of_Core_Ideas#Software_Vulnerability | Software Vulnerability]]


==Synopsis==
The goal of the CRA Grand Research Challenges conferences is to encourage thinking beyond incremental improvements. Some important problems simply cannot be solved by narrow investigation aimed at short-term payoffs. Multiple approaches, carried out over a long period of time, will be required. The community is looking for big advances that require vision and cannot be achieved by small evolutionary steps. The [[Cyber Security: A Crisis of Prioritization | February 2005 report by the President’s Information Technology Advisory Committee (PITAC)]] supported a long-term view of research by agencies such as DARPA and NSA, arguing that the trends “favoring short-term research over long-term research . . . should concern policymakers because they threaten to constrict the pipeline of fundamental cyber security research that . . . is vital to securing the Nation’s IT infrastructure.” Long-term research, in other words, is needed to drive the innovation that trustworthy computing requires.
 
Nearly fifty technology and policy experts in security, privacy and networking met November 16-19, 2003, at Airlie House in Northern Virginia in a Gordon-style research conference under
the sponsorship of CRA and the National Science Foundation (NSF). This report describes Four Grand Challenges in trustworthy computing identified by the conference participants, why these challenges were selected, why progress may be possible in each area, and the potential barriers in addressing them.
 
===Overarching Vision===
Our overarching vision for trustworthy computing is that it should be:
* Intuitive
* Controllable
* Reliable
* Predictable
 
A key to achieving this vision is identity. As in the real world, cybersecurity demands a trust relationship between individuals. The reason spam spreads so easily on the current Internet is the difficulty of determining the identity of an email sender. Virus authors have become expert at “scanning”—that is, determining the identity and capabilities of millions of Internet-attached computers. Owners of digital property legitimately want to know to whom their property has been licensed. Identity must be shared to be useful, but individuals should be able to make their own choices about personal privacy, and the technology should support those choices.
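
The report treats verifiable identity as a missing primitive rather than prescribing a mechanism. Purely as an illustration of that primitive, the sketch below uses a toy shared-key message authentication code to bind a message to a sender's key, so a receiver can reject mail whose claimed origin does not verify. The key, names, and message format are invented; nothing here comes from the report.

<syntaxhighlight lang="python">
import hashlib
import hmac

def sign(sender_key: bytes, message: bytes) -> bytes:
    # Bind the message to the sender's key with an HMAC-SHA256 tag.
    return hmac.new(sender_key, message, hashlib.sha256).digest()

def verify(sender_key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(sign(sender_key, message), tag)

# Hypothetical sender key and messages, invented for illustration.
key = b"alice-secret-key"
genuine = b"From: alice  To: bob  Subject: hello"
tag = sign(key, genuine)

assert verify(key, genuine, tag)                            # accepted
assert not verify(key, b"From: alice  Subject: spam", tag)  # forgery rejected
</syntaxhighlight>

Deploying anything like this at Internet scale, without pre-shared keys and without overriding the privacy choices described above, is part of what makes identity a research challenge rather than an engineering task.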


This vision is only achievable if security and trust are designed into systems as integral properties, rather than as afterthoughts. It is, in fact, one of the brewing tragedies of the digital world that existing infrastructure was not designed with trust as a primary consideration. We are on the verge of creating a new wave of digital technology; if we are to avoid repeating the mistakes of the past decade, it is essential that these new systems be designed to operate securely “out of the box.” That is to say, security should be the default condition, not an option.


The immediacy of the threat has led to a focus on near-term needs. Because near-term needs mainly concern securing existing systems, investment has flowed into patching existing infrastructure rather than into technological innovation of the sort needed to devise the next-generation trustworthy computing base. Policy tends to lag innovation, so too much focus on near-term problems has also hindered the development of effective policy at all levels.


Innovation requires focus on long-term research, a kind of investment in which progress is measured by the extent and level of investment. In trustworthy computing, this focus has been episodic, and so progress has not been sustained. Furthermore, the main source of long-term research funding for information security has been the defense agencies, and the problems of cybersecurity clearly go beyond the needs of any single federal agency.


The long-term Grand Research Challenges we have identified are:


===Challenge 1: Eliminate Epidemic Attacks by 2014===
Epidemic attacks are intended to provoke catastrophic losses in the worldwide computing and communications infrastructure. They are characterized by their extremely rapid spread, and the costs (to legitimate users) of successful epidemics have risen in recent years to billions of dollars per attack. The challenge is to eliminate the threat of all epidemic-style attacks, such as viruses and worms, spam, and denial-of-service attacks, within the decade.

It is possible to imagine approaches that might be effective at deterring epidemic attacks if they were further developed and deployed:
* Immune System for Networks – capabilities built into a network to recognize, respond to, and disable viruses and worms through dynamically managed connectivity.
* Composability – rules guaranteeing that two systems operating together will not introduce vulnerabilities that neither has individually.
* Knowledge Confinement – partitioning information in such a way that an attacker never has enough knowledge to propagate successfully through the network.
* Malice Tolerance – much as many current systems tolerate failures by continuing to operate even when components fail, tolerate malice by continuing to operate in spite of arbitrarily destructive behavior by a minority of system components (see the sketch after this list).
* Trusted Hardware – tie software and service guarantees to the physical security of hardware devices.
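
The report leaves these approaches at the level of research directions. For malice tolerance in particular, the classical flavor can be sketched in a few lines: with n ≥ 2f + 1 independent replicas of a computation, of which at most f behave arbitrarily badly, a value reported by at least f + 1 replicas is safe to accept. The replica outputs below are invented for illustration.

<syntaxhighlight lang="python">
from collections import Counter

def accept_majority(replica_outputs, f):
    """Accept a result computed by n >= 2f + 1 replicas, at most f malicious.

    If every honest replica computes the same value, that value receives at
    least n - f >= f + 1 votes, while any value fabricated by colluding
    malicious replicas receives at most f. So a value with f + 1 or more
    votes must have been reported by at least one honest replica.
    """
    value, votes = Counter(replica_outputs).most_common(1)[0]
    if votes >= f + 1:
        return value
    raise RuntimeError("no value reached f + 1 votes; result untrusted")

# Five replicas, at most two arbitrarily malicious (n = 5, f = 2).
outputs = [42, 42, 42, 13, 99]        # two replicas report garbage
print(accept_majority(outputs, f=2))  # -> 42
</syntaxhighlight>
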
However, the distributed, decentralized nature of the Internet means that there is no simple formula that guarantees success. Experimentation will be needed and the only acceptable laboratory environments are those testbeds with the scale and the complexity of the Internet itself. If such testbeds cannot be constructed, extremely high fidelity simulations will be required. Currently, the technology to devise such simulations is not available. The February 2005 PITAC report placed the improvement in system modeling and the creation of testbeds on its list of priorities.
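
At toy scale, such a simulation can resemble the following sketch of a worm outbreak on a random contact graph; all parameters are invented, and Internet-scale fidelity is exactly what the report says is missing. With the numbers chosen here it also illustrates why the “immune system” approach above is promising: disconnecting infected hosts after detection drives the effective reproduction rate below one.

<syntaxhighlight lang="python">
import random

def simulate_epidemic(n_hosts, out_degree, p_infect, quarantine, steps=20, seed=1):
    """Toy worm outbreak on a random directed contact graph.

    Each step, every actively infectious host tries to infect each of its
    contacts with probability p_infect. With `quarantine` on, infected
    hosts are detected and disconnected after one step of spreading -- a
    crude "immune system" that dynamically manages connectivity.
    Returns the total number of hosts ever infected.
    """
    rng = random.Random(seed)
    contacts = {h: rng.sample(range(n_hosts), out_degree) for h in range(n_hosts)}
    infected = {0}   # patient zero
    active = {0}     # hosts still spreading
    for _ in range(steps):
        newly = set()
        for host in active:
            for peer in contacts[host]:
                if peer not in infected and rng.random() < p_infect:
                    newly.add(peer)
        infected |= newly
        active = newly if quarantine else infected
        if not active:
            break
    return len(infected)

for q in (False, True):
    total = simulate_epidemic(n_hosts=10_000, out_degree=8, p_infect=0.1, quarantine=q)
    print(f"quarantine={q}: {total} of 10,000 hosts ever infected")
</syntaxhighlight>
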
Much productivity is lost in the current environment, and eliminating epidemics will enable the redirection of significant human, financial and technical capital to other value-producing activities.


===Challenge 2: Enable Trusted Systems for Important Societal Applications===
There are many new systems planned or currently under design that will have significant societal impact, and there is a high probability that we will come to rely on these systems immediately upon their deployment. Among these systems are electronic voting systems, healthcare record databases, and information systems that enable effective law enforcement. A grand research challenge is to ensure that these systems are highly trustworthy despite being attractive targets for attackers.

Critical systems such as these are being designed today, and the challenge for the research community is to ensure that the mistakes of the past are not repeated. Despite many advances in computer and communications hardware and software, existing technology has not enabled us to build systems that resist failures and repel attacks. Decision-makers are today mandating the widespread deployment of electronic and Internet-based systems for uses that, should widespread attacks succeed, would undermine public institutions and structures to a catastrophic degree. There is very little reason to believe that such systems, if built with current technology, will be trustworthy.

The barriers to be overcome in developing effective tools are as much social and political as technological. Any successful attack on the problems posed above will have to reconcile diverse legal regimes with new technology. Social and cultural concerns may, for example, delay the widespread deployment of privacy-enhancing measures.

The likely approaches to these problems are also certain to increase the cost and complexity of the technology, and the new trust capabilities may have to achieve unprecedented levels of protection in order to be politically acceptable.

Lastly, all of the systems considered here involve integrating new technologies with legacy applications that have little or no protection. The enhanced technologies will have to provide strong “end-to-end” guarantees in spite of this.


===Challenge 3: Develop Accurate Risk Analysis for Cybersecurity===
Even the best technology for enabling trustworthy computing will be ineffective if it is not deployed and used in practice. Executives, corporate boards of directors, and chief information officers are jointly responsible for balancing investments and risks, and they use many metrics to determine whether a projected return on investment (ROI) justifies the expenditure. For most organizations, spending on new information security measures is such an investment.

ROI analysis has proved remarkably ineffective in spurring investment in information technology. Despite CERT/CC data and daily newspaper headlines documenting the rising costs of attacks and the vulnerabilities of the computing base, investment in information security across all types of organizations, expressed as a percentage of overall IT spending, has actually decreased since September 11, 2001.

A major reason for this seemingly irrational behavior on the part of decision-makers is the lack of effective models of risk. In other words, ROI analysis is only valid if there is some assurance that increased spending on security measures actually increases security. In this regard, trustworthy computing is still in its infancy, and models are needed that put information security on the same footing as financial systems with regard to accurate risk modeling.

The challenge for the research community is to develop, within ten years, quantitative IT risk management that is at least as effective as quantitative financial risk management. The February 2005 PITAC report placed quantitative benefit-cost modeling at number nine on its list of “Cyber Security Research Priorities.”

The ultimate goal of this challenge is to model accurately, and eventually to predict, security failures. Without a widely accepted system of risk measurement, effective management of investment and risk is hopeless: decision-makers will either over-invest in security measures that do not pay off or under-invest and risk devastating consequences.
Two aspects of measurement that will need the most determined attention are:
# Measuring the wrong thing is ultimately worse than not measuring anything at all.
# Because choices and decisions need to be made by many organizations over long periods of time, the measures need to be consistent, unbiased and unambiguous.
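
A concrete starting point for the quantitative modeling this challenge calls for is the standard annual loss expectancy (ALE) calculation: expected yearly loss equals incident frequency times loss per incident, and a control is judged by the expected loss it avoids relative to its cost (often called return on security investment, ROSI). The sketch below applies these textbook formulas; every figure in it is invented for illustration, and the report's point is precisely that such inputs cannot yet be measured well.

<syntaxhighlight lang="python">
def annual_loss_expectancy(incidents_per_year, loss_per_incident):
    # ALE = annualized rate of occurrence x single-loss expectancy.
    return incidents_per_year * loss_per_incident

def return_on_security_investment(ale_before, ale_after, annual_control_cost):
    # Expected loss avoided, net of the control's cost, relative to that cost.
    return (ale_before - ale_after - annual_control_cost) / annual_control_cost

# Hypothetical worm-outbreak scenario; every figure is invented.
ale_before = annual_loss_expectancy(incidents_per_year=4, loss_per_incident=250_000)
ale_after = annual_loss_expectancy(incidents_per_year=4 * (1 - 0.85),  # control blocks 85%
                                   loss_per_incident=250_000)

print(f"ALE before control: ${ale_before:,.0f}")  # $1,000,000
print(f"ALE after control:  ${ale_after:,.0f}")   # $150,000
print(f"ROSI: {return_on_security_investment(ale_before, ale_after, 200_000):.0%}")  # 325%
</syntaxhighlight>

The two measurement caveats above bite immediately: a ROSI figure built on a wrong incident-rate estimate is exactly the kind of wrong measurement that is worse than no measurement at all.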


===Challenge 4: Secure the Ubiquitous Computing Environments of the Future===
The fourth and final grand challenge is to protect our future technological base. For the dynamic, pervasive computing environments of the future, we will give computer end-users security they can understand and privacy they can control. Technology can easily outrun comprehensibility, and a trustworthy computing base should not make this worse. By the same token, identity will be many-faceted and ubiquitous in a world of pervasive computing, and individuals should be able to maintain control of it.

Experience teaches us that it is important to treat security as a driving concern from the earliest stages of system design. Also, our experience with the adoption of the Internet is evidence that information security has to reflect the sensibilities of the underlying social systems, not just technological ones. If the systems of the future are deployed without adequate security, or if our privacy is compromised in the name of technological change, we may not be able to regain it. It is important to issue a call to the security community to assert a leadership role now.


==Additional Notes and Highlights==
Expertise Required: None
