Four Grand Challenges in Trustworthy Computing

Full Title of Reference

Four Grand Challenges in Trustworthy Computing: Second in a Series of Conferences on Grand Research Challenges in Computer Science and Engineering

Full Citation

Computing Research Assoc., Four Grand Challenges in Trustworthy Computing: Second in a Series of Conferences on Grand Research Challenges in Computer Science and Engineering (2003). Web

Synopsis

The goal of the CRA Grand Research Challenges conferences is to encourage thinking beyond incremental improvements. Some important problems simply cannot be solved by narrow investigation aimed at short-term payoffs; multiple approaches, carried out over a long period of time, will be required. The community is looking for big advances that require vision and cannot be achieved by small evolutionary steps. The February 2005 report by the President’s Information Technology Advisory Committee (PITAC) supported a long-term view of research by agencies such as DARPA and NSA, arguing that the trends “favoring short-term research over long-term research . . . should concern policymakers because they threaten to constrict the pipeline of fundamental cyber security research that . . . is vital to securing the Nation’s IT infrastructure.” Long-term research, by contrast, is needed to drive the innovation that trustworthy computing requires.

Nearly fifty technology and policy experts in security, privacy and networking met November 16-19, 2003, at Airlie House in Northern Virginia in a Gordon-style research conference under the sponsorship of CRA and the National Science Foundation (NSF). This report describes Four Grand Challenges in trustworthy computing identified by the conference participants, why these challenges were selected, why progress may be possible in each area, and the potential barriers in addressing them.

Overarching Vision

Our overarching vision for trustworthy computing is that it should be:

  • Intuitive
  • Controllable
  • Reliable
  • Predictable

A key to achieving this vision is identity. As in the real world, cybersecurity demands a trust relationship between individuals. The reason that spam spreads so easily in the current Internet is the difficulty of determining the identity of an email sender. Virus authors have become expert at “scanning”—that is, determining the identity and capabilities of millions of Internet-attached computers. Owners of digital property legitimately want to know to whom their property has been licensed. Identity must be shared to be useful, but individuals should be able to make their own choices about their personal privacy, and the technology should support those choices.
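
One building block for verifiable sender identity is a digital signature, which cryptographically binds a message to the holder of a key; this is the idea behind later email-authentication schemes such as DKIM. The sketch below is a minimal illustration, not anything prescribed by the report: it uses the Python cryptography package, and the message and variable names are invented for the example.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical sender: generates a long-lived keypair and signs each message.
    sender_key = Ed25519PrivateKey.generate()
    message = b"From: alice@example.org\nSubject: hello\n\nMeeting at noon."
    signature = sender_key.sign(message)

    # Receiver: verifies the signature against the sender's published public key.
    # A valid signature ties the message to whoever controls the private key;
    # it does not by itself decide whether that identity should be trusted.
    public_key = sender_key.public_key()
    try:
        public_key.verify(signature, message)
        print("signature valid: message came from the key holder")
    except InvalidSignature:
        print("signature invalid: sender identity cannot be established")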

This vision is only achievable if security and trust are designed into systems as integral properties, rather than as afterthoughts. It is, in fact, one of the brewing tragedies of the digital world that existing infrastructure was not designed with trust as a primary consideration. We are on the verge of creating a new wave of digital technology; if we are to avoid repeating the mistakes of the past decade, it is essential that these new systems be designed to operate securely “out of the box.” That is to say, security should be the default condition, not an option.

The immediacy of the threat has led to a focus on near-term needs. Because near-term needs center on securing existing systems, investment has flowed into patching existing infrastructure rather than into the kind of technological innovation needed to devise the next-generation trustworthy computing base. Policy tends to lag innovation, so too much focus on near-term problems has also hindered the development of effective policy at all levels.

Innovation requires a focus on long-term research, a kind of investment in which progress is commensurate with the extent and duration of the commitment. In trustworthy computing, this focus has been episodic, and so progress has not been sustained. Furthermore, the main source of long-term research funding for information security has been the defense agencies, and the problems of cybersecurity clearly go beyond the needs of any single federal agency.

The long-term Grand Research Challenges we have identified are:

Challenge 1: Eliminate Epidemic Attacks by 2014

Epidemic attacks are intended to provoke catastrophic losses in the worldwide computing and communications infrastructure. They are characterized by their extremely rapid spread, and the costs to legitimate users of successful epidemics have risen in recent years to billions of dollars per attack. The challenge is to eliminate the threat of all epidemic-style attacks, such as viruses and worms, spam, and denial-of-service attacks, within the decade.

It is possible to imagine approaches that might be effective at deterring epidemic attacks if they were further developed and deployed:

  • Immune System for Networks – capabilities built into a network to recognize, respond to and disable viruses and worms by dynamically managed connectivity (see the throttling sketch after this list).
  • Composability – rules guaranteeing that two systems operating together will not introduce vulnerabilities that neither has individually.
  • Knowledge Confinement – partitioning information in such a way that an attacker never has enough knowledge to propagate successfully through the network.
  • Malice Tolerance – much as fault-tolerant systems today continue to operate even when some components fail, tolerate malice by continuing to operate in spite of arbitrarily destructive behavior by a minority of system components.
  • Trusted Hardware – tie software and service guarantees to the physical security of hardware devices.
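
As a concrete illustration of the “immune system” idea, the sketch below implements a toy rate limiter in the spirit of Williamson-style virus throttling: legitimate hosts contact few new destinations per second, while scanning worms contact many, so delaying connections to previously unseen destinations slows an epidemic with little effect on normal traffic. The class and parameter names are invented for this example; the report does not prescribe this design.

    import time
    from collections import deque

    class ConnectionThrottle:
        """Toy 'virus throttle': allow connections to recently contacted hosts
        immediately, but release connections to *new* destinations at a fixed
        rate. Scanning worms, which contact many new hosts quickly, build up
        a long delay queue and are slowed dramatically."""

        def __init__(self, working_set_size=4, new_dests_per_sec=1.0):
            self.working_set = deque(maxlen=working_set_size)  # recent peers
            self.delay_queue = deque()                         # pending new peers
            self.interval = 1.0 / new_dests_per_sec
            self.next_release = time.monotonic()

        def request(self, dest):
            """Return True if the connection may proceed now, else queue it."""
            if dest in self.working_set:
                return True
            now = time.monotonic()
            self.delay_queue.append(dest)
            if now >= self.next_release and self.delay_queue[0] == dest:
                self.delay_queue.popleft()
                self.working_set.append(dest)
                self.next_release = now + self.interval
                return True
            return False  # held back; worm-like bursts accumulate here

    throttle = ConnectionThrottle()
    print([throttle.request(h) for h in ["a", "a", "b", "c", "d"]])
    # "a" passes and is cached; the burst of new destinations "b", "c", "d"
    # is mostly delayed, which is the throttling effect.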

However, the distributed, decentralized nature of the Internet means that there is no simple formula that guarantees success. Experimentation will be needed, and the only acceptable laboratory environments are testbeds with the scale and complexity of the Internet itself. If such testbeds cannot be constructed, extremely high-fidelity simulations will be required; currently, the technology to devise such simulations is not available. The February 2005 PITAC report placed improvements in system modeling and the creation of testbeds on its list of priorities.
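
To make the simulation requirement concrete, the toy model below shows how even a crude random-scanning worm model reproduces the explosive growth that makes epidemic attacks so costly. This is only a sketch with invented parameters, far short of the high-fidelity simulations the report calls for.

    import random

    # Toy random-scanning worm model (illustrative only): each infected host
    # probes a few random addresses per time step; vulnerable targets become
    # infected, and infected hosts are patched with some probability per step.
    def simulate_worm(n_hosts=10000, probes_per_step=5, patch_prob=0.05,
                      steps=30, seed=1):
        SUSCEPTIBLE, INFECTED, PATCHED = 0, 1, 2
        rng = random.Random(seed)
        state = [SUSCEPTIBLE] * n_hosts
        state[0] = INFECTED  # patient zero
        infected_per_step = []
        for _ in range(steps):
            infected = [h for h, s in enumerate(state) if s == INFECTED]
            for host in infected:
                for _ in range(probes_per_step):
                    target = rng.randrange(n_hosts)
                    if state[target] == SUSCEPTIBLE:
                        state[target] = INFECTED
                if rng.random() < patch_prob:
                    state[host] = PATCHED
            infected_per_step.append(state.count(INFECTED))
        return infected_per_step

    # Infections grow explosively within a handful of steps, which is why
    # reactive, human-timescale defenses cannot contain epidemic attacks.
    print(simulate_worm()[:10])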

Much productivity is lost in the current environment, and eliminating epidemics will enable the redirection of significant human, financial and technical capital to other value-producing activities.

Challenge 2: Enable Trusted Systems for Important Societal Applications

Challenge 3: Develop Accurate Risk Analysis for Cybersecurity

Challenge 4: Secure the Ubiquitous Computing Environments of the Future
