During the July 4 holiday weekend, the latest in a series of cyberattacks was launched against popular government Web sites in the United States and South Korea, effectively shutting them down for several hours. It is unlikely that the real culprits will ever be identified or caught. Most disturbing, their limited success may embolden future hackers to attack critical infrastructure, such as power generators or air-traffic-control systems, with devastating consequences for the U.S. economy and national security.
As Defense Secretary Robert Gates wrote earlier this year in these pages, "The United States cannot kill or capture its way to victory" in the conflicts of the future. When it comes to cybersecurity, Washington faces an uphill battle. And as a recent Center for Strategic and International Studies report put it, "It is a battle we are losing."
There is no form of military combat more irregular than an electronic attack: it is extremely cheap, is very fast, can be carried out anonymously, and can disrupt or deny critical services precisely at the moment of maximum peril. Everything about the subtlety, complexity, and effectiveness of the assaults already inflicted on the United States' electronic defenses indicates that other nations have thought carefully about this form of combat. Disturbingly, they seem to understand the vulnerabilities of the United States' network infrastructure better than many Americans do.
It is tempting for policymakers to view cyberwarfare as an abstract future threat. After all, the national security establishment understands traditional military threats much better than it does virtual enemies. The problem is that an electronic attack can be large, widespread, and sudden -- far beyond the capabilities of conventional predictive models to anticipate. The United States is already engaged in low-intensity cyberconflicts, characterized by aggressive enemy efforts to collect intelligence on the country's weapons, electrical grid, traffic-control system, and even its financial markets.
Fortunately, the Obama administration recognizes that the United States is utterly dependent on Internet-based systems and that its information assets are therefore precariously exposed. Accordingly, it has made electronic network security a crucial defense priority.
But networks are only the tip of the iceberg. Not only does Washington have a limited ability to detect when data has been pilfered, but the physical hardware components that undergird the United States' information highway are becoming increasingly insecure.
INTO THE BREACH
In 2007, there were almost 44,000 reported incidents of malicious cyberactivity -- one-third more than the previous year and more than ten times as many as in 2001. Every day, millions of automated scans originating from foreign sources search U.S. computers for unprotected communications ports -- the built-in channels found in even the most inexpensive personal computers. For electronically advanced adversaries, the United States' information technology (IT) infrastructure is an easy target.
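A minimal sketch in Python (a hypothetical illustration, limited to probing one's own machine, and not drawn from any actual attack tool) shows how little effort such a probe requires; automated scanners simply repeat a check like this across millions of addresses in parallel.

```python
# A minimal sketch of the kind of automated port probe described above.
# It checks only the local machine ("127.0.0.1") and a handful of
# well-known ports; real scanners sweep millions of hosts in parallel.
import socket

COMMON_PORTS = [21, 22, 23, 25, 80, 110, 143, 443, 3389]  # illustrative sample

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means the port accepted

if __name__ == "__main__":
    for port in COMMON_PORTS:
        state = "open" if probe("127.0.0.1", port) else "closed/filtered"
        print(f"port {port}: {state}")
```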
In 2004, for example, the design of NASA's Mars Reconnaissance Orbiter, including details of its propulsion and guidance systems, was discovered on inadequately protected "zombie" computer servers in South Korea. Mimicking the tactics of money launderers, hackers had downloaded the design files there in order to pilfer the data from a seemingly legitimate source. Breaches of cybersecurity and data theft have plagued other U.S. agencies as well: in 2006, between 10 and 20 terabytes of data -- equivalent to the contents of approximately 100 laptop hard drives -- were illegally downloaded from the Pentagon's nonclassified network, and the State Department suffered similarly large losses the same year.
Russia has already perpetrated denial-of-service attacks against entire countries, including Estonia, in the spring of 2007 -- an attack that blocked the Web sites of several banks and the prime minister's Web site -- and Georgia, during the war of August 2008. In fact, shortly before the violence erupted, Georgia's government claimed that a number of state computers had been commandeered by Russian hackers and that the Georgian Ministry of Foreign Affairs had been forced to relocate its Web site to Blogger, a free service run by Google.
The emergence of so-called peer-to-peer (P2P) networks poses yet another threat. These networks are temporary on-demand connections that are terminated once the data service has been provided or the requested content delivered, much like a telephone call. Some popular P2P services, such as Napster and BitTorrent, have raised a host of piracy and copyright infringement issues, mostly because of recreational abuse. From a security perspective, P2P networks offer an easy way to disguise illegitimate payloads (the content carried in digital packets); through the use of sophisticated protocols, they can divert network traffic to arbitrary ports. Data containing everything from music to financial transactions or weapons designs can be diverted to lanes that are created for a few milliseconds and then disappear without a trace, posing a crippling challenge to Washington's ability to monitor Internet traffic. Estimates vary, but P2P may consume as much as 60 percent of the Internet's bandwidth; no one knows how much of this traffic is legitimate, how much violates copyright laws, and how much is a threat to national security.
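To see why such traffic is so hard to monitor, consider the minimal sketch below (a hypothetical illustration, not the protocol of any particular P2P service): by binding to port 0, a program asks the operating system for an arbitrary free port, serves a single payload, and tears the connection down, leaving no fixed channel for a monitor to watch.

```python
# A minimal sketch (hypothetical, for illustration only) of why port-based
# monitoring struggles with P2P traffic: binding to port 0 lets the operating
# system pick an arbitrary ephemeral port, which exists only for the life of
# the transfer and then vanishes.
import socket
import threading

def serve_once(payload: bytes) -> int:
    """Listen on an OS-chosen ephemeral port, send one payload, then close."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))          # port 0 means "any free port"
    port = server.getsockname()[1]         # the arbitrary port we were given
    server.listen(1)

    def _handle():
        conn, _ = server.accept()
        conn.sendall(payload)
        conn.close()
        server.close()                     # the short-lived "lane" disappears here

    threading.Thread(target=_handle, daemon=True).start()
    return port

if __name__ == "__main__":
    port = serve_once(b"any payload at all: music, documents, or worse")
    client = socket.create_connection(("127.0.0.1", port))
    print(f"fetched {len(client.recv(1024))} bytes from short-lived port {port}")
    client.close()
```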
The commercially available systems that carry nearly all international data traffic are high quality: they are structurally reliable, globally available, and highly automated. However, the networking standards that enable cross-border electronic exchange were designed in stages over the last four decades to ensure compatibility, not security, and network designers have been playing catch-up for years. To the extent that they paid any attention to security, it was largely to prevent unauthorized, inauthentic, or parasitic access, not a widespread paroxysm of national or even international networks -- the IT equivalent of a seizure that strikes suddenly and without warning.
The price of perpetrating a cyberattack is just a fraction of the cost of the economic and physical damage such an attack can produce. Because they are inexpensive to plan and execute, and because there is no immediate physical danger to the perpetrators, cyberattacks are inherently attractive to adversaries large and small. Indeed, for the most isolated (and therefore resource-deprived) actors, remote, network-borne disruptions of critical national infrastructure -- terrestrial and airborne traffic, energy generation and distribution, water- and wastewater-treatment facilities, all manner of electronic communication, and, of course, the highly automated U.S. financial system -- may be their primary means of aggression.
From isolated intrusions to coordinated attacks, the number of network-based threats is growing. Dan Geer, the chief information security officer at In-Q-Tel, the nonprofit private investment arm of the CIA, points out that the perpetrators are no longer teenagers motivated by lunchroom bragging rights but highly paid professionals. He also believes that after spending billions of dollars on commercial research and development, the United States will still have less, and perhaps much less, than 90 percent protection against network attacks -- an unacceptably bad result. And this pessimistic estimate only considers software; it does not take into account the pernicious threat to hardware.
HARDWARE'S SOFT SPOT
In 1982, a three-kiloton explosion tore apart a natural gas pipeline in Siberia; the detonation was so large it was visible from outer space. Two decades later, the New York Times columnist William Safire reported that the blast was caused by a cyber-operation planned and executed by the CIA. Safire's insider sources claimed that the United States carefully placed faulty chips and tainted software into the Soviet supply chain, causing the chips to fail in the field. More recently, unconfirmed reports in IEEE Spectrum, a mainstream technical magazine, attributed the success of Israel's September 2007 bombing raid on a suspected Syrian nuclear facility to a carefully planted "kill switch" that remotely turned off Syrian surveillance radar.
Although networks and software attract most of the media's attention when it comes to cybersecurity, chip-level hardware is similarly vulnerable: deliberate design deficiencies or malicious tampering can easily creep in during the 400-step process required to produce a microchip.
Integrated circuits are etched onto silicon wafers in a process that simultaneously produces tens, or even hundreds, of identical chips. In fact, each chip may contain as many as a billion transistors. At the rate of one transistor per second, it would take one person 75 years to inspect the transistors on just two devices; even a typical cell phone has a couple of chips with a hundred million transistors each. Finding a few tainted transistors among so many is an exceedingly tedious, difficult, and error-prone task, and in principle an entire electronic system of many chips can be undermined by just a few rogue transistors. This is why chip-level attacks are so attractive to adversaries, so difficult to detect, and so dangerous to the nation.
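The arithmetic behind that estimate is straightforward, as the back-of-the-envelope sketch below shows; the inputs are the round numbers from this paragraph, and the result, on the order of six decades of uninterrupted counting, is the same order of magnitude as the figure cited above.

```python
# Back-of-the-envelope arithmetic behind the inspection problem described
# above (an illustrative estimate using the paragraph's round numbers).
transistors_per_chip = 1_000_000_000   # "as many as a billion" per chip
chips_to_inspect = 2
rate_per_second = 1                    # one transistor per second

seconds = transistors_per_chip * chips_to_inspect / rate_per_second
years_nonstop = seconds / (60 * 60 * 24 * 365)
print(f"{years_nonstop:.0f} years of uninterrupted counting")  # roughly 63

# Even automated testers running at millions of transistors per second face
# a harder problem: knowing what a "bad" transistor looks like in the first place.
```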
Modern automated equipment can test certain kinds of manufacturing fidelity within integrated circuits at the rate of millions of transistors per second. The problem is that such equipment is designed to detect deviations from a narrow set of specifications; it cannot detect unknown unknowns. An apparently perfect device can provide a safe harbor for numerous threats -- in the form of old and vulnerable chip designs, embedded Trojan horses, or kill switches -- that are difficult or impossible to detect. The theoretical number of potential misbehaviors and possible hardware alterations is simply too large, and no mathematical formulas to constrain the problem have yet been invented.
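The deliberately simplified, hypothetical sketch below shows why: a test plan compares a device's outputs against a known-good specification only for the inputs someone thought to include, so a part with a rare hidden trigger passes every check.

```python
# A minimal, hypothetical sketch of why specification-based testing misses
# "unknown unknowns": the tester compares behavior against expected outputs
# only for the inputs in its test plan. A device with a rare hidden trigger
# behaves perfectly on every vector it is asked about.
def honest_adder(a: int, b: int) -> int:
    return (a + b) & 0xFFFF

def trojaned_adder(a: int, b: int) -> int:
    if (a, b) == (0xDEAD, 0xBEEF):     # rare trigger pattern, never in the test plan
        return 0                       # malicious behavior
    return (a + b) & 0xFFFF

TEST_VECTORS = [(0, 0), (1, 1), (0xFFFF, 1), (1234, 4321)]  # the known specification

def passes_spec(device) -> bool:
    return all(device(a, b) == honest_adder(a, b) for a, b in TEST_VECTORS)

print(passes_spec(trojaned_adder))     # True: the tampering is invisible to the tester
print(trojaned_adder(0xDEAD, 0xBEEF))  # 0: the hidden behavior, if one knew to ask
```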
Moreover, the timeline of a hardware attack is altogether different from that of a software or network attack. With the important exception of infection by symbiotic malware (unauthorized software that depends on the host to survive), pervasive network infections are generally detectable, are mostly curable, and, until now, have been largely containable through the use of software patches, which are now ubiquitous. In contrast, compromised hardware is, in effect, a time bomb: the corruption occurs well before the attack -- during design implementation or manufacturing -- and is detonated sometime in the future, most likely from a faraway location. Sabotaged circuits cannot be patched; they are the ultimate sleeper cell.
MODEL AIRPLANES
Sadly, research in hardware security has been anemic, with relatively few institutions allocating very few dollars. But one researcher, the Stanford University aeronautics professor Per Enge, has looked to the civilian aviation industry as a model for enhancing hardware security. Aircraft companies have historically focused intensely on systemic weaknesses and potential vectors of attack on airplanes' airframes, their many components, and the flight-control infrastructure. It takes months or even years to assess danger in hardware-bound systems, which are common in the transportation industry. Therefore, the aviation sector has always preferred deliberate and quiet responses to vulnerabilities as they are revealed, in part to make sure that the vulnerabilities are not exploited and in part to maintain public trust in an otherwise excellent system. In contrast, the cryptography and software-development communities believe that full disclosure is the path to safety and security. In their view, a threat that is subject to the full scrutiny of academic, industrial, and governmental experts will be neutralized more quickly and mitigated more fully.
For many years, aviation companies believed they could not fully rely on such collaborative failure detection because the equipment they produced was not easily replaced, reused, or repaired. The cost of doing so was so prohibitive to those outside the industry that few even bothered to try. Today, however, with the advent of publicly available GPS technology, even the aviation community is beginning to absorb the lessons of open security standards.
Most computer hardware engineers have traditionally approached the problem in a similar manner: test, stress, and break, but keep discoveries low key so as to avoid exposing a weak flank to the public or to competitors. The long cycles of detection and remediation that characterize hardware, as opposed to software, are the fundamental reason why practically all large mainframe computer systems -- from those on airplanes to those in hospitals -- still require human intervention to detect and cope with failures.
The difference between a chip and an airplane is that an engineer's ability to absorb knowledge and reconfigure hardware in order to make it more secure is much greater in silicon than in aluminum, especially if the internal response is both adaptive and intelligent.
The need to endow U.S. networks, software, and even hardware with a digital immune system -- one that is openly described and freely discussed -- is one of the most important lessons to be learned from the open-source community, and it could help hardware engineers make their products more secure.
IMMUNIZATION DRIVES
Comparing cyberthreats to biological diseases helps illustrate the potency of electronic attacks and point the way toward possible cures. As Stephanie Forrest and her colleagues at the University of New Mexico have shown, bodily immune systems work best when they are autonomous, adaptable, distributed, and diversified; so, too, with electronic security. Perhaps the biggest reason to focus on hardware assurance is that it provides a resilient form of immunoprotection and dramatically extends the range of potential responses to an attack. As with their biological analogues, healthy electronic systems will focus protection at the gateways to the outside world (such as a computer's ports), rapidly implement sequential reactions to invading agents, learn from new assaults, remember previous victories, and perhaps even learn to tolerate and coexist with foreign intruders. In other words, healthy hardware can adapt to infection, but sick hardware is an incurable liability -- a remote-controlled malignancy that can strike at any time.
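A minimal sketch, loosely inspired by the immune-system analogy of Forrest and her colleagues rather than drawn from their actual system, shows the basic idea: a monitor learns a profile of "self" from short sequences of normal events and flags any sequence it has never seen, without needing a signature for the specific attack.

```python
# A minimal sketch of immune-system-style anomaly detection (a hypothetical
# illustration of the concept, not any deployed system): learn "self" as a set
# of short event sequences observed during normal operation, then flag "non-self".
from collections import deque

WINDOW = 3

def ngrams(events, n=WINDOW):
    window, out = deque(maxlen=n), set()
    for e in events:
        window.append(e)
        if len(window) == n:
            out.add(tuple(window))
    return out

# "Self": event sequences seen while a component behaves normally.
normal_runs = [
    ["open", "read", "read", "close"],
    ["open", "read", "write", "close"],
]
self_profile = set().union(*(ngrams(run) for run in normal_runs))

def anomalies(run):
    """Return the event windows in a run that the self profile has never seen."""
    return ngrams(run) - self_profile

# A run containing unfamiliar patterns is flagged even though no one wrote
# a signature for this particular attack.
suspicious = ["open", "write", "write", "exec", "close"]
print(anomalies(suspicious))
```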
Natural science also provides a framework to understand the dangerous implications of static thinking. The aphorism "nature abhors a vacuum" applies strikingly well to cybersecurity: if there is a weak point, whether it is there intentionally or unintentionally, a cybercriminal will find it. Because of its inherent complexity, modern electronic infrastructure is exposed to foreign intrusion. Eventually, the temptation to deliberately build in deficiencies -- to leave the door unlocked, so to speak -- will likely prove irresistible to professional saboteurs. And even when doors are not left unlocked, an adversary can still deliberately design all the locks to be fundamentally similar, making intrusion easier at some point in the future.
A hardware breach is more difficult to detect and much more difficult to defend against than a network or software intrusion. There are two primary challenges when it comes to enhancing security in chips: ensuring their authenticity (because designs can be copied) and detecting malevolent function inside the device (because designs can be changed). One could easily imagine a kill switch disabling the fire-control logic inside a missile once it had been armed or its guidance system had been activated, effectively neutralizing the tactical attack capability of a fighter jet. Inauthentic parts are also a threat. In January 2008, for example, the FBI reported that 3,600 counterfeit Cisco network components were discovered inside U.S. defense and power systems. As many as five percent of all commercially available chips are not genuine -- having been made with inferior materials that do not stand up under extreme conditions, such as high temperatures or high speeds.
Even well-intentioned security efforts cannot provide ironclad safety. With only $10,000 worth of off-the-shelf parts, a research group led by Christof Paar at Ruhr-Universität Bochum, in Germany, built a code-breaking machine that was able to exploit a hardware vulnerability and, within ten seconds, crack the encryption scheme of the electronic passport chip in European Union passports. This breach could have exposed sensitive personal information to financial criminals and passport counterfeiters. The original design of the passport chip was not fundamentally flawed, but it was inadequately hardened, and no software upgrade could solve the problem.
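The arithmetic below, a generic illustration with assumed figures rather than numbers from the Bochum group's machine, shows why a ten-second break is plausible once the effective key space is small: time to break is simply key space divided by guessing rate.

```python
# A generic, hypothetical illustration (the figures are assumptions, not taken
# from the Bochum group's work) of why a small effective key space falls quickly
# to inexpensive dedicated hardware.
def seconds_to_exhaust(effective_key_bits: float, guesses_per_second: float) -> float:
    return 2 ** effective_key_bits / guesses_per_second

# A key derived from predictable, human-readable data may have far fewer
# effective bits than its nominal length suggests.
for bits in (32, 40, 56):
    t = seconds_to_exhaust(bits, guesses_per_second=1e9)  # assumed hardware rate
    print(f"{bits} effective bits: about {t:,.0f} seconds to exhaust")
```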
Adversaries planning cyberattacks on the United States enjoy two other advantages. The first, and most dangerous, is Americans' false sense of security: the self-delusion that since nothing terrible has happened to the country's IT infrastructure, nothing will. Such thinking, and the fact that so few scientists are focused on the problem, undercuts the United States' ability to respond to this threat. Overcoming a complacent mentality will be as difficult a challenge as actually allocating the resources for genuine hardware assurance. Second, the passage of time will allow adversaries and cybercriminals to optimize the stealth and destructiveness of their weapons; the longer the U.S. government waits, the more devastating the eventual assault is likely to be.
THE TECHNOLOGICAL RAIN FOREST
Seeking to completely obliterate the threats of electronic infiltration, data theft, and hardware sabotage is neither cost-effective nor technically feasible; the best the United States can achieve is sensible risk management. Washington must develop an integrated strategy that addresses everything from the sprawling communications network to the individual chips inside computers.
The U.S. government must begin by diversifying the country's digital infrastructure; in the virtual world, just as in a natural habitat, a diversity of species offers the best chance for an ecosystem's survival in the event of an outside invasion. In the early years of the Internet, practically all institutions mandated an electronically monocultural forest of computers, storage devices, and networks in order to keep maintenance costs down. The resulting predominance of two or three operating systems and just a few basic hardware architectures has left the United States' electronic infrastructure vulnerable. As a result, simple viruses injected into the network with specific targets -- such as an apparently normal and well-trusted Web site that has actually been infiltrated -- have caused billions of dollars in lost productivity and economic activity.
Recently, national intelligence authorities mandated a reduction in the number of government Internet access points in order to better control and monitor them. This sounds attractive in principle. The problem, of course, is that bundling the channels in order to better inspect them limits the range of possible responses to future crises and therefore increases the likelihood of a catastrophic breakdown. Such "stiff" systems are not resilient because they are not diverse. By contrast, the core design principle of any multifaceted system is that diversity fortifies defenses. By imposing homogeneity onto the United States' computing infrastructure, generations of public- and private-sector systems operators have -- in an attempt to keep costs down and increase control -- exposed the country to a potential catastrophe. Rethinking Washington's approach to cybersecurity will require rebalancing fixed systems with dynamic, responsive infrastructure.
In addition to building diverse, resilient IT infrastructure, it is crucial to secure the supply chain for hardware. This is a politically delicate issue that pits pro-trade politicians against national security hawks. Since most of the billions of chips that make up the global information infrastructure are produced in unsecured facilities outside the United States, national security authorities are especially sensitive about the possibility of sabotage.
Some observers have pointed to the Clinton-era Information Technology Management Reform Act as a leaky crack in the levee of secure hardware infrastructure because it explicitly encouraged the acquisition of foreign-made parts. They are wrong. In fact, streamlining procurement of IT components is in no way related to the integrity of the components themselves; how the government purchases components is unrelated to what is actually delivered, tested, and deployed.
Moreover, the enormous cost of maintaining a parallel domestic production capability to match the tremendous manufacturing advances of the private sector abroad would never pass muster in even the most hawkish appropriations review; such dedicated production facilities would also make an easy target for sabotage or direct attacks. A disruption in the supply chain would exact an incalculable price, not least in terms of the United States' defensive readiness, and would violate the principle of having a layered, diversified response. It makes sense now -- just as it made sense during the Clinton years -- to purchase components, even those made offshore. The problem is not foreign sourcing; it is ensuring that foreign-made products are authentic and secure.
None of this will require a fundamental change in the way computer networks are currently configured and deployed. Because hardware itself can now be reconfigured -- and is therefore adaptable -- electronic defenses within actual devices can be augmented without domestic chip designers' revealing more than they already do to the foreign manufacturers who actually produce the chips.
Of course, adversaries could build in hardware deficiencies during production that could hurt the United States later. But there are some very elegant ways to detect those deficiencies without the adversaries' knowing that Washington is watching. Promising strategies in the near term, such as embedding compact authentication codes directly into devices and configuring anti-tamper safeguards after the devices are produced, will enhance protection by tightening control of the supply chain and making the hardware more "self-aware."
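The sketch below illustrates one such approach in a deliberately simplified, hypothetical form; the secret-provisioning and challenge-response scheme shown is an assumption for illustration, not a description of any deployed system. A verifier issues a random challenge, and only a part holding the embedded secret can return the correct keyed response, so a counterfeit is exposed without revealing anything about the chip's design.

```python
# A minimal sketch of how a compact authentication code embedded in a device
# could let a verifier check that a chip is genuine (hypothetical scheme).
import hmac, hashlib, os, secrets

DEVICE_SECRET = os.urandom(32)   # provisioned into the authentic part at manufacture

def device_respond(secret: bytes, challenge: bytes) -> bytes:
    """What the embedded authentication logic computes on the chip."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verifier_check(expected_secret: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(expected_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)
genuine = device_respond(DEVICE_SECRET, challenge)
counterfeit = device_respond(os.urandom(32), challenge)   # wrong secret

print(verifier_check(DEVICE_SECRET, challenge, genuine))      # True
print(verifier_check(DEVICE_SECRET, challenge, counterfeit))  # False
```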
The Bush administration's classified Comprehensive National Cybersecurity Initiative, which led to a reported commitment of $30 billion by 2015 to bolster electronic defenses and which the Obama administration is expected to support, is a solid first step toward managing the risk.
Unfortunately, much of the relevant information -- such as the Defense Advanced Research Projects Agency's TRUST in Integrated Circuits program -- is classified. Confidentiality will not necessarily help ensure that the nation's information assets are well protected or that its cyberdefense resources are well deployed. In fact, because many of the best-trained and most creative experts work in the private sector, blanket secrecy will limit the government's ability to attract new innovations that could serve the public interest. Washington would be better off following a more "open-source" approach to information sharing.
The cybersecurity threat is real. Adversaries can target networks, application software, operating systems, and even the ubiquitous silicon chips inside computers, which are the bedrock of the United States' public and private infrastructure.
All evidence indicates that the country's defenses are already being pounded, and the need to extend protection from computer networks and software to computer hardware is urgent. The U.S. government can no longer afford to ignore the threat from computer-savvy rivals or technologically advanced terrorist groups, because the consequences of a major breach would be catastrophic.