Black Code: Inside the Battle for Cyberspace
Author: Ronald J. Deibert
A remarkable component of Stuxnet was its ability to cross “air-gapped” computing systems that are not actually connected to the Internet. In April 2012, the website Isssource.com, belonging to Industrial Safety and Security Source, published an article alleging that “former and serving U.S. intelligence officials” had said that an Iranian double agent working for Israel had inserted Stuxnet into the Iranian control systems using a corrupt memory stick. The article’s author, former United Press International journalist Richard Sale, stated that the double agent was probably a member of the Iranian dissident group, the Mujahedeen-e Khalq (MEK), a shadowy organization with Israeli government connections that is believed to be behind the assassinations of key Iranian nuclear scientists.
Stuxnet was specifically designed to infect only certain types of supervisory control and data acquisition (SCADA) systems used for real-time data collection, and to control and monitor critical infrastructure – hydroelectric facilities, power plants, nuclear enrichment systems, and so on. The devices that control the physical components of SCADA systems are called programmable logic controllers (PLCs), and Stuxnet was developed in such a way as to target only two PLC models programmed with the Siemens Step 7 software – the S7-315 and S7-417 – both of which are used in the Iranian nuclear centrifuges.
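To make that targeting concrete, the sketch below shows, in Python, the kind of model check a Stuxnet-style payload would have to perform before doing anything at all. It is a conceptual illustration only: the function names and data fields are invented assumptions, not Stuxnet’s actual code or the real Siemens Step 7 interfaces.

```python
# Conceptual sketch only: illustrates the "check the PLC model before acting"
# logic described above. Names and fields are illustrative assumptions, not
# Stuxnet's actual code or the real Siemens Step 7 interfaces.

TARGETED_MODELS = {"S7-315", "S7-417"}  # the two PLC families named in the text


def read_plc_model(plc_info: dict) -> str:
    """Stand-in for querying a connected controller for its CPU/model type."""
    return plc_info.get("model", "unknown")


def should_deploy_payload(plc_info: dict) -> bool:
    """Stay dormant unless the controller matches one of the targeted models."""
    return read_plc_model(plc_info) in TARGETED_MODELS


if __name__ == "__main__":
    for plc in ({"model": "S7-315"}, {"model": "S7-1200"}, {"model": "unknown"}):
        action = "deploy payload" if should_deploy_payload(plc) else "do nothing"
        print(f"{read_plc_model(plc):>8}: {action}")
```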
Stuxnet was designed to disable the centrifuges by inducing rapid fluctuations in the rotation speed of their motors. Unchecked, this would eventually cause them to blow apart, and one of the most remarkable aspects of the virus was a piece of deception created to confuse Iranian personnel monitoring the plants. Stuxnet secretly recorded what normal operations at the plant looked like, and then played these readings back to the plant operators (like a pre-recorded security tape) so that everything seemed to be in good order. While the operators were watching a normal set of operating results on their monitors, the centrifuges were actually spinning out of control. According to the New York Times, over and over again the Iranians sent teams of scientists down to the centrifuges with two-way radios, reporting back to the operators what they witnessed. They were utterly bewildered by the discrepancy between what they were seeing first-hand in the physical plant and what the monitors were reporting to the operators. Stuxnet was designed, as one insider put it, to make the Iranians “feel stupid.”
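The record-and-replay deception can itself be sketched in a few lines. The code below is a hypothetical illustration of the concept only, with invented sensor values and names; it is not Stuxnet’s implementation.

```python
# Hypothetical illustration of the record-and-replay deception described above;
# values and names are invented, and this is not Stuxnet's implementation.
import itertools
import random


def record_normal_readings(read_sensor, samples=100):
    """Phase 1: quietly record what 'normal' telemetry looks like."""
    return [read_sensor() for _ in range(samples)]


def replayed_feed(recorded):
    """Phase 2: loop the recorded readings back to the operator display forever."""
    return itertools.cycle(recorded)


if __name__ == "__main__":
    # Illustrative 'live' sensor: nominal rotor speed with small jitter.
    live_sensor = lambda: 1000.0 + random.uniform(-2.0, 2.0)
    baseline = record_normal_readings(live_sensor)
    fake_feed = replayed_feed(baseline)

    # The operator console now reads from fake_feed, while the real rotor
    # speed is being driven up and down out of view.
    for _ in range(5):
        print(f"operator display: {next(fake_feed):7.1f} (looks normal)")
```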
• • •
While remarkably complex in some ways, Stuxnet is hardly extraordinary in others. Some analysts have described it as a Frankenstein of existing cyber criminal tradecraft – bits and pieces of existing knowledge patched together to create a chimera. The analogy is apt and, just like the literary Frankenstein, the monster may come back to haunt its creators. The virus leaked out and infected computers in India, Indonesia, and even the U.S., a leak that occurred through an error in the code of a new variant of Stuxnet sent into the Natanz nuclear enrichment facility. This error allowed the Stuxnet worm to spread into an engineer’s computer when it was hooked up to the centrifuges, and when he left the facility and connected his computer to the Internet the worm did not realize that its environment had changed. Stuxnet began spreading and replicating itself around the world. The Americans blamed the Israelis, who admitted nothing, but whoever was at fault, the toothpaste was out of the tube.
The real significance of Stuxnet lies not in its complexity, or in the political intrigue involved (including the calculated leaks), but in the threshold that it crossed: major governments taking at least implicit credit for a cyber weapon that sabotaged a critical infrastructure facility through computer code. No longer was it possible to counter the Kasperskys and Clarkes of the world with the retort that their fears were simply “theoretical.” Stuxnet had demonstrated just what type of damage can be done with black code.
• • •
For some, Stuxnet represents a dangerous and highly unpredictable new form of conflict; for others, it taps into something far more attractive, the prospect of “clean” or “civilized” warfare: precise, surgical, virtual, and, most importantly, bloodless. “You’re seeing an evolution of warfare that’s really intriguing,” argues Phil Lieberman, a security consultant and chief executive of Lieberman Software in Los Angeles, “… warfare where no one [dies].” The minister in charge of Britain’s armed forces, Nick Harvey, echoes a similar sentiment: “[If] a government has arrived at the conclusion that it needs, out of its sense of national interest or national security, to deliver an effect against an adversary … arguably this [Stuxnet] is quite a civilized option.”
The appeal of this argument is intuitive. If we can undertake acts of sabotage without killing or physically harming people, this does seem to represent progress, a new, gentler form of warfare. In this respect, the argument is the exact inverse of the neutron bomb debates of the 1970s and 1980s. The neutron bomb was an enhanced radiation weapon under development during the Carter and Reagan administrations that would kill people while leaving buildings and infrastructure intact, through an intense burst of neutron radiation rather than blast. (Soviet General Secretary Leonid Brezhnev memorably described it as a “capitalist bomb” because it would destroy people, but not property.) Stuxnet-type weapons, on the other hand, are more like something inspired by Unabomber Ted Kaczynski: they would target industrial-technological systems, but leave people alone.
The attraction of technology that allows one to believe in sanitary or “virtual war” has a long pedigree. Political scientist James Der Derian has spent considerable time turning over the argument, and believes that the appeal of high-tech means of fighting clean wars comes from it being “the closest we moderns have [come] to a deus ex machina swooping in from the skies to fix the dilemmas of world politics, virtually solving intractable political problems through technological means.” But the “solutions” offered by virtual war mask the violence that invariably accompanies the use of high-impact technological weapons, and ignore the new problems and unforeseen consequences that arise. When high-tech weapons are marketed, available, and perceived as “clean,” there are strong pressures to adopt military over diplomatic solutions in times of crisis. “When war becomes the first, rather than the last, means to achieve security in the new global disorder,” says Der Derian, “what one technologically can do begins to dominate what one legally, ethically, and pragmatically should do.” Meanwhile, the actual killing involved in warfare recedes into the background the more the application of force resembles a machinelike simulation or a computer game. “Virtuous war is anything but less destructive, deadly or bloody for those on the receiving end of the big technological stick.”
Stuxnet-style attacks may seem like a higher order of sanitized conflict, but the Iranians undoubtedly do not feel that way. The question is, how will they react to Stuxnet? They may continue to develop and refine their own cyber warriors who will attack back with their own black code. In response to Stuxnet, Brigadier General Gholamreza Jalali, the head of Iran’s Passive Defense Organization, said that the Iranian military was prepared “to fight our enemies [in] cyberspace and Internet warfare.”
Writing in the Bulletin of the Atomic Scientists, R. Scott Kemp argues, “Each new cyberattack becomes a template for other nations – or sub-national actors – looking for ideas. Stuxnet revealed numerous clever solutions that are now part of a standard playbook. A Stuxnet-like attack can now be replicated by merely competent programmers, instead of requiring innovative hacker elites. It is as if with every bomb dropped, the blueprints for how to make it immediately follow. In time, the strategic advantage will slowly fade and once-esoteric cyberweapons will slowly become weapons of the weak.” And the Iranians’ response may not come via cyberspace at all, but rather in a way that is as spectacular and grotesque as Stuxnet was stealthy and clean. We can now only wait and see.
Apart from unintended blowback, another dynamic bears closer scrutiny: the politically calculated revelations about Stuxnet being a U.S. and Israeli operation will most certainly fan arguments for the legitimacy – indeed, the urgency – of governments developing their own cyber warfare capabilities, lest they be left behind. Stuxnet did not start the cyber arms race, but it marks a major milestone and raises the bar considerably. And this is only the beginning. In October 2012, President Obama signed Presidential Policy Directive 20, authorizing the U.S. military to engage in cyber operations abroad to thwart cyber attacks on U.S. government and private networks. The directive establishes the “rules of engagement” to guide the operations. An unnamed senior administration official told the Washington Post: “What it does, really for the first time, is explicitly talk about how we will use cyber operations … Network defense is what you’re doing inside your own networks … Cyber operations is stuff outside that space.”
The world’s most powerful state is now generally perceived as having been responsible for using computer code to successfully sabotage another country’s critical infrastructure, and for ramping up offensive operations across the board. Not surprisingly, other countries are following suit. A 2011 study undertaken by James A. Lewis and Katrina Timlin of the Center for Strategic and International Studies – notably, done prior to the 2012 Stuxnet revelations – found that thirty-three states included cyber warfare in their military planning and organization, with twelve already having plans to establish cyber commands in their armed forces. Some, like India, boast about developing offensive cyber attack capabilities, while others are no doubt just being more discreet.
• • •
A few weeks after the Stuxnet revelations hit the news, there was a brief event that passed quickly through the news cycle but deserved more attention. Twitter went dark for a few minutes, leaving the global Twitterati at a complete loss. Speculation and rumours abounded. Was this the work of the hacktivist group Anonymous? The Iranian government? According to Twitter’s company blog post, “Today’s Turbulence Explained,” the outage was due to a “cascading bug,” a software defect “with an effect that isn’t confined to a particular software element, but rather … ‘cascades’ into other elements as well.” Tweeting in response, the father of cyberspace, science fiction author William Gibson (known on Twitter as @GreatDismal), laid out a simple but alarming hashtag: #andsoitbegins.
And so it begins, a series of cascading bugs reaching deeper and deeper into the infrastructure that surrounds us, bugs that are accidental, partly accidental, and accidental by design. In 2010, under Operation Network Raider, American authorities won thirty convictions and seized $143 million worth of counterfeit network computer equipment manufactured in China. (One man, Ehab Ashoor, bought counterfeit Cisco equipment from an online vendor located in China, and was intending to sell it to the U.S. Marine Corps for combat communications in Iraq.) In 2012, a year-long probe by the Senate Armed Services Committee found 1,800 cases of fake electronic components destined for American military equipment: 1 million bogus parts, mostly from discarded electronic waste being recycled in China. The report found the bogus parts in SH-60B helicopters, in C-130J and C-27J cargo planes, and in the U.S. Navy’s P-8A Poseidon plane. A July 2012 article in Ars Technica noted that “more than 500 days after Stuxnet the Siemens S7 has not been fixed.” That same month, Wired reported on a Canadian company, RuggedCom, that makes equipment and software for critical industrial control systems. It had planted a backdoor (a means to remotely access a system) into one of its products, by design. The login credentials for the backdoor included a static username, “factory,” assigned by the vendor, which couldn’t be changed by customers. The company-generated password was based on the individual media access control (MAC) address of each device.
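The Wired report does not disclose how the password was actually derived from the MAC address, but the structural weakness does not depend on the details. The sketch below assumes a hypothetical derivation (a short hash of the MAC) purely to show why any fixed, vendor-wide function of a public identifier amounts to publishing the password.

```python
# Hypothetical sketch: the vendor's real derivation is NOT given in the report,
# so the hash below is an invented stand-in. The point is structural: any fixed,
# vendor-wide function of a device's (broadcast, often Internet-indexed) MAC
# address can be recomputed by anyone who learns that MAC.
import hashlib


def derive_backdoor_password(mac_address: str) -> str:
    """Invented stand-in derivation: hash the normalized MAC, keep a short digest."""
    normalized = mac_address.lower().replace(":", "").replace("-", "")
    return hashlib.md5(normalized.encode()).hexdigest()[:8]


if __name__ == "__main__":
    mac = "00:0a:dc:12:34:56"  # example MAC; real ones leak in banners and scans
    print("username:", "factory")                      # static vendor account named in the report
    print("password:", derive_backdoor_password(mac))  # recomputable by any third party
```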
Researcher Justin W. Clarke (no relation to Richard Clarke) has shown how, by searching the Internet via the SHODAN search tool, anyone could discover MAC addresses for industrial control systems, and then employ a simple computer script he engineered to log in to those systems. This is a far cry from the elaborate operational planning that went into Stuxnet: all that is involved is one person, one search, and one script, and the result is total access! Clarke quietly notified RuggedCom, which did nothing for months, leading him to go public with his discovery. “It is esoteric, it is obscure, but this equipment is everywhere,” said Clarke, explaining his reasoning. “I was walking down the street and they had one of the traffic control cabinets that controls stop lights open and there was a RuggedCom switch, so while you and I may not see it, this is what’s used in electric substations, in train control systems, in power plants and in the military. That’s why I personally care about it so much.” And as if this were not enough, the story ends on another menacing note: “RuggedCom, which is based in Canada, was recently purchased by the German conglomerate Siemens.”
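To give a rough sense of how little Clarke’s “one search, one script” requires, the sketch below uses the public Shodan Python library to enumerate banners matching a product keyword. The query string, field handling, and API key are assumptions, and the login step he described is deliberately omitted; operators can run the same kind of search defensively against their own address space to find gear they did not know was exposed.

```python
# Rough sketch of the discovery step described above, using the public Shodan
# library (pip install shodan). Query string, field handling, and API key are
# assumptions; the subsequent login step is deliberately omitted.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder


def find_exposed_devices(query: str = "RuggedCom", limit: int = 10):
    api = shodan.Shodan(API_KEY)
    results = api.search(query)
    print(f"total results reported: {results['total']}")
    for match in results["matches"][:limit]:
        banner = match.get("data", "")
        first_line = banner.splitlines()[0] if banner else ""
        # Banners from embedded gear frequently leak the device's MAC address,
        # the only secret the credential scheme above depends on.
        print(match["ip_str"], match.get("port"), first_line)


if __name__ == "__main__":
    find_exposed_devices()
```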
• • •
The evolution of human–computer interaction has taken many twists and turns over the decades, but there is an undeniable trajectory. With the first giant mainframe computers – mechanical structures sprouting wires and vacuum tubes that took up entire rooms – one computer was shared by many people. Today almost half the world has access to several computing devices that they individually own and operate. At home I have a MacBook Air (more like a large mint wafer than a computer), a sleek Power Mac G5 in my office, and the now omnipresent iPhone in my pocket. We are evolving into a species of ubiquitous computing, with tiny digital devices embedded in just about everything around us, many of them operating without any direct human intervention at all.
Eugene Kaspersky, Richard Clarke, and others may sound like broken records or self-serving fear-mongers, but there is no denying the evolving cyberspace ecosystem around us: we are building a digital edifice for the entire planet, and it sits above us like a house of cards. We are wrapping ourselves in expanding layers of digital instructions, protocols, and authentication mechanisms, some of them open, scrutinized, and regulated, but many closed, amorphous, and poised for abuse, buried in the black arts of espionage, intelligence gathering, and cyber and military affairs. Is it only a matter of time before the whole system collapses? “If one extrapolates into the future,” Arthur Koestler once said with respect to the nuclear predicament, “the probability of disaster approaches statistical certainty.” Is cyberspace any different?