Countdown to Zero Day: Stuxnet and the Launch of the World's First Digital Weapon

This expansion in government bug-hunting operations highlights an important issue that got little consideration when the DoD task force was first developing its offensive doctrine a decade ago, and that even today has received little public attention and no debate at all in Congress—that is, the ethical and security issues around stockpiling zero-day vulnerabilities and exploits in the service of offensive operations. In amassing zero-day exploits for the government to use in attacks, instead of passing the information about holes to vendors to be fixed, the government has put critical-infrastructure owners and computer users in the United States at risk of attack from criminal hackers, corporate spies, and foreign intelligence agencies who no doubt will discover and use the same vulnerabilities for their own operations.

As noted previously, when researchers uncover vulnerabilities, they generally disclose them to the public or privately to the vendor in question so that patches can be distributed to computer users. But when military and intelligence agencies need a zero-day vulnerability for offensive operations, the last thing they want to do is have it patched. Instead, they keep fingers crossed that no one else will discover and disclose it before they’ve finished exploiting it. “If you’ve built a whole operational capability based on the existence of that vulnerability, man, you’ve just lost a system that you may have invested millions of dollars and thousands of man hours in creating,” Andy Pennington, a cybersecurity consultant for K2Share, said at a conference in 2011. Pennington is a former weapons-systems officer in the Air Force whose job before retiring in 1999 was to review new cyberspace technologies and engineer next-generation weapons for the Air Force.[38] “You are not going to hire teams of researchers to go out and find a vulnerability and then put it on the web for everybody to see if you’re trying to develop [an attack for it],” he later said in an interview.[39] “We’re putting millions of dollars into identifying vulnerabilities so that we can use them and keep our tactical advantage.”

But it’s a government model that relies on keeping everyone vulnerable so that a targeted few can be attacked—the equivalent of withholding a vaccination from an entire population so that a select few can be infected with a virus.

Odds are that while Stuxnet was exploiting four zero-day vulnerabilities to attack systems in Iran, a hacker or nation-state cyberwarrior from another country was exploiting them too. “It’s pretty naïve to believe that with a newly discovered zero-day, you are the only one in the world that’s discovered it,” Howard Schmidt, former cybersecurity coordinator for the White House and former executive with Microsoft, has said. “Whether it’s another government, a researcher or someone else who sells exploits, you may have it by yourself for a few hours or for a few days, but you sure are not going to have it alone for long.”[40]

Certainly the .LNK vulnerability was already known to the Zlob banking gang in 2008, two years before Stuxnet used it. Information about the print-spooler vulnerability was also in the public domain for others to discover and use.[41] Who knows how long the other zero days Stuxnet used might have been known and used by others in different attacks? In 2007, Immunity, a security firm in Florida, determined that the average zero-day exploit survived in the wild for 348 days before being discovered on systems. The ones with the longest life-span could live in hiding for nearly three years.[42] Today the situation isn’t much different, with the average life-span of a zero day now ten months, and others lurking in systems undiscovered for as long as two and a half years.[43]

Shortly after he took office in 2009, President Obama announced that cybersecurity in general and securing the nation’s critical infrastructure in particular were top priorities for his administration. But withholding information about vulnerabilities in US systems so that they can be exploited in foreign ones creates a schism in the government that pits agencies that hoard and exploit zero days against those, like the Department of Homeland Security, that are supposed to help secure and protect US critical infrastructure and government systems.

In his remarks at the 2011 conference, Andy Pennington acknowledged that there were “competing interests” in government when it came to the vulnerability issue, but he said when the government found vulnerabilities it wanted to exploit, it used “coordinated vulnerability disclosure”—a kind of limited disclosure—to “facilitate the defense of the United States” in a way that still allowed the government to retain the ability to attack. He said the DoD worked “very closely with Microsoft on the enterprise side,” as well as with the makers of control systems, to let them know about vulnerabilities found in their systems. “But I would like to stress again that the objective is to handle this … so that we can sustain operations,” he said. To that end, you would want to be “very deliberate [in] how you disclose it and how it’s fixed.”[44]

Though he didn’t elaborate on what limited disclosure involved, others have suggested it’s about providing information about vulnerabilities to DoD administrators—so they can take steps to protect military systems from being attacked—while still withholding it from the vendor and the public, to prevent adversaries from learning about them. Microsoft also reportedly gives the government and private companies advance notice when it learns of new security holes found in its software, to help the government take steps to protect its systems before a patch is available. But this can equally serve as a handy tipoff to the NSA to retire any exploits already being used to attack that vulnerability—before Microsoft discloses it publicly—or, conversely, to quickly exploit machines using the vulnerability before it gets patched.[45]

Greg Schaffer, former assistant secretary of Homeland Security, told NPR that DHS, which helps protect the government’s nonmilitary systems, does occasionally get assistance “from the organizations that work on the offensive mission,” though he didn’t indicate if this meant sharing information with DHS about vulnerabilities so they could be patched.[46] But “whether they bring their work [to us] is something they have to decide,” he said. “That is not something that we worry about.”

Another DHS official, however, says he can’t recall having “ever seen a vulnerability come to us from DoD in a disclosure.… We would like to have as many vulnerabilities disclosed and coordinated as possible to give us the best defensive posture.” But while it was frustrating not to get such disclosures, he recognized that it was “the nature of the beast” if the government was to still retain its ability to attack adversaries, and he didn’t see any way to resolve it.[47]

Though information about vulnerabilities might not get passed from the offensive side to the defensive side to be fixed, there were in fact times when vulnerabilities uncovered by the defensive side got passed to the offensive side. This might occur, for example, to make sure that a vulnerability in a control system already being exploited by the NSA or other agencies wasn’t disclosed and patched too soon. A former DHS official said that this “vulnerabilities equities process” for control systems, as it’s called, began some time after the Aurora Generator Test was conducted in 2007. Since then, vulnerabilities that government researchers find in other control systems get vetted by an equities panel to make sure their disclosure won’t harm ongoing operations. “If someone is using it … under their authorities for a legitimate purpose … well, we’d have to balance the necessity of disclosing it based on the value of leaving it open for a while,” the former official said.

The equities process in government has a long tradition. In World War II, for example, when the British cracked Germany’s Enigma code and discovered that Allied convoys were being targeted by the Germans, they had to weigh the benefits of redirecting convoys away from attack—and thus risk tipping off the Germans that their code had been cracked—against the cost of sacrificing a convoy to continue exploiting a critical intelligence source.

The US equities process involves a central committee composed of representatives from multiple departments and agencies—DoD, Justice Department, State Department, Homeland Security, the White House, and the intelligence community—and is patterned after one developed by the Committee on Foreign Investment in the United States, known as the CFIUS process, which weighs the national security implications of foreign investments in the United States.

In the case of software vulnerabilities, if government researchers discover a security hole in a PLC that is commonly used, for example, they submit the finding to the committee to see if anyone has an equity interest in it. “Everyone has a say in the likelihood of impacts to companies or systems [from the vulnerability being disclosed or not],” the official said. “It’s all done via e-mail on a classified network, and everyone comes back and says yea or nay. And if there’s a yea, then we discuss it. If everything is nay, then we just go on our normal responsible vulnerability disclosure process.”
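
A minimal sketch of that yea/nay flow, with the agency names, function names, and structure chosen purely for illustration (this is not a description of the government’s actual tooling or membership):

```python
# Illustrative only: model of the "yea/nay" equities vote described above.
def equities_review(finding: str, votes: dict) -> str:
    """votes maps each committee member to True ("yea": it claims an equity
    interest in keeping the bug unpatched) or False ("nay")."""
    if any(votes.values()):
        return f"{finding}: hold for discussion"      # someone wants it kept open
    return f"{finding}: responsible disclosure"       # no equities; notify the vendor

votes = {"DoD": False, "Justice": False, "State": False,
         "DHS": False, "White House": False, "IC": True}
print(equities_review("auth bypass in a widely used PLC", votes))
# -> auth bypass in a widely used PLC: hold for discussion
```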

Asked if DHS ever passed information about vulnerabilities to the offensive side so that they could specifically be exploited, he said no. But he acknowledged that the very act of discussing vulnerabilities with the equities committee might inadvertently provide members with ideas about new vulnerabilities to exploit. While he says he never heard anyone on the committee tell an industrial control system representative not to publicly disclose a vulnerability so they could exploit it, he conceded that they probably wouldn’t be so overt about it. “They would probably just silently take notes, and we may never ever know [if they developed an exploit for] it,” he said.

1. John Arquilla and David Ronfeldt, “Cyberwar Is Coming!” published by RAND in 1993 and reprinted as chapter 2 in Arquilla and Ronfeldt’s book In Athena’s Camp: Preparing for Conflict in the Information Age (RAND, 1997).

2. He was speaking to PBS Frontline in 2003 for its show “CyberWar!” Interview available at pbs.org/wgbh/pages/frontline/shows/cyberwar/interviews/arquilla.html.

3. The operation was thwarted by a system administrator named Cliff Stoll, who stumbled upon the intrusion while investigating the source of a seventy-five-cent billing discrepancy. Stoll recounts the story in his now-classic book The Cuckoo’s Egg: Tracking a Spy Through a Maze of Computer Espionage (New York: Doubleday, 1989).

4. Jonathan Ungoed-Thomas, “How Datastream Cowboy Took U.S. to the Brink of War,” Toronto Star, January 1, 1998.

5. Information warfare didn’t just involve offensive and defensive cyber operations; it also included psychological operations, electronic warfare, and physical destruction of information targets.

6. A thirty-nine-page book recounts the history of the 609th. A copy of the book, titled 609 IWS: A Brief History Oct. 1995–June 1999, was obtained under a FOIA request and is available at securitycritics.org/wp-content/uploads/2006/03/hist-609.pdf.

7. John “Soup” Campbell speaking as part of a panel titled “Lessons from Our Cyber Past: The First Military Cyber Units,” at the Atlantic Council, March 5, 2012. Campbell was the first commander of the Joint Task Force-Computer Network Defense in December 1998 and later was principal adviser to the CIA director on military issues. A transcript of the panel discussion can be found at atlanticcouncil.org/news/transcripts/transcript-lessons-from-our-cyber-past-the-first-military-cyber-units.

8. Bradley Graham, “U.S. Studies a New Threat: Cyber Attack,” Washington Post, May 24, 1998.

9. Ibid.

10. Some of the information about the first task force and the history of the military’s cyber activities comes from a March 2012 interview with Jason Healey, head of the Cyber Statecraft Initiative at the Atlantic Council in Washington, DC, and an original member of the military’s first cyber task force. Healey also recounts some of the history of cyber conflict in a book he edited, which is one of the first to examine it. See A Fierce Domain: Conflict in Cyberspace, 1986 to 2012 (Cyber Conflict Studies Association, 2013).

11. Maj. Gen. James D. Bryan, founding commander of the JTF-Computer Network Operations, speaking on the panel “Lessons from Our Cyber Past: The First Military Cyber Units.”

12. This and other quotes from Sachs come from author interview, March 2012.

13. “HOPE” stands for Hackers on Planet Earth.

14. Electronic warfare, which dates to World War I, involves the use of electromagnetic and directed energy to control the electromagnetic spectrum to retard enemy systems. Computer network attacks, by contrast, are defined as operations designed to disrupt, deny, degrade, or destroy information resident on computers and computer networks, or the computers or networks themselves, according to Department of Defense Directive 3600.1.

15. Author redacted, “IO, IO, It’s Off to Work We Go,” Cryptolog: The Journal of Technical Health (Spring 1997): 9. Cryptolog is an internal classified quarterly newsletter produced by and for NSA employees that includes everything from book reviews to employee profiles to technical articles about topics of interest. In 2013, the agency declassified issues published between 1974 and 1999 and released them publicly, though parts of them are still redacted. The archive is available at nsa.gov/public_info/declass/cryptologs.shtml.

16. Author redacted, “Thoughts on a Knowledge Base to Support Information Operations in the Next Millennium,” Cryptolog: The Journal of Technical Health (Spring 1997): 32.

17. William B. Black Jr., “Thinking Out Loud About Cyberspace,” Cryptolog: The Journal of Technical Health (Spring 1997): 4.

18. Author redacted, “IO, IO, It’s Off to Work We Go.”

19. William M. Arkin, “A Mouse that Roars?” Washington Post, June 7, 1999.

20. In 1999, the DoD’s Office of the General Counsel examined a range of existing treaties and international laws and concluded there was no international legal principle or corpus that clearly addressed the kind of cyber operations the military proposed conducting. Department of Defense Office of the General Counsel, An Assessment of International Legal Issues in Information Operations, published May 1999, available at au.af.mil/au/awc/awcgate/dod-io-legal/dod-lo-legal.pdf.

21. As an example of how reliant weapons systems are on software, during Operation Desert Storm in 1991, a Patriot missile defense system installed in Dhahran, Saudi Arabia, failed to intercept incoming Scud missiles because of a software problem in the control system that caused it to look for incoming Scuds in the wrong place. The Scud attack killed twenty-eight US soldiers. See “Software Problem Led to System Failure at Dhahran, Saudi Arabia,” US Government Accountability Office, February 4, 1992, available at gao.gov/products/IMTEC-92-26.
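
The GAO report cited in note 21 attributes the miss to accumulated clock error: the system counted time in tenths of a second and converted that count to seconds using a 24-bit fixed-point approximation of 0.1, which is slightly too small. A rough back-of-the-envelope sketch of that arithmetic (the code and the approximate Scud speed are illustrative assumptions, not taken from the book):

```python
# Illustration of the drift mechanism described in GAO/IMTEC-92-26.
stored_tenth = int(0.1 * 2**24) / 2**24   # 0.1 truncated to 24 fractional bits
error_per_tick = 0.1 - stored_tenth       # ~9.5e-8 s lost every tenth of a second

hours_up = 100                            # the battery had been running ~100 hours
ticks = hours_up * 3600 * 10              # number of tenth-second counts
drift = ticks * error_per_tick            # ~0.34 s of accumulated clock error

scud_speed = 1700                         # m/s, rough closing speed (assumption)
print(f"clock drift after {hours_up} h: {drift:.3f} s")
print(f"tracking offset at ~{scud_speed} m/s: {drift * scud_speed:.0f} m")
```

At roughly a third of a second of drift, the radar’s range gate no longer brackets the incoming missile, which is what “looking in the wrong place” means in practice.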

22. Bryan, “Lessons from Our Cyber Past.”

23. “The Information Operations Roadmap,” dated October 30, 2003, is a seventy-four-page report that was declassified in 2006, though the pages dealing with computer network attacks are heavily redacted. The document is available at http://information-retrieval.info/docs/DoD-IO.html.

24. Arquilla, Frontline “CyberWar!” interview. A Washington Post story indicates that attacks on computers controlling air-defense systems in Kosovo were launched from electronic-jamming aircraft rather than over computer networks from ground-based keyboards. Bradley Graham, “Military Grappling with Rules for Cyber,” Washington Post, November 8, 1999.

25. James Risen, “Crisis in the Balkans: Subversion; Covert Plan Said to Take Aim at Milosevic’s Hold on Power,” New York Times, June 18, 1999. A Washington Post story says the plan never came to fruition. “We went through the drill of figuring out how we would do some of these cyber things if we were to do them,” one senior military officer told the paper. “But we never went ahead with any.” Graham, “Military Grappling with Rules for Cyber.”

26. John Markoff and Thom Shanker, “Halted ’03 Iraq Plan Illustrates US Fear of Cyberwar Risk,” New York Times, August 1, 2009. According to Richard Clarke, it was the secretary of treasury who vetoed it. See Richard Clarke and Robert Knake, Cyber War: The Next Threat to National Security and What to Do About It (New York: Ecco, 2010), 202–3. In general, nations have observed an unspoken agreement against manipulating financial systems and accounts out of concern over the destabilizing effect this could have on global markets and economies.

27. David A. Fulghum, Robert Wall, and Amy Butler, “Israel Shows Electronic Prowess,” Aviation Week, November 25, 2007. The article is no longer available on the Aviation Week website but has been preserved in full at warsclerotic.wordpress.com/2010/09/28/israel-shows-electronic-prowess.

28. “Electronic Warfare: DOD Actions Needed to Strengthen Management and Oversight,” published by the US Government Accountability Office, July 2012.

29. Eric Schmitt and Thom Shanker, “US Debated Cyberwarfare in Attack Plan on Libya,” New York Times, October 17, 2011.

30. Greg Miller, Julie Tate, and Barton Gellman, “Documents Reveal NSA’s Extensive Involvement in Targeted Killing Program,” Washington Post, October 16, 2013.

31. Barton Gellman and Ellen Nakashima, “U.S. Spy Agencies Mounted 231 Offensive Cyber-Operations in 2011, Documents Show,” Washington Post, August 30, 2013.

32. The NSA accomplishes this by installing the implant in the BIOS of machines as well as in the master boot record—core parts of the hard drive that don’t get wiped when software on the computer gets upgraded or erased. See “Interactive Graphic: The NSA’s Spy Catalog,” Spiegel Online, available at spiegel.de/international/world/a-941262.html.
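
For readers unfamiliar with the master boot record: it is the first 512-byte sector of a disk, read at boot before any operating system loads, which is why code planted there survives reinstalls and upgrades. A minimal read-only sketch of what lives in that sector (the device path assumes Linux naming and root privileges; this is a generic illustration, unrelated to any agency tooling):

```python
import struct

MBR_SIZE = 512
BOOT_SIGNATURE = b"\x55\xaa"          # magic bytes at offsets 510-511

def read_mbr(device_path="/dev/sda"):
    """Read the first sector of a disk (requires root; Linux device naming assumed)."""
    with open(device_path, "rb") as disk:
        return disk.read(MBR_SIZE)

def describe_mbr(sector):
    print("boot signature present:", sector[510:512] == BOOT_SIGNATURE)
    # Bytes 0-445 hold boot code; four 16-byte partition entries start at offset 446.
    for i in range(4):
        entry = sector[446 + i * 16: 446 + (i + 1) * 16]
        part_type = entry[4]                               # partition type byte
        lba_start, num_sectors = struct.unpack("<II", entry[8:16])
        print(f"partition {i}: type=0x{part_type:02x} "
              f"start_lba={lba_start} sectors={num_sectors}")

if __name__ == "__main__":
    describe_mbr(read_mbr())
```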

33. In one case, the NSA and the UK spy agency Government Communications Headquarters, or GCHQ, used a sophisticated method called Quantum Insert to hack the machines of Belgian telecom workers to gain access to the telecom’s network and to a router the company used for processing the traffic of mobile phone users. The elaborate attack involved using high-speed servers the NSA had set up at key internet switching points to intercept the surfing traffic of system administrators who worked for the telecom. The spy agencies first collected extensive intelligence on the workers—their e-mail addresses, IP addresses, and possible surfing habits—then the high-speed servers watched for requests from the employees’ machines for specific web pages, such as the victim’s own LinkedIn profile page. When the victim tried to access the LinkedIn page, the server would intercept the request before it reached LinkedIn and would feed a fake LinkedIn page to the victim that injected malware into his machine. Once on the system administrator’s machine, the spy agencies could then use his credentials to gain access to other parts of the telecom network to subvert the router.
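
Security researchers have since published ways to spot this kind of injection on the wire: because the spoofed reply races the legitimate one, a network monitor will often see two TCP segments that claim the same position in the stream but carry different payloads. A minimal sketch of that check, assuming the scapy library and a hypothetical capture file named capture.pcap (a detection illustration only, not a description of the agencies’ tooling):

```python
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP, Raw   # pip install scapy

def injection_candidates(pcap_path):
    """Flag flows where two segments share a sequence number but differ in
    content, the telltale left when a spoofed response races the real one."""
    seen = defaultdict(dict)          # (src, dst, sport, dport) -> {seq: payload}
    suspects = []
    for pkt in rdpcap(pcap_path):
        if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
            continue
        flow = (pkt[IP].src, pkt[IP].dst, pkt[TCP].sport, pkt[TCP].dport)
        seq = pkt[TCP].seq
        payload = bytes(pkt[Raw].load)
        earlier = seen[flow].get(seq)
        if earlier is not None and earlier != payload:
            suspects.append((flow, seq))  # same stream position, different data
        else:
            seen[flow][seq] = payload
    return suspects

if __name__ == "__main__":
    for flow, seq in injection_candidates("capture.pcap"):
        print("possible injected segment:", flow, "seq", seq)
```

Ordinary retransmissions repeat the same bytes at the same sequence number, so only mismatched payloads are flagged.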

34. Glenn Greenwald and Ewen MacAskill, “Obama Orders US to Draw up Overseas Target List for Cyber-Attacks,” Guardian, June 7, 2013. The eighteen-page Presidential Policy Directive 20 was issued in October 2012, and refers to offensive cyberattacks as Offensive Cyber Effects Operations.

35. Gellman and Nakashima, “US Spy Agencies Mounted 231 Offensive Cyber-Operations.”

36. Roger A. Grimes, “In His Own Words: Confessions of a Cyber Warrior,” InfoWorld, July 9, 2013.

37. Ibid.

38. Pennington was speaking at the Industrial Control Systems–Joint Working Group conference in 2011. The conference is sponsored by the Department of Homeland Security.

39. Author interview, November 2011.

40. Joseph Menn, “Special Report: US Cyberwar Strategy Stokes Fear of Blowback,” Reuters, May 10, 2013, available at reuters.com/article/2013/05/10/us-usa-cyberweapons-specialreport-idUSBRE9490EL20130510.

41. See chapter 6 for previous mention of how these two vulnerabilities had already been discovered by others before Stuxnet’s authors used them in their attack.

42. Summer Lemon, “Average Zero-Day Bug Has 348-Day Lifespan, Exec Says,” IDG News Service, July 9, 2007, available at computerworld.com/s/article/9026598/Average_zero_day_bug_has_348_day_lifespan_exec_says.

43. Robert Lemos, “Zero-Day Attacks Long-Lived, Presage Mass Exploitation,” Dark Reading, October 18, 2012, available at darkreading.com/vulnerabilities—threats/zero-day-attacks-long-lived-presage-mass-exploitation/d/d-id/1138557. The research was conducted by Symantec.

44. Pennington, Industrial Control Systems–Joint Working Group conference, 2011.

45. Michael Riley, “U.S. Agencies Said to Swap Data with Thousands of Firms,” Bloomberg, June 14, 2013, available at bloomberg.com/news/2013-06-14/u-s-agencies-said-to-swap-data-with-thousands-of-firms.html.

46. Tom Gjelten, “Stuxnet Raises ‘Blowback’ Risk in Cyberwar,” Morning Edition, NPR, November 2, 2011, available at npr.org/2011/11/02/141908180/stuxnet-raises-blowback-risk-in-cyberwar.

47. Author interview, 2012.
