Cyber War: The Next Threat to National Security and What to Do About It

Authors: Richard A. Clarke, Robert K. Knake

There are bound to be vulnerabilities in anything so large. Today, the Internet has grown so extensive that it is running out of addresses. When the Internet was cobbled together, its inventors came up with a numbering system to identify every device that would connect to the network. They decided that every address would be a 32-bit number, large enough to allow for about 4.29 billion addresses. Never did they imagine that we would need more than that.

As of last count, there are nearly 6.8 billion people living on the planet. On the current standard, that’s more than one address for every two people. And today, that is not enough. As the West grows more dependent on the Internet, and as the Second and Third worlds expand their use, 4.29 billion addresses cannot possibly satisfy all the people and devices that will want to connect to the web. That the Internet is running out of addresses may, on its own, be a manageable problem. If we move quickly to convert to the IPv6 address standard, by the time we run out of IPv4 addresses, in about two years, most devices should be able to operate on the new standard. But step back for a moment and a cause for concern begins to emerge.
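
To make the arithmetic concrete, here is a minimal sketch in C, an illustration using only the figures above, not code from any real system:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* IPv4 addresses are 32-bit numbers: 2^32 possible values. */
        unsigned long long ipv4 = 1ULL << 32;      /* 4,294,967,296 */

        /* IPv6 widens the address to 128 bits: 2^128 possible values. */
        long double ipv6 = powl(2.0L, 128.0L);     /* about 3.4e38 */

        printf("IPv4 address space: %llu (about 4.29 billion)\n", ipv4);
        printf("IPv6 address space: %.2Le\n", ipv6);

        /* At a 6.8 billion population, IPv4 offers well under one
           address per person. */
        printf("IPv4 addresses per person: %.2f\n", (double)ipv4 / 6.8e9);
        return 0;
    }

Even at the 2009 population, IPv4 works out to about 0.63 addresses per person; IPv6’s 128-bit space removes that ceiling entirely.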

The Pentagon envisions a near-future scenario in which every soldier on the battlefield will be a hub in a network, and as many as a dozen devices carried by that soldier will be plugged into the network and require their own addresses. If you stroll through the appliance aisle at a high-end home-goods store, you will notice that many of the washing machines, dryers, dishwashers, stoves, and refrigerators are advertised as controllable through the Internet. If you are at work and want the oven preheated to 425 degrees when you arrive home, you can log onto a webpage, access your oven, and set it to the right temperature from your desktop.

What all this means is that as we move beyond 4.29 billion Internet addresses, the degree to which our society depends on the Internet, for everything from controlling our thermostats to defending our nation, is set to explode, and with it the security problem will only get worse. What this could mean in a real-world conflict is something that until recently most policy makers in the Pentagon were loath to think about. It means that if you can hack into things on the Internet, you might not just be able to steal money. You might be able to cause some real damage, including damage to our military. So exactly how is it that you can hack into things, and why is that possible?

SOFTWARE AND HARDWARE

Of the three things about cyberspace that make cyber war possible, the most important may be the flaws in the software and hardware. All of those devices on the Internet we just discussed (the computer terminals and laptops, the routers and switches, the e-mail and webpage servers, the data files) are made by a large number of companies. Often, separate companies make the software that runs those devices. In the U.S. market, most laptops are made by Dell, HP, and Apple. (A Chinese company, Lenovo, is making a dent after having bought IBM’s laptop computer unit.) Most big routers are made by Cisco and Juniper, and now by the Chinese company Huawei. Servers are made by HP, Dell, IBM, and a large number of others, depending upon their purpose. The software they run is written mainly by Microsoft, Oracle, IBM, and Apple, but also by many other companies. Although these are all U.S. corporations, the machines (and sometimes the code that runs on them) come from many places.

In The World Is Flat, Thomas Friedman traces the production of his Dell Inspiron 600m notebook from the phone order he places with a customer-service representative in India to its delivery at his front door in suburban Maryland. His computer was assembled at a factory in Penang, Malaysia. It was “co-designed” by a team of Dell engineers in Austin and notebook designers in Taiwan. Most of the hard work, the design of the motherboard, was done by the Taiwanese team. For the rest of the thirty key components, Dell used a string of different suppliers. Its Intel processor might have been made in the Philippines, Costa Rica, Malaysia, or China. Its memory might have been made in Korea by Samsung, or by lesser-known companies in Germany or Japan. Its graphics card came from one of two factories in China. The motherboard, while designed in Taiwan, could have been made at a factory there, but probably came from one of two plants in mainland China. The keyboard came from one of three factories in China, two of them owned by Taiwanese companies. The wireless card was made either by an American-owned company in China or by a Chinese-owned company in Malaysia or Taiwan. The hard drive was probably made by the American company Seagate at a factory in Singapore, or by Hitachi or Fujitsu in Thailand, or by Toshiba in the Philippines.

After all these parts were assembled at the factory in Malaysia, a digital image of the Windows XP operating system (and probably Microsoft Office) was burned onto the hard drive. The code for that software, amounting to more than 40 million lines for XP alone, was written at a dozen or more locations worldwide. After the system was imprinted with the software, the computer was packaged up, placed on a pallet with 150 similar computers, and flown on a 747 to Nashville. From there, the laptop was picked up by UPS and shipped to Friedman. All told, Friedman proudly reports that “the total supply chain for my computer, including suppliers of suppliers, involved about four hundred companies in North America, Europe, and primarily Asia.”

Why does Friedman spend six pages in a book about geopolitics documenting the supply chain for the computer he wrote the book on? Because he believes that the supply chain that built his computer knits together the countries that were part of that process in a way that makes interstate conflicts of the sort we saw in the twentieth century less likely. Friedman admits this is an update of his “Golden Arches Theory of Conflict Prevention” from his previous book, which argued that two states that both had a McDonald’s would not go to war with each other. This time, Friedman’s tongue-in-cheek argument has a little more meat to it than the hamburger theory. The supply chain is a microeconomic example of the trade that many theorists of international relations believe is so beneficial to the countries involved that even threatening war would not be worth the potential economic loss. Friedman looks at the averted crisis in 2004, when Taiwanese politicians running on a pro-independence platform were voted out of office. In his cute bumper-sticker-slogan way, Friedman observed that “Motherboards won over motherland,” concluding that the status quo economic relationship was more valuable than independence to the Taiwanese voters.

Or maybe the Taiwanese voters just didn’t want to end up dead after China invaded, which is what China more or less said it would do if Taiwan declared its independence. What Friedman sees as a force that makes conflict less likely, the supply chain for producing computers, may in fact make cyber warfare more likely, or at least make it more likely that the Chinese would win in any conflict. At any point in the supply chain that put together Friedman’s computer (or your computer, or the Apple MacBook Pro that I am writing this book on), vulnerabilities were introduced, most accidentally, but probably some intentionally, that can make it both a target and a weapon in a cyber war.

Software is an intermediary between human and machine: it translates a human intention, such as finding movie times online or reading a blog, into something a machine can understand. Computers really are just evolved electronic calculators. Early computer scientists realized that timed electrical pulses could be used to represent 1’s and that the absence of a pulse could represent 0’s, like long and short bursts in Morse code. The base-10 numbers that humans use, because we have ten fingers, could be translated into this binary code that a machine could understand, so that when, for instance, the 5 key on an early electronic calculator was pressed, it would close circuits sending a pulse, a pause, and another pulse in quick succession to represent the 1, 0, and 1 that make up the number 5 in a binary logic system.
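
A small C sketch makes that translation concrete (illustrative only): it prints the binary digits of a base-10 number, the same pulse-and-pause pattern just described:

    #include <stdio.h>

    /* Print the binary digits of n, most significant first -- the
       pulse (1) and pause (0) pattern described above. */
    static void print_binary(unsigned int n) {
        if (n > 1)
            print_binary(n >> 1);  /* handle the higher-order bits first */
        printf("%u", n & 1);       /* then the lowest-order bit */
    }

    int main(void) {
        printf("5 in binary: ");
        print_binary(5);           /* prints 101: pulse, pause, pulse */
        printf("\n");
        return 0;
    }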

All computers today are just evolutions of that same basic process. A simple e-mail message is converted into electric pulses that can be carried over copper wires and fiber-optic cables and then retranslated into a message readable by a human eye. To make that happen, someone needed to provide instructions that a computer could understand. Those instructions are written in programming languages as computer code, and most people who write code make mistakes. The obvious ones get fixed, or else the computer program does not function as intended; but the less obvious ones are often left in the code and can be exploited later to gain access. As computer systems have gotten faster, computer programs have grown more complex to take advantage of all the new speed and power. Windows 95 had fewer than 10 million lines of code. Windows XP, 40 million. Windows Vista, more than 50 million. In a little over a decade, the number of lines of code grew by a factor of five, and with it the number of coding errors. Many of those coding errors allow hackers to make the software do something it was not supposed to do, like let them in.

In order to manipulate popular software into doing the wrong thing, like letting you assume system administrator status, hackers design small applications, “applets,” that target specific weaknesses and mistakes in software design or system configuration. Because computer crime is a big business, and preparation for cyber war is even better funded, criminal hackers and cyber warriors are constantly generating new ways to trick systems. These hacker applications are called malware. On average in 2009, a new type or variant of malware entered cyberspace every 2.2 seconds. Do the math. The three or four big antivirus software companies have sophisticated networks to look for new malware, but they find and issue a “fix” for only about one in every ten pieces of malware. The fix is a piece of software designed to block the malware. By the time the fix gets to the antivirus company’s customers, often days, and sometimes weeks, have gone by. During that time, companies, government departments, and home users are entirely vulnerable to the new malware. They won’t even know if they have been hit by it.
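
Taking up that invitation to “do the math,” here is a quick back-of-the-envelope calculation in C, using only the figures quoted above:

    #include <stdio.h>

    int main(void) {
        /* One new piece or variant of malware every 2.2 seconds
           (the 2009 figure cited above). */
        double per_day  = 86400.0 / 2.2;     /* roughly 39,000 a day */
        double per_year = per_day * 365.0;   /* roughly 14 million a year */

        printf("New malware per day:  %.0f\n", per_day);
        printf("New malware per year: %.0f\n", per_year);

        /* If only about one in ten gets a fix, the other nine in ten
           circulate unblocked. */
        printf("Unfixed per year:     %.0f\n", per_year * 0.9);
        return 0;
    }

That works out to roughly 39,000 new malware samples a day, of which, at a one-in-ten fix rate, some 35,000 go unaddressed.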

Frequently the malware is sitting on innocent websites, waiting for you. Let’s say you surf to the website of a Washington think tank to read their latest analysis of some important public policy issue. Think tanks are notorious for not having enough money and not giving enough attention to creating secure and safe websites. So, as you are reading about the latest machinations over health care or human rights in China, a little piece of malware is downloading itself onto your computer. You have no way of knowing, but now your new friend in Belarus is logging your every keystroke. What happens when you log into your bank account or to the Virtual Private Network of your employer, the Really Big Defense Company? You can probably guess.

The most common software error for years, and one of the easiest to explain, is the “buffer overflow.” Code for a webpage is supposed to be written in such a way that when a user comes to that webpage, the user can enter only a limited amount of data, like a user name and password. It’s supposed to be like Twitter, a program where you can enter, say, no more than 140 characters. But if the code writer forgets to put in the symbols that limit the number of characters, a user can put in more. Instead of just a user name or password, you could enter entire lines of instruction code. Maybe you enter instructions to add an account for yourself. Those instructions overflow the limited area where a public user is supposed to enter information and spill into the application itself. The instruction code reads as if a systems administrator had entered it and—ping!—you are inside.
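
Here is a minimal sketch of that bug class in C, a hypothetical fragment for illustration, not code from any real product:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char password[16];   /* room for 15 characters plus the '\0' */
        char input[256];

        printf("Password: ");
        if (fgets(input, sizeof input, stdin) == NULL)
            return 1;

        /* BUG: strcpy() never checks the destination's size. Anything
           beyond 15 characters overflows the buffer and overwrites
           adjacent memory -- which an attacker can fill with
           instructions of their own choosing. */
        strcpy(password, input);

        /* A bounded copy avoids the overflow:
           snprintf(password, sizeof password, "%s", input); */
        printf("Stored: %s\n", password);
        return 0;
    }

The fix is a single bounded copy; the vulnerability is the single unbounded one, which is exactly why such errors are so easy to leave behind in millions of lines of code.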

Software errors are not easily discovered. Even experts cannot usually identify coding errors or intentional vulnerabilities by visual inspection in a few lines of code, let alone in millions. There is now software that checks software, but it is far from able to catch all the glitches in millions of lines. Each line of that code had to be written by a computer programmer, and each additional line of code increased the number of bugs introduced into the software. In some cases, programmers actually put those bugs in intentionally. The most famous case, and one that illustrates a larger phenomenon, occurred when somebody at Microsoft dumped an entire flight-simulator program inside the Excel 97 spreadsheet software. Microsoft only discovered it when people started thanking the company for it. Programmers may do it for fun, for profit, or in the service of a competing company or foreign intelligence service; but whatever their motive, it is a nearly impossible task to ensure that a few lines of code allowing for unauthorized access through a “trapdoor” are kept out of such massive programs. The original Trojan Horse had hidden commandos; today we have hidden commands of malicious code. In the case of the Excel easter egg, you began by opening a new blank workbook and pressing F5; when a reference box opened, you typed in “X97:L97” and pressed Enter, then pressed Tab. This took you to cell M97 on the spreadsheet. Then, if you clicked on the Chart Wizard button while holding down the Control and Shift keys—ping!—a flight-simulator program popped right up.

Sometimes developers of code leave behind secret trapdoors so they can get back into the code easily later on to update it. Sometimes, unknown to their company, they do it for less reputable reasons. And sometimes other people, like hackers and cyber warriors, do it so they can get into parts of a network where they are not authorized. Thus, when someone hacks into a software product under development (or later), they may not just be stealing a copy; they may be adding to it. Intentional trapdoors, as well as others that occur because of mistakes in code writing, sometimes allow a hacker to gain what is called “root.” Hackers trade or sell each other “root kits.” If you have “root access” to a software program or a network, you have all the permissions and authorities of the software’s creator or the network’s administrator. You can add software. You can add user accounts. You can do anything. And, importantly, you can erase any evidence that you were ever there. Think of that as a burglar who wipes away his fingerprints and then drags a broom behind him to the door, erasing his footprints.
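
In code, a trapdoor can be remarkably mundane. Here is a minimal sketch in C; the account names and passwords are invented for illustration and come from no real product:

    #include <stdio.h>
    #include <string.h>

    /* Returns 1 if the login succeeds, 0 otherwise. */
    static int authenticate(const char *user, const char *pass) {
        /* Normal path: check against the real account records
           (omitted here). */
        if (strcmp(user, "admin") == 0 && strcmp(pass, "secret") == 0)
            return 1;

        /* Trapdoor: an undocumented login the developer left behind.
           It bypasses every check and grants full access. */
        if (strcmp(user, "maint") == 0 && strcmp(pass, "letmein") == 0)
            return 1;

        return 0;
    }

    int main(void) {
        /* The hidden account works even though it appears in no
           user database. */
        printf("%d\n", authenticate("maint", "letmein"));  /* prints 1 */
        return 0;
    }

Two lines buried in millions, and they are indistinguishable at a glance from a legitimate credential check, which is why auditing for them is so hard.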
