Cybersecurity and Cyberwar


Authors: Peter W. Singer and Allan Friedman


Many in the private sector have watched these discussions of separating out the Internet or making it anew and started to explore their associated business opportunities. In 2012, the security company Artemis declared its intention to buy a new domain of .secure and create a “neighborhood on the Internet where security is required, and users know that.” Rather than focus on any network aspects, the .secure domain would be a brand of sorts. Any website wishing to use the .secure appendage would have to meet Artemis's security standards, including no hosted malware, fully implemented top-of-the-line protections, and rapid vulnerability patching.

Will these approaches work? It depends on two features. The first is what kind of security is actually offered. The commercial .secure domain offers no protection from malicious actors on the network or on your computer. Instead, it will only secure the websites themselves. Your bank's website won't attack you, but you would still be vulnerable to having your bank account credentials stolen.

Similarly, the government's secure Internet could take one of several forms of added security. The least amount of reengineering and effort would simply be to build a model of opting in at the network level, allowing more government analysis of network traffic to detect threats. This would offer benefits, to be sure, but would hardly create the kind of “more secure Internet” that people talk about as the needed alternative. More rigorous proposals require individual authentication at the network level to support connectivity. Even if we could engineer a mechanism to convey real authentication at the network level (as opposed to the application level), a vulnerable computer could still allow credential theft and undermine security.

The second feature to solve is scale. Network security is generally inversely correlated with size, while network utility is positively correlated. To put it another way, the bigger the network, the more security problems, but the smaller the network, the less useful it is. If the network spans a large set of organizations and risks, its security becomes less and less certain, depending on more and more people not making mistakes. But if the inner circle of protection doesn't extend that far, then it's not all that useful for solving the problems these advocates cite. For example, will the government approach include smaller organizations, a rural power plant for instance, that are clearly vulnerable? And how will those less able organizations live up to the newly high security standards set for the most critical infrastructure? Moreover, how would we treat a large organization with critical functionality? Does the payroll department also have to conform to the new super-duper security standards for the new secure Internet? If not, how do we enforce segregation between those systems that everyone uses and the critical systems that only some people use some of the time? As we've seen, separating secure and insecure systems by “air gapping” them is very difficult in practice, and hasn't been a guarantee of safety. If it were so simple, the Pentagon's Secret Internet Protocol Router Network (SIPRNet) wouldn't have been repeatedly compromised by relatively unsophisticated cyberattacks, nor would Stuxnet have been a problem for the Iranians.

These same kinds of problems strike the private side, just with added layers. Suppose your mother hears about all these newfangled cyberthreats and decides only to trust websites with the .secure domain. Now any company that wants to reach her must join this group. This seems a positive step for the market, shaping it toward more security. The problem is that the bigger the market gets, the more it increases the number of websites (and all the people behind them) that could deviate from the security goals. What's more, when her grandson comes and surfs the Web on her computer or phone, not always using .secure, she'll lose her protection, and when it no longer works the way she was sold, she will see little reason to continue following the brand.

In practice, too many of the concepts of building an entirely new Internet end up a lot like the idea of relying on the “evil bit” of Bellovin's joke. This isn't to say that the goal of a less risky Internet isn't worth pursuing. But starting anew just isn't the easy option it is so often portrayed to be. Instead, the task is to analyze carefully the changes proposed, and compare their costs with their potential to advance specific security goals.

Rethink Security: What Is Resilience, and Why Is It Important?


“Hundreds of Thousands May Lose Internet.”

Articles with this headline not so ironically hit the Internet in the summer of 2012. The story started when the FBI caught the group behind the DNS Changer virus. This cybercriminal ring, based out of Estonia, had been able to infect more than 570,000 computers worldwide, reprogramming the victims' machines to use DNS servers run by the criminals. They would then steer the computers to fraudulent websites, where the hackers would profit (to the tune of over $14 million) from the sites the victims were tricked into visiting. But when the FBI got ready to shut down the ring, said Tom Grasso, an FBI supervisory special agent, “We started to realize that we might have a little bit of a problem on our hands because.… If we just pulled the plug on their criminal infrastructure and threw everybody in jail, the victims were going to be without Internet service. The average user would open up Internet Explorer and get ‘page not found' and think the Internet is broken.”
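The mechanics of the cleanup checks that followed are easy to sketch. The snippet below is a minimal, illustrative version of the kind of test the remediation effort encouraged: compare a machine's configured DNS resolver against the address blocks publicly reported as hosting the rogue servers (the ranges below are drawn from 2012 advisories about DNS Changer, but treat them as illustrative rather than authoritative):

```python
import ipaddress

# Address blocks publicly reported in 2012 as hosting the rogue
# DNS Changer resolvers (illustrative list).
ROGUE_RANGES = [
    ipaddress.ip_network("85.255.112.0/20"),
    ipaddress.ip_network("67.210.0.0/20"),
    ipaddress.ip_network("93.188.160.0/21"),
    ipaddress.ip_network("77.67.83.0/24"),
    ipaddress.ip_network("213.109.64.0/20"),
    ipaddress.ip_network("64.28.176.0/20"),
]

def resolver_is_rogue(resolver_ip: str) -> bool:
    """Return True if the configured DNS resolver falls inside a rogue block."""
    addr = ipaddress.ip_address(resolver_ip)
    return any(addr in net for net in ROGUE_RANGES)

print(resolver_is_rogue("85.255.113.7"))  # True: inside a rogue block
print(resolver_is_rogue("8.8.8.8"))       # False: a well-known clean resolver
```

An infected machine would quietly use one of these resolvers for every lookup, which is why victims had no visible sign of the compromise until the servers went dark.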

Faced with this problem, the FBI entered the Internet server business. With the help of the Internet Systems Consortium, on the night of the arrests the agency installed two of its own Internet servers to take over the operations of the rogue servers that the victims' computers had been using, which by then were sitting impounded in an FBI evidence locker. For the next nine months, the FBI tried to let victims know that their computers had been infected and how to fix the problem. But after running its own servers at a cost of $87,000, the FBI said it had to pull the plug on its safety net, hence the media warnings of mass Internet loss. Fortunately, the “Internet Doomsday” that Fox News described was avoided; it turned out many of the systems were no longer used, while ISPs set up various technical solutions to steer people to assistance.

In a world that depends so much on the Internet, the fear of losing access to it is very real. It's not just the lost social connections on venues like Facebook or Twitter, or the emptiness of a day without online cat videos, but the impact it can have on things like politics and economics. As a result, the need to build “resilience” against such shocks has become one of the magic words of cybersecurity.

Resilience is another one of those concepts that is both overused and underexplored. A study by the Homeland Defense Institute identified 119 different definitions of the term. The general idea behind resilience is to adapt to adverse conditions and recover. It is a wonderful concept, but the problem is that it can apply to everything from computer systems to our own love lives. Indeed, over 3,000 books have been written with the word “resilience” in their title, most of them in the self-help section!

In cybersecurity, we should think about resilience in terms of systems and organizations. Resilient systems and organizations are prepared for attacks and can maintain some functionality and control while under attack. “Intrusion tolerance” is how security expert Dan Geer frames it. “We must assume that intrusions have happened and will happen. We must maximize the probability that we can tolerate the direct effect of those intrusions, and that whatever damage is done by the intruder, the system can continue to do its job to the extent possible.”

There are three elements behind the concept. One is the importance of building in “the intentional capacity to work under degraded conditions.” Beyond that, resilient systems must also recover quickly, and, finally, learn lessons to deal better with future threats.

For decades, most major corporations have had business continuity plans for fires or natural disasters, while the electronics industry has measured what it thinks of as fault tolerance, and the communications industry has talked about reliability and redundancy in its operations. All of these fit into the idea of resilience, but most assume some natural disaster, accident, failure, or crisis rather than deliberate attack. This is where cybersecurity must go in a very different direction: if you are only thinking in terms of reliability, a network can be made resilient merely by creating redundancies. To be resilient against a hurricane just requires backups located elsewhere, ready to flip on if there is flooding at your main computer center. But in cybersecurity, an attacker who understands a network can go after the key nodes that connect all these redundancies, shutting the whole thing down.
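The difference between redundancy and resilience can be made concrete with a toy network. In the hypothetical sketch below (all node names are invented for illustration), two redundant data centers survive a random leaf failure, but the network splits in half when an attacker removes the single hub that joins them:

```python
from collections import deque

# A toy network: two redundant data centers ("dc1", "dc2") both depend
# on one interconnect node ("hub") -- the kind of key node an informed
# attacker would target.
NETWORK = {
    "dc1":  {"hub", "web1"},
    "dc2":  {"hub", "web2"},
    "hub":  {"dc1", "dc2"},
    "web1": {"dc1"},
    "web2": {"dc2"},
}

def still_connected(network, removed):
    """BFS from any surviving node; True if every survivor is still reachable."""
    nodes = set(network) - {removed}
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for nbr in network[queue.popleft()] - {removed}:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen == nodes

print(still_connected(NETWORK, "web1"))  # True: a random failure is absorbed
print(still_connected(NETWORK, "hub"))   # False: a targeted attack partitions it
```

Redundant capacity was there all along; what the targeted attack exploits is the topology that ties the redundancies together.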

Resilience in cybersecurity starts with the primary goal of preserving the functions of the organization. As a result, the actions needed to be resilient vary. Preparedness and continuity might depend on the ability to quickly lock down valuable information or dynamically turn on otherwise onerous defenses. Outward-facing Internet services could be shut down, with a less efficient but more secure alternative process in their place. In other situations, resilience might depend on mitigation or the ability to “fail gracefully.” One example is having mechanisms that keep attacks on web servers from gaining access to internal servers. It's the difference between your picture on the company website being defaced with a funny mustache, and sensitive data being compromised.
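One hypothetical way to express “failing gracefully” in code is a fallback path: when the primary data store is unreachable (whether from an outage or a defensive lockdown), the service degrades to a clearly labeled stale cache rather than failing outright, and it fails loudly when even that is impossible. All names below are invented for illustration:

```python
class BackendDown(Exception):
    """Raised when the primary data store cannot be reached."""

def fetch_live(account_id):
    # Stand-in for a real database call; here it always fails,
    # simulating an outage or a deliberate lockdown during an attack.
    raise BackendDown(account_id)

def fetch(account_id, cache):
    """Serve live data when possible; degrade to stale cache, flagged as such."""
    try:
        return {"source": "live", "data": fetch_live(account_id)}
    except BackendDown:
        if account_id in cache:
            return {"source": "stale-cache", "data": cache[account_id]}
        # Fail loudly and visibly -- never silently return nothing.
        raise

cache = {"alice": {"balance_shown": "as of last backup"}}
print(fetch("alice", cache))  # degraded but still functional, and honest about it
```

Tagging the response with its source matters: a degraded answer the operator can see is resilience, while a degraded answer that masquerades as a live one is exactly the kind of silent failure that prevents timely adaptation.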

This notion of multiple modes of planned failure is important to resilience planning: systems and organizations should not fail critically from a single attack but should have enough distributed control to continue operations. Another key aspect is that failure must be evident. If the system allows “silent failures,” its operators can't adapt in a timely fashion.

Still, it's not terribly useful for a policymaker or senior manager to send planners off with the dictums to “Be resilient” and “Expect the unexpected.” We need metrics to support organizational decision-making, influence system design, and guide technical investment. There are dependability metrics that measure, for instance, how critical any component is to the overall system. Similarly, understanding the means and timing of a system's recovery from the accidents or non-attack-related failures that normally happen can inform resilience against targeted attacks. Particularly useful are exercises that test security, such as having an outside “red team” probe potential vulnerabilities and the recovery from them, before an actual foe makes a real attack.
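As a hypothetical example of such a metric, mean time to recovery (MTTR) can be computed from routine incident logs and then tracked over time or compared against red-team exercise results. The incident timestamps below are invented for illustration:

```python
from datetime import datetime

# Invented incident log: (failure detected, service restored).
INCIDENTS = [
    (datetime(2012, 3, 1, 9, 0),   datetime(2012, 3, 1, 9, 45)),
    (datetime(2012, 5, 14, 2, 10), datetime(2012, 5, 14, 3, 40)),
    (datetime(2012, 8, 7, 16, 0),  datetime(2012, 8, 7, 16, 30)),
]

def mttr_minutes(incidents):
    """Mean time to recovery, in minutes, across logged incidents."""
    total = sum((up - down).total_seconds() for down, up in incidents)
    return total / len(incidents) / 60

print(mttr_minutes(INCIDENTS))  # 55.0
```

A number like this is crude on its own, but trending it, and comparing recovery from ordinary failures against recovery from simulated attacks, gives planners something firmer than “be resilient.”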

Resiliency cannot be separated from the human component, though. Adaptability and recovery cannot happen without individuals, processes, and practices that can quickly assess the situation, handle dynamic environments, and learn from their experiences. As the famous World War II British poster advises, the most resilient response is not to freak out that the Internet sky is falling, but rather to “Keep Calm and Carry On.”

The biggest challenge, perhaps, is that the structural aspects of resiliency are frequently at odds with other goals. Media outlets that write headlines warning of an “Internet Doomsday” want to draw in readers, not supply answers. The same government bureaucracies that should be building an ethic of public resilience benefit from public fears that drive up their budgets and powers. And even in private firms, the incentives for efficiency can drive in opposite directions. Redundancy is intuitively wasteful. Meeting day-to-day goals requires looking away from the big picture. And the kind of employee who is best at adapting to changing conditions may not be as good at doing the same thing again and again, which many firms want instead. Indeed, when the World Economic Forum tried to sell its corporate members on the importance of building cyber-resilient organizations, its business case mostly relied on public goodwill, which is nice, but less than compelling to companies focused on the bottom line.

There are, however, incentives to adopt a more holistic approach to resilience. Media that continually warn the sky is falling end up losing the trust and attention of their intended audience. Government leaders who can only talk in terms of threats become discounted. And organizational theorists have noticed that the businesses with the greatest apparent efficiency are actually too lean, and suffer from an inability to adapt or innovate. They lack the “organizational slack” that drives a positive culture and enables future growth. The same organizational features that create resiliency enable this organizational slack. Similarly, being better at self-assessment and cultivating employees who understand the broader goals of the organization help both resiliency and the broader mission.

There is no single definition, path, or strategy for resilience. We need to avoid treating it like a magical buzzword with no real meaning, whether as a falsely claimed property of every new product or as a black box inserted into organizational planning documents. Instead, resiliency is about understanding how the different pieces fit together and then how they can be kept together or brought back together when under attack. Like all the other cybersecurity solutions, it's not only a matter of architecture and organization, it's about people and processes.

Reframe the Problem (and the Solution): What Can We Learn from Public Health?

In 1947, the seven staff members of what was then known as the Office of Malaria Control in War Areas took up a collection to raise $10. They needed the money to buy fifteen acres of land in DeKalb County, Georgia, where they planned one day to build their new headquarters. Over the next six decades, the organization widened its focus beyond malaria, changed its name, and the $10 investment in real estate more than paid off. Today, the acreage just outside Atlanta houses the Centers for Disease Control (CDC), a program with 15,000 staff members that is considered one of the most successful government agencies in history. The CDC serves as a bulwark of the modern American public health system, having ended the scourge of killers like smallpox and now standing guard against new disease outbreaks like pandemic flu.
