Surveillance or Security?: The Risks Posed by New Wiretapping Technologies

Although the numbers used in an IP address are meaningful to a networking
expert (to those in the know, an address of the form 18.x.y.z, where x, y,
and z are between 0 and 255, designates MIT), most people find the
sequence of numbers largely incomprehensible. Instead people use names
(such as www.mit.edu) rather than IP addresses to designate Internet locations; machines around the Internet resolve these names into the numeric
IP addresses.33
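The name-to-address resolution just described can be sketched with a toy
lookup table. The entries below are hypothetical stand-ins (real resolution
is performed by DNS servers, and the actual address of www.mit.edu will
differ); only the 18.0.0.0/8 block attributed to MIT comes from the text.

```python
import ipaddress

# A toy name-to-address table standing in for the DNS. Entries are
# illustrative only; real resolution queries DNS servers.
name_table = {
    "www.mit.edu": "18.0.0.1",     # hypothetical address in MIT's 18.x.y.z block
    "example.net": "203.0.113.7",  # address from a documentation range
}

def resolve(name):
    """Return the numeric IP address recorded for a hostname."""
    return name_table[name]

addr = resolve("www.mit.edu")
print(addr)  # 18.0.0.1
# The address falls inside the 18.0.0.0/8 block the text attributes to MIT:
print(ipaddress.ip_address(addr) in ipaddress.ip_network("18.0.0.0/8"))  # True
```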

Internet routing is very different from PSTN routing. There is no restriction that a path take just five hops. More importantly, the routing of a
communication is dynamic: packets travel whichever way is least congested, and in theory (though not often in practice) the different packets
of a file transfer, VoIP call, email, or any other IP-based communication
could each take different paths. Packet switching puts the destination
address into each packet of data, and the Internet's routers and switches
constantly broadcast the best ways to get from here to there.34
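The idea that each packet carries its own destination address can be
sketched as follows. The field names and the four-character packet size are
illustrative, not any real protocol's format.

```python
# Split a message into packets, each carrying the destination address
# and a sequence number so it can be routed (and later reordered)
# independently of its siblings.
def packetize(message, dest, size=4):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"dest": dest, "seq": n, "data": c} for n, c in enumerate(chunks)]

packets = packetize("HELLO, INTERNET!", dest="18.0.0.1")
print(len(packets))  # 4
print(packets[2])    # {'dest': '18.0.0.1', 'seq': 2, 'data': 'NTER'}
```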

The flexibility of the system engendered many uses. The original vision
for the ARPANET was a resource-sharing research network, and users in
one part of the network could employ a computer somewhere on the
network to do their computation. By 1985, the network was not only a
technology used for research and development, it was supporting forms of
collaboration that the original designers had never anticipated.35 Email was
the surprising "killer app,"36 an application of such great value that its use
alone could be a reason for having the network.37

Network redundancy proved to be magic. "We had the realization that
if there's an overload in one place, traffic will move around it," Baran
recalled years later.38 In other words, the network routes around problems.
Redundancy provided even more. Increased size created increased reliability; indeed, redundancy increased exponentially with network size. No
initial call setup work meant there was high efficiency at any bandwidth,
holding time, or scale. And distributed routing could be deployed on any
network topology (as long as all endpoints are connected to the network).39
That last point means that anyone could deploy an Internet over an existing network. This was bad news for the telephone companies, whose role
was reduced to providing the physical infrastructure and not much else.

2.3 Creating Other IP-Based Networks

ARPANET's success spurred the development of other networks,40 some
with quite different properties. The U.S. Department of Energy (DoE) set
up MFENet for collaboration in magnetic fusion energy; DoE's high-energy
physicists created one of their own: HEPNet. Academics started CSNET to
enable academic collaboration. Other networks were developed with other
purposes in mind. Two Duke University graduate students set up USENET,
a forum for electronic discussion groups, while BITNET, the "Because It's
There Net," was started for universities to do email, file transfer, and so on.
(CSNET, USENET, and BITNET are all "store-and-forward" networks, in
which the data are stored in a transit machine until it receives an
acknowledgment that the data have arrived at the next device in the
network. This is unlike the Internet's best-effort protocols.)41

In the mid-1980s the value of the ARPANET was clear, and it was
time to bring the work to a new level. The U.S. National Science Foundation (NSF) had already connected the nation's supercomputing centers
over what is now a very slow network: 56,000 bits per second. Now
the NSF went a major step further: NSF built a high-speed national
network connecting six supercomputing centers42 to the ARPANET. (Supercomputers are machines that are exceptionally fast at the time they are introduced.
They are often used for large scientific or military computations.) That was
not all.

NSF decided to build a general-purpose research network,43 and various
existing regional and local academic networks connected to the system.
The effort was an immediate success: in the first year the system needed
an upgrade in order to cope with traffic. While it took the ARPANET ten
years to reach a thousand computers, NSFNET went from two thousand computers at its start to more than two million in 1993.44 NSFNET had one
rule: traffic could be only for the purposes of research and education.

That limitation spurred private growth, including the development of
companies like AOL, which provided a limited, "walled garden" approach
to the Internet for the public. The growth inside and outside NSFNET
demonstrated that commercialization was needed;45 this occurred in
the mid-1990s. This commercialized network, basically the same one
we use today, is larger and faster than NSFNET, but relies on the same
protocols.46

2.4 The Network Stack

To understand the insecurities of the network, it is necessary to delve a bit
more deeply into the way communication occurs over the network. This
section and related sections in the next chapter on security risks are more
technical than other sections of the book, and some readers may choose
to skip them.

Communications networks are described in terms of "layers," in which
each layer has specific jobs (e.g., routing, transport). At their simplest,
communications systems have two layers: physical and logical. The physical layer consists of the wires and cables that make up the communication
system. These can be copper wires, fiber-optic cables, or even the wireless
communications systems that are supplanting them in many parts of the
world. (Many parts of the world had not built the wired infrastructure and
so skipped over it in favor of completely wireless communications.) Both
the PSTN and the Internet use this same type of physical layer. Indeed,
they use the same type of transmission facilities, often sharing them, and
they use the same type of electronic routing and switching devices to move
communications through the network.

Recall that the two communications systems differ in the logical layers
that direct how communications traverse the network. The PSTN establishes a circuit for a call and this circuit is used by the communication for
the entire duration of the call. Internet communications are broken into
packets and each packet is sent over the network individually. Packets from
a single communication could follow different paths. Any router handling
a packet may subdivide it into several smaller packets or even drop it
entirely. Because packets
may arrive out of order, proper sequencing and reassembly of the packets
into the original message are the responsibility of the receiving end, called
a network host or simply a host.
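The receiving host's job of putting packets back in their original order can
be sketched like this; the packet format is made up for illustration.

```python
# Reassemble a message from packets that arrived out of order, using
# the sequence number each packet carries.
def reassemble(packets):
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

arrived = [  # delivered out of order by the network
    {"seq": 1, "data": "net "},
    {"seq": 2, "data": "works"},
    {"seq": 0, "data": "Inter"},
]
print(reassemble(arrived))  # Internet works
```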

Engineers talk about the Internet having a protocol stack; this model,
shown in figure 2.2, simplifies envisioning the communication system. The
bottom of the Internet protocol stack is the physical layer, consisting of the
connections between nodes. Above this is the link layer, the logical layer
that communicates with the actual physical hardware of the network. The
link layer provides a simple program that moves data from one device in
the network to the next. Then the question is which way a packet should
exit a link. This is settled by the Internet layer, the next level of the protocol
stack. There are various functionalities at the Internet layer,47 but here I
will focus only on packet routing. This is where the IP part of Cerf and
Kahn's TCP/IP is implemented, using a device called a router.

Figure 2.2

Internet protocol stack. Illustration by Nancy Snyder.

A router is a simple computer connecting two or more networks. Each
arriving packet has the numeric IP address of its source and destination.
The router examines a packet's destination address and determines from
this which link out of the router the packet should take. Of course, the
router cannot possibly know all the best routes to all possible destinations
(of which there are currently billions on the Internet). Instead the router
knows something local: it knows the routers to which it is connected and
it knows a little bit about where these are connected. The beauty of the Cerf and Kahn protocol is that even though each router is mostly concerned only about the packet's next hop, packets make it to their destination efficiently.

Routers store routing information in routing tables. These tables are
small databases, recording locations (addresses) of other network devices
and the most efficient routes to them. Internet routing changes as new
devices are added or taken off the network, new networks created, new
ISPs linked to, communication links fail or are restored to service, and so
on. Thus routing tables are constantly updated through the routers sharing
new routing information with one another. (In this, the Internet differs
from its predecessor, the PSTN, where such routing tables had been determined in advance by the phone company.) This on-the-fly updating provides a potential source of security problems, an issue I will discuss in the
next chapter.

A typical routing table will have entries for many destination subnetworks. This table will include a cost for delivering the packet to a subnetwork via each of its neighboring routers. (There are many possible metrics
for "cost," including bandwidth, number of hops, delay, reliability, and
communications cost.) Because there are billions of potential destinations,
routing tables cannot possibly list an entry for each possible destination.
Instead they aggregate nodes into subnetworks that are numerically adjacent48 and that share similar characteristics. This is sufficient to send the
packet on its way to the next router, which is closer to the packet's destination. If the communication is TCP-based, the packets of a communication, whether it is email, Instant Messaging, or the contents of a web page,
are numbered.
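The cost-per-neighbor bookkeeping described above might be sketched as
follows. The subnetworks, neighbor names, and hop counts are all invented
for illustration.

```python
# Destination subnetwork -> {neighboring router: cost}. Here "cost" is
# a hop count, but bandwidth, delay, or reliability could serve instead.
table = {
    "18.0.0.0/8":    {"router-x": 3, "router-y": 5},
    "128.30.0.0/16": {"router-y": 2, "router-z": 4},
}

def best_neighbor(subnet):
    """Forward toward the neighbor advertising the lowest cost."""
    neighbors = table[subnet]
    return min(neighbors, key=neighbors.get)

print(best_neighbor("18.0.0.0/8"))     # router-x
print(best_neighbor("128.30.0.0/16"))  # router-y
```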

IP does routing on a best-effort basis; it does not guarantee packet delivery. Packets may be lost to congestion, insufficient bandwidth, and various
hiccups in the network. IP also does not guarantee the packets will arrive
in order. IP provides data transport without regard to the type of applications being supported or the type of communications technologies being
used. The protocol constitutes the "narrow waist" of the Internet stack and
this is its strength. By minimizing the number of service interfaces, the IP
hourglass maximizes interoperability. This has been key to the innovation
that has flourished on the network.49

TCP ensures reliability. TCP first determines whether all packets have
arrived. It does so by gently letting the sender know about missed packets;
if, for example, all packets up to number fifteen have arrived and then
packet seventeen appears, TCP sends a message back to the source identifying packet fifteen as the highest sequence number received in order. Once all the packets are in, TCP reassembles the communication.50 TCP monitors
not only packet delivery but also network congestion. By examining what
is happening to the connection between two machines, TCP can not only
determine congestion, the protocol can do something about it. Once a TCP
connection has been established, TCP controls the flow of data sent out
to the network, increasing the flow when it appears that bandwidth is
available, throttling back when it appears it is not. TCP does this by
limiting the number of packets that it has sent out but for which it has
not yet received an acknowledgment.51
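The acknowledgment scheme in the example above (packets one through
fifteen arrive, sixteen is missing, seventeen appears) can be sketched as
a cumulative acknowledgment:

```python
# Cumulative acknowledgment: report the highest sequence number n such
# that packets 1..n have all arrived; the sender retransmits from n+1.
def highest_in_order(received):
    n = 0
    while n + 1 in received:
        n += 1
    return n

received = set(range(1, 16)) | {17}  # 1..15 arrived, 16 lost, 17 arrived
print(highest_in_order(received))    # 15 -> sender resends packet 16
```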

Once packets are reassembled at the recipient's end, the user is in a
position to do something with the transmission. But she needs a way to
interact with the data: to read an email, transfer a file, browse a web page,
send an Instant Message (IM). That is the role of the application layer. The programs
to do so are developed by someone who does not have to know about
transport on the Internet, or reassembling packets, or how the different
devices sending the communications behaved (or even if the communication devices had changed from last month). All the application designer
has to know is how to write a program to transform the packets into email,
files, web pages, IMs, and so on. The simplicity embodied in the Internet's
layered architecture means that applications can freely use the delivery
functionality of the network while ignoring the mechanics of what is
occurring at lower levels of the protocol stack.

2.5 Mobile Communications

When TCP/IP was being developed, there was no issue about IP addresses
for portable devices: computers were big heavy objects that did not move.
The IP address of a device stayed fixed because the device stayed fixed, and
so did the network routes to it. The world has changed with the advent of
laptop computers, tablets, and other portable devices. In many situations,
the IP address for these devices is assigned dynamically, and it may be
different each time the device connects to the network. This is true even
if the device is connecting to the Internet from the same location; it
happens because the local network through which the device is connecting
is likely to have multiple addresses available to it.
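Dynamic assignment can be sketched with a toy address pool. The addresses
and device names are made up, and real networks delegate this job to a
protocol such as DHCP; the point is only that a reconnecting device may
draw a different address from the pool.

```python
# A local network hands out whichever pooled address is free, so the
# same device may receive a different address on each connection.
pool = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]
leases = {}  # device -> address currently assigned

def connect(device):
    free = [a for a in pool if a not in leases.values()]
    leases[device] = free[0]
    return leases[device]

def disconnect(device):
    del leases[device]

first = connect("laptop")   # 192.168.1.10
connect("tablet")           # 192.168.1.11
disconnect("laptop")
connect("phone")            # reuses the freed 192.168.1.10
second = connect("laptop")  # 192.168.1.12
print(first != second)      # True: a new address on reconnection
```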
