When Kahn and Cerf returned from England, they refined their paper. Both men had a stubborn side. “We'd get into this argumentative state, then step back and say, ‘Let's find out what it is we're actually arguing about.'” Cerf liked to have everything organized before starting to write; Kahn preferred to sit down and write out everything he could think of, in his own logical order; reorganization came later. The collaborative writing process was intense. Recalled Cerf: “It was one of us typing and the other one breathing down his neck, composing as we'd go along, almost like two hands on a pen.”
By the end of 1973, Cerf and Kahn had completed their paper, “A Protocol for Packet Network Intercommunication.” They flipped a coin to determine whose name should appear first, and Cerf won the toss. The paper appeared in a widely read engineering journal the following spring.
Like Roberts's first paper outlining the proposed ARPANET seven years earlier, the Cerf-Kahn paper of May 1974 described something revolutionary. Under the framework described in the paper, messages would be encapsulated and decapsulated in “datagrams,” much as a letter is put into and taken out of an envelope, and sent as end-to-end packets. These messages would be called transmission-control protocol, or TCP, messages. The paper also introduced the notion of gateways, which would read only the envelope; only the receiving hosts would read the contents.
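The envelope idea can be sketched in a few lines of Python. This is a toy illustration of the concept, not the 1974 specification; every name here (`Datagram`, `encapsulate`, `gateway_forward`) is invented for the example. The point is the division of labor: hosts wrap and unwrap the contents, while a gateway routes on the envelope alone.

```python
from dataclasses import dataclass

@dataclass
class Datagram:
    src: str        # sending host's address, written on the envelope
    dst: str        # receiving host's address, written on the envelope
    payload: bytes  # the letter itself, opaque to every gateway en route

def encapsulate(src: str, dst: str, message: bytes) -> Datagram:
    """The sending host puts the letter into its envelope."""
    return Datagram(src, dst, message)

def gateway_forward(dgram: Datagram, routes: dict) -> str:
    """A gateway reads only the envelope (dst) to choose the next
    network; it never looks inside the payload."""
    return routes[dgram.dst]

def decapsulate(dgram: Datagram) -> bytes:
    """Only the receiving host takes the letter out of the envelope."""
    return dgram.payload

d = encapsulate("host-a", "host-b", b"hello across networks")
next_net = gateway_forward(d, {"host-b": "net-2"})   # routed by envelope alone
contents = decapsulate(d)                            # opened at the destination
```

Because the gateway touches only the header, any network that can carry the envelope can carry the message, whatever the contents.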
The TCP protocol also tackled the network reliability issues. In the ARPANET, the destination IMP was responsible for reassembling all the packets of a message when it arrived. The IMPs worked hard making sure all the packets of a message got through the network, using hop-by-hop acknowledgments and retransmission. The IMPs also made sure separate messages were kept in order. Because of all this work done by the IMPs, the old Network Control Protocol was built around the assumption that the underlying network was completely reliable.
The new transmission-control protocol, with a bow to Cyclades, assumed that the CATENET was completely unreliable. Units of information could be lost; others might be duplicated. If a packet failed to arrive or was garbled during transmission, and the sending host received no acknowledgment, an identical twin was transmitted.
The overall idea behind the new protocol was to shift the reliability from the network to the destination hosts. “We focused on end-to-end reliability,” Cerf recalled. “Don't rely on anything inside those nets. The only thing that we ask the net to do is to take this chunk of bits and get it across the network. That's all we ask. Just take this datagram and do your best to deliver it.”
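That end-to-end discipline can be sketched as a small simulation, again purely illustrative and with invented names, not the actual TCP mechanism: the network is asked only for best-effort delivery (here it deliberately drops the first copy of every packet), the sender keeps retransmitting an identical twin until an acknowledgment arrives, and the receiving host uses sequence numbers to discard duplicates.

```python
class FlakyNetwork:
    """All we ask of the net: best effort. This one drops every first copy."""
    def __init__(self):
        self.tries = {}
    def send(self, packet):
        seq = packet[0]
        self.tries[seq] = self.tries.get(seq, 0) + 1
        return packet if self.tries[seq] > 1 else None  # first copy is lost

class Receiver:
    """The destination host, not the network, guarantees reliability."""
    def __init__(self):
        self.seen, self.delivered = set(), []
    def accept(self, packet):
        seq, data = packet
        if seq not in self.seen:       # an identical twin is silently dropped
            self.seen.add(seq)
            self.delivered.append(data)
        return seq                     # acknowledgment back to the sender

def transmit(net, rx, seq, data, max_tries=5):
    """The sender retransmits until an acknowledgment arrives."""
    for attempt in range(1, max_tries + 1):
        delivered = net.send((seq, data))
        if delivered is not None and rx.accept(delivered) == seq:
            return attempt             # ack received; stop retransmitting
    raise TimeoutError("no acknowledgment")

net, rx = FlakyNetwork(), Receiver()
tries = [transmit(net, rx, i, w) for i, w in enumerate([b"one", b"two"])]
rx.accept((1, b"two"))                 # a late duplicate arrives; discarded
# tries == [2, 2]; rx.delivered == [b"one", b"two"]
```

Note that the network code knows nothing about acknowledgments or ordering; all the recovery logic lives at the two ends, which is precisely the shift the new protocol made.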
The new scheme worked in much the same way that shipping containers are used to transfer goods. The boxes have a standard size and shape. They can be filled with anything from televisions to underwear to automobiles; content doesn't matter. They move by ship, rail, or truck. A typical container of freight travels by all three modes at various stages to reach its destination. The only thing necessary to ensure cross-compatibility is the specialized equipment used to transfer the containers from one mode of transport to the next. The cargo itself doesn't leave the container until it reaches its destination.
The invention of TCP would be absolutely crucial to networking. Without TCP, communication across networks couldn't happen. If TCP could be perfected, anyone could build a network of any size or form, and as long as that network had a gateway computer that could interpret and route packets, it could communicate with any other network. With TCP on the horizon, it was now obvious that networking had a future well beyond the experimental ARPANET. The potential power and reach of what not only Cerf and Kahn, but Louis Pouzin in France and others, were inventing was beginning to occur to people. If they could work out all the details, TCP might be the mechanism that would open up worlds.
As more resources became available over the ARPANET and as more people at the sites became familiar with them, Net usage crept upward. For news of the world, early Net regulars logged on to a machine at SRI, which was connected to the Associated Press news wire. During peak times, MIT students logged on to other computers on the Net to get their work done. Acoustic and holographic images produced at UC Santa Barbara were digitized on machines at USC and brought back over the Net to an image processor at UCSB, where they could be manipulated further. The lab at UCSB was outfitted with custom-built image-processing equipment, and UCSB researchers translated high-level mathematics into graphical output for other sites. By August 1973, while TCP was still in the design phase, traffic had grown to a daily average of 3.2 million packets.
From 1973 to 1975, the Net expanded at the rate of about one new node each month. Growth was proceeding in line with Larry Roberts's original vision, in which the network was deliberately laden with large resource providers. In this respect, DARPA had succeeded wonderfully. But the effect was an imbalance of supply and demand; there were too many resource providers, and not enough customers. The introduction of terminal IMPs, first at Mitre, then at NASA's Ames Research Center and the National Bureau of Standards with up to sixty-three terminals each, helped right the balance. Access at the host sites themselves was loosening up. The host machine at UCSB, for example, was linked to minicomputers in the political science, physics, and chemistry departments. Similar patterns were unfolding across the network map.
Like most of the early ARPANET host sites, the Center for Advanced Computation at the University of Illinois was chosen primarily for the resources it would be able to offer other Net users. At the time Roberts was mapping out the network, Illinois was slated to become home to the powerful new ILLIAC IV, a massive, one-of-a-kind high-speed computer under construction at the Burroughs Corporation in Paoli, Pennsylvania. The machine was guaranteed to attract researchers from around the country.
An unexpected twist of circumstances, however, led the University of Illinois to become the network's first large-scale consumer instead of a resource supplier. Students on the Urbana campus were convinced the ILLIAC IV was going to be used to simulate bombing scenarios for the Vietnam War and to perform top-secret research on campus. As campus protests erupted over the impending installation, university officials grew concerned about their ability to protect the ILLIAC IV. When Burroughs finished construction of the machine, it was sent to a more secure facility run by NASA.
But the Center for Advanced Computation already had its IMP and full access to the network. Researchers there took quickly to the newfound ability to exploit remote computing resources, so quickly, in fact, that the Center terminated the $40,000 monthly lease on its own high-powered Burroughs B6700. In its place, the university began contracting for computer services over the ARPANET. By doing this, the computation center cut its computer bill nearly in half. This was the economy of scale envisioned by Roberts, taken to a level beyond anyone's expectations. Soon, the Center was obtaining more than 90 percent of its computer resources through the network.
Large databases scattered across the Net were growing in popularity. The Computer Corporation of America had a machine called the Datacomputer that was essentially an information warehouse, with weather and seismic data fed into the machine around the clock. Hundreds of people logged in every week, making it the busiest site on the network for several years.
Abetted by the new troves of data, the ARPANET was beginning to attract the attention of computer researchers from a variety of fields. Access to the Net was still limited to sites with DARPA contracts, but the diversity of users at those sites was nonetheless creating a community of users distinct from the engineers and computer scientists who built the ARPANET. Programmers helping to design medical studies could tie in to the National Library of Medicine's rich MEDLINE database. The UCLA School of Public Health set up an experimental database of mental health program evaluations.
To serve the growing user community, SRI researchers established a unique resource called the ARPANET News in March 1973. Distributed monthly in ink-on-paper form, the journal was also available over the Net. A mix of conference listings, site updates, and abstracts of technical papers, the newsletter read like small-town gossip riddled with computer jargon. One of the more important items in the ARPANET News was the “Featured Site” series, in which system managers from the growing list of host computers described what they had to offer. In May 1973 Case Western Reserve University, which was selling computer services to network users, described its PDP-10 in terms that sounded altogether like an ad from the Personals section: “Case is open to collaborative propositions involving barters of time with other sites for work related to interests here, and sales of time as a service.”
Communicating by computer and using remote resources were still cumbersome processes. For the most part, the Net remained a user-hostile environment, requiring relatively sophisticated programming knowledge and an understanding of the diverse systems running on the hosts. Demand was growing for “higher-level” application programs aimed at helping users tap into the variety of resources now available. The file-transfer and Telnet programs existed, but the user community wanted more tools, such as common editors and accounting schemes.
SRI's Network Information Center estimated the number of users at about two thousand. But a newly formed users' interest group, called USING, was convinced there was a gap between the design of the network resources and the needs of the people trying to use those resources. Envisioning itself as a lobby group, a consumers' union even, USING began immediately to draw up plans and recommendations for improving the delivery of computer services over the ARPANET.
But DARPA saw no need to share authority with a tiny self-appointed watchdog group made up of people the agency viewed as passengers on its experimental vehicle. The initiative died after about nine months with a terse memo from a DARPA program manager named Craig Fields, warning the group that it had overstepped its bounds. With neither funding nor official support for their effort forthcoming, members put USING into a state of suspended animation from which it never emerged.
Other problems developed for DARPA as the profile of the network began to rise. Like the USING insurgency, most were relatively minor affairs. But together they illustrated the ongoing tensions related to DARPA's stewardship of the network. One area of tension had to do with DARPA's Pentagon masters. IPTO in particular managed to steer clear of the most blatantly military research. But while the Illinois students were wrong about the ILLIAC IV being used for simulated bombing missions against North Vietnam, there were plans to use it for nuclear attack scenarios against the Soviet Union. Similarly, researchers of all sorts used seismic information stored on the Computer Corporation of America (CCA) database server, information that was being collected to support Pentagon projects involving underground atomic testing.
In the late 1960s, growing political unrest, both violent and nonviolent, caught the U.S. military by surprise. Army intelligence knew all about Prague, Berlin, and Moscow, but now the Pentagon was contemplating Newark, Detroit, and Chicago. The Army gathered information from dozens of U.S. cities on the location of police and fire stations, hospitals, and so forth. Someone in the Pentagon thought it would be a good idea to keep track of local troublemakers as well.
In 1972 public outcry erupted over the Army's information gathering, and the order went out for the files to be destroyed immediately. But three years later, allegations surfaced that Army intelligence officers had instead used the ARPANET to move the files to a new location. When the story broke, the fact that something like the ARPANET existed was news to most Americans. That the story was reported in the most draconian, cloak-and-dagger terms only added to the stormy reaction. The result was a Senate investigation in which DARPA was called upon to explain how it was using the ARPANET.
DARPA eventually proved that the Army files had not moved on the ARPANET by reviewing hundreds of rolls of Teletype printouts that had been stored in a dusty crawl space at BBN. DARPA was vindicated, but a perceived entanglement with the Army's clandestine operations was the last thing the ARPANET needed.
Discussions about how DARPA would ultimately divest operational responsibility for the network had already begun around 1971. DARPA had set out to link the core processing capabilities in America's top computer science research centers, and as far as the agency was now concerned, it had accomplished that. Its mission was research. It wasn't supposed to be in the business of operating a network. Now that the system was up and running, it was becoming a drain on other priorities. It was time for DARPA to shed the service provider's role.
Handling the transition was a touchy matter. The ARPANET was now a valuable tool, and Roberts's goal was to ensure its continued development. He commissioned several studies to weigh the options. The best route, it seemed, was to maintain a networking research effort but sell off the network itself to a private contractor. But sell to whom? The market for data-communications networks was still largely uncharted, and the big communications companies remained as skeptical as ever about the DARPA technology.