As MAXC's principal designer, Thacker would not hear of it. Not
with the fail-safe system of error correction and memory diagnostics
he had implemented. Summoned down to the basement room where
MAXC hummed away under the powerful draft of high-capacity air
conditioning, he conducted his own tests for the newcomer from Harvard and MIT. As far as he could tell, everything ran flawlessly.
"The machine is reliable," he declared.
"So Chuck left," Metcalfe said. "He insisted MAXC was absolutely
fine and chalked up the problems to me, a guy he didn't think much of
in the first place."
Therefore Metcalfe devised his own test. Calling it "Munger" was his
way of enjoying a private joke at Thacker's expense. The word derived
from "mung," MIT hacker slang that meant "Mash Until No Good" and
signified the making of large, permanent, and (generally) malicious
changes to a computer file. Metcalfe's non-destructive Munger simply fed
a random stream of bits into MAXC's memory and read them out again.
If the sequence mutated along the way by so much as a single bit, the program would clang a bell on a teletype nearby and log the discrepancy.
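The core of such a test is small. Here is a sketch of the idea in modern Python, with a plain array standing in for MAXC's memory; the names, word size, and region size are invented for illustration, not taken from Munger itself:

    import random

    WORD_COUNT = 4096            # arbitrary test-region size for this sketch
    memory = [0] * WORD_COUNT    # toy array standing in for MAXC's real memory

    def munger_pass(seed):
        """Write a reproducible stream of random bits, read it back,
        and log every word that comes back changed."""
        rng = random.Random(seed)
        expected = [rng.getrandbits(16) for _ in range(WORD_COUNT)]
        for addr, value in enumerate(expected):
            memory[addr] = value                  # feed random bits in...
        errors = 0
        for addr, value in enumerate(expected):
            actual = memory[addr]                 # ...and read them out again
            if actual != value:
                errors += 1
                # Munger clanged a teletype bell; "\a" is the closest
                # modern equivalent, followed by the logged discrepancy.
                print(f"\a addr={addr:04o} wrote={value:04x} read={actual:04x}")
        return errors

    print("errors:", munger_pass(seed=1))  # a healthy memory reports 0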
Metcalfe fired Munger up and waited to see what would happen. He did
not have to wait long. Literally within seconds the teletype went off like a
fire alarm.
That proved it. There was a memory flaw in MAXC—only not where
everyone (including Thacker) had been searching for it. What they had
overlooked was that MAXC actually had two ports into memory, one
via the central processor and the other through the disk controller.
Because Thacker's test program ran from inside the machine, it surveyed only the processor port, which worked fine. But Munger, like
the ARPANET link that had stymied Metcalfe, ran as an external program from the disk—where the memory port was indeed broken. "My
fucking program found the bug," Metcalfe recalled, "and Chuck never
forgave me."
"I believe firmly to this day that Metcalfe misread Chuck," Taylor said
later. "Thacker liked to have bugs pointed out, because he loved to fix
them." On the other hand, not many of Thacker’s colleagues displayed
Metcalfe's pure delight in rubbing it in. He even had a rubber stamp
made up echoing a catchphrase from the movie Love Story. It read, "Reliability is never having to say you're sorry."
"I used to stamp that on all the memos I was writing," Metcalfe said
many years later, still grinning at the thought. "Chuck hated it. Poor
Thacker!"
After finally getting MAXC hooked up to the ARPANET, Metcalfe
moved on to the challenge that was to bring him and Boggs together. This
involved finding a simple and reliable way to connect PARC's Altos to
each other. The local network was the sine qua non of interactive distributed computing, Taylor believed: He was after more than the symbiosis of one man and one machine, but rather the unique energy sure to issue from joining together a multitude of people and machines all as one.
Unfortunately, none of the network architectures then in use suited
PARC's specifications. The ARPANET was too large-scale and required
too much extra hardware to link computers together in discrete local networks at a reasonable cost. IBM and several other computer manufacturers had developed their own proprietary systems, but they were
specifically tailored to their own machines and difficult to adapt to others.
They also tended to break down when the local loop got too large. The
POLOS group's adaptation of a network technology provided by Data
General for its Nova minicomputers underscored these shortcomings.
The network's maximum capacity was fifteen computers. POLOS's
attempt to double the number had produced a multi-tentacled horror of
cable and hardware. "We were able to network up to 29 Novas, but that
was the limit," Metcalfe recalled. "The ultimate 29-Nova daisy chain had
twenty-eight 40-conductor cables, sixty 40-conductor connectors, and
the nasty habit of crashing if any one of these fragile devices was disturbed." The basement room at PARC where all these cables came
together was aptly labeled the "rat's nest."
Obviously this would not do for Taylor, who envisioned a system linking hundreds of Altos. His other specifications were similarly stringent.
The network had to be cheap—no more than 5 percent of the cost of
the computers it was connecting. It had to be simple, without any fussy
new hardware, in order to promote long-term reliability. It had to be
easily expandable—unlike POLOS, where adding a Nova meant taking
down the network and splicing a new line into the rat's nest. And it had
to be fast, because it would be feeding files to Gary Starkweather's
swift laser printer and would need to keep up.
When Metcalfe first got to PARC he found several networking
schemes already percolating on CSL's back burner, none of them to his
liking. One was a local version of the ARPANET (but 1,000 times faster)
devised by Charles Simonyi, who had finally rejoined his old Berkeley
Computer colleagues at PARC. Simonyi's design was nicknamed SIGnet, which stood for "Simonyi's Infinitely Glorious Network." Metcalfe studied the specifications for about a week before rejecting it for having "too
many moving parts for a local network."
He started to look elsewhere while a deadline loomed. Thacker's
Alto schematics, which were coming together around the end of 1972,
left a blank space where the network controller was supposed to fit. If
Metcalfe could not come up with something to fill the blank, the matter would be taken out of his hands—which would be not only a challenge to his intellectual authority as the network guy, but a blow to his
pride.
That dismal outcome was averted when he suddenly recalled a concept he had first encountered months earlier. Back in June, while visiting Washington on ARPANET business, he had lodged on the guest
room sofa-bed of his friend Steve Crocker, an ARPA program manager. Late that night he pulled down from a handy bookshelf a heavy
volume of papers from an obscure technical conference, "a sure cure
for jet-lag sleeplessness," and lumbered his way through one written
by a University of Hawaii professor named Norman Abramson.
Abramson's paper described ALOHAnet, a radio network designed to
allow computers to communicate with one another along the Hawaiian
archipelago. ALOHAnet was loosely derived from the ARPANET, as
could be seen from the nickname of its central control computer: Menehune, a mythical Hawaiian "imp." Metcalfe was annoyed by the pun but
intrigued by the scheme. ALOHAnet messages were transmitted in discrete digital packets through the atmosphere. Because air is a passive
medium (in contrast to, say, an electrically charged phone line), that feature made the system fetchingly simple. Abramson further described the
network's clever means of handling the interference that occurred whenever two or more stations tried to transmit simultaneously. If they failed
to hear an acknowledgment from the receiving station indicating that
their messages had arrived safely, they retransmitted after waiting a random interval so the messages would not collide a second time. This, Metcalfe perceived, would be a highly useful feature in a local network where
scores of computers might be trying to send messages on the same line.
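The retry rule amounts to a few lines of code. A sketch, with send and await_ack as stand-in callables rather than ALOHAnet's actual interface:

    import random
    import time

    def send_with_random_backoff(send, await_ack, max_wait=1.0):
        """Transmit until acknowledged; after a lost packet, wait a
        random interval so two stations that collided once are
        unlikely to collide again on the retry."""
        while True:
            send()
            if await_ack():                            # message arrived safely
                return
            time.sleep(random.uniform(0.0, max_wait))  # randomized delay breaks the tie

    # Stand-in wiring so the sketch runs; a real station would key the radio.
    send_with_random_backoff(send=lambda: None, await_ack=lambda: True)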
The main limitation of ALOHAnet appeared to be its tendency
toward gridlock. The paper suggested that the channel could be
loaded up to only 17 percent of its capacity before breaking down into an incoherent jabber of retransmitted and recolliding messages.
"That can't be right," Metcalfe said to himself, propped up on
Crocker's sofa-bed.
It had not escaped his notice that Abramson's figures were not based on experience—the existing ALOHAnet linked only seven computers—but on theory, and misapplied theory at that. He realized Abramson had made two impossible assumptions: That the number of users was infinite, and that each one kept mindlessly typing even after the acknowledgments stopped coming. No wonder the model filled up with messages and retransmissions until it crashed like an overloaded blimp. "Totally unacceptable," Metcalfe thought.
But suppose one imposed a couple of real-life assumptions on Abramson's model? Such as that the system had a finite number of terminals—thirty, forty, even a hundred—and that users stopped transmitting if the system stopped responding. In that case, Metcalfe calculated, the system should remain stable even at 90 percent of capacity.
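The contrast is easy to demonstrate with a toy simulation (my construction for illustration, not Metcalfe's actual analysis): give a finite population of stations a doubling backoff and keep every one of them saturated with traffic. It does not reproduce his 90 percent figure, but it shows the qualitative point: polite, finite stations load the channel without breaking it down.

    import random

    def simulate(stations=40, slots=100_000):
        """Slotted, saturated toy model: every station always has a
        packet, but after a collision it doubles its backoff window
        instead of chattering on."""
        window = [1] * stations   # current backoff window per station
        wait = [0] * stations     # slots to sit out before the next attempt
        successes = 0
        for _ in range(slots):
            senders = [i for i in range(stations) if wait[i] == 0]
            if len(senders) == 1:                  # exactly one talker: success
                successes += 1
                window[senders[0]] = 1             # reset its backoff
            elif len(senders) > 1:                 # collision: everyone backs off
                for i in senders:
                    window[i] = min(window[i] * 2, 1024)
                    wait[i] = random.randrange(window[i])
            for i in range(stations):
                if wait[i] > 0:
                    wait[i] -= 1
        return successes / slots

    print(simulate())   # stays well clear of zero: loaded, not gridlocked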
Cheap, simple, and capacious: Back at PARC, he realized that ALOHAnet possessed most of the qualities the lab sought in a local network. Over the next few months Metcalfe worked to adapt it to the center's high-volume, high-performance specifications.
He junked the central control computer, Menehune, because each Alto would control its own transmission rate. He designed a scheme by which each station would listen to the line and stop transmitting the instant it heard any interference, instead of continuing to chatter.
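That listen-and-abort discipline, the germ of what Ethernet would later formalize, can be sketched in a few lines. The Medium interface below is invented for illustration and is not the eventual Ethernet specification:

    import random
    import time

    class Medium:
        """Stand-in for the shared line; a real station would watch the
        cable itself. This stub never reports contention, so the sketch
        runs end to end."""
        def busy(self): return False
        def start_sending(self, packet): pass
        def collision_detected(self): return False
        def abort(self): pass
        def finish(self): pass

    def transmit(packet, medium, slot=0.001):
        attempts = 0
        while True:
            while medium.busy():                  # listen before talking
                time.sleep(slot)
            medium.start_sending(packet)
            if medium.collision_detected():       # listen while talking
                medium.abort()                    # stop the instant interference is heard
                attempts += 1
                time.sleep(random.randrange(2 ** min(attempts, 10)) * slot)
            else:
                medium.finish()
                return

    transmit(b"hello", Medium())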
And rather than transmit via radio, he proposed joining the Altos by some sort of physical line. The key element was that the medium had to be inert. Metcalfe understood that if the line had to carry an electrical current to aid transmission, like a phone line, Murphy's Law would take over. The line voltage would become the component most vulnerable to failure. But if there was no power on the line, Murphy would be defeated. It was possible and much better, he reasoned, to send messages into a passive medium, like the "'luminiferous aether' once thought to pervade the universe as the medium for the propagation of light."