
Title: Where Wizards Stay Up Late: The Origins of the Internet
Authors: Katie Hafner and Matthew Lyon
Scope: 3 stars
Readability: 4 stars
My personal rating: 5 stars
See more on my book rating system.
If you enjoy this summary, please support the author by buying the book.
Topic of Book
The authors explain how the Internet was created, with special attention to ARPA and BBN (sometimes known as the first Internet company).
If you would like to learn more about technology in history, read my book From Poverty to Progress: How Humans Invented Progress, and How We Can Keep It Going.
My Comments
Anyone who is interested in how the Internet got started should read this book. It is mainly a historical narrative focused on the 1960s and early 1970s, but it also draws some lessons about how innovation happens.
Key Take-aways
- The Internet started via funding from the Pentagon’s Advanced Research Projects Agency (ARPA).
- The idea of something like the Internet was conceived of by Joseph Licklider in 1960. He later joined ARPA.
- Licklider’s successor, Bob Taylor, played a leading role in funding Bolt Beranek and Newman (BBN) to build a proof-of-concept. He later founded the legendary Xerox PARC.
- Some key theoretical concepts proved critical:
  - Packet-switching
  - Breaking messages down into small blocks (packets)
  - Subnetworks with small, identical nodes, all interconnected
  - Separate network processors (IMPs) that send and receive data on behalf of the hosts
  - A clear boundary between the host responsibilities and the network responsibilities
  - Message headers that contain nothing but processing instructions
  - Small teams establishing simple protocols with limited scope
- The ARPANET got very little use until users discovered email during the 1970s and early 1980s. For a long time, email was the main use of the early network.
- In the early 1990s Internet usage finally went mainstream:
  - In 1990 the World Wide Web, a multimedia branch of the Internet, was created.
  - In 1991 the NSF allowed commercial activity on the Net.
  - In 1993 the first modern browser, Mosaic, was invented.
Important Quotes from Book
That there even existed an agency within the Pentagon capable of supporting what some might consider esoteric academic research was a tribute to the wisdom of ARPA’s earliest founders. The agency had been formed by President Dwight Eisenhower in the period of national crisis following the Soviet launch of the first Sputnik satellite in October 1957. The research agency was to be a fast-response mechanism closely tied to the president and secretary of defense, to ensure that Americans would never again be taken by surprise on the technological frontier. President Eisenhower saw ARPA fitting nicely into his strategy to stem the intense rivalries among branches of the military over research-and-development programs. The ARPA idea began with a man who was neither scientist nor soldier, but a soap salesman.
In the post-Sputnik panic, the race for outer space cut a wide swath through American life, causing a new emphasis on science in the schools, worsening relations between Russia and the United States, and opening a floodgate for R&D spending. Washington’s “external challenge” R&D spending rose from $5 billion per year to over $13 billion annually between 1959 and 1964. Sputnik launched a golden era for military science and technology. (By the mid-1960s, the nation’s total R&D expenditures would account for 3 percent of the gross national product, a benchmark that was both a symbol of progress and a goal for other countries.)
All eyes were on ARPA when it opened its doors with a $520 million appropriation and a $2 billion budget plan. It was given direction over all U.S. space programs and all advanced strategic missile research. By 1959, a headquarters staff of about seventy people had been hired, a number that would remain fairly constant for years. These were mainly scientific project managers who analyzed and evaluated R&D proposals, supervising the work of hundreds of contractors.
The rest of ARPA’s staff was recruited from industry’s top-flight technical talent at places like Lockheed, Union Carbide, Convair, and other Pentagon contractors.
They fleshed out a set of goals to distance ARPA from the Pentagon by shifting the agency’s focus to the nation’s long-term “basic research” efforts. The services had never been interested in abstract research projects.
In time, the “ARPA style”—freewheeling, open to high risk, agile—would be vaunted. Other Washington bureaucrats came to envy ARPA’s modus operandi. Eventually the agency attracted an elite corps of hard-charging R&D advocates from the finest universities and research laboratories, who set about building a community of the best technical and scientific minds in American research.
[Joseph Licklider’s] thoughts about the role computers could play in people’s lives hit a crescendo in 1960 with the publication of his seminal paper “Man-Computer Symbiosis.” In it he distilled many of his ideas into a central thesis: A close coupling between humans and “the electronic members of the partnership” would eventually result in cooperative decision making. Moreover, decisions would be made by humans, using computers, without what Lick called “inflexible dependence on predetermined programs.” He held to the view that computers would naturally continue to be used for what they do best: all of the rote work. And this would free humans to devote energy to making better decisions and developing clearer insights than they would be capable of without computers. Together, Lick suggested, man and machine would perform far more competently than either could alone. Moreover, attacking problems in partnership with computers could save the most valuable of postmodern resources: time.
Then Lick worked to find the country’s foremost computer centers and set up research contracts with them. In short order, he had reached out to the best computer scientists of the day, from Stanford, MIT, UCLA, Berkeley, and a handful of companies, bringing them into ARPA’s sphere. All told, there were about a dozen in Lick’s inner circle, which Ruina called “Lick’s priesthood.” In typical fashion, where his most passionate beliefs masqueraded as a bit of a joke, Licklider nicknamed it the Intergalactic Computer Network.
Six months after his arrival at ARPA, Lick wrote a lengthy memo to the members of the Intergalactic Network in which he expressed his frustration over the proliferation of disparate programming languages, debugging systems, time-sharing system control languages, and documentation schemes. In making the case for an attempt at standardization, Lick discussed the hypothetical problem of a network of computers.
Bob Taylor [Licklider’s replacement] gave his boss a quick briefing: IPTO contractors, most of whom were at research universities, were beginning to request more and more computer resources. Every principal investigator, it seemed, wanted his own computer. Not only was there an obvious duplication of effort across the research community, but it was getting damned expensive. Computers weren’t small and they weren’t cheap. Why not try tying them all together? By building a system of electronic links between machines, researchers doing similar work in different parts of the country could share resources and results more easily. Instead of spreading a half dozen expensive mainframes across the country devoted to supporting advanced graphics research, ARPA could concentrate resources in one or two places and build a way for everyone to get at them. One university might concentrate on one thing, another research center could be funded to concentrate on something else, but regardless of where you were physically located, you would have access to it all. He suggested that ARPA fund a small test network, starting with, say, four nodes and building up to a dozen or so.
Paul Baran and Donald Davies—completely unknown to each other and working continents apart toward different goals—arrived at virtually the same revolutionary idea for a new kind of communications network. The realization of their concepts came to be known as packet-switching.
Baran’s basic theoretical network configuration was as simple as it was dramatically different and new. Telephone networks have always been constructed using central switching points. The most vulnerable are those centralized networks with all paths leading into a single nerve center. The other common design is a decentralized network with several main nerve centers.
Baran’s idea constituted a third approach to network design. He called his a distributed network. Avoid having a central communications switch, he said, and build a network composed of many nodes, each redundantly connected to its neighbor. His original diagram showed a network of interconnected nodes resembling a distorted lattice, or fish net.
Baran’s second big idea was still more revolutionary: Fracture the messages too. By dividing each message into parts, you could flood the network with what he called “message blocks,” all racing over different paths to their destination. Upon their arrival, a receiving computer would reassemble the message bits into readable form.
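To make the message-block idea concrete, here is a minimal Python sketch of my own (not from the book): the sender tags each block with a sequence number, the blocks may arrive in any order, and the receiver sorts them back into the original message.

```python
import random

# Minimal sketch of Baran's "message blocks": split a message into small
# fixed-size blocks, let them arrive in any order, then reassemble.
BLOCK_SIZE = 8  # bytes per block (tiny, purely for illustration)

def split_into_blocks(message: bytes) -> list[tuple[int, bytes]]:
    """Tag each block with a sequence number so the receiver can reorder."""
    return [
        (seq, message[i:i + BLOCK_SIZE])
        for seq, i in enumerate(range(0, len(message), BLOCK_SIZE))
    ]

def reassemble(blocks: list[tuple[int, bytes]]) -> bytes:
    """Sort by sequence number and concatenate; arrival order is irrelevant."""
    return b"".join(data for _, data in sorted(blocks))

msg = b"a message too long to send as one piece"
blocks = split_into_blocks(msg)
random.shuffle(blocks)  # simulate blocks racing over different network paths
assert reassemble(blocks) == msg
```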
The way Wes Clark explained it, the solution was obvious: a subnetwork with small, identical nodes, all interconnected. The idea solved several problems. It placed far fewer demands on all the host computers and correspondingly fewer demands on the people in charge of them. The smaller computers composing this inner network would all speak the same language, of course, and they, not the host computers, would be in charge of all the routing. Furthermore, the host computers would have to adjust their language just once—to speak to the subnet. Not only did Clark’s idea make good sense technically, it was an administrative solution as well. ARPA could have the entire network under its direct control and not worry much about the characteristics of each host. Moreover, providing each site with its own identical computer would lend uniformity to the experiment.
When he returned to Washington, Roberts wrote a memorandum describing Clark’s idea and distributed it to Kleinrock and others. He called the intermediate computers that would control the network “interface message processors,” or IMPs, which he pronounced “imps.” They were to perform the functions of interconnecting the network, sending and receiving data, checking for errors, retransmitting in the event of errors, routing data, and verifying that messages arrived at their intended destinations. A protocol would be established for defining just how the IMPs should communicate with host computers. After word of Clark’s idea spread, the initial hostility toward the network diminished a bit.
[In late 1968], ARPA announced that the contract to build the Interface Message Processors that would reside at the core of its experimental network was being awarded to Bolt Beranek and Newman, a small consulting firm in Cambridge, Massachusetts.
Giving ample authority to people like Roberts was typical of ARPA’s management style, which stretched back to its earliest days. It was rooted in a deep trust of frontline scientists and engineers… many of the best scientists in the country, Roberts among them, came to view working for the agency as an important responsibility, a way of serving.
“Early on, Frank [Heart of BBN] made a decision, a very wise decision, to make a clean boundary between the host responsibilities and the network responsibilities,” said Crowther… This also made building the IMPs more manageable. All IMPs could be designed the same, rather than being customized for each site. It also kept BBN from being caught in the middle, having to mediate among the host sites over the network protocols.
The IMP would be built as a messenger, a sophisticated store-and-forward device, nothing more. Its job would be to carry bits, packets, and messages: To disassemble messages, store packets, check for errors, route the packets, and send acknowledgments for packets arriving error-free; and then to reassemble incoming packets into messages and send them up to the host machines—all in a common language.
The IMPs were designed to read only the first 32 bits of each message. This part of the message, originally called a “leader” (and later changed to “header”), specified either the source or destination, and included some additional control information. The leader contained the minimal data needed to send and process a message. These messages were then broken into packets within the source IMP. The burden of reading the content of the messages would be on the hosts themselves.
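The book doesn’t give the leader’s exact layout, so the field widths in this Python sketch of mine are purely hypothetical; the point is that a fixed 32-bit leader carries the addressing and control information while the payload stays opaque to the IMP.

```python
import struct

# Hypothetical 32-bit leader: 8-bit destination, 8-bit link number, and
# 16 bits of control flags. (Field widths are illustrative only, not the
# real ARPANET leader format.)
def make_message(dest: int, link: int, flags: int, payload: bytes) -> bytes:
    leader = struct.pack("!BBH", dest, link, flags)  # exactly 4 bytes = 32 bits
    return leader + payload

def imp_route(message: bytes) -> int:
    """An IMP reads only the 32-bit leader; the payload is never inspected."""
    dest, _link, _flags = struct.unpack("!BBH", message[:4])
    return dest  # forward the message toward this destination

msg = make_message(dest=42, link=1, flags=0, payload=b"contents for the host only")
assert imp_route(msg) == 42
```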
In the summer of 1968, a small group of graduate students from the first four host sites— UCLA, SRI, UC Santa Barbara, and the University of Utah—had met in Santa Barbara.
From that meeting emerged a corps of young researchers devoted to working on, thinking through, and scheming about the network’s host-to-host communications.
Steve Crocker once likened the concept of a host-to-host protocol to the invention of two-by-fours. “You imagine cities and buildings and houses, and so forth, but all you see are trees and forest. And somewhere along the way, you discover two-by-fours as an intermediate building block, and you say, well, I can get two-by-fours out of all these trees,” Crocker recalled. “We didn’t have the concept of an equivalent of a two-by-four, the basic protocols for getting all the computers to speak, and which would be useful for building all the applications.” The computer equivalent of a two-by-four was what the Network Working Group was trying to invent.
The general view was that any protocol was a potential building block, and so the best approach was to define simple protocols, each limited in scope, with the expectation that any of them might someday be joined or modified in various unanticipated ways. The protocol design philosophy adopted by the NWG broke ground for what came to be widely accepted as the “layered” approach to protocols.
One of the most important goals of building the lower-layer protocol between hosts was to be able to move a stream of packets from one computer to another without having to worry about what was inside the packets. The job of the lower layer was simply to move generic unidentified bits.
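As a toy illustration of that layering discipline (my own sketch, not code from the book), the lower layer below only moves opaque bytes; the layer above it is the one that gives those bytes meaning.

```python
# Lower layer: moves opaque bytes from sender to receiver; never parses them.
def lower_layer_send(channel: list[bytes], payload: bytes) -> None:
    channel.append(payload)  # no inspection of the contents

def lower_layer_recv(channel: list[bytes]) -> bytes:
    return channel.pop(0)

# Upper layer: gives the bytes meaning (here, a trivial text "protocol").
def app_send(channel: list[bytes], text: str) -> None:
    lower_layer_send(channel, text.encode("utf-8"))

def app_recv(channel: list[bytes]) -> str:
    return lower_layer_recv(channel).decode("utf-8")

wire: list[bytes] = []
app_send(wire, "hello, host")
assert app_recv(wire) == "hello, host"
```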
[In 1971], Heart’s team and Roberts had been discussing the possibility of connecting many new users to the net without going through a host computer. It appeared they could make it possible to log onto the network, reach a distant host, and control remote resources through a simple terminal device—a Teletype or a CRT with a keyboard—directly connected to an IMP. The new scheme would eliminate the need for a host computer between every user and the IMP subnet. All you’d need to make it work would be a dumb terminal connected to an IMP. This would open up a lot of new access points.
If it could be worked out technically, then hundreds or even thousands more users might gain access to the network without physical proximity to a host computer.
The only real problem with this network now was load. That is, there wasn’t much of it.
The ARPA network, however, was virtually unknown everywhere but in the inner sancta of the computer research community. And for only a small portion of the computer community, whose research interest was networking, was the ARPA network developing into a usable tool.
The ARPA network was a growing web of links and nodes, and that was it—like a highway system without cars.
The ARPANET was not intended as a message system. In the minds of its inventors, the network was intended for resource-sharing, period. That very little of its capacity was actually ever used for resource-sharing was a fact soon submersed in the tide of electronic mail. Between 1972 and the early 1980s, e-mail, or network mail as it was referred to, was discovered by thousands of early users.
The first electronic-mail delivery engaging two machines was done one day in 1972 by a quiet engineer, Ray Tomlinson at BBN.
Tomlinson’s CPYNET hack was a breakthrough; now there was nothing holding e-mail back from crossing the wider Net. Although in technical terms Tomlinson’s program was trivial, culturally it was revolutionary.
By the end of 1973, Cerf and Kahn had completed their paper, “A Protocol for Packet Network Intercommunication.”
The Cerf-Kahn paper of May 1974 described something revolutionary. Under the framework described in the paper, messages should be encapsulated and decapsulated in “datagrams,” much as a letter is put into and taken out of an envelope, and sent as end-to-end packets. These messages would be called transmission-control protocol, or TCP, messages. The paper also introduced the notion of gateways, which would read only the envelope so that only the receiving hosts would read the contents.
The invention of TCP would be absolutely crucial to networking. Without TCP, communication across networks couldn’t happen.
[In 1978], the trio [Vint Cerf, Jon Postel, and Danny Cohen] presented an idea to the group: break off the piece of the Transmission-Control Protocol that deals with routing packets and form a separate Internet Protocol, or IP.
After the split, TCP would be responsible for breaking up messages into datagrams, reassembling them at the other end, detecting errors, resending anything that got lost, and putting packets back in the right order. The Internet Protocol, or IP, would be responsible for routing individual datagrams.
“I remember having a general guideline about what went into IP versus what was in TCP,” Postel recalled. “The rule was ‘Do the gateways need this information in order to move the packet?’ If not, then that information does not go in IP.”
By 1978, TCP had officially become TCP/IP.
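Postel’s rule can be shown in miniature. In this hypothetical Python sketch (mine, not drawn from the book or the RFCs), the IP header holds only what a gateway needs to move the packet, while sequencing and acknowledgment live in the TCP segment that only the end hosts open.

```python
from dataclasses import dataclass

@dataclass
class IPHeader:          # only what gateways need to move the packet
    src_addr: str
    dst_addr: str

@dataclass
class TCPSegment:        # what only the end hosts care about
    seq: int             # ordering
    ack: int             # reliability / retransmission
    payload: bytes

@dataclass
class Datagram:          # a TCP segment encapsulated in an IP "envelope"
    header: IPHeader
    segment: TCPSegment

def gateway_forward(dgram: Datagram) -> str:
    """A gateway looks only at the IP header, never inside the segment."""
    return dgram.header.dst_addr

d = Datagram(IPHeader("ucla", "sri"), TCPSegment(seq=0, ack=0, payload=b"hi"))
assert gateway_forward(d) == "sri"
```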
In 1973, just when Cerf and Kahn had begun collaborating on the concept of internetworking, Bob Metcalfe at Xerox PARC was inventing the technological underpinnings for a new kind of network. Called a short-distance, or local-area, network, Metcalfe’s network would connect computers not in different cities but in different rooms. One computer wishing to send a data packet to another machine—say, a desktop workstation sending to a printer—listens for traffic on the cable. If the computer detects conflicting transmissions, it waits, usually for a few thousandths of a second. When the cable is quiet, the computer begins transmitting its packet. If, during the transmission, it detects a collision, it stops and waits before trying again—usually a few hundred microseconds. In both instances, the computer chooses the delay randomly, minimizing the possibility of retrying at the same instant selected by whatever device sent the signal that caused the collision. As the network gets busier, computers back off and retry over longer random intervals. This keeps the process efficient and the channel intact.
He rechristened the system Ethernet.
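The listen-then-back-off discipline is simple enough to sketch. The toy Python simulation below is my own and greatly simplified (real Ethernet uses binary exponential backoff over fixed slot times and detects collisions electrically), but it shows the core loop: sense the carrier, transmit, and widen the random retry window after each collision.

```python
import random

def send_with_backoff(cable_busy, max_attempts: int = 16) -> int:
    """Toy CSMA/CD sender: listen before transmitting; on collision, retry
    after a random delay drawn from a window that doubles each time (a
    simplified stand-in for Ethernet's binary exponential backoff)."""
    window = 2  # backoff window, in arbitrary time slots
    for attempt in range(1, max_attempts + 1):
        while cable_busy():               # carrier sense: wait for a quiet cable
            pass
        if random.random() >= 0.3:        # pretend 30% of transmissions collide
            return attempt                # success: the packet got through
        slots = random.randint(0, window - 1)  # random delay, so two colliding
        window *= 2                            # stations rarely retry together
        for _ in range(slots):
            pass                          # stand-in for waiting out the delay
    raise RuntimeError("too many collisions; giving up")

print("sent on attempt", send_with_backoff(lambda: random.random() < 0.2))
```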
By June 1983, more than seventy sites were on-line, obtaining full services and paying annual dues. At the end of the five-year period of NSF support in 1986, nearly all the country’s computer science departments, as well as a large number of private computer research sites, were connected. The network was financially stable and self-sufficient.
The collection of networks gradually came to be called the “Internet,” borrowing the first word of “Internet Protocol.”
At around the same time, private corporations and research institutions were building networks that used TCP/IP. The market opened up for routers. Gateways were the internetworking variation on IMPs, while routers were the mass-produced version of gateways, hooking local area networks to the ARPANET.
Throughout the early 1980s, local area networks were the rage. Every university hooked its workstations to local area networks. Rather than connect to a single large computer, universities wanted to connect their entire local area network—or LAN—to the ARPANET.
The takeoff was just beginning. In 1990, the World Wide Web, a multimedia branch of the Internet, had been created by researchers at CERN, the European Laboratory for Particle Physics near Geneva. Using Tim Berners-Lee’s HTTP protocol, computer scientists around the world began making the Internet easier to navigate with point-and-click programs. These browsers were modeled after Berners-Lee’s original, and usually based on the CERN code library. One browser in particular, called Mosaic, created in 1993 by a couple of students at the University of Illinois, would help popularize the Web and therefore the Net as no software tool had yet done.
[In 1991], the NSF had lifted restrictions against commercial use of the Internet, and now you could get rich not just by inventing a gateway to the Net but by taking business itself onto the Net.
Related Books
- “Accidental Empires: How… Silicon Valley Make Their Millions” by Robert X. Cringely
- “Hard Drive: Bill Gates and the Making of the Microsoft Empire” by Wallace and Erickson
- “The Social Organism: A Radical Understanding of Social Media” by Luckett and Casey
If you would like to learn more about technology in history, read my book From Poverty to Progress: How Humans Invented Progress, and How We Can Keep It Going.
