NSF and the Birth of the Internet — Text-only | Flash Special Report
Computers Begin to Transform the World
We know today that the Internet has changed our society in ways not seen since the invention of the printing press. But how did we get here?
Just as the development of the printed word was really about two technologies—the invention of movable type and mass-produced paper—the story of the Internet is about two as well: computers and the sophisticated networks built to connect them. This report tells the story of how NSF helped these two technologies develop into the modern Internet.
The potential power of computers to transform the world started to become a reality in the 1960s. Computer scientists and engineers began building more powerful computers and finding new ways to use them, from helping NASA to put a man on the moon to speeding up accounting at major corporations. NSF funded the development of several academic computer centers across the country to help advance the field of computing. The pace of innovation and development was so fast that researchers needed new ways to communicate and share ideas.
Narrator: Today’s tempo is characterized by one of man’s most advanced developments—the electronic computer. All but unknown a few years ago, computers have benefited mankind in 1,001 ways. They have helped work wonders in countless aspects of everyday life. They have revolutionized business, science, government, education, medicine. Yes, computers are fine. Yes, some people do have their problems with and without computers. You know it and we at UNIVAC know it. And what’s more important, we’re doing something to solve their problems.
And most important to computer users, our people using these advanced facilities are working on computer improvements. Working on the need for faster and more economical memory; more reliable circuitry; more responsive real-time and multi-programming capability; ability to handle increased numbers of peripheral devices, perfecting communications and satellite operations. And the result of UNIVAC’s technological and programming research has consistently been revolutionary, new equipment.
First member of the 9000 series—the 9200 system—versatile printer and high-speed processor. Large capacity memory. Card reader and card punch. Here’s what the UNIVAC 9200 can do for you. Let’s suppose you’re using punched cards now. You’re at a point where you really need a computer but you can’t afford one. With the new 9200 system, these four units will—at amazingly low cost—replace all of this equipment you are now using. As an example, for approximately $1000 a month, you can replace your current equipment with a 9200 and here is what you get.
Memory—a whopping 8,192 bytes; expandable to twice that. Other systems start at a mere 2000 or 4000. Cycle time—super fast; 1.2 microseconds. Speed leaving other systems behind in the dust.
The 9200 gives you three to four times the processing power of other, higher-priced, supposedly new-generation computers, because memory size and speed are the key elements in the efficiency of any computer.
Video Credit: Roger Wade Productions
Still Image Credit: NOAA
Cold War Raging
During the 1960s, the Cold War between the United States and the Soviet Union grew in intensity. Both sides sought an edge over their adversary in the development of new weapons and technologies. Crises in Berlin and Cuba made a nuclear war between these two nations a real possibility. How would government and defense installations stay in contact if cities and telecommunication networks were wiped out?
Image: An artist’s illustration of a Nike missile intercepting a Soviet aircraft. Deployed throughout the United States and Western Europe in the 1950s, ‘60s and ‘70s, Nike missiles were intended to stop Soviet aircraft before they could drop their nuclear bombs. The Nike systems were an example of both the high-tech nature of the U.S.-Soviet arms race and the serious threat posed by nuclear weapons during the Cold War.
Credit: U. S. Army
What About a Computer Network?
Researchers at the Massachusetts Institute of Technology proposed linking computers together in a diffused and decentralized network so the computers could share data and researchers could use them from hundreds or thousands of miles away. The network would have no central hub that could be taken out, so even if parts of the network were damaged or destroyed, the rest of it would still function.
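The resilience of such a decentralized design can be illustrated with a toy model. The sketch below (Python; the five-node mesh and its node names are invented for illustration, not drawn from any real network) removes one node and uses a breadth-first search to confirm that the surviving nodes can still reach one another:

```python
from collections import deque

# Hypothetical five-node mesh with redundant links; names are illustrative.
links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D", "E"},
    "D": {"B", "C", "E"},
    "E": {"C", "D"},
}

def reachable(net, start):
    """Breadth-first search: the set of nodes still connected to `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in net.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

def knock_out(net, lost):
    """Remove a destroyed node and every link to it."""
    return {n: {m for m in nbrs if m != lost}
            for n, nbrs in net.items() if n != lost}

# With node D destroyed, the four survivors remain fully connected.
survivors = knock_out(links, "D")
print(reachable(survivors, "A"))
```

Because there is no central hub, losing any one node leaves alternate paths between the rest—exactly the property the MIT researchers were after.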
Image: A researcher checks the output of a Control Data Corporation CDC 3600 computer in the mid-1960s at the National Center for Atmospheric Research. During this time, scientists began working on networking technology that would allow these powerful but immobile machines to be used by researchers in remote areas.
Credit: © University Corporation for Atmospheric Research
Early Networking Experiments
The Department of Defense’s Advanced Research Projects Agency (ARPA) took over this research in 1962. Initial research showed that computers could be linked up over long distances, but existing telecommunications infrastructure, which did not have switches or other devices capable of sending large amounts of data between multiple computers, would not be sufficient for a large-scale network.
Image: (Black and white photo) The ARPANET Team in 1969.
Credit: Courtesy of Frank Heart
Image: (Color photo) Three original members of the ARPANET team hold a photo of themselves and their colleagues back in 1969. The ARPANET team created many of the networking technologies that would later be used to create the Internet.
Credit: Meghan Moore, Megpix Photography
ARPA researchers and others kept working on the problem, eventually developing important technologies such as packet switching, which allows varying amounts of data to flow between computers even when there is no direct connection between them. They built a system in which different “nodes,” or computers on the network, would send, receive and pass along information among themselves.
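The store-and-forward idea described above can be sketched in a few lines. This is a toy model, not the actual ARPANET protocol: the forwarding tables, the four-byte packet size, and the particular route are invented for illustration.

```python
# Illustrative sketch of packet switching: a message is split into numbered
# packets, each relayed node to node, and reassembled at the destination.

PACKET_SIZE = 4

# Each node's forwarding table maps a destination to the next hop only;
# no single node needs a picture of the whole network. Topology is invented.
next_hop = {
    "UCLA": {"Utah": "SRI"},
    "SRI":  {"Utah": "Utah"},
}

def route(src, dst):
    """Relay hop by hop: each node hands the packet to its next hop."""
    node, hops = src, [src]
    while node != dst:
        node = next_hop[node][dst]
        hops.append(node)
    return hops

def send(message, src, dst):
    """Split a message into numbered packets and reassemble at the far end."""
    packets = [(seq, message[i:i + PACKET_SIZE])
               for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]
    route(src, dst)  # every packet is forwarded along the next-hop tables
    # Packets may arrive out of order; sequence numbers restore the message.
    arrived = reversed(packets)                      # simulate reordering
    return "".join(chunk for _, chunk in sorted(arrived))

print(route("UCLA", "Utah"))                  # ['UCLA', 'SRI', 'Utah']
print(send("LOGIN REQUEST", "UCLA", "Utah"))  # LOGIN REQUEST
```

The sequence numbers are the key detail: because each packet can travel independently and arrive out of order, the receiving host sorts them back into the original message.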
In 1969, ARPA launched a working prototype of this concept that linked up computers at four research sites in the western United States. The linkage of these four computers, called ARPANET, was the forerunner to the modern Internet.
On August 29th, Labor Day weekend, UCLA received the first switch. At the end of the weekend, on Tuesday—the day after Labor Day—we connected our UCLA host which is an SDS Sigma 7 to the switch—the IMP—and we moved data back and forth from the IMP to the host.
So what does that switch look like? That’s what it looks like now. That is the first switch. That’s IMP number one. That’s what the control panel looks like. If you look inside that’s how beautiful it is. It’s so ugly it’s beautiful.
So we decided to keep a log finally. So if you look inside there’s a very important entry there. What I just circled is the date of the entry. It says October 29, 1969. And the time is at 10:30 at night and the entry says, “Talked to SRI, Host to Host,” and it’s signed by Charley Kline. That’s a record of the first computer-to-computer, host-to-host message ever on the Internet.
What we wanted to do was login from the UCLA host to the SRI host. And to login meant that we would look like a timeshare terminal attached to their machine locally. To do that you type in L-O-G and the remote host knows what you’re trying to do. It will fill in the I-N for you. That’s all we had to do was type L-O-G.
We had Charley Kline at my end and Bill Duvall up at Stanford Research Institute and we had a telephone connection between the two as well as a data connection so they could talk to each other and explain what was going on. So Charley sent the L and said, “Did you get the L?” Bill said, “Yep, got the L.” We sent the O. “Did you get the O?” “Yep, we got the O.” This is technology. We sent the G. “Did you get the G?” Crash! The system crashed!
So, what was the first message ever? Lo, as in, lo and behold. Shortest message you could imagine; most prophetic. Well planned.
Basically, on October 29, at 10:30 at night the first Internet message. That’s when the infant Internet uttered its first words.
Video Credit: IEEE GLOBECOM 50th Anniversary Commemorative Lecture, courtesy of IEEE Communications Society
Still Image Credit: Alexander McKenzie
Vint Cerf and the Development of ARPANET
Vint Cerf worked on the ARPANET project in the 1960s as a graduate student at UCLA. He and his fellow students, under the tutelage of Len Kleinrock, worked as a team to iron out kinks in the proposed system. "We were just rank amateurs," Cerf says, "and we were expecting that some authority would finally come along and say, 'Here's how we are going to do it.' And nobody ever came along." This adventurous spirit drove Cerf and his fellow students to work hard in the hope of building something new and exciting.
Vint Cerf, Chief Internet Evangelist, Google, former ARPANET researcher: “Well, I was very, very lucky as a graduate student. I went to UCLA, primarily to get an education in computing. I had spent two years at IBM and I realized I needed more graduate work on the way computers were designed, operating systems, and so on.”
“So, I went to UCLA, and by good fortune, wound up working as a programmer for a man named Leonard Kleinrock. Kleinrock was a theorist who had come up with the idea of packet switching while he was at MIT, and had done queuing-theoretic analyses of how such systems would behave. So when ARPA, the Advanced Research Projects Agency, decided that it wanted to build a packet switch net and use that to connect the resources of the computer science departments that it was funding so they could share computing resources and share access to software, Kleinrock was the guy nominated to do the network measurement center. And I ended up being one of the principal programmers, writing the software for the first computer that was connected up to the first node of the ARPANET in September of 1969. Well the next node came a month later, and four nodes were in operation by the end of 1969.”
“Bob Kahn, who was one of the principal architects of the ARPANET working (inaudible) at the time, came out to UCLA along with another of his colleagues, Dave Walden, to test this four-node network. They had a theory, Bob in particular had a theory, that there were conditions under which the network would congest or collapse. His colleagues didn’t believe that there was much probability of that. And so my job as the network measurement programmer guy, was to inject traffic into the network that would try to reproduce the theoretical failures that Bob Kahn anticipated, and he was right. We killed the network repeatedly with various kinds of traffic flows. So this was a wonderful opportunity for someone to explore a space that was really unknown, a wide-area network based on packet switching. Conventional thinking at the time in the communications world was that this was a silly idea and it would never work.”
“So the four nodes that were part of this first network were UCLA, UC Santa Barbara, SRI International and the University of Utah. So those were the first four by the end of 1969.”
“I don’t think any of us had any idea what the implications of this work would be in the long run. We knew that we were exploring new territory, we knew that computers were potentially very powerful things. We had heard from visionaries like Doug Engelbart and J.C.R. Licklider, who were at MIT, that you could do things with computers that were not just crunching numbers, that these things could be used to augment human intellect.”
Video Credit: Cliff Braverman, Dena Headlee, Lauren Kitchen and Dana Cruikshank for National Science Foundation
Still Image Credit: José Mercado, Stanford News Service
John Prusinski, Kathryn Prusinski, Michael Conlon and Joan Endres for S2N Media