NSF and the Birth of the Internet — Text-only | Flash Special Report
1980s


Computers Get Personal…Powerful
The 1980s are remembered as the time computers truly changed our everyday lives. Companies such as Atari, Apple, Commodore, and IBM began producing small, affordable computers designed to help people perform everyday tasks like typing letters, balancing their checkbooks, and playing games. By the end of the decade, these ‘personal computers,’ or PCs, were becoming as common as washing machines in many American homes, and millions of workers’ jobs revolved around a desktop PC.

At the same time, incredibly powerful supercomputers were being developed at universities and laboratories. Realizing the potential of these supercomputers to empower scientific research, in 1984 NSF opened supercomputer centers across the country and created a network to link them together. Almost as soon as the centers opened, NSF began thinking of new ways to let even more researchers tap the power these supercomputers had to offer.

Video Transcript:

George O. Strawn, Chief Information Officer, National Science Foundation: “By the late 70s and early 80s, the microcomputer revolution, which we now call the personal computer revolution, started taking over.  And universities, as other places, immediately began buying as many little computers as they could rather than as big of a computer as they could.  This worked fine for people who had to run small computer jobs.  They became masters of their own fate, rather than having to go bow before the central computer people and ask for an account and allocations.  They liked that very well, but the people who had big computer jobs all of a sudden found out they had no place to go.”

Doug Gale, former Program Officer, NSF; former CIO, University of Nebraska: “I was at Cornell when Ken Wilson came into the office of Ken King, who was then the Vice Provost for Computing.  At that point in time, the largest supercomputer in the world was a Cray-1, and Ken Wilson, with a very straight face, said, ‘I need 10,000 Cray-1 equivalents to do my calculations.’  And Ken King, with an equally straight face, said, ‘Oh, that’s nice.  How soon do you need them, Ken?’  And Ken Wilson responded—with a completely straight face because he was absolutely dead-serious—‘I can wait six months.’”

John Connolly, former Division Director, Advanced Scientific Computing, National Science Foundation: “The idea was, okay, we’ll build a few supercomputer centers and then we’ll build networks that will access those supercomputers and eventually those networks were designed over—first of all it was a backbone network, and then there was a bunch of connecting networks that connected the different blocks of places.”

George O. Strawn, Chief Information Officer, National Science Foundation: “The NSF got into the networking business because it got into the supercomputing business.”

Video Credit: Courtesy of National Center for Supercomputing Applications (NCSA)
Cliff Braverman, Dena Headlee, Lauren Kitchen and Dana Cruikshank for National Science Foundation
John Prusinski, Kathryn Prusinski, Michael Conlon and Joan Endres for S2N Media

Still Image Credit: Boffy b

TCP/IP Sets the Standard
Across the world, researchers and scientists began to build their own large computer networks. Although many attempted to develop their own protocols and technologies to run their networks, the tools and concepts developed by researchers in the 1970s—such as TCP/IP and packet switching—became widely adopted. These tools caught on because they were relatively simple, flexible, and easy to implement. Having many networks based on similar protocols would eventually make it easier to link them together, and since no one owned a patent or trademark on TCP/IP, networks were able to run it on a wide range of equipment and software.
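
To make that simplicity concrete, here is a minimal sketch in modern Python of the kind of exchange TCP/IP enables. The loopback address, port number, and message are arbitrary choices for illustration, not anything from NSFNET; the point is that the same few vendor-neutral calls work on essentially any machine.

# A minimal TCP/IP round trip using Python's standard socket library.
# Host, port, and payload are hypothetical; the point is how little
# machinery a vendor-neutral TCP connection requires.
import socket
import threading

HOST, PORT = "127.0.0.1", 9099  # arbitrary loopback endpoint

# Set up the listening socket first so the client cannot race the server.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)

def echo_once():
    conn, _addr = server.accept()      # wait for one incoming connection
    with conn:
        conn.sendall(conn.recv(1024))  # echo the bytes back unchanged

threading.Thread(target=echo_once, daemon=True).start()

with socket.create_connection((HOST, PORT)) as client:
    client.sendall(b"hello, internet")  # TCP segments this into IP packets
    print(client.recv(1024))            # b'hello, internet'
server.close()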

By the beginning of the decade, the Department of Defense had made it clear it did not want to run a national computer network that wasn’t directly related to defense work. That left a demand for a large, robust network open to all—a role eventually filled by NSFNET.

Image: Diagram showing the core of the Internet in August 1987. In a few short years, ARPANET would cease to exist, and the NSFNET backbone would become the center of the Internet.
Credit: Courtesy of BBN Technologies

The Demand for New Networks
In the early 1980s, NSF had several initiatives running to help spread the benefits of networking. One of these efforts was called CSNET, and it linked together several computer science departments across the country using TCP/IP. Another was a network of regional computer networks that linked up universities in different parts of the country. In 1981, universities came together to form BITNET, which allowed thousands of new users to experience innovations such as email and file transfers for the first time. All of these new networks showed the possibilities of computer networks and helped stoke demand for a robust nationwide network like NSFNET.

Video Transcript:

Doug Gale, former Program Officer, NSF; former CIO, University of Nebraska:
“In 1985 I did a survey of my faculty.  I described something that would now sound like the Internet and said, ‘Would you find this useful?’  The moral of the story is never do a survey because you may not like the answer.  And the answers that came back were, ‘No, not particularly.’  One faculty member said, ‘Yeah, it might be useful. Maybe.’  So based on that overwhelming show of support, I burned the survey, and we turned off one of our research mainframes and used the maintenance savings, which was about half a million dollars a year, to start building a campus network.  But I realized by the mid 80s that one of the things we had lost in the microcomputer revolution was the ability, gained through timesharing, to share information.  By introducing distributed computing on microcomputers, we basically started creating islands; islands of computing where people stayed on their island and didn’t talk to people on the other islands.  So there was a sense that we had lost something with distributed computing, and this all pointed back towards networking being key.”

George O. Strawn, Chief Information Officer, National Science Foundation:
“The first thing that NSF did was to offer to computer science departments a program called CSNET.  Because of CSNET, computer science departments—in particular Ph.D.-granting computer science departments—could sign up for one level or another of the CSNET.  Well, it turns out that I was serving as chair of the computer science department at that point so I had the pleasure of signing us up for CSNET.”

Doug Gale, former Program Officer, NSF; former CIO, University of Nebraska:
“About that same time, I sent a proposal—an unsolicited proposal—to the National Science Foundation to create a regional network, MIDNET.  Of course, my immediate goal was to try to provide connectivity for my own faculty but I also politically realized that, you know, a single Midwestern land-grant university wasn’t going to have much political clout to get that kind of investment so I needed allies.  So I talked with folks at other campuses in the Midwest and they said, ‘Oh yeah.  Sure, we’ll go along if you get some money we’ll share some of it.’  I wrote a proposal to the NSF, didn’t hear back for a while.  In the meantime, I’m sure you’ve heard of all of the other things that were going on in the background.  In 1986, I got a letter that said, ‘We liked your proposal.  Can you flesh it out?  And give us a few more details.’  Which we then did and created MIDNET.”

George O. Strawn, Chief Information Officer, National Science Foundation:
“MIDNET became one of the first, if not the first, regional networks to come up, get an approved proposal and connect to the NSFNET backbone.  We had no idea how important this was going to be, but I recall saying at the time, ‘Well this is certainly something we can’t afford not to be involved with because this might work and might work big.’  And in particular, with the backbone at 56 thousand bits per second, the network was pretty congested.  And I would have faculty members coming in and pounding my desk and saying, ‘This network is always congested.  I really can’t use it to get to the supercomputer centers like I thought I could.’  So that’s the way we got started: at low speeds and congested conditions, but you’ve got to start somewhere.”

(Midnet Video - Sound of keyboard typing.)

(Voice of a male computer researcher)
 “And then we type TELNET with the appropriate Cray TELNET address. This will give us access to the Cray.”

(Voice of female narrator)
“In about twenty seconds, this graduate student at a computer terminal at the University of Nebraska—Lincoln will be connected to a supercomputer at the National Center for Supercomputing Applications at Champaign-Urbana.”

(Voice of a male computer researcher)
“And we should be there shortly. And we’re now logged into the Cray supercomputer.”

(Voice of female narrator)
“How? Through a network initiated in the early ‘80s by the National Science Foundation. The NSFNET backbone connects the six national supercomputing facilities and eight regional networks in the United States. One of those hubs is the University of Nebraska—Lincoln, which manages MIDNET. MIDNET currently links 15 universities in the Midwest and has significantly increased the reliability of access to resources. Currently, the National Science Foundation funds both the NSFNET backbone and regional networks such as MIDNET so researchers may use these networks free of charge.”

(Voice of Doug Gale, former Chief Information Officer, University of Nebraska—Lincoln)
“Well, NSF funding isn’t going to last forever. At some point, the regional networks are going to have to become self-supporting. It costs about 500 times less to transmit a bit of information over MIDNET than it costs over dial-up telephone lines. Since researchers will commonly spend thousands of dollars every year for telephone charges, these cost savings can be very important.”

(Voice of female narrator)
“Through MIDNET, researchers can not only log into any of the supercomputers on the NSFNET backbone, but also into thousands of computers throughout the country. They can send and receive electronic mail, and can transfer files to any of these computers.”

(Voice of Gordon Gallup, Professor of Chemistry, University of Nebraska—Lincoln)
“I’m really very thankful that I can get onto MIDNET. I’d rather have a Cray sitting here in my office, although it’s a little small, but this is really the only way that I can get on to a supercomputer. Public terminal rooms are really the pits. It’s nice that people can access supercomputers from their own offices.”

(Sound of keyboard typing.)
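
The TELNET session narrated above boils down to a terminal opening a TCP connection to port 23 on a distant machine and relaying keystrokes and output. A minimal sketch in Python, with a hypothetical hostname standing in for the Cray’s address:

# What "type TELNET with the appropriate Cray TELNET address" amounts to:
# open a TCP connection to the standard Telnet port and read the server's
# first bytes. The hostname is hypothetical; a real Telnet session would
# also answer the IAC option-negotiation bytes before a login prompt.
import socket

with socket.create_connection(("cray.ncsa.example", 23), timeout=10) as s:
    print(s.recv(256))  # greeting/option bytes from the remote host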

Video Credit: Cliff Braverman, Dena Headlee, Lauren Kitchen and Dana Cruikshank for National Science Foundation
University of Nebraska-Lincoln
John Prusinski, Kathryn Prusinski, Michael Conlon and Joan Endres for S2N Media

Still Image Credit: National Science Foundation

NSFNET is Born
In the mid-1980s, NSF decided the time was right to try to link its regional university networks and its supercomputer centers together. This initial effort was called NSFNET.
By 1987, participation in the new NSFNET project had grown so rapidly that NSF knew it had to expand the capacity of this new network. In November of that year, it awarded a grant to a consortium of IBM, MCI, and a center at the University of Michigan called Merit to create a network of networks—or inter-net—capable of carrying data at speeds up to 1.5 megabits per second. By July 1988, this new system was up and running. The modern Internet was born.
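
For a rough sense of scale of the two backbone speeds in this story, the original 56-kilobit-per-second lines versus the 1.5-megabit-per-second T1 upgrade, here is a back-of-envelope calculation. The one-megabyte file size is an arbitrary example, and real throughput would have been lower once protocol overhead and congestion were counted.

# Idealized transfer times at the two backbone line rates discussed in
# this section. The file size is hypothetical; the rates are from the text.
file_bits = 1_000_000 * 8  # a 1-megabyte file

for name, bps in [("56 kbit/s backbone", 56_000),
                  ("1.5 Mbit/s T1 backbone", 1_544_000)]:
    print(f"{name}: about {file_bits / bps:.0f} seconds")
# 56 kbit/s backbone: about 143 seconds
# 1.5 Mbit/s T1 backbone: about 5 seconds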

Video Transcript:

John Connolly, former Division Director, Advanced Scientific Computing, National Science Foundation:
“If you read the history of the Internet right now, the Defense Department takes all the credit. Well, what we did at NSF is we borrowed their technology, we took it out of the military labs, and opened it up to everybody.”

George O. Strawn, Chief Information Officer, National Science Foundation:
“And created the NSFNET program, which was going to be a national backbone to connect research universities to the supercomputer centers.”

Eric M. Aupperle, President Emeritus, MERIT Networks: “The foresight and the effort behind the NSFNET program, in the form of Dennis Jennings and Steve Wolff and others, were very instrumental in my view in creating today’s Internet.”

George O. Strawn, Chief Information Officer, National Science Foundation:
“On the drawing board was an international standard called OSI networking, which was in the process of being hammered out, which was going to provide a global standard for vendor-neutral networking. But, since that was a political process of getting people to agree on what should be in it and how it should work, it was not at all ready. The only networking protocols that were ready were the DoD ARPANET protocols, called TCP/IP.”

John Connolly, former Division Director, Advanced Scientific Computing, National Science Foundation: “What we used was the TCP/IP protocols and the packet-switching that they had developed.”

Eric M. Aupperle, President Emeritus, MERIT Networks: “And that was very insightful in terms of creating the kind of collaboration and ability to interact, because that meant the manufacturers of computers were also required to install TCP/IP protocols in their computers.  And with that, if you will, that created a de facto standard that allowed everyone to collaborate and communicate.”

John Connolly, former Division Director, Advanced Scientific Computing, National Science Foundation: “And that worked pretty well.”

Hans-Werner Braun, former Principal Investigator for NSFNET at MERIT Networks: “The original 56 kilobits-per-second NSFNET backbone, using Fuzzball-based LSI-11 nodes, became operational in mid-1986 between the five new NSF supercomputer centers and the National Center for Atmospheric Research. The rather open-access nature of the NSFNET—a new routing paradigm—constituted a challenge for the Internet, which until then was basically a hierarchical structure centered around the ARPANET. With the NSFNET, there were suddenly multiple national backbones with very different administration and modes of operation. In my opinion, the NSFNET activities most certainly propelled the Internet out of the U.S. Department of Defense research context, and paved the way towards today’s global and very broad cyberinfrastructure.”

John Connolly, former Division Director, Advanced Scientific Computing, National Science Foundation: “I said, ‘Well, what if you open a system and nobody uses it, right?’ But that didn’t happen. I mean, people found ways to connect. They would dial up, or they would use an old thing called BITNET, and you know, eventually our network came on line. And pretty soon, we could see the curve going toward saturation. And I said, ‘Ah, okay, it works.’”

Video Credit: Cliff Braverman, Dena Headlee, Lauren Kitchen and Dana Cruikshank for National Science Foundation
John Prusinski, Kathryn Prusinski, Michael Conlon and Joan Endres for S2N Media
Courtesy of National Center for Supercomputing Applications (NCSA)
Courtesy of Hans-Werner Braun

Still Image Credit: Courtesy of Hans-Werner Braun

Private-Public Innovation
When NSF set out to create NSFNET, it wanted to avoid the limitations and restrictions that had constrained ARPANET.  NSF decided that the network should eventually become financially self-sustaining and not dependent on government funding or control. At the same time, NSF wanted the network to be able to grow quickly and accommodate as many users as possible.

NSF came up with an innovative way to satisfy both requirements by awarding the NSFNET grant to a team of private companies and public universities. By giving private industry an incentive to participate in—but not control—the network, and by encouraging anyone who wished to join the network to connect, NSF laid the groundwork for the Internet’s future success and explosive growth.

Video Transcript:

Eric M. Aupperle, President Emeritus, MERIT Networks: “One of the factors I think was important for us to be able to bring up the T1 NSFNET network was that after we turned the solicitation in, in mid-August of ’87, the folks from IBM and MCI and MERIT continued to meet and plan for the eventuality that we might have the responsibility of trying to bring the network up in a short period of time.”

Stephen Wolff, former Division Director, Networking and Communication Research and Infrastructure, National Science Foundation: “There was an extraordinary commitment from MCI, from IBM, from MERIT and also from the state of Michigan—who put money into this partnership as well.  It was clear from the outset that IBM thought it was learning something new about networking that it didn’t know.  It was also clear that MCI was learning about data networking, which it had zero experience in at this point.  So both the major providers were learning something from this experience, but above and beyond that, they seemed dedicated to make it work.”

Doug Gale, former Program Officer, NSF; former CIO, University of Nebraska: “What was involved in putting in the T1 backbone?  I can’t think of any single entity that could have pulled it off.  I don’t think the government sector would have been able to do it very easily.  We could have, but it would have been an incredible expenditure of money.  I don’t think the private sector had the right set of skills and higher ed didn’t have the right set of skills, but the mix of the three was just phenomenally powerful.”

Eric M. Aupperle, President Emeritus, MERIT Networks: “At all levels, there was always a dedicated commitment to do whatever it takes, or whatever it took, to assure that the capabilities of the backbone met the dramatic growth challenges that we experienced.”

Stephen Wolff, former Division Director, Networking and Communication Research and Infrastructure, National Science Foundation: “I personally never have participated in a partnership that worked as well as that one for as long as that one did.  It was a truly cooperative enterprise.”

Doug Gale, former Program Officer, NSF; former CIO, University of Nebraska: “The way people worked together, contributed… things weren’t done on the basis of a contract.  Things were done on the basis of what needed to be done.  It was a spectacular success.”

MCI/IBM Video

Narrator: The world of tomorrow will be as different from today as ours is from the Renaissance.  A new world of communications moving at the speed of light; a quantum leap in knowledge.  But how can we release these new ideas?  The spoken word is too limiting, the written word is too slow.  Ideas moving at the speed of light transcend the old barriers of time and distance.  To share these ideas with more people, we need a new approach—a network designed for tomorrow.

And just as we need railroads and highways to move our manufactured goods, we need faster, more powerful computer networks to move our ideas.

Ira Richer, Ph.D., Defense Advanced Research Projects Agency [DARPA]:
“First, we want to provide more institutions with access to networks.  Second, we want to provide institutions with higher speed access to networks because more people at those institutions are using the network and new applications, themselves, require greater capacity.”

Jane Caviness, Program Director, NSFNET, National Science Foundation: “I think that with this network we do begin to change.  We have already begun to change our way of doing science.  We collaborate with people we could not normally collaborate with.  We can get our information—our data—much more quickly and we no longer feel limited by where we are working physically because we have access to resources across the country.”

Narrator: NSFNET has already linked the major regional networks and supercomputer sites in the country.  What is needed now is a network that is a thousand times more powerful.  By connecting researchers from all sectors—higher education, business and government—we can maximize our research dollars and hasten the transfer of research results to our economy and society.

What we’ve discovered so far with the national networks we’ve developed—as information transfer and idea exchange move forward—is that people start working together in new ways, and they can carry out entirely new kinds of projects.

America leads the world in scientific research.  But we are in danger of losing our lead without a more powerful network.

Charles House, General Manager, Software Engineering, Hewlett-Packard: “I think the notion of combining government, industry and the universities is a powerful one.  I think we’ve seen the value in Silicon Valley of bringing a university and industry consortium together.  I think for a lot of the kinds of things that we think of for supercomputing, you need the financial resources of the government.  An industry can’t do it, a university can’t do it, but a consortium of the several can do it.  When you realize that a lot of this has to be for the public good, it’s really a sort of common science—if you will—that we’re trying to create.  I think it’s absolutely vital that the three be put together.”

Larry Leifer, Ph.D., Director, Rehabilitation R&D, Palo Alto Veterans Hospital: “Networking between people through computers and computer networks is the goal and the payoff, as I see it, in this work.  I don’t see it solely or even primarily as an issue of running those computer models for the knowledge they feed back, for in a way, they only feed back data—kind of naked stuff.  What matters is that people can communicate with themselves and with others very rapidly.”

Douglas E. Van Houweling, Ph.D., Vice Provost, Information Technology, University of Michigan: “We have a major dependence in this nation on the creation and the transmission of ideas.  The future of this nation is at stake.  The benefits from this network don’t flow to individual people or to individual organizations.  They flow to the whole nation.  And for us to finance that new infrastructure, the federal government will have to provide resources that represent resources from the whole nation.”

Video Credit: Cliff Braverman, Dena Headlee, Lauren Kitchen and Dana Cruikshank for National Science Foundation
John Prusinski, Kathryn Prusinski, Michael V. Conlon and Joan Endres for S2N Media
Verizon Business and IBM

Still Image Credit: Courtesy of Hans-Werner Braun

NSFNET Grows a Backbone
Almost as soon as the initial NSFNET upgrade went live in 1988, it was overwhelmed by new demand as more users came online. Soon the network was growing at a pace of 10 percent a month. NSF began funding more upgrades to the system’s backbone—the connection lines between the network’s major hubs.
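
Growth rates like these compound dramatically. As a quick illustration, here is simple arithmetic on the rates quoted in this section, nothing more:

# Compounding the monthly traffic growth rates quoted in this section:
# 10%/month is quoted here, and 20%/month for the T1 era below.
for monthly in (0.10, 0.20):
    yearly_factor = (1 + monthly) ** 12
    print(f"{monthly:.0%}/month -> about {yearly_factor:.1f}x traffic per year")
# 10%/month -> about 3.1x traffic per year
# 20%/month -> about 8.9x traffic per year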

During the late 1980s, the NSFNET partners worked to find ways to accommodate this growth. Their work laid the groundwork for the Internet’s future growth.

Video Transcript:

Stephen Wolff, former Division Director, Networking and Communication Research and Infrastructure, National Science Foundation: “The 56-kilobit-per-second backbone was terribly overloaded.”

Jane Caviness, former Program Officer, National Science Foundation: “Basically, people weren’t doing things that required network access, just because it was so frustrating.”

Stephen Wolff, former Division Director, Networking and Communication Research and Infrastructure, National Science Foundation: “Traffic kept on building, users kept on coming on.  So it was important to get the T1 network up and running as quickly as we could.”

Hans-Werner Braun, former Principal Investigator for NSFNET at MERIT Networks: “MCI thought we were pretty crazy wanting unchannelled 1.5 megabit-per-second T1 links instead of multiplexing them down to 56 kilobit-per-second voice-grade circuits.”

Stephen Wolff, former Division Director, Networking and Communication Research and Infrastructure, National Science Foundation: “The whole schedule—the solicitation, the proposal preparation, the proposal review, and the implementation—was all highly compressed.”

Hans-Werner Braun, former Principal Investigator for NSFNET at MERIT Networks: “We were working hard with MERIT’s dedicated staff and our partners, IBM and MCI, to make the project succeed.  Lots of new ground to cover.  The backbone nodes—which IBM built—were basically architected from scratch, and were really a distributed system of nine small computers plus various other parts.”

Stephen Wolff, former Division Director, Networking and Communication Research and Infrastructure, National Science Foundation: “It was only due to heroic efforts by primarily Dave Mills at the University of Delaware who tweaked the routers all night, every night, to make them run.”

Hans-Werner Braun, former Principal Investigator for NSFNET at MERIT Networks: “Both IBM and MCI worked with us through all of the issues.  We all made it work, on time and within budget, for the National Science Foundation.  We worked a lot of long hours.  We also got a lot of flak from people in the Internet community then—a number of whom would just not believe that we would be able to make it work at all.  All of us worked side-by-side, but all of us also played our roles.  IBM delivered the backbone nodes, MCI provided the implementation of the T1 circuits, and MERIT made it all work as a system.”

George O. Strawn, Chief Information Officer, National Science Foundation: “And the minute the T1 backbone—the 1.5 megabit-per-second backbone—went into service, all of a sudden, the complaints from our faculty at Iowa State stopped.”

Jane Caviness, former Program Officer, National Science Foundation: “When the T1 backbone went live on the first of July of ’88, the growth was basically a consistent 20 percent a month in traffic, which is something we thought would start to tail off, and it didn’t.”

George O. Strawn, Chief Information Officer, National Science Foundation:
“The next level of telephone speeds was called T3, which is 45 million bits per second—quite a bit faster than T1’s 1.5 million.  So we authorized the providers—MERIT and its partners—to create a T3 backbone to first supplement, and then replace, the T1 backbone.”

Hans-Werner Braun, former Principal Investigator for NSFNET at MERIT Networks: “When we told them that within a few years we need unchannelled 45 megabit-per-second DS3 links, they thought we were completely insane.”
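
The T1 and DS3 figures in these quotes come from the standard telephone carrier hierarchy. A quick sketch of the arithmetic, using textbook line rates rather than NSFNET measurements:

# Standard digital-carrier line rates behind the T1 and DS3 speeds above.
voice_channel = 64_000                # one digitized voice circuit, bits/s
t1 = 24 * voice_channel + 8_000       # 24 channels + framing = 1,544,000 b/s
ds3 = 28 * t1 + 1_504_000             # 28 T1s + multiplex overhead = 44,736,000 b/s
print(t1, ds3, round(ds3 / t1))       # 1544000 44736000 29
# Running a link "unchannelled" means using the whole pipe as one data
# circuit instead of carving it into 56/64-kilobit voice channels.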

George O. Strawn, Chief Information Officer, National Science Foundation: “Well, it turned out to be sort of hard.  The T1 network had gone in very easily, with very few bugs.  And the T3 network turned out—at those higher speeds—to have other bugs, called gray links and black links, that took real network sleuths to figure out how to make it work properly.”

Video Credit: Cliff Braverman, Dena Headlee, Lauren Kitchen and Dana Cruikshank for National Science Foundation
John Prusinski, Kathryn Sharar Prusinski, Michael Conlon and Joan Endres for S2N Media

Still Image Credit: Courtesy of Hans-Werner Braun