Dr. Arden L. Bement, Jr.
National Science Foundation
Remarks at Dedication of New Terascale System
Pittsburgh Supercomputing Center
July 20, 2005
Thank you, Peter.[1] Hello, everyone. Thank you for inviting me to be part of this occasion, and to share the lectern with so many distinguished guests.
I was born in Pittsburgh; actually, very near here, in Dorseyville. It's always nice to return -- even for a short time.
Today, we are celebrating more than the advent of a computing system. We are fulfilling an important national goal -- providing one of the fastest computing capabilities to the US research community.
We've come a long way since the days when a mainframe computer took 3-5 seconds to perform a simple multiplication. The system we dedicate today is capable of an amazing 10 trillion multiplications per second!
With the ability to move data at 40 gigabits per second, I think we must be approaching warp speed.
I marvel at the changes since my initial forays into scientific computing, at the University of Michigan in the late 1950s. That was a period when top research universities were just beginning to integrate computer problem-solving into science and engineering courses. Michigan was a leader in this movement, with its pioneering program called MAD -- Michigan Algorithm Decoder.
Computational science was in its relative infancy, and a wondrous world of capability beckoned.
That exciting world kept us busy during the day -- and at night, because it was also an era of time-consuming punch cards and batch processing, when a student submitted each program on a "deck of cards" and waited several hours to find out whether it worked. I remember many nights when sleep took a lower priority than correcting errors and resubmitting the deck in order to meet a morning deadline for an exam or final project.
We are still submitting scientific problems to computers that run 24 hours a day, 365 days a year. Yes, we've had a few improvements in speed and capability -- roughly equivalent, you might say, to the change from the horse-and-buggy era to the age of personal space travel.
But one thing that hasn't changed is the need for leading-edge tools that exploit current knowledge, and advance it -- tools such as this one. And the need for hard-working, innovative scientists and engineers, like those who shaped this system into its current configuration.
As you well know, NSF's business is the work of scientists and engineers. Our mission is to advance the frontiers of science and engineering. We are unique in our commitment to risky science and to proposals that pose big, unanswered questions. Our primary task is to dog the frontier -- with more inquiry, more questions, and, collaterally, with more expertise and capability.
This system, which you have so capably brought to fruition, represents a significant leap in our ability to question, discover, and analyze: a leap to new understanding and the more rapid advancement of science.
NSF's strategy is two-fold: to provide leading-edge tools that enable frontier research, and to drive the frontiers of capability itself. Nowhere is this more true than in the computer and information sciences and engineering.
We aim, first, to provide broad access within the research and education community to a comprehensive suite of computer and communication resources. And, second, to drive the research needed to keep the nation's computer and communication resources at the forefront of worldwide capabilities.
Developing these resources is one of the most important investments of the early 21st century, both for science and engineering and for the nation.
As we forge ahead toward larger capacities and more powerful capabilities, we must proceed thoughtfully, to consider the changing landscape and the full range of needs -- data storage, transfer, and analysis; visualization; user interfaces; compatibility; security, and a host of other challenges.
Today's computing tools and services must keep pace with the changing nature of research; for example, the move to "big data" and "e-science."
The information flowing from a multitude of sensor arrays, networked observatories, and other instruments will yield rich results if we can move, digest, and manage it. As one scientist recently said, our instruments are "generating terabytes of data per day yielding only kilobytes of knowledge per month."[2] Valuable knowledge is waiting to be unearthed in the data we have already amassed, let alone the data we continue to collect.
The challenge extends beyond compilation and storage, to the retrieval, manipulation, and analysis of terabytes, and even petabytes, of data. And so, NSF-funded research includes not only modeling the burn-rate of supernovae and visualizing the folding mechanisms of proteins, but developing algorithms for swift data transfer and science-driven analytics.
The richness of data, combined with powerful computing facilities and innovative people, promises a multitude of scientific breakthroughs. The Pittsburgh Center has all of these.
Another feature of the changing landscape is the interconnected nature of research. Just as important as getting systems up and running is connecting the dots.
Compatible connections and communications are essential to interdisciplinary research. They are essential to extending computing power to more institutions and regions of the country. And they encourage the international partnerships that will keep the United States at the forefront of the global science and engineering enterprise.
NSF's goal is to assemble leading-edge components into an interoperable cyberinfrastructure -- a set of geographically distributed computational resources that can be accessed individually or collectively.
To the user, the location of the CPUs is irrelevant: the computations are performed efficiently and seamlessly. In that regard, cyberinfrastructure will join the ranks of traditional infrastructures that we take for granted: the electrical grid, a clean water supply, the interstate highway system.
Already, with TeraGrid, we have achieved a magnificent melding of sophisticated systems into a larger, even more transformative tool.
At NSF, we recognize that we must address several community needs in parallel:
- Provide leading-edge capabilities to meet current computational needs.
- Support the pioneers who are developing future components and assimilating them into the cyberinfrastructure.
- And, finally, support the science and engineering transformations that cyberinfrastructure is enabling.
Above all, we must ensure that our infrastructure can adapt to the changing frontier.
Earlier this year, NSF formed a working group to address the agency's role in developing and maintaining the science and engineering cyberinfrastructure.[3] The group included experts from each of the research directorates.
The working group developed strategies to strengthen partnerships among government, business and industry, and academia. They considered management structures for NSF's cyberinfrastructure investments. And they examined ways to coordinate investments in research and education activities that span multiple disciplines.
You will be hearing more about NSF's cyberinfrastructure strategy in the near future. At the same time, we will be listening to you.
The science and engineering community is both consumer of, and conduit to, future capabilities. You are the ones who anticipate, and identify the need for, frontier research and tools. Our job at NSF is to listen, evaluate, and prioritize.
Thus, even as we applaud today's premiere, we anticipate the clamor for tomorrow's enhanced performance. While we continue making acquisitions and awards for terascale systems and grid networks, we are already making initial forays into petascale research. That's 1,000 times the capability of a terascale system, the first of which was introduced only 10 years ago!
Coordinating these resource commitments is increasingly important in an era when so many science and engineering disciplines are embracing the modeling, simulation, and visualization power of high-performance computing and information systems.
We are dazzled by the glow of the many and varied stars within the portfolio of the Pittsburgh Supercomputing Center. With the LeMieux system, researchers have tackled problems ranging from predicting thunderstorms and examining water's molecular structure to developing algorithms for solving inverse problems, such as wave propagation through bone.
And already, hundreds of processors have put the Center's newest system to the test, probing the structure and dynamics of proteins, the sun's turbulence, and the labyrinth of quantum chromodynamics.
With this system, we celebrate a significant leap in science and engineering research and education capacity. We celebrate the ability to accommodate a changing landscape, with features from many disciplines and observations from many perspectives.
We thank the hard-working staff of the Pittsburgh Supercomputing Center for their labor and diligence in acquiring, and working with Cray to adapt, our newest star performer. We salute Carnegie Mellon University and the University of Pittsburgh, as well as Westinghouse Electric, for providing infrastructure and support. We welcome the academic and economic benefits afforded these universities, the region, and the state.
We acknowledge the partnerships that add value to the system's performance: the exchange of information with industry about science and engineering applications; the networking needed to move findings into development; and the commitment by all parties to training a skilled, industry-relevant workforce.
Most of all, we look forward to the vastly expanded knowledge of the future. Your collective efforts will move this nation forward.
I wish you speed, agility, and, above all, results.
[1] Peter Freeman, NSF Assistant Director for Computer and Information Science and Engineering.
[2] "Pathways for Cyberinfrastructure," NSF Directorate for Computer and Information Science and Engineering, Feb. 8, 2005.
[3] Cyberinfrastructure Initial Implementation Working Group.