
Symposium Proceedings

June 1996


National Science Foundation


The June 1996 National Science Foundation Symposium on Learning and Intelligent Systems brought together recognized experts--in the biological, behavioral, social, mathematical, and computer sciences as well as engineers and educators--in a landmark discussion of the opportunities for interdisciplinary research in this broad and important field. The presenters and discussion participants represented industry, academia, foundations, and government organizations and focused on the potential for major advancements in collaborative efforts across academic disciplines and across types of business, research, and government organizations.

The invited presenters were Dr. James A. Anderson of Brown University, Dr. Victor Zue of Massachusetts Institute of Technology, Dr. Louis Gomez of Northwestern University, Dr. Paula A. Tallal of Rutgers University, and Dr. Gerhard Fischer of the University of Colorado. Dr. Anne C. Petersen, then Deputy Director of the National Science Foundation, had asked speakers to focus on several key issues and to address these issues from the perspectives of both their current organizations and their own individual, often interdisciplinary, careers. The issues were:

The morning presentations were followed by an afternoon symposium on ways to pursue opportunities for this type of collaborative, interdisciplinary research. The participants included members of the NSF Director's Policy Group, external guests, and speakers from the morning session. A listing of both presenters and discussion participants follows this preface.

NSF's basic mission is threefold: To enable the United States to uphold a position of world leadership in all aspects of science, mathematics, and engineering; to promote the discovery, integration, dissemination, and employment of new knowledge in service to society; and to achieve national excellence in science, mathematics, engineering, and technology education at all levels. The LIS initiative, announced in November 1996, is one of the three areas of focus for NSF's Knowledge and Distributed Intelligence initiative. LIS was formed with the objective of energizing radical and rapid advances in our understanding of learning, creativity, and productivity, as well as developing the tools that will enhance the human ability to learn and create. Two symposiums--the first in September 1995 and the second, documented here, in June 1996--were instrumental in launching this initiative, providing a forum for critical discussions of short- and long-term objectives, potential grant programs, barriers to interdisciplinary research, and current strides in the spheres of learning and education.

The initiative is designed to support high-risk interdisciplinary research not otherwise funded under existing NSF programs. It incorporates NSF's Collaborative Research on Learning Technologies (CRLT) Program, which continues to support projects that creatively integrate basic research in education with basic research in information technology. In FY96, NSF made program planning and research awards under the CRLT Program, as a first step in defining the scope of the LIS initiative. In FY97, NSF made available $19.5 million for collaborative, interdisciplinary research under this initiative. Recently, NSF has awarded 28 new grants worth over $22.5 million to develop a deeper understanding of how learning occurs in humans, animals and artificial systems.

Over the years, interest and effort in this area have grown throughout NSF as well as in other parts of the federal government, private foundations, and the research laboratories and board rooms of corporations across the country. From the presentations and discussions recorded in this report, it is clear that this interest continues to grow and solidify and has the potential for significant advances such as educational applications that actually improve student performance; the further development of automated and interactive systems; enhanced reading instruction programs; commercial products that perform medical diagnoses; the application of machine learning to data mining and the control of large telecommunication networks; and speech development tools for language impaired individuals.

The initiative is managed by an appointed LIS Committee and involves six NSF Directorates: Biological Sciences; Computer and Information Science and Engineering; Education and Human Resources; Engineering; Mathematical and Physical Sciences; and Social, Behavioral, and Economic Sciences. This broad, systemic commitment stands to deliver significant payoffs. NSF expects to see major advances not only in the specific fields of LIS research, but also in improved methods and techniques for effective interdisciplinary research in other areas of endeavor.

Invited Speakers and Guests


Dr. James A. Anderson, Brown University

Dr. Kirsty Bellman, Defense Advanced Research Projects Agency

Dr. Joseph Bordogna, National Science Foundation

Dr. William P. Butz, National Science Foundation

Dr. Paul Chapin, National Science Foundation

Mr. Paul Chatelier, Office of Science and Technology Policy

Mr. William Clancey, Institute for Research on Learning

Dr. Robert Correll, National Science Foundation

Dr. Gerhard Fischer, University of Colorado

Dr. Larry Goldberg, National Science Foundation

Dr. Louis Gomez, Northwestern University

Dr. Beatrix A. Hamburg, William T. Grant Foundation

Dr. William C. Harris, National Science Foundation

Dr. Eric Horvitz, Microsoft Corporation

Mr. Thomas Kalil, The White House

Dr. Norman H. Kreisman, U.S. Department of Energy

Dr. Neal Lane, National Science Foundation

Dr. Yann A. Le Cun, AT&T Laboratories

Dr. William Lester, Jr., National Science Foundation

Dr. Cora Marrett, National Science Foundation

Dr. Peggy McCardle, National Institutes of Health

Dr. Mark L. Miller, Apple Computer, Inc.

Mr. Phillip Milstead, NASA

Dr. Ernest Moniz, Office of Science and Technology Policy

Dr. Anne C. Petersen, National Science Foundation

Dr. Nora Sabelli, National Science Foundation

Dr. Herbert A. Simon, Carnegie Mellon University

Ms. Vivien Stewart, Carnegie Corporation of New York

Dr. Phil Stone, U.S. Department of Energy

Dr. Gary Strong, National Science Foundation

Dr. Cornelius Sullivan, National Science Foundation

Dr. Paula A. Tallal, Rutgers University and Scientific Learning Principles

Dr. Luther Williams, National Science Foundation

Dr. Paul Young, National Science Foundation

Dr. Steven F. Zornetzer, Office of Naval Research

Dr. Victor Zue, Massachusetts Institute of Technology

Table of Contents

Dr. James A. Anderson, Brown University
Interdisciplinary Research in the Brain Behavioral Sciences

Dr. Gerhard Fischer, University of Colorado
Lifelong Learning

Dr. Louis Gomez, Northwestern University
Interdisciplinary Transformations in Teaching and Learning

Dr. Paula A. Tallal, Rutgers University and Scientific Learning Principles
Interdisciplinary Research Projects in Neuroscience

Dr. Victor Zue, Massachusetts Institute of Technology
Interdisciplinary Research and Human Language Technology

Dr. Anne C. Petersen, National Science Foundation
Afternoon Introduction--Welcome and Objectives

Dr. Herbert A. Simon, Carnegie Mellon University
Scientific Opportunities of Learning and Intelligent Systems

Dr. Paul Young, National Science Foundation
Overview of NSF Learning and Intelligent Systems Program and Vision


Dr. James A. Anderson

Brown University

Interdisciplinary Research in the Brain Behavioral Sciences

I am going to talk about interdisciplinary research in the brain behavioral sciences. I have good news for you: It has been tried before and it works. What I am going to do is describe an example from 50 years ago that not only worked, but had a major impact on engineering and industry. As you will see, it covers roughly the same areas as the NSF Learning and Intelligent Systems Initiative. It has a great cast of characters and mammoth government support, as well.

If somebody called me a "prototypical interdisciplinarian," I think I would take it as a compliment, but I am not sure. I have been affiliated in one way or another with departments of physics, biology, physiology, mathematical psychology, psychology, applied math and neuroscience, just to hit the highlights. I am now in the Department of Cognitive and Linguistic Sciences. I have been doing the same thing all along; it is the titles that have been changing. This is not atypical for people in the brain sciences.

When I see people start interdisciplinary initiatives, such as this one on Learning and Intelligent Systems, first I find it amusing and then I get heartburn. The reason is that you never find anyone who has bad things to say about interdisciplinary research. Everybody loves it--administrators, bureaucrats, funding agencies. The critical question for this group is this: If everyone loves interdisciplinary work so much, why does it have so much trouble getting supported?

Historical Perspectives

Let me provide some historical perspectives on interdisciplinary work in the brain and cognitive sciences. As a graduate student at MIT in the 1960s, I spent several years in the Research Laboratory of Electronics and in the Communications Biophysics Laboratory where there was a sort of "brain understanding business" as part of the atmosphere. People, at least then, saw a natural grouping form out of neuroscience, cognitive science, parts of computer science, parts of statistics, animal behavior, linguistics and electrical engineering. These are the people concerned with how brains compute and what they compute, in some very general sense. Brain understanding is a small part of neuroscience. Much of the work in neuroscience, probably most of it, is collecting facts about the brain.

Now this constellation of areas is sometimes called "this stuff" by people in the field and the tradition goes way back. It encompasses roughly the areas covered by the Learning and Intelligent Systems Initiative, with the exception of education. This area has a long history.

In the 1940s through 1950s, the brain understanding business was deeply involved with groups at MIT centered around Warren McCulloch and Norbert Wiener. When I prepared for this talk, I went back and looked at Wiener's introduction to his classic work, Cybernetics, from 1948. This book had an immense impact in many areas of science and engineering and was very influential in brain understanding. Consider some of the chapter titles: "Computing Machines and the Nervous System," "Cybernetics and Psychopathology," "Information, Language, and Society," "Brain Waves and Self-Organizing Systems." What many people do not know is that Cybernetics grew directly out of an early, consciously interdisciplinary project. It had a huge payoff. The introduction itself contains a very astute discussion of interdisciplinary initiatives and how they work.

Before World War II, Wiener participated in a series of round table discussions organized by Arturo Rosenblueth at Harvard Medical School. What I am going to do is discuss a little bit of how this worked out. However, before we get to that, let us look again at the chapter titles from Cybernetics and how you might translate them into modern terminology. There are elements of cognitive neuroscience throughout the entire area.

Chapter Titles From Cybernetics -> Modern Terminology
Computing Machines and the Nervous System -> Neurocomputation
Cybernetics and Psychopathology -> Computational Models for Psychopathology
Information, Language, and Society -> Intelligent Machines, Human Computer Interactions
Brain Waves and Self-Organizing Systems -> Computational Neuroscience, now Non-Linear Dynamical System Models

Why Is Interdisciplinary Work Essential?

First of all, you cannot know it all yourself. Consider this quote from Wiener: "For many years, Dr. Rosenblueth and I had shared the conviction that the most fruitful areas for the growth of the sciences were those which had been neglected as a no-man's land between the various established fields. Since Leibniz, there has perhaps been no man who has had a full command of all the intellectual activity of his day. Since that time, science has been increasingly the task of specialists, in fields which show a tendency to grow progressively narrower" (p.8). That was true in 1948; it is even more true today.

A second reason is that territoriality is bad. Another quote from Wiener: "A man may be a topologist or an acoustician or a coleopterist. He will be filled with the jargon of his field and will know all its literature and all its ramifications, but more frequently than not, he will regard the next subject as something belonging to his colleague three doors down the corridor" (p.8). Again, true in 1948 and true now. There is a culture, especially in universities, that really exalts narrow expertise. Scientists tend to form exclusionary cultures with great ease and frequency. Cultures, almost by definition, tend to be transparent, to become visible only in contrast with competing cultures. A sample comment: "I dislike this work, not because the writer is from another tribe, but because it does not meet the high and long-established standards of our field." You see this in review panels, journal editorial boards and academic departments, with varying degrees of virulence. It takes many years and several initiation ceremonies to become an expert. Outsiders, of course, have trouble breaking in.

A third reason is that someone else is liable to know something interesting. From Wiener's introduction: "There are fields of scientific work ... which have been explored from the different sides of pure mathematics, statistics, electrical engineering and neurophysiology; in which every single notion receives a separate name from each group; and in which important work has been triplicated or quadruplicated; while still other important work is delayed by the unavailability in one field of results that may already have become classical in the next field" (p.8). One interesting thing about this quote is that it contains most of the key disciplines that appear in modern brain understanding. Wiener himself added computer science and psychology later in the introduction. The engineering aspects involved in building smart machines were what cybernetics was all about. It was also ultimately the goal of the ENIAC computer project at the Moore School. This field was very well supported: It was World War II and the amount of money for high-risk, high-gain projects was enormous.

Another reason interdisciplinary work is essential is that it is where the treasures lie. From Wiener: "It is these boundary regions of science which offer the richest opportunities to the qualified investigator" (pp. 8-9). There are ample reasons to cut across boundaries. If somebody resolves your problem somewhere else, why not use their solution? This strategy is applied routinely in neuroscience. Electrical engineering has had an immense impact on concepts and techniques in neuroscience and in cognitive science. Neural networks, my own field, will, we hope, return the favor one of these days. Computer science has a profound impact, occasionally good, on cognitive science.

Why Isn't Everyone Doing Interdisciplinary Research?

If work at the boundaries is where the good stuff is, why isn't everyone doing it? That is the question for today. Wiener points out the classic problems, the ones we still have before us today:

"They [the boundary areas] are at the same time the most refractory to the accepted techniques of mass attack and the division of labor" (p.9).

So, interdisciplinary work is just hard to do. This is the one thing consistently underestimated by people who try to get going--it is harder to do and consequently takes longer than you can imagine. From Wiener, again: "If the difficulty of a physiological problem is mathematical in essence, 10 physiologists ignorant of mathematics will get precisely as far as one physiologist ignorant of mathematics, and no further. If a physiologist who knows no mathematics works together with a mathematician who knows no physiology, the one will be unable to state his problem in terms that the other can manipulate, and the second will be unable to put the answers in any form the first can understand" (p.9). You have to understand in detail what the other person is talking about. This is hard and this takes time.

If you have achieved distinction in one field and then go into another one, you are back at the bottom of the heap. People who achieve success do not like to be at the bottom of anything. It is hard to go through that learning process at an advanced stage in your career. The heart of the matter is, according to Wiener: "The mathematician need not have the skill to conduct a physiological experiment, but he must have the skill to understand one, to criticize one and to suggest one. The physiologist need not be able to prove a certain mathematical theorem, but he must be able to grasp its physiological significance and to tell the mathematician for what he should look" (p.9). This is the minimum level of expertise we have to have in a truly interdisciplinary kind of interaction--one that will be creative and advance science.

Again, I want to emphasize the point that interdisciplinary work is hard to do and that it takes a long time. Becoming a world-class expert in one field is very hard. Allen Newell suggested that it takes at least 10 years to become an expert. I think that is roughly right. This training usually involves several years of graduate school. How long does it take to be expert enough to work across disciplines? It might take five years of effort, at a senior stage in your career, when you have many other demands keeping you busy. I might say, by the way, that it is unlikely that Wiener actually met his own criteria. Wiener was notorious for an almost complete lack of understanding of neuroscience. Nevertheless, he had the good sense to deal with people who did know what they were talking about, so a lot was accomplished.

What kind of time scales are involved here? It takes years. The relevant time scale for self-education is much longer than most government agencies and companies are willing to deal with. The only institutions that work on this time scale are tenure at a university and jail. In practice, most of the integration is done in the heads of the students. The different experts talk to the graduate student and, if you are lucky and if the graduate student is good enough, he integrates "the stuff." The time scales then become generational. That is roughly the time span we are talking about for truly interdisciplinary research.

Wiener's introduction also discusses what he believes was a major success, in fact, part of the major success in the field. It was John von Neumann's work at Princeton that had a major impact on the development of both cybernetics and computers. In 1945, von Neumann wrote a technical report that clearly proposed the "von Neumann computer architecture"--the one we are all using now. This was actually a product of the whole Moore School group, but von Neumann had his name on it. It referred extensively to the computational brain models of Warren McCulloch and Walter Pitts. It has been reprinted in a number of places, including Brian Randell's history of computers.

This is another quote from Wiener's introduction: "At this time the construction of computing machines had proved . . . essential for the war effort . . . and was progressing at several centers-- Harvard, Aberdeen Proving Ground and the University of Pennsylvania were already constructing machines ... We had an opportunity to communicate our ideas to our colleagues, in particular to Dr. Aiken of Harvard, Dr. von Neumann of the Institute for Advanced Study and Dr. Goldstine of the ENIAC and EDVAC machines at the University of Pennsylvania. Everywhere we met with a sympathetic hearing and the vocabulary of the engineers soon became contaminated with the terms of the neurophysiologist and psychologist" (pp.22-23).

". . . Dr. von Neumann and myself felt it desirable to hold a joint meeting of all those interested in what we now call cybernetics, and this meeting took place at Princeton in the late winter of 1943-1944. Engineers, physiologists and mathematicians were all represented . . . Dr. McCulloch and Dr. Lorente de No of the Rockefeller Institute represented the physiologists . . . Dr. Goldstine was one of a group of several computing machine designers who participated in the meeting, while Dr. von Neumann, Dr. Pitts, and myself were the mathematicians."

"The physiologists gave a joint presentation of cybernetic problems from their point of view; similarly, the computing machine designers themselves presented their methods and objectives. At the end of the meeting, it had become clear to all that there was a substantial common basis of ideas between the workers in the different fields, that people in each group could already use notions which had been better developed by the others, and that some attempt should be made to achieve a common vocabulary" (p.23).

Of course, we are not there yet, but the Princeton meeting is where it first surfaced.

Why Do We Want to Understand the Brain?

It is like the Hubble telescope, which has generated immense interest because it seems to be giving us information about where the physical universe came from. A true understanding of the brain would give us information about where the mind came from and answer questions like: Who are we? or What are we? That is one of the reasons for the great interest among many people in this particular cluster of areas. William James described the state of what we might now call physiological psychology or cognitive neuroscience back in the Briefer Psychology in 1892. He is my hero. Here's what he writes in one of the last paragraphs in the book: "Something definite happens when to a certain brain-state, a certain 'mind-state' corresponds. A genuine glimpse into what it is would be the scientific achievement before which all past achievements would pale" (p.401, slightly edited). That's really why people get interested in this area with such fervor, like Wiener and McCulloch.

Let me finish by drawing a few lessons from all this information. These lessons seem to come out of the 1945 experience, but they are true for us today as well.

A very complex system requires and thrives on an interdisciplinary approach. It is the only way you can do it. Any specialty is just too small. The brain sciences are perhaps the best and longest-developed examples of this process. There was a major initiative along these lines for Wiener and McCulloch and there has been a continuation, although maybe not quite so spectacular, since then. The brain scientists are really good at doing interdisciplinary work, not as well as perhaps they should be, but they have experience and they think it is a good idea.

It takes a very long time for such efforts to pay off. The time scale is much longer than most people are comfortable with. By people, I mean both individuals and agencies. We need to be aware of that.

The culture of academics and granting agencies powerfully rewards narrow expertise and has for a hundred years. Less appealing adjectives--superficial, unfocused, dilettante, shallow--are attached to "interdisciplinary research" in the minds of many, although they might not say so in public. It is institutional. I see no easy way of breaking out of it. The criticism is that you know a lot, but you do not know anything very well. This criticism is used frequently.

Money is like manure. The biggest flowers grow in unexpected places. Cautious use of funds fertilizes only the mature plants. Which approach do you want to take? Ideally, you should take both. Remember, in World War II they had lots and lots of money. They would take big risks, big chances. You rarely heard about the projects that failed miserably. You heard about the big successes.

The failure rate of interdisciplinary efforts is very high. Wiener himself had a number of disasters along with his major successes. Are granting agencies prepared for the criticism they will get for long, expensive projects that fail? Because they will--the ones worth doing have a high failure rate.

Q & A

Question: One of the things you are suggesting, which I think is true, is that one of the best approaches for getting to interdisciplinary work is through interdisciplinary training. That is, the training programs themselves potentially need to be more interdisciplinary. At the same time, that extends the duration of graduate education because, as you said, it takes longer to learn more than one thing.

Dr. Anderson: Yes, I think this is definitely true, and I think schools that have a habit of producing good people in these areas tend to be unstructured in what they require graduate students to do. They can take courses in many areas. We have done this deliberately in our department, for example. We allow people to go outside the department to take courses in mathematics, applied math, physiology. This is a good thing and we encourage it. It does mean, I suspect, that people are a little slower getting out than they should be. We found it successful, but not everybody would. I think that encouraging people to do things where the payoff is not obvious or immediate is a good thing.

Question: I think you have identified very well a lot of the issues about interdisciplinary work. Some people in the agencies are probably well prepared to take the risks that you have challenged them to do. One of the real challenges is the review system. Have you thought about that or do you have any ideas on it?

Dr. Anderson: Yes, I have thought about it. I became acutely aware of this when I was the chairman of a National Institute of Mental Health study section. The study sections tend to be full of extremely good, very smart people. However, simply because you are dealing with an averaging process, the groups tend to be much more conservative in their totality than the individual members actually are. This is a curious phenomenon; I think everybody is aware of it and we consciously try to make allowances for it. Particularly in the areas we are talking about, any new ideas you have, by their very nature, are controversial. Somebody in a panel is sure not to like them, or at least not like them very much.

There is a powerful conservative tendency at work in the study section review panel mechanisms. It is my impression that one of the reasons military agencies do so well in these areas is that they are really handing out money based on the feelings of one person. This one person can take big chances and, in fact, they expect a number of their programs to fail. If there are not enough that do fail, they are not being aggressive enough. That is how you make big breakthroughs. The military agencies are, of course, after the big breakthroughs. You have to have both in a funding situation, but there has to be a mechanism to allow the high-variance proposals to get funded to some degree anyway. That is the single strongest impediment with review panels, especially at groups like National Institutes of Health, where they take review panels as gospel and rarely go outside them. NSF is a little better that way, but still conservative.

Dr. Gerhard Fischer

University of Colorado

Lifelong Learning

I want to talk about the notion that learning is a lifelong activity. I will follow the charge the NSF people gave me, and structure what I say with this question in mind: What is the state of knowledge?

I would first like to claim that we follow what I call the "gift-wrapping approach" to technology. This notion is structured after the finding in many businesses that information technologies have really not produced the productivity gains people had expected. The main argument in the business reengineering community is that technology was added to existing work processes, but the processes themselves were not changed. With the gift-wrapping approach to technology, we have this world of education with Skinner-type models of instruction and, in the world of working, Taylor-type production lines. We wrap technology around this world instead of reconceptualizing the underlying processes. My first claim is that we have to move beyond Skinner and Taylor in thinking about what work is, what learning is and what collaborating is.

Beyond Skinner and Taylor
there is a "scientific" best way to learn and to work -> problems are ill-defined and wicked
separation of thinking, doing, and learning -> integration
assumption: task domains can be completely understood -> partial understanding
all relevant knowledge can be explicitly articulated -> knowledge is tacit
teacher/manager as oracle -> teacher/manager as facilitator/coach
operational environment: mass markets, simple products and processes, slow change, certainty -> customer orientation, complex products and processes, rapid and substantial change, uncertainty and conflicts

This table outlines Skinner/Taylor-like thinking versus the thinking in which we should engage. These are all issues that are highly debated in the environment of business reengineering. People think about how to get away from the traditional separation of thinking, doing and learning, and to find new ways to integrate them. In the field of teaching, there is a lot of effort being given to moving away from the teacher or manager as an oracle who stands up in front talking, to the teacher or manager as facilitator or coach. My claim is that we have to get to that point, and not just think about technology as a gift-wrapping element.

For my second point, I will use an idea from Neil Postman, who says you cannot do philosophy with smoke signals. In the good old days, people used smoke signals to alert neighboring castles that the enemy was coming. Postman's claim is that you cannot engage in philosophy this way, and my claim is that current computer systems, to a large extent, do not support the goals of enhancing creativity, imagination, contextualized learning, or learning-on-demand.

Popular Misconceptions in Learning and Education

I want to illustrate this with a number of misconceptions floating around. The first one is that computers, by themselves, will change education. I have worked for the last 25 years in this field, and I have witnessed repeated attempts by the computer-assisted instruction world to remedy all problems in education.

The second misconception, which came up in the context of the National Information Infrastructure, is that information is a scarce resource. The National Information Infrastructure was sold with the myth that every schoolchild in America will have access to a Nobel Prize winner. While this is true at the hardware level, I am sure that Herb Simon would not be delighted to find 1,000 or 10,000 e-mail messages in his mailbox every morning. So the scarce resource is not information. The scarce resource is human attention and ability to deal with huge amounts of information.

Another misconception is that just because information is available on the World Wide Web, this is automatically a better way of transmitting it. Wherever the information is, we still have to process it, and we still have to understand it. Throwing information at people may not necessarily solve all problems. This is a problem that we have encountered numerous times in our research over the last few years, where we have been trying to understand our situations in interdisciplinary collaboration or in teacher-to-student interactions.

The notion of ease-of-use is also very misleading. In the Knowledge Navigator tape we saw a little while ago, and in many other futuristic tapes, the future of learning is pictured as a person sitting in a chair and pushing a button here and a button there. The message conveyed is that if you want to learn something, this is how it will happen. But learning always requires engagement. It will require affection and personal relevance. Learning to play the piano well isn't easy, and yet a lot of people do it. When we think about technology, we should not only reflect on ease-of-use, but also on these other dimensions.

The last misconception I want to mention is that the most important objective of computational media is to reduce the cost of education. Sure, there is no doubt you want to do this. But I think we should also think about increasing the quality of the educational experience.

To return to the question of the state of knowledge, the last point I want to make is that, in some ways, we are not addressing the potential magnitude of the change. What the change ultimately will be is probably very difficult to determine. In my mind, we are facing a change similar to the introduction of reading and writing to societies 2,000 years ago. Computational media should be looked upon not as another little piece of technology, but as something with the potential to do for society what reading and writing did. If you read Socrates and Plato, you find several interesting discussions in which reading and writing were not regarded as being only positive.

Lifelong Learning--More Than Adult Education

One of the big challenges in the years and decades to come for fundamental interdisciplinary research is the notion of lifelong learning, where this means more than adult education. From our perspective, it covers and unifies everything from the intuitive learner at home, to scholastic learning in school and university, to the skilled domain worker in the workplace. When lifelong learning is defined that broadly, working and learning need to be much more tightly integrated. Some people say learning is a new form of labor, and for this audience here today, that is probably a true statement; i.e., for most of us, working means, to a large extent, learning.

How do we engage people in solving self-directed, authentic problems? There has been quite a bit of discussion about how we can build more constructionist environments to augment and complement classroom teaching. In trying to do so, the big question becomes: How can we create technological developments to support this environment? A focus of our research for a number of years has been the question of what it would mean to support more learning-on-demand. In my mind, this violates a number of very fundamental principles on which our whole educational system is built, but which are no longer maintainable. If you look at the currently practiced front-loading model of education, I would argue that coverage is impossible and obsolescence cannot be avoided. As the earlier discussion of interdisciplinary research made clear, the individual human mind is limited. We should think not just about individual learning, but about organizational and collaborative learning.

The last point I want to make is that lifelong learning is hard. I guess we experience this ourselves. Three years ago nobody knew what the World Wide Web was. Now, as you well know, many people believe that not knowing what the World Wide Web is leaves you out of the cyberspace culture. A different story is to learn to use the World Wide Web for truly meaningful tasks. An even bigger challenge is not just to use the World Wide Web as a consumer, but to be an active contributor and designer, using it as an everyday citizen as a medium to express our thoughts and our opinions. A new literacy is needed to do so, and this literacy will not come for free but will require serious learning.

Another example shows that lifelong learning requires much more than having new technologies: We (as a university research center) have been collaborating with NYNEX for several years. NYNEX employed approximately 3,000 COBOL programmers in 1996. The problem of building large, complex software systems with tools of the 1990s rather than with archaic tools of the 1960s is not solved by going to them and saying: Well, today we have Smalltalk and C++. There are much larger problems to think about with regard to lifelong learning.

We have a project going on now supported by NSF with schoolteachers that focuses on understanding that teachers also must be lifelong learners and what that entails. This is another big problem. In our research, we have developed a set of hypotheses for computational environments and used them to ground our system-building efforts in some theoretical framework that supports this notion of lifelong learning.

What would it take to build a successful interdisciplinary investigation? We have been involved in a number of collaborations, but I think the problem that C. P. Snow identified in his two cultures--or, I would say today, many cultures--is one of the critical problems; people have illustrated this in the previous talks.

So, what is the response of the relevant science and engineering communities? My first claim is that the future is not out there waiting to be discovered; it has to be invented and designed. The interesting question is: Who does this? If you believe, for example, that Hollywood does this, then you have one way of seeing the future. What will we do with the National Information Infrastructure? Well, having 500 TV channels may be one consequence of it.

And what about the current state of affairs at universities? Having spoken about other organizations, I want to point out that I'm a faculty member of a university and that I believe there are quite a few things wrong with current universities. They are lecture-dominated and curriculum-dominated. We have taught some classes where we wanted to incorporate more learning-on-demand elements. But where is there room for authentic, self-directed learning activities if, at the beginning of a three-month course, you provide students with a micro-managed curriculum telling them exactly what will be dealt with and when? I think that in our university environment we overemphasize the notion that students solve given problems. In the real world, given problems do not exist ready-made; we always intermix problem framing with problem solving. Another artificial aspect of most university or school problems is that there are right or wrong answers. In the real world, this is rarely the case. We have to rethink, from a lifelong learning perspective, what a university education will mean in the future.

What is the long-term societal impact of this research? Here is a quote from Einstein: "Wisdom is not the product of schooling, but the lifelong attempt to acquire it." We have to think about what I call the redefining of the roles of high-tech scribes. In the Middle Ages, the average person had to go to a scribe to express herself or himself. In today's world, we find a similar situation with respect to computational media: Only high-tech scribes can master them. And the high-tech scribes have an interest in sustaining this role. This will be one of the big challenges: To avoid creating an elite group of high-tech scribes.

Another thing I consider fundamental in what we are facing is a change of mindsets. I mentioned earlier the notion of teachers as lifelong learners, which means teachers should not look upon themselves as truthtellers or oracles, but as coaches, facilitators, mentors and learners (see table on page 6). It would be an incredible educational experience for a student in high school to see his or her teacher struggling with a problem. But, today, that is frightening to many teachers. We want to create a society that is not limited to consumers of modern technology. My complaint about the World Wide Web would be that currently most people dealing with it are consumers. I briefly mentioned the "500 TV channel" future. Illich, in his outspoken criticism, calls schools and universities reproductive organs of a consumer society.4 We can devise many technical challenges from this.

What will be the basic skills in the future? There's a lot of debate in all the sciences about "basic skills." If most relevant knowledge for the working world must be learned on demand, which is a claim I would defend, what is the role of basic education? One could argue it is to empower people to learn on demand.

By focusing on a lifelong learning perspective, we have had the opportunity to think a lot about school-to-work transition. This has been a concern for many organizations. I strongly believe that the world of working and living relies on--you can agree with me or not--collaboration, creativity, definition of and framing of problems, dealing with uncertainty, change, distributed cognition (distributed intelligence), and symmetry of ignorance. If this is the case then the world of schools and universities needs to prepare students for a more meaningful life in the world. Planning to employ learning systems and intelligent systems in our educational institutions, we must judge them from these perspectives.


Comment: I have a couple of broad comments. One of the hardest things for us to do is to enrich our notions of what we mean by interdisciplinary. As long as all we have is the semantic tag or label that we throw on this, we're not going to deepen our understanding of the technology that we need and the kinds of processes required to really make it work. There are two points not brought out as strongly as they could be. One goes back to theoretical foundations, which is also a very long-term activity, but one that often goes in parallel with interdisciplinary work. So, part of what we need is much better theory, much better formalisms; for example, formal foundations that allow people to make the assertions and the boundaries of their discipline more explicit, more processable, more analytic, so that they can, in fact, be combined with other disciplines.

Speaking as a neuroscientist and as someone who has worked on space systems--two highly interdisciplinary areas--part of what happens is that in the building of a real system or even in scientific endeavors in brain science, we often develop a sort of data-dictionary level of interaction. People learn to talk, and they gradually intuit the problems and the foundations of another field. But what we really need to do is create some new foundations for scientists, including, for example, the ability to integrate models at a formal, explicit level.

To put it a slightly different way, this is something that Nora Sabelli, John Cherniavsky and several others of us have been dealing with between our NSF and DARPA educational projects. Right now, people think that the big problem with technology demonstrations is turning them into technology insertions. That certainly is one problem, and we're dealing with it. So you package, let's say, an educational tool with real curriculum or real pedagogy, and you make that transfer.

But there's something else that needs to go on that people are not addressing. It is not the technology transfer; rather, it is what we call climbing up to the systemic level. How do I take superb technology demonstrations and insertions and actually climb up to a systemic level? What is the information? What is the instrumentation that I have to do at the technology insertion level or transfer level that allows an organization like a school or a hospital or a manufacturer to make system-level tradeoffs about how to use the technology?

It's not just what we're referring to when we talk about scaling up, which is commonly known as model replication; this is much more difficult, because it involves real tradeoffs between how you are doing things now and how you envision doing them in the future. There are some qualities that haven't even been represented and made explicit enough to reason about.

Comment: I have a comment in reaction, in part, to hearing some of the wonderfully wise statements that Norbert Wiener made decades ago. Disciplines haven't always been around. They're evolving entities in themselves and they come into existence for various reasons. In my own career, I've seen interdisciplinary work lead to new subdisciplines and disciplines. There's an interesting distinction between, on the one hand, nurturing the formation of an active new discipline--which often emerges from interdisciplinary work, with its own peer review process, its own potentially revised or modified jargon and terms, and its own conferences--and, on the other hand, nurturing the flash of new scholarship that occurs when somebody grounded in statistics, for example, starts thinking about graphical models and computational constraints. There's an interesting interplay between what we mean by interdisciplinary work in terms of the creation and nurturing of new disciplines and the evolution of old disciplines, versus nurturing individuals and what they're doing.

Dr. Joseph Bordogna, National Science Foundation: I will stand on dangerous ground here and try to summarize what everybody has been saying. It has to do with changing the academy from reverence for reductionism to innovation through integration. It is a real shift, and I am not sure we can insert things with the present leadership. We have to spend time educating the new students. I hear a lot of things; for example, in engineering, one might call it project-based learning: You do things, or hands-on, teamwork, invent the future, and so on. What we're attempting to do in engineering education, with all of the engineering schools, is to really implement these things so that graduates have independence through not only knowing something in depth, but also possessing a functional literacy across a variety of disciplines. As a result, they can see disparate pieces and integrate them to get something out the door.

So, I feel comfortable with what you are saying; it validates the sort of ad hoc way in which we have made a big investment at NSF in the last 15 years, in changing the undergraduate engineering paradigm. In essence, we still graduate people who have been taught that to know your special thing deeply is the reason you're revered--not necessarily the reason you are hired and compensated, but the reason you are revered. That is the holy grail. We have to change this so that, at the baccalaureate level to begin with, students graduate with an understanding that it's much more complicated than that. The word interdisciplinary does not have the right cachet. You have all talked about that, especially the first speaker. And because the cachet lies in an area that militates against making these changes, the issue can be very emotional. We have to attack it at its root.

When we did the Collaborative Learning for Technologies work, we had an argument about a phrase; I think this relates to what you have mentioned. The phrase was: Applying technology to learning. That phrase is wrong. The right phrase is: Integrating technology with learning, to create a new paradigm. That is what I think everybody is saying here. I like what I am hearing because it makes me at least feel more confident that we are on the right road in our initial investments to get at this.

Dr. Fischer: We talked a lot about interdisciplinary research, and it seems to me the disciplines are something which we have created; they are not God-given. What comes first? What comes first is that our society faces real problems. Disciplines, after being created at some point in time, often live a long life. But the world moves on; our goals and needs change. And inevitably, a mismatch between the problems we are facing and the existing disciplines will emerge. These problems do not fall neatly into one discipline or another. There is nothing a priori that says we have to accept a given set of disciplines. Interdisciplinarity is only a consequence of accepting the particular discipline structures we have at the moment. I often say that at universities we recommend change to everyone else, but when it comes down to changing ourselves--giving up the established set of disciplines or allowing people to get tenure or a Ph.D. in an interdisciplinary area--we are much more resistant.

Comment: I think the last comment, in particular, about the universities having trouble restructuring is particularly important. I wonder if members of the panel have any comments or advice on what NSF might be able to do, with regard to its structures, to position itself to respond to the creativity of the community, and not just to have you fill holes in boxes that we seem to create. Somebody must have thought of that--what they would do if they were czar.

Comment: My comment has to do with the question of having at least part of the funding devoted to very high-risk proposals, as a way of getting away from the conservatism of both established programs and established review groups. These established groups do a fine job in a lot of areas, so you wouldn't want to get rid of them. But certainly a fraction of their funding could go into a program where you simply say to somebody: Okay, here's a certain amount of money, go out and find people you think would be interesting to support. And use your own judgment, like the military agencies tend to do.

Dr. Louis Gomez

Northwestern University

Interdisciplinary Transformations in Teaching and Learning

I am going to talk about my core interest, which is the design and construction of systems, both social and technological, that support teaching and learning in places where people learn. My primary interest is in classrooms, but I think this problem extends well beyond that. I'm interested in how interdisciplinary research can transform what we know about helping people teach and learn in these highly social situations.

I'm sure many of you have heard the following story, but I think it bears repeating. It is a story about what would happen if people with different kinds of expertise came forward from the past to the present time--for example, a doctor and a teacher. A doctor coming to today's operating theater would look around and decide that he or she didn't know how to work in this new context. If we're talking about, say, 100 years, the doctor may not even recognize the place of work as at all related to where he or she worked in the past. But if a teacher were to come forward to this time and look around, he or she would probably take a step back, ponder for a moment, and say, "Well, I know how to work here."

The goal of our work is to figure out how to be really transformative in the context of teaching and learning. With respect to that goal, here is how I have approached today: If someone were to deliver to me a plate with the answers to help me progress towards my goal, what would I like to see on that plate? And further, what sort of interdisciplinary research would give me the answers? I've been fortunate enough to have a fair amount of interdisciplinary work experience in my career. When I was in graduate school at the University of California at Berkeley, I was one of the first cohort of people doing a thing called cognitive psychology, soon to be called cognitive science, which represented an interdisciplinary focus in education, anthropology, computer science and other fields. Later, I was fortunate enough to work at both Bell Laboratories and Bellcore, where the coin of the realm was interdisciplinary groups trying to solve a problem. Today at Northwestern, we have a group of people with different kinds of expertise working on the problem of teaching and learning.

I'll start with the general architecture for my remarks. I'd like to begin by sharing some successes in the world of interdisciplinary research and how interdisciplinary research has had a transformative impact on teaching and learning. I'd then like to point out what I perceive as the core problem to be solved. I'll close with some comments on what makes interdisciplinary teams work and what the roadblocks are. I think it's important for you to get a clear sense of what the problem is; to help me define and clarify it, I've brought several videotaped segments.

Approaches and Interactions

We have had a certain amount of success changing what people do in classrooms as a function of a kind of interdisciplinary research. Research on case analysis and expert rivaling has led to intelligent tutoring system applications for the acquisition of basic skills. These are unique and revolutionary, and they demonstrably improve the way people learn things like algebra. A kind of thinking about case-based reasoning and analysis has led to teaching applications based on stories and case-based teaching approaches. These approaches seem to say: What I really want you to do, as a learner, is to possess a rich repertoire of things that people do in the world, and then to draw on that repertoire to learn things. This is very different from simply reporting the facts of the case.

In all of these cases, a group of people--some doing cognitive psychology, some computer science, and, in some cases, anthropology and linguistics--have combined to bring forth these insights and this kind of transformation. This includes people who have conducted research in understanding prior conceptions and have tried to build the mental models that learners have. These models have led to rethinking the teaching technique wherein learners learn through articulation of what they believe to be true of a state before they are given new information about that state. This involves articulation of naive reasoning or making conceptions apparent before teaching.

Another example has to do with people who have done work in what are called "communities of practice." They are trying to understand how organizations of people combine through practices and language to shape what new members of a community learn. They are trying to chart that very carefully, looking at how expertise is distributed across people and things and how that expertise is often organized to help new people join a community. This work has led to new kinds of thinking about instruction in the form of "cognitive apprenticeship," wherein you try to orchestrate, in school situations and other learning situations, a kind of apprenticeship learning that moves teachers away from the standard role of simple, straightforward deliverers of information.

Models of Interdisciplinary Research

My first experience with a form of interdisciplinary research was a kind of cross-pollination model of what we do. You have multiple disciplines that are somehow in either physical or electronic proximity. They continue to do what they traditionally have done, but they combine their efforts because they have a shared set of cognate elements of what they commonly believe to be a new core problem area. They combine understanding in performing the research. My earliest experience with cognitive science and cognitive psychology was that model of interdisciplinary work and it was successful.

Today at Northwestern, we practice a different kind of model, a more problem-focused model that I learned in industrial research. The goal is to gain access to some shining light and focus it on the problem. There are certain kinds of expertise, as opposed to core disciplines, that can contribute to the process. Expertise is assembled to focus on the problem; because the problem is almost pre-identified, it does not necessarily emerge from the context of the disciplines. I found, in the course of my own experience, that this problem-focused way of marshaling interdisciplinary work is particularly successful.

Learning in a Complex Context

I want to take that point as the springboard to talking about the issue on which I'd like to see us make progress. I call this issue "understanding and supporting learning in a complex context." The classroom is an incredibly complicated place. When you try to develop a system to support teaching and learning, the things that go on in that place are, in fact, the data that you need to analyze. As a discipline, we are woefully underprepared to treat that activity as data and to analyze it. This problem has several components. One is taking what goes on in contexts such as classrooms and treating it in a rigorous and in-depth way. This means analyzing the activity so that it can then inform design progress.

A second factor is that learning environments, especially schools, are places where a lot goes on, not just the delivery of the facts. Yet, we don't really know how to construct situations that form a tighter coupling among the cognitive, social and psychosocial roles of teachers as they go through the teaching and learning process. Many people--myself included--have vaguely formed intuitions, and it would help tremendously if we understood how learning goes on in a complex context, and if we knew more about how culture influences the use and utility of technologies and other things that support learning.

Analyzing Activity

The key goal here is to have more complete theoretical and methodological techniques for more nuanced analyses of activity content. Today, when we analyze what goes on in classrooms for the purpose of designing better systems, it is typical to do one of two things. Either we collect examples of behavior and tell stories about very complex things, or we take highly stereotypical kinds of behavior, count them and reduce the complexity. I don't think we have well worked-out techniques that allow us to work in the vast middle where all the content really is. I believe this both at the level of how to think, theoretically, about what's going on in those situations and how to have, methodologically, the techniques and technology to do it.

To give an example, I want to show you one quick activity from a couple of classrooms that we have worked with over the years.

Video plays.

This is a standard classroom interaction between teacher and student--an example of a teacher consulting individually with students in a classroom setting. The key problem here is: When you think about situations like this and decide that you want to change them, or insert new technologies, you really want something that supports an effective space like this. In this example, this woman is one of the best science teachers in her high school.

So, how can we use interdisciplinary expertise to help us think about, first, recording the interaction as data; second, taking a technological or organizational intervention; and third, analyzing its impact. This requires several efforts: Cultural analysis, cognitive and social modeling, and linguistic analysis. You need analytic notations that help you parse the interaction and put it together in new and different ways. You also need better technology for video display and manipulation to make that happen more smoothly. If we had those things, we'd have a deeper understanding of highly situated behavior, which I believe will lead to better learning environments.

Not Just Facts and Practice

Those of us who work in teaching and learning often behave as though our sole job is to support the teaching of the facts or to support the teaching of practice. But when you talk to most teachers, especially pre-college teachers, about what they think they're doing, they believe that they're trying to support the development of a whole person; this includes intellectual development and social development. Teachers tell us quite frankly that often, especially in highly challenged teaching situations, the subject--chemistry, for example--is just an excuse to have a meaningful interaction with the student and to further his or her development. What we fail to do, perhaps because we don't understand the situation well enough, is to develop technologies and techniques that make it easier for teachers and others to merge support for social development with support for intellectual development. An example of this might be a teacher, working under the guise of teaching earth science, trying to help a student understand what it is really like to do research in a college situation. The ostensible reason for their being together was earth and physical science.

There are any number of systems and approaches being produced today that try to take advantage of microcultural differences in the United States. Some people believe it possible to draw on aspects of cultural experience to facilitate some form of teaching and learning. When we try to do this, however, it's a shot in the dark. We don't really understand what that nexus is about. It would be incredibly important to have a kind of cultural analysis, a linguistic analysis, even collaboration from the worlds of entertainment and design, to help us think about how to design systems that make meaningful contact with cultural experience so that people learn better.

Another example involves two little girls using a prototype system developed at Northwestern. The goal was to help them learn to read by learning hand-clapping rhymes, which a lot of African-American children learn from very young ages. The girls are engaged in hand-clapping as a play activity, and when their hand-clapping fails to match what the computer says back to them, they get deeply engaged in trying to figure out how to change words to make it read properly. Their engagement gets to the point where they look at the little character on the screen, ask if they can meet her, and try to figure out how to be more like her.

The objective of the designer of this system was to take an aspect of these little girls' cultural experience, hand-clapping rhymes, put it in a reading instruction program, and try to engage these kids more deeply. Did it work? We really don't know. Did she pick the right thing? I don't know that either. It was a shot in the dark, because there was no base of information that could arguably come from something called learning and intelligent systems that would inform the design of something like that. The key impact would be a better understanding of when and if design prescriptions ... [inaudible] ...

What Makes Interdisciplinary Research Work?

Over the years of trying to do some form of interdisciplinary research, lots of things turn out to be important, like shared expertise and the mutual opportunity of having some people learn what others already know. When I try to reduce this to what really makes it happen, it comes down to three things. First, it takes trust. In interdisciplinary partnerships that don't work, there always seems to be a sense of senior partnership. When you have a sense of senior partners, a sense of first among equals, other people don't feel as positive about participating. In some cases, computer scientists feel like they should be in the driver's seat and behavioral scientists should support them.

Second, it takes persistence. Jim Anderson talked about this. We often have a clock for getting something done and it is usually driven by our experience with one group of people of a very narrow form of expertise working together. We don't give interdisciplinary efforts enough time to happen. Finally, it takes faith. You take a problem like the last one I mentioned. Is there a real nexus of culture and learning? Well, I think so. Teachers tell me it's possible. Designers feel they have some evidence that they have been able to engage users because of it. But it's really a kind of faith, and you need to have that kind of faith to embark on a high-risk, potentially high-gain research effort that would elucidate it.

Dr. Paula A. Tallal

Rutgers University and Scientific Learning Corporation

Interdisciplinary Research Projects in Neuroscience

I'm here today to talk about my experience wearing many of the different hats involved in this initiative. All of the different speakers today have resonated with me--Dr. Anderson in particular, when he spoke about the importance of interdisciplinary training programs.

I was told early in my career that I would have to be very proactive about looking for a job because I would never see one advertised for someone like me, and I would never encounter a department that automatically understood why they needed someone like me. As a result, I ended up having the opportunity at Rutgers to help create from scratch a new department, called the Center for Molecular and Behavioral Neuroscience. Our primary vision was to develop across the domain of neuroscience an atmosphere of trust in which each important aspect of neuroscience, from molecular to behavioral, would be given equal weighting and equal prominence. There would be no sense that the basic "touching-the-brain" neuroscience was any more or less of a basic science than the cognitive/behavioral aspect.

We focused on the understanding that there is an equally sophisticated basic science in every aspect of brain research, from molecular through systems, through behavioral, through cognitive. No one aspect is more or less important than the others. It is important to train the next generation of students to have both a working familiarity across these domains and an area of deep expertise, so they can bridge the gaps we find difficult to bridge today. We want to develop, within individuals as well as across individuals, information that will help us to really understand molecular as well as behavioral aspects of brain function. So a major focus of our program is a multidisciplinary, interdisciplinary graduate program called Behavior and Neurosciences.

What we've done at Rutgers is unlike what I see being done in most other current neuroscience programs around the country, in that we give equal weight to the molecular and cognitive aspects of neuroscience and, at the same time, give opportunities for in-depth expertise in other areas as well. Our faculty includes those looking at potassium channels, molecular genetics, animal research in vision, perception and attention, as well as human research relating to normal and abnormal neurological and mental processes. My own research focuses on the neurobiological basis of speech perception and its implications for language-based learning disabilities. This category of disability affects 10-15 percent of our population and is of growing concern.

Barriers to Multidisciplinary Research

I would like to give an actual example of a multidisciplinary research project. But before I get to that, I want to talk about some of the barriers to doing this kind of work. Clearly, there are barriers to integrating effectively across many disciplines--we've heard about this already from many of the previous speakers. There are a couple of barriers that haven't been covered; one of them is particularly interesting and, in fact, was in this morning's USA Today. Right on the front page, it says, "Women missing good jobs in key growth industry." The article states that inside the corporate headquarters of most computer, engineering, or any modern developing industries, one could "fire a cannon through the cubicles and hit only men."

Someone looking around the main table in the front of this room could see the same thing. I raise this issue because it means we're missing a lot of talent in these industries. As we sit around thinking about what the goals are for this initiative, if we don't build in mechanisms for training, beginning at a very early age, that encompass our whole population, we're missing a great opportunity. We currently are in a crisis at a couple of different levels. One, clearly pointed out in today's paper, is that women are not being educated at the same level to reap the benefits and participate in this growing initiative of interdisciplinary research, technology and practical application. This can still be clearly seen from elementary school to university, through industry and in our society as a whole.

Another socioeconomic barrier is the lack of access to the hardware and software needed to grow up feeling comfortable interacting with computers. This barrier will have far-reaching impact as jobs and access to information increasingly demand technology. Lack of equal access to computers and the Internet in schools has major effects on the extent to which we can use whatever we develop in these kinds of initiatives, particularly in education. I'll be speaking in a moment from a more practical point of view, based on the initiative in which Mike Merzenich, Bill Jenkins, Steve Miller and I are currently engaged. We are translating our basic research into practical application, through the use of CD-ROMs and the Internet, to help improve the learning potential of language-learning impaired children. This initiative is going to meet with a major barrier to market--the lack of equal access to computers and the Internet in a lot of schools and clinics around the United States.

Language and Learning--An Interdisciplinary Example

I want to give you a flavor of another multidisciplinary research initiative in which I have been involved for some time. Many years ago, this project would have been considered extremely unlikely to succeed. We talk about language and learning disabilities as problems that seem almost insoluble. As a well-trained experimental psychologist at Cambridge University in England, I was taught that one reduces large problems to smaller, investigable questions.

You must choose a question you can investigate. And although all of us would like very much to understand language at the highest linguistic levels, we must first understand how the smaller components of sound that make up speech are processed in the brain. Children with specific developmental language disabilities may offer us a unique window into what really is going on as the brain struggles to process speech. These are children who are developing normally in every respect, except that they are failing to develop language at the expected age, and they usually go on to develop reading problems as well. What I've found over many, many experiments during almost 20 years of work is that language-impaired children are just like normally developing children in all of their information processing, hearing abilities and visual abilities, except for one very profound difference: The ability to integrate two or more brief bits of information entering the nervous system simultaneously or in rapid succession.

We asked children to listen to two events and tell us whether they're the same or different--for example, a low tone and a high tone. We found that normally developing children need only tens of milliseconds between two 75-millisecond tones. Language-impaired children can understand the task and learn to perform perfectly well, but they need an order of magnitude more time--200 to 300 milliseconds--between events. This profound delay in brain processing has now been replicated over and over in a variety of sensory processing studies across modalities with language-impaired children. In a nutshell, the brains of these children process more slowly, whether you look at speech perception or speech production, visual or motor performance. How does that translate into something that might interfere specifically with language? Why is it that these children are so normal in every other respect, and yet so very abnormal in language?
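The structure of such a trial is simple enough to sketch in code. The snippet below (Python with NumPy; the 16 kHz sample rate and the particular tone frequencies are illustrative assumptions, not values from the talk) assembles one same/different trial: two 75-millisecond tones separated by a silent inter-stimulus interval that the experimenter can vary.

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz; an assumed rate, not specified in the talk

def tone(freq_hz, dur_ms, rate=SAMPLE_RATE):
    """Generate a pure sine tone of the given frequency and duration."""
    t = np.arange(int(rate * dur_ms / 1000)) / rate
    return np.sin(2 * np.pi * freq_hz * t)

def two_tone_trial(freqs_hz, isi_ms, tone_ms=75, rate=SAMPLE_RATE):
    """Build one same/different trial: two tones separated by a
    silent inter-stimulus interval (ISI)."""
    gap = np.zeros(int(rate * isi_ms / 1000))
    return np.concatenate([tone(freqs_hz[0], tone_ms, rate),
                           gap,
                           tone(freqs_hz[1], tone_ms, rate)])

# A "different" trial (low tone, then high tone) at a 300 ms ISI --
# roughly the separation the language-impaired children needed,
# versus the tens of milliseconds typical children needed.
trial = two_tone_trial((100, 300), isi_ms=300)
```

Shortening `isi_ms` toward tens of milliseconds reproduces the condition on which the two groups of children diverged.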

To answer these questions, one needs to understand what speech is like and what it is the brain might have to do to make important speech discriminations. Here is just one example of the kind of speech contrast that every baby lying in a crib has to listen to and somehow sort out. This is one speech sound: "Ba." This is a different speech sound: "Da." If you look at the acoustic pattern produced when these speech sounds are said, you see a spectrogram: a plot of acoustic frequency over time. What we see here is that although the frequencies are fairly complex, they differ in only the initial 40-millisecond onset, during which a very rapid frequency change over time occurs that signals the difference between "ba" and "da."

What the brain needs to do is extract this rapidly changing frequency information from other information that immediately follows it. The interesting fact that I discovered early on, and have continued to work on since then, is that language-impaired children can't make these kinds of rapid discriminations at all. Basically, their brains cannot process anything in as little as 40 milliseconds when this is followed rapidly in succession by further information. They need hundreds of milliseconds. Now this is where technology comes in. We used computers to artificially stretch out the acoustic portions of the signal that our research had indicated were most impaired in these children. We developed a computer algorithm to alter the acoustics of ongoing speech, to stretch and amplitude-enhance the salient temporal components within the speech signal.
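The two operations described, stretching and amplitude enhancement, can be sketched very crudely. The snippet below (Python with NumPy; all parameter values are illustrative assumptions, and the actual research algorithm was far more sophisticated) stretches a signal by naive interpolation, which also lowers its pitch, whereas the real work used pitch-preserving stretching, and it boosts frames whose energy rises sharply, as a rough stand-in for enhancing salient fast transitions like the "ba"/"da" onset.

```python
import numpy as np

def stretch(signal, factor):
    """Naively time-stretch a signal by linear interpolation.
    (This also lowers pitch; pitch-preserving stretching needs a
    phase vocoder or similar, which this sketch omits.)"""
    n_out = int(len(signal) * factor)
    x_out = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(x_out, np.arange(len(signal)), signal)

def enhance_transients(signal, frame=160, gain=2.0):
    """Amplify frames whose energy jumps sharply relative to the
    previous frame -- a crude stand-in for boosting rapid onsets."""
    out = signal.copy()
    prev_energy = None
    for start in range(0, len(signal) - frame, frame):
        seg = signal[start:start + frame]
        energy = float(np.mean(seg ** 2))
        if prev_energy is not None and energy > 2 * prev_energy:
            out[start:start + frame] *= gain  # sudden onset: boost it
        prev_energy = energy
    return out

# Slow an (arbitrary) signal by 50 percent, then boost its onsets.
processed = enhance_transients(stretch(np.random.randn(8000), 1.5))
```

The point of the sketch is only the shape of the pipeline: slow the signal down so its rapid components last longer, and make those components louder so they stand out.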

Reorganizing the Brain--Another Example

About four years ago, I began working with colleagues at the University of California, San Francisco. They were doing single-cell electrophysiology in monkeys and mapping the somatosensory cortex of the monkey's hand. What Mike Merzenich at UCSF and his colleagues were fascinated by was the ability to show that if you map the actual surface of a monkey's hand in terms of individual electrode placements--on a physiological basis, with single-cell recording--you get an actual representation, a physical map inside the somatosensory cortex. This neatly maps out the thumb and each finger. It's quite remarkable and shows that we all have little homunculi representing our bodies in the neurophysiology of the brain.

What Merzenich and his colleagues have found is that these representations are not static. Through specific training or stimulation occurring on the fingertips of two fingers, for example, over a short period of time the representation in the brain at the single-cell level actually remaps itself and is influenced by learning. This is quite remarkable and very important for this conference. When we're sitting in a classroom and the teachers are teaching about some subject, what we're doing is literally remapping and reorganizing the students' brains. All learning has an effect on physiology because brains are indeed plastic, not static. Another interesting and important point here is that Merzenich's studies were done in adult monkeys. The effect does not depend on getting in at some critical early period. This means that sensory representation--the very basic level of brain organization--is modifiable by training.

So how does that relate in any way to what's going on with language-learning impaired children? Merzenich and I decided to ask the following question: If one can influence and remap through training and learning, can we remap the brain out of a defect? If language-impaired children are not processing information at a rate that allows them to extract sensory information out of the environment fast enough for speech and language processing, can we speed their neural processing rate up by training? We've developed a series of computer games based on remapping, basic learning algorithms and experience. We mapped that on top of our findings that extending and altering the acoustics within speech might help these children. We put it all together in a fairly complex series of programs we call Fast ForWord that allow a child to "play," for hours at a time, at remapping.

The games are designed to both enhance the ability to process the speech that the child does hear, because it's been acoustically altered to fit their brain processing rate, and, at the same time, to alter the brain so it can process more quickly, thus dealing better with the real-world speech environment. The games are also adaptive in that the computer is constantly changing the input based on the child's last two or three responses. The results of our first couple of studies were published in the January 1996 issue of Science, with two back-to-back articles.
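The adaptive element, changing the input based on the child's last few responses, resembles a classic psychophysical staircase. The sketch below is a generic down/up rule, an illustrative stand-in rather than the actual Fast ForWord schedule (whose parameters are not given in the talk): the inter-stimulus interval shrinks whenever the last two responses were correct and grows after an error, so the task stays near the edge of the child's current processing rate.

```python
def adapt_isi(isi_ms, recent, step_ms=20, floor_ms=10):
    """One staircase update: shorten the inter-stimulus interval
    whenever the last two responses were correct; lengthen it after
    an error. A sketch, not the actual Fast ForWord rule."""
    if len(recent) >= 2 and recent[-1] and recent[-2]:
        return max(floor_ms, isi_ms - step_ms)  # harder: less time
    if recent and not recent[-1]:
        return isi_ms + step_ms                 # easier: more time
    return isi_ms

# Drive the staircase with a fixed response history: the interval
# tightens with success and backs off after the one error.
isi = 300
history = []
for correct in [True, True, True, True, False, True, True]:
    history.append(correct)
    isi = adapt_isi(isi, history)
```

In a real game the response history would come from the child's answers, and the adapted interval would feed back into the stimulus generation for the next trial.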

We reported changes in processing rate, speech and language functioning in a group of language-impaired children who spent a month playing these computer games in which the acoustics were altered and the focus was adaptive learning and training. One group of children used programs with modified acoustic speech and temporally adaptive training exercises, another group used the exact same program with natural speech and visual, non-adaptive training. Everything else was the same for both groups; it was a very well-controlled study. We saw dramatic improvements not only in the temporal thresholds (very dramatic news in itself, that something as basic as your physiological threshold can be changed through adaptive learning) but also in speech discrimination, language processing and even grammatical comprehension. These are the hallmarks of these children's problems. In a month of training with these new computer-generated, acoustically modified training exercises, we saw the equivalent of approximately two years of actual language growth based on standard clinical tests.

Closing Comments

I will close with comments on the final hat that I wear. This year I'm on sabbatical from Rutgers University, working in a new start-up company that my colleagues Merzenich, Miller and Jenkins and I have founded, called Scientific Learning Corporation, in which we're now transferring this technology into programs that will benefit language-learning impaired children in classrooms and clinics. We encourage all of you to track our progress on the Web.

One last comment: When I helped found our Neuroscience Center at Rutgers, we had 16 faculty positions. We've now recruited 15; eight are women, and eight are cognitively oriented, in a field, neuroscience, that has traditionally been male-dominated and focused on molecular and systems approaches. I also spent 20 years doing basic science laboratory research which turned out to have practical applications. Through the university-based technology transfer system, my scientific colleagues, joined by business partners, founded a new company, Scientific Learning Corp., aimed at linking basic neuroscience research, industry, business, schools and clinics. What I have learned from these experiences is the benefit of stretching traditional boundaries to forge new links and greater interaction between previously segmented groupings within our community. This is often strongly resisted, but the benefits for society are surely worth it.

Dr. Victor Zue

Massachusetts Institute of Technology

Interdisciplinary Research and Human Language Technology

This morning each of us is speaking from a slightly different angle on interdisciplinary research. I'm going to provide a view from the standpoint of human language--both human language with which we communicate and human language technology. I'm focusing on this for two reasons. First, many people readily accept human language as a true sign of intelligence; we can communicate by using language. Second, it is something I know about from many perspectives. I was trained as an electrical engineer and I am now a computer scientist. I spent the first 10 years of my professional life trying to understand human-to-human communication, and the last 10 years trying to build machines that can achieve something like it.

I want to provide two different perspectives within the context of this topic. On the one hand, the capabilities of machines, bandwidth, software and information are growing exponentially; on the other, human capabilities evolve on an evolutionary scale. It's very important to deal with this widening gap between where we are and what we are confronted with. We need to help people acquire, manage and process information.

The Roles of Human Language Technology

Human language technology (HLT) can play two roles within the context of learning and intelligent systems. First, HLT facilitates learning by providing an interface. We all know speech is very natural: We learn how to speak before we know how to write and read. Speech is efficient: I can talk 10 times faster than I can write, four times faster than I can type. Speech is very flexible: I don't have to touch anything or see anything. Just as important, however, is the fact that information is often a remnant of human communication. The kind of information we seek is linguistic in nature and we have to provide easy access to it. I will give you an example of this in a minute.

The second point is that the development of HLT can benefit from better understanding of human learning and intelligence. In a few minutes, I will show you some examples of where we are as a community; you will probably agree with me that we are far from being able to do the kinds of things that the science fiction writers wish we would be able to do by the year 2000.

Let me concentrate on the first of my two perspectives and think within the context of providing information for the user. Imagine a person sitting here, for example, interested in asking the question: Can I have the telephone number of the nearest hospital? We have some sort of source information in some sort of linguistic archive that has been painstakingly processed by human beings and turned into some sort of special database. It could be the Yellow Pages or White Pages or some other collected set of information. Today, you typically access this information using a keyboard--usually with a pair of hands that belongs to someone else. In the future, what we want to be able to do is just pick up the phone, ask the question and have an intelligent and graceful interface that allows us to access the information by speaking fluently and conversationally. What is just as important is the ability to process the information--to annotate videotape or audiotape, for example, and index them so that we can ask questions and get directly to the information. This would be important progress as compared to asking for the information in a prescribed way.

There are many different applications for this in learning. In the areas of collaborative learning and distance learning, for example, these kinds of capabilities are going to be very important. Now, you may be thinking that because I have been working in speech for a long time, it is natural for me to see speech as a solution to all things. You know the saying, "If you have a hammer, everything looks like nails." Nevertheless, it is very important to realize that what we really want is based on human language, although the modality may vary from speaking to handwriting to typing. In certain environments, such as in this particular meeting, for example, handwriting will probably work better for taking notes than talking to your little computers.

What is important is for us to be able to capture the linguistic competence of human beings and to apply that linguistic competence through different modalities and media. We all know that when we converse, there is the notion of turn-taking. There is also the notion of clarification. And the notion of referencing: Whether I want to use the pronoun "it" as I'm typing or speaking, or I simply want to point to something to reference it. Those are all speech acts, and the goal or desire is to capture something like that pictorially. The use of multiple modalities is an essential concept here. We must integrate all of these as a graceful interface inspired by our linguistic competence. This is something that separates us from other animals.

A Look at New Technologies

Most people will probably agree that speech recognition is the most fully developed of the new technologies. In this workshop, in fact, people have talked about that. So, how well can we do speech recognition? I'd like to show you three or four examples of where we are. The first piece of the video is actually a visionary video; it's not real. This is something that was done almost 10 years ago by Apple Computer, called the Knowledge Navigator. I will show you about 30 to 40 seconds of it to emphasize what you have to go through when you're trying to learn something. Watch for clarification, dialog, using hands, using speech--all these kinds of things.

Video plays.

From this brief video, you get a sense of the kind of interface we would like to achieve. The videotape also gives us a reality check: It's mid-1996, and where are we?

The next piece is something that's done at BBN, a company in Cambridge. It's dictation, and the vocabulary includes tens of thousands of words. This is an example of what you can do sitting in front of your workstation and wanting to create a document such as a letter or report.

Video plays.

A third example is a system called Informedia News-on-Demand, under development at Carnegie Mellon University. The idea here is to index a large number of videotapes and access sections by content. So you have to tape the radio announcer's speech, transcribe and index it, and provide the interface to access the information. You ask a question and you can see the speech being indexed.

Video plays.

The last example shows the very beginnings of the rudimentary capability described in the Knowledge Navigator scenario: a person actually interacting with a machine through conversation. Of the examples so far, the first required nothing but recognizing the words, and the second had some capability of understanding. This third example is one in which the user actually carries on a conversation with a computer to solve real problems.

Video plays.

Present technology is far from what we would really need to provide a graceful human interface to access information. I decided to prepare a quick chart to give you some idea, within the framework of the auditory channel, of human capabilities as compared to machines. In speech recognition, for example, humans are far better than machines by at least an order of magnitude--maybe two orders of magnitude--if you compare error rates. Language understanding is something a machine can do reasonably well, provided there is a very restricted domain. If you want to talk about travel, for example, that's okay, but don't ask about other topics in commerce.

In other areas, machines are actually doing better than humans--voice verification, for example. A machine nowadays can tell identical twins from one another. In the case of sound detection, you can sit on one side of a stadium and hear a conversation on the other side. In a sense, the machine is cheating. The microphone can take the sound spectrum and do translation so that you can hear subsonic and ultrasonic types of sounds, with no concern about auditory fatigue. Machines are also much better than humans in terms of sound localization--for example, hearing a siren as you're driving. So in some cases a machine can do better, but that's typically in focused areas where constrained pattern recognition techniques can be applied. With regard to the kind of speech capability we would like to have, however, we are far from it.

By far, the most successful approach to achieving high-performance speech recognition and language understanding is a data-driven approach using statistical modeling techniques. Although we have done very well in improving its performance, it's still far from human performance. Achieving human capabilities would require a tremendous amount of training data--hundreds, if not thousands, of hours of data, testing, training the machine to do a focused thing. And even that might not be sufficient. The current way we approach the problem may not be sufficient as task complexity increases, and we certainly pay little attention to human cognition and perception.

Interdisciplinary Collaboration

I would like to echo what Jim Anderson just said, that interdisciplinary collaboration turns out to be a big but rewarding undertaking. People working on speech recognition are collaborating with people working on natural language processing, which led to the notion of speech understanding. It turns out to be a wonderful thing: Not just transcribing speech, but understanding and acting on it. It's a total win. So far, we haven't gone nearly far enough, and I certainly agree with one of the comments made here today. We don't have an educational program that could train people appropriately, even in such a narrow field as human language.

Now, although I am talking about one particular area, the same probably holds true for other areas as well. If we're interested in this kind of human-computing interface, I think we have to emphasize the set of language-based principles that integrate and coordinate multiple modalities and multiple media. We need to promote an interdisciplinary program that brings together people from different backgrounds and different expertise. We actually have done quite a bit of that at MIT, but not nearly enough. For example, I think people working in speech must understand vision; there are a lot of complementary things we can learn from each other.

The last point I want to make is that if you want to do this, rather than having people work in individual pieces, you must be working on integrated solutions to real-world problems. If you understand the how of human intelligence, how we learn, the best way to demonstrate that understanding is through implementation. I strongly urge that we include a component in this program to emphasize this--to put your money where your mouth is, so to speak.

Q & A

Question: I noticed that most of your thinking here is sort of still "inside the box." A user sits down at this large or small box and that is the environment. As we're starting to envision what we need to invest in to really deal with some of the exciting and interesting changes in our culture that will occur five or 10 years out, there are two obvious concerns.

One concern is mobility: I'm in my car and on my car phone and I want to ask that question about the nearest hospital. What kind of issues are you seeing about that?

The other concern relates to my own programs and one of the things I've been emphasizing in them. What is it like to have a variety of utilities there as support for human-human, human-agent interactions, but as a collaborator experiencing some environment together? In a lot of my work, we have virtual reality environments where the programs are personified as characters; they walk around, talk with themselves and others; interesting issues arise. There is a different kind of intelligence here, a whole different set of strategies in action. What does it mean to have the commentary taking place in the common arena? Can you comment on these points?

Dr. Zue: Yes. Let me very quickly comment, and then maybe there are other people who can join me. I completely agree with you about your second concern; it is extremely important. Both of your points are important. The metaphor of a person sitting in front of a screen clearly doesn't go far enough. Let me give you an example: Soon, there will be a toll-free number you can call at MIT from anywhere in the country and ask about the weather conditions in 750 cities. You will be able to get a weather forecast by talking to a machine.

One of the interesting things about this is that if you take away a display, the communication is drastically different. We make a distinction between "displayful" communication, human-human and human-machine, versus "displayless" communication. The output from the other party needs to be clearly filtered. You can't just dump all the words and have people suffer from information overflow and not be able to capture the meaning. Clarification dialogue then becomes much more important.

Question: Your comments relate to what I am pushing for in the shared context situation; there is a lot of anticipation that a different kind of intelligence is possible. Now the user drives the questions and interactions. In the shared context, you can imagine very different conversational abilities.

Dr. Zue: Absolutely. You're right.

Dr. Anne C. Petersen

National Science Foundation

Afternoon Introduction--Welcome and Objectives

I want to thank this group for joining us and extend a very warm welcome to you all. We have, here, colleagues from industry, private foundations and other federal agencies, as well as several of us from NSF. We recognize that each of you has many demands on your time and we are very pleased that you chose to make this a priority today. I should note, too, that there were others who very much wanted to come and are interested in what we decide.

We've had an exciting symposium so far, and I want to thank the five speakers who gave very stimulating presentations this morning. I know that we all have additional questions and comments that we want to share. In a few minutes, we will hear from Herb Simon on these issues, and I look forward to that as well.

The purpose of this afternoon session is to hear what each of your organizations is doing and to describe a bit what the National Science Foundation is planning on doing in this area. We welcome your feedback on what we're thinking about. We also want to hear about any activities or plans that you have in the different related areas, and we especially want to discuss the potential for various productive partnerships with you.

Interdisciplinary Research at NSF

For a long time, NSF has had research that fits within this overall rubric. I won't go through all of it, but it has largely been within particular areas: We've looked at technology, learning and cognition, animal behavior, and behavioral and developmental neuroscience. Programs in computer science and engineering have long supported work in this area. Over the last decade, we have also funded research on technology and cognition, and science and mathematics education. Hopefully, at some point in our discussions today, specific examples of these will come to the table. More recently we have recognized that we need to think about this as a broader effort cutting across NSF.

Two years ago, we had what we called the Director's Opportunity Fund and we invited proposals from around the Foundation. The proposals had to cross over directorates, with at least two of the Foundation's seven directorates involved. We had $50 million in the fund, overall, and we received many proposals. Three proposals were submitted, however, that were very similar to each other. The first was called Learning in Natural and Artificial Systems; it came from five of our directorates. Another was titled Educational Technology and was the product of two directorates. The third, Intelligent Systems, was submitted by three directorates. From their descriptions as well as from the titles, these proposals clearly had significant areas of overlap. We asked those submitting the proposals to work together to see if something productive could be designed.

The group working on it planned several workshops, and a report from the first of those has been distributed to you. Then, in order to move toward a Foundation-wide program, we encouraged our group of Assistant Directors to become intimately involved with the workshops. We thought it might be difficult to design an NSF-wide program without that key group being very much involved. The group worked with program staff who were asked specifically to represent all of NSF, not just their particular programs, in this effort. It was a very successful effort. Bill Butz was the head of the working group, working especially with Joe Bordogna, as well as several others among the Assistant Directors of NSF. Paul Young will describe this effort later.

Concerns and Possibilities for the Future

I was going to cover what it is that really excites me about this area, but I think we've already observed inspiring work this morning that overlaps with my own interests.

I do want to emphasize one point: We need to be sure that we consider human beings. We must avoid wasteful examples of the kind we've all heard: Somebody gives computers to a school and then assumes that learning will happen. But that school might not use the gift and might leave the computers in boxes. There is no wiring, no software, and, especially, no people who know what to do with this computer resource to use it for learning. We truly have a lot of work ahead of us.

I also spotted an editorial in the New York Times talking about computer addiction, saying that several colleges and universities were concerned recently about students who spend all their time on computers. The students seem to be genuinely addicted to the computers, sometimes doing their school work but not doing anything else. We need to anticipate these outcomes.

There are many important questions that could not be easily addressed with the way NSF typically does programs. I'm sure we all have our own set of burning questions that we think need to be addressed but that are not being effectively answered. But, as we begin to think about what we want to do in terms of our program that won't merely replicate what is already going on, it would be wise for us to find out what industry is doing, what private foundations are doing, and, especially, what other federal agencies are doing. Also, we hope to discuss whether there are possibilities for cooperating.

Let me turn to Dr. Herbert Simon. I think that for probably everyone in this room, he needs absolutely no introduction. He's been the mainstay of this entire field, stimulating lots of ideas for quite a long time. He's a Nobel Laureate in economics. And he's provided a lot of stimulation of both conceptual scientific work and more practical applications.

Dr. Herbert A. Simon

Carnegie Mellon University

Scientific Opportunities of Learning and Intelligent Systems

I have heard some very important issues addressed today. There is the sense of a common mission coming through all of the talks we have heard. There's a reason for this, and it begins with the history that we started with this morning. Jim Anderson related the history of cognitive science--I'll use the word cognitive science to cover all the things we've been talking about over the past few hours--which is a very long history, stretching back at least to the Cybernetics period, and probably even before that. The events he was discussing are now about 50 years old; one interesting question that I raise is: If this is such an exciting field, what's been happening in the last 50 years?

I think there are several things to be said about that. First, a lot has been happening on the interdisciplinary front in the last 50 years. The whole profession of computer scientist was created during that period. Computer science, in itself, doesn't cover exactly what we're talking about, for it extends beyond it in one direction and perhaps starts with another goal, although artificial intelligence sometimes likes to expand itself into these areas. That's a whole new profession that was created.

We shouldn't be faint-hearted about interdisciplinary efforts, although that's an example of another lesson: As soon as we recognize an interdisciplinary area and train people in it, they become a new, narrow discipline that is in danger of the same narrowness we experienced in the generation before. So this doesn't work for an indefinite period. It worked for a while, and then we had to start a new revolution, which is perfectly all right.

So we have another discipline: Cognitive science. And we now have people also doing neuroscience. Again, the term is not new, but as a field in which you can get a Ph.D. and do things, it has arrived. And, yes, universities are regressive in many respects. They're still giving lectures. We're all fond of pointing this out, and some of us are fond of doing it; others have tried to reform a little bit. But we can point to an enormous amount of change and an enormous number of new kinds of mixtures of disciplines, or interdisciplines, over the past 50 years.

Second, we've been making a tremendous amount of progress in the substance itself. Before I get down to that, I want to say why there has been the start of this re-disciplinary shuffle and why there has been this progress. Not all scientific problems are born equal. There are good scientific problems, better scientific problems and really great scientific problems. As far as I'm concerned, it seems easy to pick out at least four problems as being right up there at the top. If you have a favorite fifth, I won't quarrel with you. If you start mentioning six and seven, I probably will quarrel with you.

The Four Big Questions

There are four problems that human beings have been concerned with from time immemorial. First is the nature of matter. High-energy physicists get really excited about that, and they should. Some of the rest of us do too, when we listen to them. Second is the origin of the universe: The Big Bang, as some of us think of it now--at least until the next theory comes along--and all of the things that go with it. Third is the nature of life. There again, we've had a big revolution--the molecular biology revolution, which is also interdisciplinary. And many of you know the history of the Rockefeller Foundation's role--in particular, Warren Weaver's role--in bringing that about and making the new interdiscipline possible. The fourth problem, of course, is the nature of mind: How the human mind operates, what it can do, what it does do, and, today, the imitation of mind by machine.

Now those are great scientific problems, exciting scientific problems. It is hard to think of problems that could be more exciting. They each have the characteristics of the kinds of things we should be providing resources for, and that NSF obviously should and does support. But there's another criterion to consider: Science shouldn't be supported on a large scale unless there's something you know to do about it; unless there are some techniques for approaching it; unless we have the machinery and the means for going at it, or some shadow of an idea. Of course, if you have all the ideas, then you're done and you should go look for a new problem. But you have to know how to take the first step to make research on a topic viable.

In the past 50 years, give or take, there has been substantial and successful demonstration of the possibilities for exploring and understanding the human mind on a variety of levels. This goes all the way from the molecular level up to the level of social systems, including social systems today that contain important electronic components for computing and communicating. We saw and heard examples at all of these levels in the discussions this morning. We saw perhaps a half a dozen examples, and we could have seen hundreds--tens of hundreds--of similar examples. I have a few favorite examples of my own.

Opportunities and Progress

This particular proposed initiative specifically emphasizes the learning aspects of intelligence. This is the idea that intelligence isn't exhibited simply in the performance of an intelligent system, whether human or computerized, but, in the human case, by the fact that the human being can acquire that intelligence through a series of appropriate experiences. The whole trick, of course, is to a) provide those experiences; and b) motivate the human beings to learn from experiences, which is a very large part of the game.

I don't want to continue talking about the things we now know about human beings, because if we knew them all, we wouldn't have to be supporting any research. Victor Zue, for example, showed us some of the things we didn't know or didn't know quite well enough about human speech recognition and human speech understanding. Here we have a field that is well underway, a field in which substantial progress has been made to address one of the really important and exciting scientific problems. This problem is not only exciting intellectually, but practically as well, because it bears on everything we human beings do and the success of it all: On our learning, on our performance, on our decision making, on our problem solving--you name it, speech recognition and understanding is deeply concerned with it.

We're at a particularly good time now--we have been for perhaps 10 years--for pursuing cognitive science with vigor because, at the moment, equipment and the cost of equipment are not really a bottleneck. The NSF people may groan a little when I say that, because they deal with budget limits and all sorts of issues like that. But it's not really a bottleneck. We did our first AI program--The Logic Theorist--on one of the largest machines in the world, the JOHNNIAC. And we barely got it on. As I remember, we had 6,000 words in the core and 10,000 words on a drum, but it was scratched, so we really didn't have 10,000. Today, as you know, you don't even need a PC to run programs like that.

Contrary to predictions made a few years ago, most sophisticated computer programs for either doing intelligent things or modeling intelligent processes are not being run on supercomputers. They're being run on a PC, if not a laptop. So we have no more excuses that we can't tackle a problem because the machines aren't big enough. The machines are plenty big enough. The real limitations today are our own imaginations, our own ability to push through the research to create the concept. It is a really terrifying thought. Assuming, of course, that our salaries are going to be taken care of--but that's peanuts when you look at the investment our society has made, quite correctly, in the other three problems that I have mentioned. As soon as we realize what progress we can make, we're going to make similar investments in this problem. That may be what we are just beginning to do.

Understanding Human Thinking

As I said, we had excellent examples this morning of areas where important progress has been made. Let me mention a couple of my other favorites in order to illustrate some specific points. I have watched the development of this field primarily from the viewpoint of a psychologist trying to understand human thinking by simulating it with computer programs. This is just one of the many approaches that was mentioned this morning--as it should be.

The basic strategy in this particular field over the years has been the following: First, pick a task human beings do that everyone would agree requires intelligence. Then, see if you can figure out enough about it by running laboratory experiments, by taking thinking-aloud protocols from human beings, by doing all the things you can do, such as studying brain-injured people and so forth. See if you can figure out enough about it and understand the processes well enough so that you can build a computer program that will not only get the same result, but will do it in a humanoid way. That, again, is a scientific problem: Finding out what actual processes the humans were using along the way and making sure that those are the processes you are using in your simulation. When you can do that for a particular task to a satisfactory degree of accuracy, so that there is a good match, for example, between the detailed trace of the computer program and the thinking-aloud protocol of a subject, then you have a theory that makes almost second-by-second predictions of the subject's behavior, and you can begin to test the generality of the theory for other subjects. An early program of this kind was the General Problem Solver, which demonstrated the central role of means-ends analysis in human problem solving in a dozen different tasks.

Very early on, these methods were applied to studying the process of choosing moves in chess, although not much of that research effort was really devoted to understanding how humans play chess. Most of it was devoted to seeing what you could get hardware to do--still is today, for the most part. We graduated from those games to puzzles. Tower of Hanoi was a big source of research in the field of understanding what a puzzle is all about, and why it is puzzling. And then people responded in the following way: Oh, yes, of course computers can play chess, but chess is a very well-structured game; it's very artificial; it's not like life. Those of you who work in this field have heard it time and again. So then you say: Well, tell me a task that is hard for people to do, and we'll try that next.

So the field has expanded over the years to the point where we can make some very strong statements, for example, about the processes that physicians use in making medical diagnoses. There is a commercial product on the market now that makes diagnoses, and does so very well at clinical levels. What matters most is that we can show the important extent to which that product, and others like it, do or don't imitate the human diagnostician.

Understanding Creativity and Intuition

The same thing can be said about many aspects of the process of scientific discovery. We can, in fact, show that the processes we postulate in machine programs imitate historically important discoveries in process, not simply in product. An example is Kepler's discovery of the third law of planetary motion, a discovery process that has been successfully modeled by the computer program BACON. Other kinds of problems arise when one does this.
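Kepler's third law states that T²/a³ is the same for every planet. A toy sketch of BACON's core heuristic--form simple products and ratios of observed quantities and test whether any stays constant--can rediscover it from approximate textbook planetary data. This is only an illustrative reconstruction of the idea, not the actual BACON program:

```python
# Toy BACON-style search for an invariant in planetary data.
# a: semi-major axis in AU; T: orbital period in years (approximate values).
planets = {"Mercury": (0.387, 0.241), "Venus": (0.723, 0.615),
           "Earth": (1.000, 1.000), "Mars": (1.524, 1.881)}

def is_constant(values, tol=0.01):
    """BACON's key test: does a derived quantity stay (nearly) constant?"""
    return max(values) / min(values) < 1 + tol

# Search small integer powers T**p / a**q for a constant ratio.
for p in range(1, 4):
    for q in range(1, 4):
        ratios = [T**p / a**q for a, T in planets.values()]
        if is_constant(ratios):
            print(f"Invariant found: T^{p} / a^{q} is constant")
```

Run on these four planets, the only combination the search reports is T² / a³, Kepler's third law.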

There's a pop psychology about human thinking, and a pop psychology about what computers are. Take the latter first. Computers are machines. We all "know" what machines do: Move in a rote fashion. Of course, anybody who lives with a computer has to get over that notion rapidly. Computers are just as confused as we are in many respects. They follow their programs, but their programs can be confusing and are often not adequate to the tasks that we give them. So there has to be--not only among those of us who are doing the research, but in the wider population who has to support it financially and morally--a new understanding of how mechanisms can, in fact, accomplish thought. The best way of providing that understanding is through examples.

Bakunin, the anarchist, always advocated propaganda of the deed rather than propaganda of the word. We are not going to change people's views about what thinking is in human beings or what thinking is in computers by talking about it. We are going to change people's views by building systems that have properties you can determine by comparing them with properties that we can experimentally determine, in a scientific way, to be present in human beings doing the same thing.

Let's take the word "intuition." It's supposed to be a particularly marvelous--and sometimes miraculous--property that human beings have that allows them to do all sorts of great things--especially if they happen to be Mozart or Einstein. (For the rest of us, our intuition doesn't seem to get us to quite the same place, but never mind that.) There is this thing called "intuition." To move a field like this forward, we have to understand not only the very concrete and down-to-earth words, but also words like "intuition," and we have to understand the processes to which they apply.

Again, we've gone a long way toward that understanding. What phenomena do people report when you ask them concretely: "What is intuition? Give me an instance of it. Give me the criteria used to judge when this has been intuition." Today, we can respond to these questions and say: Oh, yes, intuition is simply the process of recognition, and here are some computer programs capable of recognition and of behaving intuitively in the same way that human beings do. The list includes, as a matter of fact, the programs that do medical diagnoses. When we take this pop psychology and submit it to the usual scientific tests, we can find very operational counterparts to the concepts discussed in it. Then we can see what kinds of processes, what kinds of mechanisms, realize these counterparts.

Processes for Continued Progress

Let me sum up this point of view and say a word or two about some of the paths that need to be followed if we are to produce valid theories of thinking. First, in most of the areas that we are dealing with here, we are pursuing an empirical science, whether it be the behavior of computers when they are programmed in certain ways or the behavior of human beings as a function of the kinds of training they've had and the kinds of learning they've done. Now, the term "empirical science" does not indicate that there is no theory. It means that the final test is definite evidence, in the laboratory or in the operation of a running system, that tells us how it really does run and whether the theory is right.

Insofar as we use computers in this business, the computer gives us a new tool of theory. A computer program is nothing more and nothing less than a set of difference equations. Difference equations are different from differential equations only in the way that they treat time: They step instead of flow. So we have today large bodies of theory in the form of computer programs that tell you exactly what a set of difference equations tells you--namely, if the system is in this state right now, and you give it this input, then it will be in that state at the next turn of the screw, and so forth and so on. So we have a powerful tool--not the only one, but a powerful one--of theory there. We are an empirical science or sciences: We build and test systems, we experiment with people at the symbolic level and at the neural level, and we bring these two worlds together by simulating human thinking and learning.
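Simon's difference-equation view of a program can be made concrete with a minimal sketch. The system below is hypothetical--a toy learner whose entire state is one number nudged toward each input--but it shows the structure he describes: given the state now and this input, the program determines the state at the next turn of the screw.

```python
# A program as a difference equation: state(t+1) = f(state(t), input(t)).
# Here f is a toy learner that moves its estimate halfway toward each observation.

def step(state, observation, rate=0.5):
    """One 'turn of the screw': map the current state and input to the next state."""
    return state + rate * (observation - state)

state = 0.0
for observation in [4.0, 4.0, 4.0]:   # a constant input stream
    state = step(state, observation)
print(state)  # 0 -> 2.0 -> 3.0 -> 3.5 over the three steps
```

Unlike a differential equation, nothing flows continuously: the trajectory exists only at the discrete steps, exactly as a program's state exists only between instructions.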

Finally, let me get back to the learning aspect of this. Our main interest is not just to understand human thinking, but to arrive at intelligent systems that will help us do all sorts of things, such as designing machines or medical diagnosis. Then, of course, from one standpoint, we wouldn't need to worry about learning. Because you can take a computer and if you know what you want it to do, and if you know how to write the program, once you have written it, you can stuff it in the box. Then there is the even more beautiful property: Once you have it in the one box, you have it in all of the boxes. Programs are copiable. You don't have to educate a class of computers; at worst, you have to educate one computer to get the class educated.

However, when we get big complex computing systems that are trying to do anything intelligent, we also know that we almost never want them to be doing the same things tomorrow that they're doing today. In large complex computer systems today, the big problem and the big expenditure of human effort is updating those systems. This requires an understanding of the systems at a level of detail that is almost impossible for human beings to work with, particularly with personnel turnover.

So, we are going to have to look at learning problems, not only to improve human learning in all the ways that have been talked about today, but also in order to make use of this wonderful resource of the computer to help in human work--to do some of the intelligent work itself, such as screening out all the information we can't handle. Gerhard Fischer has helped us to realize that it is the human mind that is the scarce resource, not the information. The only way we can filter through vast bodies of data effectively is by having intelligent systems between us and the information. To achieve that kind of intelligence, computer programs are going to have to learn, because they're going to have to continually change and modify. So there is another important reason for focusing on programs in this general area of cognitive science, and on the particular subject of learning.

The talks we heard this morning were both an illustration of what has already been accomplished in this field and evidence of a marvelous rate of progress over the past 50 years with very modest resources. I once predicted that the computer would be playing World Championship Chess in 10 years. I made the prediction almost exactly 40 years ago. I'm not the least apologetic for that prediction. The only reason it didn't happen in 10 years is that the people, not just the bucks, weren't there. There wasn't the spirit of adventure and the spirit of excitement that I heard in this room this morning. With that spirit of adventure and excitement, we can make that progress immensely more rapid than it has been over the last 50 years. Thank you.

Q & A

Question: The four major problems you identified seem to me to belong more to the powers of what you characterize as the natural sciences. That is, if you think about how we can employ learning and intelligent systems to make this a better world, I would count this as at least belonging as much to the artificial sciences as to the natural sciences. So where, in your four major problems, is this problem of the sciences of the artificial?

Dr. Simon: Well, one of the things I said in my book (The Sciences of the Artificial) is that the line is really a very artificial line. Wheat is an artificial subject, an artificial object. Wheat wasn't built into the system at the Creation. Human beings produced it by selective breeding. When we are talking about learning systems, we are not talking about anything very different than when we are talking about evolutionary systems. Novelty is not coming into the natural world as rapidly as it comes into what you are calling the artificial world. But, however "artificial" an object or a system you create, it has to obey the underlying laws of nature.

One of the things that we are trying to understand, when we build big computing systems or when we try to understand that human computing system called the brain, is what constraints are placed on such systems by the underlying natural laws. Some of those laws are very abstract. For example, we might ask: What are the capacities of such systems for parallel action (for doing a lot of different things at the same time)? Part of the answer to that question may be: Well, there's such-and-such a part of the brain that has to deal with anything that involves conscious awareness, and it only has such-and-such a capacity, etc.

Many models of thinking have such an assumption in them, but others of the constraints are very much more general. For example, if you have a complex task in which there are certain precedence relations, so that certain parts of the task have to be done before others, then what kinds of parallel systems will work in this task and what kinds won't? That applies to human organizations as well as to the way evolution finally got a beetle organized, and what specialized organs it had to have. I think that although it's useful to think about systems as artificial because that leads you to think of how they can be different than they are, we shouldn't divorce that from the natural laws.

Question: You made the distinction between the pursuit of scientific exploration about what humans are like--a fundamentally psychological task--and using computers and simulations to better understand that in terms of principles, versus solving the problem; for example, some optimum or theoretical--you used the word hardware--optimizations. I'd like to hear your comments on when it becomes true that pursuing optimality under realistic constraints comes back to rediscovering the principles of psychology.

Dr. Simon: Well, there certainly is a lot of this kind of rediscovery going on. I think the comment Gerhard [Fischer] made was that the von Neumann computer borrowed a lot of ideas from human psychology. Certainly when we invented list languages, the notion of association was very prominent in our minds, and we saw it as something that had to be in that system to give it the properties that we wanted. So again, I think the reason there are many people who wander back and forth between expert systems and human psychology is because there's been a very profitable trade between the two through the whole history of this field, ever since computers first came into the world. And I think we are going to continue to see that. The current progress in chess programs is not being made just by making the machines faster. It's being made by borrowing more and more ideas from what we know of the human world of chess.

Question: Do you think that there would be a different research agenda if the problem were set as trying to use computers to augment human abilities as opposed to replacing them?

Dr. Simon: I think the research agenda has to be like all research agendas, starting with: What is the nature of intelligence? What do you have to have to get a system to behave intelligently? We have these things here we are very proud of--our human brains, which we think are very intelligent. So a good way of studying intelligence is to study how that system operates. And then we have the practical needs of the world and this other interesting thing called a computer and we can ask the question: How can we get that computer to exhibit intelligence? To some degree it will do it in a humanoid way, but not necessarily.

Then we get to the interesting situation where humans and computers do tasks in interaction. CAD/CAM is an example, although humans are still doing most of the designing in CAD/CAM, and computers most of the drafting. But that too will pass. And then you have to decide, both from an economics standpoint and from social standpoints, what the roles of the human will be in this co-existing system and what will be the roles of the computer. As in most science, you have to get the knowledge first; you have got to get the potentiality first before you can make such a choice. I don't think any of us can promise that the choice will always be made right or will always be made to the advantage of the human species, which is the side we're on.

I think all of us need to be aware that in any fundamental field of science, whether it be physics or molecular biology, you can't advance human possibilities for action without creating deep moral and ethical problems about when those capabilities should be used and for what purpose. If we haven't been aware of those problems before, the more successful we are, the more we are going to be aware of them, just as the biologists are becoming aware, and as the physicists became instantly aware in 1945.

Dr. Paul Young

National Science Foundation

Overview of NSF Learning and Intelligent Systems Program and Vision

I would like to make some remarks before we get to the real business of the afternoon, which is to hear from our friends in industry, other government agencies and foundations about what they may be doing in this general area and their reactions to the initiative and visions being expressed here. I would like to talk to you about what we have done within NSF regarding learning and intelligent systems and where we are in this process.

After this morning's talks and the presentation by Herb Simon, I am sure you feel the sense of excitement that we at the Foundation feel, especially about new knowledge and technologies enabling the human ability to work with intelligence and creativity. We believe that advancing this knowledge and these technologies will require new forms of cooperation. We have heard a lot about that from our speakers this morning, and I am sure you are going to reinforce this view this afternoon. We are interested in cooperation across the boundaries of disciplinary departments in universities, across boundaries between science and engineering, across boundaries between the search for new knowledge about learning and intelligence on the one hand and the problems of applying that knowledge in developing new learning technologies and methods on the other.

We also hope to have a staff that can help cross boundaries between universities and industry, between industry and foundations, between federal agencies and--sometimes the most solid boundary of all--between those of us in the federal government and the rest of you. Having arisen, boundaries are tough to break. It takes a strong vision, and often shared patience, to make the effort and to provide the fuel for the journey. But our vision should be strong. Certainly we've heard a lot of vision this morning.

Our vision is to augment the human ability to learn and create. We hope from these ideas we can develop a shared understanding of what will make new efforts in boundary-breaking worthwhile for each of us, and what will motivate our colleagues to join us in the next steps. I'll come back to the vision and the mechanisms for implementing it in a few minutes. First, however, let me tell you a little bit about business as usual at the Foundation, and then I'll get on with Learning and Intelligent Systems.

Disciplinary Research at NSF

Most of you know the Foundation has seven directorates that organize most of our activities. These directorates have names like Engineering or Education and Human Resources. Within these directorates are programs, where most of the real work of the Foundation is done. The programs have names like Linguistics, Neuroscience, Knowledge and Databases, and Robotics and Machine Intelligence. Some of the programs correspond fairly directly with departments in universities and tend to fund research in those departments. It is, in fact, research in these disciplinary areas, often conducted by disciplinary programs, that has produced a lot of fundamental work in existing areas, including many of the things that we heard about this morning. Earlier, Anne [Petersen] gave a brief overview of some of those areas; I'm not going to repeat them or go into more depth, but they're broad and they span the areas of the Foundation.

In recent years, NSF's funding of such largely disciplinary-based research in these broad areas has been in the range of perhaps $100 million a year. We believe that these expenditures pay off well. The resulting bodies of knowledge and technology in many fields are impressive. This fundamental work is really important, and such work is the foundation on which we must build.

Cross-Cutting Research at NSF

The integration of knowledge across fields and across boundaries that we've been talking about today has only begun. Such integration often begins out in the universities--often before it begins at NSF. Jim Anderson this morning talked of its beginning 50 years ago, and today we are still working at it. Still, those of us at NSF whose programs are responsible for the various research communities in what we're beginning to call Learning and Intelligent Systems have been aware of the need for more cross-cutting work and for building this area more aggressively.

In 1990, several program directors in what was then the Division of Behavioral and Neurosciences and the Directorate for Computer and Information Science and Engineering began planning a Foundation initiative in this area; a workshop was convened in April 1991. Fifteen leaders in the contributing disciplines gathered and produced a report entitled To Strengthen American Cognitive Science for the Twenty-First Century. Thereafter, these program directors continued to coalesce their common interest in this growing area.

As Anne Petersen mentioned, a renewed impetus came about in September 1994 with NSF's new Opportunity Fund, which was designed to elicit new cross-cutting research. Anne mentioned that there were three separate proposals that came in, all spanning the Foundation in many ways, independent of each other and with broad themes common to all of them. As a consequence, the higher management of the Foundation asked the three groups to get together to begin thinking about a common initiative in these broad areas.

NSF encouraged this new integrated initiative and funds were made available for convening workshops and other activities in the general area of Learning and Intelligent Systems. The group formed, met intensively throughout 1995, and planned and conducted four workshops between September 1995 and January 1996. The first workshop concerned Learning and Intelligent Systems broadly; you have the report of that workshop. The second developed a research agenda on Computationally Based Educational Technology. The third focused on Learning and Adaptation in Natural and Artificial Systems. And the fourth concerned Modeling Biological Systems Including the Human Brain.

A great many ideas have emerged from these workshops, along with consensus within the Foundation that a convergence of research on learning in humans, in other organisms and in artificial systems offers a rare opportunity for collaboration, with the potential for important new results. That consensus within the Foundation has surely been reinforced by what we've heard today from all the speakers, Dr. Simon in particular. On these foundations, a broad working group was formed in late 1995 for the explicit purpose of trying to plan the extended support.

The group's efforts met with obvious enthusiasm on the part of NSF's top leadership, and the group was expanded to include all the directorates at the Foundation. This group planned today's workshop, and it has already begun an initial effort in Learning Technologies that I will tell you about in a minute. It has also begun preparing a program announcement to launch a broader initiative in Learning and Intelligent Systems in Fall 1996.

Let me digress a moment. Part of the overall effort here is Learning Technologies, which we started earlier. Recently, we evaluated over 200 pre-proposals dealing with collaborative learning tools, visualization, data exploration, virtual laboratories and so forth. These proposals explore the interface between technology and human learning, drawing on problems of learning to drive research and on problems of technology development to address challenges in learning.

The proposals in this area require collaborative research in a number of areas, including modeling the learning and teaching processes that some of you referred to this morning. We also have research supporting automation and pedagogy, and user-centered design of learning systems. We heard about these this morning as well. All of this is part of the longer-term effort to understand how information technology can help transform education in the 21st century. And it is simply part of the overall effort in Learning and Intelligent Systems that you have heard about more generally today, and of the broader effort in the neurosciences, cognitive sciences and so forth.

Realizing the Vision

At the beginning of my remarks, I promised to return to mechanisms for fostering our long-term vision. The vision is to augment the human ability to learn and create. To do this, we want to foster more cross-disciplinary research; we want to understand how learning and intelligent behavior occur in humans, other organisms and artificial systems; and we want to use this knowledge to develop learning tools that are accessible to users with different capabilities, knowledge and expertise. This, too, is part of what you heard this morning.

We are planning to embody this dual mechanism in a new cross-directorate program. Representatives from every directorate in the Foundation will sit on a committee to evaluate proposals that come into this program. The anticipated program for Learning and Intelligent Systems will lead to the fulfillment of three specific long-term objectives. The first objective is the effective integration of existing research on Learning and Intelligent Systems from such separate fields as animal behavior, artificial intelligence, cognitive psychology, cultural anthropology, education, language acquisition, mathematical systems theory, neuroscience and sociology.

The second objective is fundamental engineering, methodological and scientific breakthroughs from new cross-disciplinary research teams that combine their knowledge and perspectives in new ways to attack the most important puzzles about learning, intelligence and creativity.

The third objective is the development of effective new learning tools that integrate the best new research findings from all relevant disciplines to produce interactive, collaborative and visual technologies for persons with different capabilities and expectations.

This is how we are approaching NSF's vision, and we are eager for your reactions, participation and advice.


Mr. Thomas Kalil

The White House

President Clinton has a significant interest in this issue. He has focused on the need to invest in education and training as a core part of his economic agenda, and the reason for that is becoming more and more apparent. What is happening in the global economy is that technology and capital are becoming increasingly mobile. A recent study by McKinsey finds that this notion of fully integrated global financial markets, which economists have talked about for a long time, is, in fact, becoming true.

If you think about what is uniquely American and what is going to serve as a source of competitive advantage to maintain our standard of living, it is clear that this whole area of lifelong learning is incredibly important. Obviously, it is important not only for economic reasons, but also for social reasons, and for educating children in ways that will make them full participants in our political system as well.

The President and the Vice President have talked a great deal about the opportunity that we have--if we can do it correctly--to use technology in ways that will support lifelong learning. In February 1996, the President announced a $2 billion initiative that would go to leverage activities at the state and local level for states and communities interested in increasing the number of computers in the classroom, for connecting the schools and classrooms to the Internet and other advanced telecommunications services, for training teachers so that they feel comfortable using the technology, and, perhaps most importantly, for developing software and other educational applications that would actually improve student performance. There is a very high level of interest in this area on the part of the President and the Vice President because it is ultimately going to determine our long-term standard of living.

Dr. Eric Horvitz

Microsoft Corporation

Many of you may not be familiar with a fairly intensive effort that Microsoft initiated about four and a half years ago to build a leading research group in the realm of computing. To date, there are about 100 people in this group with subgroups in natural language processing, speech understanding, vision, user interfaces, graphics and my own group, decision theory and adaptive systems--where we study such topics as learning models of users from data, information retrieval and intelligent decision making for enhancing the user interface and operating systems. Learning and intelligence, the themes in your report, cut to the core of what Microsoft Research is all about.

Both the research group and the executive management of Microsoft believe strongly that critical innovations in learning and intelligence are necessary to make computing ubiquitous and valuable to society, and, of course, for promoting commercial success of the software and computing sector. There are three different ways to look at how this conversation can relate to Microsoft, and more particularly, to Microsoft Research.

First, the research group is doing things that are very relevant to this proposed program. For example, there's a project at Microsoft Research centering on automated methods for reviewing large corpora of information and for automatically building large usable knowledge bases for speech understanding, natural language processing and decision making. The research group itself supports targeted external groups, including, for example, the MIT Media Lab and CSLI at Stanford. We support several researchers and teams that have mutual research interests. However, on the whole, Microsoft tends to hire people as full-time employees to build excellence internally while maintaining affiliations with consortia of various kinds.

Second, there is a fairly extensive community-giving program at Microsoft just getting off the ground now. The program's first themes include lifelong learning, K-12 as well as university-based education programs, and the application of technology and computer software to enhance lifelong learning. There is an aggressive software and assistance donation program for schools and libraries, as well as general interest in funding related projects. The community affairs program is especially interested in leveraging donations for lower socioeconomic communities--targeting libraries in inner cities, Native American communities, and so on.

Third, there is quite a bit of work on educational software products for kids at Microsoft. Diligent and aggressive program managers and developers are attempting to take the best technology we have and put it into real products that can be used by children in the K-12 realm to enhance the learning experience.

Mr. William Clancey

Institute for Research on Learning

I have to start by saying that, as a cognitive scientist and computer scientist, I was very excited by the report. I resonate strongly with the neuroscience/psychology/computer science connection--the ideas here are very exciting.

Coming from the Institute for Research on Learning (IRL), I have a different perspective, so I have four points to make. The Institute for Research on Learning is a not-for-profit organization which can actually be viewed as an experiment in multidisciplinary research. We were founded about nine years ago. The hook for me, when John Seely Brown of Xerox got me to join, was that we were going to combine social and cognitive perspectives on learning. John essentially said to me, back in 1987, that in cognitive science we have all of these ideas coming together, but the anthropologists have not been working with us. To a large extent, the sociologists have not been working with us either. What would it mean to bring those people together? There was a lot more detail, of course, but that was the core notion, and it's what I have been trying to do for the last nine years.

The Institute for Research on Learning, though, does not have all the components of what we might consider a multidisciplinary group, because the vast majority of our people are social scientists. I am the only AI researcher at IRL. Jim Greeno, whom I'm sure many of you know, is at IRL very often; he participates at least one day each week, so he is our other cognitive scientist. Everyone else is an anthropologist, sociolinguist, or social anthropologist, and the vast majority are women--probably 40 out of 50. So it's an experiment, and my conclusion after nine years of this work is that the collaboration is a big success, within the limits of what's practical and what's worth doing.

To summarize what I've learned: I came into IRL with a background in expert systems and intelligent tutoring systems--clearly an AI researcher. One of the puzzles we faced was why industry was not picking up on designs that were already 20 years old. For me, the hook was the realization that we were changing the practice of instructional design, and that this couldn't be done just as computer scientists. Yes, developing tools is a large part of changing a social system, but it's not the whole picture. That's why we didn't have the success we expected when we believed that the tools would speak for themselves.

The second thing I learned has to do with broadening the design space of what we can do with the technology. In a nutshell, this is the idea of augmenting intelligence, not just replacing people. It's great when we can automate and when we can replace, and that's what we want to do as soon as we can, maybe. But we haven't been considering how we could facilitate conversations as opposed to just replacing them.

Let me give you a simple example that really hits me, one that speaks for the change in mindset. When I was developing a tutoring system from the MYCIN Expert System back in the early 70s, my perspective--and again, I think it was fruitful and important--was the notion of building an omniscient tutor. I wanted to build a program that could do as well as any person and would allow individualized instruction. That led to a very fruitful line of research; but what really strikes me is that we never considered an alternative use of the technology at the time. Part of the reason was because we wanted to push the state-of-the-art of the technology, not solve the education and training problem.

It didn't occur to us that we could have taken MYCIN as it existed in 1979 or 1975, with its rather good explanation system, and given it to students with, say, a set of 10 cases, and told them that in some of these cases the program was making mistakes. We might have challenged them to use the explanation program to find out where the knowledge base could be improved, and then to tell us what the improvement should be. Or, perhaps, even to find another case in which the program did well, or a case where it didn't do well.
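
The exercise described here can be sketched in code. What follows is a minimal, hypothetical illustration--not MYCIN itself--of a rule-based diagnostic program with an explanation trace, in which one rule is deliberately faulty; a student working through cases with known answers can use the trace to locate the rule that needs improving. All rule names, findings and cases are invented for illustration.

```python
# Toy forward-chaining rule system with an explanation trace.
# One rule (R3) is deliberately wrong, standing in for a flaw
# a student is asked to find in the knowledge base.

RULES = [
    # (name, required findings, conclusion)
    ("R1", {"fever", "stiff_neck"}, "meningitis"),
    ("R2", {"fever", "cough"}, "pneumonia"),
    ("R3", {"rash"}, "meningitis"),   # deliberately faulty rule
]

def diagnose(findings):
    """Fire every rule whose conditions hold; record which rules fired."""
    trace = []
    conclusions = set()
    for name, needed, conclusion in RULES:
        if needed <= findings:            # all required findings present
            trace.append(f"{name}: {sorted(needed)} -> {conclusion}")
            conclusions.add(conclusion)
    return conclusions, trace

# The student runs cases with known answers and inspects the trace
# whenever the program's conclusion is wrong.
cases = [
    ({"fever", "stiff_neck"}, {"meningitis"}),
    ({"fever", "cough"}, {"pneumonia"}),
    ({"rash"}, set()),                    # program wrongly concludes meningitis
]

for findings, expected in cases:
    got, trace = diagnose(findings)
    if got != expected:
        print("Wrong on", sorted(findings), "- explanation:", trace)
```

Running the cases flags only the third one, and its trace points directly at R3 as the rule to repair.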

What really strikes me now is that the technology was there. We didn't have the hardware: we had to deliver it on a million-dollar machine, so we couldn't have put it in front of all the students. But in terms of the design space, that was not one of the possibilities we considered, and that was because we were playing a different game. Today we can consider it, if we're interested in solving the real problems. That was an eye-opener.

Another aspect of this change of design space--and this is at the core of our change in mindset--was a different notion about knowledge. The big lesson from the social scientists was to think about the work that people have to do in practice when they have a theory in hand, such as a company policy. What do they have to do to solve the problem? I observed a very clear example when I worked with Xerox in Lewisville at their Customer Administration Center. There was a first-level manager sitting around with her people, essentially saying: Here's Xerox's policy, now how are we going to solve the customer's problem? Their problem was to creatively interpret that policy so that they didn't get knocked down, but still they adhered to the higher rule of solving the customer's problem.

The notion of construction of knowledge as occurring in everyday design and in policy interpretation was a big shift. I had originally understood construction of knowledge to mean the construction of scientific theory, but that led down the wrong path of cultural relativism and so on. The real meat of it was on the floor, with policy interpretation and design problems.

What IRL has been doing is basic research on real applications, which is great. We're still selling that and we're quite successful. We're up to about a $5 million budget, soft money. NSF has been very generous on the school side, and we have the business side as well. But what we have concluded is that we need to preserve the basic research of each of the disciplines as well; the reality is that the individual researchers continue to publish and participate in their own communities, and we want that to continue.

A second point comes from work at Kaiser Permanente, NYNEX and Xerox that I've been involved in for the last three or four years. I would say that business, speaking very broadly, is willing but not ready. I am seeing amazing expert systems and intelligent tutoring systems; they are out there. People are willing to build these things, but the businesses, the corporations as a whole, are immature. Their immaturity comes from a lack of local infrastructure. You find people who are the champions, who have no training in how to build an expert system, but who are trying to build something. So you see people coping, coping with the resources, the time and the practicalities, but without enough in-house capability to do what they are trying to do.

The second half of this point is that the expectations are also quite immature. Again, design space needs to be broadened. It astounds me to think of somebody saying, "They're paid $20,000, I've got to hold to that. I can't train them on how to do the problem better. I'm going to just have to use an expert system there." It is 1996, and this is happening! The difficulties are in getting them to think about what learning is, and why it is that they're calling these people knowledge workers. All I can say is that it is a big systemic mess. Their understanding of how to use the technology is just beginning.

My third point comes from my experience working with social scientists. I've learned a tremendous amount, and I've been very excited by this experience, but there are major problems. The most important problem is a lack of training, a lack of understanding, of what design principles are. It is, in its most extreme form, a lack of appreciation of what modeling is all about. I can only capture that by a phrase that just left me in total silence: "How can I model what I don't understand?"

And finally, I came in with an idealized view about how social science was going to be the full complement to cognitive science and psychology. I've learned a lot about motivation and about identity and participation, but I've been very frustrated that few people talk about curiosity or about leadership or ego.

My points are that overall multidisciplinary research has been fruitful, that we have to retain our disciplines, and that business is not quite ready. A lot of work has been done, including the report we received, but I don't see very much included about bringing in the social sciences. I think it could be fruitful, but it's going to be very challenging.

Mr. Phillip Milstead

National Aeronautics and Space Administration
NASA, of course, is interested in intelligent systems, primarily in the field of engineering and in a few other areas as well--more so than in the education/lifelong-learning arena. But we do fully understand the need for technology to bleed from one area to another. The value of technology exchange is evidenced by our past projects in education and lifelong learning. We have been involved in curriculum development, low-cost connections to information, and other related activities. All of this is well-documented in the High Performance Computing and Communications Supplement to the President's budget. Any one of you can look it up there if you like, or come to see us. We'll gladly talk with you about it.

We also fully believe in collaboration with other agencies and companies--whatever is necessary to get the job done. There is definitely room for technology tradeoffs here between the varying disciplines. We all look forward to doing that.

Dr. Steven F. Zornetzer

Office of Naval Research

Dr. Simon mentioned that the field is about 50 years old. A few weeks ago, the Office of Naval Research (ONR) celebrated its 50th anniversary. I think NSF's founding fathers all came from ONR, the original funding agency in this country. So it is interesting to be coming back together here.

We have a different model at ONR for funding and supporting interdisciplinary research, which we view as very important in our organization. Someone said this morning that the most interesting problems are found at the interfaces between disciplines. We firmly believe that, and we commit a good deal of our research money--in particular, our 6.1 basic research dollars--to exploring these interfaces. In my own department, which is analogous to one of your directorates here, we have a number of interdisciplinary programs that I have begun over the past several years.

The first interdisciplinary program is Neural Networks, a marriage between neuroscience, electronics, computer science and mathematics. Neural Networks, which we began in 1984, has been a very successful initial investment. The second program is Cognitive Neurosciences. The third, Behavioral Immunology, was a very successful program that NIH subsequently picked up. Fourth is Molecular Marine Biology--again, the marriage of fairly disparate groups that we encouraged to work together to develop a new understanding at those interfaces. Two other interdisciplinary programs are Computational Vision and Biomaterials.

These are areas that we explore with as much money as we can put into them. In addition to this kind of horizontal integration across disciplines at ONR, in our recent reorganization we've also applied a kind of orthogonal, vertical integration: we have decided to take our investments, from basic research to applied research, and focus them in integrated ways. In Cognitive Neuroscience or Human-Computer Interaction, for example, I'm responsible, as are my counterparts at ONR, for integrating basic research dollars with technology-development and applications-oriented investments. We develop products, and we try to develop a stream of coordinated investments from basic research through advanced development.

This is a very interesting model, a model which, again, most of the funding agencies really have not embraced. It is sort of an experiment. I know DARPA does this to some extent, but as ONR has more basic research money, and DARPA has more applied research money, there is a different emphasis in the way the balance is worked out in each organization. Again, as Jim Anderson alluded to this morning, some of the military funding organizations, particularly ONR, have long histories of trying to find those high-risk, high-payoff areas. We know we're going to fail in a number of these, but we're willing to make that investment.

We've specifically developed internal devices to motivate the development of interdisciplinary programs. I can talk about peer review, for example. It is an enemy to the kind of thing you're trying to do here. This won't work if you rely on traditional peer review as the basis for making decisions in this kind of interdisciplinary program. As Dr. Simon indicated earlier, this is a really fundamental problem. It's one on which we should all work together.

It's interesting that NSF, ONR and DARPA are all within a block or two of one another. We don't cooperate or collaborate on as many project areas as we could. This is one that we all have a significant interest in. I've got a lot of money invested in related areas, and I know you have more than we do. It would be very productive for us to explore ways to work together.

Dr. Yann Le Cun

AT&T Laboratories

AT&T Laboratories is the part of Bell Laboratories that stayed with AT&T after the recent break-up of the company. It's a group of about 350 people and represents about half of the information science part of Bell Labs before the break-up. My personal background is in machine learning as a way of understanding human learning and animal learning, and as a way of building intelligent machines. That is the primary interest AT&T has in that respect.

There are many groups within AT&T Labs, and among those 350 people, I would say there are 40 to 50 who are involved with machine learning to some extent. Machine learning has had an enormous impact on certain areas of computer science recently; today, it is an essential part of intelligent systems. For example, the recent success of speech recognition can be almost entirely attributed to new ways of training machines, namely interactive models. The success of recent systems in machine vision, particularly for handwriting recognition, can be almost entirely attributed to neural-network technology and is based on machine learning. There are, of course, several other subjects in which machine learning can have a big impact and already has in certain respects.
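
The kind of learning described here--a classifier whose parameters are estimated from data rather than hand-coded--can be sketched minimally as follows. This toy example trains a single-unit logistic model by gradient descent on an invented, linearly separable 2-D dataset; the data, learning rate and iteration count are illustrative assumptions, standing in for the far larger multilayer networks used in the handwriting systems mentioned.

```python
import numpy as np

# Two well-separated classes of 2-D points (invented for illustration).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.2, 0.3],
              [2.0, 2.0], [3.0, 2.0], [2.0, 3.0], [2.5, 2.5]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Parameters are learned from the data, not written by hand.
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(2000):            # full-batch gradient descent
    p = sigmoid(X @ w + b)       # predicted probability of class 1
    grad = p - y                 # gradient of the cross-entropy loss
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print("training accuracy:", (preds == y).mean())
```

On this separable toy set, gradient descent finds a boundary that classifies every training point correctly; the same training-from-data principle, scaled up, underlies the pattern-recognition systems discussed above.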

In the area of pattern recognition, we now know that we have no way of doing it other than relying to a large extent on machine learning. Machine vision normally ... [inaudible] ... computer vision, speech recognition, natural language processing--this is becoming more and more dependent upon statistics and data collection, intelligent agents, data mining and things of that type. There's a fairly big effort in AT&T Labs on the theoretical foundation of machine learning and learning in general, particularly using certain recent developments that draw on work done about 20 years ago around the theory of the ... [inaudible] ... dimension. The potential applications are, among other things, human-machine interaction, control of large telecommunication networks, and, obviously, helping telecommunication customers to mine data on the network.

AT&T is interested in other aspects, sort of the flip side of what we've been talking about here. We are interested in human learning, the training of people. AT&T is also involved in a project to connect schools to the Internet and because of that we are involved in setting up computers in schools. These are the obvious overlaps with what has been discussed here.

Dr. Mark Miller

Apple Computer, Inc.

I'm from what is now called Apple Research Labs. A week ago, we were the Advanced Technology Group. The name change reflects a slight change in emphasis on the part of our new CEO, Gil Amelio--we all feel good about that and about him. Within Apple Research Labs, we have a number of different laboratories in areas that fit together very well. One, the Knowledge Systems Lab, is doing work in areas such as extracting structure and content from unstructured text and searching and indexing. ...[inaudible]...

Another lab is developing advanced prototypes of what we call Pervasive Computing Devices, mobile wireless devices that are more like an "information appliance" than the conventional notion of a computer. Obviously, there's a computer in there, but it might be somewhat more specialized. The idea is to think in terms of devices being portals into the information space: It's not the box that is important, it's the knowledge that it lets you access.

My own lab, which until recently was called Learning and Tools, has a number of programs. (I'll tell you the new name in a minute.) We also have the Apple Classrooms of Tomorrow, where the focus recently has been on the establishment of international sites. In the last couple of weeks, we opened sites in Australia, Scotland and Belgium. We're looking at sites in the Pacific Rim in the future. We face a number of interesting issues related to cultural differences, for example, in perceptions about pedagogy. A lot of what Apple Classrooms of Tomorrow has been about recently is teacher and staff development and changing the notion of the teacher's role. In the schools in Japan, say, we face more of a challenge in changing the teacher's role to the "guide on the side," rather than the "sage on the stage," which was talked about earlier this morning.

We are also very interested in interactive simulation for learning. We have a project that was originally called KidSim and was recently renamed Cocoa. The new feature it offers is that children can now publish their interactive simulations to the World Wide Web as Web pages. They are very interactive and very dynamic. It's a lot easier than teaching them Java, even though it may output Java.

We've done some work in Internet commerce. We're particularly interested in the notion of an object economy and what that means for learning in the future. We did the prototypes of the first virtual servers, where you can buy QuickTime conferencing from Apple over the Net; that's now in the product groups. We also created the teacher development business and sent that off into the K-12 group as a business.

We have some work in authoring tools for learning environments, funded, in fact, under the Technology Reinvestment Program. It is called the East/West Consortium, and it is in collaboration with about six universities and two publishers. In some sense, the idea for that work came from looking at what was happening with multimedia in learning and powerful constructionist environments that could be created, and realizing all the power that we had in the intelligent tutoring system. We arrived at the kind of insights that Bill Clancey was mentioning, such as, Wouldn't it be great if there were tools that would bring those together?

A couple of years ago, you had to do those kinds of things separately. You couldn't easily put rich multimedia into an intelligent tutoring system. Today, you see many rich examples of getting to the universities through the Apple Web site, showing what kinds of things are being built with those tools. More and more, these tools are moving toward Internet standard technologies.

We are also beginning to see a real meeting of the minds between the instructionist style of intelligent tutoring system and the constructivist work that's gone on, for example, in Apple Classrooms of Tomorrow. We have a program in performance support where we're trying to get at lifelong learning issues. One of the places where we have done some of that work is rural India, where we've looked at health care workers who are called Auxiliary Nurse Midwives. Their primary responsibilities are in the areas of family planning and health issues during pregnancy, but when they visit a village, they may have 5,000 patients. They might have somebody with a camel bite and they need to know what to do about that. If there's an outbreak of malaria, wouldn't it be nice if they could be carrying around a small device such as a Newton, and that device would have a window into that information space where they could learn about what they need to deal with the malaria--a just-in-time, on-demand kind of learning.

These are the kinds of activities we've been doing in the past. More and more we've been struggling with this whole issue of social construction of knowledge and trying to change our mindset in many of the same ways that Bill talked about. I am trying to bring that social perspective into what has been a cognitive science perspective. We're actually changing the name of our lab a little bit to reflect that. Instead of Learning and Tools, we're going to be called Learning Communities. We're looking at the use of the Internet for creating communities of virtual learners where we can break the boundaries of time and place. In building these communities, we have to identify the issues: Do you have to start with an existing physical community and expand it into cyberspace, or can you create it virtually? How do you create an invitation for people to join the conversation? And so on. It's very timely.

I've been active in the Net Day activities. There's a session that Tom Kalil is holding on Saturday which I won't be able to attend, but one of the things that comes out when you talk about Net Day is that, in California, I believe we now have the largest ratio of 10Base-T outlets to computers of any state in the nation. But just having the wire certainly isn't the answer, although it may be part of it. In the same way, just having the computers isn't the answer. According to Delaine Eastin, California's State Superintendent of Public Instruction, we're number 50 out of 50 in the number of computers relative to the number of students in the California public school system.

But even if we did what somebody mentioned--bring lots of boxes of computers there--the boxes may just sit on shelves if they're not handled correctly. And that's what my lab is about: What do you do with them? One of the aspects is getting the teachers to be lifelong learners and getting them to change that whole conversation. That's really what it's got to be about.

A lot of people who are out wiring the schools are following the model of a giant digital encyclopedia, saying that what we need to do is provide access so that we can fill these vessels with knowledge. But what we really need to do is change the conversation to be more about how we invite them to join a community of lifelong learners.

Ms. Vivien Stewart

Carnegie Corporation of New York

I am here today more to learn about what's going on than to contribute, because I think Carnegie and most other private foundations have very limited investments in this area. The part of the foundation in which I work, the part relevant to this discussion, focuses on improving children's learning for a whole variety of reasons, including the economic one cited earlier. We are interested in the range from about birth through high school, both in schools and in out-of-school settings, with particular attention to the connection between learning, health and the social environment. Although that is not something I heard discussed very much around this table, it may still be on people's minds.

The foundation funds a variety of research, demonstration, and policy-analysis projects, some of which are in this cognitive science tradition. In fact, a number of the intelligent tutor projects that have been mentioned around the table have had some funding from the Foundation. But others, I think, come more out of the traditions of child development and education systems research. I certainly resonated with the comment about building these wonderful tools: if you've got a chaotic school system without the capacity to know where it's going or to apply itself, then you won't gain very much learning.

With the exception of some private foundations that have a basic research focus, the ecological niche of most private foundations is more on the application side. They have relatively small dollars compared with government or industry; on the other hand, they have a lot more flexibility. They don't have to use peer reviews or RFPs if they choose not to. Many of them put a big emphasis on disseminating information to practitioners, which I know is sometimes a limitation of government research agencies. I would say that most of the private foundations with an interest in education simply don't have a regular means of exposure to this set of ideas and research. There are a couple of groups of foundations in the field of education that get together four or five times a year to discuss what they're doing. One subgroup is particularly interested in the applications of technology; they have actually been meeting with people from the Department of Education.

The emphasis of those discussions is on what kinds of applications make sense. Schools are scrambling to get on board the technological bandwagon, but they are quite confused about how to determine the best uses of technology for learning. A lot of the applications being sold are not particularly powerful. There is also some interest in trying to help schools be intelligent consumers of technology. And, on the flip side, there is interest in the dangers of some of this for children. At a minimum, this group is very focused on the educational potential of new technologies. There is fear that it may be just another television, that it is going to be more a medium for entertainment and advertising than for learning, and that learning and children's interests are underrepresented in many of the public policy discussions about what is happening with these technologies.

I think it would be extremely useful if there were some way for NSF to create some means by which this particular sector could be exposed periodically to some of the most interesting research and applications coming out of this field. The current knowledge in that community is very short-term and applications-oriented.

Dr. Peggy McCardle

National Institutes of Health

I am in the Director's Office of Extramural Research, which is kind of NIH Headquarters. I have only been there for two months, so I'm not going to tell you everything NIH funds that would relate to this meeting. There are some basic science efforts in various institutes: Neurology, child health and human development, deafness. There are lots of different activities because we actually have 23 separate funding agencies.

My background is in neurolinguistics and then clinical language pathology and language development. I will take all of this back to NIH and we will certainly talk about it there. One thing that we are very aware of at NIH is that research is becoming, in general, much more interdisciplinary. As dollars are competed for more fiercely, there are more efforts at collaboration across institutes within NIH in terms of co-funding and other aspects, which also promotes a more interdisciplinary mindset. So in that sense, we're moving in this direction, too.

In that vein, let me say something defensive about peer review, because I have spent the last four years in peer review. I've heard comments today that it's very conventional, that it's not very flexible, and that it is not the way to fund interdisciplinary research. And yet, many of the study sections--especially the ones that deal with aspects of behavioral science--are quite interdisciplinary, and the only way that you can review the research is to have multiple experts from multiple fields. You can't bring in one or two interdisciplinary scholars who are going to know every aspect of the many fields that an investigator-initiated research application might bring together. We're very proud of our peer review system and we're moving to make it more flexible. So don't discount it as the way to fund research.

At NIH, we do a lot of basic biomedical science; we don't do a lot of applied research. But there are some other things that we fund at NIH, such as small business innovation research and small business technology transfer research. Those projects are product-oriented and often involve really nifty things--for example, an interactive way to teach neuroscience to high school students. And I think we're going to see a fair amount of what you're starting here at NIH eventually, and that's good.

Dr. Beatrix A. Hamburg

William T. Grant Foundation

We're a small foundation, not just in comparison with the federal government, but in comparison with Carnegie and some others. The mission of our foundation has been to optimize the growth and development of children and youth using research--rigorous research--as a guide to action. We support a wide spectrum of research, but we've had an emphasis on interdisciplinary research for a long period of time now.

One of the ways we have backed up the research is to have what we call the William T. Grant Faculty Scholars. We support them at $50,000 a year for five years, to keep them in the pipeline--we recognize it takes a while. A number of people have commented that you can't have a business or Wall Street orientation, focused on the quarterly report, and expect to have results every three months. We also provide the researchers with a support system. They meet annually to exchange information, hassles and triumphs, and strategies and logistics. New collaborators develop within the network and through the contacts they make as a result of the network.

We are willing to take risks. We have our own peer review system, but we offer our grantees the opportunity to reply to the critics before their proposal is judged; I don't think any of the government agencies do this. Often the replies are very persuasive, especially when the proposals are at the cutting edge and simply haven't been fully understood by the reviewers.

We're also unusual in that about half of our board are academics themselves. We view these cutting-edge projects as venture capital and as taking risks. I am proud to say that one of the earliest grants that Paula Tallal ever received was from the William T. Grant Foundation, and that was a while back.

I want to thank everyone at NSF for the effort they have expended to bring about today's symposium, and for their earlier efforts as well. The presentation this morning was terribly exciting and extremely timely for us. We've had a lot of proposals about using computers in the school, and I've never been able to pinpoint the reason why we've never supported any of them, with one exception that I'll mention. I understand now that the reason is that it was all gift-wrapping, and now I have a rhetoric to describe our decision making.

We view education as absolutely core to the well-being of children. It is not only about what they learn; it's also about seeing education as an integral part of their overall growth and development, providing an alternative to drugs, violence, etc. Education has a powerful impact on their life options and the course of their lives. We've had a very strong interest in developing a niche with school reform. A lot of the changes to date have been structural. We have been interested in motivation and learning as a social process and introducing cultural aspects into it for both teachers and students.

I have gotten a lot of wonderful ideas from what I heard this morning. The school-to-work transition has been an interest of ours since the mid-1980s, when, among other related projects, we published a work called The Forgotten Half, about non-college-bound kids who, upon graduating or leaving school, are left to flounder and typically are not attached to the labor force for another decade. It's more like the forgotten three-quarters by now, but in any case, we've had an enduring interest in that area. This discussion is opportune and timely because the business world is beginning to ask the crucial question: What is the role of employers in the school-to-work transition? They would, I suspect, be happy to begin to use some of the computer models in their training programs and in the ways in which their new employees begin to assume responsibilities. There are other things they need to do as well, but I think that would be a very important aspect of it.

The only computer effort that we have supported is Classroom, Inc. This is a program that simulates, in a very realistic way, experiences that are career-oriented. It is highly interactive; the ones that they have developed up until now have to do with banking and hotel administration, and they are just beginning to implement ones on medical careers. As we move increasingly into virtual reality and more of the interactive aspects, including an emphasis on decision making in uncertainty and things of that kind, there is a lot of promise for a relationship with the school-to-work transition.

Dr. Kirstie Bellman

Defense Advanced Research Projects Agency (DARPA)

When I came into what was then called ARPA, I was asked to head a number of software and system engineering projects to help architect some of the middleware. This so-called middleware was going to go on to be used in national information infrastructure applications. I'm also a psychologist and a neuroscientist, in addition to having been involved with mathematics and computer science since I was 17. I ended up heading what has turned out to be one of the largest projects ever, spanning basic research all the way through applied research in education and training: the Computer-Aided Education and Training Initiative (CAETI). This is one of those interesting things that can happen in large complex systems. Having now sat on top of an $80 million program in this area for a couple of years, I have some scar tissue I can share with you as you launch into this area.

First of all, DARPA has made the following mistake several times: We all believe in interdisciplinary efforts, we all believe in integration. We say to the researchers, "You guys are now funded to do work statement X, but, by the way, make sure you collaborate with all these other superb people that we've brought together in this community." The bottom line is, it does not work.

One of the things that has worked in CAETI is to take one-fourth of the money you're planning to spend and devote it explicitly to people, methods, and new research on how to integrate and how to do the interdisciplinary work. In CAETI, as program manager, I spend a lot of time trying to be sure all the voices are heard. That's one of my chief jobs, to make sure that none of the different disciplines overpowers the others. I have to be sure the technocrats don't always win. On the other hand, I have to be sure that what we hear from some of the educational community doesn't sway our good efforts too far the other way.

This includes having specific people explicitly in charge of clusters, in whatever way you decide to partition the research. It also means having people in charge of running between clusters, helping to coordinate and keeping track of who can be teamed together. At first the researchers were worried that there was a hierarchy and that they were subordinate to the cluster leaders. I reassured them that the leaders were there to take care of interaction, coordination, and tracking so that they could do their work. It also includes developing new mathematical notations expressive enough to let us bring some analytic power to reasoning about some of the system-level issues occurring in the program. It's all very hard and rich, and there have been some strides made in this area.

There is another thing I require of everybody getting any money from CAETI--and this is now more than 100 different groups. We have about 60 prime investigators, and then we have these teams. Every researcher, whether working in 6.1 or all the way to 6.3, or out the door, is required to do two things. One is that they have to cooperate with the system architects and the engineers who are trying to build some overall coordination into the program. This means you have to start the mechanisms of organization as you start some of the research, and you have to have some sort of consensus-based feeling of what you can do in the way of interchange. I am very proud to have labs like Schank's and Lesgold's and others, for the first time, opening up their toolsets so they can exchange user models, agents, and techniques. We can actually then do experiments in this field where we can test across different pedagogies, across different kinds of domain-modeling techniques.

The second thing is that they all have to be involved--whether or not they're the most research-oriented person--in developing new instrumentation and new metrics for their area. So we have collaborative virtual environments, interesting stuff that I'll tell you about, Web-based tools, interesting kinds of new intelligent agents.

Part of our difficulty is that if we're going to do anything really scientific as a community here, we've got to develop some way of evaluating what we're doing. We've got to know if we're making progress, and know what works when it works. We've got to have some basis for system engineering and scientific development and research. So everybody in CAETI, whether working on agents or on collaborative virtual environments, is involved in working together in little groups of 20 or 30 people, developing common instrumentation that they can all live with. It's about negotiating common metrics that they can live with, to start comparing across intelligent tutors, for example. This has been invaluable.

Another thing I'm doing in the program, for the whole country, is creating repositories of our evaluation results. They will be open to all sorts of scientific researchers, not just DARPA or NSF, which has been working very closely with us on this program. I owe a lot to Andy Molnar and others in terms of the very basic development, and Nora and I have been trying to do testbeds.

Another point I want to make is that peer review can work. You have to explicitly set up a process in which people feel very comfortable about admitting the limits of their expertise. I also did a lot of roving and observed things that were rejected out of hand; by quickly going through a checklist the reviewers had given me of the things they knew, I could see whether or not the person rejecting a proposal had a suitable background for reviewing it.

Lastly, we really have to get away from this mythology that somehow all that remains for this technology are organizational issues--that it is just a matter of deployment. It is just not true. This is why NSF should be involved. There is some very key, complex, systems-level theory that we have to begin to address when we start talking about cyberspace as the next level of combined processing place. The same is true if we start talking about other kinds of really hard, interesting issues such as: What does it mean to have distributed intelligence that is somehow coordinated into common activity? We don't even have the vocabulary for speaking about some of the things that we're trying to deal with, and we're not going to get to a systemic level without it.

Now I want to mention some of the things CAETI's doing. My very dear colleague from NASA, Tyce Young, once called DARPA the Green Berets of Technology. So the idea is that DARPA's really good at getting everybody's attention, quickly amassing some interest, and then making a couple of breakthroughs. So in this area we tried to analyze, amid the hyperbole: What are we trying to get out of this when it comes to education? We identified four promises and then set up four structures within the program to address them.

One promise is access to all these wonderful digital resources. The problem is, as most of you know, that if you go out there Net-surfing right now, you see fabulous stuff. I don't even have to tell you--pictures, text--but they don't automatically turn into valuable material for education, industry or any other kind of endeavor. The issue then for me was whether or not we can invent some ways of quickly customizing this material, of providing much better user models, descriptions of the tasks, descriptions of the kinds of uses we are going to have to make of it. Are there ways we can go out and capture this stuff with specification languages that automatically start packaging it for immediate use in the back-end of a tutor or a simulation or a lesson plan? We're doing that right now.

The second promise has to do with the fact that one of the things we were supposed to have access to was each other. Groupware was supposed to do this. We can go into the reasons, but we all know it doesn't work very well. Hooking in individual programs doesn't work very well. One of the best examples we have out there in cyberspace is MUDs, games that home inventors have been playing. I am reluctant to tell you what MUD stands for because it'll turn you off; you'll think it's just games. I do offer the fact that games have led to many places in our intellectual history. MUD stands for Multi-User Dungeons. Remember the Dungeons and Dragons games? These home inventors wanted to create a way in which you and I could watch me slash off the head of the dragon, or grab the jewels and run, and things like that. But this turned out to be a wonderful moment in the history of science: In the act of doing that, they created some very different stuff.

Let me give you a sense of how different this stuff is. There are thousands of these MUDs now. Some of them have 10,000 players (that means they're on three hours a day) who are running around in real-time, creating objects, changing the database, interacting with each other. As a computer scientist, of course, I take one step back. As a psychologist, I take one step back, and I think: There's something seriously interesting here. So I've been putting these MUD inventors together with the established computer science community, which is another whole story in itself and was a lot of fun. And we're souping up MUDs.

The third promise was that computer-based solutions were supposed to allow for individualized education and training by appropriate presentation and customization of material and feedback. The problem here is twofold. A phrase I use is "blowing out the back-end" of the tutor. The idea is that the way tutors were being written was that you wrote into one monolithic computer program all the pedagogy, all the domain knowledge that you were going to teach, and all the user-modeling techniques, etc. We do something different now. We can essentially make the tutor a front-end to an enormous number of different resources. That is also the way we can do the experiment across different pedagogies. That's one part of it.

The other part of it is to take the tutor and make it part of the student or learner environment and allow them to co-experience it. Making the tutor a front-end for an enormous number of resources, and making it co-experience an environment, clearly raises tremendously difficult research issues. You have to change the kind of knowledge and intelligence that tutors have. They have to know a lot about recognizing an instance of their own knowledge: if I'm an astronomy tutor, I have to be smart enough first to recognize that a particular astronomy site on the Web is my kind of knowledge, and then to organize it, and so on.

The last promise was that it was all supposed to work together. Computer scientists always say that. We've been doing a lot in this area; this goes back to my system engineering and evaluation points. So CAETI has a lot going on and there is a lot more detail to relate among all the different projects.

Dr. Paula A. Tallal

Rutgers University and

Scientific Learning Corporation

This discussion has been very interesting to me, on the one hand from the perspective of someone who has been involved in both federally and privately-funded research, and now, on the other hand, from the perspective of someone trying to transfer research into the marketplace in order to get it into the hands of those who can really use it.

Like most of you here, our interest is in the use of new technology to benefit learning. The most fundamental thing that I can say is that there is a tremendous lack of understanding of just how non-computer-literate most of the people who ultimately stand to benefit most from the technology we are developing actually are. The teachers, the allied health professionals, the physicians, whoever we're counting on to bring our advances to the kids who can benefit from them, are not well-versed themselves in these new technologies. This is, I believe, the biggest barrier to what we've talked about today, and it needs discussion. How are we going to train the people who have to set this hardware up and use the software appropriately in classrooms and clinics?

First, they have to make that paradigm shift, to actually feel comfortable speaking this new language. Second, they have to make it accessible. A lot of people say, "Leave it alone, the kids will do it." But kids can't get access to software until adults in decision-making positions provide the hardware and link it to the Internet, and a lot of these adults are just not equipped to be able to do this. This is a real problem, and one that we haven't discussed here. I would like to see it addressed by this group because it is the barrier to entry, in a very real sense, into the real world. No matter how sophisticated our technologies get, or our software or hardware, the work ends up being about the people who have to implement it and make the decisions about implementing it.

How do you inform the people who make the decisions about what's going to happen in schools--how money is going to be spent, and even how to get the hardware in place? How do you actually get the schools connected up to the Internet? I had an opportunity to hear President Clinton and Vice President Gore, and their views are absolutely right. How are we ever going to be competitive in the next generation if our children are not technologically astute? And yet, the people who are to train them, the teachers of today, for the most part, are not, themselves, technologically astute. The administrators who must make the decisions about whether and how to hook their schools up to the Internet are, for the most part, scared to death of the Internet. How do we deal with that? If we want to educate the children, we first have to focus on educating the teachers and administrators.

Unidentified Speaker

I am really interested in hearing from the company representatives about the areas for research and the broad areas for the Learning and Intelligent Systems initiative that they think are important but that are not likely to receive sufficient investment by the company for reasons having to do with what economists call appropriability. It's easy for Apple to invest in Copland or for Microsoft to invest in Windows NT; you have a clear interest in doing that. When you're talking about some of the more speculative research, the ability to capture the return on that investment goes down. So what are some of those areas that are important, but that the private sector may underinvest in because the returns are difficult to capture?

Also, what do you think are some of the more promising models for building collaboration between government, industry and academia? As I see it, one of the problems that government is going to face is the phenomenal rate of growth of your industry. The semiconductor industry, for example, is going to double over the next four or five years. The federal government is not going to double over the next four or five years. So the federal investment in research and development, as a percentage of your industry sales, is going to decline over time. What models are going to be able to leverage the small amount of investment that the federal government can make in research and development?

Dr. Eric Horvitz

Microsoft Corporation

I'll respond to your second question. I've been quite active in keeping strong collaboration and communication with my colleagues in academics. As you might imagine, Microsoft often conjures up thoughts of commercial success, complete with images of an overflowing coffer of funding dollars for external projects. In reality we, of course, have budgets, and need to make special arrangements for special cases of collaboration, when opportunities arise. Several colleagues have said to me statements to the effect of, "You have a lot of money and we have a big project." I listen to them and try to determine what their research is and how it might relate to our ongoing projects and goals at Microsoft Research. If there is some mutual interest, I will organize a meeting about possibilities for collaboration, which can include a variety of support options. Currently, there are few formal procedures set up for this kind of consideration at Microsoft. People are working to formalize procedures and I am working with them on this.

I should hope that part of the reason for this meeting today and part of the implicit thinking going on behind the initiative you are proposing is that there will be some creativity coming out of NSF on this issue. I would imagine that there are several possibilities for collaborative funding of projects of mutual interest. This kind of thing could come in the spirit of other existing collaborations. As an example, I believe some of the NSF Young Investigator Awards are linked to matching funds from alternative sources. Microsoft considers collaborating with academic consortia that are really exciting and interesting, such as MIT Media Lab and the Stanford Computer Forum. We also collaborate with individual researchers. One can imagine developing a matching grant program that might stimulate the explicit collaboration between the private sector and government funding agencies.

As for your first question, my initial reaction is based on Microsoft's current priorities in terms of research. I believe the problems with coming up with theories and technologies that we really understand in the realm of education and pedagogy are much more difficult to solve in the short term--perhaps even in the long term--than some of the hard technical problems we're working on--for example, learning causal models from data, data mining, and automated reasoning and decision making.

The technically more difficult but better-defined problems are easier to solve when you put a lot of effort into them than are the more holistic problems that hinge on developing an understanding of social, community, human-psychology, and pedagogical issues. The technical problems are more concrete and closed, in a way, to use Herb Simon's notion of open and closed problems. It's probably going to take some long-term, consistent funding to support these longer-term psychological, community, and educational theory research projects. So, as far as priorities, you might want to leave to industry the concrete, closed problems, those that are well-defined and technical, and focus government monies on funding and monitoring research that seeks to develop and refine theories of human learning and education from data and so on. I imagine that AT&T and Apple might agree with this.

Dr. Mark Miller

Apple Computer, Inc.

It does depend on the company, and certainly Apple prides itself on having made much more of an investment than most companies in understanding what it means to learn with technology and to actually change how people think about that. The impact of that investment is that we are the number one provider of technologies to schools, pretty much in the world, and certainly in the United States, in spite of not being quite as successful in some other markets.

Of course, as you look to lifelong learning, you might think that some of that expertise will pay off outside of the traditional K-12 public school. As far as areas for government leverage, the clear difference between government-funded work, particularly at universities, and a corporate lab is the time frame. A lot of the problems that we want to address around the table here are sort of the four grand problems of science, the fourth one being the nature of mind. That is fundamental research, and it is not something that is going to impact your sales three or four years from now, unless you have a real breakthrough. The time from breakthrough to practical application can often be a decade.

So, although there's always a portfolio and a spread, most industrial labs think in the three- to five-year time frame. We talk about what we call radical research--not incremental, minor improvements to products, but also not fundamental research. We pride ourselves in our investment in understanding and learning with technology, but our work is somewhat applied. It is really about understanding how change happens when you introduce technology into a classroom, for example, and not so much about the fundamental nature of learning. It certainly is not about the neurophysiology of learning, which I think is where there might be a breakthrough.

It is not obvious to people who are primarily working in government-funded situations that the funding of artificial intelligence work in industry by internal industrial dollars, as opposed to by ARPA or some similar source, has to a large extent evaporated. The feeling in the industrial labs--with some exceptions, such as Microsoft's work in natural language, for example--is that we've run into some pretty difficult problems that some pretty smart people have broken their picks on. It's going to be, as Herb Simon said, a question of getting to some major new good ideas. Our imagination is the limit; it's not that we don't have powerful enough computers.

We need probably three great new ideas before we're going to be able to do common-sense reasoning. Industry is unlikely to do that on its own because of accountability to the shareholders on a reasonably near-term basis. As a result, work in knowledge representation, in inference, which is absolutely essential to what we're trying to do here, gets funded on the "side" of things. This is partly because there are jointly funded projects, and dual-use kinds of efforts make a difference.

To give one example: At Apple, we put a lot of money over a number of years into the Psych. Project, which was one attempt to do really fundamental research in representation and reasoning. The truth is, we don't have a lot to show for that investment, and it has certainly had an impact on the credibility of future investments in that kind of work. There are a number of us in leadership roles at Apple's Research Labs who feel this is an important area. But it is hard for us to justify a three-year time frame for seeing an impact on revenues by doing more of this kind of work. I think that this is where the government really needs to play a role.

Dr. Yann Le Cun

AT&T Laboratories

There are some very specific areas in which industry might need help or collaborations. One has been mentioned by almost everybody: Learning theory. Work in this area has been mostly directed towards modeling artificial ways of learning, but in recent years it has become sufficiently general that it could be applied to other types of learning, including more natural ways of learning. There's only so much that industry can do in this respect. Some work has already been done, but it's probably not enough.

A second area where there have been successes in recent times is machine perception systems, based to some extent on human or animal perception. But this hasn't gone very far, or not far enough for my taste. It's very hard to justify, in a corporate environment, conducting research on modeling human perception for the purpose of building practical systems. I would disagree with what you just said about having powerful machines. In fact, we are limited by the power of machines. If we had more powerful machines, we could do a lot more things; we would do things completely differently, but we would do a lot more.

A third area has to do with radical new ways of approaching things that are currently improved only incrementally, such as speech recognition. The way of doing speech recognition has basically not changed in the last 10 years. It's been mostly incremental change, and we're stuck in this local minimum, unable to make any major changes to the way things are done because it would mean erasing so many things that we've acquired over the years. We are beginning to see the saturation of the current technology there, and we probably have to find radical new ways of doing things. Again, corporate labs have a hard time doing this.

A fourth area is software for network-based education--remote education--particularly simulation environments for virtual labs. One of the beauties of computer education would be virtual environments in which we could experiment, as opposed to just flipping pages of a textbook. There is a lot of work to do in that respect. It is not clear what the business is for that right now. There probably will be one--particularly for companies that run information networks, like Microsoft and AT&T--but it doesn't exist right now.

The last area is standards in data collection. One of the problems with machine learning and things of that nature is that you can't do anything unless you have a lot of data. Data collection is a very expensive thing; it's typically something you want to share with your friends, which is difficult to do in a corporate environment. When you collect data for yourself, it costs you a lot of money and you don't want to share it. The government is just the right place to do that kind of thing. It's been done by the National Institute of Standards and Technology for various other projects, and by DARPA for speech recognition. There is a role to play there for other types of data collection projects.

Finally, one of your questions was about ways to encourage collaboration between universities and industry. One thing that has worked well in the past is internships. Somehow, we have to have people collocate to work together on a common project. It's certainly the best way to do things, as opposed to remote collaborations, which don't work very well. Students, in particular, are very good vehicles for transmitting information between locations.

Dr. Neal Lane

National Science Foundation

I just want to apologize for not being here for this whole meeting.

If you don't mind, I'll just sit here a little bit and find out what you all have learned. It's an extremely important area from our perspective. It gets at the heart of what NSF ought to be doing, and ought to be about--working with the organizations that we have represented here today. So I thank you very much.

Dr. Beatrix A. Hamburg

William T. Grant Foundation

I have been listening and thinking about learning, and I'm really delighted to hear that there has been an emphasis on the applications in schooling, teaching and learning. One way the universities might be helpful to the business world is in getting developmental psychologists into its orbit. A lot of the models are from adult learning. With children and adolescents, teaching, computer software, or work experience all must be developmentally appropriate. If it's not developmentally appropriate, it just isn't going to work. The video shown at lunch was extremely revealing because it was so developmentally appropriate. It showed what a six-year-old is going to want to do, and how they're going to want to do it. Louis Gomez was obviously anchored and steeped in details of what the developmental attributes of that age group are all about. With this kind of understanding, learning programs could be systematically introduced into the system.

The other thing the video demonstrated was the importance of gender differences. This is one of the few efforts directed towards girls, and it was clear that they were deeply engaged. They wanted to get right into the box. At present, video and computer materials are mostly male oriented. If we are going to address the problem that was expressed on the front page of USA Today, we need to have age-appropriate and gender-appropriate material for all children.

Dr. Kirsty Bellman

Defense Advanced Research Projects Agency (DARPA)

You're absolutely right; there is a problem with bringing in psychologists. Most companies won't necessarily pay for anthropologists and psychologists and the rest in these projects. I'm concerned that it promotes the belief that the part of the equation that's missing is power. I want to point out that engineering is missing as much power. One of the ways we're going to bring this up is to point out that one of the ... [inaudible] ... receive psychological and cognitive data. We have, today, some very good psychologists and cognitive psychologists working within the AEGIS weapon system. They are finally showing that we are asking for impossible tasks, especially under stress, to be done by an operator. For those of you who don't know, these operators are literally looking at six or seven screens and five channels, two into one ear, three into the other. It's impossible. Add stress, emotion and the weight of preserving the freedom of the Western world on top of that.

Psychologists have been painstakingly creating all these studies, collecting real data for the first time about what is required of the operator under different scenarios. The problem is how that information is then fed into the reengineering or the re-architecture of those systems. The answer is: It isn't. There is nothing to receive it on the engineering side. I want to point out this horrendous gap that is as much missing on the engineering side as it is on the psychological side.

Dr. Eric Horvitz

Microsoft Corporation

Microsoft has worked with several social psychologists, and there is a case in point that I was thinking about when I spoke about some of the outside people. Cliff Nass and Byron Reeves, from CSLI at Stanford, have been consulting with Microsoft on their studies of how people react socially to animated agents and cartoons on screens. This work played a role in the design of several products that have hinged on the use of anthropomorphic characters in software.

Microsoft has a fairly extensive set of usability research labs and many people with expertise in leading studies of human-computer interaction. However, in general, the work is not focused on generalizing results and publishing them. Instead, studies are done more with an eye to near-term application development, creating sets of heuristics and rules and design guidelines as to what works better for now, for the next version and so on. This is not part of what I would consider the main work that we do at Microsoft Research. We're quite active in the research community: We publish, several of us have been organizers of international academic conferences, we have interns from all over the world, we give many academic presentations, we are doing both basic research and developing applications. We seek out principles and general relationships. Pursuing principles for human-computer interaction is so much more difficult than many of the problems people in my group focus on; it takes a long-range commitment, and the results aren't as tangible, at least for the same level of funding. It's a harder problem.

Dr. Anne C. Petersen

National Science Foundation

One principle we discussed is that you can sometimes test hypotheses in the classical scientific sense while you're working on a real problem. That is the ideal framework for doing this work, when you can combine the two issues. I know it's not always easy.

Dr. Gerhard Fischer

University of Colorado

One point that has come up often in our discussion today is whether or not learning in artificial systems is the same as in human systems. This is an important research question that needs to be further clarified. To what extent is it the same? To what extent is it fundamentally different? When we look at teachers as lifelong learners, it's not that these teachers don't want to learn anything. They have a job, they have a family, and then they are expected to learn at least the same amount of new things as the faculty member who occasionally goes on sabbatical to refresh his learning potential. It is not an unwillingness to learn. It is a question of learning as an activity being brought into a life where there's no time for it.

One thing we identified is that people often suffer from what we call a production paradox. Let's say I make these notes for a presentation here and my graduate student comes along and says, "Oh, you use this old system? There are much more nifty tools around." At this particular time, I want to get my work done; learning something new is not at the top of my priorities. So you have learning inhibited by working. We all have to get certain things done.

On the other hand, we also know that if you constantly work and never learn, eventually your knowledge will become outdated. This is a very fundamental conflict in human learning, and I don't think it is a particularly big topic in machine learning. I see a lot of synergy in bringing machine learning and human learning together, creating a kind of synergistic view. My claim is that part of the synergistic view is to understand where these things are the same and where they are different. There are fundamental differences.

Dr. Eric Horvitz

Microsoft Corporation

I have been holding back on this comment, but Gerhard Fischer has put me over the threshold. I've felt an uneasiness during this meeting--and in reading the report--about the definition and scope of this initiative. I believe that the scope and aims of the initiative suffer from an attempt to coalesce related but distinct areas of study. I think the authors of the program have gone a bit too far in their laudable attempt to build a unified interdisciplinary program. The current problems in the interrelated areas of computer-assisted education, automated tools for learning, understanding problems in education, and psychological and sociological issues in the education of human beings are quite different (although related in a variety of ways) from the key challenges in the world of automated reasoning and machine learning. Indeed, principles that have been identified in automated problem solving and artificial intelligence may have quite a bit of application in the computer-aided education area. However, I view the fundamental problems and the current appropriate thrusts of the research in these areas as being quite different. I am particularly concerned that the packaging together of research on computer-assisted education and educational psychology with research on machine-based learning methods in this initiative will cause confusion and lead to problems with the understanding of the goals of the program by researchers. I wonder if anybody else here shares my concerns about the coalescing of the two areas?

Unidentified Speaker

I don't think the people who put this together really believe that there is some overall unified theory here. Rather, there are a lot of related things to explore. And I think they would agree with what Gerhard Fischer said, that one of the things to do is to find out where the similarities are and where the differences are. That itself is important.

Dr. Anne C. Petersen

National Science Foundation

We don't intend to quit doing the kinds of things we've been doing; our assumption is that that work will continue. We did, however, want to explore whether there are some things not getting done with the current approach.

Unidentified Speaker

I just wanted to partially answer Eric Horvitz's question. I don't think that there is a deep resolution, but part of it lies in what Bill ... [inaudible] ... talked about. When you're building these artificial systems, what you can do, in terms of near-term practical use for education, is to not necessarily assume that with our current technology they're good enough to replace the teacher or become the source of learning.

You can, in fact, turn it around and give it to the learners to improve; it then becomes more of a constructivist style of development and use. This can be very effective. In some sense, there are extreme versions of this--"AI should be an elementary school subject"--and I think this is right. The fundamental thing that you want people to do is to learn about thinking and learning and what it means to be a lifelong learner. As a practical matter, that's how I resolve the dilemma in my own mind for this decade.

Dr. Paula A. Tallal

Rutgers University and

Scientific Learning Corporation

I'll put on my final hat, which is that of a licensed clinical psychologist and psychotherapist, and resonate with what I felt throughout the meeting. The reasons people are licensed and certified to be teachers and doctors and allied health professionals go way beyond the "facts" of their profession. "Bedside manner" and clinical intuition are not things that are going to come across a computer screen; it is going to be really important to always remember that these people are not replaceable. I think that what we can do through technology is somewhat equivalent to what fluoride did for dentists. It didn't replace any of them, or put any of them out of business. It just made them more effective.

Any initiative that is going to be successful has to teach the tools of the trade. Books are a tremendous asset to education, but they are not very useful to pupils if the teacher doesn't know how to read. I'll emphasize one more time that we have to teach a person how to read before they can use a book, and we have to teach the teachers and clinicians the fundamentals of accessing and using new technologies, both hardware and software, before they will use anything that any of us develop in classrooms and clinics.

Mr. William Clancey

Institute for Research on Learning

For me at IRL, the issue isn't so much what industry won't fund, but the need for matching funds. When you throw in a senior anthropologist, a computer scientist, a programmer and a couple of research assistants for a year, that's a very expensive project, and industry wants it done in nine months. If we had a source for matching funds to stretch that out so we had time for reflection, writing it out and pursuing some of the points in our own disciplines, it would make a huge difference.

Dr. Anne C. Petersen

National Science Foundation

Thank you very much. This really has been a very rich day. Let me try to summarize some of the important messages from the discussion. There appears to be consensus that this is clearly an exciting and important area of research. There are ample opportunities in this broad area that we have labeled Learning and Intelligent Systems. We heard quite broad consensus that the links to education are central in many people's minds. Everybody is beginning to speak in terms of lifelong learning, which is certainly something that we talk about at NSF as well. We need to remember the importance, when we're talking about educating young people, of the tools being developmentally appropriate in order to be effective.

I heard some explicit expressions of interest in two kinds of partnering. There clearly are some areas of mutual interest, where we could collaborate with co-funding. But there is a second model of partnering that involves different roles. For example, NSF could contribute basic research for more applied development of technology. There also appears to be scientific interest in educational uses of technology; here NSF could participate in research on learning and other areas. The different entities represented here approach these quite differently. There could be tremendous advantages to playing different roles in a common effort.

I also want to note that many of the people who could not be here today--the Department of Education, several other companies and several other foundations--have said that they are very interested in participating, or at least talking further with us about what might happen. So I think the potential group for this work is even broader than that here today.

I want to emphasize that we feel strongly that any effort that we would engage in would have to add value over what would ordinarily happen. We hope that such an effort would facilitate a leap beyond where we are now, and we want to think that way about it in terms of both research and education. We in the government are, of course, acutely aware of the fact that funding is very tight; we're not going to be able to do everything, and we're going to have to develop a way of deciding what the most important priorities are.

One question I have is what you think would be the most useful mode for subsequent interaction. I could imagine many people preferring us to set up an alias on e-mail and communicating that way, but I would bet that won't suit everybody's participation. Do we also need further face-to-face meetings? Sending things around on paper? What do you think might be useful ways to communicate with you?

Comment: You can never communicate enough. You have to do all of the above.

Dr. Petersen: I didn't mention the combination, but we will do that.

Comment: Perhaps a preliminary, high-level, 6- to 12-month agenda for steps you'd like to see happen.

Dr. Petersen: Well, there are things that we are already proceeding with, such as the educational technology effort Paula Tallal mentioned. We can certainly communicate about that. Again, there could be things we're doing that others might join in on, but what we're hoping is that there might be things that we actually proceed with together, from the start. We are clearly thinking about something bigger in this area for our 1998 budget.

We plan to do things at a smaller, perhaps more experimental level, next year. Anything is possible, within the constraints of available money. We did not want to move our own agenda very far until we had a chance for this conversation; we saw this as key. If we found that nobody came, that would have been a clear signal that we should just step out alone, doing whatever we are going to do. That is clearly not the case, as all have expressed significant interest in this area.

Whatever we do will have to be interactive. We could think about a steering group and take turns hosting as we move this along. We'll try to lay out the issues and get your feedback on it as we move ahead. And please, if there are any afterthoughts--I'm sure there are a lot of things all of you wanted to say that you didn't have a chance to say--just let us know. My e-mail address is All of us here at NSF read our e-mail regularly, including Neal Lane. We really are very excited by your participation. We're hopeful that together we can do something quite interesting and we look forward to working with you on this. Thank you.

1. Wiener, N. 1948. Cybernetics. Cambridge, MA: MIT Press.

2. James, W. 1892/1984. Psychology: Briefer Course. Cambridge, MA: Harvard University Press.

3. Postman, N. 1985. Amusing Ourselves to Death. New York: Penguin Books.

4. Illich, I. 1971. Deschooling Society. New York: Harper and Row.

5. Simon, H. 1981/1996. The Sciences of the Artificial. Cambridge, MA: MIT Press.