June 27, 2011
Cooperative Robots That Learn = Less Work for Human Handlers
Researchers are developing a robot language so 'bots' can cooperate with each other
Learning a language can be difficult for some, but for babies it seems quite easy. With support from the National Science Foundation (NSF), linguist Jeffrey Heinz and mechanical engineer Bert Tanner have taken some cues from the way humans learn to put words and thoughts together, and are teaching language to robots.
This innovative collaboration began a few years ago at a meeting at the University of Delaware. The event, organized by the dean's office, brought together recently hired faculty in arts and sciences and in engineering; each gave a one-minute slide presentation about his or her research.
"That's how we became aware of what each other was doing," says Tanner. "We started discussing ideas about how we could collaborate, and the NSF project came as a result of that. Once we started seeing things aligning with each other and clicking together, we thought, 'Oh, maybe we really have something here that the world needs to know about.'"
One goal for this project is to design cooperative robots that can operate autonomously and communicate with each other in dangerous situations, such as in a fire or at a disaster site.
Robots at a building fire, for instance, could assess the situation and take the most appropriate action without a direct command from a human.
"We would like to make the robots adaptive--learn about their environment and reconfigure themselves based on the knowledge they acquire," explains Tanner.
He hopes one robot could follow another robot, watch what it was doing and infer it should be doing the same thing.
"The robots will be designed to do different tasks," says Heinz. "We have eyes that see, ears that hear and we have fingers that touch. We don't have a 'universal sensory organ.' Likewise in the robotics world, we're not going to design a universal robot that's going to be able to do anything and everything."
Each robot will play to its talents and strengths. The robots need to be aware of their own capabilities, those of the other robots around them and the overall goal of their mission.
"If the two robots are working together, then you can have one [ground robot] that can open the door and one [flying robot] that can fly through it," says Heinz. "In that sense, those two robots working together can do more than if they were working independently."
The researchers base their strategy for robot language on the structure of human languages.
Words and sentences in every language have complex rules and structures--do's and don'ts for what letters and sounds can be put in what order.
"So a robot's 'sentence,' in a sense, is just a sequence of actions that it is conducting," says Heinz. "And there will be constraints on the kinds of sequences of actions that a robot can do."
"For example," Heinz says, "in Latin, you can have Ls and Rs in words, but for the most part, you can't have two non-adjacent Ls unless you have an R in between them."
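The constraint Heinz describes can be stated precisely as a rule over letter sequences. A minimal sketch of such a checker, assuming the simplified rule exactly as quoted (the function name and the example words are illustrative, not taken from the researchers' work):

```python
def obeys_liquid_rule(word: str) -> bool:
    """Simplified version of the quoted Latin constraint:
    any two non-adjacent 'l's must have an 'r' between them."""
    l_positions = [i for i, ch in enumerate(word) if ch == "l"]
    for a, b in zip(l_positions, l_positions[1:]):
        # Adjacent 'll' is allowed; otherwise an 'r' must intervene.
        if b - a > 1 and "r" not in word[a + 1:b]:
            return False
    return True

print(obeys_liquid_rule("militaris"))  # True: only one 'l'
print(obeys_liquid_rule("militalis"))  # False: two 'l's, no 'r' between
print(obeys_liquid_rule("puella"))     # True: the two 'l's are adjacent
```

Constraints of this kind can be checked by scanning a word once, which is part of what makes them attractive as a model for robot behavior.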
A robot's language follows a similar pattern. If a robot can do three things--move, grasp an object in its claw and release that object--it can only do those things in a certain order. It cannot grasp an object twice in a row.
"If the robot has something in its claw, it cannot possibly hold another thing at the same time," says Tanner. "It has to lay it down before it picks up something else. We are trying to teach a robot these kinds of constraints, and this is where some of the techniques we find in linguistics come into play."
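One natural way to encode the claw constraint is as a small state machine: the robot is either holding something or it isn't, and each action is only legal in certain states. The sketch below is a toy illustration of that idea, not the researchers' actual system; the state names and action set are assumptions based on the three actions mentioned above:

```python
# Two-state model of the claw: "empty" or "holding".
# Each (state, action) pair maps to the resulting state;
# pairs absent from the table are forbidden actions.
TRANSITIONS = {
    ("empty", "move"): "empty",
    ("empty", "grasp"): "holding",
    ("holding", "move"): "holding",
    ("holding", "release"): "empty",
}

def is_valid_plan(actions):
    """Return True if the action sequence never violates a constraint."""
    state = "empty"
    for action in actions:
        key = (state, action)
        if key not in TRANSITIONS:
            return False  # e.g. grasping while already holding something
        state = TRANSITIONS[key]
    return True

print(is_valid_plan(["move", "grasp", "move", "release"]))  # True
print(is_valid_plan(["grasp", "grasp"]))                    # False
print(is_valid_plan(["release"]))                           # False: nothing held
```

Just as a phonotactic rule rejects ill-formed words, this table rejects ill-formed action sequences, which is the parallel Heinz and Tanner are exploiting.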
Heinz and Tanner want the robots to be able to answer questions about events happening around them. "Is the fire going to spread this way or that way? Is some unknown agent going to move this way or that way? These kinds of things," says Heinz.
And in designing solutions to the challenges in disaster situations, timing is crucial.
"There's always a tradeoff in how much you can compute, and how many solutions you can afford to lose," says Tanner. "For us, the priority is on being able to compute reasonable solutions quickly, rather than searching the whole possible space of solutions."
Heinz and Tanner's research is helping to prove that two heads--or in this case, two circuit boards--are better than one.
Any opinions, findings, conclusions or recommendations presented in this material are only those of the presenter grantee/researcher, author, or agency employee; and do not necessarily reflect the views of the National Science Foundation.