
News Release 09-101

The Next Best Thing to You

New avatar technology combines advances in artificial intelligence and computer image rendering

A still image of a Project LifeLike avatar conversing with a person.

May 15, 2009

View a video interview with Project LifeLike leaders Avelino Gonzalez and Jason Leigh.

This material is available primarily for archival purposes. Telephone numbers or other contact information may be out of date; please see the media contacts page for current contact information.

Have you ever wished you could be in two places at once? Perhaps you've had the desire to create a copy of yourself that could stand in for you at a meeting, freeing you up to work on more pressing matters. Thanks to a research project called LifeLike, that fantasy might be a little closer to reality.

Project LifeLike is a collaboration between the Intelligent Systems Laboratory (ISL) at the University of Central Florida (UCF) and the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC) that aims to create visualizations of people, or avatars, that are as realistic as possible. While their current results are far from perfect replications of a specific person, their work has advanced the field and opened up a host of possible applications in the not-too-distant future.

The EVL team, headed by Jason Leigh, an associate professor of computer science, is tasked with getting the visual aspects of the avatar just right. On the surface, this seems like a pretty straightforward task--anyone who has played a video game that features characters from movies or professional athletes is used to computer-generated images that look like real people.

But according to Leigh, it takes more than a good visual rendering to make an avatar truly seem like a human being. "Visual realism is tough," Leigh said in a recent interview. "Research shows that over 70% of communication is non-verbal," he said; that communication depends on subtle gestures, variations in a person's voice and other cues.

To get these non-verbal aspects right, the EVL team has to take precise 3-D measurements of the person that Project LifeLike seeks to copy, capturing the way their face moves and other body language so the program can replicate those fine details later.

The ISL team, headed by electrical engineering professor Avelino Gonzalez, focuses on applying artificial intelligence capabilities to the avatars. This includes technologies that allow computers to recognize and correctly understand natural language as it is spoken, as well as automated knowledge update and refinement, a process that allows the computer to 'learn' from the information and data it receives and apply it independently. The end goal, Gonzalez says, is for a person conversing with the avatar to have the same level of comfort and interaction that they would have with an actual person. Gonzalez sees the aims of Project LifeLike as fundamental to the field of artificial intelligence.

"We have applied artificial intelligence in many ways, but if you're really going to implement it," Gonzalez said in a recent interview, "the only way to do it is to do it though some sort of embodiment of a human, and that's an avatar."

The Project LifeLike team demonstrated the technology this past winter at NSF's headquarters in Arlington, Va. The team gathered motion and visual information on an NSF staff member and gave the avatar system information about an upcoming NSF proposal solicitation. Visitors were able to sit and talk with the avatar, which could converse with the speaker and answer questions about the solicitation. Colleagues of the NSF staffer instantly recognized who the avatar represented and commented that it captured some of the person's mannerisms.

Gonzalez and Leigh believe this is just one possible application for this field. In the future, they believe, it may be possible for school children to interact with avatars of historical figures, or for job seekers to hone their interview skills by practicing with an avatar. While the technology may not be able to fill in for us yet, both researchers agree that in the coming decades, many of the 'people' we interact with won't actually be people at all.

-NSF-

Media Contacts
Dana W. Cruikshank, NSF, (703) 292-7738, email: dcruiksh@nsf.gov

