CAREER: Understanding the Complexities of Animated Content
As immersive graphics environments become viable tools for training and education, we must understand animated content well enough to give naive content creators the ability to create compelling synthetic scenes. This understanding requires that we answer the following questions: How do we create characters that perform natural motions based on high-level direction? How do we direct synthetic characters at a high level? How do we structure interactive spaces for spatial and temporal interaction between humans and synthetic characters?
To answer these questions, we will conduct research in three areas: motion capture re-sequencing, character interfaces, and character programming. We will restrict ourselves to a particular domain and build an interactive space in which to study these questions. We have chosen quarterback training for the sport of American football as our problem domain for the following reasons.
Quarterback training requires compelling 3D content in order to produce convincing training situations.
Quarterback training is a physical task and involves spatial and temporal interaction between the coaches and players (where players include the real quarterback and the synthetic characters).
Game preparation requires time-critical training content creation.
Quarterback training systems must be usable by domain experts (coaches and players) who may be naive computer users.
We have access to domain experts in quarterback training through the Oregon State University football program.
Our research will result in algorithms for generating natural motion from motion capture data based on high-level input, guidelines for classifying character interaction techniques, and methods for directing characters within the interactive space. We plan to evaluate our research through quantitative measures and usability studies that determine both the quality of the generated content and the ability of naive content creators to create and interact with that content within our interactive space framework.
PUBLICATIONS PRODUCED AS A RESULT OF THIS RESEARCH
Metoyer, R.A., Xu, L., and Srinivasan, M. "A Tangible Interface for High-Level Direction of Multiple Animated Characters," Proceedings of Graphics Interface 2003, Halifax, Canada, 2003, p. 167.
Srinivasan, M. and Metoyer, R. "Controllable Real-Time Locomotion Using Mobility Maps," Proceedings of Graphics Interface 2005, 2005, p. 51.
Terra, S.C. and Metoyer, R.A. "Performance Timing for Keyframe Animation," Proceedings of the Symposium on Computer Animation, 2004, p. 253.
Dagit, J., Lawrance, J., Neumann, C., Burnett, M., Metoyer, R., and Adams, S. "Using Cognitive Dimensions: Advice from the Trenches," Journal of Visual Languages and Computing, v.17, 2006, p. 302.
Casburn, L., Srinivasan, M., Metoyer, R., and Quinn, M. "A Data-Driven Model of Pedestrian Movement," Proceedings of the 3rd International Conference on Pedestrian and Evacuation Dynamics, 2005, p. 189.
Metoyer, R., Zordan, V., Hermens, B., Wu, C.-C., and Soriano, M. "Psychologically Inspired Anticipation and Dynamic Response for Impacts to the Head and Upper Body," IEEE Transactions on Visualization and Computer Graphics, v.14, 2008, p. 173.
Zordan, V.B., Macchietto, A., Medina, J., Soriano, M., Wu, C.-C., Metoyer, R., and Rose, R. "Anticipation from Examples," Proceedings of ACM Virtual Reality Software and Technology, 2007.
Neumann, C., Metoyer, R., and Burnett, M. "End-User Strategy Programming," Journal of Visual Languages and Computing, v.20, 2009.