    Building Parameterized Action Representations From Observation

    Virtual worlds may be inhabited by intelligent agents who interact by performing various simple and complex actions. If the agents are human-like (embodied), their actions may be generated from motion capture or procedural animation. In this thesis, we introduce the CaPAR interactive system, which combines both approaches to generate agent-size-neutral representations of actions within a framework called Parameterized Action Representation (PAR). Just as a person may learn a new complex physical task by observing another person doing it, our system observes a single trial of a human performing some complex task that involves interaction with the self or with other objects in the environment, and automatically generates semantically rich information about the action. This information can be used to generate similar constrained motions for agents of different sizes. Human movement is captured by electromagnetic sensors. By computing motion zero-crossings and geometric spatial proximities, we isolate significant events, abstract both spatial and visual constraints from an agent's action, and segment a given complex action into several simpler subactions. We analyze each independently and build individual PARs for them. Several PARs can be combined into one complex PAR representing the original activity. Within each motion segment, semantic and style information is extracted. The style information is used to generate the same constrained motion in other, differently sized virtual agents by copying the end-effector velocity profile, by following a similar end-effector trajectory, or by scaling and mapping force interactions between the agent and an object. The semantic information is stored in a PAR. The extracted style and constraint information is stored in the corresponding agent and object models.
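The segmentation step described above can be sketched in code. This is a minimal illustration, not the CaPAR implementation: it assumes a 1-D position trace sampled at a fixed rate, estimates velocity by finite differences, and splits the trace wherever the velocity changes sign (a zero-crossing), which the abstract treats as a candidate subaction boundary.

```python
def zero_crossing_segments(positions, dt=1.0 / 60.0):
    """Split a motion trace into segments at velocity zero-crossings.

    Illustrative sketch only; assumes a scalar position per frame.
    """
    # Finite-difference velocity estimate between consecutive frames.
    velocities = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    boundaries = [0]
    for i in range(1, len(velocities)):
        # A sign change marks a zero-crossing: the tracked point paused
        # or reversed, a candidate boundary between simpler subactions.
        if velocities[i - 1] * velocities[i] < 0:
            boundaries.append(i)
    boundaries.append(len(positions) - 1)
    # Return (start, end) frame-index pairs, one per segment.
    return list(zip(boundaries, boundaries[1:]))
```

A trace that rises, falls, then rises again would yield three segments, one per monotone stretch; a real system would also merge very short segments and incorporate the spatial-proximity events the abstract mentions.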

    Animation Control for Real-Time Virtual Humans

    The computation speed and control methods needed to portray 3D virtual humans suitable for interactive applications have improved dramatically in recent years. Real-time virtual humans show increasingly complex features along the dimensions of appearance, function, time, autonomy, and individuality. The virtual human architecture we've been developing at the University of Pennsylvania is representative of an emerging generation of such architectures and includes low-level motor skills, a mid-level parallel automata controller, and a high-level conceptual representation for driving virtual humans through complex tasks. The architecture, called Jack, provides a level of abstraction generic enough to encompass natural-language instruction representation as well as direct links from those instructions to animation control.

    Posture Interpolation with Collision Avoidance

    While interpolating between successive postures of an articulated figure is not mathematically difficult, it is much more useful to provide postural transitions that are behaviorally reasonable and that avoid collisions with nearby objects. We describe such a posture interpolator, which begins with a number of pre-defined static postures. A finite state machine controls the transitions from any posture to a goal posture by finding the shortest path of required motion sequences between the two. If the motion between any two postures is not collision free, a collision avoidance strategy is invoked and the posture is changed to one that satisfies the required goal while respecting object and agent integrity.
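The shortest-path search over the posture graph can be illustrated with a breadth-first search. The posture names and transition table below are hypothetical stand-ins, not the actual states of the described interpolator; the point is only that with unweighted transitions, BFS returns the shortest sequence of intermediate postures.

```python
from collections import deque

# Hypothetical posture graph: each key lists the pre-defined postures
# reachable by a single direct transition (names are illustrative).
POSTURE_GRAPH = {
    "stand": ["crouch", "sit"],
    "crouch": ["stand", "kneel"],
    "kneel": ["crouch", "prone"],
    "sit": ["stand"],
    "prone": ["kneel"],
}

def shortest_transition(graph, start, goal):
    """Breadth-first search for the shortest posture sequence."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # goal posture unreachable from the start posture
```

In the described system, each edge of such a graph would additionally be checked for collisions, with an avoidance strategy substituting an alternative posture when a transition is blocked.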

    A Parameterized Action Representation for Virtual Human Agents

    We describe a Parameterized Action Representation (PAR) designed to bridge the gap between natural language instructions and the virtual agents who are to carry them out. The PAR is therefore constructed based jointly on the implemented motion capabilities of virtual human figures and the linguistic requirements of instruction interpretation. We illustrate PAR and a real-time execution architecture controlling 3D animated virtual human avatars.
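A PAR-like record can be sketched as a small data structure. The field names below are assumptions drawn from what the abstracts describe (an agent, participating objects, and the decomposition of a complex action into subactions), not the actual PAR schema.

```python
from dataclasses import dataclass, field

@dataclass
class PAR:
    """Illustrative PAR-like record; fields are assumed, not the real schema."""
    name: str                 # action name, e.g. from a parsed instruction
    agent: str                # the virtual human who performs the action
    objects: list = field(default_factory=list)      # objects interacted with
    preconditions: list = field(default_factory=list)
    subactions: list = field(default_factory=list)   # ordered child PARs

    def is_primitive(self):
        # A PAR with no subactions maps directly to a motor skill;
        # a complex PAR expands into its ordered children.
        return not self.subactions
```

An execution architecture would walk such a tree, dispatching primitive PARs to motion generators and expanding complex ones into their subactions.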

    Real Time Virtual Humans

    The last few years have seen great maturation in the computation speed and control methods needed to portray 3D virtual humans suitable for real interactive applications. Various dimensions of real-time virtual humans are considered, such as appearance and movement, autonomous action, and skills such as gesture, attention, and locomotion. A virtual human architecture includes low-level motor skills, a mid-level PaT-Net parallel finite-state-machine controller, and a high-level conceptual action representation that can be used to drive virtual humans through complex tasks. This structure offers a deep connection between natural language instructions and animation control.
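The mid-level parallel finite-state-machine idea can be sketched as several independent automata advanced in lockstep, one step per simulation tick. This is an illustrative sketch of the concept only; the state names and transition tables are invented, not taken from PaT-Net or Jack.

```python
class StateMachine:
    """One automaton controlling a single skill (gaze, gait, gesture, ...)."""

    def __init__(self, name, transitions, start):
        self.name = name
        self.transitions = transitions  # {current_state: next_state}
        self.state = start

    def step(self):
        # Stay in the current state when no transition is defined.
        self.state = self.transitions.get(self.state, self.state)

def tick(machines):
    """Advance every machine one step, simulating parallel automata."""
    for m in machines:
        m.step()
    return {m.name: m.state for m in machines}
```

Running one `tick` per animation frame lets separate skills such as attention and locomotion evolve concurrently while a higher layer decides which machines are active.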

    Simulated Casualties and Medics for Emergency Training

    The MediSim system extends virtual environment technology to allow medical personnel to interact with and train on simulated casualties. The casualty model employs a three-dimensional animated human body that displays appropriate physical and behavioral responses to injury and/or treatment. Medical corpsman behaviors were developed to allow the actions of simulated medical personnel to conform to both military practice and medical protocols during patient assessment and stabilization. A trainee may initiate medic actions through a mouse and menu interface; a VR interface has also been created by Stansfield's research group at Sandia National Labs.

    Parameterized Action Representation and Natural Language Instructions for Dynamic Behavior Modification of Embodied Agents

    We introduce a prototype for building a strategy game. A player can control and modify the behavior of all the characters in the game, and introduce new strategies, through the powerful medium of natural language instructions. We describe a Parameterized Action Representation (PAR) designed to bridge the gap between natural language instructions and the virtual agents who are to carry them out. We illustrate PAR through an interactive demonstration of a multi-agent strategy game.