10 research outputs found

    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Get PDF
    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combinations of different animation paradigms to enhance both naturalness and control.
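    The per-body-part combination of motions mentioned in this abstract can be sketched as a per-joint weighted blend. This is an illustrative simplification, not the paper's actual mechanism: the function name and the linear blend on joint values are assumptions, and production systems typically interpolate rotations with quaternions rather than blending raw values.

```python
import numpy as np

def blend_motions(motion_a, motion_b, joint_weights):
    """Combine two motions on different body parts via per-joint
    weights: 0 keeps motion_a, 1 takes motion_b, values in between mix.
    Motions are (frames, joints) arrays of joint values."""
    w = np.asarray(joint_weights)
    return (1.0 - w) * motion_a + w * motion_b

# Toy example: take the first joint from motion A, the second from B.
a = np.array([[0.0, 0.0], [0.0, 0.0]])
b = np.array([[1.0, 1.0], [1.0, 1.0]])
blended = blend_motions(a, b, [0.0, 1.0])
```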

    Robust on-line adaptive footplant detection and enforcement for locomotion

    Get PDF
    A common problem in virtual character computer animation concerns the preservation of the basic foot-floor constraint (or footplant), which consists of detecting it and then enforcing it. This paper describes a system capable of generating motion while continuously preserving the footplants in a real-time, dynamically evolving context. This system introduces a constraint detection method that improves on classical techniques by adaptively selecting threshold values according to motion type and quality. The footplants are then enforced using a numerical inverse kinematics solver. As opposed to previous approaches, we define the footplant by attaching to it two effectors whose positions at the beginning of the constraint can be modified, for example in order to place the foot on the ground. However, the corrected posture at the beginning of the constraint is needed before it starts, to ensure smoothness between the unconstrained and constrained states. We therefore present a new approach based on motion anticipation, which computes animation postures in advance, according to time-evolving motion parameters such as locomotion speed and type. We illustrate our on-line approach with continuously modified locomotion patterns, and demonstrate its ability to correct motion artifacts such as foot sliding, to change the constraint position, and to modify a straight walk into a curved one.
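    The adaptive threshold selection described above can be sketched as follows. This is a minimal illustration, not the paper's actual detector: the function name, the percentile heuristic, and the height/speed features are assumptions; the point is that thresholds are derived from the motion itself rather than fixed constants.

```python
import numpy as np

def detect_footplants(heights, speeds, height_pct=20, speed_pct=20):
    """Mark frames where the foot is planted. Thresholds are chosen
    adaptively, as percentiles of the per-frame foot height and speed,
    so the detector adjusts to motion type and quality."""
    h_thresh = np.percentile(heights, height_pct)
    s_thresh = np.percentile(speeds, speed_pct)
    return (heights <= h_thresh) & (speeds <= s_thresh)

# Toy data: the foot dips and slows mid-sequence (a plant).
heights = np.array([0.30, 0.10, 0.02, 0.01, 0.02, 0.15, 0.30])
speeds  = np.array([1.20, 0.60, 0.05, 0.02, 0.04, 0.70, 1.10])
plant = detect_footplants(heights, speeds)
```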

    Animating jellyfish through numerical simulation and symmetry exploitation

    Get PDF
    This thesis presents an automatic animation system for jellyfish that is based on a physical simulation of the organism and its surrounding fluid. Our goal is to explore the unusual style of locomotion, namely jet propulsion, which is utilized by jellyfish. The organism achieves this propulsion by contracting its body, expelling water, and propelling itself forward. The organism then expands again to refill itself with water for a subsequent stroke. We endeavor to model the thrust achieved by the jellyfish, and also the evolution of the organism's geometric configuration. We restrict our discussion of locomotion to fully grown adult jellyfish, and we restrict our study of locomotion to the resonant gait, which is the organism's most active mode of locomotion, and is characterized by a regular contraction rate that is near one of the creature's resonant frequencies. We also consider only species that are axially symmetric, and thus are able to reduce the dimensionality of our model. We can approximate the full 3D geometry of a jellyfish by simulating a 2D slice of the organism. This model reduction yields plausible results at a lower computational cost. From the 2D simulation, we extrapolate to a full 3D model. To prevent our extrapolated model from being artificially smooth, we give the final shape more variation by adding noise to the 3D geometry. This noise is inspired by empirical data of real jellyfish, and also by work with continuous noise functions from the graphics community. Our 2D simulations are done numerically with ideas from the field of computational fluid dynamics. Specifically, we simulate the elastic volume of the jellyfish with a spring-mass system, and we simulate the surrounding fluid using the semi-Lagrangian method. To couple the particle-based elastic representation with the grid-based fluid representation, we use the immersed boundary method. 
We find this combination of methods to be a very efficient means of simulating the 2D slice with a minimal compromise in physical accuracy.
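    The spring-mass representation of the elastic jellyfish body can be sketched with a single spring joining two point masses, the basic building block of such a system. This is an illustrative sketch only (the function name and symplectic-Euler integrator are assumptions, and the thesis couples this to a semi-Lagrangian fluid via the immersed boundary method, which is not shown here).

```python
import numpy as np

def spring_mass_step(pos, vel, rest_len, k, mass, dt):
    """One symplectic-Euler step for a single spring joining two 2D
    point masses. pos, vel: (2, 2) arrays of positions and velocities."""
    d = pos[1] - pos[0]
    length = np.linalg.norm(d)
    direction = d / length
    # Hooke's law: the force pulls the endpoints toward the rest length.
    f = k * (length - rest_len) * direction
    acc = np.stack([f, -f]) / mass
    vel = vel + dt * acc          # update velocity first (symplectic)
    pos = pos + dt * vel
    return pos, vel

pos = np.array([[0.0, 0.0], [2.0, 0.0]])   # stretched: rest length is 1
vel = np.zeros((2, 2))
pos, vel = spring_mass_step(pos, vel, rest_len=1.0, k=10.0, mass=1.0, dt=0.01)
```

After one step the endpoints have moved toward each other, as the stretched spring contracts.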

    Topology-based character motion synthesis

    Get PDF
    This thesis tackles the problem of automatically synthesizing motions of close-character interactions which appear in animations of wrestling and dancing. Designing such motions is a daunting task even for experienced animators, as the close contacts between the characters can easily result in collisions or penetrations of the body segments. The main problem lies in the conventional representation of the character states, which is based on joint angles or joint positions. As the relationships between the body segments are not encoded in such a representation, path-planning for valid motions to switch from one posture to another requires intense random sampling and collision detection in the state-space. In order to tackle this problem, we represent the state of the characters using their spatial relationships. Describing the scene using spatial relationships makes it easier for users and animators to analyze the scene and to synthesize close interactions of characters. We first propose a method to encode the relationship of the body segments by using the Gauss Linking Integral (GLI), a value that specifies how much the body segments are wound around each other. We present how it can be applied for content-based retrieval of motion data of close interactions, and also for synthesis of close character interactions. Next, we propose a representation called Interaction Mesh, a volumetric mesh composed of points located at the joint positions of the characters and vertices of the environment. This raw representation is more general than the tangle-based representation, as it can describe interactions that involve neither tangling nor contact. We describe how it can be applied for motion editing and retargeting of close character interactions while avoiding penetrations and pass-throughs of the body segments.
The application of our research is not limited to computer animation: it also extends to robotics, where enabling robots to conduct complex tasks such as tangling, wrapping, holding, and knotting is essential if they are to assist humans in daily life.
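    The Gauss Linking Integral that this abstract builds on can be approximated numerically for a pair of polylines by summing the Gauss integrand over all segment pairs. The formula is the standard GLI; the function name and the midpoint-rule discretization are illustrative choices, not the thesis' exact implementation.

```python
import numpy as np

def gauss_linking_integral(curve_a, curve_b):
    """Approximate the Gauss Linking Integral between two disjoint
    polylines (arrays of shape (N, 3)), measuring how much they wind
    around each other. Midpoint rule over all segment pairs:
        GLI = (1/4pi) * sum (da x db) . (ra - rb) / |ra - rb|^3"""
    da = np.diff(curve_a, axis=0)                     # segment vectors
    db = np.diff(curve_b, axis=0)
    mid_a = 0.5 * (curve_a[:-1] + curve_a[1:])        # segment midpoints
    mid_b = 0.5 * (curve_b[:-1] + curve_b[1:])
    r = mid_a[:, None, :] - mid_b[None, :, :]         # (Na, Nb, 3)
    cross = np.cross(da[:, None, :], db[None, :, :])  # (Na, Nb, 3)
    dist = np.linalg.norm(r, axis=2)
    return np.sum(np.sum(cross * r, axis=2) / dist**3) / (4.0 * np.pi)

# Two linked circles (a Hopf link) have linking number +/-1.
t = np.linspace(0.0, 2.0 * np.pi, 201)
circle_a = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
circle_b = np.stack([1.0 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
gli = gauss_linking_integral(circle_a, circle_b)
```

For open curves such as body segments the value is fractional rather than an integer, which is what makes it useful as a continuous "tangledness" descriptor.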

    Motion capture based motion analysis and motion synthesis for human-like character animation.

    Get PDF
    Motion capture technology is recognised as a standard tool in the computer animation pipeline. It provides detailed movement for animators; however, it also introduces problems and raises concerns for creating realistic and convincing motion for character animation. In this thesis, post-processing techniques that result in realistic motion generation are investigated. A number of techniques are introduced that are able to improve the quality of motion generated from motion capture data, especially when integrating motion transitions from different motion clips. The presented motion data reconstruction technique is able to build convincing, realistic transitions from an existing motion database, and to overcome the inconsistencies introduced by traditional motion blending techniques. It also provides a method for animators to re-use motion data more efficiently. Alongside the development of motion data transition reconstruction, a motion capture data mapping technique was investigated for skeletal movement estimation. The per-frame based method provides animators with a real-time and accurate solution for a key post-processing technique. Although motion capture systems capture physically-based motion for character animation, no physical information is included in the motion capture data file. Using knowledge from biomechanics and robotics, the relevant information for the captured performer can be abstracted and a mathematical-physical model constructed; such information is then applied for physics-based motion data correction whenever the motion data is edited.

    Mesh modification using deformation gradients

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 117-131). Computer-generated character animation, where human or anthropomorphic characters are animated to tell a story, holds tremendous potential to enrich education, human communication, perception, and entertainment. However, current animation procedures rely on a time-consuming and difficult process that requires both artistic talent and technical expertise. Despite the tremendous amount of artistry, skill, and time dedicated to the animation process, there are few techniques to help with reuse. Although individual aspects of animation are well explored, there is little work that extends beyond the boundaries of any one area. As a consequence, the same procedure must be followed for each new character, without the opportunity to generalize or reuse technical components. This dissertation describes techniques that ease the animation process by offering opportunities for reuse and a more intuitive animation formulation. A differential specification of arbitrary deformation provides a general representation for adapting deformation to different shapes, computing semantic correspondence between two shapes, and extrapolating natural deformation from a finite set of examples. Deformation transfer adds a general-purpose reuse mechanism to the animation pipeline by transferring any deformation of a source triangle mesh onto a different target mesh. The transfer system uses a correspondence algorithm to build a discrete many-to-many mapping between the source and target triangles that permits transfer between meshes of different topology.
Results demonstrate retargeting both kinematic poses and non-rigid deformations, as well as transfer between characters of different topological and anatomical structure. Mesh-based inverse kinematics extends the idea of traditional skeleton-based inverse kinematics to meshes by allowing the user to pose a mesh via direct manipulation. The user indicates the class of meaningful deformations by supplying examples that can be created automatically with deformation transfer, sculpted, scanned, or produced by any other means. This technique is distinguished from traditional animation methods since it avoids the expensive character setup stage. It is distinguished from existing mesh editing algorithms since the user retains the freedom to specify the class of meaningful deformations. Results demonstrate an intuitive interface for posing meshes that requires only a small amount of user effort. By Robert Walker Sumner, Ph.D.
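    The deformation gradients in this dissertation's title are per-triangle 3x3 matrices relating an undeformed triangle to its deformed counterpart; a fourth vertex is constructed along the scaled normal so the gradient is well defined for a surface triangle. The following is an illustrative sketch of that construction (the function name is an assumption, and the full method additionally solves a global system to stitch per-triangle gradients into a mesh, which is not shown).

```python
import numpy as np

def deformation_gradient(tri, tri_def):
    """3x3 deformation gradient mapping triangle `tri` to `tri_def`.
    A fourth vertex offset along n / sqrt(|n|) (n = edge cross product)
    makes the local frame invertible for a surface triangle."""
    def frame(v):
        e1, e2 = v[1] - v[0], v[2] - v[0]
        n = np.cross(e1, e2)
        n = n / np.sqrt(np.linalg.norm(n))   # scaled-normal fourth edge
        return np.column_stack([e1, e2, n])
    return frame(tri_def) @ np.linalg.inv(frame(tri))

# A uniform doubling of the triangle yields the gradient 2 * identity.
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
Q = deformation_gradient(tri, 2.0 * tri)
```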

    Interactive techniques for motion deformation of articulated figures using prioritized constraints

    Get PDF
    Convincingly animating virtual humans has become of great interest in many fields in recent years. In computer games, for example, virtual humans are often the main characters. Failing to animate them realistically may wreck all previous efforts made to give the player a feeling of immersion. At the same time, computer-generated movies have become very popular and have thus increased the demand for animation realism. Indeed, virtual humans are now the new stars in movies like Final Fantasy or Shrek, and are even used for special effects in movies like Matrix. In this context, virtual human animations not only need to be realistic, as for computer games, but also need to be as expressive as real actors. While creating animations from scratch is still widespread, it demands artistic skills and hours if not days to produce a few seconds of animation. For these reasons, there has been growing interest in motion capture: instead of creating a motion, the idea is to reproduce the movements of a live performer. However, motion capture is not perfect and still needs improvements. Indeed, the motion capture process involves complex techniques and equipment. This often results in noisy animations which must be edited. Moreover, it is hard to exactly foresee the final motion. For example, it often happens that the director of a movie decides to change the script, and the animators then have to change part or all of the animation. The aim of this thesis is therefore to provide animators with interactive tools helping them to easily and rapidly modify preexisting animations. We first present our Inverse Kinematics solver, used to enforce kinematic constraints at each instant of an animation. Afterward, we propose a motion deformation framework offering the user a way to specify prioritized constraints and to edit an initial animation so that it may be used in a new context (characters, environment, etc.).
Finally, we introduce a semi-automatic algorithm to extract important motion features from motion capture animations, which may serve as a first guess for animators when specifying the important characteristics an initial animation should respect.
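    The prioritized constraints this thesis revolves around can be sketched with the classic nullspace projection used in task-priority inverse kinematics: the secondary task is solved in the nullspace of the primary one, so it can never violate the higher-priority constraint. This is a minimal two-priority sketch; the function name and the undamped pseudoinverse are illustrative, not the thesis' exact solver.

```python
import numpy as np

def prioritized_ik_step(J1, e1, J2, e2):
    """One step of two-priority inverse kinematics.
    J1, J2: task Jacobians; e1, e2: task-space errors.
    Priority 1 is satisfied exactly (when feasible); priority 2 is
    satisfied as well as possible within the remaining freedom."""
    J1p = np.linalg.pinv(J1)
    dq1 = J1p @ e1                           # primary task
    N1 = np.eye(J1.shape[1]) - J1p @ J1      # nullspace projector
    dq2 = np.linalg.pinv(J2 @ N1) @ (e2 - J2 @ dq1)
    return dq1 + N1 @ dq2                    # secondary task, projected

# Toy case: orthogonal 1D tasks on a 3-DOF chain; both are satisfied.
J1 = np.array([[1.0, 0.0, 0.0]])
J2 = np.array([[0.0, 1.0, 0.0]])
dq = prioritized_ik_step(J1, np.array([1.0]), J2, np.array([2.0]))
```

When the tasks conflict, the same code simply sacrifices the secondary task, which is the point of prioritization.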

    Physics-based character locomotion control with large simulation time steps.

    Get PDF
    Physically simulated locomotion allows rich and varied interactions with environments and other characters. However, control is difficult due to factors such as a typical character's numerous degrees of freedom and small stability region, discontinuous ground contacts, and indirect control over the centre of mass. Previous academic work has made significant progress in addressing these problems, but typically uses simulation time steps much smaller than those suitable for games. This project deals with developing control strategies using larger time steps. After describing some introductory work showing the difficulties of implementing a handcrafted controller with large physics time steps, three major areas of work are discussed. The first area uses trajectory optimization to minimally alter reference motions to ensure physical validity, in order to improve simulated tracking. The approach builds on previous work which allows ground contacts to be modified as part of the optimization process, extending it to 3D problems. Incorporating contacts introduces difficult complementarity constraints, and an exact penalty method is shown here to improve solver robustness and performance compared to previous relaxation methods. Trajectory optimization is also used to modify reference motions to alter characteristics such as timing, stride length and heading direction, whilst maintaining physical validity, and to generate short transitions between existing motions. The second area uses a sampling-based approach, previously demonstrated with small time steps, to formulate open loop control policies which reproduce reference motions. As a prerequisite, the reproducibility of simulation output from a common game physics engine, PhysX, is examined and conditions leading to highly reproducible behaviour are determined.
For large time steps, sampling is shown to be susceptible to physical invalidities in the reference motion but, using physically optimized motions, is successfully applied at 60 time steps per second. Finally, adaptations to an existing method using evolutionary algorithms to learn feedback policies are described. With large time steps, it is found to be necessary to use a dense feedback formulation and to introduce phase-dependence in order to obtain a successful controller, which is able to recover from impulses of several hundred Newtons applied for 0.1 s. Additionally, it is shown that a recent machine learning approach based on support vector machines can identify whether disturbed character states will lead to failure, with high accuracy (99%) and with prediction times in the order of microseconds. Together, the trajectory optimization, open loop control, and feedback developments allow successful control of a walking motion at 60 time steps per second, with a combined control and simulation time of 0.62 ms per time step. This means the approach could plausibly be used within the demanding performance constraints of games. Furthermore, the availability of rapid failure prediction for the controller will allow more high-level control strategies to be explored in future.

    On-line locomotion synthesis for virtual humans

    Get PDF
    Ever since the development of Computer Graphics in the industrial and academic worlds in the seventies, public knowledge and expertise have grown tremendously, notably because of the increasing fascination with Computer Animation. This specific field of Computer Graphics gathers numerous techniques, especially for the animation of characters or virtual humans in movies and video games. To create such high-fidelity animations, particular interest has been dedicated to motion capture, a technology which records the 3D movement of a live performer. The realism of the resulting motion is convincing. However, this technique offers little control to animators, as the recorded motion can only be played back. Recently, many advances based on motion capture have been published, concerning slight but precise modifications of an original motion or the parameterization of large motion databases. The challenge consists in combining motion realism with intuitive on-line motion control, while preserving real-time performance. In the first part of this thesis, we add a brick to the wall of motion parameterization techniques based on motion capture by introducing a generic motion model for locomotion and jump activities. For this purpose, we simplify the motion representation using a statistical method in order to facilitate the elaboration of an efficient parametric model. This model is structured in hierarchical levels, allowing intuitive motion synthesis with high-level parameters. In addition, we present a space and time normalization process to adapt our model to characters of various sizes. In the second part, we integrate this motion model into an animation engine, thus allowing the generation of a continuous stream of motion for virtual humans. We provide two additional tools to improve the flexibility of our engine.
Based on the concept of motion anticipation, we first introduce an on-line method for detecting and enforcing foot-ground constraints. Hence, a straight-line walking motion can be smoothly modified into a curved one. Secondly, we propose an approach for the automatic and coherent synthesis of transitions from locomotion to jump motions (and inversely), by taking into account their respective properties. Finally, we consider the interaction of a virtual human with its environment. Given initial and final conditions on the locomotion speed and foot positions, we propose a method which computes the corresponding trajectory. To illustrate this method, we present a case study which mirrors as closely as possible the behavior of a human confronted with an obstacle: at any time, obstacles may be interactively created in front of a moving virtual human. Our method computes a trajectory allowing the virtual human to precisely jump over the obstacle in an on-line manner.

    Physical Touch-Up of Human Motions

    No full text
    Many popular motion editing methods do not take physical principles into account, potentially producing implausible motions. This paper introduces an efficient method for touching up edited motions to improve their physical plausibility. We start by estimating a mass distribution consistent with reference motions known to be physically correct. The edited motion is then divided into ground and flight stages and adjusted to enforce the appropriate physical laws: zero moment point (ZMP) constraints and correct ballistic trajectories, respectively. Unlike previous methods, we do not solve a nonlinear optimization to calculate the adjustment. Instead, closed-form methods are used to construct a hierarchical displacement map which sequentially refines user-specified degrees of freedom at different scales. This is combined with standard methods for kinematic constraint enforcement, yielding an efficient and scalable editing method that allows users to model real human behaviors. The potential of our approach is demonstrated in a number of examples.
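    The flight-stage correction can be sketched in closed form, echoing the paper's avoidance of nonlinear optimization. This is an illustrative 1D sketch only: the function name and the endpoint-matching choice are assumptions, and the paper operates on the full centre-of-mass trajectory rather than a single height channel.

```python
import numpy as np

def ballistic_fit(com_height, dt, g=-9.81):
    """Project an edited flight-stage height trajectory onto the exact
    ballistic arc with the same endpoints. Gravity fixes the quadratic
    term; the endpoints fix the initial height and velocity, so no
    iterative optimization is needed."""
    n = len(com_height)
    t = np.arange(n) * dt
    T = t[-1]
    # h(t) = h0 + v0*t + 0.5*g*t^2, with h(0) and h(T) matching input.
    v0 = (com_height[-1] - com_height[0] - 0.5 * g * T**2) / T
    return com_height[0] + v0 * t + 0.5 * g * t**2

com = np.array([1.0, 1.3, 1.45, 1.3, 1.0])  # edited, not quite ballistic
fixed = ballistic_fit(com, dt=0.1)
```

After the fit, the trajectory's second differences all equal g * dt^2, i.e. the motion obeys constant gravitational acceleration while keeping its take-off and landing heights.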