17 research outputs found

    Interactive motion deformation with prioritized constraints

    Motion Modeling: Can We Get Rid of Motion Capture?

    For situations like crowd simulation, serious games, and VR-based training, flexible and spontaneous movements are extremely important. Motion models would be the best strategy to adopt, but unfortunately they are very costly to develop and the results are often disappointing, so motion capture remains the most popular approach. The most promising motion models appear to be data-driven: motion retargeting and PCA-based models are widely used, but they still rely strongly on motion capture. In this paper, we analyze the situation and illustrate it with a few case studies.

    Interactive Low-Dimensional Human Motion Synthesis by Combining Motion Models and PIK

    This paper explores the issue of interactive low-dimensional human motion synthesis. We compare the performance of two motion models, Principal Component Analysis (PCA) and Probabilistic PCA (PPCA), for solving a constrained optimization problem within a low-dimensional latent space. PCA or PPCA is used as a preprocessing step to reduce the dimensionality of the database, making it tractable and encapsulating only the essential aspects of a specific motion pattern. Interactive user control is provided by formulating a low-dimensional optimization framework that uses a Prioritized Inverse Kinematics (PIK) strategy. The key insight of PIK is that the user can adjust a motion by adding constraints with different priorities. We demonstrate the robustness of our approach by synthesizing various styles of golf swing. This movement is challenging because it is highly coordinated and requires great precision at high speed, so any artifact is clearly noticeable in the resulting motion. We also present results comparing local and global motion models with respect to synthesis realism and performance. Finally, the quality of the synthesized animations is assessed by comparing our results against a per-frame PIK technique.
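
    The general idea of optimizing a pose in a PCA latent space rather than over all joint angles can be illustrated with a minimal sketch. The data, the toy constraint, and the plain gradient descent below are hypothetical stand-ins; the paper itself solves the constrained problem with Prioritized Inverse Kinematics, which is not reproduced here.

# Minimal sketch of PCA-based low-dimensional pose synthesis (toy data and
# toy constraint; the paper's actual solver is Prioritized IK, not shown).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical motion database: 200 frames, each a 60-dimensional joint-angle vector.
X = rng.normal(size=(200, 60))

# PCA via SVD: keep the first k principal components as the latent basis.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 8
components = Vt[:k]                      # shape (k, 60)

def decode(z):
    """Map a latent vector z (k,) back to a full joint-angle pose (60,)."""
    return mean + z @ components

def constraint_error(pose, target):
    """Stand-in for an end-effector constraint: match the first three joints."""
    return np.sum((pose[:3] - target) ** 2)

# Optimize in the k-dimensional latent space instead of over all 60 angles.
target = np.array([0.5, -0.2, 0.1])
z = np.zeros(k)
eps, lr = 1e-4, 0.1
for _ in range(200):
    grad = np.zeros(k)
    for i in range(k):                   # numerical gradient w.r.t. latent coords
        dz = np.zeros(k); dz[i] = eps
        grad[i] = (constraint_error(decode(z + dz), target)
                   - constraint_error(decode(z - dz), target)) / (2 * eps)
    z -= lr * grad

print("residual constraint error:", constraint_error(decode(z), target))

    The point of the sketch is that the search space shrinks from 60 variables to k, and every decoded pose stays within the span of the training motions.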

    Simulation of Individual Spontaneous Reactive Behavior

    The context of this work is the search for realism and believability of Virtual Humans. Our contribution toward this goal is to enable Virtual Humans (VHs) to react to spontaneous events in virtual environments (VEs). To reflect the individuality of each VH, these reactions have to be expressive and unique. In this paper we first present a model of reaction based on personality traits, defined using a statistical analysis of real people reacting to unexpected events. Since the emotional state is also involved in modulating reactions, we integrate a model of emotion update. Second, we present a semantic-based methodology to compose reactive animation sequences using inverse kinematics (IK) and keyframe (KF) interpolation animation techniques. Finally, we present an application that demonstrates how Virtual Humans produce different movements in reaction to unexpected stimuli, depending on their personality traits and emotional state.
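
    A trait-modulated reaction pipeline of this kind can be sketched very roughly as follows. The trait names, weights, and gesture labels below are hypothetical illustrations only; the paper's statistically derived model and its IK/keyframe composition are not reproduced here.

# Hypothetical sketch: traits and an updated emotional state modulate a reaction.
from dataclasses import dataclass

@dataclass
class Personality:
    extraversion: float   # 0..1
    neuroticism: float    # 0..1

@dataclass
class EmotionState:
    arousal: float        # 0..1, updated when a stimulus arrives

def react(personality: Personality, emotion: EmotionState, stimulus_intensity: float) -> dict:
    # Emotion update: more neurotic characters are aroused more strongly (assumed rule).
    emotion.arousal = min(1.0, emotion.arousal
                          + stimulus_intensity * (0.5 + personality.neuroticism))
    # Reaction amplitude and speed derived from traits and current arousal (assumed rule).
    amplitude = 0.3 + 0.7 * emotion.arousal
    speed = 0.5 + 0.5 * personality.extraversion
    gesture = "protective_ik_pose" if emotion.arousal > 0.6 else "keyframe_glance"
    return {"gesture": gesture, "amplitude": amplitude, "speed": speed}

print(react(Personality(extraversion=0.8, neuroticism=0.2),
            EmotionState(arousal=0.1), stimulus_intensity=0.9))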

    Robust on-line adaptive footplant detection and enforcement for locomotion

    A common problem in virtual character animation is preserving the basic foot-floor constraint (or footplant), which must be detected before it can be enforced. This paper describes a system capable of generating motion while continuously preserving footplants in a real-time, dynamically evolving context. The system introduces a constraint detection method that improves on classical techniques by adaptively selecting threshold values according to motion type and quality. The footplants are then enforced using a numerical inverse kinematics solver. In contrast to previous approaches, we define the footplant by attaching two effectors to it, whose positions at the beginning of the constraint can be modified, for example to place the foot on the ground. However, the corrected posture at the constraint's beginning is needed before it starts, to ensure smoothness between the unconstrained and constrained states. We therefore present a new approach based on motion anticipation, which computes animation postures in advance according to time-evolving motion parameters such as locomotion speed and type. We illustrate our on-line approach with continuously modified locomotion patterns, and demonstrate its ability to correct motion artifacts such as foot sliding, to change the constraint position, and to switch from a straight to a curved walk motion.
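
    The detection side of such a system can be sketched in a few lines: a frame is marked as planted when the foot is both low and slow, with thresholds that adapt to the current motion. The threshold formulas and the toy trajectory below are assumptions for illustration; the paper's detection criteria and the IK enforcement step are not reproduced.

# Minimal sketch of footplant detection with speed-adaptive thresholds.
import numpy as np

def detect_footplants(foot_height, foot_speed, locomotion_speed):
    """Return a boolean array marking frames where the foot is planted.

    Thresholds scale with locomotion speed (assumed rule), so fast, noisy
    motion uses looser bounds than slow walking.
    """
    h_thresh = 0.02 + 0.01 * locomotion_speed     # metres above the floor
    v_thresh = 0.05 + 0.10 * locomotion_speed     # metres per second
    return (foot_height < h_thresh) & (foot_speed < v_thresh)

# Toy trajectory: the foot rests on the floor for half of each step cycle.
t = np.linspace(0.0, 2.0, 60)
height = np.maximum(0.0, 0.15 * np.sin(2 * np.pi * t))
speed = np.abs(np.gradient(height, t))
print(detect_footplants(height, speed, locomotion_speed=1.2).astype(int))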

    Dynamic Obstacle Clearing for Real-time Character Animation

    This paper proposes a novel method to control virtual characters in dynamic environments. A virtual character is animated by a locomotion and jumping engine, enabling the production of continuous parameterized motions. At any time during runtime, flat obstacles (e.g. a puddle of water) can be created and placed in front of a character. The method first decides whether the character is able to go around or jump over the obstacle, and then modifies the motion parameters accordingly. The transition from locomotion to jump is performed with an improved motion blending technique. While traditional blending approaches let the user choose the transition time and duration manually, our approach automatically controls transitions between motion patterns whose parameters are not known in advance. In addition, according to the animation context, blending operations are executed during a precise period of time to preserve specific physical properties. This ensures coherent movements over the parameter space of the original input motions. The initial locomotion type and speed are smoothly varied with respect to the required jump type and length. This variation is carefully computed in order to place the take-off foot as close to the created obstacle as possible.
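
    The basic mechanics of blending two motion patterns over a transition window can be sketched as below. The smoothstep weight, the fixed window, and the linear pose interpolation are simplifying assumptions; the paper derives the transition timing automatically from the animation context and physical properties of the input motions.

# Minimal sketch of blending a walk pose into a jump pose over a transition window.
import numpy as np

def blend_weight(t, t_start, duration):
    """Smooth (ease-in/ease-out) weight going from 0 to 1 over the window."""
    s = np.clip((t - t_start) / duration, 0.0, 1.0)
    return 3 * s**2 - 2 * s**3

def blended_pose(walk_pose, jump_pose, t, t_start, duration):
    w = blend_weight(t, t_start, duration)
    return (1 - w) * walk_pose + w * jump_pose

# Toy example: 10-DOF poses, transition starting at t = 0.4 s and lasting 0.3 s.
walk = np.zeros(10)
jump = np.ones(10)
for t in (0.3, 0.5, 0.8):
    print(t, blended_pose(walk, jump, t, t_start=0.4, duration=0.3)[:3])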

    Towards adaptive and directable control of simulated creatures

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. Includes bibliographical references (p. 71-77). By Yeuhi Abe.
    Interactive animation is used ubiquitously for entertainment and for the communication of ideas. Active creatures, such as humans, robots, and animals, are often at the heart of such animation and are required to interact in compelling and lifelike ways with their virtual environment. Physical simulation handles such interaction correctly, with a principled approach that adapts easily to different circumstances, changing environments, and unexpected disturbances. However, developing robust control strategies that result in natural motion of active creatures within physical simulation has proved to be a difficult problem. To address this issue, a new and versatile algorithm for the low-level control of animated characters has been developed and tested. It simplifies the process of creating control strategies by automatically accounting for many parameters of the simulation, including the physical properties of the creature and the contact forces between the creature and the virtual environment. This thesis describes two versions of the algorithm (one fast and one feature-rich) and the experiments conducted to evaluate its performance. The results include interactive animations of active creatures manipulating objects and balancing in response to significant disturbances from their virtual environment. The algorithm is shown to be directable, adaptive, and fast, and to hold promise for a new generation of interactive simulations featuring lifelike creatures acting with the same fluidity and grace exhibited by natural beings.
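
    For orientation only, a very common baseline for low-level character control is proportional-derivative (PD) joint tracking, sketched below on a single simulated joint. This is a generic baseline, not the thesis's algorithm, which additionally accounts for contact forces and the creature's full dynamics.

# Generic PD joint-control sketch on a toy 1-DOF "creature" (baseline only).
import numpy as np

def pd_torque(q, qd, q_target, kp=80.0, kd=8.0):
    """Torque pulling joint angle q toward q_target, damped by joint velocity qd."""
    return kp * (q_target - q) - kd * qd

q, qd = 0.0, 0.0          # joint angle and velocity
dt, inertia = 0.01, 1.0   # integration step and toy joint inertia
for step in range(300):   # semi-implicit Euler integration for 3 seconds
    tau = pd_torque(q, qd, q_target=1.0)
    qd += (tau / inertia) * dt
    q += qd * dt
print("final joint angle:", round(q, 3))   # settles near the 1.0 rad target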