    Simulation of human motion data using short-horizon model-predictive control

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 52-56). By Marco da Silva.
    Many data-driven animation techniques are capable of producing high-quality motions of human characters. Few techniques, however, are capable of generating motions that are consistent with physically simulated environments. Physically simulated characters, in contrast, are automatically consistent with the environment, but their motions are often unnatural because they are difficult to control. We present a model-predictive controller that yields natural motions by guiding simulated humans toward real motion data. During simulation, the predictive component of the controller solves a quadratic program to compute the forces for a short window of time into the future. These forces are then applied by a low-gain proportional-derivative component, which makes minor adjustments until the next planning cycle. The controller is fast enough for interactive systems such as games and training simulations. It requires no precomputation and little manual tuning. The controller is resilient to mismatches between the character dynamics and the input motion, which allows it to track motion capture data even where the real dynamics are not known precisely. The same principled formulation can generate natural walks, runs, and jumps in a number of different physically simulated surroundings.
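    A minimal sketch of the controller's two-level structure described above, assuming a single joint with unit inertia and semi-implicit Euler integration; this is not da Silva's actual formulation, and all names, gains, and dynamics are illustrative. Under these assumptions the quadratic tracking objective is linear in the torques, so the short-horizon plan reduces to linear least squares, with a low-gain PD term applied between planning cycles.

```python
# Sketch only: short-horizon quadratic planning plus low-gain PD for a
# single unit-inertia joint (qdd = tau). All names and gains are assumed.
import numpy as np

def plan_torques(q0, qd0, q_ref, dt=0.01, w_effort=1e-3):
    """Plan torques for a short window by linear least squares.

    With semi-implicit Euler, position after k steps is linear in the
    torques: q_k = q0 + k*dt*qd0 + dt^2 * sum_{i<k} (k - i) * tau_i,
    so the quadratic tracking objective has a closed-form minimizer.
    """
    H = len(q_ref)
    A = np.zeros((H, H))
    for k in range(1, H + 1):
        for i in range(k):
            A[k - 1, i] = dt ** 2 * (k - i)
    drift = q0 + dt * np.arange(1, H + 1) * qd0   # passive trajectory
    b = np.asarray(q_ref) - drift
    # Effort regularization stands in for the QP's control-cost term.
    A_reg = np.vstack([A, np.sqrt(w_effort) * np.eye(H)])
    b_reg = np.concatenate([b, np.zeros(H)])
    tau, *_ = np.linalg.lstsq(A_reg, b_reg, rcond=None)
    return tau

def pd_correction(q, qd, q_ref, qd_ref, kp=5.0, kd=1.0):
    """Low-gain PD adjustment applied between planning cycles."""
    return kp * (q_ref - q) + kd * (qd_ref - qd)

# One planning cycle: plan toward a short reference window, then step the
# toy dynamics with the planned torque plus the PD adjustment.
q, qd, dt = 0.0, 0.0, 0.01
q_ref = 0.1 * np.sin(np.linspace(0.0, 0.5, 5))
tau_plan = plan_torques(q, qd, q_ref, dt)
for k, tau in enumerate(tau_plan):
    tau += pd_correction(q, qd, q_ref[k], qd_ref=0.0)
    qd += dt * tau          # unit-inertia dynamics
    q += dt * qd
```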

    Velocity based controllers for dynamic character animation

    Dynamic character animation is a technique for creating character movements based on the laws of physics. Proportional-derivative (PD) controllers are among the preferred techniques in real-time character simulation for driving the character from its current state to a new target state. This paper presents an alternative approach, velocity-based controllers, which introduce desired relative limb velocities into the dynamical system as constraints. As a result, the presented technique takes the entire dynamical system into account when computing the forces that transform the character from its current state to the target state. The technique allows real-time simulation, uses a straightforward parameterization of the character's muscle force capabilities, and is robust to disturbances. The paper demonstrates the controllers' capabilities for the case of human gait animation.
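    A hedged sketch of the core idea, not the paper's actual solver: constrain the next-step velocity to the desired limb velocities and solve the full discrete dynamics for the forces that realize them, so every joint is coupled through the mass matrix rather than handled by an independent PD loop. Because of that coupling, the computed forces account for how accelerating one limb disturbs the others. The matrices and names below are illustrative assumptions.

```python
# Sketch: forces from a velocity constraint on the full dynamics.
# Discrete dynamics: M (v' - v)/dt = tau - h(q, v), with h the bias
# (gravity/Coriolis) forces. Constraining v' = v_des and solving for tau
# couples all joints through M. M, h, and all values are illustrative.
import numpy as np

def velocity_controller(M, h, v, v_des, dt):
    """Forces that drive the system from velocity v to v_des in one step."""
    return M @ (v_des - v) / dt + h

# Toy two-joint example with coupled inertia; the desired relative limb
# velocities enter directly as the target.
M = np.array([[2.0, 0.3],
              [0.3, 1.0]])           # coupled mass matrix (assumed)
h = np.array([0.0, 9.81 * 0.5])      # bias forces (assumed)
v = np.array([0.0, 0.0])
v_des = np.array([0.5, -0.2])        # desired limb velocities
tau = velocity_controller(M, h, v, v_des, dt=0.01)
```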

    A Method for Digital Representation of Human Movements

    In this work we present a method for producing a model of human motion based on a series expansion in basis functions. The model is intended to reproduce the learned movements, generalizing them to different conditions. We show, with an example, how the proposed method can produce the model from a reduced set of demonstrations, preserving their relevant features while guaranteeing the constraints at the boundaries.
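    One way to make the series-expansion idea concrete, under assumed modeling choices the paper may not share: expand the motion in a sine basis, whose terms all vanish at the endpoints of the interval, on top of a linear ramp between the boundary poses. The boundary constraints then hold exactly regardless of how the series is fitted, and generalization to new conditions amounts to swapping the ramp endpoints.

```python
# Sketch: motion as ramp + sine series; boundary poses are met exactly
# because every basis term is zero at t = 0 and t = 1. All names assumed.
import numpy as np

def _basis(t, n_terms):
    return np.stack([np.sin(k * np.pi * t) for k in range(1, n_terms + 1)], axis=1)

def fit_motion(t, q_demo, n_terms=8):
    """Fit weights so q(t) ~= ramp(t) + sum_k w_k sin(k*pi*t)."""
    t = np.asarray(t)
    ramp = q_demo[0] + (q_demo[-1] - q_demo[0]) * t
    w, *_ = np.linalg.lstsq(_basis(t, n_terms), q_demo - ramp, rcond=None)
    return w

def eval_motion(t, w, q_start, q_end):
    """Evaluate the learned shape with new boundary conditions."""
    t = np.asarray(t)
    ramp = q_start + (q_end - q_start) * t
    return ramp + _basis(t, len(w)) @ w

t = np.linspace(0.0, 1.0, 100)                # normalized time
q_demo = 0.2 * np.sin(2 * np.pi * t) + t      # a demonstrated trajectory
w = fit_motion(t, q_demo)
q_new = eval_motion(t, w, 0.3, 1.2)           # same shape, new boundaries
```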

    Whole-Body Motion Synthesis with LQP-Based Controller – Application to iCub

    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We present mechanisms to parametrize, combine (on different body parts), and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
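    Two of the mechanisms mentioned, combination on different body parts and concatenation, can be sketched as follows. The pose representation, joint indices, and linear crossfade are assumptions for illustration; real joint rotations would call for quaternion interpolation rather than linear blending.

```python
# Sketch: per-body-part combination via a joint mask, and concatenation
# with a short linear crossfade. Poses are flat joint vectors here; the
# clips, joint indices, and blend length are all placeholder assumptions.
import numpy as np

def combine(pose_a, pose_b, mask_b):
    """Take the joints selected by mask_b from source B, the rest from A."""
    pose = pose_a.copy()
    pose[mask_b] = pose_b[mask_b]
    return pose

def concatenate(clip_a, clip_b, blend_frames=10):
    """Crossfade the tail of clip A into the head of clip B."""
    alpha = np.linspace(0.0, 1.0, blend_frames)[:, None]
    blend = (1 - alpha) * clip_a[-blend_frames:] + alpha * clip_b[:blend_frames]
    return np.concatenate([clip_a[:-blend_frames], blend, clip_b[blend_frames:]])

# Example: lower body from one technique, arms from another, then a
# transition into a second clip.
n_frames, n_joints = 60, 20
walk = np.random.randn(n_frames, n_joints)    # placeholder clip A
wave = np.random.randn(n_frames, n_joints)    # placeholder clip B
arm_joints = np.zeros(n_joints, dtype=bool)
arm_joints[14:18] = True                      # assumed arm joint indices
combined = np.array([combine(a, b, arm_joints) for a, b in zip(walk, wave)])
sequence = concatenate(combined, np.random.randn(40, n_joints))
```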

    Effects of Saltatory Rewards and Generalized Advantage Estimation on Reference-Based Deep Reinforcement Learning of Humanlike Motions

    In learning physics-based character skills, deep reinforcement learning (DRL) can suffer from slow convergence and local-optimum solutions during training of a reinforcement learning (RL) agent. In environments exhibiting reward saltation (abrupt jumps in reward), those saltatory rewards can be magnified, from the perspective of sample usage, to enlarge the agent's experience pool during training. We propose two modified algorithms. The first adds a parameter-based reward-optimization step that magnifies saltatory rewards and thereby increases the agent's utilization of previous experiences; we integrate it with the proximal policy optimization (PPO) algorithm. The second introduces generalized advantage estimation (GAE) into the advantage estimate of the advantage actor-critic (A2C) algorithm, yielding faster convergence toward globally optimal solutions. We measure performance in a custom reinforcement learning environment built on the PyBullet physics engine, in which an RL agent with a humanoid body learns humanlike motions, e.g., walk, run, spin, cartwheel, spinkick, and backflip, by imitating reference motions. Our experiments show significant improvements in the performance and convergence speed of DRL in this environment using the modified versions of PPO and A2C compared with their vanilla versions.
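    The two ingredients can be sketched as follows. The saltation rule (a simple threshold on reward jumps) and the parameter values are assumptions rather than the paper's exact formulation; the GAE recursion is the standard one of Schulman et al., shown for a single rollout without done-flag handling.

```python
# Sketch: magnify rewards at saltation points, then compute GAE.
# The jump-detection rule and beta are assumed, not the paper's method.
import numpy as np

def magnify_saltatory(rewards, beta=2.0, jump_threshold=1.0):
    """Scale rewards that jump sharply above the previous step's reward."""
    r = np.asarray(rewards, dtype=float).copy()
    jumps = np.diff(r, prepend=r[0]) > jump_threshold
    r[jumps] *= beta
    return r

def gae(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Generalized advantage estimation (backward recursion):
    delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    A_t     = delta_t + gamma * lam * A_{t+1}
    """
    v_next = np.append(values[1:], last_value)
    deltas = rewards + gamma * v_next - values
    adv = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

rewards = magnify_saltatory([0.1, 0.1, 5.0, 0.1])   # 5.0 is a saltation
values = np.array([0.5, 0.6, 0.7, 0.4])
advantages = gae(rewards, values, last_value=0.0)   # feeds PPO/A2C updates
```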