Real-Time Character Rise Motions
This paper presents a straightforward dynamic controller for generating physically plausible three-dimensional full-body biped character rise motions on-the-fly at run-time. Our low-dimensional controller uses fundamental reference information (e.g., center-of-mass, hand, and foot locations) to produce balanced biped get-up poses by means of a real-time physically based simulation. The key idea is to use a simple approximate model (i.e., similar to the inverted-pendulum stepping model) to create continuous reference trajectories that can be seamlessly tracked by an articulated biped character to create balanced rise motions. Our approach does not use any key-framed data or any computationally expensive processing (e.g., offline optimization or search algorithms). We demonstrate the effectiveness and ease of our technique through examples (i.e., a biped character picking itself up from different lying positions).
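The abstract includes no code; as a hedged sketch of the approximate-model idea (the names, parameters, and easing profile below are our own assumptions, not the authors' controller), a continuous center-of-mass reference trajectory from a lying pose to a standing pose might be generated like this for the articulated character to track:

```python
import numpy as np

def com_rise_reference(com_lying, com_standing, duration, dt=1.0 / 60.0):
    """Hypothetical sketch: a smooth center-of-mass reference for a
    get-up motion. A cubic ease (smoothstep) keeps the trajectory
    continuous in position and velocity, so a physically based
    tracking controller can follow it without jumps."""
    steps = int(round(duration / dt))
    samples = []
    for i in range(steps + 1):
        t = i / steps                 # normalized time in [0, 1]
        s = 3 * t**2 - 2 * t**3       # cubic ease-in/ease-out
        samples.append((1 - s) * com_lying + s * com_standing)
    return np.array(samples)

# Example: rise from a lying pose (COM ~0.2 m high) to standing (~0.9 m).
reference = com_rise_reference(np.array([0.0, 0.0, 0.2]),
                               np.array([0.0, 0.0, 0.9]),
                               duration=1.5)
```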
Real-time biped character stepping
PhD Thesis
A rudimentary biped activity that is essential in interactive virtual worlds, such as video games and training simulations, is stepping. For example, stepping is fundamental in everyday terrestrial activities, including walking and balance recovery. Therefore, an effective 3D stepping control algorithm that is computationally fast and easy to implement is extremely valuable to character animation research. This thesis focuses on generating real-time controllable stepping motions on-the-fly, without key-framed data, that are responsive and robust (e.g., can remain upright and balanced under a variety of conditions, such as pushes and dynamically changing terrain). In our approach, we control the character's direction and speed by varying the step position and duration. Our lightweight stepping model is used to create coordinated full-body motions, which produce directable steps that guide the character toward specific goals (e.g., following a particular path while placing feet at viable locations). We also create protective steps in response to random disturbances (e.g., pushes), whereby the system automatically calculates where and when to place the foot to remedy the disruption. Finally, the inverted pendulum has a number of limitations that we address and resolve, producing an improved lightweight technique with better control and stability through approximate feature enhancements, for instance, ankle torque and an elongated body.
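The thesis builds on the linear inverted pendulum; as a minimal sketch of the protective-step idea, the formula below is the standard linear-inverted-pendulum capture point (the function name and example values are ours), giving the ground location where placing the foot brings the pendulum to rest after a push:

```python
import math

def capture_point(com_pos, com_vel, com_height, g=9.81):
    """Standard linear-inverted-pendulum capture point: where to place
    the foot so the center of mass comes to rest over it."""
    omega = math.sqrt(g / com_height)   # pendulum natural frequency (1/s)
    return com_pos + com_vel / omega

# Example: a push leaves the COM (1.0 m high) moving forward at 0.5 m/s;
# the protective step should land about 0.16 m ahead of the COM.
step_x = capture_point(com_pos=0.0, com_vel=0.5, com_height=1.0)
```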
Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control
Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts), and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
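As a hedged illustration of combining techniques on different body parts (the layering scheme and all names below are our own, not the authors' mechanisms), a per-joint overlay might look like:

```python
def combine_body_parts(overlay_pose, base_pose, overlay_joints):
    """Hypothetical sketch: layer motions from two animation techniques
    on different body parts, e.g., a procedural reach on the arms over
    motion-captured walking. Poses map joint names to local rotations
    (Euler angles in degrees here, purely for illustration)."""
    combined = dict(base_pose)                 # base layer: locomotion
    for joint in overlay_joints:
        combined[joint] = overlay_pose[joint]  # overlay: procedural reach
    return combined

# Example: spine and left shoulder from the reach motion, rest from walk.
walk = {"spine": (0, 0, 0), "l_shoulder": (0, 0, 0), "l_hip": (10, 0, 0)}
reach = {"spine": (5, 0, 0), "l_shoulder": (60, 0, 0), "l_hip": (0, 0, 0)}
pose = combine_body_parts(reach, walk, ["spine", "l_shoulder"])
```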
Synthesizing Physically Plausible Human Motions in 3D Scenes
Synthesizing physically plausible human motions in 3D scenes is a challenging
problem. Kinematics-based methods cannot avoid inherent artifacts (e.g.,
penetration and foot skating) due to the lack of physical constraints.
Meanwhile, existing physics-based methods cannot generalize to multi-object
scenarios since the policy trained with reinforcement learning has limited
modeling capacity. In this work, we present a framework that enables physically
simulated characters to perform long-term interaction tasks in diverse,
cluttered, and unseen scenes. The key idea is to decompose human-scene
interactions into two fundamental processes, Interacting and Navigating, which
motivates us to construct two reusable controllers, InterCon and NavCon.
Specifically, InterCon contains two complementary policies that enable
characters to enter and leave the interacting state (e.g., sitting on a chair
and getting up). To generate interaction with objects at different places, we
further design NavCon, a trajectory-following policy, to keep the characters' locomotion in the free space of 3D scenes. Benefiting from this divide-and-conquer strategy, we can train the policies in simple environments and
generalize to complex multi-object scenes. Experimental results demonstrate
that our framework can synthesize physically plausible long-term human motions
in complex 3D scenes. Code will be publicly released at
https://github.com/liangpan99/InterScene
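The authors' implementation is linked above; purely as an illustrative sketch of the Interacting/Navigating decomposition (class and method names here are hypothetical, not taken from the InterScene repository), a scheduler alternating the two controllers might be structured as:

```python
class SceneScheduler:
    """Hypothetical sketch: compose a navigation policy and an
    interaction policy, per the paper's two-process decomposition."""

    def __init__(self, nav_policy, inter_policy):
        self.nav_policy = nav_policy      # NavCon-like trajectory follower
        self.inter_policy = inter_policy  # InterCon-like enter/leave policy

    def act(self, obs, target_object, near_dist=0.5):
        # Far from the target object: follow a collision-free trajectory
        # through the scene's free space.
        if obs["dist_to_target"] > near_dist:
            return self.nav_policy(obs)
        # Close enough: hand over to the interaction policy
        # (e.g., sit down on the chair and later get up again).
        return self.inter_policy(obs, target_object)
```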
A multi-resolution approach for adapting close character interaction
Synthesizing close interactions such as dancing and fighting between characters is a challenging problem in computer animation. While encouraging results are presented in [Ho et al. 2010], the high computation cost makes the method unsuitable for interactive motion editing and synthesis. In this paper, we propose an efficient multi-resolution approach in the temporal domain for editing and adapting close character interactions based on the Interaction Mesh framework. In particular, we divide the original large spacetime optimization problem into multiple smaller problems such that the user can observe the adapted motion while playing back the movements at run-time. Our approach is highly parallelizable and achieves high performance by making use of multi-core architectures. The method can be applied to a wide range of applications, including motion editing systems for animators and motion retargeting systems for humanoid robots.
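As a rough sketch of the temporal divide-and-conquer idea (the windowing scheme and names are our own, not the authors' Interaction Mesh implementation), splitting one large spacetime solve into overlapping windows might look like:

```python
def solve_in_windows(frames, solve_window, window=30, overlap=5):
    """Hypothetical sketch: split a long spacetime optimization into
    overlapping temporal windows so adapted motion can be previewed
    during playback instead of waiting on one large solve.
    `solve_window` optimizes a short clip and returns adapted frames."""
    adapted, start = [], 0
    while start < len(frames):
        end = min(start + window, len(frames))
        chunk = solve_window(frames[start:end])
        # Skip the overlapping prefix already emitted by the last window.
        adapted.extend(chunk if start == 0 else chunk[overlap:])
        start = end if end == len(frames) else end - overlap
    return adapted

# Example with an identity "solver" standing in for the per-window solve.
adapted = solve_in_windows(list(range(100)), lambda clip: clip)
```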
Motion deformation style control technique for 3D humanoid character by using MoCap data
Realistic 3D humanoid character movement is very important in computer games, movies, virtual reality, and mixed-reality environments. This paper presents a technique to deform motion style using Motion Capture (MoCap) data in a computer animation system. By using MoCap data, natural human action styles can be deformed; however, the hierarchical structure of the humanoid in MoCap data is very complex. Our method allows the humanoid character to respond naturally to user motion input. Unlike existing 3D humanoid character motion editors, our method produces realistic final results and simulates new dynamic humanoid motion styles from simple user-interface controls.
QuestEnvSim: Environment-Aware Simulated Motion Tracking from Sparse Sensors
Replicating a user's pose from only wearable sensors is important for many
AR/VR applications. Most existing methods for motion tracking avoid environment
interaction apart from foot-floor contact due to their complex dynamics and
hard constraints. However, in daily life people regularly interact with their
environment, e.g. by sitting on a couch or leaning on a desk. Using
Reinforcement Learning, we show that headset and controller pose, if combined
with physics simulation and environment observations, can generate realistic
full-body poses even in highly constrained environments. The physics simulation
automatically enforces the various constraints necessary for realistic poses,
instead of manually specifying them as in many kinematic approaches. These hard
constraints allow us to achieve high-quality interaction motions without
typical artifacts such as penetration or contact sliding. We discuss three
features, the environment representation, the contact reward and scene
randomization, crucial to the performance of the method. We demonstrate the
generality of the approach through various examples, such as sitting on chairs,
a couch and boxes, stepping over boxes, rocking a chair and turning an office
chair. We believe these are some of the highest-quality results achieved for
motion tracking from sparse sensors with scene interaction.
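The paper names the contact reward as one of three crucial features; the term below is a generic, hypothetical stand-in (not the authors' exact formulation) showing how such a reward could encourage the simulated character's contacts to match the expected ones:

```python
import numpy as np

def contact_reward(expected_contacts, sim_contacts, k=5.0):
    """Hypothetical sketch of a contact reward: 1.0 when the simulated
    character's per-body-part contact flags match the expected contacts
    (e.g., pelvis on the couch seat), decaying as they diverge."""
    mismatch = np.abs(np.asarray(expected_contacts, dtype=float)
                      - np.asarray(sim_contacts, dtype=float)).mean()
    return float(np.exp(-k * mismatch))

# Example: expected contacts [pelvis=1, feet=1], simulated [1, 0]:
r = contact_reward([1, 1], [1, 0])   # exp(-2.5) ~ 0.08
```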