5 research outputs found
Robust on-line adaptive footplant detection and enforcement for locomotion
A common problem in virtual character computer animation concerns the preservation of the basic foot-floor constraint (or footplant), which consists in detecting it and then enforcing it. This paper describes a system capable of generating motion while continuously preserving the footplants in a real-time, dynamically evolving context. The system introduces a constraint detection method that improves on classical techniques by adaptively selecting threshold values according to motion type and quality. The footplants are then enforced using a numerical inverse kinematics solver. As opposed to previous approaches, we define the footplant by attaching to it two effectors whose positions at the beginning of the constraint can be modified, for example in order to place the foot on the ground. However, the corrected posture at the beginning of the constraint is needed before it starts, to ensure smoothness between the unconstrained and constrained states. We therefore present a new approach based on motion anticipation, which computes animation postures in advance according to time-evolving motion parameters, such as locomotion speed and type. We illustrate our on-line approach with continuously modified locomotion patterns, and demonstrate its ability to correct motion artifacts such as foot sliding, to change the constraint position, and to modify a straight walk into a curved one.
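The adaptive-threshold detection described in this abstract can be pictured with a small sketch. The Python snippet below is only an illustration, not the paper's implementation: the threshold scale factors, the Y-up convention and the function names are assumptions, and the subsequent inverse-kinematics enforcement step is omitted.

```python
# Minimal sketch of adaptive footplant detection (illustrative, not the paper's method).
# Assumptions: foot_positions is an (N, 3) array of one foot effector (heel or toe)
# sampled at frame_rate; the scale factors below are arbitrary example values.
import numpy as np

def detect_footplants(foot_positions, frame_rate, speed_scale=0.1, height_scale=0.05):
    """Return a boolean mask marking frames where the foot is considered planted.

    The speed and height thresholds are derived adaptively from the motion itself
    (as fractions of the mean foot speed and of the foot height range), so a slow
    walk and a fast run do not share one fixed threshold.
    """
    positions = np.asarray(foot_positions, dtype=float)
    velocities = np.gradient(positions, 1.0 / frame_rate, axis=0)
    speeds = np.linalg.norm(velocities, axis=1)
    heights = positions[:, 1]  # assume Y-up

    # Adaptive thresholds computed from the current motion segment.
    speed_threshold = speed_scale * speeds.mean()
    height_threshold = heights.min() + height_scale * (heights.max() - heights.min())

    return (speeds < speed_threshold) & (heights < height_threshold)

if __name__ == "__main__":
    # Synthetic example: a foot that stays planted for the first half of a 60-frame clip.
    t = np.linspace(0.0, 1.0, 60)
    pos = np.stack([np.clip(t - 0.5, 0.0, None) * 0.6,
                    np.clip(t - 0.5, 0.0, None) * 0.1,
                    np.zeros_like(t)], axis=1)
    planted = detect_footplants(pos, frame_rate=60.0)
    print("planted frames:", np.flatnonzero(planted))
```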
Motion enriching using humanoide captured motions
Animated humanoid characters are a delight to watch. Nowadays they are extensively used in simulators: in military applications, animated characters are used for training soldiers; in medicine, they are used to study and detect problems in a patient's joints; moreover, they can be used to instruct people about an event (such as a weather forecast or a lecture given in a virtual environment). In addition to these environments, computer games and 3D animated movies take advantage of animated characters to be more realistic. For all of these media, motion capture data has a great impact because of its speed, robustness and ability to capture various motions. Captured motions can also be reused to blend various motion styles. Furthermore, if a motion is cyclic, we can generate more motions from a single motion clip by processing each joint's data individually, since each joint trajectory is then likely defined by a combination of different signals. On the other hand, irrespective of the method selected, creating animation by hand is a time-consuming and costly process for people working on the artistic side. For these reasons, we can use databases that are open to everyone, such as the one provided by the Computer Graphics Laboratory of Carnegie Mellon University.
Creating a new motion from scratch by hand using dedicated tools (such as 3DS Max, Maya, Natural Motion Endorphin or Blender), or by reusing motion capture data, has some difficulties. Irrespective of the type of motion to be animated (cartoonish, caricatural or very realistic), human beings are natural experts on any kind of motion. Since we are experienced with other people's motions, and compare each motion to the others, we can easily judge an individual's mood from his or her body language. Because we are natural masters of human motion, it is very difficult to convince people with a humanoid character's animation, since the recreated motions can include unnatural artifacts (such as foot-skating or the flickering of a joint).
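As a rough illustration of the idea that each joint of a cyclic motion can be treated as a combination of signals, the sketch below fits a few Fourier harmonics to one synthetic joint-angle curve and rescales them to produce a variant. The function names, the harmonic count and the gain are assumptions for illustration, not the thesis's actual method.

```python
# Illustrative sketch: treat one joint angle of a cyclic motion as a periodic signal,
# fit a truncated Fourier series, and resynthesize a stylistic variant.
import numpy as np

def fit_harmonics(angles, num_harmonics=4):
    """Fit a truncated Fourier series to one cycle of a joint-angle curve."""
    coeffs = np.fft.rfft(np.asarray(angles, dtype=float))
    coeffs[num_harmonics + 1:] = 0.0          # keep the mean plus the first few harmonics
    return coeffs

def resynthesize(coeffs, num_frames, harmonic_gain=1.0):
    """Rebuild the cycle, optionally exaggerating the harmonics to vary the style."""
    scaled = coeffs.copy()
    scaled[1:] *= harmonic_gain               # leave the mean (DC term) untouched
    return np.fft.irfft(scaled, n=num_frames)

if __name__ == "__main__":
    frames = 120
    phase = np.linspace(0.0, 2.0 * np.pi, frames, endpoint=False)
    knee = 30.0 + 25.0 * np.sin(phase) + 5.0 * np.sin(2.0 * phase)   # synthetic knee angle
    coeffs = fit_harmonics(knee)
    exaggerated = resynthesize(coeffs, frames, harmonic_gain=1.3)
    print("original range:", knee.min(), knee.max())
    print("exaggerated range:", exaggerated.min(), exaggerated.max())
```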
Dynamic Obstacle Clearing for Real-time Character Animation
This paper proposes a novel method to control virtual characters in dynamic environments. A virtual character is animated by a locomotion and jumping engine, enabling the production of continuous parameterized motions. At any time during runtime, flat obstacles (e.g. a puddle of water) can be created and placed in front of a character. The method first decides whether the character is able to get around or must jump over the obstacle. The motion parameters are then modified accordingly. The transition from locomotion to jump is performed with an improved motion blending technique. While traditional blending approaches let the user choose the transition time and duration manually, our approach automatically controls transitions between motion patterns whose parameters are not known in advance. In addition, according to the animation context, blending operations are executed during a precise period of time to preserve specific physical properties. This ensures coherent movements over the parameter space of the original input motions. The initial locomotion type and speed are smoothly varied with respect to the required jump type and length. This variation is carefully computed in order to place the take-off foot as close to the created obstacle as possible.
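The decision step described in the abstract (get around, jump, or stop, then place the take-off foot near the obstacle) can be caricatured in a few lines. All names and numbers below (corridor_width, max_jump_length, the uniform step-length adjustment) are assumptions for illustration; the paper's engine drives this through its parameterized locomotion and jump models instead.

```python
# Back-of-the-envelope sketch of obstacle clearing planning (illustrative assumptions only).
def plan_obstacle_clearing(distance_to_obstacle, obstacle_depth, obstacle_width,
                           corridor_width, max_jump_length, step_length):
    """Decide whether to walk around or jump over a flat obstacle, and where to take off.

    Returns (action, adjusted_step_length). The step length is stretched or compressed
    uniformly so that an integer number of steps ends at the take-off point just in
    front of the obstacle.
    """
    if obstacle_width < corridor_width:            # enough free space to walk around it
        return "go_around", step_length
    if obstacle_depth > max_jump_length:           # obstacle too deep to be cleared
        return "stop", step_length

    # Place the take-off foot as close to the obstacle's front edge as possible.
    num_steps = max(1, round(distance_to_obstacle / step_length))
    adjusted = distance_to_obstacle / num_steps
    return "jump", adjusted

if __name__ == "__main__":
    print(plan_obstacle_clearing(distance_to_obstacle=3.4, obstacle_depth=0.8,
                                 obstacle_width=5.0, corridor_width=4.0,
                                 max_jump_length=1.6, step_length=0.7))
```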
On-line locomotion synthesis for virtual humans
Ever since the development of Computer Graphics in the industrial and academic worlds in the seventies, public knowledge and expertise have grown tremendously, notably because of the increasing fascination with Computer Animation. This specific field of Computer Graphics gathers numerous techniques, especially for the animation of characters or virtual humans in movies and video games. To create such high-fidelity animations, particular interest has been devoted to motion capture, a technology which makes it possible to record the 3D movement of a live performer. The realism of the resulting motion is convincing. However, this technique offers little control to animators, as the recorded motion can only be played back. Recently, many advances based on motion capture have been published, concerning slight but precise modifications of an original motion or the parameterization of large motion databases. The challenge consists in combining motion realism with intuitive on-line motion control, while preserving real-time performance. In the first part of this thesis, we aim to add a brick to the wall of motion parameterization techniques based on motion capture, by introducing a generic motion model for locomotion and jump activities. For this purpose, we simplify the motion representation using a statistical method in order to facilitate the elaboration of an efficient parametric model. This model is structured in hierarchical levels, allowing intuitive motion synthesis with high-level parameters. In addition, we present a space and time normalization process to adapt our model to characters of various sizes. In the second part, we integrate this motion model in an animation engine, thus allowing for the generation of a continuous stream of motion for virtual humans. We provide two additional tools to improve the flexibility of our engine. Based on the concept of motion anticipation, we first introduce an on-line method for detecting and enforcing foot-ground constraints; hence, a straight-line walking motion can be smoothly modified into a curved one. Secondly, we propose an approach for the automatic and coherent synthesis of transitions from locomotion to jump (and back), by taking into account their respective properties. Finally, we consider the interaction of a virtual human with its environment. Given initial and final conditions on the locomotion speed and foot positions, we propose a method which computes the corresponding trajectory. To illustrate this method, we propose a case study which mirrors as closely as possible the behavior of a human confronted with an obstacle: at any time, obstacles may be interactively created in front of a moving virtual human. Our method computes a trajectory allowing the virtual human to precisely jump over the obstacle in an on-line manner.
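The abstract only speaks of a "statistical method" for simplifying the motion representation; principal component analysis (PCA) is a common choice in this line of work, so the sketch below uses PCA purely as an assumed stand-in. All function names and dimensions are illustrative, not the thesis's actual pipeline.

```python
# Illustrative sketch: compressing time-normalized motion clips with PCA so that a
# low-dimensional coefficient vector can serve as a compact motion representation.
import numpy as np

def build_pca_model(motions, num_components=8):
    """motions: (num_clips, num_frames * num_dofs) matrix of time-normalized clips."""
    mean = motions.mean(axis=0)
    centered = motions - mean
    # SVD of the centered data yields the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:num_components]
    return mean, basis

def project(motion, mean, basis):
    """Low-dimensional coefficients describing one clip."""
    return basis @ (motion - mean)

def reconstruct(coeffs, mean, basis):
    """Approximate clip recovered from its coefficients."""
    return mean + basis.T @ coeffs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clips = rng.normal(size=(20, 300))          # 20 clips, 300 flattened DOF samples each
    mean, basis = build_pca_model(clips, num_components=5)
    coeffs = project(clips[0], mean, basis)
    err = np.linalg.norm(clips[0] - reconstruct(coeffs, mean, basis))
    print("reconstruction error:", err)
```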
On-line Adapted Transition between Locomotion and Jump
Motion blending is widely accepted as a standard technique in computer animation, allowing the generation of new motions by interpolation and/or transition between motion capture sequences. To ensure smooth and seamless results, an important property has to be taken into account: similar constraint sequences have to be time-aligned. But traditional blending approaches let the user manually choose the transition time and duration. In addition, according to the animation context, blending operations should not be performed immediately: they can only occur during a precise period of time, while preserving specific physical properties. We present in this paper an improved blending technique allowing automatic, controlled transitions between motion patterns whose parameters are not known in advance. This approach ensures coherent movements over the parameter space of the original input motions. To illustrate our approach, we focus on walking and running motions blended with jumps, where animators may vary the jump length and style. The proposed method automatically identifies the support phases of the input motions, and controls a correct transition time on the fly. Moreover, the current locomotion type and speed are smoothly adapted to a given jump type and length.
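A minimal sketch of a blend confined to a precise window follows, assuming the two motions have already been time-aligned and that the window is placed at matching support phases; the smoothstep ease curve and all names are assumptions rather than the paper's formulation.

```python
# Illustrative sketch: a controlled transition between two time-aligned pose streams,
# restricted to a fixed window (here standing in for a shared double-support phase).
import numpy as np

def blend_weight(frame, start, duration):
    """Smooth (ease-in/ease-out) weight of the target motion over the blend window."""
    t = np.clip((frame - start) / float(duration), 0.0, 1.0)
    return 3.0 * t**2 - 2.0 * t**3             # smoothstep

def transition(source, target, start, duration):
    """Linearly blend two pose streams of equal length over [start, start + duration]."""
    frames = np.arange(len(source))
    w = np.array([blend_weight(f, start, duration) for f in frames])[:, None]
    return (1.0 - w) * source + w * target

if __name__ == "__main__":
    walk = np.zeros((100, 3))                   # placeholder pose streams
    jump = np.ones((100, 3))
    # Suppose both motions share a support phase around frames 40-60: confining the
    # blend to that window is what preserves the physical plausibility of the result.
    blended = transition(walk, jump, start=40, duration=20)
    print(blended[35], blended[50], blended[70])
```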