Dynamic modelling of articulated figures suitable for the purpose of computer animation
The animation of articulated bodies is of interest in biomechanics, sports, medicine and the entertainment industry. Traditional motion control methods for these bodies, such as kinematics and rotoscoping, are either expensive to use or very laborious. The motion of articulated bodies is complex, mostly because of the number of their articulations and the diversity of possible motions.
This thesis investigates the possibility of using dynamic analysis in order to define the motion of articulated bodies. Dynamic analysis uses physical quantities such as forces, torques and accelerations, to calculate the motion of the body. The method used in this thesis is based upon the inverse Lagrangian dynamics formulation, which, given the accelerations, velocities and positions of each of the articulations of the body, finds the forces or torques that are necessary to generate such motion. Dynamic analysis offers the possibility of generating more realistic motion and also of automating the process of motion control. The Lagrangian formulation was used first in robotics and thus the necessary adaptations for using it in computer animation are presented.
An analytical method for the calculation of ground reaction forces is also derived, as these are the most important external forces in the case of humans and the other animals that are of special interest in computer animation. The application of dynamic analysis to bipedal walking is investigated, and two models of increasing complexity are discussed. The issue of motion specification for articulated bodies is also examined. A software environment, Solaris, is described which includes the facility of dynamic and kinematic motion control for articulated bodies. Finally, the advantages and limitations of dynamic analysis with respect to kinematics and other methods are discussed.
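The inverse-dynamics idea above can be sketched for the simplest interesting case, a planar two-link arm: given joint positions, velocities and accelerations, compute the torques via tau = M(q)·q̈ + C(q, q̇) + G(q). This is a minimal illustration of the Lagrangian-style formulation the thesis builds on, not its actual implementation; all parameter names and default values are assumptions.

```python
import numpy as np

def inverse_dynamics_2link(q, qd, qdd,
                           m1=1.0, m2=1.0, l1=1.0, r1=0.5, r2=0.5,
                           I1=0.0, I2=0.0, g=9.81):
    """Joint torques for a planar two-link arm (angles measured from horizontal).

    q, qd, qdd : joint angles, velocities, accelerations (length 2 each)
    r1, r2     : distance from each joint to its link's centre of mass
    Returns tau = M(q) @ qdd + C(q, qd) + G(q).
    """
    q1, q2 = q
    c2 = np.cos(q2)
    # Mass (inertia) matrix -- symmetric, configuration-dependent
    M = np.array([
        [I1 + I2 + m1*r1**2 + m2*(l1**2 + r2**2 + 2*l1*r2*c2),
         I2 + m2*(r2**2 + l1*r2*c2)],
        [I2 + m2*(r2**2 + l1*r2*c2),
         I2 + m2*r2**2],
    ])
    # Coriolis / centrifugal terms
    h = m2 * l1 * r2 * np.sin(q2)
    C = np.array([-h * (2*qd[0]*qd[1] + qd[1]**2),
                   h * qd[0]**2])
    # Gravity terms
    G = np.array([(m1*r1 + m2*l1)*g*np.cos(q1) + m2*r2*g*np.cos(q1 + q2),
                  m2*r2*g*np.cos(q1 + q2)])
    return M @ np.array(qdd) + C + G
```

With zero velocity and acceleration the result reduces to the gravity terms, i.e. the torques needed just to hold the pose, which matches the thesis's point that ground reaction and gravity forces dominate for standing figures.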
Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control
Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
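The per-body-part combination mechanism mentioned above can be sketched as a joint mask: take the upper-body joints from one clip (say, a waving gesture) and everything else from another (say, a walk cycle). This is a hypothetical minimal sketch of the general idea, not the paper's system; the joint names and pose representation are assumptions.

```python
# Assumed skeleton subset; real rigs name joints differently.
UPPER_BODY = {"spine", "neck", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow"}

def combine_by_body_part(wave_pose, walk_pose, upper=UPPER_BODY):
    """Build one frame: upper-body joints from wave_pose, the rest from walk_pose.

    Poses are dicts mapping joint name -> rotation (any representation);
    both poses must cover the same joint set.
    """
    return {joint: (wave_pose[joint] if joint in upper else walk_pose[joint])
            for joint in walk_pose}
```

Applied per frame, this lets two animation techniques (e.g. motion capture for the legs, procedural animation for the arms) drive one character at once, which is exactly the kind of trade-off between naturalness and control the abstract describes.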
View-dependent adaptive cloth simulation
This paper describes a method for view-dependent cloth simulation using dynamically adaptive mesh refinement and coarsening. Given a prescribed camera motion, the method adjusts the criteria controlling refinement to account for visibility and apparent size in the camera's view. Objectionable dynamic artifacts are avoided by anticipative refinement and smoothed coarsening. This approach preserves the appearance of detailed cloth throughout the animation while avoiding the wasted effort of simulating details that would not be discernible to the viewer. The computational savings realized by this method increase as scene complexity grows, producing a 2× speed-up for a single character and more than 4× for a small group.
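The core refinement criterion can be illustrated with a pinhole-camera estimate of an edge's on-screen size: refine only edges that are visible and project to more than a few pixels. This is a simplified sketch of the general view-dependent idea, not the paper's actual criterion; the threshold and camera parameters are assumptions.

```python
import math

def should_refine(edge_length, distance, screen_height_px=1080,
                  vfov_deg=45.0, pixel_threshold=4.0, visible=True):
    """Refine a mesh edge when its projected size exceeds a pixel threshold.

    Apparent size uses the pinhole model: pixels ~ focal_px * length / distance,
    with focal_px = screen_height / (2 * tan(vfov / 2)). Hidden or off-screen
    geometry is never refined, so it stays on the coarse mesh.
    """
    if not visible:
        return False
    focal_px = screen_height_px / (2.0 * math.tan(math.radians(vfov_deg) / 2.0))
    apparent_px = focal_px * edge_length / distance
    return apparent_px > pixel_threshold
```

Distant or occluded cloth fails the test and is simulated coarsely, which is where the reported 2×–4× savings come from as scenes grow.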
Animating Virtual Human for Virtual Batik Modeling
This research paper describes the development of an animated virtual human for a virtual batik modeling project. The objectives are to animate the virtual human, to map the cloth onto the virtual human body, to present the batik cloth, and to evaluate the application in terms of realism of the virtual human's look, realism of the virtual human's movement, realism of the 3D scene, application suitability, application usability, fashion suitability and user acceptance. The final goal is an animated virtual human for virtual batik modeling. The project has three essential phases: research and analysis (data collection on modeling and animation techniques), development (modeling and animating the virtual human, mapping cloth to the body and adding music) and evaluation (of the realism of the virtual human's look, realism of the virtual human's movement, realism of the props, application suitability, application usability, fashion suitability and user acceptance). Application usability received the highest score, at 90%, showing that the application is useful to its users. In conclusion, the project met its objectives; realism was achieved by using suitable modeling and animation techniques.
Capture, Learning, and Synthesis of 3D Speaking Styles
Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation), takes any speech signal as input - even speech in languages other than English - and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de. (To appear in CVPR 2019.)
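The subject-label conditioning described above is commonly realized by appending a one-hot identity vector to each frame of audio features before the network sees them, so that changing the label at synthesis time switches speaking styles. The sketch below shows that conditioning pattern in isolation; it is an assumed illustration of the technique, not VOCA's actual code, and the dimensions are made up.

```python
import numpy as np

def condition_on_subject(audio_feats, subject_id, num_subjects=8):
    """Append a one-hot subject label to each audio feature frame.

    audio_feats : (T, D) array of per-frame speech features
    Returns a (T, D + num_subjects) array. At synthesis time, passing a
    different subject_id selects a different learned speaking style.
    """
    T = audio_feats.shape[0]
    one_hot = np.zeros((T, num_subjects), dtype=audio_feats.dtype)
    one_hot[:, subject_id] = 1.0
    return np.concatenate([audio_feats, one_hot], axis=1)
```

Because the identity signal enters as an input rather than being baked into the weights, the same trained network can animate unseen faces and styles, which is what makes retargeting-free application possible.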
Human Factors Simulation Research at the University of Pennsylvania
Jack is a Silicon Graphics Iris 4D workstation-based system for the definition, manipulation, animation, and human factors performance analysis of simulated human figures. Built on a powerful representation for articulated figures, Jack offers the interactive user a simple, intuitive, and yet extremely capable interface into any 3-D articulated world. Jack incorporates sophisticated systems for anthropometric human figure generation, multiple limb positioning under constraints, view assessment, and strength model-based performance simulation of human figures. Geometric workplace models may be easily imported into Jack. Various body geometries may be used, from simple polyhedral volumes to contour-scanned real figures. High quality graphics of environments and clothed figures are easily obtained. Descriptions of some work in progress are also included.
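"Multiple limb positioning under constraints" is an inverse kinematics problem: given a target for a hand or foot, solve for joint angles that reach it. A classic simple solver is Cyclic Coordinate Descent, sketched here for a planar chain; this is a generic textbook technique offered for illustration, not Jack's actual positioning algorithm.

```python
import math

def ccd_ik(lengths, angles, target, iters=50, tol=1e-4):
    """Cyclic Coordinate Descent IK for a planar joint chain.

    lengths : per-link lengths
    angles  : initial joint angles, each relative to the previous link
    target  : (x, y) goal for the end effector
    Rotates one joint at a time so the end effector swings toward the target.
    """
    angles = list(angles)

    def forward(angles):
        # Positions of every joint, plus the end effector as the last point.
        pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
        for L, t in zip(lengths, angles):
            a += t
            x += L * math.cos(a)
            y += L * math.sin(a)
            pts.append((x, y))
        return pts

    for _ in range(iters):
        ex, ey = forward(angles)[-1]
        if math.hypot(ex - target[0], ey - target[1]) < tol:
            break
        # Sweep from the most distal joint back to the root.
        for j in reversed(range(len(angles))):
            pts = forward(angles)
            jx, jy = pts[j]
            ex, ey = pts[-1]
            # Rotate joint j so the effector direction matches the target direction.
            cur = math.atan2(ey - jy, ex - jx)
            des = math.atan2(target[1] - jy, target[0] - jx)
            angles[j] += des - cur
    return angles
```

CCD needs no matrix inversion and handles long chains, which is why variants of it remain common for interactive figure posing of the kind Jack pioneered.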