Sketching-out virtual humans: From 2D storyboarding to immediate 3D character animation
Virtual beings play a remarkable role in today's public entertainment, yet ordinary users are still treated as audiences for lack of the appropriate expertise, equipment, and computer skills. In this paper, we present a fast and intuitive storyboarding interface that enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive "stick figure → fleshing-out → skin mapping" graphical animation pipeline, which realises the whole process of key framing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and final animation synthesis through almost pure 2D sketching. A "creative model-based method" is developed, which emulates a human perception process, to generate 3D human bodies of varying sizes, shapes, and fat distributions. Our current system also supports sketch-based crowd animation and the storyboarding of 3D multi-character intercommunication. The system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
Motion enriching using humanoid captured motions
Animated humanoid characters are a delight to watch, and nowadays they are used
extensively in simulators. In military applications, animated characters are used for
training soldiers; in medicine, they are used to study and detect problems in a
patient's joints; and they can be used to instruct people in events such as weather
forecasts or lectures in virtual environments. In addition, computer games and 3D
animated films take advantage of animated characters to appear more realistic. Across
all of these media, motion capture data has had a great impact because of its speed,
robustness, and ability to record a wide variety of motions.

Motion capture data can be reused to blend various motion styles. Furthermore, if a
motion is cyclic, we can generate additional motions from a single motion clip by
processing each joint's data individually, since each joint trajectory in a cyclic
motion is likely defined by a combination of different periodic signals. On the other
hand, irrespective of the method selected, creating animation by hand is a
time-consuming and costly process for people working on the artistic side. For these
reasons, we can use databases that are open to everyone, such as that of the Computer
Graphics Laboratory of Carnegie Mellon University.

Creating a new motion from scratch by hand with specialised tools (such as 3ds Max,
Maya, Natural Motion Endorphin, or Blender), or by reusing motion capture data, has
its difficulties. Irrespective of the motion type to be animated (cartoonish,
caricatured, or very realistic), human beings are natural experts on every kind of
motion. Because we are experienced with other people's movements, and constantly
compare one motion with another, we can easily judge an individual's mood from his or
her body language. Being natural masters of human motion, people are very difficult
to convince with a humanoid character's animation, since recreated motions can
include unnatural artifacts (such as foot-skating or the flickering of a joint).
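The per-joint view of cyclic motion described above can be sketched as follows. This is a minimal illustration, not a procedure the thesis prescribes: one joint's angle track is treated as a sum of harmonics, and a few of them are scaled to produce a variant. The joint, gains, and harmonic choices are invented for the example.

```python
import numpy as np

def vary_cyclic_joint(angles, gain=1.3, harmonics=(2, 3)):
    """Generate a motion variant from one cyclic joint-angle signal.

    A cyclic joint trajectory can be viewed as a combination of
    sinusoidal components; scaling selected harmonics exaggerates or
    dampens that joint's style without breaking the cycle.
    """
    spectrum = np.fft.rfft(angles)
    for h in harmonics:
        if h < len(spectrum):
            spectrum[h] *= gain          # amplify chosen frequency components
    return np.fft.irfft(spectrum, n=len(angles))

# One gait cycle of a hypothetical knee joint, sampled at 60 frames.
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
knee = 30.0 * np.sin(t) + 5.0 * np.sin(2.0 * t)
variant = vary_cyclic_joint(knee)
```

Because only non-DC harmonics are touched, the variant keeps the original mean pose and period; applying the same idea joint by joint yields a family of motions from a single clip.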
MotionAug: Augmentation with Physical Correction for Human Motion Prediction
This paper presents a motion data augmentation scheme incorporating motion
synthesis encouraging diversity and motion correction imposing physical
plausibility. This motion synthesis consists of our modified Variational
AutoEncoder (VAE) and Inverse Kinematics (IK). In this VAE, our proposed
sampling-near-samples method generates various valid motions even with
insufficient training motion data. Our IK-based motion synthesis method allows
us to generate a variety of motions semi-automatically. Since these two schemes
generate unrealistic artifacts in the synthesized motions, our motion
correction rectifies them. This motion correction scheme consists of imitation
learning with physics simulation and subsequent motion debiasing. For this
imitation learning, we propose the PD-residual force that significantly
accelerates the training process. Furthermore, our motion debiasing
successfully offsets the motion bias induced by imitation learning to maximize
the effect of augmentation. As a result, our method outperforms previous
noise-based motion augmentation methods by a large margin on both Recurrent
Neural Network-based and Graph Convolutional Network-based human motion
prediction models. The code is available at
https://github.com/meaten/MotionAug.
Comment: Accepted at CVPR 2022.
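As a rough illustration of the PD-plus-residual idea: a PD term tracks the reference motion while a learned residual corrects it. The gains, dimensions, and residual values below are invented for the sketch and are not the paper's formulation or tuned values.

```python
import numpy as np

def pd_residual_torque(q, qd, q_ref, qd_ref, residual, kp=300.0, kd=30.0):
    """Joint torque = PD tracking toward the reference pose, plus a
    residual supplied by a learned policy. The PD part gives a stable
    baseline so the policy only needs to learn a small correction."""
    pd = kp * (q_ref - q) + kd * (qd_ref - qd)   # PD tracking term
    return pd + residual                          # policy corrects the baseline

q      = np.array([0.10, -0.20])   # current joint angles (rad)
qd     = np.array([0.00,  0.00])   # current joint velocities
q_ref  = np.array([0.12, -0.18])   # reference motion frame
qd_ref = np.array([0.50, -0.40])
res    = np.array([1.00, -1.00])   # hypothetical residual from the policy
tau = pd_residual_torque(q, qd, q_ref, qd_ref, res)
```

The design intuition is that the residual starts near zero, so early in training the character already roughly follows the reference, which is one plausible reason such a decomposition can speed up imitation learning.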
Space-time sketching of character animation
We present a space-time abstraction for the sketch-based design of character animation. It allows animators to draft a fully coordinated motion using a single stroke called the space-time curve (STC). From the STC we compute a dynamic line of action (DLOA) that drives the motion of a 3D character through projective constraints. Our dynamic models for the line's motion are entirely geometric, require no pre-existing data, and allow full artistic control. The resulting DLOA can be refined by over-sketching strokes along the space-time curve, or by composing another DLOA on top, allowing control over complex motions with few strokes. Additionally, the resulting dynamic line of action can be applied to arbitrary body parts or characters. To match a 3D character to the 2D line over time, we introduce a robust matching algorithm based on closed-form solutions, yielding a tight match while allowing squash and stretch of the character's skeleton. Our experiments show that space-time sketching has the potential to bring animation design within the reach of beginners while saving time for skilled artists.
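A toy reading of the space-time-curve idea might look like this: a single ordered 2D stroke is sampled at a few times, and at each time a short line of action is centred on the stroke and aligned with its local tangent. The sampling scheme and tangent-aligned line are illustrative assumptions, not the paper's projective-constraint formulation.

```python
import numpy as np

def stc_to_dloa(stroke, n_keys=5, half_len=0.5):
    """Turn one space-time curve (an ordered 2D stroke) into keyframes
    of a line of action: at each sampled time the line sits on the
    stroke and follows its local tangent."""
    stroke = np.asarray(stroke, dtype=float)
    idx = np.linspace(0, len(stroke) - 1, n_keys).astype(int)
    keys = []
    for i in idx:
        j = min(i + 1, len(stroke) - 1)       # central-ish difference
        k = max(i - 1, 0)
        tangent = stroke[j] - stroke[k]
        tangent /= np.linalg.norm(tangent) + 1e-9
        centre = stroke[i]
        keys.append((centre - half_len * tangent, centre + half_len * tangent))
    return keys

# A rising arc drawn as one stroke.
s = [(x, 0.2 * x * x) for x in np.linspace(0.0, 3.0, 50)]
dloa_keys = stc_to_dloa(s)
```

Each keyframe is a fixed-length segment; a real system would instead solve for the character's pose against such lines over time.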
Touché: Data-Driven Interactive Sword Fighting in Virtual Reality
VR games offer new freedom for players to interact naturally using motion. This makes it harder to design games that react to player motions convincingly. We present a framework for VR sword fighting experiences against a virtual character that simplifies the necessary technical work to achieve a convincing simulation. The framework facilitates VR design by abstracting from difficult details on the lower “physical” level of interaction, using data-driven models to automate both the identification of user actions and the synthesis of character animations. Designers are able to specify the character's behaviour on a higher “semantic” level using parameterised building blocks, which allow for control over the experience while minimising manual development work. We conducted a technical evaluation, a questionnaire study and an interactive user study. Our results suggest that the framework produces more realistic and engaging interactions than simple hand-crafted interaction logic, while supporting a controllable and understandable behaviour design.
A Vector Field Design Approach to Animated Transitions
Animated transitions can be effective in explaining and exploring a small number of visualizations where there are drastic changes in the scene over a short interval of time. This is especially true if data elements cannot be visually distinguished by other means. Current research in animated transitions has mainly focused on linear transitions (all elements follow straight line paths) or enhancing coordinated motion through bundling of linear trajectories. In this paper, we introduce animated transition design, a technique to build smooth, non-linear transitions for clustered data with either minimal or no user involvement. The technique is flexible and simple to implement, and has the additional advantage that it explicitly enhances coordinated motion and can avoid crowding, which are both important factors to support object tracking in a scene. We investigate its usability, provide preliminary evidence for the effectiveness of this technique through metric evaluations and a user study, and discuss limitations and future directions.
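A minimal stand-in for a non-linear transition path, not the paper's vector-field design method: each step blends the straight-line displacement toward the target with a perpendicular "swirl" component, so elements follow a smooth curve instead of a straight segment. The field shape and constants are invented for the sketch.

```python
import numpy as np

def curved_transition(p0, p1, steps=100, swirl=1.0):
    """Move a point from p0 to p1 along a curved path by adding a
    perpendicular bend, strongest mid-transition, to each straight-line
    step. Points sharing a direction bend the same way, which gives
    the kind of coordinated motion linear paths lack."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    path = [p0.copy()]
    p = p0.copy()
    d = p1 - p0
    perp = np.array([-d[1], d[0]])                # perpendicular to overall move
    for k in range(steps):
        t = k / steps
        direct = (p1 - p) / (steps - k)           # remaining straight-line step
        bend = swirl * np.sin(np.pi * t) * perp / steps
        p = p + direct + bend
        path.append(p.copy())
    return np.array(path)

path = curved_transition([0.0, 0.0], [1.0, 0.0])
```

The `sin(pi * t)` envelope makes the deviation vanish at both endpoints, so the path still starts at the source and lands on the target.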
A survey on human performance capture and animation
With the rapid development of computing technology, three-dimensional (3D) human body
models and their dynamic motions are widely used in the digital entertainment industry.
Human performance mainly involves human body shapes and motions. Key research problems
include how to capture and analyze the static geometric appearance and dynamic movement
of human bodies, and how to simulate human body motions with physical effects. In this
survey, organised according to the main research directions of human body performance
capture and animation, we summarize recent advances in key research topics, namely
human body surface reconstruction, motion capture and synthesis, as well as
physics-based motion simulation, and further discuss future research problems and
directions. We hope this will help readers gain a comprehensive understanding of human
performance capture and animation.
Interactive human locomotion using motion graphs and mobility maps
Graph-based approaches for sequencing motion capture data have produced some of the most realistic and controllable character motion to date. Most previous graph-based approaches have employed a run-time global search to find paths through the motion graph that meet user-defined constraints such as a desired locomotion path. Such searches do not scale well to large numbers of characters. In this thesis, we describe a locomotion approach that benefits from the realism of graph-based approaches while maintaining basic user control and scaling well to large numbers of characters. Our approach is based on precomputing multiple least cost sequences from every state in a state-action graph. We store these precomputed sequences in a data structure called a mobility map and perform a local search of this map at run-time to generate motion sequences in real time that achieve user constraints in a natural manner. We demonstrate the quality of the motion through various example locomotion tasks including target
tracking and collision avoidance. We demonstrate scalability by animating crowds of up to one hundred and fifty rendered articulated walking characters at real-time rates.
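The precompute-then-lookup idea can be sketched as follows. The Dijkstra-per-state construction and the tiny clip graph are illustrative assumptions, not the thesis's exact data structure: for every (state, goal) pair we store only the first motion clip of a least-cost sequence, so the run-time search reduces to a dictionary lookup.

```python
import heapq

def build_mobility_map(graph):
    """Precompute, for every source state, the first edge of a
    least-cost sequence to every reachable goal, via Dijkstra from each
    state. `graph` maps state -> list of (next_state, cost) clips."""
    mobility = {}
    for src in graph:
        dist = {src: 0.0}
        first = {}                               # goal -> first hop out of src
        pq = [(0.0, src, None)]
        while pq:
            d, s, hop = heapq.heappop(pq)
            if d > dist.get(s, float("inf")):
                continue                         # stale queue entry
            if hop is not None:
                first[s] = hop
            for nxt, c in graph.get(s, []):
                nd = d + c
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    # propagate the first hop taken out of src
                    heapq.heappush(pq, (nd, nxt, hop if hop is not None else nxt))
        mobility[src] = first
    return mobility

# Hypothetical state-action graph: walk cycles plus a turn clip
# (edge costs standing in for clip lengths).
clips = {
    "walk": [("walk", 1.0), ("turn_left", 1.5)],
    "turn_left": [("walk", 1.0)],
}
mob = build_mobility_map(clips)
```

At run time each character reads `mob[state][goal]` to pick its next clip, which is constant-time per character and so scales to large crowds, at the price of the precomputation and storage.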