3,147 research outputs found
3D performance capture for facial animation
This work describes how a photogrammetry-based 3D capture system can be used as an input device for animation. The 3D Dynamic Capture System is used to capture the motion of a human face, which is extracted from a sequence of 3D models captured at TV frame rate. Initially, the positions of a set of landmarks on the face are extracted. These landmarks are then used to provide motion data in two ways. First, a high-level description of the movements is extracted, which can be used as input to a procedural animation package (e.g., CreaToon). Second, the landmarks can be used as registration points for a conformation process in which the model to be animated is modified to match the captured model. This approach gives a new sequence of models that have the structure of the drawn model but the movement of the captured sequence.
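The conformation step lends itself to a compact illustration as a landmark-driven warp. The sketch below is a hypothetical reconstruction, not the paper's algorithm: it assumes a simple radial basis function (RBF) deformation that moves the drawn model's landmark positions onto the captured ones and lets the remaining vertices follow smoothly.

```python
import numpy as np

def conform(vertices, src_landmarks, dst_landmarks, eps=1e-8):
    """Warp `vertices` (N, 3) so that points at `src_landmarks` (K, 3)
    move onto `dst_landmarks` (K, 3); other vertices follow smoothly."""
    K = src_landmarks.shape[0]
    # Linear RBF kernel between landmark pairs, regularised for solvability.
    d = np.linalg.norm(src_landmarks[:, None] - src_landmarks[None, :], axis=-1)
    weights = np.linalg.solve(d + eps * np.eye(K), dst_landmarks - src_landmarks)
    # Evaluate the interpolated displacement field at every mesh vertex.
    dv = np.linalg.norm(vertices[:, None] - src_landmarks[None, :], axis=-1)
    return vertices + dv @ weights
```

Applying such a warp once per captured frame would yield the kind of sequence the abstract describes: drawn-model structure with captured-sequence motion.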
Breathing Life into Faces: Speech-driven 3D Facial Animation with Natural Head Pose and Detailed Shape
The creation of lifelike speech-driven 3D facial animation requires natural and precise synchronization between audio input and facial expressions. However, existing works still fail to render shapes with flexible head poses and natural facial details (e.g., wrinkles). This limitation stems from two factors: 1) collecting a training set with detailed 3D facial shapes is highly expensive, and this scarcity of detailed shape annotations hinders the training of models with expressive facial animation; 2) compared to mouth movement, head pose is much less correlated with speech content, so modeling both jointly reduces the controllability of facial movement. To address these challenges, we introduce VividTalker, a new framework designed to facilitate speech-driven 3D facial animation characterized by flexible head pose and natural facial details. Specifically, we explicitly disentangle facial animation into head pose and mouth movement and encode them separately into discrete latent spaces. These attributes are then generated through an autoregressive process leveraging a window-based Transformer architecture. To enrich 3D facial animation, we construct a new 3D dataset with detailed shapes and learn to synthesize facial details in line with speech content. Extensive quantitative and qualitative experiments demonstrate that VividTalker outperforms state-of-the-art methods, producing vivid and realistic speech-driven 3D facial animation.
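As a rough illustration of the disentangled design described in the abstract, the following PyTorch sketch encodes head pose and mouth movement as separate discrete token streams decoded autoregressively against audio features. All module names, sizes, and the codebook setup are assumptions for illustration; this is not VividTalker's published implementation, and the window-restricted attention is elided.

```python
import torch
import torch.nn as nn

class DisentangledTalker(nn.Module):
    def __init__(self, audio_dim=768, d_model=512,
                 pose_codes=256, mouth_codes=512):
        super().__init__()
        # Separate discrete latent spaces (codebooks) for each attribute.
        self.pose_book = nn.Embedding(pose_codes, d_model)
        self.mouth_book = nn.Embedding(mouth_codes, d_model)
        self.audio_proj = nn.Linear(audio_dim, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        # One autoregressive decoder per attribute stream; a full
        # implementation would restrict attention to a local window.
        self.pose_dec = nn.TransformerDecoder(layer, num_layers=4)
        self.mouth_dec = nn.TransformerDecoder(layer, num_layers=4)
        self.pose_head = nn.Linear(d_model, pose_codes)
        self.mouth_head = nn.Linear(d_model, mouth_codes)

    def forward(self, audio_feats, pose_ids, mouth_ids):
        mem = self.audio_proj(audio_feats)  # (B, T, d_model) audio context
        mask = nn.Transformer.generate_square_subsequent_mask(
            pose_ids.size(1)).to(pose_ids.device)  # causal mask
        pose_logits = self.pose_head(
            self.pose_dec(self.pose_book(pose_ids), mem, tgt_mask=mask))
        mouth_logits = self.mouth_head(
            self.mouth_dec(self.mouth_book(mouth_ids), mem, tgt_mask=mask))
        return pose_logits, mouth_logits  # next-token logits per stream
```

Keeping the two streams in separate codebooks and decoders is what allows head pose to be controlled or resampled independently of lip synchronization.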
FEAFA: A Well-Annotated Dataset for Facial Expression Analysis and 3D Facial Animation
Facial expression analysis based on machine learning requires a large amount of well-annotated data reflecting different changes in facial motion. Publicly available datasets help accelerate research in this area by providing benchmark resources, but all of these datasets, to the best of our knowledge, are limited to rough annotations for action units, including only their absence, presence, or a five-level intensity according to the Facial Action Coding System. To meet the need for videos labeled in great detail, we present a well-annotated dataset named FEAFA for Facial Expression Analysis and 3D Facial Animation. One hundred and twenty-two participants, including children, young adults, and elderly people, were recorded in real-world conditions. In addition, 99,356 frames were manually labeled using an Expression Quantitative Tool we developed to quantify 9 symmetrical FACS action units, 10 asymmetrical (unilateral) FACS action units, 2 symmetrical FACS action descriptors, and 2 asymmetrical FACS action descriptors; each action unit or action descriptor is annotated with a floating-point number between 0 and 1. To provide a baseline for future research, a benchmark for the regression of action unit values based on convolutional neural networks is presented. We also demonstrate the potential of our FEAFA dataset for 3D facial animation: almost all state-of-the-art facial animation algorithms rely on 3D face reconstruction, so we propose a novel method that drives virtual characters based only on action unit values regressed from the 2D video frames of source actors. Comment: 9 pages, 7 figures
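The CNN regression baseline can be sketched compactly. The code below is a hedged illustration only: the ResNet-18 backbone, input size, and loss are assumptions, not FEAFA's reported configuration; the output dimension follows the abstract's 9 + 10 action units plus 2 + 2 action descriptors.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 23  # 9 + 10 AUs plus 2 + 2 action descriptors, per the abstract

class AURegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Backbone choice is an assumption; any image CNN would serve.
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, NUM_LABELS)

    def forward(self, x):                       # x: (B, 3, 224, 224) face crops
        return torch.sigmoid(self.backbone(x))  # AU intensities in [0, 1]

model = AURegressor()
criterion = nn.MSELoss()  # regression of continuous AU intensities
```

The sigmoid output matches the dataset's floating-point labels in [0, 1], and the regressed values can drive a virtual character directly, as the abstract proposes.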
Investigating facial animation production through artistic inquiry
Studies of dynamic facial expressions tend to use experimental methods based on objectively manipulated stimuli. New techniques for displaying increasingly realistic facial movement and methods of measuring observer responses are typical of computer animation and psychology research on facial expression. However, few projects focus on the artistic nature of performance production; most concentrate instead on the naturalistic appearance of posed or acted expressions. In this paper, the authors discuss a method for exploring the creative process of emotional facial expression animation and ask whether anything can be learned about authentic dynamic expressions through artistic inquiry.
A Platform Independent Architecture for Virtual Characters and Avatars
We have developed the Platform Independent Architecture for Virtual Characters and Avatars (PIAVCA), a character animation system that aims to be independent of any underlying graphics framework and therefore easily portable. PIAVCA supports body animation based on a skeletal representation and facial animation based on morph targets.
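Morph-target (blendshape) animation reduces to a weighted sum of per-target vertex offsets. The sketch below illustrates the general technique; the function name and API are illustrative, not PIAVCA's actual interface.

```python
import numpy as np

def apply_morph_targets(base, targets, weights):
    """Blend morph targets into a posed face.
    base:    (N, 3) neutral-face vertex positions
    targets: (M, N, 3) morph-target vertex positions
    weights: (M,) blend weights, typically in [0, 1]
    """
    deltas = targets - base[None, :, :]                   # per-target offsets
    return base + np.tensordot(weights, deltas, axes=1)   # weighted sum

# e.g. posed = apply_morph_targets(neutral, morphs, np.array([0.7, 0.0, 0.3]))
```

Because each target contributes independently, facial animation becomes a matter of driving the weight vector over time, which keeps the system decoupled from any particular rendering framework.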
A practice-led approach to facial animation research
In facial expression research, it is well established that certain emotional expressions are universally recognized. Studies of observer perception of dynamic expressions have built upon this research by highlighting the importance of particular facial regions, timings, and temporal configurations to perception and interpretation. The stimuli for many such studies have been generated through posing by non-experts or performances by trained actors. However, skilled character animators are capable of crafting recognizable, believable emotional facial expressions as part of their professional practice. "Emotional Avatars" was conceived as an interdisciplinary research project that would draw upon the knowledge of animation practice and emotional psychology. The aim of the project was to jointly investigate the artistic generation and observer perception of emotional expression animation, to determine whether the nuances of emotional facial expression could be artistically choreographed to enhance audience interpretation.
- …