Region Based Adversarial Synthesis of Facial Action Units
Facial expression synthesis or editing has recently received increasing
attention in the field of affective computing and facial expression modeling.
However, most existing facial expression synthesis works are limited by the
need for paired training data, low resolution, damage to identity information, and so on. To
address those limitations, this paper introduces a novel Action Unit (AU) level
facial expression synthesis method called Local Attentive Conditional
Generative Adversarial Network (LAC-GAN) based on face action units
annotations. Given desired AU labels, LAC-GAN uses local AU regional rules
to control the status of each AU and an attention mechanism to combine several of
them into whole photo-realistic facial expressions or arbitrary facial
expressions. In addition, unpaired training data are used in our proposed
method to train the manipulation module with the corresponding AU labels, which
learns a mapping within the facial expression manifold. Extensive qualitative
and quantitative evaluations are conducted on the commonly used BP4D dataset to
verify the effectiveness of our proposed AU synthesis method.
Comment: Accepted by MMM202
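The attention mechanism sketched in the abstract, which blends locally edited AU regions into the original face, can be illustrated roughly as a per-pixel convex combination. This is a minimal sketch, not the paper's implementation; the function name `attentive_blend` and the toy arrays are hypothetical.

```python
# Hypothetical sketch: a per-pixel attention mask M combines a locally
# edited AU region G(x) with the original face x, so that only the AU
# region changes while the rest of the image is preserved.

def attentive_blend(original, edited, mask):
    """Blend edited pixels into the original image using an attention mask.

    original, edited: 2-D lists of pixel intensities in [0, 1].
    mask: 2-D list of attention weights in [0, 1]; 1 keeps the edited
    pixel, 0 keeps the original pixel.
    """
    return [
        [m * e + (1.0 - m) * o
         for o, e, m in zip(orow, erow, mrow)]
        for orow, erow, mrow in zip(original, edited, mask)
    ]

# Toy example: the mask edits only the top-left pixel (e.g. one AU region).
original = [[0.2, 0.2], [0.2, 0.2]]
edited = [[0.9, 0.9], [0.9, 0.9]]
mask = [[1.0, 0.0], [0.0, 0.0]]
blended = attentive_blend(original, edited, mask)
print(blended)  # only blended[0][0] takes the edited value
```

Because the mask is soft rather than binary, the same formula also lets the network fade AU edits in gradually at region boundaries.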
Dynamic Facial Expression Generation on Hilbert Hypersphere with Conditional Wasserstein Generative Adversarial Nets
In this work, we propose a novel approach for generating videos of the six
basic facial expressions given a neutral face image. We propose to exploit the
face geometry by modeling the facial landmarks motion as curves encoded as
points on a hypersphere. By proposing a conditional version of manifold-valued
Wasserstein generative adversarial network (GAN) for motion generation on the
hypersphere, we learn the distribution of facial expression dynamics of
different classes, from which we synthesize new facial expression motions. The
resulting motions can be transformed into sequences of landmarks and then into
image sequences by editing the texture information using another conditional
Generative Adversarial Network. To the best of our knowledge, this is the first
work that explores manifold-valued representations with GAN to address the
problem of dynamic facial expression generation. We evaluate our proposed
approach both quantitatively and qualitatively on two public datasets:
Oulu-CASIA and MUG Facial Expression. Our experimental results demonstrate the
effectiveness of our approach in generating realistic videos with continuous
motion, realistic appearance and identity preservation. We also show the
efficiency of our framework for dynamic facial expression generation, dynamic
facial expression transfer, and data augmentation for training improved emotion
recognition models.
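The core geometric idea above, encoding landmark-motion curves as points on a hypersphere, can be sketched as a simple unit-norm projection. This is an illustrative assumption about the encoding, not the paper's actual representation; `to_hypersphere` is a hypothetical helper.

```python
import math

def to_hypersphere(curve):
    """Map a landmark-motion curve (a list of floats, e.g. flattened
    per-frame landmark displacements) to a point on the unit
    hypersphere by scaling it to unit L2 norm."""
    norm = math.sqrt(sum(v * v for v in curve))
    if norm == 0.0:
        raise ValueError("a zero curve has no direction on the sphere")
    return [v / norm for v in curve]

point = to_hypersphere([3.0, 4.0])
print(point)  # [0.6, 0.8], a point on the unit circle
```

Working with unit-norm points is what makes the generation problem manifold-valued: generated samples must stay on the sphere, which is why a manifold-aware Wasserstein GAN is needed rather than a generator in flat Euclidean space.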
Facial Expression Restoration Based on Improved Graph Convolutional Networks
Facial expression analysis in the wild is challenging when the facial image
has low resolution or is partially occluded. Considering the correlations among
different facial local regions under different facial expressions, this paper
proposes a novel facial expression restoration method based on generative
adversarial network by integrating an improved graph convolutional network
(IGCN) and a region relation modeling block (RRMB). Unlike conventional graph
convolutional networks, which take vectors as input features, IGCN can take
tensors of face patches as inputs, which better preserves the structural
information of the face patches. The proposed RRMB is designed for facial
generative tasks, including inpainting and super-resolution with facial action
unit detection, and aims to restore the facial expression to match the ground truth. Extensive
experiments conducted on BP4D and DISFA benchmarks demonstrate the
effectiveness of our proposed method through quantitative and qualitative
evaluations.
Comment: Accepted by MMM202
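The distinction the abstract draws, graph convolution over face-patch tensors rather than plain vectors, can be sketched as neighbourhood aggregation in which each node carries a whole per-patch feature array. This is a simplified sketch under assumed data layouts, not the IGCN itself; `graph_conv` and the toy graph are hypothetical.

```python
# Hypothetical sketch: one pass of a simplified graph convolution where
# each node is a face patch and its feature is a (flattened) patch
# tensor. Each node averages its own features with its neighbours'.

def graph_conv(features, adjacency):
    """features: list of per-patch feature vectors (lists of floats).
    adjacency: list of neighbour-index lists, one per patch.
    Returns the aggregated feature vector for every patch."""
    out = []
    for i, feat in enumerate(features):
        neighbours = [features[j] for j in adjacency[i]] + [feat]
        n = len(neighbours)
        out.append([sum(vals) / n for vals in zip(*neighbours)])
    return out

# Toy face graph: three patches in a chain (0 - 1 - 2).
features = [[1.0, 1.0], [3.0, 3.0], [5.0, 5.0]]
adjacency = [[1], [0, 2], [1]]
print(graph_conv(features, adjacency))
```

Keeping entire patch features per node, instead of collapsing them to scalars, is what lets correlated regions (e.g. both eyes under one expression) inform each other's restoration.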