A Decoupled 3D Facial Shape Model by Adversarial Training
Data-driven generative 3D face models are used to compactly encode facial
shape data into meaningful parametric representations. A desirable property of
these models is their ability to effectively decouple natural sources of
variation, in particular identity and expression. While factorized
representations have been proposed for that purpose, they are still limited in
the variability they can capture and may present modeling artifacts when
applied to tasks such as expression transfer. In this work, we explore a new
direction with Generative Adversarial Networks and show that they yield better
face modeling performance, especially in decoupling natural factors, while
also producing more diverse samples. To train the model, we introduce a
novel architecture that combines a 3D generator with a 2D discriminator that
leverages conventional CNNs, where the two components are bridged by a geometry
mapping layer. We further present a training scheme, based on auxiliary
classifiers, to explicitly disentangle identity and expression attributes.
Through quantitative and qualitative results on standard face datasets, we
illustrate the benefits of our model and demonstrate that it outperforms
competing state-of-the-art methods in terms of decoupling and diversity.
Comment: camera-ready version for ICCV'1
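The architecture described above, a 3D generator bridged to a conventional 2D CNN discriminator by a geometry mapping layer, with auxiliary classifiers to disentangle identity and expression, can be sketched as follows. This is a hypothetical illustration, not the authors' code: the vertex count, latent sizes, layer widths, and the simple "geometry image" mapping are all assumptions for the sake of a runnable example.

```python
# Hypothetical sketch (not the paper's implementation): a 3D generator
# producing mesh vertices, a geometry-mapping layer that reshapes them into
# a 2D "geometry image" a conventional CNN can consume, and a 2D
# discriminator with auxiliary identity/expression classifier heads.
import torch
import torch.nn as nn

N_VERTS = 1024          # assumed number of mesh vertices (32 * 32)
Z_ID, Z_EXPR = 32, 16   # assumed latent sizes for identity / expression

class Generator3D(nn.Module):
    """Maps separate identity and expression latents to 3D vertex positions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_ID + Z_EXPR, 256), nn.ReLU(),
            nn.Linear(256, N_VERTS * 3),
        )
    def forward(self, z_id, z_expr):
        z = torch.cat([z_id, z_expr], dim=1)
        return self.net(z).view(-1, N_VERTS, 3)

class GeometryMap(nn.Module):
    """Bridges 3D and 2D: lays vertex coordinates out as a 3-channel
    32x32 image so a standard 2D CNN can process the geometry."""
    def forward(self, verts):                              # (B, N_VERTS, 3)
        b = verts.shape[0]
        return verts.view(b, 32, 32, 3).permute(0, 3, 1, 2)  # (B, 3, 32, 32)

class Discriminator2D(nn.Module):
    """2D CNN with a real/fake head plus auxiliary id/expr classifiers."""
    def __init__(self, n_ids=10, n_exprs=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),                                  # -> (B, 32*8*8)
        )
        self.real_fake = nn.Linear(32 * 8 * 8, 1)
        self.id_head = nn.Linear(32 * 8 * 8, n_ids)
        self.expr_head = nn.Linear(32 * 8 * 8, n_exprs)
    def forward(self, img):
        h = self.features(img)
        return self.real_fake(h), self.id_head(h), self.expr_head(h)

gen, gmap, disc = Generator3D(), GeometryMap(), Discriminator2D()
z_id, z_expr = torch.randn(4, Z_ID), torch.randn(4, Z_EXPR)
verts = gen(z_id, z_expr)                       # (4, 1024, 3) vertex batch
rf, id_logits, expr_logits = disc(gmap(verts))
```

During training, the auxiliary heads would be supervised with identity and expression labels so that each latent code is penalized unless it controls only its own attribute, which is the disentanglement mechanism the abstract describes.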
Deep Learning Based Human Emotional State Recognition in a Video
Human emotions play a significant role in everyday life, and automatic emotion recognition has many applications in medicine, e-learning, monitoring, marketing, etc. In this paper, a method and neural network architecture for real-time human emotion recognition from audio-visual data are proposed. To classify one of seven emotions, deep neural networks, namely convolutional and recurrent networks, are used. Visual information is represented by a sequence of 16 frames of 96 × 96 pixels, and audio information by 140 features for each of a sequence of 37 temporal windows. An autoencoder was used to reduce the number of audio features. Audio information used in conjunction with visual information is shown to increase recognition accuracy by up to 12%. The developed system is undemanding of computing resources and is flexible in terms of parameter selection, reducing or increasing the number of emotion classes, and the ability to easily add, accumulate, and use information from other external devices to further improve classification accuracy.
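The pipeline the abstract outlines, a convolutional branch over 16 video frames of 96 × 96 pixels, an autoencoder-style compression of the 140 audio features per window, a recurrent network over the 37 temporal windows, and late fusion into a seven-way classifier, can be sketched as below. All layer sizes beyond those stated in the abstract are assumptions; this is an illustrative shape-correct sketch, not the authors' network.

```python
# Hypothetical sketch (layer widths are assumed; only the input shapes,
# the CNN/RNN split, the autoencoder compression, and the 7 emotion
# classes come from the abstract).
import torch
import torch.nn as nn

class AudioVisualEmotionNet(nn.Module):
    def __init__(self, n_emotions=7, audio_dim=140, audio_bottleneck=32):
        super().__init__()
        # Visual branch: spatio-temporal convs over (C=3, T=16, H=96, W=96)
        self.visual = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),          # -> (B, 16)
        )
        # Autoencoder-style compression of each window's 140 audio features
        self.audio_encoder = nn.Linear(audio_dim, audio_bottleneck)
        # Recurrence over the 37 temporal windows
        self.audio_rnn = nn.LSTM(audio_bottleneck, 32, batch_first=True)
        # Late fusion of the two modalities into a 7-way classifier
        self.classifier = nn.Linear(16 + 32, n_emotions)

    def forward(self, video, audio):
        # video: (B, 3, 16, 96, 96); audio: (B, 37, 140)
        v = self.visual(video)
        a = torch.relu(self.audio_encoder(audio))
        _, (h, _) = self.audio_rnn(a)         # final hidden state (1, B, 32)
        fused = torch.cat([v, h.squeeze(0)], dim=1)
        return self.classifier(fused)

net = AudioVisualEmotionNet()
logits = net(torch.randn(2, 3, 16, 96, 96), torch.randn(2, 37, 140))
```

Fusing the two branches only at the final layer keeps each modality's feature extractor independent, which matches the abstract's claim that modalities (and external devices) can be added or removed without retraining the whole system.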