36 research outputs found
Dynamic Facial Expression Generation on Hilbert Hypersphere with Conditional Wasserstein Generative Adversarial Nets
In this work, we propose a novel approach for generating videos of the six
basic facial expressions given a neutral face image. We propose to exploit the
face geometry by modeling the motion of facial landmarks as curves encoded as points on a hypersphere. By proposing a conditional version of a manifold-valued Wasserstein generative adversarial network (GAN) for motion generation on the
hypersphere, we learn the distribution of facial expression dynamics of
different classes, from which we synthesize new facial expression motions. The
resulting motions can be transformed to sequences of landmarks and then to
image sequences by editing the texture information using another conditional
Generative Adversarial Network. To the best of our knowledge, this is the first
work that explores manifold-valued representations with GAN to address the
problem of dynamic facial expression generation. We evaluate our proposed
approach both quantitatively and qualitatively on two public datasets:
Oulu-CASIA and MUG Facial Expression. Our experimental results demonstrate the
effectiveness of our approach in generating realistic videos with continuous
motion, realistic appearance and identity preservation. We also show the
efficiency of our framework for dynamic facial expression generation, dynamic facial expression transfer, and data augmentation for training improved emotion recognition models.
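As a rough illustration of the landmark-motion encoding described above, the sketch below maps a 2D landmark sequence to a single point on the unit hypersphere via a square-root-velocity-style transform followed by L2 normalization. The function name, array shapes, and normalization details are assumptions made for illustration; the abstract does not specify the exact parameterization.

```python
import numpy as np

def encode_landmark_motion(landmarks):
    """Map a facial-landmark motion sequence to a point on the unit hypersphere.

    landmarks: array of shape (T, K, 2) -- T frames, K 2D landmarks.
    Illustrative square-root-velocity-style encoding followed by L2
    normalization; the exact parameterization used in the paper may differ.
    """
    velocity = np.diff(landmarks, axis=0)                      # (T-1, K, 2) frame-to-frame motion
    speed = np.linalg.norm(velocity, axis=-1, keepdims=True)   # (T-1, K, 1) per-landmark speed
    srv = velocity / np.sqrt(np.maximum(speed, 1e-8))          # square-root velocity form
    q = srv.ravel()
    return q / (np.linalg.norm(q) + 1e-8)                      # unit norm => point on the hypersphere

# Toy usage: a random 30-frame sequence of 68 landmarks.
sequence = np.random.rand(30, 68, 2)
q = encode_landmark_motion(sequence)
print(q.shape, np.linalg.norm(q))   # norm is ~1, i.e. a hypersphere point
```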
Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories
In this paper, we propose a new approach for facial expression recognition
using deep covariance descriptors. The solution is based on the idea of
encoding local and global Deep Convolutional Neural Network (DCNN) features
extracted from still images, in compact local and global covariance
descriptors. The space geometry of the covariance matrices is that of Symmetric
Positive Definite (SPD) matrices. By conducting the classification of static
facial expressions using a Support Vector Machine (SVM) with a valid Gaussian
kernel on the SPD manifold, we show that deep covariance descriptors are more
effective than the standard classification with fully connected layers and
softmax. In addition, we propose an original solution to model the temporal dynamics of facial expressions as deep trajectories on the SPD
manifold. As an extension of the classification pipeline of covariance
descriptors, we apply SVM with valid positive definite kernels derived from
global alignment for deep covariance trajectory classification. By performing
extensive experiments on the Oulu-CASIA, CK+, and SFEW datasets, we show that
both the proposed static and dynamic approaches achieve state-of-the-art
performance for facial expression recognition, outperforming many recent approaches.
Comment: A preliminary version of this work appeared in: Otberdout N., Kacem A., Daoudi M., Ballihi L., Berretti S., "Deep Covariance Descriptors for Facial Expression Recognition," British Machine Vision Conference 2018 (BMVC 2018), Northumbria University, Newcastle, UK, September 3-6, 2018: 159.
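To make the static pipeline concrete, here is a minimal sketch of building a covariance descriptor from pooled DCNN activations and comparing two descriptors with a Gaussian kernel based on the log-Euclidean distance, one well-known valid kernel on the SPD manifold. The function names, the regularization `eps`, and the bandwidth `gamma` are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(features, eps=1e-5):
    """features: (N, d) array of DCNN activations pooled over a face region.
    Returns a d x d SPD covariance descriptor (regularized for stability)."""
    cov = np.cov(features, rowvar=False)
    return cov + eps * np.eye(cov.shape[0])

def spd_gaussian_kernel(c1, c2, gamma=0.1):
    """Gaussian kernel built on the log-Euclidean distance between SPD matrices;
    with this metric the kernel is positive definite and can be plugged into a
    standard SVM. The paper may use a different valid kernel on the manifold."""
    d = np.linalg.norm(np.real(logm(c1)) - np.real(logm(c2)), ord="fro")
    return np.exp(-gamma * d ** 2)

# Toy usage with random "deep features" standing in for DCNN activations.
f1, f2 = np.random.rand(200, 64), np.random.rand(200, 64)
c1, c2 = covariance_descriptor(f1), covariance_descriptor(f2)
print(spd_gaussian_kernel(c1, c2))
```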
Face Detection for Augmented Reality Application Using Boosting-based Techniques
Augmented reality has gained increasing research interest over the last few years. Customers' requirements have become more intense and more demanding, and the various industries increasingly need to re-adapt their products and enhance them with recent advances in computer vision and machine intelligence. In this work we present a marker-less augmented reality application that can be used and extended in the e-commerce industry. We take advantage of well-known boosting techniques to train and evaluate different face detectors using multi-block local binary pattern features. The purpose of this work is to select the most relevant training parameters in order to maximize classification accuracy. Using the resulting face detector, the position of the face serves as the marker in the proposed augmented reality application.
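As a hedged sketch of the detection step, the snippet below runs a pretrained boosted LBP cascade with OpenCV and uses the detected face rectangle as the marker for overlaying content. It assumes the cascade file `lbpcascade_frontalface.xml` (distributed with the OpenCV sources) and an input image `frame.jpg` are available locally; the detectors trained in this work use their own multi-block LBP training parameters, which the abstract does not give.

```python
import cv2

# Boosted LBP cascade: detect the face and use its bounding box as the
# "marker" for anchoring virtual content in the AR view.
detector = cv2.CascadeClassifier("lbpcascade_frontalface.xml")  # assumed local file

frame = cv2.imread("frame.jpg")                        # one camera frame (assumed input)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # The face rectangle plays the role of the marker; here we simply draw it,
    # where a real application would render the virtual object at this anchor.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("augmented.jpg", frame)
```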
Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories
In this paper, we propose a new approach for facial expression recognition using deep covariance descriptors. The solution is based on the idea of encoding local and global Deep Convolutional Neural Network (DCNN) features extracted from still images, in compact local and global covariance descriptors. The space geometry of the covariance matrices is that of Symmetric Positive Definite (SPD) matrices. By conducting the classification of static facial expressions using a Support Vector Machine (SVM) with a valid Gaussian kernel on the SPD manifold, we show that deep covariance descriptors are more effective than the standard classification with fully connected layers and softmax. In addition, we propose an original solution to model the temporal dynamics of facial expressions as deep trajectories on the SPD manifold. As an extension of the classification pipeline of covariance descriptors, we apply SVM with valid positive definite kernels derived from global alignment for deep covariance trajectory classification. By performing extensive experiments on the Oulu-CASIA, CK+, SFEW and AFEW datasets, we show that both the proposed static and dynamic approaches achieve state-of-the-art performance for facial expression recognition, outperforming many recent approaches.
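For the dynamic part, here is a minimal sketch of a global alignment kernel between two trajectories of SPD covariance descriptors, using a Gaussian local similarity on the log-Euclidean distance. It only illustrates the dynamic-programming structure of alignment kernels; the specific positive definite kernels derived in the paper may differ in the local similarity and normalization.

```python
import numpy as np
from scipy.linalg import logm

def global_alignment_kernel(traj_a, traj_b, sigma=1.0):
    """Global alignment kernel between two trajectories of SPD covariance
    descriptors (one matrix per frame), with a Gaussian local similarity on
    the log-Euclidean distance. Illustrative only; not the paper's exact kernel."""
    logs_a = [np.real(logm(c)) for c in traj_a]   # matrix logarithms, computed once
    logs_b = [np.real(logm(c)) for c in traj_b]
    n, m = len(logs_a), len(logs_b)
    M = np.zeros((n + 1, m + 1))
    M[0, 0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(logs_a[i - 1] - logs_b[j - 1], ord="fro")
            k_local = np.exp(-d ** 2 / (2.0 * sigma ** 2))
            # Accumulate the local similarity over all monotone alignments.
            M[i, j] = k_local * (M[i - 1, j] + M[i, j - 1] + M[i - 1, j - 1])
    return M[n, m]

# Toy usage: two short trajectories of random SPD matrices.
def random_spd(d=8):
    a = np.random.rand(d, d)
    return a @ a.T + d * np.eye(d)

traj_a = [random_spd() for _ in range(10)]
traj_b = [random_spd() for _ in range(12)]
print(global_alignment_kernel(traj_a, traj_b))
```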
Which 3D facial geometric features convey your identity?
Session "Articles"National audienceLa reconnaissance de visages 3D basée sur les courbes faciales 3D de différentes natures (courbes de niveaux, courbes iso-géodésiques, courbes radiales, profils, polarisation géodésique, etc), est une problématique de reconnaissance des formes largement abordée dans la littérature. Cette représentation par des courbes permet notamment d'analyser localement la forme de la surface faciale contrairement aux approches basées sur les surfaces entières. Elle a l'avantage de faire face aux variations de la pose (le visage test peut correspondre seulement à une partie du visage enrôlé) ou dans le cas des données manquantes (visage altéré par les occultations). Deux questions qui n'ont pas été abordés dans la littérature sont: Est ce que l'utilisation de toutes les courbes du visage aboutissent aux meilleures performances? Y a-t-il des courbes faciales plus pertinentes que d'autres? Nous essayons de répondre à ces questions dans cet article. Premièrement, nous représentons les surfaces faciales comme des collections de courbes de niveaux et radiales. Ensuite, en utilisant la géométrie Riemannienne nous analysons leurs formes. Enfin nous utilisons l'algorithme AdaBoost pour sélectionner les courbes (caractéristiques géométriques) les plus discriminantes. Les expérimentations, réalisées sur la base FRGCv2 avec le protocole standard, donne un taux de reconnaissance de 98.02% qui est un résultat compétitif vis-à -vis de l'état de l'ar
Positive/Negative Emotion Detection from RGB-D upper Body Images
The ability to identify users' mental states represents a valuable asset for improving human-computer interaction. Considering that spontaneous emotions are conveyed mostly through facial expressions and upper body movements, we propose to use these modalities together for the purpose of negative/positive emotion classification. A method that allows the recognition of mental states from videos is proposed. Based on a dataset composed of RGB-D movies, a set of indicators of positive and negative emotion is extracted from 2D (RGB) information. In addition, a geometric framework to model the depth flows and capture human body dynamics from depth data is proposed. Because spontaneous emotions are characterized by temporal changes in pixel and depth intensity, the depth features are used to define the relation between changes in upper body movements and affect. We describe a space of depth and texture information to detect the mood of people using upper body postures and their evolution across time. The experimentation has been performed on the Cam3D dataset and has shown promising results.
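A very rough sketch of the idea, under assumptions not stated in the abstract: summarize frame-to-frame depth changes over the upper body as a histogram "depth flow" descriptor and classify positive/negative emotion with an SVM. The descriptor, the number of bins, and the classifier settings are placeholders; the geometric depth-flow framework proposed in the paper is more involved.

```python
import numpy as np
from sklearn.svm import SVC

def depth_flow_features(depth_frames, n_bins=16):
    """Crude 'depth flow' descriptor: histogram of frame-to-frame depth changes
    over the upper-body region. Purely illustrative placeholder."""
    diffs = np.diff(depth_frames.astype(np.float32), axis=0)
    hist, _ = np.histogram(diffs, bins=n_bins, density=True)
    return hist

# Toy usage: random depth clips labelled positive (1) / negative (0).
rng = np.random.default_rng(0)
X = np.stack([depth_flow_features(rng.integers(0, 4000, size=(30, 64, 64)))
              for _ in range(20)])
y = rng.integers(0, 2, size=20)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```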
Selection of nasal surface curves for person authentication using AdaBoost
In this paper, we propose to study the contribution of each curve of the nasal region to 3D facial biometrics, using the AdaBoost algorithm. We represent nasal surfaces as collections of closed curves, called nasal curves, and compare them in the space of closed curves through a Riemannian analysis of that space. Considering that a weak classifier can be associated with each curve, we propose to build a final classifier based on the AdaBoost algorithm. Boosting makes it possible to improve on the individual performance of each curve. Experiments on a subset of the FRGC v2 (Face Recognition Grand Challenge) dataset show a clear improvement in authentication results.
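To illustrate how per-curve weak classifiers could be combined, here is a tiny sketch of an AdaBoost-style decision over nasal curves, where each weak classifier thresholds one curve's Riemannian shape distance to the enrolled face. The thresholds and weights are placeholders standing in for values AdaBoost would learn from training data.

```python
import numpy as np

def weak_curve_classifier(distance, threshold):
    """Weak classifier attached to one nasal curve: accept the claimed identity
    (+1) if the curve's shape distance is below a threshold, reject (-1) otherwise."""
    return 1 if distance < threshold else -1

def boosted_decision(distances, thresholds, alphas):
    """AdaBoost-style decision H(x) = sign(sum_t alpha_t h_t(x)).
    Thresholds and alphas would be learned by AdaBoost; placeholders here."""
    score = sum(a * weak_curve_classifier(d, t)
                for d, t, a in zip(distances, thresholds, alphas))
    return np.sign(score)

# Toy usage: 5 nasal curves with per-curve distances to the enrolled face.
distances = [0.12, 0.30, 0.08, 0.45, 0.22]
thresholds = [0.20, 0.25, 0.15, 0.40, 0.30]   # placeholder learned thresholds
alphas = [1.2, 0.7, 1.5, 0.4, 0.9]            # placeholder learned weights
print("accept" if boosted_decision(distances, thresholds, alphas) > 0 else "reject")
```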