Automatic Analysis of Facial Expressions Based on Deep Covariance Trajectories
In this paper, we propose a new approach to facial expression recognition
using deep covariance descriptors. The solution is based on the idea of
encoding local and global Deep Convolutional Neural Network (DCNN) features,
extracted from still images, into compact local and global covariance
descriptors. Covariance matrices lie on the manifold of Symmetric
Positive Definite (SPD) matrices. By classifying static facial expressions
with a Support Vector Machine (SVM) and a valid Gaussian kernel on the SPD
manifold, we show that deep covariance descriptors are more effective than
standard classification with fully connected layers and softmax. In
addition, we propose a novel solution that models the temporal dynamics of
facial expressions as deep trajectories on the SPD manifold. Extending the
classification pipeline for covariance descriptors, we apply SVM with valid
positive-definite kernels derived from global alignment to classify deep
covariance trajectories. By performing
extensive experiments on the Oulu-CASIA, CK+, and SFEW datasets, we show that
both the proposed static and dynamic approaches achieve state-of-the-art
performance for facial expression recognition, outperforming many recent
approaches.
Comment: A preliminary version of this work appeared in "Otberdout N, Kacem A,
Daoudi M, Ballihi L, Berretti S. Deep Covariance Descriptors for Facial
Expression Recognition, in British Machine Vision Conference 2018, BMVC 2018,
Northumbria University, Newcastle, UK, September 3-6, 2018; 2018:159."
arXiv admin note: substantial text overlap with arXiv:1805.0386
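The core encoding step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `covariance_descriptor` summarizes a set of DCNN feature vectors as a regularized covariance matrix, and the kernel uses the log-Euclidean metric on the SPD manifold, which is one standard choice known to yield a valid (positive-definite) Gaussian kernel. The regularization constant and `gamma` are hypothetical parameters.

```python
import numpy as np

def covariance_descriptor(features, eps=1e-6):
    """Encode a set of DCNN feature vectors (n, d) as a d x d SPD
    covariance descriptor; eps regularizes it to stay positive definite."""
    c = np.cov(features, rowvar=False)
    return c + eps * np.eye(c.shape[0])

def log_euclidean_gaussian_kernel(C1, C2, gamma=0.1):
    """Gaussian kernel on the SPD manifold under the log-Euclidean metric.
    Distances are Frobenius norms between matrix logarithms, so the
    resulting Gaussian kernel is positive definite."""
    def logm_spd(C):
        # matrix logarithm of an SPD matrix via its eigendecomposition
        w, v = np.linalg.eigh(C)
        return (v * np.log(w)) @ v.T
    d = np.linalg.norm(logm_spd(C1) - logm_spd(C2), 'fro')
    return np.exp(-gamma * d ** 2)
```

A Gram matrix built from this kernel over training descriptors can be fed to an SVM with a precomputed kernel, mirroring the pipeline the abstract describes.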
Maximized Posteriori Attributes Selection from Facial Salient Landmarks for Face Recognition
This paper presents a robust and dynamic face recognition technique based on
the extraction and matching of probabilistic graphs built on SIFT features
from independent face areas. The face matching strategy matches individual
salient facial graphs, each characterized by SIFT features anchored to facial
landmarks such as the eyes and the mouth. To reduce face matching errors,
Dempster-Shafer decision theory is applied to fuse the individual matching
scores obtained from each pair of salient facial features. The proposed
algorithm is evaluated on the ORL and IITK face databases. The experimental
results demonstrate the effectiveness and potential of the proposed face
recognition technique, even for partially occluded faces.
Comment: 8 pages, 2 figures
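The Dempster-Shafer fusion step can be illustrated with a small sketch. The abstract does not specify the frame of discernment, so this assumes a minimal frame of two singleton hypotheses (genuine match "G", impostor "I") plus the full frame "T" carrying residual uncertainty, combined with Dempster's classical rule:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over {G, I, T} with Dempster's rule.
    G = genuine match, I = impostor, T = the whole frame (uncertainty).
    Conflicting mass (one source says G, the other I) is discarded and
    the remaining masses are renormalized."""
    conflict = m1['G'] * m2['I'] + m1['I'] * m2['G']
    norm = 1.0 - conflict
    return {
        'G': (m1['G'] * m2['G'] + m1['G'] * m2['T'] + m1['T'] * m2['G']) / norm,
        'I': (m1['I'] * m2['I'] + m1['I'] * m2['T'] + m1['T'] * m2['I']) / norm,
        'T': (m1['T'] * m2['T']) / norm,
    }
```

Folding this combination over the per-landmark matching scores (each converted to a mass function, a mapping the paper would define) yields one fused belief per face pair, which is the role Dempster-Shafer theory plays in the abstract.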
Ensemble of Hankel Matrices for Face Emotion Recognition
In this paper, a face emotion is considered the result of the composition
of multiple concurrent signals, each corresponding to the movements of a
specific facial muscle. These concurrent signals are represented by a set of
multi-scale appearance features, each of which may be correlated with one or
more concurrent signals. Extracting these appearance features from a
sequence of face images yields a set of time series. This paper proposes to
use the dynamics regulating each appearance-feature time series to
discriminate among different face emotions. To this purpose, an ensemble of
Hankel matrices corresponding to the extracted time series is used for
emotion classification within a framework that combines a nearest-neighbor
classifier with a majority-vote scheme. Experimental results on a publicly
available dataset show that the adopted representation is promising and
yields state-of-the-art accuracy in emotion classification.
Comment: Paper to appear in Proc. of ICIAP 2015. arXiv admin note: text
overlap with arXiv:1506.0500
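The Hankel representation above can be sketched as follows. Building the matrix is standard; the similarity measure shown here (comparing Frobenius-normalized Gram matrices) is a common choice in the Hankel-based recognition literature, assumed for illustration rather than taken from the paper:

```python
import numpy as np

def hankel_matrix(series, rows):
    """Hankel matrix of a 1-D time series: H[i, j] = series[i + j].
    Each column is an overlapping window, and the constant anti-diagonals
    encode the linear dynamics regulating the series."""
    series = np.asarray(series, dtype=float)
    cols = len(series) - rows + 1
    return np.array([series[j:j + rows] for j in range(cols)]).T

def hankel_similarity(h1, h2):
    """Similarity between two Hankel matrices via their normalized Gram
    matrices (an assumed measure, not necessarily the paper's exact one).
    Identical dynamics give the maximum value of 2."""
    g1 = h1 @ h1.T
    g1 = g1 / np.linalg.norm(g1)
    g2 = h2 @ h2.T
    g2 = g2 / np.linalg.norm(g2)
    return np.linalg.norm(g1 + g2)
```

In the ensemble framework the abstract describes, one such matrix per appearance-feature time series would feed a nearest-neighbor classifier, with a majority vote over the ensemble deciding the emotion label.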
Facial Asymmetry Analysis Based on 3-D Dynamic Scans
Facial dysfunction is a fundamental symptom that often relates to many neurological illnesses, such as stroke, Bell's palsy, and Parkinson's disease. Current methods for detecting and assessing facial dysfunctions mainly rely on trained practitioners and have significant limitations, as the assessments are often subjective. This paper presents a computer-based methodology for facial asymmetry analysis that aims to automatically detect facial dysfunctions. The method is based on dynamic 3-D scans of human faces. Preliminary evaluation results on facial sequences from the Hi4D-ADSIP database suggest that the proposed method can assist in the quantification and diagnosis of facial dysfunctions in neurological patients.
Island Loss for Learning Discriminative Features in Facial Expression Recognition
Over the past few years, Convolutional Neural Networks (CNNs) have shown
promise on facial expression recognition. However, the performance degrades
dramatically under real-world settings due to variations introduced by subtle
facial appearance changes, head pose variations, illumination changes, and
occlusions.
In this paper, a novel island loss (IL) is proposed to enhance the
discriminative power of the deeply learned features. Specifically, the IL is
designed to reduce the intra-class variations while simultaneously enlarging
the inter-class differences. Experimental results on four benchmark
expression databases demonstrate that the CNN with the proposed island loss
(IL-CNN) outperforms baseline CNN models trained with either the traditional
softmax loss or the center loss, and achieves comparable or better
performance than state-of-the-art methods for facial expression recognition.
Comment: 8 pages, 3 figures
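The two objectives named in the abstract can be sketched numerically. This is a simplified NumPy illustration, not the paper's exact formulation or training code: the first term is the familiar center loss (intra-class compactness), and the second penalizes the cosine similarity between every ordered pair of class centers, shifted by +1 so the penalty is non-negative (inter-class separation). The weight `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def island_loss(features, labels, centers, lam=10.0):
    """Island-loss sketch: center loss pulling each feature toward its
    class center, plus a pairwise term pushing distinct class centers
    apart by penalizing their cosine similarity (+1 shift keeps it >= 0)."""
    # center-loss term: intra-class compactness
    lc = 0.5 * np.sum((features - centers[labels]) ** 2)
    # island term: inter-class separation over all ordered center pairs
    li = 0.0
    k = len(centers)
    for i in range(k):
        for j in range(k):
            if i != j:
                cos = centers[i] @ centers[j] / (
                    np.linalg.norm(centers[i]) * np.linalg.norm(centers[j]))
                li += cos + 1.0
    return lc + lam * li
```

In training, this term would be added to the softmax loss and minimized jointly, so features cluster tightly around centers that are themselves pushed toward mutual orthogonality or beyond.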