    3D FACE RECOGNITION USING LOCAL FEATURE BASED METHODS

    Face recognition has attracted more attention from researchers than other biometrics due to its non-intrusive and friendly nature. Although several methods for 2D face recognition have been proposed, challenges remain for 2D faces, including illumination, pose variation, and facial expression. In the last few decades, 3D face research has become more attractive since shape and geometry information can be used to handle the challenges posed by 2D faces. Existing face recognition algorithms fall into three categories: holistic feature-based, local feature-based, and hybrid methods. According to the literature, local feature-based methods outperform holistic feature-based methods under expression and occlusion challenges. In this dissertation, local feature-based methods for 3D face recognition are studied and surveyed. In the survey, local methods are classified into three broad categories: keypoint-based, curve-based, and local surface-based methods. Inspired by keypoint-based methods, which are effective in handling partial occlusion, a structural context descriptor on pyramidal shape maps and texture images is proposed in a multimodal scheme. Score-level fusion is used to combine keypoint matching scores from the texture and shape modalities. The survey shows that local surface-based methods are efficient in handling facial expression. Accordingly, a local derivative pattern is introduced to extract distinctive features from the depth map; the local derivative pattern is also applied to surface normals. Most 3D face recognition algorithms focus on depth information to detect and extract features. Compared to depth maps, the surface normal at each point determines the facial surface orientation, which provides an efficient surface representation for extracting distinctive features for the recognition task. An Extreme Learning Machine (ELM)-based auto-encoder is used to make the feature space more discriminative. Expression- and occlusion-robust analysis using the information from the normal maps is investigated by dividing the facial region into patches. A novel hybrid classifier is proposed that combines a Sparse Representation Classifier (SRC) and an ELM classifier in a weighted scheme. The proposed algorithms have been evaluated on four widely used 3D face databases: FRGC, Bosphorus, BU-3DFE, and 3D-TEC. The experimental results illustrate the effectiveness of the proposed approaches. The main contribution of this work lies in the identification and analysis of effective local features and a classification method for improving 3D face recognition performance.
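The abstract describes score-level fusion of shape and texture matching scores but gives no implementation details. A minimal sketch of one common scheme (min-max normalization followed by a weighted sum; the function names, toy scores, and the weight value are illustrative assumptions, not the dissertation's actual parameters):

```python
def minmax_normalize(scores):
    """Scale a list of match scores to [0, 1] (min-max normalization)."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(shape_scores, texture_scores, w_shape=0.6):
    """Weighted score-level fusion of two modalities.

    shape_scores / texture_scores hold per-gallery-subject match scores
    for one probe. w_shape is a hypothetical weight; in practice it
    would be tuned on a validation set.
    """
    s = minmax_normalize(shape_scores)
    t = minmax_normalize(texture_scores)
    return [w_shape * a + (1 - w_shape) * b for a, b in zip(s, t)]

# Identify the probe as the gallery entry with the highest fused score.
fused = fuse_scores([0.2, 0.9, 0.4], [0.3, 0.7, 0.6])
best = max(range(len(fused)), key=fused.__getitem__)
```

Fusion at the score level (rather than the feature or decision level) keeps the two modalities' matchers independent, so one can be replaced without retraining the other.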

    Analyse locale de la forme 3D pour la reconnaissance d'expressions faciales

    In this paper we propose a novel approach for identity-independent 3D facial expression recognition. Our approach is based on shape analysis of local patches extracted from the 3D facial shape model. A Riemannian framework is applied to compute geodesic distances between corresponding patches belonging to different faces of the BU-3DFE database and conveying different expressions. Quantitative measures of similarity are obtained and then used as inputs to several classification methods. Using Multiboosting and Support Vector Machine (SVM) classifiers, we achieved average recognition rates on the six basic expressions of 98.81% and 97.75%, respectively.
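The pipeline above turns patch-wise distances into a feature vector for a classifier. A rough illustration of that idea, with stated simplifications: a plain Euclidean distance between patch centroids stands in for the paper's Riemannian geodesic patch distance, and a 1-nearest-neighbour rule stands in for the SVM/Multiboosting classifiers; all names and toy data are hypothetical:

```python
import math

def euclidean(p, q):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def patch_distance_features(face_patches, reference_patches, dist):
    """One feature per patch: distance between corresponding patches.

    `dist` is a placeholder for the geodesic (Riemannian) patch distance
    used in the paper.
    """
    return [dist(p, q) for p, q in zip(face_patches, reference_patches)]

def classify_1nn(features, labelled_examples):
    """Nearest-neighbour stand-in for the SVM/Multiboost classifiers.

    labelled_examples: list of (feature_vector, expression_label) pairs.
    """
    return min(labelled_examples,
               key=lambda ex: euclidean(features, ex[0]))[1]
```

Because each feature measures shape deviation between *corresponding* patches, the representation is tied to local deformation rather than global identity, which is what makes the approach identity-independent.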

    Facial Expression Recognition


    Improving Facial Analysis and Performance Driven Animation through Disentangling Identity and Expression

    We present techniques for improving performance-driven facial animation, emotion recognition, and facial key-point or landmark prediction using learned identity-invariant representations. Established approaches to these problems can work well if sufficient examples and labels for a particular identity are available and factors of variation are highly controlled. However, labeled examples of facial expressions, emotions, and key-points for new individuals are difficult and costly to obtain. In this paper we improve the ability of techniques to generalize to new and unseen individuals by explicitly modeling previously seen variations related to identity and expression. We use a weakly-supervised approach in which identity labels are used to learn the factors of variation linked to identity separately from factors related to expression. We show how probabilistic modeling of these sources of variation allows one to learn identity-invariant representations for expressions, which can then be used to identity-normalize various procedures for facial expression analysis and animation control. We also show how to extend the widely used techniques of active appearance models and constrained local models by replacing the underlying point distribution models, which are typically constructed using principal component analysis, with identity-expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation, and facial key-point tracking. Comment: to appear in the Image and Vision Computing Journal (IMAVIS).
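The core idea of an identity-expression factorized shape model can be illustrated with a simple additive linear form: a shape is the mean plus an identity term and an expression term, and identity-normalization zeroes out the identity coefficients. This is a toy sketch under that linear assumption; the paper actually learns the factors probabilistically with weak supervision, and every name below is hypothetical:

```python
def synthesize_shape(mean, id_basis, expr_basis, id_coeffs, expr_coeffs):
    """Additive identity-expression factorization of a shape vector:

        s = mean + B_id @ c_id + B_expr @ c_expr

    (hypothetical linear form; bases are lists of rows, coeffs are lists).
    """
    def matvec(basis, coeffs):
        return [sum(b * c for b, c in zip(row, coeffs)) for row in basis]

    id_part = matvec(id_basis, id_coeffs)
    ex_part = matvec(expr_basis, expr_coeffs)
    return [m + i + e for m, i, e in zip(mean, id_part, ex_part)]

def identity_normalize(mean, id_basis, expr_basis, id_coeffs, expr_coeffs):
    """Drop the identity factor to get an identity-invariant expression shape."""
    zeros = [0.0] * len(id_coeffs)
    return synthesize_shape(mean, id_basis, expr_basis, zeros, expr_coeffs)
```

Splitting the single PCA point distribution model into two bases is what lets the downstream animation or key-point tracker respond to expression changes without being confounded by who the subject is.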