8 research outputs found

    Fusion of Experts for 3D Facial Biometrics Robust to Deformations

    Session "Posters". National audience. In this article we study the contribution of three-dimensional facial geometry to person recognition. The main contribution is to combine several 3D facial biometric experts (matchers) in order to achieve better performance than each expert attains individually, particularly in the presence of expressions. The experts used are: (E1) elastic radial curves; (E2) MS-eLBP, an extended multi-scale version of the LBP operator; (E3) the TPS non-rigid registration algorithm; plus a reference expert (Eref), the well-known ICP rigid registration algorithm. By exploiting the complementarity of the experts, the proposed approach achieves an identification rate above 99% in the presence of facial expressions on the FRGCv2 database. A comparative study against the state of the art confirms the choice of, and the benefit of, combining several experts to achieve better performance.
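A score-level fusion of several matchers, as described in the abstract above, can be sketched as a weighted sum of normalized per-expert scores. This is an illustrative sketch, not the paper's actual fusion rule: the expert names follow the abstract, but the min-max normalization, the weights, and the toy scores are assumptions.

```python
# Hypothetical sketch of score-level fusion of several 3D face matchers.
# The normalization scheme, weights, and scores are illustrative only.

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] so experts become comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(expert_scores, weights):
    """Weighted-sum fusion of normalized per-gallery scores.

    expert_scores: dict expert_name -> list of scores (one per gallery face)
    weights:       dict expert_name -> fusion weight
    Returns the fused score list (one value per gallery face).
    """
    names = list(expert_scores)
    normalized = {n: min_max_normalize(expert_scores[n]) for n in names}
    n_gallery = len(next(iter(expert_scores.values())))
    return [
        sum(weights[n] * normalized[n][i] for n in names)
        for i in range(n_gallery)
    ]

# Toy example: three gallery faces scored by three experts (higher = better).
scores = {
    "E1_radial_curves": [0.9, 0.4, 0.1],
    "E2_MS_eLBP":       [0.8, 0.5, 0.2],
    "E3_TPS":           [0.7, 0.6, 0.3],
}
weights = {"E1_radial_curves": 0.4, "E2_MS_eLBP": 0.35, "E3_TPS": 0.25}
fused = fuse_scores(scores, weights)
best_match = max(range(len(fused)), key=fused.__getitem__)
```

In identification, the gallery face with the highest fused score is reported; complementary experts help because their individual errors tend not to coincide.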

    Spoofing Face Recognition with 3D Masks

    Spoofing is the act of masquerading as a valid user by falsifying data to gain illegitimate access. The vulnerability of recognition systems to spoofing attacks (presentation attacks) is still an open security issue in the biometrics domain, and among all biometric traits the face is exposed to the most serious threat, since it is particularly easy to access and reproduce. In the literature, many different types of face spoofing attacks have been examined and various algorithms have been proposed to detect them. Mainly focusing on 2D attacks forged by displaying printed photos or replaying recorded videos on mobile devices, a significant portion of these studies ground their arguments on the flatness of the spoofing material in front of the sensor. However, with the advancements in 3D reconstruction and printing technologies, this assumption can no longer be maintained. In this paper, we aim to inspect the spoofing potential of subject-specific 3D facial masks for different recognition systems and to address the detection problem of this more complex attack type. In order to assess the spoofing performance of 3D masks against 2D, 2.5D and 3D face recognition, and to analyse various texture-based countermeasures using both 2D and 2.5D data, a parallel study with comprehensive experiments is performed on two datasets: the Morpho database, which is not publicly available, and the newly distributed 3D mask attack database.
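The texture-based countermeasures mentioned above typically rest on local texture operators such as LBP. As a minimal sketch (not the paper's actual detector), the basic 3x3 LBP operator and its histogram can be written as follows; real systems use multi-scale and uniform-pattern extensions and feed the histogram to a classifier.

```python
# Minimal sketch of the basic 8-neighbor LBP operator, a common building
# block of texture-based anti-spoofing features. Pure Python, illustrative.

def lbp_image(img):
    """Compute the 8-neighbor LBP code for each interior pixel.

    img: 2D list of grayscale values. Returns a 2D list of codes in 0..255.
    """
    h, w = len(img), len(img[0])
    # Clockwise neighbor offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                # Set the bit when the neighbor is at least as bright.
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            row.append(code)
        codes.append(row)
    return codes

def lbp_histogram(img):
    """256-bin LBP histogram: the texture feature a detector would classify."""
    hist = [0] * 256
    for row in lbp_image(img):
        for code in row:
            hist[code] += 1
    return hist

# Toy 3x3 patch: every neighbor is brighter than the center, so the single
# interior pixel gets all eight bits set (code 255).
patch = [[9, 9, 9],
         [9, 1, 9],
         [9, 9, 9]]
codes = lbp_image(patch)
```

The intuition for mask detection is that printed or molded surfaces alter micro-texture statistics, which shifts the LBP histogram relative to genuine skin.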

    Contributions to 3D face analysis (region-based approach, holistic approach, and study of degradations)

    Historically and socially, the human face is one of the most natural modalities for determining the identity and the emotional state of a person. It has therefore been exploited in computer vision for person and emotion recognition. Automatic facial analysis algorithms must meet a number of challenges: depending on the scenario, they must be robust to acquisition conditions as well as to facial expressions, identity, aging, and occlusions. The 3D modality has recently been investigated as a counterpoint to several of these issues: in principle, 3D views of an object are insensitive to lighting conditions and are, theoretically, pose-independent as well. This thesis is dedicated to 3D face analysis, more precisely to non-textured 3D face recognition and 3D facial expression recognition.
    We first studied the benefits of a region-based approach to 3D face analysis. The general idea is that a face performing an expression is deformed locally by the activation of muscles or muscle groups; the face can therefore be decomposed into mimic and static regions, and this decomposition exploited for facial analysis. We proposed a specific facial surface parametrization, based on geodesic distances, designed to make the localization of mimic and static regions as robust as possible to expressions. We also proposed a region-based approach to facial expression recognition that compensates for errors in automatic landmark localization. Both approaches were evaluated on standard state-of-the-art databases.
    We then addressed 3D facial analysis from another angle, using a system of representation maps of the 3D surface. The main idea is to project information about the topology of the 3D surface onto the 2D plane, using a geometric descriptor inspired by a mean curvature measure; 3D face recognition and 3D expression recognition are thereby reduced to 2D facial analysis problems, which lets us benefit from the large body of related work in the literature. For face recognition, we extract SIFT keypoints and match them between corresponding maps. For expression recognition, we describe each map using Histograms of Oriented Gradients and classify expressions with multi-class SVMs. In both cases, a simple fusion step aggregates the results obtained at different scales. Both proposals were evaluated on the BU-3DFE database, showing good performance while being fully automatic.
    Finally, we studied the impact of 3D model degradations on the performance of 3D facial analysis algorithms. A 3D facial scan may be an altered representation of its real-life model for several reasons, ranging from the physical capture of the human face to data processing. After studying the origins of potential degradations and proposing a typology of them, we defined a methodology for quantifying their impact on 3D facial analysis algorithms. The principle is to take a database regarded as free of defects and apply canonical, measurable degradations to it; algorithms are then tested on both the clean and the degraded datasets, which quantifies the performance loss caused by each degradation. As an experimental proof of concept, we compared the behaviour of four 3D face recognition algorithms, as well as their fusion, under degradations; the diversity of the observed behaviours validates the relevance of this type of evaluation.
    LYON-Ecole Centrale (690812301) / Sudoc
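The representation-map idea above — projecting a curvature-like descriptor of the 3D surface onto the 2D plane — can be sketched for a depth map with finite differences. The thesis uses a descriptor *inspired by* mean curvature; this sketch computes plain mean curvature of a depth map z(x, y) and should be read as an illustrative assumption, not the thesis's operator.

```python
# Illustrative sketch: project a geometric descriptor of a 3D surface
# (here, mean curvature of a depth map) onto the 2D plane, so that 2D
# tools like SIFT or HOG can be applied to the resulting map.

def mean_curvature_map(z):
    """Mean curvature H at interior samples of a depth map (2D list),
    using central finite differences with unit grid spacing."""
    h, w = len(z), len(z[0])
    H = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            zx = (z[y][x + 1] - z[y][x - 1]) / 2.0
            zy = (z[y + 1][x] - z[y - 1][x]) / 2.0
            zxx = z[y][x + 1] - 2 * z[y][x] + z[y][x - 1]
            zyy = z[y + 1][x] - 2 * z[y][x] + z[y - 1][x]
            zxy = (z[y + 1][x + 1] - z[y + 1][x - 1]
                   - z[y - 1][x + 1] + z[y - 1][x - 1]) / 4.0
            # Mean curvature of the graph z(x, y).
            num = (1 + zx * zx) * zyy - 2 * zx * zy * zxy + (1 + zy * zy) * zxx
            den = 2 * (1 + zx * zx + zy * zy) ** 1.5
            row.append(num / den)
        H.append(row)
    return H

# Sanity check: a flat depth map has zero mean curvature everywhere.
flat = [[0.0] * 4 for _ in range(4)]
curv = mean_curvature_map(flat)
```

Once the surface is encoded as such 2D maps (possibly at several scales), keypoint extraction, description, and fusion proceed entirely in the 2D domain, as the abstract describes.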

    New Experiments on ICP-Based 3D Face Recognition and Authentication

    International audience. In this paper, we discuss new experiments on face recognition and authentication based on three-dimensional surface matching. While most existing methods use facial intensity images, the newest ones introduce depth information to overcome some of the classical face recognition problems such as pose, illumination, and facial expression variations. The presented matching algorithm is based on ICP (Iterative Closest Point), which accurately recovers the pose of the presented probe; the similarity metric is the spatial deviation between the overlapped parts of the matched surfaces. The general paradigm consists in building a full 3D face gallery using a laser-based scanner (the off-line phase). In the on-line phase, identification or verification, a single captured 2.5D face model is matched against the whole set of 3D faces in the gallery, or compared to the 3D face model of the claimed identity, respectively. The probe model can be acquired from an arbitrary viewpoint, with arbitrary facial expressions, and under arbitrary lighting conditions. A new multi-view registered 3D face database including these variations was developed within the BioSecure Workshop 2005 in order to perform significant experiments.
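The ICP loop underlying the matcher above alternates two steps: match each probe point to its closest gallery point, then update the rigid alignment. The following sketch is deliberately simplified — it estimates only the translation component, whereas full ICP also solves for the rotation (e.g. via SVD) at every step — and the final RMS deviation plays the role of the similarity metric the abstract describes.

```python
# Hypothetical, simplified ICP loop (translation-only) illustrating the
# correspond-then-align iteration; full ICP also estimates rotation.
import math

def closest_point(p, cloud):
    """Nearest neighbor of point p in the gallery cloud (brute force)."""
    return min(cloud, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))

def icp_translation(probe, gallery, iterations=20):
    """Align `probe` to `gallery` (lists of 3D tuples); return (aligned, rmse)."""
    pts = [tuple(p) for p in probe]
    for _ in range(iterations):
        # Step 1: current closest-point correspondences.
        pairs = [(p, closest_point(p, gallery)) for p in pts]
        # Step 2: mean residual gives the translation update.
        t = [sum(q[k] - p[k] for p, q in pairs) / len(pairs) for k in range(3)]
        pts = [(p[0] + t[0], p[1] + t[1], p[2] + t[2]) for p in pts]
    # Spatial deviation between overlapped parts = the similarity metric.
    rmse = math.sqrt(sum(
        sum((a - b) ** 2 for a, b in zip(p, closest_point(p, gallery)))
        for p in pts) / len(pts))
    return pts, rmse

# Toy example: the probe is a copy of the gallery shifted by (0.5, 0.5, 0).
gallery = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
probe = [(x + 0.5, y + 0.5, z) for x, y, z in gallery]
aligned, rmse = icp_translation(probe, gallery)
```

A small residual deviation after convergence indicates a genuine match; large deviations separate impostors in both the identification and verification scenarios.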

    Heritability of facial morphology

    Facial recognition methodologies, widely used today in everything from automatic passport controls at airports to unlocking mobile phones, have developed greatly in recent years. The methodologies vary from feature-based landmark comparisons in 2D and 3D utilising Principal Component Analysis (PCA), to surface-based Iterative Closest Point (ICP) analysis, and a wide variety of techniques in between. The aim of all facial recognition software (FCS) is to find or match a target face with a reference face of a known individual from an existing database. FCS, however, faces many challenges, including temporal variations due to development/ageing and variations in facial expression. To determine any quantifiable heritability of facial morphology using this resource, one has to look for faces with enough demonstrable similarities to predict a possible genetic link, instead of the ordinary matching of the same individual's face in different instances. With the exception of identical twins, this means introducing many more variables into the question of how to relate faces to each other. Variation due to both developmental and degenerative ageing becomes a much greater issue than in previous matching situations, especially when comparing parents with children. Additionally, sexual dimorphism is encountered in cross-gender relationships, for example between mothers and sons. Non-inherited variables are also encountered, such as BMI, facial disfigurement, and the effects of dental work and tooth loss. For this study a Trimmed Iterative Closest Point (TrICP) algorithm was applied to three-dimensional surface scans, created using a white-light scanner and Flexscan 3D, of the faces of 41 families comprising 139 individuals. The TrICP algorithm produced 7176 mesh-to-mesh values (MMV) for each of seven sections of the face (whole face, eyes, nose, mouth, eyes-nose, eyes-nose-mouth, and eyes-nose-mouth-chin).
    Receiver Operating Characteristic (ROC) analysis was then conducted for each of the seven sections of the face within 11 predetermined categories of relationship, in order to assess the utility of the method for predicting familial relationships (sensitivity/specificity). Additionally, the MMVs of three single features (eyes, nose and mouth) were combined to form four combination areas, which were analysed within the same 11 relationship categories. Overall, the relationship between sisters showed the most similarity across all areas of the face, with the clear exception of the mouth; wherever female-to-female comparison was conducted, the mouth consistently and negatively affected the results. The father-daughter relationship showed the least similarity overall and was significant for only three of the 11 portions of the face. In general, the combinations of three single features achieved greater accuracy, as shown by the Areas Under the Curve (AUC), than all other portions of the face, and single features were less predictive than the face as a whole.
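The ROC analysis above summarizes how well a score separates related from unrelated pairs via the area under the curve. As a minimal sketch under assumed conventions (the toy MMV scores are made up, and higher is taken to mean more similar), AUC can be computed directly with its Mann-Whitney formulation — the probability that a randomly chosen related pair outscores a randomly chosen unrelated pair.

```python
# Illustrative AUC computation (Mann-Whitney formulation); the MMV scores
# below are fabricated for the example, only the statistic is standard.

def auc(related, unrelated):
    """Probability that a random related-pair score exceeds a random
    unrelated-pair score, with ties counted as half a win."""
    wins = 0.0
    for r in related:
        for u in unrelated:
            if r > u:
                wins += 1.0
            elif r == u:
                wins += 0.5
    return wins / (len(related) * len(unrelated))

# Toy MMV similarity scores (assumed convention: higher = more similar).
related_mmv = [0.9, 0.8, 0.7, 0.6]
unrelated_mmv = [0.5, 0.4, 0.65, 0.3]
score = auc(related_mmv, unrelated_mmv)
```

An AUC of 0.5 means the score carries no information about relatedness, while values approaching 1.0 indicate a strongly predictive facial section or feature combination.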