28 research outputs found

    THE FUZZY INTEGRAL IN THE FUSION OF A MULTI-CLASSIFIER SYSTEM FOR FACE RECOGNITION.

    Our objective is the identification of persons through the face modality, based on multi-classifier fusion. We address the question of fusion and its different levels, in particular score-level fusion, which is the focus of this work. The main score normalization methods (Z-norm, QLQ, and the double-sigmoid function), associated with fuzzy-logic combination, are studied. We propose applying the Sugeno and Choquet fuzzy integrals to fuse the scores of a multi-classifier face verification system. The score combination systems are built by extracting facial features with Gabor wavelets and Principal Component Analysis (PCA), plus the Enhanced Fisher linear discriminant Model (EFM) as a dimensionality reduction method
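The Sugeno and Choquet integrals mentioned above fuse scores with respect to a fuzzy measure over classifier subsets. A minimal sketch follows; the classifier names and fuzzy measure values are hypothetical, not taken from the paper:

```python
def choquet_integral(scores, g):
    """Choquet integral of per-classifier scores w.r.t. a fuzzy measure g.

    scores: dict classifier -> score in [0, 1]
    g: dict frozenset of classifiers -> measure value, with g(all classifiers) = 1.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    total, prev_g, subset = 0.0, 0.0, frozenset()
    for name, s in items:
        subset = subset | {name}
        total += s * (g[subset] - prev_g)  # weight each score by the measure gain
        prev_g = g[subset]
    return total

def sugeno_integral(scores, g):
    """Sugeno integral: max over min(k-th score, measure of the top-k classifiers)."""
    items = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    subset, best = frozenset(), 0.0
    for name, s in items:
        subset = subset | {name}
        best = max(best, min(s, g[subset]))
    return best

# Hypothetical fuzzy measure over two face classifiers
g = {frozenset({"gabor"}): 0.7, frozenset({"pca_efm"}): 0.3,
     frozenset({"gabor", "pca_efm"}): 1.0}
scores = {"gabor": 0.9, "pca_efm": 0.6}
print(choquet_integral(scores, g))  # 0.9*0.7 + 0.6*(1.0-0.7) = 0.81
print(sugeno_integral(scores, g))   # max(min(0.9, 0.7), min(0.6, 1.0)) = 0.7
```

When the measure is additive, the Choquet integral reduces to a weighted sum of the scores; non-additive measures let it reward or penalize classifiers that agree.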

    Improving CNN-based Person Re-identification using score Normalization

    Person re-identification (PRe-ID) is a crucial task in security, surveillance, and retail analysis that involves identifying an individual across multiple cameras and views. It is challenging due to changes in illumination, background, and viewpoint, so efficient feature extraction and metric learning algorithms are essential for a successful PRe-ID system. This paper proposes a novel approach for PRe-ID that combines a Convolutional Neural Network (CNN) based feature extraction method with Cross-view Quadratic Discriminant Analysis (XQDA) for metric learning. Additionally, a matching algorithm that employs the Mahalanobis distance and a score normalization process to address inconsistencies between camera scores is implemented. The proposed approach is tested on four challenging datasets (VIPeR, GRID, CUHK01, and PRID450S) with promising results. For example, without normalization, the rank-20 accuracies on GRID, CUHK01, VIPeR, and PRID450S were 61.92%, 83.90%, 92.03%, and 96.22%; after score normalization, they increased to 64.64%, 89.30%, 92.78%, and 98.76%, respectively, indicating the effectiveness of the proposed approach
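The score normalization step that lifts the rank-20 rates above can be sketched as a standard z-norm applied per camera, so that score distributions from different cameras become comparable before ranking (a generic sketch, not the authors' exact procedure):

```python
import numpy as np

def znorm(scores):
    """Z-score normalization: map one camera's score distribution to
    zero mean and unit variance, making scores from cameras with
    different scales and offsets directly comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

# Two cameras whose raw score scales disagree badly
cam_a = znorm([0.2, 0.4, 0.9])
cam_b = znorm([10.0, 30.0, 80.0])
print(cam_a)
print(cam_b)
```

After normalization both score lists live on the same scale, so a single ranking threshold can be applied across cameras.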

    Biometric Recognition by Multimodal Fusion of 2D and 3D Faces

    Facial recognition is one of the best biometric modalities for applications related to the identification or verification of people: it is the modality used by humans, it is non-intrusive, and it is socially well accepted. Unfortunately, human faces are similar and therefore offer little possibility of distinction compared with other biometric modalities, such as fingerprints and iris. Moreover, for 2D face images, the intra-class variations, due to factors as diverse as changes in lighting conditions, cosmetics, and pose, are generally greater than the inter-class variations, which makes 2D face recognition unreliable under real-world conditions. Recently, 3D representations of faces have been widely studied by the scientific community to address unresolved issues in 2D facial recognition. This thesis is devoted to robust facial recognition using the fusion of 2D and 3D facial data. The first part of our study covers uni-modal and multi-algorithm 2D face verification. First, we study several methods to select the best face authentication systems. Next, we present multi-modality and score fusion methods for both the combination and classification approaches. Finally, the score fusion methods are compared on the XM2VTS face database. In the second part, we propose an automatic face authentication algorithm that merges two multimodal systems (multi-algorithm and multi-sensor 2D + 3D). First, we correct head rotation with the ICP algorithm, then present six local feature extraction methods (MSLBP, the proposed CSL, Gabor wavelets, LBP, LPQ, and BSIF). Classification of the features is carried out with the cosine metric after dimensionality reduction by EFM, followed by score-level fusion with a two-class SVM classifier. The application is performed on the CASIA 3D and Bosphorus databases.
In the last part, we study uni-modal 2D and 3D and multimodal (2D + 3D) face verification based on the fusion of local information. The study consists of three main stages: preprocessing, feature extraction, and classification. In the preprocessing stage, the ICP algorithm is used to align all faces and the PS approach is used to reduce the influence of illumination variation in 2D images. In the second stage, we use four local descriptors (LBP, LPQ, BSIF, and the proposed Statistical LBP). After feature extraction, the 2D or 3D facial image is divided into 10 regions and each region into 15 small blocks; the local features are summarized by the corresponding histograms of each block. In the last stage, we propose using EDA coupled with WCCN to reduce the histogram dimension for each region. We validate the proposed methods by comparing them with those in the scientific literature on the FRGC v2 and CASIA 3D databases
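The block-wise local descriptor pipeline can be sketched with a basic 3×3 LBP followed by per-block histograms. This is a simplified sketch: it uses a 4×4 grid rather than the thesis's 10 regions of 15 blocks, and omits the EDA/WCCN reduction:

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: each interior pixel gets an 8-bit code encoding
    which of its 8 neighbours are >= the centre pixel."""
    h, w = img.shape
    c = img[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)
    return codes

def block_lbp_histograms(img, grid=(4, 4)):
    """Divide the LBP code map into grid blocks and concatenate the
    per-block 256-bin histograms into one local descriptor."""
    codes = lbp_codes(img)
    gy, gx = grid
    h, w = codes.shape
    feats = []
    for by in range(gy):
        for bx in range(gx):
            block = codes[by * h // gy:(by + 1) * h // gy,
                          bx * w // gx:(bx + 1) * w // gx]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            feats.append(hist / max(block.size, 1))  # normalize per block
    return np.concatenate(feats)

face = np.random.default_rng(0).integers(0, 256, size=(64, 64))
desc = block_lbp_histograms(face)
print(desc.shape)  # (4096,) = 4*4 blocks * 256 bins
```

Keeping one histogram per block, rather than one for the whole image, preserves the spatial layout of the face, which is what makes these descriptors discriminative.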

    Study of modality fusion for biometric authentication (face, voice)

    Face identification and/or verification has several advantages over other biometric technologies: it is natural, non-intrusive, and easy to use. Unimodal biometric systems recognize a person using a single biometric modality, but cannot guarantee correct identification with certainty. A solution is to build multimodal biometric systems obtained by fusing several face recognition systems. In this work, we address several important issues in multimodal biometrics. First, we review the state of the art of face recognition and study several methods for selecting the best face authentication systems. Then, we present multimodality and score fusion methods for both the combination and classification approaches. Finally, the score fusion methods are compared on the XM2VTS face database and on the XM2VTS face and voice scores, following its associated protocol (Lausanne Protocol 1)
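The combination approach to score fusion can be sketched as min-max normalization followed by a weighted sum of the per-system scores (a generic sketch; the weights and score values are illustrative, not from the thesis):

```python
def minmax(scores):
    """Min-max normalization: map a list of scores into [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def weighted_sum_fusion(per_system_scores, weights):
    """Fuse normalized scores from several systems, one score per claim."""
    norm = [minmax(s) for s in per_system_scores]
    return [sum(w * col[i] for w, col in zip(weights, norm))
            for i in range(len(norm[0]))]

face_scores = [0.2, 0.8, 0.5]    # one score per access claim
voice_scores = [10.0, 90.0, 40.0]  # different raw scale
fused = weighted_sum_fusion([face_scores, voice_scores], weights=[0.6, 0.4])
print(fused)  # the second claim gets the highest fused score
```

The classification approach, by contrast, treats the vector of per-system scores as a feature vector and trains a two-class classifier on it.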

    Learning multi-view deep and shallow features through new discriminative subspace for bi-subject and tri-subject kinship verification

    This paper presents the combination of deep and shallow features (multi-view features) using the proposed metric learning (SILD+WCCN/LR) approach for kinship verification. Our approach is based on an automatic and efficient two-step learning of deep and shallow information. First, five layers of deep features and five shallow features (texture and shape), which represent the facial characteristics involved in kinship relations (Father-Son, Father-Daughter, Mother-Son, and Mother-Daughter), are used to train the proposed Side-Information based Linear Discriminant Analysis integrating Within Class Covariance Normalization (SILD+WCCN) method. Then, each of the features is projected through the discriminative subspace of the proposed SILD+WCCN metric learning method. Finally, a Logistic Regression (LR) method is used to fuse the six scores of the projected features. To show the effectiveness of the SILD+WCCN method, we also run experiments on the LFW database. The proposed automatic Facial Kinship Verification (FKV) approach is compared with existing ones on two challenging kinship databases. The experimental results show its superiority: it reached verification rates of 86.20% and 88.59% for bi-subject matching on the KinFaceW-II and TSKinFace databases, respectively, and verification rates of 90.94% and 91.23% for tri-subject matching on the TSKinFace database for Father-Mother-Son and Father-Mother-Daughter, respectively
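The final logistic-regression fusion of the six per-view scores can be sketched with a tiny batch gradient-descent fit (a sketch on synthetic scores, not the paper's data or training procedure):

```python
import numpy as np

def fit_lr_fusion(S, y, lr=0.5, epochs=2000):
    """Fit w, b for sigmoid(S @ w + b) by batch gradient descent.
    S: (n_pairs, n_views) matrix of per-view similarity scores,
    y: 1 for kin pairs, 0 for non-kin pairs."""
    n, d = S.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(S @ w + b)))
        g = p - y                    # gradient of the log loss w.r.t. logits
        w -= lr * S.T @ g / n
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(0)
# Synthetic per-view scores: kin pairs score higher on all six views
kin = rng.normal(0.7, 0.1, size=(50, 6))
non = rng.normal(0.3, 0.1, size=(50, 6))
S = np.vstack([kin, non])
y = np.concatenate([np.ones(50), np.zeros(50)])
w, b = fit_lr_fusion(S, y)
pred = (S @ w + b) > 0
print((pred == (y == 1)).mean())  # training accuracy
```

The learned weights effectively tell us how much each feature view contributes to the kin/non-kin decision.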

    Robust multimodal 2D and 3D face authentication using local feature fusion

    In this work, we present a robust face authentication approach merging multiple descriptors and exploiting both 3D and 2D information. First, we correct head rotation in 3D with the iterative closest point (ICP) algorithm, followed by an efficient preprocessing phase. Then, we extract different features, namely multi-scale local binary patterns (MSLBP), novel statistical local features (SLF), Gabor wavelets, and the scale invariant feature transform (SIFT). Principal component analysis followed by the enhanced Fisher linear discriminant model is used for dimensionality reduction and classification. Finally, fusion at the score level is carried out using a two-class support vector machine. Extensive experiments are conducted on the CASIA 3D face database. The evaluation of individual descriptors clearly shows the superiority of the proposed SLF features. In addition, applying the (3D+2D) multimodal score-level fusion, the best result is obtained by combining SLF with the MSLBP+SIFT descriptor, yielding an equal error rate of 0.98% and a recognition rate of 97.22%
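The equal error rate quoted above is the operating point where the false-accept and false-reject rates coincide. A minimal sketch of its computation (the score values below are illustrative, not from the paper):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep every observed score as a threshold and return the point
    where the false-accept rate and false-reject rate are closest."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

genuine = [0.9, 0.85, 0.8, 0.7]
impostor = [0.1, 0.2, 0.3, 0.75]
print(equal_error_rate(genuine, impostor))  # 0.25: one impostor overlaps
```

A lower EER means the genuine and impostor score distributions overlap less, which is why it is a standard single-number summary for verification systems.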

    Kinship verification from face images in discriminative subspaces of color components

    Automatic facial kinship verification is a challenging topic in computer vision due to its complexity and its important role in applications such as finding missing children and forensics. This paper presents a Facial Kinship Verification (FKV) approach based on an automatic and efficient two-step learning of color/texture information. Most proposed methods for automatic kinship verification from face images consider luminance information only (i.e. gray-scale) and exclude chrominance information (i.e. color), which can be a helpful additional cue for predicting relationships. We explore the joint use of color-texture information from the chrominance and luminance channels by extracting complementary low-level features from different color spaces. More specifically, features are extracted from each color channel of the face image and fused to achieve better discrimination. We investigate different descriptors on the existing face kinship databases in seven color spaces, illustrating the usefulness of color information compared with the gray-scale counterparts. In particular, we generate three subspace projection matrices from each color space and then apply a score fusion methodology to fuse the three distances computed for each test pair of face images. Experiments on three benchmark databases, namely Cornell KinFace, KinFaceW (I & II), and TSKinFace, show superior results compared with the state of the art
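The per-channel fusion idea can be sketched as computing one distance per color channel and averaging them. This is a simplified sketch: the paper first projects each channel through a learned subspace, which is omitted here, and the feature vectors are random stand-ins:

```python
import numpy as np

def cosine_distance(a, b):
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def fuse_channel_distances(feats_a, feats_b):
    """feats_a, feats_b: lists of per-channel feature vectors (e.g. the
    three channels of one color space) for two face images; the fused
    score is the mean of the per-channel distances."""
    return float(np.mean([cosine_distance(a, b)
                          for a, b in zip(feats_a, feats_b)]))

rng = np.random.default_rng(1)
img1 = [rng.normal(size=32) for _ in range(3)]
img2 = [v + rng.normal(scale=0.05, size=32) for v in img1]  # near-duplicate
img3 = [rng.normal(size=32) for _ in range(3)]               # unrelated
d_same = fuse_channel_distances(img1, img2)
d_diff = fuse_channel_distances(img1, img3)
print(d_same < d_diff)  # the near-duplicate pair is closer
```

Averaging distances rather than concatenating features keeps each channel's contribution on the same scale, which is the point of fusing at the score level.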

    Feature fusion via deep random forest for facial age estimation

    In the last few years, human age estimation from face images has attracted the attention of many researchers in computer vision and machine learning owing to its numerous applications. In this paper, we propose a new architecture for age estimation based on facial images. It is mainly built on a cascade of classification tree ensembles, recently known as a Deep Random Forest (DRF). Our architecture is composed of two types of DRF. The first type extends and enhances the feature representation of a given facial descriptor. The second type operates on the fused form of all enhanced representations to predict age while taking into account the fuzziness of human age. While the proposed methodology can work with any kind of image features, the face descriptors adopted in this work use off-the-shelf deep features, retaining both the rich deep features and the powerful enhancement and decision capability of the proposed architecture. Experiments conducted on six public databases prove the superiority of the proposed architecture over other state-of-the-art methods

    Kinship verification based deep and tensor features through extreme learning machine

    Checking the kinship of facial images is a difficult research topic in computer vision that has attracted attention in recent years. The methods suggested so far are not strong enough to predict kinship relationships from facial appearance alone. To mitigate this problem, we propose a new approach to kinship verification called Deep-Tensor+ELM, based on deep features (the VGG-Face descriptor) and tensor features (BSIF-Tensor and LPQ-Tensor obtained with the MSIDA method) classified through an Extreme Learning Machine (ELM). While ELM is well suited to training features of small dimension, deep and tensor features have been shown to provide significant enhancement over shallow or vector-based counterparts. We evaluate the proposed method on the largest kinship benchmark, the FIW database, using four Grandparent-Grandchild relations (GF-GD, GF-GS, GM-GD, and GM-GS). The results compare favorably with recent methods, including those that rely on deep learning
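An Extreme Learning Machine itself is simple to sketch: random fixed hidden weights, then a single least-squares solve for the output layer. This is a generic single-layer ELM on synthetic two-class data, not the paper's full Deep-Tensor+ELM pipeline:

```python
import numpy as np

def elm_train(X, y, hidden=60, seed=0):
    """Random input weights, tanh hidden layer, pseudo-inverse readout."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y      # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
# Two synthetic classes standing in for kin / non-kin feature vectors
pos = rng.normal(+1.0, 0.7, size=(40, 4))
neg = rng.normal(-1.0, 0.7, size=(40, 4))
X = np.vstack([pos, neg])
y = np.concatenate([np.ones(40), -np.ones(40)])
W, b, beta = elm_train(X, y)
acc = ((elm_predict(X, W, b, beta) > 0) == (y > 0)).mean()
print(acc)  # training accuracy
```

Because training reduces to one pseudo-inverse, ELMs are attractive when the feature dimension is large relative to the number of training pairs, which matches the paper's motivation.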