Face recognition using a hybrid SVM–LBP approach and the Indian movie face database
Local binary patterns (LBP) are an effective texture descriptor for face recognition. In this work, an LBP-based hybrid system for face recognition is proposed: the dimensionality of the LBP histograms is reduced using principal component analysis (PCA), and classification is performed with support vector machines (SVMs). Experiments on the challenging Indian Movie Face Database show that our method achieves high recognition rates while reducing the dimensionality of the original LBP histograms by 95%. Moreover, our algorithm is compared against several state-of-the-art approaches; the results indicate that it outperforms them, delivering accurate face recognition results.
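The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid size, LBP parameters, number of PCA components, SVM kernel, and the synthetic stand-in data are all assumptions made for the example.

```python
# Sketch of a hybrid LBP -> PCA -> SVM pipeline (illustrative parameters).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def lbp_grid_histogram(img, grid=(4, 4), n_points=8, radius=1):
    """Concatenate uniform-LBP histograms computed over a grid of cells."""
    codes = local_binary_pattern(img, n_points, radius, method="uniform")
    n_bins = n_points + 2  # uniform patterns plus one "non-uniform" bin
    h, w = codes.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = codes[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogram(cell, bins=n_bins,
                                   range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)  # 4*4*10 = 160 dimensions here

rng = np.random.default_rng(0)

def sample(label):
    """Stand-in data: two synthetic texture classes instead of face images."""
    img = rng.random((64, 64))
    if label == 1:  # class 1 gets a smoother texture
        img = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)) / 3.0
    return lbp_grid_histogram(img)

X = np.array([sample(k % 2) for k in range(40)])
y = np.array([k % 2 for k in range(40)])

# PCA keeps 8 of 160 dimensions (a ~95% reduction), then a linear SVM classifies.
model = make_pipeline(PCA(n_components=8), SVC(kernel="linear"))
model.fit(X, y)
print(model.score(X, y))
```

On a real face database each face image would replace the synthetic textures, and the 95% reduction would be applied to the concatenated per-cell histograms.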
Learning discriminative local binary patterns for face recognition
Local binary patterns (LBP) and variations thereof are a popular local visual descriptor for face recognition. So far, most variations of LBP have been designed by hand or learned with unsupervised methods. In this work we propose a simple method to learn discriminative LBPs in a supervised manner. The method represents an LBP-like descriptor as a set of pixel comparisons within a neighborhood and heuristically searches for the set of pixel comparisons that maximizes a Fisher separability criterion on the resulting histograms. Tests on standard face recognition datasets show that this method can create compact yet discriminative descriptors.
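The idea can be illustrated with a toy sketch: a descriptor is defined by a set of neighbor-offset comparisons, candidate sets are scored with a Fisher-style ratio of between-class to within-class scatter of their histograms, and a crude random search keeps the best set. The neighborhood size, number of comparisons, search budget, and data are all assumptions; the paper's actual heuristic is not reproduced here.

```python
# Toy supervised search for discriminative LBP-like pixel comparisons.
import numpy as np

rng = np.random.default_rng(1)

def describe(img, pairs):
    """Encode each interior pixel by a set of neighbor comparisons, then histogram."""
    h, w = img.shape
    code = np.zeros((h - 2, w - 2), dtype=int)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        a = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        b = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        code |= (a > b).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=2 ** len(pairs))
    return hist / hist.sum()

def fisher_score(H, y):
    """Between-class over within-class scatter of the histograms, summed over bins."""
    mu = H.mean(axis=0)
    num = den = 0.0
    for c in np.unique(y):
        Hc = H[y == c]
        num += len(Hc) * np.sum((Hc.mean(axis=0) - mu) ** 2)
        den += np.sum((Hc - Hc.mean(axis=0)) ** 2)
    return num / (den + 1e-12)

offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
labels = np.array([k % 2 for k in range(20)])
imgs = [rng.random((32, 32)) for _ in range(20)]
# Make class 1 slightly smoother so some comparisons become discriminative.
imgs = [im if l == 0 else (im + np.roll(im, 1, 0)) / 2
        for im, l in zip(imgs, labels)]

best, best_pairs = -1.0, None
for _ in range(50):  # random search over 4-comparison descriptors
    cand = []
    for _ in range(4):
        i, j = rng.choice(len(offsets), size=2, replace=False)
        cand.append((offsets[i], offsets[j]))
    H = np.array([describe(im, cand) for im in imgs])
    s = fisher_score(H, labels)
    if s > best:
        best, best_pairs = s, cand
print(best, best_pairs)
```

The paper's heuristic search would replace the random sampling, but the scoring step — comparing candidate descriptors by the separability of their histograms — is the core of the supervised formulation.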
Face Recognition from Face Signatures
This thesis presents techniques for detecting and recognizing faces under various
imaging conditions. In particular, it presents a system that combines several
methods for face detection and recognition. Initially, the faces in the images are
located using the Viola-Jones method and each detected face is represented by
a subimage. Then, an eye and mouth detection method is used to identify the
coordinates of the eyes and mouth, which are then used to update the subimages
so that the subimages contain only the face area. After that, a method based
on Bayesian estimation and a fuzzy membership function is used to identify the
actual faces in both subimages (obtained from the first and second steps). Then, a
face similarity measure is used to locate the oval shape of the face in both subimages.
The similarity measures for the two faces are compared and the one with
the highest value is selected.
In the recognition task, the Trace transform method is used to extract face
signatures from the oval face region. These signatures are evaluated on
the BANCA and FERET databases in authentication tasks, and the signatures
with discriminating ability are selected and used to construct a classifier.
However, this classifier proved to be weak. The problem is
tackled by constructing a boosted ensemble of classifiers built with the Gentle
AdaBoost algorithm. The proposed methodologies are evaluated using a family
album database.
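The boosting step can be sketched as a minimal Gentle AdaBoost: each round fits a weighted least-squares regression stump, adds it to the ensemble, and reweights the examples. The features here are random stand-ins for the Trace-transform signatures, and the weak learner, round count, and data are assumptions, not the thesis's configuration.

```python
# Minimal Gentle AdaBoost with regression stumps as weak classifiers.
import numpy as np

def fit_stump(X, y, w):
    """Weighted least-squares regression stump (Gentle AdaBoost weak learner)."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            mask = X[:, j] > t
            if mask.all() or (~mask).all():
                continue
            a = np.average(y[mask], weights=w[mask])    # output if x_j > t
            b = np.average(y[~mask], weights=w[~mask])  # output otherwise
            err = np.sum(w * (y - np.where(mask, a, b)) ** 2)
            if best is None or err < best[0]:
                best = (err, j, t, a, b)
    return best[1:]

def gentle_adaboost(X, y, rounds=20):
    w = np.full(len(y), 1.0 / len(y))
    stumps = []
    for _ in range(rounds):
        j, t, a, b = fit_stump(X, y, w)
        f = np.where(X[:, j] > t, a, b)
        w = w * np.exp(-y * f)   # Gentle AdaBoost weight update
        w /= w.sum()
        stumps.append((j, t, a, b))
    return stumps

def predict(stumps, X):
    F = sum(np.where(X[:, j] > t, a, b) for j, t, a, b in stumps)
    return np.sign(F)

rng = np.random.default_rng(2)
# Stand-in "signature" features for two identities, labels in {-1, +1}.
X = np.vstack([rng.normal(0.0, 1.0, (30, 5)), rng.normal(1.0, 1.0, (30, 5))])
y = np.concatenate([-np.ones(30), np.ones(30)])
model = gentle_adaboost(X, y, rounds=20)
acc = np.mean(predict(model, X) == y)
print(acc)
```

The ensemble's additive structure is what lifts the individually weak stump classifiers into a strong decision rule.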
A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an automated computer system for the identification of humans by integrating facial and iris features using Localization, Feature Extraction, Handcrafted and Deep learning Techniques.
Multimodal biometric systems have been widely applied in many real-world applications due to their ability to address a number of significant limitations of unimodal biometric systems, including sensitivity to noise, limited population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis focuses on combining the face with the left and right irises in a unified hybrid multimodal biometric identification system, using different fusion approaches at the score and rank levels.
Firstly, facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which is based on merging the advantages of the Curvelet transform with the fractal dimension. Secondly, the Multimodal Deep Face Recognition (MDFR) framework, a novel framework that merges the advantages of local handcrafted feature descriptors with deep learning approaches, is proposed to address face recognition in unconstrained conditions. Thirdly, an efficient deep learning system, termed IrisConvNet, is employed; its architecture combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from an iris image.
Finally, the performance of the unimodal and multimodal systems has been evaluated through extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1 and IITD) and on the SDUMLA-HMT multimodal dataset. The results demonstrate the superiority of the proposed systems over previous work, achieving new state-of-the-art recognition rates on all the employed datasets while requiring less time to recognize a person's identity.
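Score-level fusion of the two modalities can be sketched as follows: each modality's match scores are normalized to a common range, then combined with a weighted sum before ranking the enrolled identities. The min-max normalization, weights, and example scores are illustrative assumptions, not the thesis's exact fusion rule.

```python
# Simple score-level fusion of face and iris match scores.
import numpy as np

def min_max_normalize(scores):
    """Map raw match scores into [0, 1] so modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def weighted_sum_fusion(face_scores, iris_scores, w_face=0.5):
    """Normalize each modality, then combine with a weighted sum."""
    f = min_max_normalize(face_scores)
    i = min_max_normalize(iris_scores)
    return w_face * f + (1.0 - w_face) * i

# Similarity of one probe to three enrolled identities, per modality.
face = [0.9, 0.4, 0.1]
iris = [0.7, 0.8, 0.2]
fused = weighted_sum_fusion(face, iris, w_face=0.6)
identity = int(np.argmax(fused))  # highest fused score wins
print(fused, identity)
```

Rank-level fusion would instead combine each modality's ordering of the candidates (e.g. by summing ranks) rather than the raw scores.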
Higher Committee for Education Development in Ira