
    Face Recognition: An Engineering Approach

    In computer vision, face recognition is the process of labeling a face as recognized or unrecognized. The process is based on a pipeline that goes through collection, detection, pre-processing, and recognition stages. The focus of this study is on the last stage of the pipeline, with the assumption that images have already been collected and pre-processed. Conventional solutions to face recognition use the entire facial image as the input to their algorithms. We present a different approach in which the input to the recognition algorithm is an individual segment of the face, such as the left eye, the right eye, the nose, or the mouth. Two separate experiments are conducted on the AT&T database of faces [1]. In the first experiment, the entire image is used to run the Eigenface, Fisherface, and local binary pattern algorithms. For each run, the accuracy and error rate of the results are tabulated and analyzed. In the second experiment, extracted facial feature segments are used as the input to the same algorithms. The output from each algorithm is subsequently labeled and placed in the appropriate feature class. Our analysis shows how the granularity of the data collected for each segmented class can be leveraged to obtain an improved accuracy rate over the full-face approach.
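    The following is a minimal sketch of the segment-based variant described above, assuming per-segment LBP histograms and a 1-nearest-neighbour matcher; the crop boxes, LBP parameters, and matching rule are illustrative assumptions, not the study's exact protocol.

```python
# A minimal sketch of the second experiment: LBP histograms are computed per
# facial segment instead of on the whole face, then matched with 1-NN.
# Crop boxes and parameters below are assumptions, not the study's settings.
import numpy as np
from skimage.feature import local_binary_pattern

# Assumed (row0, row1, col0, col1) crops for a 92x112 AT&T face image.
SEGMENTS = {
    "left_eye":  (30, 55, 10, 45),
    "right_eye": (30, 55, 47, 82),
    "nose":      (50, 80, 30, 62),
    "mouth":     (78, 105, 25, 67),
}

def lbp_histogram(patch, points=8, radius=1):
    """Uniform LBP codes pooled into a normalised histogram."""
    codes = local_binary_pattern(patch, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
    return hist / max(hist.sum(), 1)

def segment_descriptor(face):
    """Concatenate one LBP histogram per facial segment."""
    return np.concatenate([lbp_histogram(face[r0:r1, c0:c1])
                           for r0, r1, c0, c1 in SEGMENTS.values()])

def rank1_accuracy(gallery, gallery_ids, probes, probe_ids):
    """1-NN identification accuracy with L2 distance between descriptors."""
    hits = 0
    for desc, true_id in zip(probes, probe_ids):
        nearest = int(np.linalg.norm(gallery - desc, axis=1).argmin())
        hits += int(gallery_ids[nearest] == true_id)
    return hits / len(probes)
```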

    Cross-Spectral Face Recognition Between Near-Infrared and Visible Light Modalities.

    In this thesis, improvement of face recognition performance using images from the visible (VIS) and near-infrared (NIR) spectrum is attempted. Face recognition systems can be adversely affected by scenarios that involve a significant amount of illumination variation across images of the same subject. Cross-spectral face recognition systems using images collected across the VIS and NIR spectrum can counter the ill effects of illumination variation by standardising both sets of images. A novel preprocessing technique is proposed, which attempts to transform faces from both modalities into a feature space with enhanced correlation. Direct matching across the modalities is not possible due to the inherent spectral differences between NIR and VIS face images. Compared to a VIS light source, NIR radiation has a greater penetrative depth when incident on human skin. This fact, together with the greater number of scattering interactions within the skin undergone by rays from the NIR spectrum, can alter the apparent morphology of the human face enough to prevent a direct match with the corresponding VIS face. Several ways to bridge the gap between NIR and VIS faces have been proposed previously. Mostly data-driven, these techniques include standardised photometric normalisation techniques and subspace projections. A generative approach driven by a true physical model has not been investigated until now. In this thesis, it is proposed that a large proportion of the scattering interactions present in the NIR spectrum can be accounted for using a model of subsurface scattering. A novel subsurface scattering inversion (SSI) algorithm is developed that implements an inversion approach based on translucent surface rendering from computer graphics, whereby the reversal of the first-order effects of subsurface scattering is attempted. The SSI algorithm is then evaluated against several preprocessing techniques, using various permutations of feature extraction and subspace projection algorithms. The results of this evaluation show an improvement in cross-spectral face recognition performance using SSI over existing Retinex-based approaches. The best-performing combination, involving the existing photometric normalisation technique Sequential Chain, achieves a Rank-1 recognition rate of 92.5%. In addition, the improvement in performance obtained with non-linear projection models indicates that an element of non-linearity exists in the relationship between the NIR and VIS modalities.
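    As a rough illustration of the evaluation protocol above (the SSI algorithm itself is not specified here), the sketch below scores a preprocessing technique by its Rank-1 recognition rate when NIR probes are matched against a VIS gallery; the `preprocess` hook and pixel-level nearest-neighbour matching are assumptions standing in for the thesis's feature and subspace pipelines.

```python
# A hedged sketch of the Rank-1 evaluation used to compare preprocessing
# techniques for NIR-to-VIS matching. `preprocess` stands in for SSI or any
# photometric normalisation; matching here is nearest-neighbour on raw pixels,
# a placeholder only.
import numpy as np

def rank1_rate(vis_gallery, gallery_ids, nir_probes, probe_ids, preprocess):
    g = np.stack([preprocess(img).ravel() for img in vis_gallery])
    hits = 0
    for img, true_id in zip(nir_probes, probe_ids):
        p = preprocess(img).ravel()
        nearest = int(np.linalg.norm(g - p, axis=1).argmin())
        hits += int(gallery_ids[nearest] == true_id)
    return hits / len(probe_ids)

# Example: a do-nothing baseline versus any candidate normalisation.
# baseline = rank1_rate(vis_imgs, vis_ids, nir_imgs, nir_ids, lambda x: x)
```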

    Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition

    To perform unconstrained face recognition robust to variations in illumination, pose and expression, this paper presents a new scheme to extract 'Multi-Directional Multi-Level Dual-Cross Patterns' (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme.
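    A simplified sketch of a DCP-style encoding, inferred from the description above: each of eight directions contributes a two-bit code from an inner and an outer sample, and the eight directions are split into two cross-shaped subsets. The radii, direction ordering, and comparison rule are assumptions rather than the published definition.

```python
# A hedged, simplified DCP-style encoding: per direction, compare centre vs.
# inner sample and inner vs. outer sample (two bits), and accumulate the eight
# directions into two base-4 pattern maps (axis-aligned and diagonal crosses).
import numpy as np

def dcp_codes(img, r_in=2, r_out=4):
    h, w = img.shape
    dirs = [(np.cos(k * np.pi / 4), np.sin(k * np.pi / 4)) for k in range(8)]
    code_a = np.zeros((h, w), dtype=np.uint8)   # axis-aligned cross (dirs 0,2,4,6)
    code_b = np.zeros((h, w), dtype=np.uint8)   # diagonal cross     (dirs 1,3,5,7)
    for k, (dx, dy) in enumerate(dirs):
        inner = np.roll(img, (int(round(r_in * dy)), int(round(r_in * dx))), axis=(0, 1))
        outer = np.roll(img, (int(round(r_out * dy)), int(round(r_out * dx))), axis=(0, 1))
        bits = 2 * (inner >= img).astype(np.uint8) + (outer >= inner).astype(np.uint8)
        if k % 2 == 0:
            code_a = code_a * 4 + bits          # one base-4 digit per direction
        else:
            code_b = code_b * 4 + bits
    return code_a, code_b                       # two 4-digit base-4 pattern maps
```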

    Unifying the Visible and Passive Infrared Bands: Homogeneous and Heterogeneous Multi-Spectral Face Recognition

    Face biometrics leverages tools and technology in order to automate the identification of individuals. Biometric face recognition (FR) can in most cases be used for forensic purposes, but there remain issues related to integrating the technology into the legal system of the court. The biggest challenge to the acceptance of the face as a modality used in court is the reliability of such systems under varying pose, illumination and expression, which has been an active and widely explored area of research over the last few decades (e.g., same-spectrum or homogeneous matching). The heterogeneous FR problem, which deals with matching face images from different sensors, should be examined for the benefit of military and law enforcement applications as well. In this work we are concerned primarily with visible band images (380-750 nm) and the infrared (IR) spectrum, which has become an area of growing interest. For homogeneous FR systems, we formulate and develop an efficient, semi-automated, direct matching-based FR framework that is designed to operate efficiently when face data is captured using either visible or passive IR sensors. Thus, it can be applied in both daytime and nighttime environments. First, input face images are geometrically normalized using our pre-processing pipeline prior to feature extraction. Then, face-based features including wrinkles, veins, and edges of facial characteristics are detected and extracted for each operational band (visible, MWIR, and LWIR). Finally, global and local face-based matching is applied, before fusion is performed at the score level. Although this proposed matcher performs well when same-spectrum FR is performed, regardless of spectrum, a challenge exists when cross-spectral FR matching is performed. The second framework addresses the heterogeneous FR problem and deals with the issue of bridging the gap across the visible and passive infrared (MWIR and LWIR) spectrums. Specifically, we investigate the benefits and limitations of using visible face images synthesized from thermal ones, and vice versa, in cross-spectral face recognition systems utilizing canonical correlation analysis (CCA) and locally linear embedding (LLE), a manifold learning technique for dimensionality reduction. Finally, by conducting an extensive experimental study we establish that the combination of the proposed synthesis and demographic filtering scheme increases system performance in terms of rank-1 identification rate.
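    A minimal sketch of the CCA step used to bridge visible and thermal descriptors, under the assumption that paired training features are available: both modalities are projected into a shared correlated subspace and matched there by cosine similarity. The feature extraction, the LLE alternative, and the demographic filtering stage from the dissertation are not shown, and the descriptor shapes are assumptions.

```python
# A hedged sketch of CCA-based cross-spectral matching: fit on paired
# visible/thermal descriptors, project both sides, match by cosine similarity.
import numpy as np
from sklearn.cross_decomposition import CCA

def fit_cca(vis_train, thr_train, dims=32):
    # vis_train: (n_pairs, d_vis), thr_train: (n_pairs, d_thr), same subjects row-wise
    return CCA(n_components=dims).fit(vis_train, thr_train)

def rank1(cca, vis_gallery, gallery_ids, thr_probes, probe_ids):
    g = cca.transform(vis_gallery)                      # visible-side projection
    # transform() projects the two blocks independently, so a zero visible block
    # is enough to obtain the thermal-side projection of the probes.
    _, p = cca.transform(np.zeros((len(thr_probes), vis_gallery.shape[1])), thr_probes)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    hits = sum(gallery_ids[int((g @ probe).argmax())] == pid
               for probe, pid in zip(p, probe_ids))
    return hits / len(probe_ids)
```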

    A survey on heterogeneous face recognition: Sketch, infra-red, 3D and low-resolution

    Heterogeneous face recognition (HFR) refers to matching face imagery across different domains. It has received much interest from the research community as a result of its profound implications for law enforcement. A wide variety of new invariant features, cross-modality matching models and heterogeneous datasets have been established in recent years. This survey provides a comprehensive review of established techniques and recent developments in HFR. Moreover, we offer a detailed account of the datasets and benchmarks commonly used for evaluation. We finish by assessing the state of the field and discussing promising directions for future research.

    Feature extraction techniques for face identification

    For face recognition, it is very important to determine which features of the face will be used in the classification process. Appearance-based identification uses the pixels of the corresponding image to extract the features. Using the pixels directly is not very efficient because of the high dimensionality of the resulting features, which leads to poor discriminative capability between different persons and increased computational complexity. Applying some kind of data transform can be a good strategy for reducing the dimensionality of the data and increasing the discriminative capability. Using PCA or DCT transforms, it is possible to implement systems with a good recognition rate if the number of recognizable persons is low. In this project, other feature extraction techniques have been investigated, especially those based on Local Binary Patterns.
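    A minimal sketch of the appearance-based pipeline described above, assuming a PCA transform and a 1-nearest-neighbour classifier; the image shapes, number of components, and classifier choice are illustrative.

```python
# A minimal sketch: flatten the pixels, reduce dimensionality with PCA, and
# identify with a 1-nearest-neighbour rule.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train(images, labels, n_components=50):
    # images: (n, h, w) grayscale faces; labels: (n,) person ids
    X = images.reshape(len(images), -1).astype(np.float64)
    pca = PCA(n_components=n_components).fit(X)
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), labels)
    return pca, clf

def identify(pca, clf, image):
    """Return the predicted person id for a single face image."""
    return clf.predict(pca.transform(image.reshape(1, -1).astype(np.float64)))[0]
```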

    Homogeneous and Heterogeneous Face Recognition: Enhancing, Encoding and Matching for Practical Applications

    Face recognition is the automatic processing of face images with the purpose of recognizing individuals. The recognition task becomes especially challenging in surveillance applications, where images are acquired at long range in difficult environments. Short Wave Infrared (SWIR) is an emerging imaging modality that is able to produce clear long-range images in difficult environments or during night time. Despite the benefits of SWIR technology, matching SWIR images against a gallery of visible images presents a challenge, since the photometric properties of the images in the two spectral bands are highly distinct. In this dissertation, we describe a cross-spectral matching method that encodes the magnitude and phase of multi-spectral face images filtered with a bank of Gabor filters. The magnitude of the filtered images is encoded with the Simplified Weber Local Descriptor (SWLD) and Local Binary Pattern (LBP) operators. The phase is encoded with the Generalized Local Binary Pattern (GLBP) operator. Encoded multi-spectral images are mapped into a histogram representation and cross-matched by applying the symmetric Kullback-Leibler distance. Performance of the developed algorithm is demonstrated on the TINDERS database, which contains long-range SWIR and color images acquired at distances of 2, 50, and 106 meters. Apart from the long acquisition range, other variations and distortions such as pose variation, motion and out-of-focus blur, and uneven illumination may be observed in multispectral face images. The recognition performance of the matcher can be greatly affected by these distortions. It is important, therefore, to ensure that matching is performed on high-quality images: poor-quality images have to be either enhanced or discarded. This dissertation addresses the problem of selecting good-quality samples. The last chapters of the dissertation suggest a number of modifications to the cross-spectral matching algorithm for matching low-resolution color images in near-real time. We show that the method that encodes the magnitude of Gabor-filtered images with the SWLD operator guarantees high recognition rates. The modified method (Gabor-SWLD) is adopted in a camera network setup where cameras acquire several views of the same individual. The designed algorithm and software are fully automated and optimized to perform recognition in near-real time. We evaluate the recognition performance and the processing time of the method on a small dataset collected at WVU.
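    The sketch below illustrates the general matching idea: Gabor-filter the face, encode the filtered magnitude with a local pattern operator (plain uniform LBP stands in here for the SWLD/GLBP operators, whose definitions are not given in the abstract), pool the codes into histograms, and compare them with a symmetric Kullback-Leibler distance. The filter-bank parameters are assumptions.

```python
# A hedged sketch: Gabor magnitude responses encoded with uniform LBP (a
# stand-in for SWLD/GLBP), pooled into histograms and compared with a
# symmetric Kullback-Leibler distance.
import numpy as np
import cv2
from skimage.feature import local_binary_pattern

def gabor_magnitude(img, theta, sigma=4.0, lambd=10.0, gamma=0.5, ksize=31):
    """Magnitude of the complex Gabor response at orientation theta."""
    even = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
    odd = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=np.pi / 2)
    re = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, even)
    im = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, odd)
    return np.sqrt(re ** 2 + im ** 2)

def face_histogram(img, orientations=4, points=8):
    """Concatenated LBP histograms of the Gabor magnitude, one per orientation."""
    hists = []
    for k in range(orientations):
        mag = gabor_magnitude(img, theta=k * np.pi / orientations)
        mag8 = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        codes = local_binary_pattern(mag8, points, 1, method="uniform")
        h, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2))
        hists.append(h / max(h.sum(), 1))
    return np.concatenate(hists)

def symmetric_kl(p, q, eps=1e-10):
    """KL(p||q) + KL(q||p) between normalised histograms; lower = better match."""
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```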

    Linear subspace methods in face recognition

    Despite over 30 years of research, face recognition is still one of the most difficult problems in the field of Computer Vision. The challenge comes from many factors affecting the performance of a face recognition system: noisy input, training data collection, the speed-accuracy trade-off, and variations in expression, illumination, pose, or ageing. Although relatively successful attempts have been made for special cases, such as frontal faces, no satisfactory methods exist that work under completely unconstrained conditions. This thesis proposes solutions to three important problems: lack of training data, the speed-accuracy requirement, and unconstrained environments. The problem of lacking training data is addressed in its worst case: a single sample per person. Whitened Principal Component Analysis is proposed as a simple but effective solution, and whitened PCA performs consistently well on multiple face datasets. The speed-accuracy trade-off is the second focus of this thesis, and two solutions are proposed to tackle it. The first is a new feature extraction method called Compact Binary Patterns, which is about three times faster than Local Binary Patterns. The second is a multi-patch classifier which performs much better than a single classifier without compromising speed. Two metric learning methods are introduced to solve the problem of unconstrained face recognition. The first, called Indirect Neighbourhood Component Analysis, combines the best ideas from Neighbourhood Component Analysis and one-shot learning. The second, Cosine Similarity Metric Learning, uses cosine similarity instead of the more popular Euclidean distance to form the objective function in the learning process. This Cosine Similarity Metric Learning method produces the best result in the literature on the state-of-the-art face dataset, Labelled Faces in the Wild. Finally, a full face verification system based on our experience of taking part in the ICPR 2010 Face Verification contest is described, and many practical points are discussed.
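    A minimal sketch of the whitened-PCA baseline for the single-sample-per-person setting: PCA with whitening is fitted on a generic training set, then the one-image-per-person gallery and the probe are projected and matched by cosine similarity. The training set, dimensionality, and the cosine matcher are assumptions drawn from the description above, not the thesis's exact configuration.

```python
# A hedged sketch of whitened PCA for single-sample-per-person identification:
# fit PCA(whiten=True) on generic training faces, project gallery and probe,
# and pick the gallery identity with the highest cosine similarity.
import numpy as np
from sklearn.decomposition import PCA

def build(train_faces, n_components=100):
    X = train_faces.reshape(len(train_faces), -1)
    return PCA(n_components=n_components, whiten=True).fit(X)

def identify(pca, gallery_faces, gallery_ids, probe_face):
    g = pca.transform(gallery_faces.reshape(len(gallery_faces), -1))
    p = pca.transform(probe_face.reshape(1, -1))[0]
    g = g / np.linalg.norm(g, axis=1, keepdims=True)
    p = p / np.linalg.norm(p)
    return gallery_ids[int((g @ p).argmax())]   # cosine-similarity 1-NN
```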