
    Multispectral Imaging For Face Recognition Over Varying Illumination

    This dissertation addresses the advantage of using multispectral narrow-band images over conventional broad-band images for improved face recognition under varying illumination. To verify the effectiveness of multispectral images in improving face recognition performance, three sequential procedures are carried out: multispectral face image acquisition, image fusion of the multispectral bands, and spectral band selection to remove information redundancy. Several efficient image fusion algorithms are proposed and applied to spectral narrow-band face images in comparison with conventional images. Physics-based weighted fusion and illumination adjustment fusion make good use of the spectral information captured in the multispectral imaging process. The results demonstrate that fused narrow-band images outperform conventional broad-band images under varying illumination. When multispectral images are acquired under severe changes in daylight, the fused images outperform conventional broad-band images by up to 78%. The success of fusing multispectral images lies in the fact that multispectral images can separate illumination information from the reflectance of objects, which is impossible for conventional broad-band images. To reduce the information redundancy among multispectral images and simplify the imaging system, distance-based band selection is proposed, in which a quantitative evaluation metric is defined to evaluate and differentiate the performance of multispectral narrow-band images. This method proves exceptionally robust to parameter changes. Furthermore, complexity-guided distance-based band selection is proposed, which uses a model selection criterion for automatic selection. The selected bands outperform conventional images by up to 15%. The significant performance improvement achieved by distance-based band selection and its complexity-guided variant shows that specific facial information carried in certain narrow-band spectral images can enhance face recognition performance compared to broad-band images. In addition, both algorithms are shown to be independent of the recognition engine. Significant performance improvement is achieved by the proposed image fusion and band selection algorithms under varying illumination, including outdoor daylight conditions. The proposed imaging system and image processing algorithms open a new avenue toward automatic face recognition systems with better recognition performance than conventional systems under varying illumination.
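
    To make the band selection idea concrete, below is a minimal, hypothetical Python sketch: each narrow band is scored with a simple between-class versus within-class distance ratio and the top-k bands are kept. The dissertation's actual quantitative evaluation metric and its complexity-guided variant are not specified here, so this separability score is only a stand-in.

    # Hypothetical sketch (Python/NumPy) of distance-based band selection:
    # score each narrow band with a between-class / within-class distance
    # ratio and keep the top-k bands. The dissertation's actual metric is
    # not reproduced; this separability score is only a stand-in.
    import numpy as np

    def band_score(band_vectors, labels):
        """band_vectors: (n_samples, n_pixels) face vectors for ONE band."""
        labels = np.asarray(labels)
        global_mean = band_vectors.mean(axis=0)
        between, within = 0.0, 0.0
        for c in np.unique(labels):
            cls = band_vectors[labels == c]
            cls_mean = cls.mean(axis=0)
            between += len(cls) * np.sum((cls_mean - global_mean) ** 2)
            within += np.sum((cls - cls_mean) ** 2)
        return between / (within + 1e-12)   # larger = more discriminative band

    def select_bands(cube, labels, k=3):
        """cube: (n_samples, n_bands, n_pixels) multispectral face vectors."""
        scores = [band_score(cube[:, b, :], labels) for b in range(cube.shape[1])]
        return np.argsort(scores)[::-1][:k]  # indices of the k best bands

    # toy usage: 20 samples, 10 narrow bands, 64-pixel faces, 4 subjects
    rng = np.random.default_rng(0)
    cube = rng.normal(size=(20, 10, 64))
    labels = np.repeat(np.arange(4), 5)
    print(select_bands(cube, labels, k=3))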

    Infrared face recognition: a comprehensive review of methodologies and databases

    Automatic face recognition is an area with immense practical potential, encompassing a wide range of commercial and law enforcement applications, so it is unsurprising that it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state of the art in face recognition continues to improve, benefitting from advances in a range of different research fields such as image processing, pattern recognition, computer graphics, and physiology. Systems based on visible spectrum images, the most researched face recognition modality, have reached a significant level of maturity with some practical success. However, they continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease recognition accuracy. Amongst the various approaches proposed to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject. Our key contributions are: (i) a summary of the inherent properties of infrared imaging that make this modality promising in the context of face recognition, (ii) a systematic review of the most influential approaches, with a focus on emerging common trends as well as key differences between alternative methodologies, (iii) a description of the main databases of infrared facial images available to the researcher, and lastly (iv) a discussion of the most promising avenues for future research. (Pattern Recognition, 2014.)

    Hyperspectral Data Acquisition and Its Application for Face Recognition

    Current face recognition systems face serious challenges in uncontrolled conditions, e.g., unconstrained lighting, pose variations, and accessories. Hyperspectral imaging (HI) can counter many of these challenges by incorporating the spectral information of different bands. Although numerous face recognition methods based on hyperspectral imaging have been developed with promising results, three fundamental challenges remain: 1) low signal-to-noise ratios and low intensity values in the bands of the hyperspectral image, especially near the blue bands; 2) the high dimensionality of hyperspectral data; and 3) inter-band misalignment (IBM) caused by subject motion during data acquisition. This dissertation concentrates mainly on addressing these challenges. First, to address the low quality of the hyperspectral bands, we use a custom light source with more radiant power at shorter wavelengths and adjust camera exposure times to compensate for lower filter transmittance and the lower radiant power of our light source. Second, the high dimensionality of spectral data imposes limitations on numerical analysis, creating a demand for robust data compression techniques that discard less relevant information in real spectral data. To cope with this problem, we describe a reduced-order data modeling technique based on local proper orthogonal decomposition, which computes low-dimensional models by projecting high-dimensional clusters onto subspaces spanned by local reduced-order bases. Third, we investigate 11 leading alignment approaches for addressing IBM caused by subject motion during data acquisition. To overcome the limitations of these approaches, we propose an accurate alignment approach (A3) that combines the strengths of point correspondence and a low-rank model. In addition, we develop two qualitative prediction models to assess the alignment quality of hyperspectral images and to determine the best alignment among the considered approaches. Finally, we show that the proposed alignment approach leads to promising improvements in the face recognition performance of a probabilistic linear discriminant analysis approach.
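
    As a rough illustration of the reduced-order modeling step, the sketch below clusters the spectral vectors, builds a local basis per cluster with a truncated SVD (a standard way to compute POD modes), and projects each sample onto its cluster's basis. The cluster count, the rank, and the use of scikit-learn's KMeans are illustrative assumptions, not details taken from the dissertation.

    # Rough sketch of local proper orthogonal decomposition (POD) for
    # compressing hyperspectral data: cluster the spectra, build a local
    # reduced-order basis per cluster with a truncated SVD, and project
    # each sample onto its cluster's basis. Cluster count, rank and the
    # use of scikit-learn's KMeans are illustrative assumptions.
    import numpy as np
    from sklearn.cluster import KMeans

    def local_pod(X, n_clusters=4, rank=5):
        """X: (n_samples, n_features) spectral vectors, e.g. per-pixel spectra."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
        bases = {}
        coeffs = np.zeros((X.shape[0], rank))
        for c in range(n_clusters):
            mask = km.labels_ == c
            Xc = X[mask]
            mean_c = Xc.mean(axis=0)
            # local POD modes = leading right singular vectors of the centered data
            _, _, Vt = np.linalg.svd(Xc - mean_c, full_matrices=False)
            basis = Vt[:rank]                     # (rank, n_features)
            bases[c] = (mean_c, basis)
            coeffs[mask] = (Xc - mean_c) @ basis.T
        return km, bases, coeffs                  # low-dimensional model

    # toy usage: 200 spectra with 33 bands reduced to 5 local coordinates each
    X = np.random.default_rng(1).normal(size=(200, 33))
    km, bases, coeffs = local_pod(X)
    print(coeffs.shape)   # (200, 5)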

    A novel multispectral and 2.5D/3D image fusion camera system for enhanced face recognition

    The fusion of images from the visible and long-wave infrared (thermal) portions of the spectrum produces images with improved face recognition performance under varying lighting conditions. This is because long-wave infrared images are the result of emitted, rather than reflected, light and are therefore less sensitive to changes in ambient light. Similarly, 3D and 2.5D images have also improved face recognition under varying pose and lighting. The opacity of glass to long-wave infrared light, however, means that the presence of eyeglasses in a face image reduces recognition performance. This thesis presents the design and performance evaluation of a novel camera system capable of capturing spatially registered visible, near-infrared, long-wave infrared and 2.5D depth video images via a common optical path, requiring no spatial registration between sensors beyond scaling for differences in sensor sizes. Experiments using a range of established face recognition methods and multi-class SVM classifiers show that the fused output from our camera system not only outperforms single-modality images for face recognition, but that the adaptive fusion methods used produce consistent increases in recognition accuracy under varying pose and lighting and in the presence of eyeglasses.
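
    For flavour, here is a small, hypothetical sketch of pixel-level fusion of co-registered modality images followed by a multi-class SVM, roughly in the spirit of the system described above. The fixed per-modality weights and the toy data are illustrative assumptions; the thesis's adaptive fusion methods, which adapt the weighting to capture conditions, are not reproduced.

    # Small, hypothetical sketch of pixel-level fusion of co-registered
    # modality images followed by a multi-class SVM. The fixed per-modality
    # weights and toy data are illustrative; the thesis's adaptive fusion
    # adjusts the weighting to the capture conditions.
    import numpy as np
    from sklearn.svm import SVC

    def fuse(modalities, weights):
        """modalities: dict name -> (H, W) float image, already co-registered."""
        w = np.array([weights[m] for m in modalities], dtype=float)
        w /= w.sum()                                             # normalise weights
        stack = np.stack([modalities[m] for m in modalities])    # (M, H, W)
        return np.tensordot(w, stack, axes=1)                    # weighted-sum image

    # toy usage: 40 "faces" of 8 subjects, 4 modalities of 16x16 pixels
    rng = np.random.default_rng(2)
    names = ["visible", "near_ir", "thermal", "depth"]
    weights = {"visible": 0.4, "near_ir": 0.3, "thermal": 0.2, "depth": 0.1}
    X = [fuse({n: rng.random((16, 16)) for n in names}, weights).ravel()
         for _ in range(40)]
    y = np.repeat(np.arange(8), 5)
    clf = SVC(kernel="linear").fit(np.array(X), y)   # SVC handles multi-class
    print(clf.predict(np.array(X[:3])))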

    The system of facial recognition in the infrared range

    This paper introduces a new approach that upgrades a complex object recognition and monitoring system into an image processing system capable of operating in both the visible and infrared wavelength ranges. To this end, new algorithmic software and a user interface are provided that require neither special knowledge nor specific competencies from the operator in the fields of object detection, tracking and recognition, while still allowing the thermal imager parameters needed to construct a high-quality image of an object to be determined. The conditions required to obtain an image of sufficient quality for satisfactory detection of the desired object are formulated. The conducted tests demonstrate that the proposed mathematical and algorithmic support for the complex monitoring and control system solves the problem of highly accurate recognition of individuals.

    An Extensive Review on Spectral Imaging in Biometric Systems: Challenges and Advancements

    Spectral imaging has recently gained traction for face recognition in biometric systems. We investigate the merits of spectral imaging for face recognition and the current challenges that hamper the widespread deployment of spectral sensors for this purpose. The reliability of conventional face recognition systems operating in the visible range is compromised by illumination changes, pose variations and spoof attacks. Recent works have reaped the benefits of spectral imaging to counter these limitations in surveillance activities (defence, airport security checks, etc.). However, the implementation of this technology for biometrics is still in its infancy for multiple reasons. We present an overview of the existing work in the domain of spectral imaging for face recognition, the different types of modalities and their assessment, the availability of public databases for reproducible research and algorithm evaluation, and recent advancements in the field, such as the use of deep learning-based methods for recognizing faces from spectral images.

    Robust Tensor Preserving Projection for Multispectral Face Recognition

    Face recognition based on multiple imaging modalities has become a hot research topic, and a great number of multispectral face recognition algorithms and systems have been designed in the last decade. How to extract features from different spectral bands remains an important issue for face recognition. To address this problem, we propose a robust tensor preserving projection (RTPP) algorithm that represents a multispectral image as a third-order tensor. RTPP constructs sparse neighborhoods and then computes the corresponding tensor weights; it iteratively obtains the spectral space transformation matrix by preserving these sparse neighborhoods. Owing to the sparse representation, RTPP can not only keep the underlying spatial structure of multispectral images but also enhance robustness. Experiments on both the Equinox and DHUFO face databases show that the proposed method outperforms related algorithms.
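
    To fix ideas on the third-order tensor representation RTPP works with, here is a small sketch of the n-mode product used to apply one transformation matrix per tensor mode. Only this basic multilinear projection is shown; RTPP's sparse-neighborhood construction, weighting and iterative optimization are not reproduced, and the sizes below are arbitrary.

    # Sketch of the n-mode product used to apply one transformation matrix
    # per mode of a third-order (height x width x bands) face tensor. Only
    # this basic multilinear projection is shown; RTPP's sparse-neighborhood
    # weighting and iterative optimization are not reproduced.
    import numpy as np

    def mode_product(T, U, mode):
        """Multiply the tensor's `mode` fibers by U (the n-mode product)."""
        Tn = np.moveaxis(T, mode, 0)                 # bring mode to the front
        front = U @ Tn.reshape(Tn.shape[0], -1)      # (r, prod of other dims)
        return np.moveaxis(front.reshape((U.shape[0],) + Tn.shape[1:]), 0, mode)

    # toy usage: a 32x32x6 multispectral face projected to an 8x8x3 core tensor
    rng = np.random.default_rng(3)
    face = rng.random((32, 32, 6))
    mats = [rng.random((8, 32)), rng.random((8, 32)), rng.random((3, 6))]
    core = face
    for mode, U in enumerate(mats):
        core = mode_product(core, U, mode)
    print(core.shape)   # (8, 8, 3)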