
    Infrared face recognition: a comprehensive review of methodologies and databases

    Automatic face recognition is an area with immense practical potential which includes a wide range of commercial and law enforcement applications. Hence it is unsurprising that it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state of the art in face recognition continues to improve, benefitting from advances in a range of different research fields such as image processing, pattern recognition, computer graphics, and physiology. Systems based on visible spectrum images, the most researched face recognition modality, have reached a significant level of maturity with some practical success. However, they continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease recognition accuracy. Amongst the various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject. Our key contributions are: (i) a summary of the inherent properties of infrared imaging which make this modality promising in the context of face recognition, (ii) a systematic review of the most influential approaches, with a focus on emerging common trends as well as key differences between alternative methodologies, (iii) a description of the main databases of infrared facial images available to the researcher, and lastly (iv) a discussion of the most promising avenues for future research. (Pattern Recognition, 2014. arXiv admin note: substantial text overlap with arXiv:1306.160.)

    A novel multispectral and 2.5D/3D image fusion camera system for enhanced face recognition

    The fusion of images from the visible and long-wave infrared (thermal) portions of the spectrum produces images with improved face recognition performance under varying lighting conditions. This is because long-wave infrared images are the result of emitted, rather than reflected, light and are therefore less sensitive to changes in ambient light. Similarly, 3D and 2.5D images have also improved face recognition under varying pose and lighting. The opacity of glass to long-wave infrared light, however, means that the presence of eyeglasses in a face image reduces recognition performance. This thesis presents the design and performance evaluation of a novel camera system which is capable of capturing spatially registered visible, near-infrared, long-wave infrared and 2.5D depth video images via a common optical path, requiring no spatial registration between sensors beyond scaling for differences in sensor sizes. Experiments using a range of established face recognition methods and multi-class SVM classifiers show that the fused output from our camera system not only outperforms the single-modality images for face recognition, but that the adaptive fusion methods used produce consistent increases in recognition accuracy under varying pose and lighting and in the presence of eyeglasses.
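    The adaptive fusion method itself is specific to the thesis, but the underlying idea of pixel-level fusion of co-registered modalities can be sketched as a weighted average. The function name and the fixed global weight below are illustrative assumptions, not the thesis's actual method; both inputs are assumed to already share resolution and spatial registration, which is exactly what the common-optical-path camera provides.

```python
import numpy as np

def fuse_weighted(visible, thermal, alpha=0.5):
    """Pixel-level weighted-average fusion of two co-registered images.

    `alpha` weights the visible band; (1 - alpha) weights the thermal band.
    A deliberately simple stand-in for adaptive fusion.
    """
    v = visible.astype(float)
    t = thermal.astype(float)
    return alpha * v + (1.0 - alpha) * t

vis = np.array([[100.0, 200.0], [50.0, 0.0]])
th = np.array([[200.0, 100.0], [150.0, 100.0]])
print(fuse_weighted(vis, th, alpha=0.5))
# [[150. 150.]
#  [100.  50.]]
```

An adaptive scheme in the spirit of the thesis would vary the weight per pixel or per region, for example based on local contrast in each band, rather than using a single global `alpha`.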

    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in recent literature, with the aim of identifying the different research avenues that are being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.

    Uncertainty Theories Based Iris Recognition System

    The performance and robustness of iris-based recognition systems still suffer from imperfections in the biometric information. This paper attempts to address these imperfections, an important problem for real systems. We propose a new method for iris recognition based on uncertainty theories to treat imperfect iris features. Several factors cause different types of degradation in iris data, such as poor quality of the acquired images; partial occlusion of the iris region due to light spots, lenses, eyeglasses, hair or eyelids; and adverse illumination and/or contrast. All of these factors are open problems in the field of iris recognition: they affect the performance of iris segmentation, feature extraction and decision making, and appear as imperfections in the extracted iris features. The aim of our experiments is to model the variability and ambiguity in the iris data with uncertainty theories. This paper illustrates the importance of using these theories for modeling and/or treating the encountered imperfections. Several comparative experiments are conducted on two subsets of the CASIA-V4 iris image database, namely Interval and Synthetic. Compared to a typical iris recognition system, experimental results show that our proposed model, relying on uncertainty theories, improves recognition in terms of Equal Error Rate (EER), Area Under the receiver operating characteristic Curve (AUC) and Accuracy Recognition Rate (ARR) statistics.
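    For reference, the EER and AUC statistics used above can be computed directly from the genuine and impostor score distributions. The sketch below (function and variable names are our own, not from the paper) assumes higher scores indicate better matches.

```python
import numpy as np

def roc_stats(genuine, impostor):
    """Compute AUC and EER from genuine and impostor similarity scores."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)

    # Sweep decision thresholds over all observed scores.
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    tar = np.array([(genuine >= t).mean() for t in thresholds])   # true accept rate
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
    frr = 1.0 - tar                                               # false reject rate

    # AUC via its probability interpretation: P(genuine > impostor),
    # counting ties as 1/2.
    g = genuine[:, None]
    i = impostor[None, :]
    auc = (g > i).mean() + 0.5 * (g == i).mean()

    # EER: operating point where FAR and FRR cross.
    idx = np.argmin(np.abs(far - frr))
    eer = (far[idx] + frr[idx]) / 2.0
    return auc, eer

genuine = [0.9, 0.8, 0.85, 0.7, 0.95]
impostor = [0.3, 0.4, 0.2, 0.5, 0.35]
auc, eer = roc_stats(genuine, impostor)
print(auc, eer)  # perfectly separated scores: AUC = 1.0, EER = 0.0
```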

    Iris Recognition in Multiple Spectral Bands: From Visible to Short Wave Infrared

    The human iris is traditionally imaged in Near Infrared (NIR) wavelengths (700nm-900nm) for iris recognition. The absorption coefficient of the color-inducing pigment in the iris, called melanin, decreases after 700nm, thus minimizing its effect when the iris is imaged at wavelengths greater than 700nm. This thesis provides an overview and explores the efficacy of iris recognition in different wavelength bands ranging from the visible spectrum (450nm-700nm) to NIR (700nm-900nm) and Short Wave Infrared (900nm-1600nm). Different matching methods are investigated at different wavelength bands to facilitate cross-spectral iris recognition. The iris recognition analysis in visible wavelengths provides a baseline performance when the iris is captured using common digital cameras. A novel blob-based matching algorithm is proposed to match RGB (visible spectrum) iris images. This technique generates a match score based on the similarity between blob-like structures in the iris images. The matching performance of the blob-based matching method is compared against that of the classical 'Iris Code' matching method, a SIFT-based matching method and simple correlation matching, and results indicate that the blob-based matching method performs reasonably well. Additional experiments on the datasets show that iris images can be matched with higher confidence for light-colored irides than dark-colored irides in the visible spectrum. As part of the analysis in the NIR spectrum, iris images captured in the visible spectrum are matched against those captured in the NIR spectrum. Experimental results on the WVU multispectral dataset show promise in achieving good recognition performance when the images are captured using the same sensor under the same illumination conditions and at the same resolution.
A new proprietary 'FaceIris' dataset is used to investigate the ability to match iris images from a high-resolution face image in the visible spectrum against an iris image acquired in the NIR spectrum. Matching in the 'FaceIris' dataset presents a scenario where the two images to be matched are obtained by different sensors at different wavelengths, under different ambient illumination and at different resolutions. Cross-spectral matching on the 'FaceIris' dataset presented a challenge to achieving good performance. Also, the effect of the choice of the radial and angular parameters of the normalized iris image on matching performance is presented. The experiments on the WVU multispectral dataset resulted in good separation between genuine and impostor score distributions for cross-spectral matching, which indicates that iris images obtained in the visible spectrum can be successfully matched against NIR iris images using the 'IrisCode' method. The iris is also analyzed in the Short Wave Infrared (SWIR) spectrum to study the feasibility of performing iris recognition at these wavelengths. An image acquisition setup was designed to capture the iris in 100nm-interval spectral bands ranging from 950nm to 1650nm. Iris images are analyzed at these wavelengths and various observations regarding brightness, contrast and textural content are discussed. Cross-spectral and intra-spectral matching was carried out on the samples collected from 25 subjects. Experimental results on this small dataset show the possibility of performing iris recognition in the 950nm-1350nm wavelength range. Fusion of match scores from intra-spectral matching at different wavelength bands is shown to improve matching performance in the SWIR domain.
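    The 'IrisCode' matching referred to above is conventionally scored with a fractional Hamming distance, minimized over circular shifts of the angular axis to absorb in-plane eye rotation. A minimal sketch of that scoring step (array shapes and names are illustrative, not taken from the thesis):

```python
import numpy as np

def iris_hamming(code_a, code_b, mask_a, mask_b, max_shift=8):
    """Fractional Hamming distance between two binary iris codes.

    Rows correspond to radial bands, columns to angular positions; masks
    flag bits that are usable (not occluded by eyelids or reflections).
    The minimum distance over circular column shifts is returned,
    compensating for eye rotation (Daugman-style matching).
    """
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        b = np.roll(code_b, s, axis=1)
        mb = np.roll(mask_b, s, axis=1)
        valid = mask_a & mb                  # bits usable in both codes
        n = valid.sum()
        if n == 0:
            continue
        dist = np.logical_xor(code_a, b)[valid].sum() / n
        best = min(best, dist)
    return best

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=(8, 128)).astype(bool)
mask = np.ones_like(a, dtype=bool)
# A rotated copy of the same code should match perfectly at some shift.
b = np.roll(a, 3, axis=1)
print(iris_hamming(a, b, mask, mask))  # 0.0, recovered at shift -3
```

Unrelated random codes score near 0.5 under this measure, which is what makes the genuine/impostor separation reported above meaningful.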

    A theoretical eye model for uncalibrated real-time eye gaze estimation

    Computer vision systems that monitor human activity can be utilized for many diverse applications. Some general applications stemming from such activity monitoring are surveillance, human-computer interfaces, aids for the handicapped, and virtual reality environments. For most of these applications, a non-intrusive system is desirable, either for reasons of covertness or comfort. Also desirable is generality across users, especially for human-computer interfaces and surveillance. This thesis presents a method of gaze estimation that, without calibration, determines a relatively unconstrained user's overall horizontal eye gaze. Utilizing anthropometric data and physiological models, a simple, yet general eye model is presented. The equations that describe the gaze angle of the eye in this model are presented. The procedure for choosing the proper features for gaze estimation is detailed and the algorithms utilized to find these points are described. Results from manual and automatic feature extraction are presented and analyzed. The error observed from this model is around 3° and the error observed from the implementation is around 6°. This amount of error is comparable to previous eye gaze estimation algorithms and it validates the model. The results presented across a set of subjects display consistency, which demonstrates the generality of this model. A real-time implementation that operates at around 17 frames per second displays the efficiency of the algorithms implemented. While there are many interesting directions for future work, the goals of this thesis were achieved.
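    The thesis's exact equations are not reproduced here, but a calibration-free horizontal gaze estimate in this spirit can be sketched from 2D eye-corner and pupil positions, treating the eyeball as a sphere whose projected radius is tied to the corner-to-corner distance. The function name and the `radius_ratio` constant are stand-ins for the anthropometric data the thesis uses.

```python
import math

def horizontal_gaze_angle(inner_corner_x, outer_corner_x, pupil_x,
                          radius_ratio=1.0):
    """Estimate horizontal gaze angle (degrees) from 2D eye features.

    The eyeball centre is taken as the midpoint of the two eye corners;
    the projected eyeball radius is half the corner-to-corner distance
    scaled by `radius_ratio`. The angle then follows from the projection
    x_offset = r * sin(theta).
    """
    centre_x = (inner_corner_x + outer_corner_x) / 2.0
    radius = abs(outer_corner_x - inner_corner_x) / 2.0 * radius_ratio
    offset = pupil_x - centre_x
    # Clamp for numerical safety before arcsin.
    ratio = max(-1.0, min(1.0, offset / radius))
    return math.degrees(math.asin(ratio))

print(horizontal_gaze_angle(100.0, 140.0, 120.0))  # pupil centred: 0.0
print(horizontal_gaze_angle(100.0, 140.0, 130.0))  # offset 10 of radius 20: ~30
```

Because only ratios of image-plane distances enter the formula, no per-user calibration of absolute scale is needed, which is the property the thesis exploits.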

    Semi-Supervised Pattern Recognition and Machine Learning for Eye-Tracking

    The first step in monitoring an observer's eye gaze is identifying and locating the image of their pupils in video recordings of their eyes. Current systems work under a range of conditions, but fail in bright sunlight and under rapidly varying illumination. A computer vision system was developed to assist with the recognition of the pupil in every frame of a video, in spite of the presence of strong first-surface reflections off the cornea. A modified Hough Circle detector was developed that incorporates the knowledge that the pupil is darker than the surrounding iris of the eye, and is able to detect imperfect circles, partial circles, and ellipses. As part of processing, the image is modified to compensate for the distortion of the pupil caused by the out-of-plane rotation of the eye. A sophisticated noise-cleaning technique was developed to mitigate first-surface reflections, enhance edge contrast, and reduce image flare. Semi-supervised human input and validation is used to train the algorithm. The final results are comparable to those achieved by a human analyst, but require only a tenth of the human interaction.
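    A darkness-aware circle vote in the spirit of the modified Hough detector described above can be sketched as follows. This brute-force version is our own simplification, not the thesis's algorithm: it scores each candidate circle by the brightness drop from a surrounding ring to the interior, so a dark pupil on a brighter iris scores highest, and it does not handle partial circles or ellipses.

```python
import numpy as np

def dark_circle_hough(img, radii):
    """Find the (cx, cy, r) of the darkest disk with a brighter surround.

    For each candidate centre and radius, the vote is the mean brightness
    of a thin ring just outside the circle minus the mean brightness of
    the disk interior. Exhaustive search; a sketch of the idea only.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best = (-np.inf, None)
    for r in radii:
        for cy in range(r, h - r):
            for cx in range(r, w - r):
                d2 = (ys - cy) ** 2 + (xs - cx) ** 2
                inside = d2 <= r * r
                ring = (d2 > r * r) & (d2 <= (r + 2) ** 2)
                score = img[ring].mean() - img[inside].mean()
                if score > best[0]:
                    best = (score, (cx, cy, r))
    return best[1]

# Synthetic eye image: bright iris with a dark pupil at (15, 12), radius 5.
img = np.full((24, 30), 200.0)
ys, xs = np.mgrid[0:24, 0:30]
img[(ys - 12) ** 2 + (xs - 15) ** 2 <= 25] = 40.0

print(dark_circle_hough(img, radii=[4, 5, 6]))  # → (15, 12, 5)
```

A practical pupil tracker would instead accumulate votes from gradient edges, which tolerates the occlusions and elliptical distortion the thesis addresses, and would run far faster than this exhaustive scan.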