3,073 research outputs found

    Fusion of Visual and Thermal Images Using Genetic Algorithms

    Demands for reliable person identification systems have increased significantly due to high security risks in our daily life. Recently, person identification systems have been built upon biometric techniques such as face recognition. Although face recognition systems have reached a certain level of maturity, their performance in practical applications is restricted by challenges such as illumination variations. Current visual face recognition systems perform relatively well under controlled illumination conditions, while thermal face recognition systems are more advantageous for detecting disguised faces or when there is no illumination control. A hybrid system utilizing both visual and thermal images for face recognition would therefore be beneficial. The overall goal of this research is to develop computational methods that improve image quality by fusing visual and thermal face images. First, three novel algorithms were proposed to enhance visual face images. In those techniques, specific nonlinear image transfer functions were developed, and the parameters associated with the functions were determined by image statistics, making the algorithms adaptive. Second, methods were developed for registering the enhanced visual images to their corresponding thermal images. Landmarks in the images were first detected, and a subset of those landmarks was selected to compute a transformation matrix for the registration. Finally, a genetic algorithm was proposed to fuse the registered visual and thermal images. Experimental results showed that image quality can be significantly improved using the proposed framework.
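The genetic-algorithm fusion step described in this abstract can be sketched in miniature. This is a hedged illustration only, not the dissertation's actual method: it assumes a single global fusion weight evolved to maximize the entropy of the fused image, whereas a real system would evolve richer parameterizations (per-region or per-band weights). The names `entropy` and `ga_fuse` are hypothetical.

```python
import numpy as np

def entropy(img, bins=64):
    """Shannon entropy of an image with values in [0, 1], used here as a
    simple proxy for the information content of the fused result."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def ga_fuse(visual, thermal, pop_size=20, gens=30, seed=0):
    """Evolve a scalar weight alpha so that fused = alpha*visual + (1-alpha)*thermal
    maximizes entropy: random initial population, truncation selection of the
    fitter half, and Gaussian mutation to refill the population."""
    rng = np.random.default_rng(seed)
    pop = rng.random(pop_size)                       # candidate alphas in [0, 1]
    for _ in range(gens):
        fitness = np.array([entropy(a * visual + (1 - a) * thermal) for a in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]   # keep the fitter half
        children = np.clip(parents + rng.normal(0, 0.05, parents.size), 0, 1)
        pop = np.concatenate([parents, children])
    best = max(pop, key=lambda a: entropy(a * visual + (1 - a) * thermal))
    return float(best), best * visual + (1 - best) * thermal
```

Entropy is only one possible fitness; mutual information with the source images or a no-reference quality metric would be equally plausible choices under this scheme.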

    Polar Fusion Technique Analysis for Evaluating the Performances of Image Fusion of Thermal and Visual Images for Human Face Recognition

    This paper presents a comparative study of two different methods based on fusion and polar transformation of visual and thermal images. The investigation addresses the challenges of face recognition, which include pose variations, changes in facial expression, partial occlusions, variations in illumination, rotation through different angles, and changes in scale. To overcome these obstacles, two different fusion techniques were implemented and thoroughly examined through rigorous experimentation. In the first method, the log-polar transformation is applied to the fused images obtained after fusion of the visual and thermal images, whereas in the second method, fusion is applied to log-polar transformed individual visual and thermal images. Principal Component Analysis (PCA) is then applied to the fused images, obtained in either form, to reduce their dimensionality. Log-polar transformed images are capable of handling the complications introduced by scaling and rotation. The main objective of employing fusion is to produce a fused image that provides more detailed and reliable information, overcoming the drawbacks present in the individual visual and thermal face images. Finally, the reduced fused images are classified using a multilayer perceptron neural network. The database used for the experiments is the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) benchmark database of thermal and visual face images. The second method showed better performance, with a maximum correct recognition rate of 95.71% and an average of 93.81%.
    Comment: Proceedings of IEEE Workshop on Computational Intelligence in Biometrics and Identity Management (IEEE CIBIM 2011), Paris, France, April 11 - 15, 2011
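The log-polar-then-PCA stage common to both methods can be sketched as follows. This is an assumption-laden simplification: nearest-neighbour log-polar resampling and plain SVD-based PCA, with `log_polar` and `pca_reduce` as hypothetical names; the paper's exact resampling scheme and retained dimensionality are not reproduced here.

```python
import numpy as np

def log_polar(img, out_shape=(64, 64)):
    """Resample an image onto a log-polar grid centred on the image centre.
    Scaling of the input becomes a shift along the radial axis and rotation
    a shift along the angular axis, which is what makes this representation
    robust to those two variations."""
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    n_r, n_theta = out_shape
    max_r = np.hypot(cy, cx)
    r = np.exp(np.linspace(0.0, np.log(max_r), n_r))      # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    ys = np.clip((cy + np.outer(r, np.sin(theta))).astype(int), 0, h - 1)
    xs = np.clip((cx + np.outer(r, np.cos(theta))).astype(int), 0, w - 1)
    return img[ys, xs]          # nearest-neighbour sampling for brevity

def pca_reduce(X, k):
    """Project row vectors in X onto their top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

In the paper's pipeline, the rows of `X` would be flattened (fused or to-be-fused) log-polar images, and the reduced vectors would feed the multilayer perceptron classifier.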

    The Science of Disguise

    Technological advances have made digital cameras ubiquitous, to the point where it is difficult to purchase even a mobile phone without one. Coupled with similar advances in face recognition technology, we are seeing a marked increase in the use of biometrics, such as face recognition, to identify individuals. However, remaining unrecognized in an era of ubiquitous camera surveillance remains desirable to some citizens, notably those concerned with privacy. Since biometrics are an intrinsic part of a person's identity, it may be that the only means of evading detection is through disguise. We have created a comprehensive database of high-quality imagery that will allow us to explore the effectiveness of disguise as an approach to avoiding unwanted recognition. Using this database, we have evaluated the performance of a variety of automated machine-based face recognition algorithms on disguised faces. Our data-driven analysis finds that for the sample population contained in our database: (1) disguise is effective; (2) there are significant performance differences between individuals and demographic groups; and (3) elements including coverage, contrast, and disguise combination are determinative factors in the success or failure of face recognition algorithms on an image. In this dissertation, we examine the present-day uses of face recognition and their interplay with privacy concerns. We sketch the capabilities of a new database of facial imagery, unique both in the diversity of the imaged population, and in the diversity and consistency of disguises applied to each subject. We provide an analysis of disguise performance based on both a highly-rated commercial face recognition system and an open-source algorithm available to the FR community. Finally, we put forth hypothetical models for these results, and provide insights into the types of disguises that are the most effective at defeating facial recognition for various demographic populations.
As cameras become more sophisticated and algorithms become more advanced, disguise may become less effective. For security professionals, this is a laudable outcome; privacy advocates will certainly feel differently.

    Hyper-realistic Face Masks in a Live Passport-Checking Task

    Hyper-realistic face masks have been used as disguises in at least one border crossing and in numerous criminal cases. Experimental tests using these masks have shown that viewers accept them as real faces under a range of conditions. Here, we tested mask detection in a live identity verification task. Fifty-four visitors at the London Science Museum viewed a mask wearer at close range (2 m) as part of a mock passport check. They then answered a series of questions designed to assess mask detection, while the masked traveller was still in view. In the identity matching task, 8% of viewers accepted the mask as matching a real photo of someone else, and 82% accepted the match between the masked person and a masked photo. When asked if there was any reason to detain the traveller, only 13% of viewers mentioned a mask. A further 11% picked disguise from a list of suggested reasons. Even after reading about mask-related fraud, 10% of viewers judged that the traveller was not wearing a mask. Overall, mask detection was poor and was not predicted by unfamiliar face matching performance. We conclude that hyper-realistic face masks could go undetected during live identity checks.

    Mitigating the effect of covariates in face recognition

    Current face recognition systems capture faces of cooperative individuals in controlled environments as part of the face recognition process. It is therefore possible to control the lighting, pose, background, and quality of the images. In real-world applications, however, we have to deal with both ideal and imperfect data, and the performance of current face recognition systems suffers in such non-ideal and challenging cases. This research focuses on designing algorithms that mitigate the effect of covariates in face recognition.
    To address the challenge of facial aging, an age transformation algorithm is proposed that registers two face images and minimizes the aging variations. Unlike the conventional method, the gallery face image is transformed with respect to the probe face image, and facial features are extracted from the registered gallery and probe face images. Variations due to disguise change visual perception, alter the actual data, make pertinent facial information disappear, mask features to varying degrees, or introduce extraneous artifacts into the face image. To recognize face images with variations due to age progression and disguise, a granular face verification approach is designed that uses a dynamic feed-forward neural architecture to extract 2D log-polar Gabor phase features at different granularity levels. The granular levels provide non-disjoint spatial information, which is combined using the proposed likelihood-ratio-based Support Vector Machine match score fusion algorithm. The face verification algorithm is validated using five face databases, including the Notre Dame face database, the FG-Net face database, and three disguise face databases.
    The information in visible spectrum images is compromised by improper illumination, whereas infrared images provide invariance to illumination and expression. A multispectral face image fusion algorithm is proposed to address the variations in illumination. The Support Vector Machine based image fusion algorithm learns the properties of the multispectral face images at different resolution and granularity levels to determine the optimal information, and combines them to generate a fused image. Experiments on the Equinox and Notre Dame multispectral face databases show that the proposed algorithm outperforms existing algorithms.
    A face mosaicing algorithm is next proposed to address the challenge of pose variations. The mosaicing algorithm generates a composite face image during enrollment using the evidence provided by frontal and semi-profile face images of an individual. Face mosaicing obviates the need to store multiple face templates representing multiple poses of a user's face image. Experiments conducted on three different databases indicate that face mosaicing offers significant benefits by accounting for the pose variations commonly observed in face images.
    Finally, the concept of online learning is introduced to address the problem of classifier re-training and update. A learning scheme for the Support Vector Machine is designed to train the classifier in online mode. This enables the classifier to update the decision hyperplane to account for newly enrolled subjects. On a heterogeneous near-infrared face database, a case study using Principal Component Analysis and C2 feature algorithms shows that the proposed online classifier significantly improves verification performance in terms of both accuracy and computational time.
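The likelihood-ratio idea behind the match score fusion can be sketched for a single matcher. This is a hedged simplification: one-dimensional Gaussian models of genuine and impostor score distributions, whereas the dissertation fuses scores from multiple granularity levels with an SVM; `fit_gaussian` and `log_likelihood_ratio` are hypothetical names, not the authors' code.

```python
import numpy as np

def fit_gaussian(scores):
    """Fit a 1-D Gaussian (mean, std) to a set of training match scores."""
    return float(np.mean(scores)), float(np.std(scores) + 1e-9)

def log_likelihood_ratio(score, genuine_params, impostor_params):
    """Log of p(score | genuine) / p(score | impostor). Positive values favour
    a genuine match; in a fusion setting, such ratios from several matchers
    would be combined (here the SVM step is omitted)."""
    def logpdf(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))
    mu_g, sd_g = genuine_params
    mu_i, sd_i = impostor_params
    return logpdf(score, mu_g, sd_g) - logpdf(score, mu_i, sd_i)
```

Thresholding the log-ratio at zero corresponds to the Neyman-Pearson decision rule under equal priors; the SVM in the dissertation effectively learns a more flexible decision boundary over several such scores.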

    Deep Spiking Neural Network for Video-based Disguise Face Recognition Based on Dynamic Facial Movements

    With the increasing popularity of social media and smart devices, the face, as one of the key biometrics, becomes vital for person identification. Among face recognition algorithms, video-based methods can make use of both temporal and spatial information, just as humans do, to achieve better classification performance. However, they cannot identify individuals when certain key facial areas like the eyes or nose are disguised by heavy makeup or rubber/digital masks. To this end, we propose a novel deep spiking neural network architecture in this study. It takes dynamic facial movements, the facial muscle changes induced by speaking or other activities, as its sole input. An event-driven continuous spike-timing-dependent plasticity learning rule with adaptive thresholding is applied to train the synaptic weights. Experiments on our proposed video-based disguise face database (MakeFace DB) demonstrate that the proposed learning method performs very well, achieving correct classification rates from 95% to 100% under various realistic experimental scenarios.
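The core of a spike-timing-dependent plasticity (STDP) rule like the one named in this abstract can be sketched as a pair-based weight update. This is a generic textbook form with illustrative constants, not the paper's event-driven continuous rule or its adaptive-thresholding mechanism; `stdp_update` is a hypothetical name.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic spike (dt = t_post - t_pre > 0), depress otherwise, with an
    exponential dependence on the spike-time difference. Constants are
    illustrative, not taken from the paper."""
    if dt > 0:
        w = w + a_plus * np.exp(-dt / tau)    # pre before post: strengthen
    else:
        w = w - a_minus * np.exp(dt / tau)    # post before pre: weaken
    return float(np.clip(w, w_min, w_max))
```

A slight asymmetry (`a_minus > a_plus`) is a common stabilizing choice, biasing uncorrelated inputs toward depression so that weights do not saturate.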