
    Illumination and Expression Invariant Face Recognition: Toward Sample Quality-based Adaptive Fusion

    The performance of face recognition schemes is adversely affected by moderate to significant variation in illumination, pose, and facial expression. Most existing approaches deal with one of these problems by controlling the other conditions. Besides strong efficiency requirements, face recognition systems on constrained mobile devices and PDAs are expected to be robust against all the variations in recording conditions that arise naturally from the way such devices are used. Wavelet-based face recognition schemes have been shown to meet the efficiency requirements well. Wavelet transforms decompose face images into different frequency subbands at different scales, each giving rise to a different representation of the face and thereby providing the ingredients for a multi-stream approach to face recognition that stands a real chance of achieving an acceptable level of robustness. This paper is concerned with the best fusion strategy for a multi-stream face recognition scheme. By investigating the robustness of different wavelet subbands against variations in lighting conditions and expression, we demonstrate the shortcomings of current non-adaptive fusion strategies and argue for the need to develop an image-quality-based, intelligent, dynamic fusion strategy.
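    As an illustration of the multi-stream idea (ours, not the paper's code), a one-level 2D Haar decomposition yields the four subband streams, and a fixed weighted sum is exactly the kind of non-adaptive score fusion the paper argues against:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform: returns the LL, LH, HL, HH subbands,
    each a quarter-size representation of the face image."""
    a = (img[0::2] + img[1::2]) / 2.0   # row-wise low-pass
    d = (img[0::2] - img[1::2]) / 2.0   # row-wise high-pass
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def fuse_scores(stream_scores, weights):
    """Non-adaptive weighted-sum fusion of per-subband match scores.
    An adaptive strategy would derive the weights from image quality."""
    return sum(w * s for w, s in zip(weights, stream_scores))
```

    A quality-based scheme would replace the fixed `weights` with values computed per probe image, down-weighting subbands known to degrade under the detected illumination or expression condition.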

    A robust illumination-invariant face recognition based on fusion of thermal IR, maximum filter and visible image

    Face recognition poses many challenges, especially in real-life detection, where maintaining consistently accurate recognition is almost impossible. Even well-established state-of-the-art algorithms produce low recognition accuracy under poor lighting. To create a more robust, illumination-invariant face recognition system, this paper proposes an algorithm using a triple-fusion approach. We also implement a hybrid method that combines the active approach of thermal infrared imaging with the passive approach of the Maximum Filter and the visible image. These approaches allow us to improve image pre-processing as well as feature extraction and face detection, even when a person's face is captured in total darkness. In our experiments, the Extended Yale B database is tested with the Maximum Filter and compared against other state-of-the-art filters. We conducted several experiments on mid-wave and long-wave thermal infrared performance during pre-processing and found it capable of improving recognition beyond what meets the eye. We also found that PCA eigenfaces cannot be produced under poor illumination. Mid-wave thermal imaging captures the body's heat signature, and the Maximum Filter preserves the fine edges that are easily used by classifiers such as SVM or kNN (e.g., as implemented in OpenCV) together with Euclidean distance to perform face recognition. These configurations have been assembled into a portable, robust face recognition system, and the results show that fusing these illumination-invariant processed images during pre-processing performs far better than using the visible, thermal, or maximum-filtered image separately.
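    As an illustrative sketch (ours, not the paper's implementation), a 3x3 Maximum Filter of the kind used here to preserve fine edges can be written with NumPy alone:

```python
import numpy as np

def maximum_filter3(img):
    """3x3 maximum filter with edge replication: every pixel is replaced
    by the largest value in its 3x3 neighbourhood, keeping strong edges."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    windows = np.stack([p[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return windows.max(axis=0)
```

    `scipy.ndimage.maximum_filter(img, size=3)` computes the same result; in the triple-fusion approach this filtered image would then be combined with the thermal and visible images during pre-processing.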

    Robust thermal face recognition using region classifiers

    This paper presents a robust approach to the recognition of thermal face images based on decision-level fusion of 34 different region classifiers. The region classifiers concentrate on local variations and use singular value decomposition (SVD) for feature extraction. The decisions of the region classifiers are fused using a majority-voting technique. The algorithm is tolerant of the false exclusion of thermal information caused by inconsistent distributions of temperature statistics, which generally make identification difficult. The algorithm is extensively evaluated on the UGC-JU thermal face database and the Terravic facial infrared database, with recognition performance of 95.83% and 100%, respectively. A comparative study with existing works in the literature is also provided.
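    The two fusion ingredients described above, per-region SVD features and decision-level majority voting, might be sketched as follows (our illustration; the authors' region layout and classifier are not reproduced here):

```python
import numpy as np
from collections import Counter

def svd_features(region, k=5):
    """Feature vector for one face region: its k largest singular values."""
    s = np.linalg.svd(region, compute_uv=False)  # sorted descending
    return s[:k]

def fuse_by_majority_vote(region_decisions):
    """Decision-level fusion: each region classifier casts one identity
    vote; the identity with the most votes wins."""
    return Counter(region_decisions).most_common(1)[0][0]
```

    With 34 regions, a handful of regions corrupted by inconsistent temperature statistics are simply outvoted, which is the source of the method's tolerance.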

    Neighborhood Defined Feature Selection Strategy for Improved Face Recognition in Different Sensor Modalities

    A novel feature selection strategy for improved face recognition in images with variations due to illumination conditions, facial expressions, and partial occlusions is presented in this dissertation. A hybrid face recognition system that uses feature maps of phase congruency and modular kernel spaces is developed. Phase congruency provides a measure that is independent of the overall magnitude of a signal, making it invariant to variations in image illumination and contrast. A novel modular kernel spaces approach is developed and applied to the phase congruency feature maps. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher-dimensional spaces using kernel methods. The unique modularization procedure developed in this research takes into consideration that facial variations in a real-world scenario are confined to local regions. The additional pixel dependencies that are considered based on their importance provide additional information for classification. This procedure also helps in robust localization of the variations, further improving classification accuracy. The effectiveness of the new feature selection strategy is demonstrated by employing it in two specific applications: face authentication with low-resolution cameras and face recognition using multiple sensors (visible and infrared). The face authentication system uses low-quality images captured by a web camera, whose optical sensor is very sensitive to environmental illumination variations. It is observed that the feature selection policy overcomes the facial and environmental variations. A methodology based on multiple training images and clustering is also incorporated to overcome the additional challenges of computational efficiency and the subject's non-involvement.
    A multi-sensor image fusion based face recognition methodology that uses the proposed feature selection technique is presented in this dissertation. Research studies have indicated that complementary information from different sensors helps improve recognition accuracy compared to individual modalities. A decision-level fusion methodology is also developed which provides better performance than individual modalities as well as data-level fusion. The new decision-level fusion technique is also robust to registration discrepancies, a very important factor in operational scenarios. Research work is progressing to use the new face recognition technique on multi-view images by employing independent systems for separate views and integrating the results with an appropriate voting procedure.

    Multispectral Imaging For Face Recognition Over Varying Illumination

    This dissertation addresses the advantage of using multispectral narrow-band images over conventional broad-band images for improved face recognition under varying illumination. To verify the effectiveness of multispectral images for improving face recognition performance, three sequential procedures are carried out: multispectral face image acquisition, image fusion of the multispectral bands, and spectral band selection to remove information redundancy. Several efficient image fusion algorithms are proposed and evaluated on spectral narrow-band face images in comparison to conventional images. Physics-based weighted fusion and illumination adjustment fusion make good use of the spectral information in the multispectral imaging process. The results demonstrate that fused narrow-band images outperform conventional broad-band images under varying illumination. In the case where multispectral images are acquired over severe changes in daylight, the fused images outperform conventional broad-band images by up to 78%. The success of fusing multispectral images lies in the fact that multispectral images can separate the illumination information from the reflectance of objects, which is impossible for conventional broad-band images. To reduce the information redundancy among multispectral images and simplify the imaging system, distance-based band selection is proposed, in which a quantitative evaluation metric is defined to evaluate and differentiate the performance of multispectral narrow-band images. This method proves exceptionally robust to parameter changes. Furthermore, complexity-guided distance-based band selection is proposed, using a model selection criterion for automatic selection. The performance of the selected bands exceeds that of conventional images by up to 15%.
    From the significant performance improvement achieved via distance-based band selection and complexity-guided distance-based band selection, we show that specific facial information carried in certain narrow-band spectral images can enhance face recognition performance compared to broad-band images. In addition, both algorithms prove to be independent of the recognition engine. Significant performance improvement is achieved by the proposed image fusion and band selection algorithms under varying illumination, including outdoor daylight conditions. Our proposed imaging system and image-processing algorithms open a new avenue toward an automatic face recognition system with better recognition performance than its conventional counterpart under varying illumination.

    Hybrid Approach for Face Recognition Using DWT and LBP

    Authentication of individuals plays a vital role in checking for intrusions in any online digital system. The most common and secure techniques are biometric fingerprint readers and face recognition. Face recognition is the process of identifying individuals by their facial images, since two faces rarely match exactly. A face recognition system compares a test image with a number of training images stored in a database and then concludes whether the test image matches any of them. In this paper we discuss a hybrid of two techniques, local binary patterns (LBP) and the discrete wavelet transform (DWT), applied to face images to extract features that are stored in a database after applying principal component analysis (PCA) for fusion; the same process is applied to test images. A k-nearest-neighbor (KNN) classifier is then used to classify the images and measure accuracy. Our proposed model achieves 95% accuracy. The aim of this paper is to develop a robust method for face recognition and classification of individuals that improves the recognition rate and efficiency of the system while reducing complexity.
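    Two of the pipeline stages above, the LBP feature map and the nearest-neighbor matching step, can be sketched as follows (a minimal illustration of the standard techniques, not the paper's code; the DWT and PCA stages are omitted):

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern: each of the 8 neighbours is
    thresholded against the centre pixel and contributes one bit."""
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= center).astype(np.uint8) << np.uint8(bit)
    return out

def nearest_identity(query_hist, gallery):
    """1-NN over feature histograms: gallery maps identity -> histogram."""
    return min(gallery, key=lambda k: np.abs(gallery[k] - query_hist).sum())
```

    A 256-bin histogram of the LBP map (e.g., `np.bincount(out.ravel(), minlength=256)`) would serve as the feature vector fed to the KNN stage.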

    Face recognition using color local binary pattern from mutually independent color channels

    In this paper, a high-performance face recognition system is proposed based on local binary patterns (LBP) computed from the probability distribution functions (PDFs) of pixels in different mutually independent color channels, which is robust to frontal homogeneous illumination and in-plane rotation. The illumination of faces is enhanced using a state-of-the-art technique based on the discrete wavelet transform (DWT) and singular value decomposition (SVD). After equalization, face images are segmented using the local Successive Mean Quantization Transform (SMQT) followed by a skin-color-based face detection system. The Kullback-Leibler Distance (KLD) between the concatenated PDFs of a given face obtained by LBP and the concatenated PDFs of each face in the database is used as the metric in the recognition process. Various decision fusion techniques are used to improve the recognition rate. The proposed system has been tested on the FERET, HP, and Bosphorus face databases and compared with conventional and state-of-the-art techniques. The recognition rate obtained using the FVF approach on the FERET database is 99.78%, compared with 79.60% and 68.80% for conventional gray-scale LBP and Principal Component Analysis (PCA) based face recognition techniques, respectively. Comment: 11 pages, in EURASIP Journal on Image and Video Processing, 201
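    The KLD-based matching step might look like this (our sketch of the standard formula, with a small epsilon added for numerical safety; the paper's channel concatenation is not reproduced):

```python
import numpy as np

def kl_distance(p, q, eps=1e-12):
    """Kullback-Leibler distance between two discrete PDFs (histograms)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def recognize(probe_pdf, gallery_pdfs):
    """Return the enrolled identity whose concatenated PDF is closest
    (smallest KLD) to the probe's."""
    return min(gallery_pdfs,
               key=lambda name: kl_distance(probe_pdf, gallery_pdfs[name]))
```

    In the full system, `probe_pdf` would be the concatenation of per-channel LBP histograms rather than a single histogram.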

    Robust correlated and individual component analysis

    © 1979-2012 IEEE. Recovering correlated and individual components of two, possibly temporally misaligned, sets of data is a fundamental task in disciplines such as image, vision, and behavior computing, with applications to problems such as multi-modal fusion (via the correlated components) and predictive analysis and clustering (via the individual ones). Here, we study the extraction of correlated and individual components under real-world conditions, namely (i) the presence of gross non-Gaussian noise and (ii) temporally misaligned data. In this light, we propose a method for the Robust Correlated and Individual Component Analysis (RCICA) of two sets of data in the presence of gross, sparse errors. We furthermore extend RCICA to handle temporal incongruities arising in the data. To this end, two suitable optimization problems are solved. The generality of the proposed methods is demonstrated by applying them to four applications, namely (i) heterogeneous face recognition, (ii) multi-modal feature fusion for human behavior analysis (i.e., audio-visual prediction of interest and conflict), (iii) face clustering, and (iv) the temporal alignment of facial expressions. Experimental results on two synthetic and seven real-world datasets indicate the robustness and effectiveness of the proposed methods on these application domains, outperforming other state-of-the-art methods in the field.

    Robust face recognition

    University of Technology Sydney. Faculty of Engineering and Information Technology. Face recognition is one of the most important and promising biometric techniques. In face recognition, a similarity score is automatically calculated between face images to decide their identity. Due to its non-invasive characteristics and ease of use, it has shown great potential in many real-world applications, e.g., video surveillance, access control systems, forensics and security, and social networks. This thesis addresses key challenges inherent in real-world face recognition systems, including pose and illumination variations, occlusion, and image blur. To tackle these challenges, a series of robust face recognition algorithms are proposed. These can be summarized as follows: In Chapter 2, we present a novel, manually designed face image descriptor named “Dual-Cross Patterns” (DCP). DCP efficiently encodes the second-order statistics of facial textures in the most informative directions within a face image. It proves to be more descriptive and discriminative than previous descriptors. We further extend DCP into a comprehensive face representation scheme named “Multi-Directional Multi-Level Dual-Cross Patterns” (MDML-DCPs). MDML-DCPs efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. MDML-DCPs achieves the best performance on the challenging FERET, FRGC 2.0, CAS-PEAL-R1, and LFW databases. In Chapter 3, we develop a deep learning-based face image descriptor named “Multimodal Deep Face Representation” (MM-DFR) to automatically learn face representations from multimodal image data. In brief, convolutional neural networks (CNNs) are designed to extract complementary information from the original holistic face image, the frontal pose image rendered by 3D modeling, and uniformly sampled image patches.
The recognition ability of each CNN is optimized by carefully integrating a number of published or newly developed tricks. A feature level fusion approach using stacked auto-encoders is designed to fuse the features extracted from the set of CNNs, which is advantageous for non-linear dimension reduction. MM-DFR achieves over 99% recognition rate on LFW using publicly available training data. In Chapter 4, based on our research on handcrafted face image descriptors, we propose a powerful pose-invariant face recognition (PIFR) framework capable of handling the full range of pose variations within ±90° of yaw. The framework has two parts: the first is Patch-based Partial Representation (PBPR), and the second is Multi-task Feature Transformation Learning (MtFTL). PBPR transforms the original PIFR problem into a partial frontal face recognition problem. A robust patch-based face representation scheme is developed to represent the synthesized partial frontal faces. For each patch, a transformation dictionary is learnt under the MtFTL scheme. The transformation dictionary transforms the features of different poses into a discriminative subspace in which face matching is performed. The PBPR-MtFTL framework outperforms previous state-of-the-art PIFR methods on the FERET, CMU-PIE, and Multi-PIE databases. In Chapter 5, based on our research on deep learning-based face image descriptors, we design a novel framework named Trunk-Branch Ensemble CNN (TBE-CNN) to handle challenges in video-based face recognition (VFR) under surveillance circumstances. Three major challenges are considered: image blur, occlusion, and pose variation. First, to learn blur-robust face representations, we artificially blur training data composed of clear still images to account for a shortfall in real-world video training data. 
Second, to enhance the robustness of CNN features to pose variations and occlusion, we propose the TBE-CNN architecture, which efficiently extracts complementary information from holistic face images and patches cropped around facial components. Third, to further promote the discriminative power of the representations learnt by TBE-CNN, we propose an improved triplet loss function. With the proposed techniques, TBE-CNN achieves state-of-the-art performance on three popular video face databases: PaSC, COX Face, and YouTube Faces.
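    The abstract does not spell out the improved loss; for reference, the standard triplet loss it builds on can be sketched as follows (our illustration on plain embedding vectors; the margin value is arbitrary):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss on embedding vectors: push the anchor-negative
    distance to exceed the anchor-positive distance by at least `margin`."""
    d_ap = float(np.sum((anchor - positive) ** 2))  # same-identity distance
    d_an = float(np.sum((anchor - negative) ** 2))  # different-identity distance
    return max(0.0, d_ap - d_an + margin)
```

    Training minimizes this over mined triplets, so embeddings of the same identity cluster together while different identities are pushed apart.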