
    Image-Quality-Based Adaptive Face Recognition

    The accuracy of automated face recognition systems is greatly affected by intraclass variations between the enrollment and identification stages. In particular, changes in lighting conditions are a major contributor to these variations. Common approaches to addressing the effects of varying lighting conditions include preprocessing face images to normalize intraclass variations and the use of illumination-invariant face descriptors. Histogram equalization is a widely used technique in face recognition to normalize variations in illumination. However, normalizing well-lit face images can decrease recognition accuracy. The multiresolution property of wavelet transforms is used in face recognition to extract facial feature descriptors at different scales and frequencies. The high-frequency wavelet subbands have been shown to provide illumination-invariant face descriptors, whereas the approximation (low-frequency) subbands have been shown to be a better feature representation for well-lit face images. Fusing match scores from low- and high-frequency-based face representations has been shown to improve recognition accuracy under varying lighting conditions; however, the selection of fusion parameters for different lighting conditions remains unsolved. Motivated by these observations, this paper presents adaptive approaches to face recognition that overcome the adverse effects of varying lighting conditions. Image quality, measured in terms of luminance distortion relative to a known reference image, is used as the basis for adaptively applying global and regional illumination normalization procedures. Image quality is also used to adaptively select fusion parameters for wavelet-based multistream face recognition.
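    As a rough illustration of the quality-based adaptation described above, the snippet below computes the luminance-distortion component of a reference-based quality index and only equalizes images that fall below a threshold. The threshold value and the use of plain global histogram equalization are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def luminance_distortion(img, ref):
    """Luminance-distortion quality term 2*mu_x*mu_y / (mu_x^2 + mu_y^2):
    equals 1 when the mean luminances of image and reference match."""
    mx, my = img.mean(), ref.mean()
    return 2 * mx * my / (mx ** 2 + my ** 2)

def adaptive_normalize(img, ref, threshold=0.9):
    """Hypothetical adaptive rule: equalize only images whose quality
    relative to a well-lit reference falls below `threshold`."""
    if luminance_distortion(img, ref) < threshold:
        # global histogram equalization for 8-bit grayscale
        hist = np.bincount(img.ravel().astype(np.uint8), minlength=256)
        cdf = hist.cumsum() / img.size
        return (cdf[img.astype(np.uint8)] * 255).astype(np.uint8)
    return img  # well-lit image: leave untouched to avoid hurting accuracy
```

    The rule above captures the abstract's key observation: normalization helps poorly lit probes but can hurt well-lit ones, so it should be applied conditionally.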

    Illumination Insensitive Face Recognition Using Gradientfaces

    The performance of most existing face recognition methods is highly sensitive to illumination variation and degrades seriously when the training or testing faces are captured under variable lighting. Illumination variation is therefore one of the most significant factors affecting the performance of face recognition and has received much attention in recent years. In this paper we propose a novel method, called Gradientfaces, for face recognition under varying illumination, applicable even when the strength, direction, and number of light sources are unknown. The proposed method extracts an illumination-insensitive measure, which is then used for face recognition. Its merits are that it requires neither any lighting assumption nor any training images. The Gradientfaces method reaches a very high recognition rate of 98.96% on the Yale B face database. Furthermore, experimental results on the Yale database validate that Gradientfaces is also insensitive to image noise and object artifacts such as facial expression.
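    The core of the Gradientfaces representation, the orientation of the smoothed image gradient, can be sketched in a few lines. The 3x3 box smoothing below is a stand-in for the Gaussian kernel the method uses; the illumination insensitivity comes from the gradient ratio, where a locally multiplicative lighting factor cancels.

```python
import numpy as np

def gradientfaces(img):
    """Illumination-insensitive representation: the orientation of the
    image gradient, arctan(I_y / I_x). A (locally) multiplicative
    illumination factor scales I_x and I_y equally and cancels in the
    ratio. Smoothing suppresses noise before differentiation."""
    img = img.astype(np.float64)
    h, w = img.shape
    # 3x3 box smoothing as a simple stand-in for the Gaussian kernel
    pad = np.pad(img, 1, mode="edge")
    smooth = sum(pad[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    iy, ix = np.gradient(smooth)
    return np.arctan2(iy, ix)  # four-quadrant angle, insensitive to scale
```

    Because every step before the `arctan2` is linear, multiplying the input image by any positive constant leaves the representation unchanged.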

    Human Face Recognition

    Face recognition, as the main biometric used by human beings, has become increasingly popular over the last twenty years. Automatic recognition of human faces has many commercial and security applications in identity validation and recognition, and it has become one of the hottest topics in the area of image processing and pattern recognition since 1990. The availability of feasible technologies, as well as the increasing demand for reliable security systems in today's world, has motivated many researchers to develop new methods for face recognition. In automatic face recognition we wish to identify or verify one or more persons in still or video images of a scene by means of a stored database of faces. One of the important features of face recognition is its non-intrusive and non-contact property, which distinguishes it from other biometrics, like iris or fingerprint recognition, that require the subjects' participation. During the last two decades several face recognition algorithms and systems have been proposed and some major advances have been achieved. As a result, the performance of face recognition systems under controlled conditions has now reached a satisfactory level. These systems, however, face some challenges in environments with variations in illumination, pose, expression, etc. The objective of this research is to design a reliable automated face recognition system which is robust under varying conditions of noise level, illumination, and occlusion. A new method for illumination-invariant feature extraction based on the illumination-reflectance model is proposed which is computationally efficient and does not require any prior information about the face model or illumination. A weighted voting scheme is also proposed to enhance the performance under illumination variations and to compensate for occlusions.
The proposed method uses mutual information and entropy of the images to generate different weights for a group of ensemble classifiers based on the input image quality. The method yields outstanding results by reducing the effect of both illumination and occlusion variations in the input face images.
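The quality-driven weighting idea can be illustrated as follows. Deriving weights directly from histogram entropy, rather than from the paper's mutual-information scheme, is an assumption made here for illustration; the fusion is a simple weighted sum of per-classifier similarity scores.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the grayscale histogram -- one simple
    proxy for input image quality."""
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def weighted_vote(scores, weights):
    """Fuse per-classifier score vectors with quality-driven weights.
    scores[c][k] is classifier c's similarity for identity k; returns
    the index of the winning identity."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    fused = (w[:, None] * np.asarray(scores, float)).sum(axis=0)
    return int(fused.argmax())
```

    In an ensemble, a classifier fed a degraded (low-entropy, e.g. heavily shadowed or occluded) region would receive a correspondingly small weight.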

    Multispectral Imaging For Face Recognition Over Varying Illumination

    This dissertation addresses the advantage of using multispectral narrow-band images over conventional broad-band images for improved face recognition under varying illumination. To verify the effectiveness of multispectral images for improving face recognition performance, three sequential procedures are carried out: multispectral face image acquisition, image fusion of the multispectral bands, and spectral band selection to remove information redundancy. Several efficient image fusion algorithms are proposed and applied to spectral narrow-band face images in comparison with conventional images. Physics-based weighted fusion and illumination-adjustment fusion make good use of the spectral information in the multispectral imaging process. The results demonstrate that fused narrow-band images outperform conventional broad-band images under varying illumination. In the case where multispectral images are acquired over severe changes in daylight, the fused images outperform conventional broad-band images by up to 78%. The success of fusing multispectral images lies in the fact that multispectral images can separate the illumination information from the reflectance of objects, which is impossible for conventional broad-band images. To reduce the information redundancy among multispectral images and simplify the imaging system, distance-based band selection is proposed, in which a quantitative evaluation metric is defined to evaluate and differentiate the performance of multispectral narrow-band images. This method proves exceptionally robust to parameter changes. Furthermore, complexity-guided distance-based band selection is proposed, using a model selection criterion for automatic selection. The selected bands outperform conventional images by up to 15%.
The significant performance improvement achieved via distance-based and complexity-guided distance-based band selection shows that specific facial information carried in certain narrow-band spectral images can enhance face recognition performance compared to broad-band images. In addition, both algorithms are shown to be independent of the recognition engine. Significant performance improvement is achieved by the proposed image fusion and band selection algorithms under varying illumination, including outdoor daylight conditions. The proposed imaging system and image processing algorithms open a new avenue toward automatic face recognition systems with better recognition performance than conventional peer systems under varying illumination.
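Distance-based band selection can be sketched as ranking spectral bands by a separation measure between genuine and impostor match scores and keeping the best ones. The specific metric below (a normalized difference of means) is a stand-in, since the abstract does not specify the dissertation's evaluation metric.

```python
import numpy as np

def band_separation(genuine, impostor):
    """Stand-in discriminability score for one band: the distance between
    genuine and impostor match-score means, normalized by their spread.
    Larger values indicate a more useful band."""
    g, i = np.asarray(genuine, float), np.asarray(impostor, float)
    return abs(g.mean() - i.mean()) / np.sqrt(g.var() + i.var() + 1e-12)

def select_bands(scores_per_band, k):
    """scores_per_band: one (genuine_scores, impostor_scores) pair per
    spectral band. Returns indices of the k most discriminative bands."""
    seps = [band_separation(g, i) for g, i in scores_per_band]
    return sorted(np.argsort(seps)[::-1][:k].tolist())
```

    Because the ranking only consumes match scores, a selector of this shape is independent of the recognition engine that produced them, which mirrors the abstract's claim.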

    Robust Face Alignment for Illumination and Pose Invariant Face Recognition

    In building a face recognition system for real-life scenarios, one usually faces the problem of selecting a feature space and preprocessing methods, such as alignment, under varying illumination conditions and poses. In this study, we developed a robust face alignment approach based on the Active Appearance Model (AAM) by inserting an illumination normalization module into the standard AAM search procedure and inserting different poses of the same identity into the training set. The modified AAM search can now handle both illumination and pose variations in the same epoch, and hence provides better convergence in both the point-to-point and point-to-curve senses. We also investigate how face recognition performance is affected by the selection of feature space as well as by the proposed alignment method. The experimental results show that the combined pose alignment and illumination normalization methods increase the recognition rates considerably for all feature spaces.
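    The effect of inserting a photometric normalization module into a model-search step can be illustrated with a toy matching residual. Zero-mean, unit-variance texture normalization is a common AAM choice and is assumed here; the authors' actual module may differ.

```python
import numpy as np

def normalize_texture(t):
    """Photometric normalization of a sampled texture vector (zero mean,
    unit variance) -- the kind of illumination module one can insert
    into a model search loop."""
    t = np.asarray(t, float)
    s = t.std()
    return (t - t.mean()) / (s if s > 0 else 1.0)

def search_step(sampled_texture, model_texture):
    """One toy matching step: the residual between the normalized sampled
    texture and the normalized model texture would drive the parameter
    update in a full search. Returns the residual and its energy."""
    r = normalize_texture(sampled_texture) - normalize_texture(model_texture)
    return r, float((r ** 2).sum())
```

    With this module in place, a global brightness or contrast change in the sampled texture no longer contributes to the residual, so the search converges on shape rather than lighting.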

    Illumination invariant face recognition

    Few of the face recognition methods reported in the literature are capable of recognising faces under varying illumination conditions. This paper discusses a method which can achieve a higher recognition rate than existing methods. The novelty of the method is the use of an embossing technique to process a face image before presenting it to a standard face recognition system. Using a large database of face images, the performance of the proposed method is evaluated by comparing it against three existing methods. The experimental results demonstrate the success of the proposed method.
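    An embossing preprocessing step can be sketched as filtering with an emboss kernel plus a mid-gray offset. The particular kernel, offset, and use of cross-correlation below are common choices, not necessarily the paper's.

```python
import numpy as np

# One common emboss kernel (coefficients sum to zero); the paper's exact
# kernel is not given in the abstract.
EMBOSS = np.array([[-1, -1, 0],
                   [-1,  0, 1],
                   [ 0,  1, 1]], float)

def emboss(img, kernel=EMBOSS, offset=128):
    """Cross-correlate the image with an emboss kernel and add a mid-gray
    offset, producing a relief-like image dominated by local intensity
    differences rather than absolute illumination."""
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * pad[i:i + h, j:j + w]
    return np.clip(out + offset, 0, 255)
```

    Because the kernel's coefficients sum to zero, any constant illumination offset maps to the same mid-gray output, which is what makes embossing useful as an illumination-insensitive front end.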

    An Illumination Invariant Accurate Face Recognition with Down Scaling of DCT Coefficients

    In this paper, a novel approach to illumination normalization under varying lighting conditions is presented. Our approach exploits the fact that the low-frequency discrete cosine transform (DCT) coefficients correspond to illumination variations in a digital image. Images captured under varying illumination may have low contrast, so we first apply histogram equalization for contrast stretching. The low-frequency DCT coefficients are then scaled down to compensate for the illumination variations. The scaling-down factor and the number of low-frequency DCT coefficients to be rescaled are determined experimentally. Classification is performed using k-nearest-neighbor and nearest-mean classifiers on the images obtained by applying the inverse DCT to the processed coefficients. The correlation coefficient and the Euclidean distance obtained using principal component analysis are used as distance metrics in classification. We have tested our face recognition method on the Yale face database B. The results show that our method performs without any error (100% recognition) even under the most extreme illumination variations. Various schemes for illumination normalization under varying lighting conditions exist in the literature, but none has been claimed to give a 100% recognition rate under all illumination variations for this database. The proposed technique is computationally efficient and can easily be implemented in a real-time face recognition system.
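    The down-scaling of low-frequency DCT coefficients can be sketched with an orthonormal DCT-II built from scratch. The scaling factor and the size of the rescaled coefficient block below are placeholders for the experimentally chosen values, which the abstract does not report; the image is assumed square for brevity.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C @ C.T == I, so C.T is the inverse."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def rescale_low_freq(img, factor=0.25, keep=3):
    """Scale down the upper-left `keep` x `keep` block of 2-D DCT
    coefficients (the low frequencies, which carry the illumination
    variation) and transform back. `factor` and `keep` stand in for
    the paper's experimentally determined values."""
    n = img.shape[0]
    C = dct_matrix(n)
    coef = C @ img.astype(float) @ C.T   # forward 2-D DCT
    coef[:keep, :keep] *= factor         # attenuate illumination component
    return C.T @ coef @ C                # inverse 2-D DCT
```

    With `factor=1.0` the round trip reproduces the input exactly (up to floating-point error), which is a quick sanity check that the transform pair is correct.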