349 research outputs found

    Image quality-based adaptive illumination normalisation for face recognition

    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions between the enrolment and identification stages contribute significantly to these intra-class variations. A common approach to addressing the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images can decrease recognition accuracy. This paper presents a dynamic approach to illumination normalisation based on face image quality. The quality of a given face image is measured in terms of its luminance distortion, obtained by comparing the image against a known reference face image. Histogram equalisation is applied to a probe image only if its luminance distortion exceeds a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach, in which every image is normalised irrespective of the lighting conditions under which it was acquired.
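    The adaptive scheme described above can be sketched as follows. This is a minimal illustration, assuming the luminance component of the Wang–Bovik universal image quality index as the quality measure; the threshold value is an illustrative assumption, not the paper's tuned parameter.

```python
import numpy as np

def luminance_quality(img, ref):
    """Luminance component of the universal image quality index:
    2*mu_x*mu_y / (mu_x^2 + mu_y^2); close to 1 means similar brightness."""
    mx, my = float(img.mean()), float(ref.mean())
    return (2.0 * mx * my) / (mx ** 2 + my ** 2 + 1e-12)

def equalize(img):
    """Plain histogram equalisation for an 8-bit grayscale image."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = 255.0 * cdf / cdf[-1]          # normalise CDF to [0, 255]
    return cdf[img].astype(np.uint8)

def adaptive_normalise(probe, reference, threshold=0.8):
    """Equalise only when the probe's luminance differs strongly from
    the reference (i.e. luminance distortion is high)."""
    if luminance_quality(probe, reference) < threshold:
        return equalize(probe)
    return probe
```

    A well-lit probe passes through untouched, while a dark probe is equalised; the quality test replaces the blanket normalisation of the conventional pipeline.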

    Image-Quality-Based Adaptive Face Recognition

    The accuracy of automated face recognition systems is greatly affected by intraclass variations between enrollment and identification stages. In particular, changes in lighting conditions is a major contributor to these variations. Common approaches to address the effects of varying lighting conditions include preprocessing face images to normalize intraclass variations and the use of illumination invariant face descriptors. Histogram equalization is a widely used technique in face recognition to normalize variations in illumination. However, normalizing well-lit face images could lead to a decrease in recognition accuracy. The multiresolution property of wavelet transforms is used in face recognition to extract facial feature descriptors at different scales and frequencies. The high-frequency wavelet subbands have shown to provide illumination-invariant face descriptors. However, the approximation wavelet subbands have shown to be a better feature representation for well-lit face images. Fusion of match scores from low- and high-frequency-based face representations have shown to improve recognition accuracy under varying lighting conditions. However, the selection of fusion parameters for different lighting conditions remains unsolved. Motivated by these observations, this paper presents adaptive approaches to face recognition to overcome the adverse effects of varying lighting conditions. Image quality, which is measured in terms of luminance distortion in comparison to a known reference image, will be used as the base for adapting the application of global and region illumination normalization procedures. Image quality is also used to adaptively select fusion parameters for wavelet-based multistream face recognition
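    The quality-driven fusion idea can be illustrated with a simple weighted-sum rule. The linear weighting below is an assumption for illustration only, not the paper's learned fusion parameters: a high-quality (well-lit) probe leans on the approximation-subband score, a low-quality one leans on the illumination-invariant high-frequency score.

```python
def quality_weighted_fusion(score_approx, score_detail, quality):
    """Fuse two match scores using image quality in [0, 1] as the weight.
    quality = 1 means the probe closely matches the well-lit reference,
    so the approximation-subband score dominates; quality = 0 favours
    the high-frequency (illumination-invariant) score."""
    return quality * score_approx + (1.0 - quality) * score_detail
```

    Fixed-weight fusion corresponds to holding `quality` constant regardless of lighting, which is exactly the non-adaptive behaviour the paper argues against.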

    Illumination and Expression Invariant Face Recognition: Toward Sample Quality-based Adaptive Fusion

    The performance of face recognition schemes is adversely affected by significant to moderate variations in illumination, pose, and facial expression. Most existing approaches to face recognition deal with one of these problems by controlling the other conditions. Besides strong efficiency requirements, face recognition systems on constrained mobile devices and PDAs are expected to be robust against all variations in recording conditions that arise naturally from the way such devices are used. Wavelet-based face recognition schemes have been shown to meet the efficiency requirements well. Wavelet transforms decompose face images into different frequency subbands at different scales, each giving rise to a different representation of the face and thereby providing the ingredients for a multi-stream approach to face recognition that stands a real chance of achieving an acceptable level of robustness. This paper is concerned with the best fusion strategy for a multi-stream face recognition scheme. By investigating the robustness of different wavelet subbands against variations in lighting conditions and expression, we demonstrate the shortcomings of current non-adaptive fusion strategies and argue for the need to develop an intelligent, dynamic fusion strategy based on image quality.

    Enhancing face recognition at a distance using super resolution

    Surveillance video is generally characterised by low-resolution and blurred images. Decreases in image resolution lead to the loss of high-frequency facial components, which is expected to adversely affect recognition rates. Super resolution (SR) is a technique used to generate a higher-resolution image from a given low-resolution, degraded image. Dictionary-based super resolution pre-processing techniques have been developed to overcome the problem of low-resolution images in face recognition. However, the super resolution reconstruction process is ill-posed and produces visual artifacts that can be distracting to humans and/or degrade machine feature extraction and face recognition algorithms. In this paper, we investigate the impact on face recognition of two existing super-resolution methods that reconstruct a high-resolution image from single or multiple low-resolution images. We propose an alternative scheme based on dictionaries in high-frequency wavelet subbands. The performance of the proposed method is evaluated on databases of high- and low-resolution images captured under different illumination conditions and at different distances. We demonstrate that the proposed approach at level-3 DWT decomposition has superior performance compared to the other super resolution methods.
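    The core of dictionary-based SR is mapping each low-resolution patch to a high-resolution counterpart via a dictionary of co-trained patch pairs. The sketch below is a deliberate simplification: real schemes use sparse coding over learned patch-pair dictionaries (here, in wavelet subbands), whereas this toy version uses a plain nearest-neighbour lookup; all dictionary contents are made up for illustration.

```python
import numpy as np

def sr_patch_lookup(lr_patch, lr_dict, hr_dict):
    """Toy dictionary-based SR step: find the dictionary atom closest to
    the low-resolution patch and return its paired high-resolution patch.
    lr_dict has shape (n_atoms, lr_patch_size), hr_dict is row-aligned
    with lr_dict and holds the corresponding high-resolution patches."""
    dists = np.linalg.norm(lr_dict - lr_patch.ravel(), axis=1)
    return hr_dict[np.argmin(dists)]
```

    A full reconstruction would slide this lookup over overlapping patches and blend the returned high-resolution patches; the artifacts mentioned in the abstract arise because many high-resolution patches are consistent with one low-resolution observation.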

    LBP based on multi wavelet sub-bands feature extraction used for face recognition

    The strategy of extracting discriminant features from a face image is immensely important to accurate face recognition. This paper proposes a feature extraction algorithm based on wavelets and local binary patterns (LBPs). The proposed method decomposes a face image into multiple frequency sub-bands using the wavelet transform. Each sub-band in the wavelet domain is divided into non-overlapping sub-regions. LBP histograms (LBPHs) based on the traditional 8-neighbour sampling points are then extracted from the approximation sub-band, whilst 4-neighbour sampling points are used to extract LBPHs from the detail sub-bands. Finally, all LBPHs are concatenated into a single feature histogram to effectively represent the face image. Euclidean distance is used to measure the similarity of feature histograms, and the final recognition is performed by the nearest-neighbour classifier. This strategy was tested on two publicly available face databases (Yale and ORL) using different scenarios and different combinations of sub-bands. Results show that the proposed method outperforms traditional LBP-based features.
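    The 8-neighbour LBP step used on the approximation sub-band can be sketched as follows. This is a minimal version of the classic operator (thresholding each of the 8 neighbours against the centre pixel and packing the results into a byte); the wavelet decomposition and region partitioning stages are omitted.

```python
import numpy as np

def lbp_8(img):
    """Classic 8-neighbour LBP codes for the interior pixels of a
    grayscale image: each neighbour >= centre contributes one bit."""
    H, W = img.shape
    c = img[1:-1, 1:-1]                       # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]   # shifted neighbours
        code |= (n >= c).astype(int) << bit
    return code

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes, the per-region descriptor that
    gets concatenated across sub-regions and sub-bands."""
    hist, _ = np.histogram(lbp_8(img), bins=bins, range=(0, bins))
    return hist / hist.sum()
```

    The 4-neighbour variant used on the detail sub-bands would sample only the horizontal and vertical offsets, yielding 16-bin histograms.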

    Construction of dictionaries to reconstruct high-resolution images for face recognition

    This paper presents an investigation into the construction of over-complete dictionaries for reconstructing a super resolution image from a single low-resolution input image, for face recognition at a distance. The ultimate aim is to exploit the recently developed Compressive Sensing (CS) theory to develop scalable face recognition schemes that do not require training. We demonstrate that dictionaries satisfying the Restricted Isometry Property (RIP) used in CS can achieve face recognition accuracy levels as good as those achieved by dictionaries learned from face image databases using elaborate procedures.
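    The appeal of RIP dictionaries is that they need no training: unit-norm random Gaussian atoms satisfy RIP with high probability, and a sparse code over such a dictionary can be recovered with a standard greedy solver. The sketch below uses orthogonal matching pursuit as the recovery step; the dictionary sizes and the choice of OMP are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse code x
    with y ~= D @ x, re-fitting coefficients on the support each step."""
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        residual = y - Ds @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Untrained CS-style dictionary: unit-norm random Gaussian atoms,
# which satisfy RIP with high probability.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)

x0 = np.zeros(256)
x0[[5, 100, 200]] = [1.0, -2.0, 1.5]   # a 3-sparse ground-truth code
x_hat = omp(D, D @ x0, k=3)            # typically recovers x0 exactly
```

    The contrast with learned dictionaries is that `D` here is drawn once at random, so the scheme scales to new galleries without any retraining.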

    eBiometrics: an enhanced multi-biometrics authentication technique for real-time remote applications on mobile devices

    The use of mobile communication devices with advanced sensors is growing rapidly. These sensors enable functions such as image capture, location-based applications, and biometric authentication, including fingerprint verification and face and handwritten signature recognition. Such ubiquitous devices are essential tools in today's global economic activities, enabling anywhere-anytime financial and business transactions. Cryptographic functions and biometric-based authentication can enhance the security and confidentiality of mobile transactions. Biometric template security techniques in real-time biometric-based authentication are key factors for successful identity verification solutions, but they are vulnerable to determined attacks by both fraudulent software and hardware. The EU-funded SecurePhone project has designed and implemented a multimodal biometric user authentication system on a prototype mobile communication device. However, various implementations of this project have resulted in long verification times or reduced accuracy and/or security. This paper proposes the use of built-in self-test techniques to ensure that no tampering has taken place in the verification process prior to performing the actual biometric authentication. These techniques utilise the user's personal identification number as a seed to generate a unique signature, which is then used to test the integrity of the verification process. This study also proposes the use of a combination of biometric modalities to provide application-specific authentication in a secure environment, thus achieving an optimum security level with effective processing time, i.e. ensuring that the authentication steps and algorithms running on the mobile device's application processor cannot be undermined or modified by an impostor to gain unauthorized access to the secure system.
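    The PIN-seeded integrity check described above can be sketched as follows. The abstract does not specify the signature construction, so this sketch assumes an HMAC-SHA-256 keyed by a PIN-derived secret over the verification code; the function names and the scheme itself are illustrative assumptions.

```python
import hashlib
import hmac

def bist_signature(pin: str, verifier_code: bytes) -> str:
    """Derive a signature over the verification routine's binary, keyed
    by a secret derived from the user's PIN (assumed construction)."""
    key = hashlib.sha256(pin.encode("utf-8")).digest()
    return hmac.new(key, verifier_code, hashlib.sha256).hexdigest()

def verify_integrity(pin: str, verifier_code: bytes, expected: str) -> bool:
    """Built-in self-test: recompute the signature and compare it in
    constant time before running the biometric verification itself."""
    return hmac.compare_digest(bist_signature(pin, verifier_code), expected)
```

    Any modification of the verification code changes the signature, so a tampered verifier fails the self-test before biometric matching is attempted.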