11 research outputs found
Invariant Object Recognition Using Radon-based Transform
The properties of the Radon transform are used to derive a transformation invariant to translation, rotation and scaling. The invariant transformation involves translation compensation, angle representation and a 1-D Fourier transform. The new object recognition method is studied experimentally in two domains, mammogram label recognition and face recognition. For mammogram labels, the recognition accuracy is 97%, while for faces it reaches 96%.
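The rotation-invariance step can be illustrated in isolation: rotating an object circularly shifts its Radon projections along the angle axis, and taking the 1-D Fourier magnitude removes that shift. A minimal numpy sketch, using a synthetic profile as a stand-in for an actual Radon angle profile (the function name and the 37-degree shift are illustrative):

```python
import numpy as np

def rotation_invariant_signature(angle_profile):
    # Rotating the object circularly shifts its Radon profile along the
    # angle axis; the 1-D Fourier magnitude is invariant to such shifts.
    return np.abs(np.fft.fft(angle_profile))

profile = np.random.rand(180)       # stand-in for a Radon angle profile
shifted = np.roll(profile, 37)      # simulated 37-degree rotation
sig_a = rotation_invariant_signature(profile)
sig_b = rotation_invariant_signature(shifted)
print(np.allclose(sig_a, sig_b))    # True
```

This shows only the angle/Fourier step; the paper's full pipeline also includes translation compensation and scale normalisation before this stage.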
Image quality-based adaptive illumination normalisation for face recognition
Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions between the enrolment and identification stages contribute significantly to these intra-class variations. A common approach to addressing the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images can lead to a decrease in recognition accuracy. This paper presents a dynamic approach to illumination normalisation based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing the image against a known reference face image. Histogram equalisation is applied to a probe image if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach, where every image is normalised irrespective of the lighting conditions under which it was acquired.
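The threshold test can be sketched with plain numpy. The distortion measure below is an assumption: the luminance term of Wang and Bovik's universal quality index is one plausible reading of "luminance distortion", and the threshold value is likewise illustrative:

```python
import numpy as np

def luminance_distortion(img, ref):
    # Luminance term of the Wang-Bovik universal quality index (an assumed
    # reading of the paper's measure): 0 means identical mean luminance.
    mu_x, mu_y = img.mean(), ref.mean()
    quality = 2 * mu_x * mu_y / (mu_x ** 2 + mu_y ** 2)
    return 1.0 - quality  # higher value = stronger distortion

def hist_equalize(img):
    # Classic histogram equalisation for an 8-bit grayscale image.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
    return cdf.astype(np.uint8)[img]

def adaptive_normalise(probe, reference, threshold=0.1):
    # Equalise only when the probe deviates enough from the reference;
    # well-lit probes pass through untouched, as the paper advocates.
    if luminance_distortion(probe, reference) > threshold:
        return hist_equalize(probe)
    return probe
```

A dark probe is stretched to the full intensity range, while a probe whose luminance already matches the reference is returned unchanged.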
A survey of face detection, extraction and recognition
The goal of this paper is to present a critical survey of the existing literature on human face recognition over the last 4-5 years. Interest and research activity in face recognition have increased significantly over the past few years, especially after the American airliner tragedy of September 11, 2001. While this growth is largely driven by growing application demands, such as static matching of controlled photographs as in mug-shot matching, credit card verification, surveillance video images, identification for law enforcement, and authentication for banking and security system access, advances in signal analysis techniques, such as wavelets and neural networks, are also important catalysts. As the number of proposed techniques increases, survey and evaluation become important.
Neighborhood Defined Feature Selection Strategy for Improved Face Recognition in Different Sensor Modalities
A novel feature selection strategy for improved face recognition in images with variations due to illumination conditions, facial expressions, and partial occlusions is presented in this dissertation. A hybrid face recognition system that uses feature maps of phase congruency and modular kernel spaces is developed. Phase congruency provides a measure that is independent of the overall magnitude of a signal, making it invariant to variations in image illumination and contrast. A novel modular kernel spaces approach is developed and implemented on the phase congruency feature maps. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The unique modularization procedure developed in this research takes into consideration that the facial variations in a real-world scenario are confined to local regions. The additional pixel dependencies that are considered based on their importance help in providing additional information for classification. This procedure also helps in robust localization of the variations, further improving classification accuracy. The effectiveness of the new feature selection strategy has been demonstrated by employing it in two specific applications: face authentication with low-resolution cameras and face recognition using multiple sensors (visible and infrared).
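The sub-region merging step can be sketched as follows, assuming square sub-blocks and a 2x2 neighbourhood (both illustrative choices; the dissertation's actual block sizes and neighbourhood definition may differ, and the kernel projection that would follow is omitted):

```python
import numpy as np

def modular_features(feature_map, block=8,
                     neighborhood=((0, 0), (0, 1), (1, 0), (1, 1))):
    # Tile the (phase-congruency) feature map into sub-blocks and merge
    # each predefined neighbourhood of blocks into one local feature
    # vector, so later classification stays sensitive to local variations.
    h, w = feature_map.shape
    feats = []
    for r in range(0, h - block, block):
        for c in range(0, w - block, block):
            merged = [feature_map[r + dr * block:r + (dr + 1) * block,
                                  c + dc * block:c + (dc + 1) * block].ravel()
                      for dr, dc in neighborhood]
            feats.append(np.concatenate(merged))
    return np.array(feats)
```

Each merged vector would then be mapped into a higher-dimensional space with a kernel method before classification.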
The face authentication system uses low quality images captured by a web camera. The optical sensor of the web camera is very sensitive to environmental illumination variations. It is observed that the feature selection policy overcomes the facial and environmental variations. A methodology based on multiple training images and clustering is also incorporated to overcome the additional challenges of computational efficiency and the subject's non-involvement. A multi-sensor image fusion based face recognition methodology that uses the proposed feature selection technique is presented in this dissertation. Research studies have indicated that complementary information from different sensors helps in improving the recognition accuracy compared to individual modalities. A decision-level fusion methodology is also developed, which provides better performance than individual modalities as well as data-level fusion. The new decision-level fusion technique is also robust to registration discrepancies, which is a very important factor in operational scenarios.
Research work is progressing to use the new face recognition technique in multi-view images by employing independent systems for separate views and integrating the results with an appropriate voting procedure
Feature extraction and information fusion in face and palmprint multimodal biometrics
Thesis
Multimodal biometric systems that integrate the biometric traits from several
modalities are able to overcome the limitations of single modal biometrics. Fusing
the information at an earlier level by consolidating the features given by different
traits can give a better result due to the richness of information at this stage. In this
thesis, three novel methods are derived and implemented on face and palmprint
modalities, taking advantage of the multimodal biometric fusion at feature level.
The benefits of the proposed methods are an enhanced capability to discriminate
information in the fused features and the capture of all the information required to
improve classification performance. The multimodal biometric system proposed here
consists of several stages: feature extraction, fusion, recognition and
classification.
Feature extraction gathers all important information from the raw images. A
new local feature extraction method has been designed to extract information from
the face and palmprint images in the form of sub block windows. Multiresolution
analysis using Gabor transform and DCT is computed for each sub block window to
produce compact local features for the face and palmprint images. Multiresolution
Gabor analysis captures important information in the texture of the images while
DCT represents the information in different frequency components. Important
features with high discrimination power are then preserved by selecting several low
frequency coefficients in order to estimate the model parameters.
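The per-window analysis described above can be sketched as follows; the filter parameters, block size, and the square low-frequency corner used in place of a true zig-zag scan are all illustrative assumptions:

```python
import numpy as np

def dct2_lowfreq(block, keep=6):
    # 2-D DCT-II in matrix form; keep only low-frequency coefficients
    # (here simply the top-left keep x keep corner, an illustrative choice).
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    coeffs = C @ block @ C.T
    return coeffs[:keep, :keep].ravel()

def gabor_kernel(size=8, theta=0.0, freq=0.25, sigma=3.0):
    # Real part of a Gabor filter: Gaussian envelope times cosine carrier.
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * freq * xr)

def local_features(window):
    # Per sub-block window: one Gabor texture response plus the
    # low-frequency DCT terms, forming a compact local feature.
    gabor_resp = np.abs((window * gabor_kernel(size=window.shape[0])).sum())
    return np.concatenate(([gabor_resp], dct2_lowfreq(window)))
```

A full implementation would apply a bank of Gabor orientations and frequencies per window rather than the single kernel shown here.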
The local features extracted are fused in a new matrix interleaved method. The
new fused feature vector is higher in dimensionality compared to the original feature
vectors from both modalities, thus it carries high discriminating power and contains
rich statistical information. The fused feature vector also has more data points in
the feature space, which is advantageous for the training process using statistical
methods. The underlying statistical information in the fused feature vectors is
captured using a GMM, where a number of model parameters are estimated
from the distribution of the fused feature vectors.
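The interleaving itself can be sketched with numpy, assuming equal-length feature vectors per modality (element-wise alternation is one plausible reading of "matrix interleaved"); fitting the GMM on the fused vectors would then follow with any off-the-shelf EM implementation:

```python
import numpy as np

def interleave_features(face_feats, palm_feats):
    # Interleave the two modality matrices element-wise along the feature
    # axis, doubling the dimensionality of each fused sample.
    assert face_feats.shape == palm_feats.shape
    n, d = face_feats.shape
    fused = np.empty((n, 2 * d), dtype=face_feats.dtype)
    fused[:, 0::2] = face_feats
    fused[:, 1::2] = palm_feats
    return fused

fused = interleave_features(np.array([[1.0, 2.0]]),
                            np.array([[3.0, 4.0]]))
print(fused)  # [[1. 3. 2. 4.]]
```

Interleaving (rather than concatenating) mixes the two modalities' statistics within every local stretch of the vector, which is the property the GMM training then exploits.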
Maximum likelihood score is used to measure a degree of certainty for
recognition, while maximum likelihood score normalization is used for the
classification process. Likelihood score normalization is found to suppress an
imposter's likelihood score when the background model parameters are estimated
from a pool of users that includes the statistical information of an imposter. The
present method achieved its highest recognition accuracies of 97% and 99.7% when
tested on the FERET-PolyU and ORL-PolyU datasets respectively.
Universiti Malaysia Perlis and Ministry of Higher Education Malaysia
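The normalization can be illustrated with one-dimensional Gaussians standing in for the client and background GMMs (a simplification of the thesis's models): the client log-likelihood is offset by the background log-likelihood, so an imposter who scores well against the pooled background model is pulled down:

```python
import numpy as np

def gauss_logpdf(x, mu, var):
    # Log-density of a 1-D Gaussian, evaluated element-wise.
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def normalized_score(x, client, background):
    # Client log-likelihood minus background (pooled-user) log-likelihood:
    # positive favours a genuine claim, negative suggests an imposter.
    return gauss_logpdf(x, *client).sum() - gauss_logpdf(x, *background).sum()

client_model = (0.0, 1.0)   # illustrative (mean, variance) pairs
bg_model = (0.0, 4.0)       # broader background pooled over many users
genuine = np.array([0.1, -0.2])
imposter = np.array([4.0, 5.0])
print(normalized_score(genuine, client_model, bg_model) > 0)   # True
print(normalized_score(imposter, client_model, bg_model) < 0)  # True
```

With GMMs in place of the single Gaussians, the same subtraction yields the normalized score the thesis uses for classification.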