
    Hybrid component-based face recognition.

    Master's degree. University of KwaZulu-Natal, Pietermaritzburg. Facial recognition (FR) is a trusted biometric method for authentication. Compared to other biometrics such as the signature, which can be compromised, facial recognition is non-intrusive and can be acquired at a distance in a concealed manner. It plays a significant role in conveying a person's identity in social interaction, and its performance depends largely on factors such as illumination, facial pose, expression, age span, hair, facial wear, and motion. In light of these considerations, this dissertation proposes a hybrid component-based approach that seeks to utilise any successfully detected facial components. It recognises faces at component level using the texture descriptors Grey-Level Co-occurrence Matrix (GLCM), Gabor filters, Speeded-Up Robust Features (SURF) and Scale Invariant Feature Transform (SIFT), and the shape descriptor Zernike moments. The advantage of the texture attributes is their simplicity; however, they cannot completely characterise a face on their own, so the Zernike moments descriptor is used to compute the shape properties of the selected facial components. These descriptors are effective feature representations of facial components and are robust to illumination and pose changes. Experiments were performed on four state-of-the-art facial databases, FERET, FEI, SCface and CMU, with an Error-Correcting Output Code (ECOC) classifier. The results show that component-based facial recognition is more effective than whole-face recognition, and the proposed method achieves a recognition accuracy of 98.75%, performing well compared to other component-based facial recognition approaches.
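
    As an illustration only, and not the dissertation's exact pipeline, the sketch below shows how component-level GLCM texture features could feed an ECOC classifier in Python. It assumes the facial components (eyes, nose, mouth, etc.) have already been detected and cropped into equal-sized 8-bit grayscale patches; scikit-image and scikit-learn are assumed to be available, and the helper names are illustrative.

    # Minimal sketch: GLCM texture features per facial component + ECOC classification.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.multiclass import OutputCodeClassifier
    from sklearn.svm import LinearSVC

    def glcm_features(patch):
        """Contrast, homogeneity, energy and correlation for one uint8 component patch."""
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    def face_descriptor(components):
        """Concatenate per-component GLCM features into one face-level vector."""
        return np.hstack([glcm_features(c) for c in components])

    # train_faces / test_faces: lists of per-face component lists (uint8 patches); labels: person ids.
    def train_and_score(train_faces, train_labels, test_faces, test_labels):
        X_train = np.array([face_descriptor(f) for f in train_faces])
        X_test = np.array([face_descriptor(f) for f in test_faces])
        ecoc = OutputCodeClassifier(LinearSVC(), code_size=2, random_state=0)
        ecoc.fit(X_train, train_labels)
        return ecoc.score(X_test, test_labels)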

    Face Recognition using Segmental Euclidean Distance

    In this paper an attempt has been made to detect the face using the combination of an integral image and a cascaded classifier built with the AdaBoost learning algorithm. The detected faces are passed through a filtering process to discard non-face regions, and each is split into five segments: forehead, eyes, nose, mouth and chin. Each segment is treated as a separate image and its Eigenface (also called principal component analysis, PCA) features are computed. Faces with a slight pose are aligned before segmentation. The test image is segmented in the same way and its PCA features are found. A segmental Euclidean distance classifier matches the test image against the stored ones. The success rate is 88 per cent on the CG (full) database, created from the databases of the California Institute and the Georgia Institute; however, the performance of this approach on the ORL (full) database with the same features is only 70 per cent. For comparison, DCT (full) and fuzzy features are tried on the CG and ORL databases with a well-known classifier, the support vector machine (SVM). Recognition rates with DCT features and the SVM classifier are 3 per cent higher than those obtained with PCA features and the Euclidean distance classifier on the CG database, and recognition improves to 96 per cent with fuzzy features on the ORL database with the SVM. Defence Science Journal, 2011, 61(5), pp. 431-442, DOI: http://dx.doi.org/10.14429/dsj.61.117
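
    A rough Python sketch of the described pipeline is given below: an OpenCV Haar/AdaBoost cascade detects the face, the crop is divided into five horizontal bands standing in for forehead, eyes, nose, mouth and chin, PCA features are computed per segment, and matching sums the per-segment Euclidean distances. The band split, image size and number of PCA components are illustrative assumptions, not values taken from the paper.

    import cv2
    import numpy as np
    from sklearn.decomposition import PCA

    # OpenCV's pre-trained frontal-face Haar cascade (AdaBoost-trained).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_and_segment(gray, size=(100, 100), bands=5):
        """Detect the first face, resize it, and split it into horizontal bands."""
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = faces[0]
        face = cv2.resize(gray[y:y + h, x:x + w], size)
        step = face.shape[0] // bands
        return [face[i * step:(i + 1) * step, :].ravel() for i in range(bands)]

    def fit_segment_pcas(gallery_segments, n_components=20):
        """One PCA model per segment, fitted over all gallery faces."""
        return [PCA(n_components=n_components).fit(
                    np.array([f[s] for f in gallery_segments]))
                for s in range(len(gallery_segments[0]))]

    def segmental_distance(pcas, probe_segments, gallery_segments):
        """Sum of Euclidean distances between per-segment PCA projections."""
        return sum(np.linalg.norm(p.transform([a])[0] - p.transform([b])[0])
                   for p, a, b in zip(pcas, probe_segments, gallery_segments))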

    Biometric Person Identification Using Near-infrared Hand-dorsa Vein Images

    Biometric recognition is becoming more and more important with the increasing demand for security, and more usable with improvements in computer vision and pattern recognition technologies. Hand vein patterns have been recognised as a good biometric measure for personal identification due to many excellent characteristics, such as uniqueness and stability, as well as difficulty to copy or forge. This thesis covers all the research and development aspects of a biometric person identification system based on near-infrared hand-dorsa vein images. Firstly, the design and realisation of an optimised vein image capture device is presented. In order to maximise the quality of the captured images at relatively low cost, infrared illumination and imaging theory are discussed. A database containing 2040 images from 102 individuals, captured by this device, is then introduced. Secondly, image analysis and the customised image pre-processing methods are discussed. The consistency of the database images is evaluated using mean squared error (MSE) and peak signal-to-noise ratio (PSNR). Geometrical pre-processing, including shearing correction and region of interest (ROI) extraction, is introduced to improve image consistency. Image noise is evaluated using total variance (TV) values. Grey-level pre-processing, including grey-level normalisation, filtering and adaptive histogram equalisation, is applied to enhance vein patterns. Thirdly, a gradient-based image segmentation algorithm is compared with popular algorithms from the literature, such as the Niblack and Threshold Image algorithms, to demonstrate its effectiveness in vein pattern extraction. Post-processing methods, including morphological filtering and thinning, are also presented. Fourthly, feature extraction and recognition methods are investigated, with several new approaches based on keypoints and local binary patterns (LBP) proposed. Through comprehensive comparison with other approaches based on structure and texture features, and performance evaluation on the database of 2040 images, the proposed approach based on multi-scale partition LBP is shown to provide the best recognition performance, with an identification rate of nearly 99%. Finally, the whole hand-dorsa vein identification system is presented, with a user interface for administration of user information and for person identification.
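
    The following sketch illustrates the general idea of multi-scale partition LBP on a pre-processed vein ROI, assuming scikit-image is available: the image is divided into grids of several sizes, a uniform-LBP histogram is computed for each cell, and the histograms are concatenated into one descriptor matched by nearest neighbour. The grid sizes, LBP parameters and distance measure are illustrative choices, not the thesis's tuned settings.

    import numpy as np
    from skimage.feature import local_binary_pattern

    def partition_lbp(roi, grids=(2, 4, 8), P=8, R=1):
        """Multi-scale partition LBP: per-cell uniform-LBP histograms, concatenated."""
        lbp = local_binary_pattern(roi, P, R, method="uniform")
        n_bins = P + 2                      # uniform patterns + one "non-uniform" bin
        feats = []
        for g in grids:
            h_step, w_step = roi.shape[0] // g, roi.shape[1] // g
            for i in range(g):
                for j in range(g):
                    cell = lbp[i * h_step:(i + 1) * h_step,
                               j * w_step:(j + 1) * w_step]
                    hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins),
                                           density=True)
                    feats.append(hist)
        return np.hstack(feats)

    def identify(probe_roi, gallery_rois, gallery_ids):
        """Nearest neighbour over L2 distance between descriptors."""
        probe = partition_lbp(probe_roi)
        dists = [np.linalg.norm(probe - partition_lbp(g)) for g in gallery_rois]
        return gallery_ids[int(np.argmin(dists))]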

    Revocable and non-invertible multibiometric template protection based on matrix transformation

    Biometric authentication refers to the use of measurable characteristics (or features) of the human body to provide secure, reliable and convenient access to a computer system or physical environment. These features (physiological or behavioural) are unique to individual subjects because they are usually obtained directly from the owner's body. Multibiometric authentication systems combine two or more biometric modalities to improve performance accuracy, but often without offering adequate protection against security and privacy attacks. This paper proposes a multibiometric matrix-transformation-based technique that protects users of multibiometric systems from security and privacy attacks. The results of security and privacy analyses show that the approach provides high-level template security and user privacy compared to previous one-way transformation techniques.
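
    The paper's specific matrix transformation is not reproduced here; the sketch below only illustrates the general shape of a revocable, key-dependent projection as commonly used in cancellable biometrics: the fused multibiometric feature vector is multiplied by a matrix derived from a user- or application-specific key, so issuing a new key yields a new, unlinkable template, and the dimension-reducing projection cannot be inverted back to the original features. All function names and parameters are assumptions for illustration.

    import numpy as np

    def fuse_features(*feature_vectors):
        """Simple feature-level fusion: z-normalise each modality and concatenate."""
        parts = [(v - v.mean()) / (v.std() + 1e-9)
                 for v in map(np.asarray, feature_vectors)]
        return np.hstack(parts)

    def transform_template(fused, key, out_dim=64):
        """Project the fused vector with a matrix seeded by an integer key;
        a new key produces a completely different (revoked) template."""
        rng = np.random.default_rng(key)
        M = rng.standard_normal((out_dim, fused.size)) / np.sqrt(out_dim)
        return M @ fused

    def match(protected_probe, protected_template, threshold=0.9):
        """Matching happens entirely in the transformed domain (cosine similarity)."""
        cos = protected_probe @ protected_template / (
            np.linalg.norm(protected_probe) * np.linalg.norm(protected_template))
        return cos >= threshold

    # Revocation: issue a new key; the stored template changes, the biometric does not.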

    HUMAN FACE RECOGNITION BASED ON FRACTAL IMAGE CODING

    Human face recognition is an important area in the field of biometrics. It has been an active area of research for several decades, but still remains a challenging problem because of the complexity of the human face. In this thesis we describe fully automatic solutions that can locate faces and then perform identification and verification. We present a solution for face localisation using eye locations. We derive an efficient representation for the decision hyperplane of linear and nonlinear Support Vector Machines (SVMs). For this we introduce the novel concept of ρ and η prototypes. The standard formulation for the decision hyperplane is reformulated and expressed in terms of the two prototypes. Different kernels are treated separately to achieve further classification efficiency and to facilitate adaptation to operate with the fast Fourier transform for fast eye detection. Using the eye locations, we extract and normalise the face for size and in-plane rotations. Our method produces a more efficient representation of the SVM decision hyperplane than the well-known reduced set methods; as a result, our eye detection subsystem is faster and more accurate. The use of fractals and fractal image coding for object recognition has been proposed and used by others. Fractal codes have been used as features for recognition, but this requires taking into account the distance between codes and ensuring the continuity of the code parameters. We use a method based on fractal image coding for recognition, which we call the Fractal Neighbour Distance (FND). The FND relies on the Euclidean metric and the uniqueness of the attractor of a fractal code. An advantage of using the FND over fractal codes as features is that we do not have to worry about the uniqueness of, and distance between, codes; we only require the uniqueness of the attractor, which is already an implied property of a properly generated fractal code. Similar methods to the FND have been proposed by others, but what distinguishes our work is that we investigate the FND in greater detail and use our findings to improve the recognition rate. Our investigations reveal that the FND has some inherent invariance to translation, scale, rotation and changes in illumination. These invariances are image dependent and are affected by the fractal encoding parameters. The parameters that have the greatest effect on recognition accuracy are the contrast scaling factor, the luminance shift factor and the type of range block partitioning. The contrast scaling factor affects the convergence and eventual convergence rate of the fractal decoding process. We propose a novel method of controlling the convergence rate by altering the contrast scaling factor in a controlled manner, which has not been possible before. This helped us improve the recognition rate because, under certain conditions, better results are achievable with a slower rate of convergence. We also investigate the effects of varying the luminance shift factor, and examine three different types of range block partitioning scheme: quad-tree, HV and uniform partitioning. We performed experiments using various face datasets, and the results show that our method indeed performs better than many accepted methods such as eigenfaces. The experiments also show that the FND-based classifier increases the separation between classes. The standard FND is further improved by incorporating localised weights.
    A local search algorithm is introduced to find the best matching local feature using this locally weighted FND. The scores from a set of these locally weighted FND operations are then combined to obtain a global score, which is used as a measure of the similarity between two face images. Each local FND operation possesses the distortion-invariant properties described above; combined with the search procedure, the method has the potential to be invariant to a larger class of non-linear distortions. We also present a set of locally weighted FNDs that concentrate on the upper part of the face encompassing the eyes and nose, a design motivated by the fact that the region around the eyes carries more discriminative information. Better performance is achieved by using different sets of weights for identification and verification. For facial verification, performance is further improved by using normalised scores and client-specific thresholding; in this case our results are competitive with current state-of-the-art methods and in some cases outperform all those to which they were compared. For facial identification, the weighted FND performs better than the standard FND under some conditions; however, it still has its shortcomings on some datasets, where its performance is not much better than that of the standard FND. To alleviate this problem we introduce a voting scheme that operates with normalised versions of the weighted FND. Although this method gives no improvement at lower matching ranks, there are significant improvements at larger matching ranks. Our methods offer advantages over some well-accepted approaches such as eigenfaces, neural networks and those that use statistical learning theory: new faces can be enrolled without re-training over the whole database; faces can be removed from the database without re-training; there are inherent invariances to face distortions; the method is relatively simple to implement; and it is not model-based, so there are no model parameters that need to be tweaked.
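
    The fractal encoder itself is not reproduced here, so encode_fractal() is treated as a hypothetical helper and the code format below is illustrative. The sketch shows one plausible reading of the FND idea: a gallery image's fractal code is applied to the probe for a single decoding pass, and the Euclidean distance between the probe and its mapped version serves as the similarity score, since a probe close to the code's attractor moves little under the transform.

    import numpy as np

    def apply_fractal_code(code, img, block=8):
        """One decoding pass: each range block is rebuilt from the probe's own domain
        block using the stored contrast (s) and luminance (o) parameters.
        Illustrative code entries: (range_y, range_x, domain_y, domain_x, s, o),
        assumed to tile the whole image."""
        out = np.zeros_like(img, dtype=float)
        for (ry, rx, dy, dx, s, o) in code:
            dom = img[dy:dy + 2 * block, dx:dx + 2 * block]
            dom = dom.reshape(block, 2, block, 2).mean(axis=(1, 3))  # 2x2 average downsample
            out[ry:ry + block, rx:rx + block] = s * dom + o
        return out

    def fractal_neighbour_distance(code, probe):
        """FND-style score: distance between the probe and the probe mapped once
        through the gallery image's fractal code (smaller = more similar)."""
        mapped = apply_fractal_code(code, probe.astype(float))
        return np.linalg.norm(mapped - probe)

    # Identification: pre-compute encode_fractal(gallery_img) for each enrolled face
    # (hypothetical encoder), then assign the probe to the gallery code that gives
    # the smallest fractal_neighbour_distance.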