8 research outputs found
Curvelet Transform-Based Techniques For Biometric Person Identification
Biometric person identification refers to recognizing a person based on physical or behavioral traits. Palm print based identification is one of the lowest-cost biometric systems, since palm images can be acquired with inexpensive sensors such as desktop scanners and web cameras. Because palm prints are easy to acquire and yield accurate identification, palm images are used in both uni-modal and multimodal biometric systems. A multi-scale, multi-directional representation is desirable for capturing both the thick principal lines and the scattered thin lines of a palm image. Such a representation can also be used in image fusion, where images of two different biometric traits are fused into a single image to improve identification accuracy; for example, face and palm images can be fused to retain the desired high-pass information of the palm image and the low-pass information of the face image. The curvelet transform is a multi-scale, multi-directional geometric transform that represents objects with edges well and requires only a small number of curvelet coefficients to represent curves.
In this thesis, two methods exploiting these desirable characteristics of the curvelet transform are proposed, one for uni-modal and one for bi-modal biometric systems: a palm curvelet code (PCC) for palm print based uni-modal systems, and a pixel-level fusion method for face and palm based bi-modal systems. The PCC is obtained with a simple binary coding technique that captures the structural information in the curvelet directional sub-bands. The performance of the PCC is evaluated in both the identification and verification modes of a palm print based biometric system, and its use in hierarchical identification is then investigated. In the pixel-level fusion scheme for the bi-modal system, face and palm images are fused in the curvelet transform domain using the mean-mean fusion rule. Extensive experiments are carried out on three publicly available palm databases and one face database, and the results, measured with commonly used metrics, show that the proposed methods outperform existing methods.
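The abstract does not give the exact binarization or fusion formulas. As an illustrative sketch only, a sign-based binary code over directional sub-band coefficients and a mean-mean rule (averaging corresponding transform coefficients of the two images) might look as follows; plain NumPy arrays stand in for curvelet sub-bands, and a real implementation would apply an actual curvelet transform first:

```python
import numpy as np

def binary_code(subband):
    """Sign-based binary code: 1 where a coefficient is non-negative.
    A stand-in for the structural coding applied to curvelet
    directional sub-bands (the exact rule is not in the abstract)."""
    return (subband >= 0).astype(np.uint8)

def mean_mean_fusion(coeffs_a, coeffs_b):
    """Mean-mean rule: fuse two transform-domain representations by
    averaging corresponding coefficient arrays."""
    return [(a + b) / 2.0 for a, b in zip(coeffs_a, coeffs_b)]

def hamming_distance(code_a, code_b):
    """Normalized Hamming distance between two binary codes, a common
    matching score for palm codes."""
    return float(np.mean(code_a != code_b))
```

A matcher would compare the query's code against enrolled codes with the Hamming distance and accept the closest match below a threshold.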
Palmprint biometric data acquisition: extracting a consistent Region of Interest (ROI) for method evaluation
Traditionally, personal identification was based on possessions: a physical key, an ID card, a passport, or some form of knowledge-based credential such as a password. All of these are prone to attack, where impersonation of your identity, whether for immediate financial gain or the more serious identity theft, is possible simply through physical possession of the identity device or knowledge of the password.
In contrast, biometric identification attempts to establish who you are. Iris and retina patterns, palmprint, fingerprint, face and voice recognition are well-known examples of biometric attributes. Some biometrics, such as fingerprints, were established in the late 19th century, well before computers were commonplace. Others, such as face, iris and voice recognition, emerged as computer technology and methodologies developed. More recent research has also devoted attention to internal physiological biometrics based on brain activity (electroencephalogram), heart activity (electrocardiogram) and palm vein patterns. Even your personal gait, how you walk, has been investigated. Both security and forensic applications compete to find the best identification method, trading off accuracy against performance depending on the intended application.
This thesis continues previous research to develop a tool for distributed palmprint image data gathering, enabling researchers to concentrate on method evaluation without losing valuable time on data validation. This simple tool will allow palmprint biometric diversity to be gathered across continents. The thesis then establishes how to extract a consistent region of interest (ROI) from palmprint images acquired with a mobile phone or a statically mounted digital camera. The importance of a consistent ROI is examined by applying a simple existing identification method to a known palmprint database. The discussions and conclusions establish the usefulness of this method and outline the final research needed to complete the palmprint acquisition tool for academic research.
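The abstract does not detail the ROI algorithm. A common approach in the palmprint literature locates the valleys between the fingers and crops a square region aligned to that axis; the sketch below shows only the final cropping step, with the two key points assumed to be detected by an earlier stage (hypothetical, and with rotation alignment omitted for brevity):

```python
import numpy as np

def extract_roi(image, p1, p2, size=128):
    """Crop a size x size square ROI from a palmprint image.

    p1, p2: (row, col) key points, e.g. the two finger valleys,
    assumed already detected (detection not shown). The square is
    placed below the midpoint of p1-p2, offset toward the palm
    centre. Rotation to align the p1-p2 axis is omitted."""
    mid_r = (p1[0] + p2[0]) // 2
    mid_c = (p1[1] + p2[1]) // 2
    top = mid_r + size // 4   # shift down toward the palm centre
    left = mid_c - size // 2  # centre the square horizontally
    return image[top:top + size, left:left + size]
```

Cropping every sample with the same geometric rule is what makes the extracted regions comparable across captures, which is the consistency the thesis studies.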
A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an automated computer system for the identification of humans by integrating facial and iris features using Localization, Feature Extraction, Handcrafted and Deep learning Techniques.
Multimodal biometric systems have been widely applied in real-world applications due to their ability to overcome a number of significant limitations of unimodal biometric systems, including sensitivity to noise, limited population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis focuses on combining the face with the left and right irises in a unified hybrid multimodal biometric identification system, using different fusion approaches at the score and rank levels.
Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which merges the advantages of the curvelet transform with the fractal dimension. Secondly, a novel framework merging the advantages of local handcrafted feature descriptors with deep learning approaches, the Multimodal Deep Face Recognition (MDFR) framework, is proposed to address face recognition under unconstrained conditions. Thirdly, an efficient deep learning system, termed IrisConvNet, is employed; its architecture combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from an iris image.
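The abstract names the CNN + Softmax combination but gives no architectural details. As a minimal, hedged sketch of just the Softmax classification head that sits on top of CNN-extracted features (the convolutional layers are omitted; the weights and bias below are hypothetical learned parameters):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

def classify(features, weights, bias):
    """Softmax classifier head on a CNN feature vector.

    features: 1-D feature vector produced by the (omitted) CNN.
    weights, bias: hypothetical learned parameters mapping features
    to per-identity logits. Returns the predicted class index and
    the full probability distribution."""
    probs = softmax(features @ weights + bias)
    return int(np.argmax(probs)), probs
```

At enrollment each identity corresponds to one output class; at recognition the head returns the most probable identity together with a confidence distribution.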
Finally, the performance of the unimodal and multimodal systems is evaluated through extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1 and IITD) and on the SDUMLA-HMT multimodal dataset. The results demonstrate the superiority of the proposed systems over previous work, achieving new state-of-the-art recognition rates on all the employed datasets while requiring less time to recognize a person's identity.
Higher Committee for Education Development in Ira
Handbook of Vascular Biometrics
This open access handbook provides the first comprehensive overview of biometrics that exploit the shape of human blood vessels for recognition, i.e. vascular biometrics, including finger vein recognition, hand/palm vein recognition, retina recognition, and sclera recognition. After an introductory chapter summarizing the state of the art and the availability of commercial systems, open datasets and open source software, individual chapters focus on specific aspects of one of the biometric modalities, including questions of usability, security, and privacy. The book features contributions from both academia and major industrial manufacturers.
Advanced Biometrics with Deep Learning
Biometrics, such as fingerprint, iris, face, hand print, hand vein, speech and gait recognition, have become commonplace as a means of identity management in various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction, and recognition based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that address challenging issues in advanced biometric systems based on deep learning. The 12 papers fall into 4 categories according to biometric modality: face biometrics, medical electronic signals (EEG and ECG), voice print, and others.