
    A hybrid learning scheme towards authenticating hand-geometry using multi-modal features

    Hand geometry has been used commercially as a biometric authentication trait for over a decade. However, a growing security problem arises from the fluctuation of hand-geometry features during authentication. A review of existing research shows that most techniques rely on a single hand-geometry feature combined with sophisticated learning schemes, where accuracy is achieved at a high computational cost. The proposed study therefore introduces a simplified analytical method that considers multi-modal features extracted from hand geometry in order to further improve recognition robustness. For this purpose, the system implements a hybrid learning scheme combining a convolutional neural network with a Siamese algorithm, where the former is used for feature extraction and the latter for recognizing a person from the authenticated hand geometry. The results show that the proposed scheme offers a 12.2% improvement in accuracy over existing models, demonstrating that a simple amendment, the inclusion of multiple modalities, can significantly improve accuracy without added computational burden.
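
    As a rough illustration of the hybrid scheme described above, the sketch below pairs a small CNN encoder with a Siamese comparison trained with a contrastive loss. The network sizes, input shapes, and loss margin are illustrative assumptions, not the paper's actual architecture.

        # Minimal sketch (not the authors' implementation): a CNN encodes
        # hand-geometry inputs into embeddings, and a Siamese comparison
        # decides whether two samples belong to the same person.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class HandGeometryEncoder(nn.Module):
            """Small CNN that maps a hand-geometry image to an embedding."""
            def __init__(self, embed_dim: int = 128):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.fc = nn.Linear(64, embed_dim)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                return self.fc(self.features(x).flatten(1))

        class SiameseVerifier(nn.Module):
            """Shares one encoder between two inputs and compares embeddings."""
            def __init__(self, encoder: nn.Module):
                super().__init__()
                self.encoder = encoder

            def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
                za, zb = self.encoder(a), self.encoder(b)
                # Euclidean distance; small distance suggests the same person.
                return F.pairwise_distance(za, zb)

        def contrastive_loss(dist, same_label, margin: float = 1.0):
            """Pull genuine pairs together, push impostor pairs apart."""
            pos = same_label * dist.pow(2)
            neg = (1 - same_label) * F.relu(margin - dist).pow(2)
            return (pos + neg).mean()

        if __name__ == "__main__":
            model = SiameseVerifier(HandGeometryEncoder())
            enrolled = torch.randn(4, 1, 64, 64)   # enrolled hand-geometry samples
            probes = torch.randn(4, 1, 64, 64)     # probe samples
            labels = torch.tensor([1., 0., 1., 0.])  # 1 = same person
            print(contrastive_loss(model(enrolled, probes), labels).item())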

    Palmprint Gender Classification Using Deep Learning Methods

    Gender identification is an important technique that can improve the performance of authentication systems by reducing the search space and speeding up the matching process. Several biometric traits have been used to ascertain human gender. Among them, the human palmprint possesses several discriminating features, such as principal lines, wrinkles, ridges, and minutiae, that offer cues for gender identification. The goal of this work is to develop novel deep-learning techniques to determine gender from palmprint images. The PolyU and CASIA palmprint databases, with 90,000 and 5,502 images respectively, were used for training and testing in this research. After ROI extraction and data augmentation, various convolutional and deep learning-based classification approaches were empirically designed, optimized, and tested. Gender classification accuracy as high as 94.87% was achieved on the PolyU palmprint database and 90.70% on the CASIA palmprint database. Optimal performance was achieved by combining two different pre-trained and fine-tuned deep CNNs (VGGNet and DenseNet) through score-level average fusion. In addition, Gradient-weighted Class Activation Mapping (Grad-CAM) was implemented to determine which specific regions of the palmprint are most discriminative for gender classification.
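
    The score-level average fusion of the two fine-tuned networks could be sketched as below. The paper's exact backbones, heads, preprocessing, and fine-tuning schedule are not specified here; the snippet only shows the fusion mechanics, assuming torchvision's stock VGG16 and DenseNet121 with replaced two-class heads.

        # Sketch of score-level average fusion of two CNNs for two-class
        # gender prediction on palmprint ROIs (illustrative assumptions).
        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_CLASSES = 2  # male / female

        # Pre-trained backbones with their classifier heads replaced.
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)

        dense = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        dense.classifier = nn.Linear(dense.classifier.in_features, NUM_CLASSES)

        @torch.no_grad()
        def fused_prediction(x: torch.Tensor) -> torch.Tensor:
            """Average the softmax scores of both networks (score-level fusion)."""
            vgg.eval(); dense.eval()
            p_vgg = torch.softmax(vgg(x), dim=1)
            p_dense = torch.softmax(dense(x), dim=1)
            return (p_vgg + p_dense) / 2.0  # fused class probabilities

        if __name__ == "__main__":
            roi_batch = torch.randn(2, 3, 224, 224)  # palmprint ROIs after augmentation
            print(fused_prediction(roi_batch).argmax(dim=1))  # predicted class per image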

    An adaptive model for multi-modal biometrics decision fusion

    Master's thesis (Master of Engineering)

    Adversarial Learning of Mappings Onto Regularized Spaces for Biometric Authentication

    We present AuthNet: a novel framework for generic biometric authentication which, by learning a regularized mapping instead of a classification boundary, leads to higher performance and improved robustness. The biometric traits are mapped onto a latent space in which authorized and unauthorized users follow simple, well-behaved distributions. In turn, this enables simple and tunable decision boundaries to be employed in order to make a decision. We show that, unlike deep learning and traditional template-based authentication systems, regularizing the latent space to simple target distributions leads to improved performance as measured by Equal Error Rate (EER), accuracy, False Acceptance Rate (FAR), and Genuine Acceptance Rate (GAR). Extensive experiments on publicly available datasets of faces and fingerprints confirm the superiority of AuthNet over existing methods.
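
    A toy rendering of the central idea, mapping inputs to a latent space where authorized samples follow a simple target distribution so that a single tunable threshold separates the classes, might look like the following. The encoder, loss, and radius-based decision rule are assumptions for illustration, not the AuthNet objective itself.

        # Toy sketch: authorized embeddings are regularized toward the origin,
        # unauthorized ones pushed outward, so a tunable radius threshold acts
        # as the decision boundary (sweeping it trades FAR against GAR).
        import torch
        import torch.nn as nn

        class LatentMapper(nn.Module):
            def __init__(self, in_dim: int = 512, latent_dim: int = 32):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(in_dim, 128), nn.ReLU(),
                    nn.Linear(128, latent_dim),
                )

            def forward(self, x):
                return self.net(x)

        def regularization_loss(z, authorized, margin: float = 3.0):
            """Pull authorized embeddings toward the origin; push the rest
            beyond a margin radius."""
            r = z.norm(dim=1)
            pull = authorized * r.pow(2)
            push = (1 - authorized) * torch.relu(margin - r).pow(2)
            return (pull + push).mean()

        def decide(z, threshold: float) -> torch.Tensor:
            """Tunable decision boundary: accept if the embedding lies inside
            a ball of the given radius."""
            return z.norm(dim=1) < threshold

        if __name__ == "__main__":
            mapper = LatentMapper()
            feats = torch.randn(8, 512)  # pre-extracted biometric features
            labels = torch.tensor([1., 1., 0., 0., 1., 0., 1., 0.])
            print(regularization_loss(mapper(feats), labels).item())
            print(decide(mapper(feats), threshold=2.0).tolist())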

    Identification of User Behavioural Biometrics for Authentication using Keystroke Dynamics and Machine Learning

    This thesis focuses on effectively classifying the behavior of users accessing computing devices in order to authenticate them. The authentication is based on keystroke dynamics, which captures the users' behavioral biometrics and applies machine learning to classify them. The users type the strong passcode ".tie5Roanl" to record their typing pattern, and anonymous data from 94 users were collected to carry out the research. From the raw data, features were extracted from the attributes based on the button-pressed and action-timestamp events. A support vector machine classifier performs multi-class classification with a one-vs-one decision function to distinguish the users. To reduce classification error, it is essential to identify the important features in the raw data; to that end, an efficient feature extraction algorithm was developed to obtain high classification performance. To handle the multi-class problem, a random forest classifier is used to identify the users effectively. In addition, mRMR feature selection is applied to increase the classification performance metrics and to confirm the identity of users based on the way they access computing devices. From the results, we conclude that device information and touch pressure contribute effectively to identifying each user; of these, features containing device information increase the performance metrics of the system by adding a token-based authentication layer. Based on the results, random forest yields better classification results for this dataset. The research contributes to the field of cyber-security by forming a robust authentication system using machine learning algorithms.
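
    A condensed sketch of a keystroke-dynamics pipeline of this kind is shown below: hold (dwell) and flight times are derived from press/release timestamps and fed to a random forest. The synthetic timestamps are made up for the example, and scikit-learn's mutual-information SelectKBest stands in for the mRMR selection used in the thesis.

        # Sketch (not the thesis code): timing features per typing sample of
        # the passcode, feature selection, and random-forest classification.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.pipeline import make_pipeline

        def keystroke_features(press_times, release_times):
            """Per-key hold times and between-key flight times."""
            press = np.asarray(press_times)
            release = np.asarray(release_times)
            hold = release - press             # how long each key is held
            flight = press[1:] - release[:-1]  # gap between consecutive keys
            return np.concatenate([hold, flight])

        # Toy data: 3 users x 5 samples of a 10-character passcode.
        rng = np.random.default_rng(0)
        X, y = [], []
        for user in range(3):
            rhythm = rng.uniform(0.08, 0.20, size=10)  # user-specific hold pattern
            for _ in range(5):
                press = np.cumsum(rng.uniform(0.15, 0.35, size=10))
                release = press + rhythm + rng.normal(0, 0.01, size=10)
                X.append(keystroke_features(press, release))
                y.append(user)
        X, y = np.array(X), np.array(y)

        clf = make_pipeline(
            SelectKBest(mutual_info_classif, k=10),  # stand-in for mRMR selection
            RandomForestClassifier(n_estimators=200, random_state=0),
        )
        clf.fit(X, y)
        print(clf.predict(X[:3]))  # predicted user IDs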

    Multimodal Biometric Systems for Personal Identification and Authentication using Machine and Deep Learning Classifiers

    Multimodal biometrics, using machine and deep learning, has recently gained interest over single biometric modalities. This interest stems from the fact that the technique improves recognition and thus provides more security. By combining the strengths of individual biometrics, the fusion of two or more modalities creates a robust recognition system that is resistant to the flaws of any single modality. However, the recognition quality of multimodal systems depends on multiple factors, such as the fusion scheme, fusion technique, feature extraction techniques, and classification method. In machine learning, existing works generally use different algorithms for feature extraction of each modality, which makes the system more complex. Deep learning, with its ability to extract features automatically, has made recognition more efficient and accurate. Studies deploying deep learning algorithms in multimodal biometric systems have tried to find a good compromise between the false acceptance and false rejection rates (FAR and FRR) when choosing the threshold in the matching step. This manual choice is not optimal and depends on the expertise of the solution designer, hence the need to automate this step. From this perspective, the second part of this thesis details an end-to-end CNN algorithm with an automatic matching mechanism. The thesis conducted two studies on face and iris multimodal biometric recognition. The first study proposes a new feature extraction technique for biometric systems based on machine learning: iris and facial feature extraction is performed using the Discrete Wavelet Transform (DWT) combined with Singular Value Decomposition (SVD), and the relevant characteristics of the two modalities are merged to create a pattern for each individual in the dataset. The experimental results show the robustness of the proposed technique and its efficiency when the same feature extraction technique is used for both modalities; the proposed method outperformed the state of the art with an accuracy of 98.90%. The second study proposes a deep learning approach using DenseNet121 and FaceNet for iris and face multimodal recognition with feature-level fusion and a new automatic matching technique. The proposed automatic matching approach does not use a threshold to balance FAR and FRR errors; instead, it uses a trained multilayer perceptron (MLP) model that automatically classifies people into two classes, recognized and unrecognized, ensuring an accurate and fully automatic multimodal recognition process. The results obtained by the DenseNet121-FaceNet model with feature-level fusion and automatic matching are very satisfactory: the proposed deep learning models achieve 99.78% accuracy and 99.56% precision, with an FRR of 0.22% and no FAR errors. The platform solutions proposed and developed in this thesis were tested and validated in two case studies, the central pharmacy of Al-Asria Eye Clinic in Dubai and the Abu Dhabi Police General Headquarters (Police GHQ). The solution allows fast identification of the persons authorized to access different rooms, thus protecting the pharmacy against medication abuse and the red zone at the military site against unauthorized use of weapons.
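
    The DWT-plus-SVD feature extraction with feature-level fusion from the first study could be sketched roughly as below. The wavelet choice, the number of retained singular values, and the image sizes are illustrative assumptions rather than the thesis's actual settings.

        # Sketch: 2-D DWT on each modality, SVD of the approximation sub-band,
        # and feature-level fusion by concatenation (illustrative parameters).
        import numpy as np
        import pywt  # PyWavelets

        def dwt_svd_features(image: np.ndarray, wavelet: str = "haar", k: int = 32):
            """Keep the top-k singular values of the DWT approximation sub-band
            as a compact descriptor of one modality."""
            cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
            singular_values = np.linalg.svd(cA, compute_uv=False)
            return singular_values[:k]

        def fuse(face_img: np.ndarray, iris_img: np.ndarray) -> np.ndarray:
            """Feature-level fusion: concatenate the per-modality descriptors."""
            return np.concatenate([dwt_svd_features(face_img),
                                   dwt_svd_features(iris_img)])

        if __name__ == "__main__":
            face = np.random.rand(128, 128)  # stand-ins for preprocessed ROI images
            iris = np.random.rand(128, 128)
            template = fuse(face, iris)      # pattern stored per individual
            print(template.shape)            # (64,)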