
    UBSegNet: Unified Biometric Region of Interest Segmentation Network

    Digital human identity management can now be seen as a social necessity, as it is required in almost every public sector, such as financial inclusion, security, banking, and social networking. Hence, in today's rapidly evolving world, with so many adversarial entities, relying on a single biometric trait is too optimistic. In this paper, we propose a novel end-to-end Unified Biometric ROI Segmentation Network (UBSegNet) for extracting the region of interest from five different biometric traits: face, iris, palm, knuckle, and 4-slap fingerprint. The architecture of the proposed UBSegNet consists of two stages: (i) trait classification and (ii) trait localization. For these stages, we use a state-of-the-art region-based convolutional neural network (RCNN) comprising three major parts: convolutional layers, a region proposal network (RPN), and classification and regression heads. The model has been evaluated on several large, publicly available biometric databases. To the best of our knowledge, this is the first unified architecture proposed for segmenting multiple biometric traits. It has been tested on around 5000 * 5 = 25,000 images (5000 images per trait) and produces very good results. Our work on unified biometric segmentation opens up vast opportunities in the field of authentication systems based on multiple biometric traits. Comment: 4th Asian Conference on Pattern Recognition (ACPR 2017)
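The two-stage design above (trait classification, then localization) ends with the detector's heads scoring candidate boxes per trait. A minimal sketch of the final selection step, keeping only the highest-scoring proposal for each trait, is shown below; the trait names, scores, and boxes are illustrative stand-ins, not UBSegNet outputs.

```python
# Hedged sketch: picking the best ROI proposal per biometric trait, as the
# classification and regression heads of an RCNN-style detector would after
# scoring the RPN's candidate boxes. All values here are made up for illustration.

def best_roi_per_trait(proposals):
    """proposals: list of (trait_label, score, box) tuples, box = (x1, y1, x2, y2).
    Returns a dict mapping each trait to its highest-scoring box."""
    best = {}
    for trait, score, box in proposals:
        if trait not in best or score > best[trait][0]:
            best[trait] = (score, box)
    return {trait: box for trait, (score, box) in best.items()}

proposals = [
    ("face", 0.91, (10, 10, 110, 140)),
    ("face", 0.72, (12, 8, 105, 135)),   # lower-scoring duplicate, discarded
    ("iris", 0.88, (40, 50, 70, 80)),
]
print(best_roi_per_trait(proposals))
```

In a real detector this step would also apply non-maximum suppression across overlapping boxes; the per-trait argmax above is the simplest version of that idea.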

    Efficient prediction of trait judgments from faces using deep neural networks

    Judgments of people from their faces are often invalid but influence many social decisions (e.g., legal sentencing), making them an important target for automated prediction. Direct training of deep convolutional neural networks (DCNNs) is difficult because of sparse human ratings, but features obtained from DCNNs pre-trained on other classifications (e.g., object recognition) can predict trait judgments within a given face database. However, it remains unknown whether this latter approach generalizes across faces, raters, or traits. Here we directly compare three distinct types of face features and test them across multiple out-of-sample datasets and traits. DCNNs pre-trained on face identification provided features that generalized the best, and models trained to predict a given trait also predicted several other traits. We demonstrate the flexibility, generalizability, and efficiency of using DCNN features to predict human trait judgments from faces, providing an easily scalable framework for automated prediction of human judgment.
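When ratings are sparse, a common way to exploit fixed pre-trained DCNN features, consistent with the approach the abstract describes, is to fit a simple linear model (e.g., ridge regression) from features to mean trait ratings. The sketch below uses synthetic features in place of real DCNN activations, and the regularization strength is an arbitrary choice.

```python
import numpy as np

# Sketch: ridge regression from fixed "DCNN features" to a trait rating.
# The feature matrix X is synthetic; in practice each row would be the
# activation vector of a pre-trained face network for one face image.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                 # 200 faces, 64-dim features
w_true = rng.normal(size=64)                   # hidden linear relationship
y = X @ w_true + 0.1 * rng.normal(size=200)    # simulated trait ratings

lam = 1.0  # ridge penalty (assumed value)
# Closed-form ridge solution: w = (X^T X + lam I)^-1 X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

pred = X @ w
r = np.corrcoef(pred, y)[0, 1]  # in-sample fit quality
print(round(r, 3))
```

On real data one would evaluate out-of-sample (held-out faces, raters, or traits), which is exactly the generalization question the paper tests.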

    Palmprint Gender Classification Using Deep Learning Methods

    Gender identification is an important technique that can improve the performance of authentication systems by reducing the search space and speeding up the matching process. Several biometric traits have been used to ascertain human gender. Among them, the human palmprint possesses several discriminating features, such as principal lines, wrinkles, ridges, and minutiae, that offer cues for gender identification. The goal of this work is to develop novel deep-learning techniques to determine gender from palmprint images. The PolyU and CASIA palmprint databases, with 90,000 and 5,502 images respectively, were used for training and testing in this research. After ROI extraction and data augmentation were performed, various convolutional and deep learning-based classification approaches were empirically designed, optimized, and tested. Gender classification accuracy as high as 94.87% was achieved on the PolyU palmprint database and 90.70% on the CASIA palmprint database. Optimal performance was achieved by combining two different pre-trained and fine-tuned deep CNNs (VGGNet and DenseNet) through score-level average fusion. In addition, Gradient-weighted Class Activation Mapping (Grad-CAM) was implemented to ascertain which specific regions of the palmprint are most discriminative for gender classification.
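Score-level average fusion, the combination rule named in the abstract, simply averages the two networks' per-class probability vectors before taking the argmax. A minimal sketch follows; the probability values and the class labeling are illustrative assumptions, not outputs of the actual VGGNet/DenseNet models.

```python
# Sketch: score-level average fusion of two classifiers' softmax outputs,
# as used to combine the two fine-tuned CNN branches. Values are illustrative.

def fuse_scores(scores_a, scores_b):
    """Average two per-class probability vectors; return (argmax class, fused vector)."""
    fused = [(a + b) / 2.0 for a, b in zip(scores_a, scores_b)]
    return fused.index(max(fused)), fused

# Hypothetical two-class outputs (class labeling is an assumption)
vgg_scores = [0.40, 0.60]
densenet_scores = [0.70, 0.30]
label, fused = fuse_scores(vgg_scores, densenet_scores)
print(label, fused)  # fused = [0.55, 0.45], so class 0 wins
```

The design choice here is that fusion happens after each network produces calibrated class scores, so the two branches can disagree and the more confident one dominates the average.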

    Age-Adaptive Multimodal Biometric Authentication System with Blockchain-based Re-Enrollment

    In the long run, a significant time gap between enrollment and the probe image challenges the model's predictive ability when it has been trained on variant biometric traits. Since variant biometric traits change over time, it is sensible to construct a multimodal biometric authentication system that includes at least one invariant trait, such as the iris. The emergence of deep learning has enabled developers to build classifiers on synthesized age-progressed images, particularly face images, to search for individuals who have been missing for many years and to obtain a comprehensive portrayal of their current appearance. However, in sensitive areas such as the military and banking, where security and confidentiality are of utmost importance, models should be built using real samples, and any variation in biometric traits should trigger an alert for the system and notify the subject about re-enrollment. This paper proposes an algorithm for age adaptation of biometric classifiers using multimodal channels that securely updates the biometric traits while logging the transactions on a blockchain. It emphasizes confidence-score-based re-enrollment of individual subjects when the authenticator module becomes less effective with a particular subject's probe image. This reduces the time, cost, and memory involved in periodic re-enrollment of all subjects. The classifier deployed on the blockchain invokes the appropriate smart contracts and completes this process securely.
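The confidence-score-based trigger described above can be sketched as a simple rule: flag a subject for re-enrollment when their recent match scores drop below a threshold. The threshold, window size, and score scale below are assumptions for illustration, and the blockchain logging step is omitted.

```python
# Sketch: confidence-score-based re-enrollment trigger. A subject is flagged
# when the mean of their last `window` match scores falls below `threshold`.
# In the proposed system, a flag would then invoke a smart contract to log
# the re-enrollment transaction; that part is not shown here.

def needs_reenrollment(recent_scores, threshold=0.6, window=3):
    """recent_scores: match scores in [0, 1], oldest first. Threshold and
    window are assumed values, not taken from the paper."""
    tail = recent_scores[-window:]
    return sum(tail) / len(tail) < threshold

print(needs_reenrollment([0.9, 0.8, 0.55, 0.50, 0.52]))  # True: scores decayed
print(needs_reenrollment([0.9, 0.9, 0.9]))               # False: still reliable
```

Averaging over a window rather than reacting to a single low score avoids triggering costly re-enrollment on one-off capture failures, which matches the paper's goal of avoiding periodic re-enrollment of all subjects.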