
    AN ENHANCED MULTIMODAL BIOMETRIC SYSTEM BASED ON CONVOLUTIONAL NEURAL NETWORK

    A multimodal biometric system combines more than one biometric modality in a single method in order to overcome the limitations of unimodal biometric systems. In multimodal biometric systems, the use of separate algorithms for feature extraction, feature-level fusion, and classification often leads to complexity and makes the fused biometric features larger in dimension. In this paper, we develop a face-iris multimodal biometric recognition system based on a convolutional neural network for feature extraction, feature-level fusion, training, and matching, in order to reduce dimensionality and error rate and to improve recognition accuracy, making the system suitable for access control. The convolutional neural network is a deep supervised learning model and was employed for training, classification, and testing of the system. The images are preprocessed with standard normalization and then passed through a series of convolutional layers. The developed multimodal biometric system was evaluated on a dataset of 700 iris and face images; the training set contained 600 iris and face images, and 100 iris and face images were used for testing. Experimental results show that, at a learning rate of 0.0001, the multimodal system achieves a recognition accuracy (RA) of 98.33% and an equal error rate (EER) of 0.0006%
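The abstract above describes fusing face and iris features at the feature level before classification. As a minimal illustrative sketch (not taken from the paper; the function names and the use of L2 normalization before concatenation are assumptions), feature-level fusion can be as simple as normalizing each modality's feature vector and concatenating them:

```python
import math

def l2_normalize(v):
    # Scale a feature vector to unit length so that neither
    # modality dominates the fused representation.
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v] if norm else list(v)

def fuse_features(face_feats, iris_feats):
    # Feature-level fusion: normalize each modality's CNN feature
    # vector, then concatenate into a single fused vector that a
    # classifier can be trained on.
    return l2_normalize(face_feats) + l2_normalize(iris_feats)

fused = fuse_features([3.0, 4.0], [1.0, 0.0, 0.0])
print(len(fused))  # 5
```

In practice the concatenated vector is high-dimensional, which is why the paper lets the CNN itself reduce dimensionality rather than handcrafting this step.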

    Retinal biometric identification using convolutional neural network

    Authentication is needed to protect a system from its vulnerabilities and weaknesses. Traditional authentication methods such as PINs or passwords have many weaknesses, for example being susceptible to hacking. Biometric methods are used to address this problem. Retinal identification is unique and difficult to manipulate compared with other biometric characteristics such as the iris or fingerprints, because the retina is located at the back of the human eye and is therefore out of reach of normal human vision. This study uses segmented blood vessels from retinal fundus images as its features. The dataset is sourced from the DRIVE dataset. A preprocessing stage extracts these features, producing retinal blood vessel segmentation images. The segmented images are augmented with two-dimensional transformations such as rotation, scaling, shifting, cropping, and flipping to increase the number of retinal blood vessel segmentation samples. The transformations yield 189 images, split 80% (151 images) for training and 20% (38 images) for validation. The model is built with the convolutional neural network method; trained for 10 iterations, it achieves an accuracy of 98%. This model is then used for individual retina identification in the retinal biometric system. The work was partially funded by DP2M RistekDikti and Gunadarma University, especially the Gunadarma University Research Bureau, which provided the opportunity to conduct research in the field of biometrics
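The 80/20 split described above (189 augmented images into 151 training and 38 validation samples) can be sketched as follows; this is a minimal illustration, not the authors' code, and the file-name pattern and fixed seed are assumptions:

```python
import random

def train_val_split(items, train_ratio=0.8, seed=42):
    # Shuffle with a fixed seed for reproducibility, then cut the
    # list into training and validation subsets.
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = round(len(items) * train_ratio)
    return items[:cut], items[cut:]

# 189 augmented vessel-segmentation images, as in the study.
images = [f"vessel_{i:03d}.png" for i in range(189)]
train, val = train_val_split(images)
print(len(train), len(val))  # 151 38
```

Note that `round(189 * 0.8) = 151`, matching the counts reported in the abstract.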

    Domain Adaptation and Privileged Information for Visual Recognition

    The automatic identification of entities such as objects, people, or their actions in visual data, such as images or video, has significantly improved, and is now deployed in access control, social media, online retail, autonomous vehicles, and several other applications. This visual recognition capability leverages supervised learning techniques, which require large amounts of labeled training data from the target distribution, representative of the particular task at hand. However, collecting such training data may be expensive, too time-consuming, or even impossible. In this work, we introduce several novel approaches that compensate for the lack of target training data. Rather than leveraging prior knowledge to build task-specific models, which are typically easier to train, we focus on developing general visual recognition techniques, where prior knowledge takes the form of additional information available during training. Depending on the nature of this information, the learning problem becomes domain adaptation (DA), domain generalization (DG), learning using privileged information (LUPI), or domain adaptation with privileged information (DAPI).

    When some target data samples are available and additional information in the form of labeled data from a different source is also available, the learning problem becomes domain adaptation. Unlike previous DA work, we introduce two novel approaches for the few-shot learning scenario, which require only very few labeled target samples; even a single one can be very effective. The first method exploits a Siamese deep neural network architecture to learn an embedding in which visual categories from the source and target distributions are semantically aligned and yet maximally separated. The second approach instead extends adversarial learning to simultaneously maximize the confusion between source and target domains while achieving semantic alignment.

    In the complete absence of target data, several cheaply available source datasets related to the target distribution can be leveraged as additional information for learning a task. This is the domain generalization setting. We introduce the first deep learning approach to the DG problem, extending a Siamese network architecture to learn a representation of visual categories that is invariant with respect to the sources, while imposing semantic alignment and class separation to maximize generalization performance on unseen target domains.

    There are situations in which target training data comes equipped with additional information that can be modeled as an auxiliary view of the data but that, unfortunately, is not available during testing. This is the LUPI scenario. We introduce a novel framework based on the information bottleneck that leverages the auxiliary view to improve the performance of visual classifiers. We do so with a formulation that is general, in the sense that it can be used with any visual classifier.

    Finally, when the available target data is unlabeled and there is closely related labeled source data that is also equipped with an auxiliary view as additional information, we ask how to leverage the source data views to train visual classifiers for unseen target data. This is the DAPI scenario. We extend the information-bottleneck-based LUPI framework to learn visual classifiers in DAPI settings and show that privileged information can be leveraged to improve learning on new domains. The novel DAPI framework is likewise general and can be used with any visual classifier.

    Every use of auxiliary information has been validated extensively on publicly available benchmark datasets, and several new state-of-the-art accuracy values have been set. Example application domains include visual object recognition from RGB images and from depth data, handwritten digit recognition, and gesture recognition from video
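The Siamese embeddings described above are typically trained with a contrastive-style objective that pulls same-class pairs together and pushes different-class pairs at least a margin apart. The following is a minimal sketch of such an objective, not the authors' exact loss (the function name, margin value, and pairwise distance input are assumptions for illustration):

```python
def contrastive_loss(distance, same_class, margin=1.0):
    # Contrastive objective for a Siamese embedding:
    # - same-class pairs are penalized by their squared distance
    #   (alignment term),
    # - different-class pairs are penalized only when closer than
    #   `margin` (separation term).
    if same_class:
        return 0.5 * distance * distance
    return 0.5 * max(0.0, margin - distance) ** 2

print(contrastive_loss(0.0, True))    # 0.0  (aligned pair, no penalty)
print(contrastive_loss(0.5, False))   # 0.125 (pair too close, pushed apart)
print(contrastive_loss(1.5, False))   # 0.0  (already beyond the margin)
```

The dissertation's cross-domain variants add the constraint that the paired samples come from different distributions (source vs. target), so minimizing this kind of loss simultaneously enforces semantic alignment across domains and class separation within the embedding.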

    Advanced Biometrics with Deep Learning

    Biometrics, such as fingerprint, iris, face, hand print, hand vein, speech, and gait recognition, have become commonplace as a means of identity management in various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction, and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction, and recognition based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into 4 categories according to biometric modality: namely, face biometrics, medical electronic signals (EEG and ECG), voice print, and others