20 research outputs found

    Fusion of face and iris biometrics in security verification systems.

    Master of Science in Computer Science. University of KwaZulu-Natal, Durban, 2016. Abstract available in PDF file.

    Multimodal Biometrics for Person Authentication

    Unimodal biometric systems have limited effectiveness in identifying people, mainly due to their susceptibility to changes in individual biometric features and to presentation attacks. Identification using multimodal biometric systems attracts the attention of researchers because of advantages such as greater recognition efficiency and greater security compared to unimodal systems. To break into a multimodal biometric system, an intruder would have to defeat more than one unimodal subsystem. In multimodal biometric systems: the availability of several features makes the system more reliable; security is increased and the confidentiality of user data is ensured; the decisions taken under the individual modalities are merged; if one of the modalities is eliminated, the system can still ensure security using the remaining ones; and information on the "liveness" of the presented sample is provided. In a multimodal system, a fusion of the feature vectors and/or decisions produced by each subsystem is carried out, and the final identification decision is then made on the basis of the feature vector thus obtained. In this chapter, we consider a multimodal biometric system that uses three modalities: dorsal vein, palm print, and periocular.
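The score-level fusion described above can be sketched as a weighted sum of normalized per-modality match scores followed by a threshold decision. This is a minimal illustration, not the chapter's implementation; the weights, threshold, and scores below are hypothetical.

```python
# Minimal sketch of weighted score-level fusion for a three-modality
# biometric system (dorsal vein, palm print, periocular).
# Scores are assumed min-max normalized to [0, 1]; the weights and
# threshold are illustrative, not taken from the chapter.

def normalize(score, lo, hi):
    """Min-max normalize a raw matcher score into [0, 1]."""
    return (score - lo) / (hi - lo)

def fuse_scores(scores, weights):
    """Weighted sum of normalized per-modality match scores."""
    assert len(scores) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(w * s for w, s in zip(weights, scores))

def decide(fused_score, threshold=0.5):
    """Final accept/reject decision on the fused score."""
    return "accept" if fused_score >= threshold else "reject"

# Example: dorsal vein, palm print, and periocular scores for one probe.
scores = [0.9, 0.7, 0.4]      # normalized match scores
weights = [0.5, 0.3, 0.2]     # hypothetical modality weights
fused = fuse_scores(scores, weights)
print(round(fused, 2), decide(fused))   # -> 0.74 accept
```

Because the fused decision draws on all three modalities, a weak score in one channel (here, periocular at 0.4) does not by itself cause a rejection.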

    A framework for biometric recognition using non-ideal iris and face

    Off-angle iris images are often captured in non-cooperative environments. Distortion of the iris or pupil can degrade the segmentation quality as well as the data extracted thereafter. Moreover, an iris captured at an off-angle of more than 30° can have non-recoverable features, since the boundary cannot be properly localized; this limits the discriminant ability of the biometric features. Limitations also arise from noisy data caused by image burst, background error, or camera pixel noise. To address these issues, the aim of this study is to develop a framework that: (1) improves non-circular boundary localization, (2) overcomes the lost features, and (3) detects and minimizes the error caused by noisy data. The non-circular boundary issue is addressed through a combination of geometric calibration and direct least-squares ellipse fitting, which geometrically restores, adjusts, and scales the distorted circular shape to an ellipse. Further improvement comes from an extraction method that combines a Haar wavelet with a neural network to transform the iris features into wavelet coefficients representative of the relevant iris data. The non-recoverable features problem is resolved by a proposed weighted score-level fusion that integrates face and iris biometrics; this enhancement provides extra distinctive information to increase the authentication accuracy rate. For the noisy-data issue, a modified Reed-Solomon code with error-correction capability is proposed to decrease intra-class variation by eliminating the differences between enrollment and verification templates. The key contribution of this research is a new unified framework for a high-performance multimodal biometric recognition system. The framework has been tested on the WVU, UBIRIS v.2, UTMIFM, and ORL datasets, and achieved more than 99.8% accuracy, exceeding existing methods.
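The least-squares ellipse-fitting step can be illustrated with a generic algebraic conic fit: stack each boundary point into a design matrix and take the smallest right singular vector as the conic coefficients. This is a simplified sketch, not the authors' implementation, and unlike the full direct least-squares ellipse method it does not enforce the ellipse-specific constraint.

```python
import numpy as np

def fit_conic(x, y):
    """Fit a general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to 2-D boundary points by least squares: the coefficient vector is
    the right singular vector of the design matrix with the smallest
    singular value. Simplified sketch; the constraint that forces the
    fitted conic to be an ellipse is omitted here."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]          # coefficients [a, b, c, d, e, f], unit norm

# Synthetic off-angle iris boundary: an ellipse with semi-axes 5 and 3.
t = np.linspace(0, 2 * np.pi, 100)
x, y = 5 * np.cos(t), 3 * np.sin(t)
coef = fit_conic(x, y)

# The algebraic residual on the boundary points should be near zero.
D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
print(np.max(np.abs(D @ coef)))   # near machine precision
```

In practice this algebraic fit would be preceded by the geometric calibration step the abstract describes, so that the recovered ellipse parameters can be used to restore the off-angle boundary.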

    Improved methods for finger vein identification using composite median-wiener filter and hierarchical centroid features extraction

    Finger vein identification is a promising new area in biometric systems. Finger vein patterns contain highly discriminative characteristics that are difficult to forge, because they reside underneath the skin of the finger and require a specific device to capture. Research has been carried out in this field, but there is still an unresolved issue of low-quality data arising during capture and processing. Low-quality data causes errors in the feature extraction process and reduces the identification performance rate. To address this issue, new image enhancement and feature extraction methods were developed to improve finger vein identification. The image enhancement method, a Composite Median-Wiener (CMW) filter, improves image quality while preserving the edges of the finger vein image. For feature extraction, the Hierarchical Centroid Feature Method (HCM) was fused with a statistical pixel-based distribution feature method at the feature level to improve identification performance. These methods were evaluated on the public SDUMLA-HMT and FV-USM finger vein databases, each divided into training and testing sets, and the results of the experiments were averaged to ensure the accuracy of the measurements. A k-nearest neighbor classifier with city block distance was implemented to match the features. The methods achieved an identification accuracy of 97.64% and an equal error rate (EER) of 1.11% for verification. These results exceed the accuracy reported in the literature, showing that the CMW filter and HCM significantly improve finger vein identification.
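The matching step can be sketched with a tiny k-nearest-neighbor classifier using the city block (Manhattan, L1) distance. The feature vectors below are hypothetical stand-ins for the fused HCM/statistical features, not data from the cited databases.

```python
from collections import Counter

def cityblock(u, v):
    """City block (Manhattan, L1) distance between two feature vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

def knn_predict(train, labels, probe, k=3):
    """Classify a probe vector by majority vote among its k nearest
    training vectors under the city block distance."""
    ranked = sorted(range(len(train)),
                    key=lambda i: cityblock(train[i], probe))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical fused feature vectors for two enrolled subjects.
train = [(0.1, 0.2, 0.1), (0.2, 0.1, 0.2),
         (0.9, 0.8, 0.9), (0.8, 0.9, 0.8)]
labels = ["subject_A", "subject_A", "subject_B", "subject_B"]
probe = (0.85, 0.85, 0.85)
print(knn_predict(train, labels, probe, k=3))   # -> subject_B
```

City block distance sums per-dimension differences, which makes it cheap to compute and less sensitive to a single aberrant feature dimension than squared Euclidean distance.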

    Deep Learning Architectures for Heterogeneous Face Recognition

    Face recognition has been one of the most challenging areas of research in biometrics and computer vision. Many face recognition algorithms are designed to address illumination and pose problems for visible face images. In recent years, there has been a significant amount of research in Heterogeneous Face Recognition (HFR). The large modality gap between faces captured in different spectra, as well as the lack of training data, makes HFR quite a challenging problem. In this work, we present different deep learning frameworks to address the problem of matching non-visible face photos against a gallery of visible faces. Algorithms for thermal-to-visible face recognition can be categorized as cross-spectrum feature-based methods or cross-spectrum image synthesis methods. In cross-spectrum feature-based face recognition, a thermal probe is matched against a gallery of visible faces in a feature subspace, corresponding to the real-world scenario. The second category synthesizes a visible-like image from a thermal image, which can then be used by any commercial visible-spectrum face recognition system. These methods are also beneficial in that the synthesized visible face image can be directly utilized by existing face recognition systems which operate only on visible face imagery. Using this approach, one can therefore leverage existing commercial-off-the-shelf (COTS) and government-off-the-shelf (GOTS) solutions. In addition, the synthesized images can be used by human examiners for different purposes. There are some informative traits, such as age, gender, ethnicity, race, and hair color, which are not distinctive enough for recognition on their own but can still act as complementary information to primary traits such as face and fingerprint. These traits, known as soft biometrics, can improve recognition algorithms while being much cheaper and faster to acquire.
    They can be directly used in a unimodal system for some applications. Usually, soft biometric traits have been utilized jointly with hard biometrics (the face photo) for different tasks, in the sense that they are assumed to be available during both the training and testing phases. In our approaches we look at this problem in a different way. We consider the case where soft biometric information does not exist during the testing phase, and our method predicts it directly in a multi-tasking paradigm. There are situations in which training data comes equipped with additional information that can be modeled as an auxiliary view of the data but is unfortunately not available during testing. This is the Learning Using Privileged Information (LUPI) scenario. We introduce a novel framework based on deep learning techniques that leverages the auxiliary view to improve the performance of the recognition system. We do so by introducing a formulation that is general, in the sense that it can be used with any visual classifier. Every use of auxiliary information has been validated extensively using publicly available benchmark datasets, and several new state-of-the-art accuracy values have been set. Examples of application domains include visual object recognition from RGB images and from depth data, handwritten digit recognition, and gesture recognition from video. We also design a novel aggregation framework which optimizes the landmark locations directly using only one image, without requiring any extra prior, leading to robust alignment under arbitrary face deformations. Three different approaches are employed to generate the manipulated faces, two of which perform the manipulation via adversarial attacks to fool a face recognizer. This step can be decoupled from our framework and potentially used to enhance other landmark detectors. Aggregation of the manipulated faces in the different branches of the proposed method leads to robust landmark detection.
    Finally, we focus on generative adversarial networks, a very powerful tool for synthesizing visible-like images from non-visible images. The main goal of a generative model is to approximate the true data distribution, which is not known. In general, the choice of how to model the density function is challenging. Explicit models have the advantage of explicitly calculating probability densities. Two well-known approaches, the Generative Adversarial Network (GAN) and the Variational AutoEncoder (VAE), instead model the data distribution implicitly. VAEs try to maximize a lower bound on the data likelihood, while a GAN plays a minimax game between two players during its optimization. GANs overlook explicit data density characteristics, which leads to undesirable quantitative evaluations and to mode collapse, causing the generator to create similar-looking images with poor sample diversity. In the last chapter of the thesis, we focus on addressing this issue within the GAN framework.
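The minimax game mentioned above can be made concrete on discrete distributions: for a fixed generator density p_g, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)), and when p_g = p_data the value of the game is -log 4. A small numerical check, using toy distributions chosen purely for illustration:

```python
import math

def gan_value(p_data, p_g):
    """Value of the GAN objective
    V = E_{x~p_data}[log D*(x)] + E_{x~p_g}[log(1 - D*(x))]
    at the optimal discriminator D*(x) = p_data(x) / (p_data(x) + p_g(x)),
    for discrete distributions given as dicts over the same support."""
    v = 0.0
    for x in p_data:
        d_star = p_data[x] / (p_data[x] + p_g[x])
        v += p_data[x] * math.log(d_star)        # "real" term
        v += p_g[x] * math.log(1.0 - d_star)     # "fake" term
    return v

# Toy distributions over three outcomes.
p_data = {"a": 0.5, "b": 0.3, "c": 0.2}

# When the generator matches the data distribution, D* = 1/2 everywhere
# and the value equals -log 4.
matched = gan_value(p_data, dict(p_data))
print(round(matched, 6), round(-math.log(4), 6))   # both -1.386294

# A mismatched generator yields a strictly larger value (worse for G).
p_g = {"a": 0.2, "b": 0.3, "c": 0.5}
print(gan_value(p_data, p_g) > -math.log(4))       # -> True
```

The gap between the achieved value and -log 4 is twice the Jensen-Shannon divergence between p_data and p_g, which is why the generator's optimum is exactly the data distribution.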

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Biometrics

    Biometrics - Unique and Diverse Applications in Nature, Science, and Technology provides a unique sampling of the diverse ways in which biometrics is integrated into our lives and our technology. From time immemorial, we as humans have been intrigued by, perplexed by, and entertained by observing and analyzing ourselves and the natural world around us. Science and technology have evolved to a point where we can empirically record a measure of a biological or behavioral feature and use it for recognizing patterns, trends, and/or discrete phenomena, such as individuals, and this is what biometrics is all about. Understanding some of the ways in which we use biometrics, and for what specific purposes, is what this book is all about.