51 research outputs found

    Deep Learning-Based Iris Segmentation Algorithm for Effective Iris Recognition System

    Get PDF
    In this study, a 19-layer convolutional neural network model is developed for accurate iris segmentation and is trained and validated on five publicly available iris image datasets. An integrodifferential operator is used to create labeled images for the CASIA v1.0, CASIA v2.0, and PolyU iris image datasets. The performance of the proposed model is evaluated in terms of accuracy, sensitivity, selectivity, precision, and F-score. The accuracies obtained for CASIA v1.0, CASIA v2.0, CASIA Iris Interval, IITD, and PolyU Iris are 0.82, 0.97, 0.9923, 0.9942, and 0.98, respectively. The results show that the proposed model can accurately distinguish iris from non-iris regions and can thus serve as an effective tool for iris segmentation.
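As an illustrative sketch (not the authors' code), the five pixel-wise metrics reported above can be computed from a predicted binary iris mask and its ground-truth label; the function name and signature below are assumptions for illustration:

```python
import numpy as np

def segmentation_metrics(pred_mask, true_mask):
    """Pixel-wise metrics for a binary iris mask (1 = iris, 0 = non-iris)."""
    pred = np.asarray(pred_mask).astype(bool)
    truth = np.asarray(true_mask).astype(bool)
    tp = np.sum(pred & truth)    # iris pixels correctly labeled iris
    tn = np.sum(~pred & ~truth)  # non-iris pixels correctly rejected
    fp = np.sum(pred & ~truth)   # non-iris pixels labeled iris
    fn = np.sum(~pred & truth)   # iris pixels missed
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)       # recall on iris pixels
    selectivity = tn / (tn + fp)       # true-negative rate
    precision = tp / (tp + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, selectivity, precision, f_score
```

The same counts underlie all five figures, so a single confusion-matrix pass per image suffices.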

    Achieving Information Security by multi-Modal Iris-Retina Biometric Approach Using Improved Mask R-CNN

    Get PDF
    The need for reliable user recognition (identification/authentication) techniques has grown in response to heightened security concerns and accelerated advances in networking, communication, and mobility. Biometrics, defined as the science of recognizing an individual based on his or her physical or behavioral characteristics, is gaining recognition as a method for determining an individual's identity. Various commercial, civilian, and forensic applications now use biometric systems to establish identity. The purpose of this paper is to design an efficient multimodal biometric system based on iris and retinal features to ensure accurate human recognition and to improve recognition accuracy using deep learning techniques. Deep learning models were tested using retinographies and iris images of the same person acquired from the MESSIDOR and CASIA-IrisV1 databases. The iris region was segmented from the image using a custom Mask R-CNN method, and the unique blood vessels were segmented from retinal images of the same person using principal curvature. Significant information is then optimally extracted from the segmented iris and retina images to aid precise recognition. The suggested model attained 98% accuracy, 98.1% recall, and 98.1% precision. It has been found that applying a custom Mask R-CNN approach to iris-retina images improves efficiency and accuracy in person recognition.
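The abstract does not state how the iris and retina modalities are combined, so the following is only a minimal weighted-sum score-fusion sketch; the function names, weights, and threshold are all hypothetical:

```python
def fuse_scores(iris_score, retina_score, w_iris=0.5):
    """Weighted-sum score-level fusion; both scores assumed normalized to [0, 1]."""
    return w_iris * iris_score + (1.0 - w_iris) * retina_score

def accept(iris_score, retina_score, threshold=0.8, w_iris=0.5):
    """Decision rule: accept the identity claim if the fused score passes the threshold."""
    return fuse_scores(iris_score, retina_score, w_iris) >= threshold
```

Score-level fusion like this is one common choice for multimodal systems; feature-level or decision-level fusion are equally plausible readings of the paper.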

    Feature extraction using two dimensional (2D) legendre wavelet filter for partial iris recognition

    Get PDF
    The need for biometric recognition systems has grown substantially to address the issues of recognition and identification, especially in highly dense areas such as airports and train stations, and in financial transactions. Evidence of this can be seen in some airports and in the adoption of these technologies in mobile phones. The most popular biometric technologies include facial, fingerprint, and iris recognition. Iris recognition is considered by many researchers to be the most accurate and reliable form of biometric recognition, because the iris can neither be surgically altered without risking loss of sight nor does it change with aging. However, most iris recognition systems currently available can only recognize frontal-looking, high-quality iris images; angular and partially captured images cannot be authenticated with existing methods. This research investigates the possibility of developing a technique for recognizing partially captured iris images. The technique processes the iris image at 50%, 25%, 16.5%, and 12.5% visibility to find a threshold for the minimum amount of iris region required to authenticate an individual. The research also developed and implemented a two-dimensional (2D) Legendre wavelet filter to enhance iris feature extraction. Selected iris images from the CASIA, UBIRIS, and MMU databases were used to test the accuracy of the introduced technique, which achieved recognition accuracies ranging from roughly 75% to 94%: 92.25% on CASIA-Interval, 86.25% on CASIA-Distance, 74.95% on UBIRIS, and 94.45% on MMU.
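For illustration, a 2D Legendre filter can be sketched as the outer product of a sampled 1D Legendre polynomial; the paper's exact wavelet construction may differ, and the function names and normalization below are assumptions:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_kernel_2d(degree, size):
    """2D filter built as the outer product of a sampled Legendre polynomial."""
    x = np.linspace(-1.0, 1.0, size)
    coeffs = np.zeros(degree + 1)
    coeffs[degree] = 1.0                  # select P_degree only
    p = legendre.legval(x, coeffs)        # sample P_degree on [-1, 1]
    kernel = np.outer(p, p)
    return kernel / np.abs(kernel).sum()  # normalize filter energy

def filter_image(img, kernel):
    """Plain 'valid'-mode 2D sliding-window filtering with the kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out
```

A true wavelet transform would add dilation and translation of the mother function; the separable outer-product kernel is the simplest 2D extension.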

    QUIS-CAMPI: Biometric Recognition in Surveillance Scenarios

    Get PDF
    Concerns about individuals' security have justified the increasing number of surveillance cameras deployed in both private and public spaces. However, contrary to popular belief, these devices are in most cases used solely for recording, instead of feeding intelligent analysis processes capable of extracting information about the observed individuals. Thus, even though video surveillance has already proved essential for solving multiple crimes, obtaining relevant details about the subjects that took part in a crime depends on the manual inspection of recordings. As such, the current goal of the research community is the development of automated surveillance systems capable of monitoring and identifying subjects in surveillance scenarios. Accordingly, the main goal of this thesis is to improve the performance of biometric recognition algorithms on data acquired from surveillance scenarios. In particular, we aim at designing a visual surveillance system capable of acquiring biometric data at a distance (e.g., face, iris, or gait) without requiring human intervention in the process, as well as devising biometric recognition methods robust to the degradation factors resulting from the unconstrained acquisition process. Regarding the first goal, the analysis of the data acquired by typical surveillance systems shows that large acquisition distances significantly decrease the resolution of biometric samples, and thus their discriminability is not sufficient for recognition purposes. In the literature, diverse works point to Pan Tilt Zoom (PTZ) cameras as the most practical way of acquiring high-resolution imagery at a distance, particularly when using a master-slave configuration. In the master-slave configuration, the video acquired by a typical surveillance camera is analyzed to obtain regions of interest (e.g., car, person), and these regions are subsequently imaged at high resolution by the PTZ camera.
Several methods have already shown that this configuration can be used for acquiring biometric data at a distance. Nevertheless, these methods failed to provide effective solutions to the typical challenges of this strategy, restraining its use in surveillance scenarios. Accordingly, this thesis proposes two methods to support the development of a biometric data acquisition system based on the cooperation of a PTZ camera with a typical surveillance camera. The first proposal is a camera calibration method capable of accurately mapping the coordinates of the master camera to the pan/tilt angles of the PTZ camera. The second proposal is a camera scheduling method for determining, in real time, the sequence of acquisitions that maximizes the number of different targets obtained while minimizing the cumulative transition time. In order to achieve the first goal of this thesis, both methods were combined with state-of-the-art approaches from the human monitoring field to develop a fully automated surveillance system capable of acquiring biometric data at a distance and without human cooperation, designated the QUIS-CAMPI system. The QUIS-CAMPI system is the basis for pursuing the second goal of this thesis. The analysis of the performance of state-of-the-art biometric recognition approaches shows that these approaches attain almost ideal recognition rates on unconstrained data. However, this performance is incongruous with the recognition rates observed in surveillance scenarios. Taking into account the drawbacks of current biometric datasets, this thesis introduces a novel dataset comprising biometric samples (face images and gait videos) acquired by the QUIS-CAMPI system at distances ranging from 5 to 40 meters and without human intervention in the acquisition process. This set allows an objective assessment of the performance of state-of-the-art biometric recognition methods on data that truly encompass the covariates of surveillance scenarios.
As such, this set was exploited to promote the first international challenge on biometric recognition in the wild. This thesis describes the evaluation protocols adopted, along with the results obtained by the nine methods specially designed for this competition. In addition, the data acquired by the QUIS-CAMPI system were crucial for accomplishing the second goal of this thesis, i.e., the development of methods robust to the covariates of surveillance scenarios. The first proposal is a method for detecting corrupted features in biometric signatures, inferred by a redundancy analysis algorithm. The second proposal is a caricature-based face recognition approach capable of enhancing recognition performance by automatically generating a caricature from a 2D photo. The experimental evaluation of these methods shows that both approaches contribute to improving recognition performance in unconstrained data.
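The camera scheduling idea above, visiting as many distinct targets as possible while keeping cumulative transition time low, can be illustrated with a simple greedy nearest-target heuristic. This is a sketch, not the thesis's real-time method, and treating transition time as proportional to pan-angle distance is an assumption:

```python
def greedy_schedule(start_angle, targets):
    """Greedy PTZ schedule: repeatedly image the nearest unvisited target.

    `targets` maps a target id to a pan angle in degrees; transition time is
    assumed proportional to angular distance (an illustrative simplification).
    Returns the visiting order and the cumulative angular motion.
    """
    remaining = dict(targets)
    order = []
    total_motion = 0.0
    pos = start_angle
    while remaining:
        nearest = min(remaining, key=lambda t: abs(remaining[t] - pos))
        total_motion += abs(remaining[nearest] - pos)
        pos = remaining.pop(nearest)
        order.append(nearest)
    return order, total_motion
```

A real scheduler must also handle targets leaving the scene and new arrivals mid-schedule, which is what makes the real-time variant described in the thesis nontrivial.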

    Building Trustworthy AI for Biometrics

    Get PDF
    In the recent past, face recognition and Eye Authentication (EA) have been widely used for biometric authentication, especially in mission-critical applications like surveillance, security, and border patrol. Since the introduction of Deep Convolutional Neural Networks (DCNNs), the accuracy of face recognition and eye authentication algorithms has significantly increased. The improvement in this technology has led to its usage in a larger number of applications. However, these networks have demonstrated several issues related to bias in terms of sensitive attributes (such as gender and skin tone) and are also susceptible to privacy leakage and spoofing attacks from malicious agents. Therefore, in this dissertation, we investigate the trustworthiness of DCNN-based models used in biometric authentication and propose techniques to improve it. In the context of face-based authentication, (i) we present an approach for evaluating the reliability of deep features in performing face verification. We term this reliability measure 'iconicity'. (ii) We study the implicit encoding of sensitive attribute information in face recognition features extracted from different layers of a previously trained network. (iii) We present an adversarial approach to reduce the implicit encoding of sensitive attributes in features extracted from a pre-trained network. This helps us reduce the gender and skin-tone bias demonstrated by such features. (iv) We also propose a non-adversarial, distillation-based approach to mitigate bias while maintaining reasonable face verification accuracy. For eye authentication, (v) we present a distillation-based approach to make eye authentication networks resilient to presentation (spoofing) attacks. Finally, since two of our proposed methods use vanilla knowledge distillation, (vi) we present an attention-based mechanism to improve the knowledge transfer in a typical distillation step.
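The vanilla knowledge distillation mentioned in (vi) softens teacher and student logits with a temperature T and penalizes their divergence. A minimal sketch follows; the NumPy formulation and function names are illustrative, not the dissertation's code:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a 1D logit vector."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Vanilla distillation term: T^2 * KL(teacher_T || student_T)."""
    p = softmax(teacher_logits, T)    # softened teacher distribution
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

The T^2 factor keeps the gradient magnitude comparable across temperatures; in training this term is typically mixed with the ordinary cross-entropy on hard labels.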

    Biometric Systems

    Get PDF
    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    A Novel Convolutional Neural Network Pore-Based Fingerprint Recognition System

    Get PDF
    Biometrics play an important role in security measures, such as border control and online transactions, relying on traits like uniqueness and permanence. Among the different biometrics, fingerprints stand out for their enduring nature and individual uniqueness. Fingerprint recognition systems traditionally rely on ridge patterns (Level 1) and minutiae (Level 2). However, these systems suffer from reduced recognition accuracy with partial fingerprints. Level 3 features, such as pores, offer distinctive attributes crucial for individual identification, particularly with high-resolution acquisition devices. Moreover, the use of convolutional neural networks (CNNs) has significantly improved the accuracy of automatic feature extraction for biometric recognition. A CNN-based pore fingerprint recognition system consists of two main modules: a pore detection module and a pore feature extraction and matching module. The first module generates pixel intensity maps to determine the pore centroids, while the second module extracts relevant pore features to generate pore representations for matching between query and template fingerprints. However, existing CNN architectures fall short both in generating deep-level discriminative features and in computational efficiency. Moreover, available knowledge about pores has not been exploited optimally for determining pore centroids, and metrics other than the Euclidean distance have not been explored for pore matching. The objective of this research is to develop a CNN-based pore fingerprint recognition scheme capable of providing low-complexity, high-accuracy performance. The design of the CNN architectures of the two modules aims at generating features at different hierarchical levels in residual frameworks and fusing them to produce comprehensive sets of discriminative features. Depthwise and depthwise separable convolution operations are judiciously used to keep the complexity of the networks low.
In the proposed pore centroid part, knowledge of the variation of pore characteristics is used. In the proposed pore matching scheme, a composite metric, encompassing the Euclidean distance, angle, and magnitude difference between the vectors of pore representations, is proposed to measure the similarity between the pores in the query and template images. Extensive experiments are performed on fingerprint images from the benchmark PolyU High-Resolution-Fingerprint dataset to demonstrate the effectiveness of the various strategies developed and used in the proposed scheme for fingerprint recognition.
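The composite pore-matching metric described above, combining Euclidean distance, angle, and magnitude difference between pore representation vectors, might be sketched as follows; the weights and function name are assumptions, and the paper's exact combination rule may differ:

```python
import numpy as np

def composite_distance(u, v, weights=(1.0, 1.0, 1.0)):
    """Composite dissimilarity between two pore representation vectors:
    a weighted sum of Euclidean distance, angular difference (radians), and
    magnitude difference. The weights here are illustrative assumptions."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    euclid = np.linalg.norm(u - v)
    cosine = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    angle = np.arccos(np.clip(cosine, -1.0, 1.0))  # clip guards rounding error
    magnitude = abs(np.linalg.norm(u) - np.linalg.norm(v))
    w_e, w_a, w_m = weights
    return float(w_e * euclid + w_a * angle + w_m * magnitude)
```

The angle term captures differences in feature direction that the Euclidean distance alone can under-weight when vector magnitudes dominate.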

    LIPIcs, Volume 277, GIScience 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 277, GIScience 2023, Complete Volume