52 research outputs found

    Development of Local Feature Extraction and Reduction Schemes for Iris Biometrics

    The iris is one of the most reliable biometric traits for human recognition, owing to its stability and randomness. Recognition typically amounts to matching features extracted from the iris region. A feature extraction method can be categorized as local or global, depending on how the features are computed from an image. Global features fail to represent the details of an image because the computation considers the image as a whole. Local features, by contrast, are more precise and capable of representing image detail, as they are computed from specific regions of the image. Conventional local approaches treat corners as keypoints, which may not always be suitable for iris images. Salient regions are visually pre-attentive, distinct portions of an image and are appropriate candidates for interest points. This thesis presents a salient keypoint detector called Salient Point of Interest using Entropy (SPIE). The entropy of local segments is used as the measure of saliency; to compute it, an entropy map is generated. Scale invariance is achieved by constructing a scale-space for the input image. Local feature extraction methods generally suffer from high dimensionality, which makes them computationally expensive and unsuitable for real-time applications. Reduction techniques can be applied to decrease the feature size and increase computational speed. In this thesis, feature reduction is achieved by decreasing the number of keypoints using density-based clustering: closely placed keypoints are grouped together, and each cluster is represented by a single keypoint with its scale and location, for which an algorithm is presented.
The proposed schemes are validated on publicly available databases, where they outperform existing state-of-the-art methods.
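A minimal sketch of the two ideas above, assuming a grey-scale input in [0, 255] and using a simple greedy grouping as a stand-in for a full density-based (DBSCAN-style) clustering; the window size, `eps` radius, and centroid representative are illustrative choices, not the thesis's exact algorithm:

```python
import numpy as np

def local_entropy_map(img, size=9):
    """Entropy map: Shannon entropy of the grey levels inside a sliding
    window centred at each pixel (borders use the clipped window)."""
    h, w = img.shape
    r = size // 2
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            win = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            counts = np.bincount(win.astype(np.uint8).ravel(), minlength=256)
            p = counts[counts > 0] / counts.sum()
            out[y, x] = -np.sum(p * np.log2(p))
    return out

def reduce_keypoints(kps, eps=5.0):
    """Greedy distance-based grouping (a simple stand-in for DBSCAN):
    keypoints (x, y, scale) within `eps` pixels of a cluster's seed are
    merged, and each cluster is represented by its centroid."""
    clusters = []
    for kp in map(np.asarray, kps):
        for c in clusters:
            if np.linalg.norm(kp[:2] - c[0][:2]) <= eps:
                c.append(kp)
                break
        else:
            clusters.append([kp])
    return np.array([np.mean(c, axis=0) for c in clusters])
```

High-entropy pixels in the map are the salient interest-point candidates; the reduction step then collapses tight groups of them into one representative keypoint.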

    Image-based Authentication

    Mobile and wearable devices are popular platforms for accessing online services. However, the small form factor of such devices makes a secure yet practical user authentication experience challenging. Further, online fraud, including phishing attacks, has revealed the importance of the converse problem: providing usable solutions for authenticating remote services to online users. In this thesis, we introduce image-based solutions for mutual authentication between a user and a remote service provider. First, we propose and develop Pixie, a two-factor, object-based authentication solution for camera-equipped mobile and wearable devices. We further design ai.lock, a system that reliably extracts biometric-like authentication credentials from images. Second, we introduce CEAL, a system that generates visual key fingerprint representations of arbitrary binary strings, to be used for visually authenticating online entities and their cryptographic keys. CEAL leverages deep learning to capture the target style and domain of training images in a generator model learned from a large collection of sample images rather than hand-curated as a collection of rules, and hence is uniquely easy to customize. CEAL integrates a model of the discriminative ability of human visual perception, so the resulting fingerprint image generator avoids mapping distinct keys to images that humans cannot distinguish. Further, CEAL deterministically generates visually pleasing fingerprint images from an input vector whose components represent visual properties that are either readily perceptible to the human eye or imperceptible yet necessary for accurately modeling the target image domain. We show that image-based authentication using Pixie is usable and fast, while ai.lock extracts authentication credentials that exceed the entropy of biometrics.
Further, we show that CEAL outperforms state-of-the-art solutions in terms of efficiency, usability, and resilience to powerful adversarial attacks.
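To make the "deterministic input vector from a binary string" step concrete, here is a hedged sketch of one way such a vector could be derived; the hash-counter expansion below is our own illustrative assumption, not CEAL's published construction:

```python
import hashlib

def key_to_visual_vector(key: bytes, dim: int = 16):
    """Deterministically expand a binary key into `dim` components in
    [0, 1). In a CEAL-style pipeline, each component would be designated
    to drive one visual property of the generated fingerprint image
    (hypothetical mapping; the real system's encoding may differ)."""
    out = []
    counter = 0
    while len(out) < dim:
        # SHA-256 in counter mode gives as many pseudorandom bytes as needed.
        digest = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        out.extend(b / 256.0 for b in digest)
        counter += 1
    return out[:dim]
```

Determinism matters here: the same key must always render to the same fingerprint image, while distinct keys should map to perceptibly distinct vectors.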

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of development well. They include biometric sample quality, privacy-preserving and cancelable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges of implementing the technology in portable devices. The book consists of 15 chapters, divided into four sections: biometric applications on mobile platforms, cancelable biometrics, biometric encryption, and other applications. The book was reviewed by editors Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors: Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, as well as a number of anonymous reviewers.

    Contributions on 3D Biometric Face Recognition for point clouds in low-resolution devices

    Dissertation (master's), Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2020. Supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). Recently, many automation processes have come to rely on computer vision, exploiting digital information in the form of images or data to assist decision-making. 3D data recognition is a trending topic in computer vision and graphics, and many methods have been proposed for 3D applications in pursuit of better accuracy and robustness. The main goal of this work is to contribute face recognition methods for low-resolution point cloud devices. A face recognition process was carried out on a database of 31 subjects, with three color (RGB) images and three depth images per subject. The color images are used for face detection with a Haar cascade algorithm, which allows the facial points to be extracted from the depth image and a 3D face point cloud to be generated. From the point cloud, the normal intensity and the curvature index intensity of each point are extracted, producing a two-dimensional image, called a curvature map, from which histograms are obtained to perform the face recognition task. Along with the curvature maps, a novel matching method is proposed by adapting the classic Bozorth algorithm, forming a net-based 3D representation of facial landmarks in a low-resolution point cloud that serves as a descriptor of the cloud's keypoints and yields a unique representation of each individual. The validation is performed and compared against a baseline technique for 3D face recognition.
The work provides multiple testing scenarios (frontal faces, accuracy, scale, and orientation) for both methods, achieving an accuracy of 98.92% in the best case for the curvature maps and 100% in the best case for the adapted classic Bozorth algorithm.
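The per-point curvature intensity can be approximated by local PCA over each point's neighbourhood. The sketch below is a generic stand-in: the k-nearest-neighbour choice and the smallest-eigenvalue ratio are our assumptions, not necessarily the dissertation's exact definition:

```python
import numpy as np

def curvature_index(points, k=10):
    """Per-point curvature estimate for a 3-D point cloud via local PCA:
    the smallest eigenvalue of the k-neighbourhood covariance divided by
    the eigenvalue sum (zero on a plane, positive on curved surfaces)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    # Brute-force pairwise distances; fine for small clouds, use a
    # KD-tree for large ones.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    curv = np.empty(n)
    for i in range(n):
        nbrs = points[np.argsort(dists[i])[:k]]
        eig = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))
        s = eig.sum()
        curv[i] = eig[0] / s if s > 0 else 0.0
    return curv
```

Projecting these per-point curvature values onto a 2D grid would yield the curvature map from which the recognition histograms are computed.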

    The image ray transform

    No full text
    Image feature extraction is a fundamental area of image processing and computer vision. There are many ways to create feature extraction techniques, and particularly novel ones can be developed by drawing influence from the physical world. This thesis presents the Image Ray Transform (IRT), a technique based upon an analogy to light: it uses the mechanisms that define how light travels through different media, together with an analogy to optical fibres, to extract structural features within an image. By treating the image as a transparent medium, we can use refraction and reflection to cast many rays inside the image and guide them towards features, transforming the image to emphasise tubular and circular structures. The power of the transform for structural feature detection is shown empirically in a number of applications, especially through its ability to highlight curvilinear structures. The IRT is used as a preprocessor to enhance the accuracy of circle detection, highlighting circles to a greater extent than conventional edge detection methods. The transform is also shown to be well suited to enrolment for ear biometrics, providing high detection and recognition rates with PCA, comparable to manual enrolment. Vascular features such as those found in medical images are also emphasised by the transform, and the IRT is used to detect the vasculature in retinal fundus images. Extensions to the basic image ray transform allow higher-level features to be detected. A method is shown for expressing rays in an invariant form to describe the structures of an object, and hence the object itself, with a bag-of-visual-words model. These ray features provide a description of objects complementary to other patch-based descriptors and have been tested on a number of object categorisation databases.
Finally, a different analysis of rays is provided that can produce information on both bilateral (reflectional) and rotational symmetry within the image, allowing a deeper understanding of image structure. The IRT is a flexible technique, capable of detecting a range of high- and low-level image features, and open to further use and extension across a range of applications.
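The core mechanism can be sketched as follows: intensity defines a refractive index, rays are launched at random positions and angles, bent by Snell's law (with the local intensity gradient standing in for the interface normal) or totally internally reflected, and an accumulator records where rays travel. Parameters such as `n_max` and the gradient-based normal are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def image_ray_transform(img, n_rays=1000, n_max=40.0, max_steps=256, rng=None):
    """Cast rays through the image, treating intensity as a refractive
    index; rays bend (Snell's law) or totally internally reflect where
    the index changes, and an accumulator counts ray visits per pixel."""
    rng = np.random.default_rng(rng)
    h, w = img.shape
    n = 1.0 + (n_max - 1.0) * img.astype(float) / max(float(img.max()), 1.0)
    gy, gx = np.gradient(img.astype(float))  # gradient as interface normal
    acc = np.zeros((h, w))
    for _ in range(n_rays):
        pos = rng.uniform([0, 0], [h - 1, w - 1])
        ang = rng.uniform(0.0, 2.0 * np.pi)
        d = np.array([np.sin(ang), np.cos(ang)])  # unit direction
        for _ in range(max_steps):
            if not (0 <= pos[0] < h and 0 <= pos[1] < w):
                break  # ray has left the image
            y, x = int(pos[0]), int(pos[1])
            acc[y, x] += 1
            nxt = pos + d
            yn, xn = int(nxt[0]), int(nxt[1])
            if 0 <= yn < h and 0 <= xn < w and n[y, x] != n[yn, xn]:
                norm = np.array([gy[y, x], gx[y, x]])
                mag = np.linalg.norm(norm)
                if mag > 1e-9:
                    norm /= mag
                    if np.dot(norm, d) > 0:  # make the normal face the ray
                        norm = -norm
                    cos_i = -np.dot(norm, d)
                    r = n[y, x] / n[yn, xn]
                    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
                    if k < 0:  # total internal reflection
                        d = d - 2.0 * np.dot(d, norm) * norm
                    else:      # refraction (vector form of Snell's law)
                        d = r * d + (r * cos_i - np.sqrt(k)) * norm
                    nxt = pos + d
            pos = nxt
    return acc
```

Because bright (high-index) regions trap rays much as an optical fibre core does, the accumulator ends up emphasising tubular and circular structures.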

    Feature Fusion for Fingerprint Liveness Detection

    For decades, fingerprints have been the most widely used biometric trait in identity recognition systems, thanks to their natural uniqueness, even in rare cases such as identical twins. Recently, we have witnessed a growth in the use of fingerprint-based recognition systems in a large variety of devices and applications. This, as a consequence, has increased the benefits for offenders capable of attacking these systems. One of the main issues with current fingerprint authentication systems is that, even though they are quite accurate in terms of identity verification, they can easily be spoofed by presenting to the input sensor an artificial replica of the fingertip skin's ridge-valley pattern. Given the criticality of this threat, it is crucial to develop countermeasures capable of facing and preventing these kinds of attacks. The most effective counter-spoofing methods are those that try to distinguish between a "live" and a "fake" fingerprint before it is actually submitted to the recognition system. According to the technology used, these methods are mainly divided into hardware- and software-based systems. Hardware-based methods rely on extra sensors to gain additional information about the vitality of the fingerprint owner. Software-based methods, by contrast, rely solely on analyzing the fingerprint images acquired by the scanner. Software-based methods can be further divided into dynamic methods, which analyze sequences of images to capture the vital signs typical of a real fingerprint, and static methods, which process a single fingerprint impression. Among these approaches, static software-based methods come with three main benefits. First, they are cheaper, since they do not require any additional sensor to perform liveness detection. Second, they are faster, since the information they require is extracted from the same input image acquired for the identification task.
Third, they can potentially tackle novel forms of attack through a software update. The interest in this type of counter-spoofing method is at the basis of this dissertation, which addresses fingerprint liveness detection from a particular perspective, stemming from the following consideration. This problem has been tackled in the literature with many different approaches, most of which first identify the image features best suited to the problem at hand and then develop a classification system based on them. In particular, most published methods rely on a single type of feature. Each individual feature can be more or less discriminative and often highlights peculiar characteristics of the data, frequently complementary to those of other features. Thus, one way to improve classification accuracy is to find effective ways to combine features, so as to mutually exploit their individual strengths while mitigating their weaknesses. Such a "multi-view" approach, however, has been relatively overlooked in the literature. Based on this observation, the first part of this work investigates feature fusion methods capable of improving the generalization and robustness of fingerprint liveness detection systems and enhancing their classification strength. The second part approaches feature fusion in a different way: the fingerprint image is first divided into smaller parts, evidence about the liveness of each patch is then extracted, and all these pieces of information are finally combined to take the classification decision. The different approaches have been thoroughly analyzed and assessed by comparing their results, on a large number of datasets and under the same experimental protocol, with those of other works in the literature.
The experimental results discussed in this dissertation show that the proposed approaches obtain state-of-the-art results, demonstrating their effectiveness.
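The two fusion strategies discussed, combining features before classification and combining per-patch evidence afterwards, can be sketched generically as follows; the min-max normalisation and weighted mean are illustrative choices, not the dissertation's specific method:

```python
import numpy as np

def extract_patches(img, patch=8):
    """Split an image into non-overlapping patch x patch blocks, each of
    which would receive its own liveness score."""
    h, w = img.shape
    return [img[y:y + patch, x:x + patch]
            for y in range(0, h - patch + 1, patch)
            for x in range(0, w - patch + 1, patch)]

def fuse_scores(scores, weights=None):
    """Score-level fusion: weighted mean of per-patch liveness scores in
    [0, 1]; the final live/fake decision thresholds the fused score."""
    scores = np.asarray(scores, dtype=float)
    w = np.ones_like(scores) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * scores) / np.sum(w))

def fuse_features(feature_vectors):
    """Feature-level fusion: min-max-normalise each descriptor and
    concatenate, so one classifier can exploit complementary features."""
    parts = []
    for f in feature_vectors:
        f = np.asarray(f, dtype=float)
        span = f.max() - f.min()
        parts.append((f - f.min()) / span if span > 0 else f * 0.0)
    return np.concatenate(parts)
```

Feature-level fusion lets the classifier learn interactions between descriptors, while score-level fusion keeps each per-patch (or per-feature) classifier independent and simply aggregates their outputs.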