
    Poisoning Attacks on Learning-Based Keystroke Authentication and a Residue Feature Based Defense

    Behavioral biometrics, such as keystroke dynamics, are characterized by relatively large variation in the input samples compared to physiological biometrics such as fingerprints and iris. Recent advances in machine learning have produced behavior-based pattern learning methods that mitigate the effects of this variation by mapping variable behavior patterns to a unique identity with high accuracy. However, these learning systems are thereby exposed to attacks that exploit the update mechanism by injecting impostor samples to deliberately drift the data toward impostors' patterns. Using the principles of adversarial drift, we develop a class of poisoning attacks, named Frog-Boiling attacks. The update samples are crafted with slow changes and random perturbations so that they can bypass the classifier's detection. Taking the case of keystroke dynamics, which involves motoric and neurological learning, we demonstrate the success of our attack mechanism. We also present a detection mechanism for the frog-boiling attack that uses the correlation between successive training samples to detect spurious input patterns. To measure the effect of adversarial drift in the frog-boiling attack and the effectiveness of the proposed defense mechanism, we use traditional error rates such as FAR, FRR, and EER, as well as a metric based on shifts in the biometric menagerie.
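A minimal sketch of the correlation-based defense idea described in the abstract, not the authors' implementation: an update sample is accepted into the template only while it remains strongly correlated with the previously accepted sample. The feature layout and threshold here are hypothetical.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length feature vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def filter_updates(samples, threshold=0.9):
    """Accept template updates only while successive samples stay
    strongly correlated; reject candidates that break the chain."""
    accepted, rejected = [samples[0]], []
    for s in samples[1:]:
        if pearson(accepted[-1], s) >= threshold:
            accepted.append(s)
        else:
            rejected.append(s)
    return accepted, rejected

# Genuine keystroke-timing vectors (hypothetical hold/flight times, ms)
# vary little between sessions; a strongly drifted sample decorrelates.
genuine = [[100, 120, 90, 110], [102, 118, 92, 108]]
attack = [[90, 130, 140, 60]]
acc, rej = filter_updates(genuine + attack, threshold=0.9)
```

A real frog-boiling attack would keep each step correlated with the last, which is why the paper measures the cumulative drift as well as per-sample correlation.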

    Multi-View Face Recognition From Single RGBD Models of the Faces

    This work takes important steps towards solving the following problem of current interest: assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work relate to: (1) generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face as recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created and two publicly available datasets which include a total of 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to successfully recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
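Step (3) above can be sketched as a weighted vote: each captured image yields a per-view identity prediction with a confidence, and the confidences are summed per identity. The scoring scheme below is a hypothetical stand-in for the paper's view-partitioned subspace scores.

```python
from collections import defaultdict

def weighted_vote(view_predictions):
    """Fuse per-view predictions into one identity decision.

    view_predictions: list of (identity, confidence) pairs, one per
    captured image; each confidence acts as that image's vote weight.
    """
    tally = defaultdict(float)
    for identity, confidence in view_predictions:
        tally[identity] += confidence
    return max(tally, key=tally.get)

# Three images of the same face from different viewpoints:
preds = [("alice", 0.9), ("bob", 0.4), ("alice", 0.7)]
winner = weighted_vote(preds)  # "alice" wins, 1.6 vs 0.4
```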

    Pattern Recognition

    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications, and others. The signals processed are commonly one-, two-, or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms, and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. The authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition.

    Software-Based Fingerprint Liveness Detection

    Advisor: Roberto de Alencar Lotufo. Dissertation (Master's degree in Electrical Engineering) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Abstract: With the growing use of biometric authentication systems in the past years, spoof fingerprint detection has become increasingly important. In this work, we implemented and compared various techniques for software-based fingerprint liveness detection. We use as feature extractors Convolutional Networks with random weights, which are applied for the first time to this task, and Local Binary Patterns (LBP). The techniques were used in conjunction with dimensionality reduction through Principal Component Analysis (PCA) and a Support Vector Machine (SVM) classifier. Dataset augmentation was successfully used to improve the classifier's performance. We tested a variety of preprocessing operations, such as frequency filtering, contrast equalization, and region-of-interest filtering. Thanks to the high-performance computers available as cloud services, we were able to perform an automatic and extensive search for the best combination of preprocessing operations, architectures, and hyper-parameters. The experiments were performed on the datasets used in the Liveness Detection Competitions of 2009, 2011, and 2013, which together comprise almost 50,000 real and fake fingerprint images. Our best method achieves an average rate of 95.2% of correctly classified samples, a 59% improvement in test error compared with the best previously published results.
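A toy sketch of the random-weight convolutional feature extraction mentioned above (kernel count and size are hypothetical; the PCA step and SVM classifier are omitted). Each random kernel is convolved over the image, passed through a ReLU, and globally max-pooled into one feature value.

```python
import random

def conv_feature(image, kernel):
    """Valid 2D convolution with ReLU followed by global max pooling;
    one kernel yields one scalar feature."""
    H, W = len(image), len(image[0])
    k = len(kernel)
    best = 0.0  # starting at 0 applies the ReLU implicitly
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            s = sum(kernel[a][b] * image[i + a][j + b]
                    for a in range(k) for b in range(k))
            best = max(best, s)
    return best

def extract(image, kernels):
    """Random-weight convolutional feature vector; in the thesis
    pipeline, PCA and an SVM would follow."""
    return [conv_feature(image, k) for k in kernels]

random.seed(0)
kernels = [[[random.gauss(0, 1) for _ in range(3)] for _ in range(3)]
           for _ in range(4)]
img = [[(i * j) % 7 / 7.0 for j in range(8)] for i in range(8)]
feats = extract(img, kernels)  # 4-dimensional feature vector
```

Random weights work surprisingly well here because the convolution/pool architecture itself, not the learned filter values, supplies most of the useful invariances.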

    Automated dental identification: A micro-macro decision-making approach

    Identification of deceased individuals based on dental characteristics is receiving increased attention, especially with the large volume of victims encountered in mass disasters. In this work we consider three important problems in automated dental identification that go beyond the basic approach of tooth-to-tooth matching.

    The first problem is the automatic classification of teeth into incisors, canines, premolars, and molars as part of creating a data structure that guides tooth-to-tooth matching, thus avoiding illogical comparisons that inefficiently consume the limited computational resources and may also mislead the decision-making. We tackle this problem using principal component analysis and string matching techniques. We reconstruct the segmented teeth using the eigenvectors of the image subspaces of the four tooth classes, and then assign each tooth the class that achieves the least energy discrepancy between the novel tooth and its approximation. We exploit tooth-neighborhood rules in validating the tooth classes and hence assign each tooth a number corresponding to its location in a dental chart. Our approach achieves 82% tooth-labeling accuracy on a large test dataset of bitewing films.

    Because dental radiographic films capture projections of distinct teeth, and often multiple views of each of those teeth, in the second problem we look for a scheme that exploits tooth multiplicity to achieve more reliable match decisions when comparing the dental records of a subject and a candidate match. Hence, we propose a hierarchical fusion scheme that utilizes both aspects of tooth multiplicity to improve tooth-level (micro) and case-level (macro) decision-making. We achieve a genuine accept rate in excess of 85%.

    In the third problem we study the performance limits of dental identification imposed by the capabilities of the features. We consider two types of features used in dental identification, namely tooth contours and appearance features. We propose a methodology for determining the number of degrees of freedom possessed by a feature set, as a figure of merit, based on modeling joint distributions using copulas under less stringent assumptions on the dependence between feature dimensions. We also offer workable approximations of this approach.
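The eigenspace classification step for the first problem can be sketched as follows: project the tooth image onto each class's orthonormal eigenbasis, reconstruct it, and assign the class with the smallest residual (reconstruction) energy. The bases below are tiny hypothetical stand-ins for the real PCA eigenvectors.

```python
def project_residual(x, basis):
    """Energy discrepancy between vector x and its reconstruction
    from an orthonormal class basis (list of equal-length vectors)."""
    recon = [0.0] * len(x)
    for b in basis:
        coeff = sum(xi * bi for xi, bi in zip(x, b))
        recon = [r + coeff * bi for r, bi in zip(recon, b)]
    return sum((xi - ri) ** 2 for xi, ri in zip(x, recon))

def classify_tooth(x, class_bases):
    """Assign the class whose eigen-subspace reconstructs x with the
    least residual energy."""
    return min(class_bases,
               key=lambda c: project_residual(x, class_bases[c]))

# Hypothetical one-vector orthonormal bases for two tooth classes:
bases = {
    "incisor": [[1.0, 0.0, 0.0]],
    "molar":   [[0.0, 1.0, 0.0]],
}
label = classify_tooth([0.9, 0.1, 0.0], bases)  # "incisor"
```

In the paper this per-tooth decision is then validated against tooth-neighborhood rules before a dental-chart position is assigned.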

    Deep Learning Architectures for Heterogeneous Face Recognition

    Face recognition has been one of the most challenging areas of research in biometrics and computer vision. Many face recognition algorithms are designed to address illumination and pose problems for visible face images. In recent years, there has been a significant amount of research in Heterogeneous Face Recognition (HFR). The large modality gap between faces captured in different spectra, as well as the lack of training data, makes HFR quite a challenging problem. In this work, we present different deep learning frameworks to address the problem of matching non-visible face photos against a gallery of visible faces. Algorithms for thermal-to-visible face recognition can be categorized as cross-spectrum feature-based methods or cross-spectrum image synthesis methods. In cross-spectrum feature-based face recognition, a thermal probe is matched in a feature subspace against a gallery of visible faces, corresponding to the real-world scenario. The second category synthesizes a visible-like image from a thermal image, which can then be used by any commercial visible-spectrum face recognition system. These methods are also beneficial in that the synthesized visible face image can be directly utilized by existing face recognition systems that operate only on visible face imagery; this approach can therefore leverage existing commercial-off-the-shelf (COTS) and government-off-the-shelf (GOTS) solutions. In addition, the synthesized images can be used by human examiners for different purposes.

    There are informative traits, such as age, gender, ethnicity, race, and hair color, that are not distinctive enough for recognition on their own but can still act as complementary information to primary traits such as face and fingerprint. These traits, known as soft biometrics, can improve recognition algorithms while being much cheaper and faster to acquire, and can be used directly in a unimodal system for some applications. Usually, soft biometric traits are utilized jointly with hard biometrics (face photos), in the sense that they are assumed to be available during both the training and testing phases. In our approach we look at this problem differently: we consider the case in which soft biometric information does not exist during the testing phase, and our method predicts it directly in a multi-tasking paradigm. More generally, there are situations in which training data comes equipped with additional information that can be modeled as an auxiliary view of the data but that, unfortunately, is not available during testing; this is the Learning Using Privileged Information (LUPI) scenario. We introduce a novel framework based on deep learning techniques that leverages the auxiliary view to improve the performance of the recognition system. We do so by introducing a formulation that is general, in the sense that it can be used with any visual classifier. Every use of auxiliary information has been validated extensively using publicly available benchmark datasets, and several new state-of-the-art accuracy values have been set. Examples of application domains include visual object recognition from RGB images and from depth data, handwritten digit recognition, and gesture recognition from video.

    We also design a novel aggregation framework that optimizes the landmark locations directly, using only one image and without requiring any extra prior, which leads to robust alignment under arbitrary face deformations. Three different approaches are employed to generate the manipulated faces, two of which perform the manipulation via adversarial attacks to fool a face recognizer. This step can be decoupled from our framework and potentially used to enhance other landmark detectors. Aggregating the manipulated faces in the different branches of the proposed method leads to robust landmark detection.

    Finally, we focus on generative adversarial networks, a very powerful tool for synthesizing visible-like images from non-visible images. The main goal of a generative model is to approximate the true data distribution, which is not known, and in general the choice of how to model the density function is challenging. Explicit models have the advantage of directly calculating probability densities; two well-known alternatives, the Generative Adversarial Network (GAN) and the Variational AutoEncoder (VAE), avoid an explicit density: the VAE maximizes a lower bound on the data likelihood, while a GAN plays a minimax game between two players during its optimization. GANs overlook the explicit characteristics of the data density, which leads to undesirable quantitative evaluations and to mode collapse, causing the generator to produce similar-looking images with poor sample diversity. In the last chapter of the thesis, we address this issue within the GAN framework.
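The minimax game mentioned above is the standard GAN objective; in the usual notation, with generator $G$, discriminator $D$, data distribution $p_{\mathrm{data}}$, and noise prior $p_z$:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

Mode collapse corresponds to $G$ concentrating mass on a few modes of $p_{\mathrm{data}}$ while still keeping $D$ near its equilibrium, which this objective by itself does not penalize.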