
    Risk-Adapted Access Control with Multimodal Biometric Identification

    The presented article examines the background of biometric identification. As a technical method of authentication, biometrics suffers from some limitations. These limitations are due to human nature: skin, appearance, and behavior change more or less continuously over time. Changing patterns degrade sample quality and raise the risk of authentication errors. This study investigated risk adaptation and the integration of a mathematical representation of this risk into the whole authentication process. Several biometric identification methods were compared in order to find an algorithm for a multimodal biometric identification process that simultaneously improves the false acceptance and false rejection rates. The proposed solution is based on the Adaptive Neuro-Fuzzy Inference System and Bayes' theorem.
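
    As a rough illustration of the Bayesian half of this idea (not the article's ANFIS component), the sketch below fuses two match scores into a posterior probability of a genuine user via Bayes' theorem. The Gaussian score densities and the prior are illustrative assumptions, not parameters from the article.

        # Minimal sketch: Bayesian fusion of two biometric match scores.
        # Score distributions and the prior are assumed for illustration only.
        from math import exp, pi, sqrt

        def gauss(x, mu, sigma):
            return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

        def posterior_genuine(s_finger, s_face, prior_genuine=0.5):
            # Assumed class-conditional score models (genuine scores run higher).
            lik_gen = gauss(s_finger, 0.8, 0.10) * gauss(s_face, 0.75, 0.12)
            lik_imp = gauss(s_finger, 0.4, 0.15) * gauss(s_face, 0.35, 0.15)
            num = lik_gen * prior_genuine
            return num / (num + lik_imp * (1.0 - prior_genuine))

        print(posterior_genuine(0.82, 0.70))  # close to 1.0 -> accept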

    Fingerprint testing protocols for optical sensors

    Testing protocols for biometric technologies are currently varied, conflicting, and contradictory. There is no biometrics testing standard, which allows vendors to skew test results in their favor. The research discussed in this thesis addresses these issues by developing and validating testing protocols for optical fingerprint sensors. Angle of rotation, translation, lighting, and device placement were identified as variables potentially affecting system performance, and protocols were developed to evaluate their effects on optical fingerprint sensors. Testing was done by capturing raw images under different scenarios; offline analysis of the data was then performed to measure how these variables impact performance. The results show that these variables do affect the performance of optical fingerprint sensors and that the proposed protocols are relevant to their evaluation.
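
    A hedged sketch of what the offline analysis step could look like: match scores grouped by capture condition and per-condition false acceptance and false rejection rates computed. The data layout, condition names, and threshold are assumptions, not the thesis's actual protocol.

        # Hypothetical offline analysis: error rates per capture condition.
        from collections import defaultdict

        def rates_by_condition(trials, threshold=0.5):
            # trials: iterable of (condition, match_score, is_genuine) tuples
            stats = defaultdict(lambda: {"fa": 0, "imp": 0, "fr": 0, "gen": 0})
            for cond, score, genuine in trials:
                s = stats[cond]
                if genuine:
                    s["gen"] += 1
                    s["fr"] += score < threshold   # false rejection
                else:
                    s["imp"] += 1
                    s["fa"] += score >= threshold  # false acceptance
            return {c: (s["fa"] / max(s["imp"], 1), s["fr"] / max(s["gen"], 1))
                    for c, s in stats.items()}

        trials = [("rotated_15deg", 0.62, True), ("rotated_15deg", 0.55, False),
                  ("baseline", 0.91, True), ("baseline", 0.22, False)]
        print(rates_by_condition(trials))  # {condition: (FAR, FRR)}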

    The theoretical limits of biometry

    Biometry has proved its capability in terms of recognition accuracy. It is now widely used for automated border control with the biometric passport, and to unlock a smartphone or a computer with a fingerprint or face recognition algorithm. While identity verification is widely democratized, pure identification with no additional clues is still a work in progress. The difficulty of identification depends on the population size: the larger the group, the larger the risk of confusion. To prevent collisions, biometric traits must be sufficiently distinguishable to scale to very large groups, and algorithms must be able to capture their differences accurately. Most biometric works are purely experimental, and it is impossible to extrapolate their results to a smaller or larger group. In this work, we propose a theoretical analysis of the distinguishability problem, which governs the error rates of biometric systems. We demonstrate simple relationships between the population size and the number of independent bits necessary to prevent collision in the presence of noise. This work provides a lower bound on memory requirements. The results are very encouraging: the biometry of the whole Earth population can fit on a regular disk, leaving some space for noise and redundancy.
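
    The scale of the final claim can be checked with a back-of-envelope birthday-bound calculation; the tolerated collision probability and the noise margin below are illustrative assumptions, not the paper's exact bound.

        # Birthday bound: collision probability ~ N^2 / 2^(b+1), so
        # b >= 2*log2(N) + log2(1/(2*eps)) keeps it below eps.
        from math import log2, ceil

        N = 8_000_000_000          # Earth-scale population
        eps = 1e-6                 # assumed tolerated collision probability
        bits_min = ceil(2 * log2(N) + log2(1 / (2 * eps)))
        bits_stored = bits_min + 64          # assumed slack for noise/redundancy
        total_bytes = N * bits_stored / 8
        print(bits_min, f"{total_bytes / 1e12:.2f} TB")  # ~85 bits, ~0.15 TB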

    A Bayesian model for predicting face recognition performance using image quality

    The quality of a pair of facial images is a strong indicator of the uncertainty in a decision about identity based on that pair. In this paper, we describe a Bayesian approach to modelling the relation between image quality (such as pose, illumination, noise, and sharpness) and the corresponding face recognition performance. Experimental results on the MultiPIE data set show that our model can accurately aggregate verification samples into groups for which verification performance varies fairly consistently. The model does not require similarity scores and can predict face recognition performance from image quality information alone. Such a model has many applications. As an illustrative application, we show improved verification performance when the decision threshold adapts automatically to the quality of the facial images.
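
    A minimal sketch of the quality-adaptive thresholding idea: verification pairs are binned by a scalar quality score and each bin gets its own decision threshold. The bin edges and thresholds are assumptions, not values fitted on MultiPIE.

        # Quality-adaptive decision threshold: stricter for low-quality pairs.
        import bisect

        QUALITY_EDGES = [0.3, 0.6]            # low / medium / high quality
        THRESHOLDS = [0.80, 0.65, 0.55]       # one threshold per quality bin

        def verify(score, pair_quality):
            bin_idx = bisect.bisect(QUALITY_EDGES, pair_quality)
            return score >= THRESHOLDS[bin_idx]

        print(verify(0.70, 0.9))   # high-quality pair -> accept
        print(verify(0.70, 0.2))   # same score, low-quality pair -> reject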

    Inferring the Latent Incidence of Inefficiency from DEA Estimates and Bayesian Priors

    Data envelopment analysis (DEA) is among the most popular empirical tools for measuring cost and productive efficiency. Because DEA is a linear programming technique, establishing formal statistical properties for its outcomes is difficult. We show that the incidence of inefficiency within a population of Decision Making Units (DMUs) is a latent variable, with DEA outcomes providing only noisy sample-based categorizations of inefficiency. We then use a Bayesian approach to infer an appropriate posterior distribution for the incidence of inefficient DMUs based on a random sample of DEA outcomes and a prior distribution on the incidence of inefficiency. The methodology applies to both finite and infinite populations and to sampling DMUs with and without replacement, and it accounts for the noise in the DEA characterization of inefficiency within a coherent Bayesian approach. The result is an appropriately up-scaled, noise-adjusted inference regarding the incidence of inefficiency in a population of DMUs.
    Keywords: Data Envelopment Analysis, latent inefficiency, Bayesian inference, Beta priors, posterior incidence of inefficiency
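
    With a Beta prior on the incidence of inefficiency and a binomial count of DMUs flagged by DEA, the posterior is again Beta by conjugacy. A minimal sketch of this core step, leaving out the paper's noise adjustment and finite-population treatment; the prior and sample values are illustrative.

        # Conjugate Beta-Binomial update: prior Beta(a, b) on the incidence
        # of inefficiency, k of n sampled DMUs flagged inefficient by DEA.
        a, b = 2.0, 8.0        # assumed prior: inefficiency believed rare
        n, k = 50, 12          # assumed sample: 50 DMUs, 12 flagged

        a_post, b_post = a + k, b + (n - k)
        post_mean = a_post / (a_post + b_post)
        print(f"posterior Beta({a_post:.0f}, {b_post:.0f}), mean {post_mean:.3f}")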

    Face Liveness Detection under Processed Image Attacks

    Face recognition is a mature and reliable technology for identifying people. Thanks to high-definition cameras and supporting devices, it is considered the fastest and the least intrusive biometric recognition modality. Nevertheless, effective spoofing attacks on face recognition systems have been shown to be possible. As a result, various anti-spoofing algorithms were developed to counteract these attacks; they are commonly referred to in the literature as liveness detection tests. In this research we highlight the effectiveness of some simple, direct spoofing attacks and test one of the current robust liveness detection algorithms, the logistic-regression-based face liveness detection from a single image proposed by Tan et al. in 2010, against malicious attacks using processed imposter images. In particular, we study experimentally the effect of common image processing operations, such as sharpening and smoothing, as well as corruption with salt-and-pepper noise, on the face liveness detection algorithm, and we find that it is especially vulnerable to spoofing attempts using processed imposter images. We design and present a new facial database, the Durham Face Database, which is, to the best of our knowledge, the first to contain client, imposter, and processed imposter images. Finally, we evaluate our claim about the effectiveness of the proposed imposter image attacks using transfer learning on Convolutional Neural Networks, and we verify that such attacks are more difficult to detect even with high-end, expensive machine learning techniques.
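
    The processing operations studied can be reproduced roughly as follows: smoothing, unsharp-mask sharpening, and salt-and-pepper corruption of a grayscale image held as a float array in [0, 1]. Kernel size, sharpening amount, and noise density are illustrative assumptions, not the paper's settings.

        # Illustrative versions of the studied operations, NumPy only.
        import numpy as np

        def box_blur(img, k=3):
            pad = k // 2
            p = np.pad(img, pad, mode="edge")
            out = np.zeros_like(img)
            for dy in range(k):
                for dx in range(k):
                    out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            return out / (k * k)

        def sharpen(img, amount=1.0):
            # Unsharp mask: boost the difference from a blurred copy.
            return np.clip(img + amount * (img - box_blur(img)), 0.0, 1.0)

        def salt_pepper(img, density=0.05, rng=np.random.default_rng(0)):
            out = img.copy()
            mask = rng.random(img.shape)
            out[mask < density / 2] = 0.0          # pepper
            out[mask > 1 - density / 2] = 1.0      # salt
            return out

        img = np.full((4, 4), 0.5)                 # toy grayscale image
        print(salt_pepper(sharpen(img)).round(2))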

    Spatio-temporal modelling of the coffee berry borer infestation pattern accounting for excess zeros and missing data

    The study of pest distributions in space and time in agricultural systems provides important information for the optimization of integrated pest management programs and for the planning of experiments. Two statistical problems commonly associated with space-time modelling of such data, and which hinder its implementation, are the excess of zero counts and the presence of missing values due to the adopted sampling scheme. These problems are considered in the present article. Data on coffee berry borer infestation collected under Colombian field conditions are used to study the spatio-temporal evolution of the pest infestation. The dispersion of the pest from initial foci of infestation was modelled considering linear and quadratic infestation growth trends as well as different combinations of random effects representing both spatially structured and unstructured variability. The analysis was carried out under a hierarchical Bayesian approach, and missing values were handled by multiple imputation. Additionally, a mixture model was proposed to account for the excess of zeros at the beginning of the infestation. In general, quadratic models fit better than linear models. The use of spatially structured parameters also allowed a clearer identification of temporal increases or decreases in infestation patterns. However, none of the space-time models based on standard distributions was able to properly describe the excess of zero counts at the beginning of the infestation. This overdispersed pattern was correctly modelled by the mixture space-time models, which performed better than their counterparts without a mixture component.
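
    The mixture component for excess zeros corresponds, in its simplest form, to a zero-inflated likelihood. Below is a minimal sketch of a zero-inflated Poisson log-likelihood; it covers only the observation model, not the full hierarchical spatio-temporal specification, and the counts and parameters are illustrative.

        # Zero-inflated Poisson log-likelihood: a structural zero with
        # probability pi0, otherwise a Poisson(lam) count.
        from math import exp, log, lgamma

        def zip_loglik(counts, lam, pi0):
            ll = 0.0
            for y in counts:
                if y == 0:
                    ll += log(pi0 + (1 - pi0) * exp(-lam))
                else:
                    ll += log(1 - pi0) - lam + y * log(lam) - lgamma(y + 1)
            return ll

        counts = [0, 0, 0, 0, 1, 0, 3, 0, 2]   # illustrative early-stage counts
        print(zip_loglik(counts, lam=1.2, pi0=0.4))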