
    Influence of segmentation on deep iris recognition performance

    Despite the rise of deep learning in numerous areas of computer vision and image processing, iris recognition has not benefited considerably from these trends so far. Most of the existing research on deep iris recognition focuses on new models for generating discriminative and robust iris representations and relies on methodologies akin to traditional iris recognition pipelines. Hence, the proposed models do not approach iris recognition in an end-to-end manner, but rather use standard heuristic iris segmentation (and unwrapping) techniques to produce normalized inputs for the deep learning models. However, because deep learning is able to model very complex data distributions and nonlinear data changes, an obvious question arises: how important are traditional segmentation methods in a deep learning setting? To answer this question, we present an empirical analysis of the impact of iris segmentation on the performance of deep learning models using a simple two-stage pipeline consisting of a segmentation and a recognition step. We evaluate how segmentation accuracy influences recognition performance, but also examine whether segmentation is needed at all. We use the CASIA Thousand and SBVPI datasets for the experiments and report several interesting findings. Comment: 6 pages, 3 figures, 3 tables, submitted to IWBF 201
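    As a concrete illustration of the two-stage setup the abstract describes, the sketch below shows a rubber-sheet unwrapping step that turns segmentation output (hypothetical pupil/iris circle parameters) into the normalized strip a recognition CNN would consume. The segmentation model itself is assumed and not implemented here; this is not the paper's code.

```python
# Minimal sketch of the two-stage pipeline: a segmentation step is assumed to
# yield pupil/iris circle parameters, and rubber-sheet unwrapping produces the
# normalized input that a recognition CNN would then process.
import numpy as np

def rubber_sheet_unwrap(image, pupil_xy, pupil_r, iris_r, out_h=64, out_w=256):
    """Map the annular iris region to a fixed-size polar (rubber-sheet) strip."""
    cx, cy = pupil_xy
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(0, 1, out_h)                    # 0 = pupil edge, 1 = iris edge
    r_grid = pupil_r + radii[:, None] * (iris_r - pupil_r)
    xs = (cx + r_grid * np.cos(thetas)[None, :]).astype(int)
    ys = (cy + r_grid * np.sin(thetas)[None, :]).astype(int)
    xs = np.clip(xs, 0, image.shape[1] - 1)
    ys = np.clip(ys, 0, image.shape[0] - 1)
    return image[ys, xs]                                 # (out_h, out_w) normalized strip

# Hypothetical usage: any segmentation method supplies the circles, and the
# resulting strip is fed to whatever recognition model is being evaluated.
eye = np.random.rand(480, 640)                           # stand-in for an eye image
strip = rubber_sheet_unwrap(eye, pupil_xy=(320, 240), pupil_r=40, iris_r=110)
print(strip.shape)                                       # (64, 256)
```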

    Semi-supervised auto-encoder for facial attributes recognition

    The particularity of our faces encourages many researchers to exploit facial features in different domains such as user identification, behaviour analysis, computer technology, security, and psychology. In this paper, we present a method for facial attribute analysis: facial images are analysed and features are extracted in order to recognize the demographic attributes age, gender, and ethnicity (AGE). We exploit the robustness of deep learning (DL) using an updated version of the autoencoder called the deep sparse autoencoder (DSAE). We propose a new DSAE architecture that adds supervision to the classic model, and we control overfitting by regularizing the model. The move from the DSAE to the semi-supervised autoencoder (DSSAE) simplifies the supervision process and yields excellent feature-extraction performance. We focus on estimating AGE jointly. The experimental results show that the DSSAE recognizes facial features with high precision, and the whole system achieves good performance and strong AGE recognition rates on the MORPH II database.
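    The sketch below illustrates one plausible reading of the described semi-supervised sparse autoencoder: a code layer trained with a reconstruction loss, three supervised heads for the AGE attributes, and an L1 sparsity penalty. Layer sizes, loss weights, and the PyTorch framing are assumptions for illustration, not the paper's DSSAE.

```python
# Illustrative sketch of a semi-supervised sparse autoencoder for joint AGE
# (age, gender, ethnicity) prediction; all hyperparameters are assumptions.
import torch
import torch.nn as nn

class SemiSupervisedSparseAE(nn.Module):
    def __init__(self, in_dim=2048, code_dim=256, n_age=8, n_gender=2, n_eth=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, code_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(code_dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, in_dim))
        self.age_head = nn.Linear(code_dim, n_age)       # supervised AGE heads
        self.gender_head = nn.Linear(code_dim, n_gender)
        self.eth_head = nn.Linear(code_dim, n_eth)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code, (self.age_head(code),
                                          self.gender_head(code),
                                          self.eth_head(code))

model = SemiSupervisedSparseAE()
x = torch.randn(16, 2048)                                # stand-in face descriptors
y_age = torch.randint(0, 8, (16,))
y_gen = torch.randint(0, 2, (16,))
y_eth = torch.randint(0, 5, (16,))
recon, code, (a, g, e) = model(x)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
loss = (mse(recon, x)                                    # unsupervised reconstruction
        + ce(a, y_age) + ce(g, y_gen) + ce(e, y_eth)     # supervised AGE terms
        + 1e-4 * code.abs().mean())                      # sparsity regularization
loss.backward()
```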

    A Reminiscence of "Mastermind": Iris/Periocular Biometrics by "In-Set" CNN Iterative Analysis

    Convolutional neural networks (CNNs) have emerged as the most popular classification models in biometrics research. Under the discriminative paradigm of pattern recognition, CNNs are typically used in one of two ways: 1) verification mode ("are samples from the same person?"), where pairs of images are provided to the network to distinguish between genuine and impostor instances; and 2) identification mode ("whom is this sample from?"), where appropriate feature representations that map images to identities are found. This paper postulates a novel mode for using CNNs in biometric identification, by learning models that answer the question "is the query's identity among this set?". The insight is reminiscent of the classical Mastermind game: by iteratively analysing the network responses when multiple random samples of k gallery elements are compared to the query, we obtain weakly correlated matching scores that, altogether, provide solid cues to infer the most likely identity. In this setting, identification is regarded as a variable selection and regularization problem, with sparse linear regression techniques being used to infer the matching probability with respect to each gallery identity. As its main strength, this strategy is highly robust to outlier matching scores, which are known to be a primary error source in biometric recognition. Our experiments were carried out on the full versions of two well-known iris near-infrared (CASIA-IrisV4-Thousand) and periocular visible-wavelength (UBIRIS.v2) datasets, and confirm that recognition performance can be solidly boosted by the proposed algorithm when compared to the traditional working modes of CNNs in biometrics.
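    The sketch below simulates the "in-set" idea in miniature: random gallery subsets of size k produce noisy membership scores, and a sparse (Lasso) regression over subset-membership indicators recovers the most likely gallery identity. The in-set scorer here is a stand-in simulation, not the CNN described in the abstract, and all parameter values are assumptions.

```python
# Illustrative sketch of identification via iterative "in-set" queries and
# sparse linear regression over subset-membership indicators.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_gallery, k, n_trials = 200, 10, 2000
true_identity = 37                                       # ground truth, used only to simulate scores

X = np.zeros((n_trials, n_gallery))                      # subset-membership indicators
y = np.zeros(n_trials)                                   # noisy "in-set" responses
for t in range(n_trials):
    subset = rng.choice(n_gallery, size=k, replace=False)
    X[t, subset] = 1.0
    in_set = float(true_identity in subset)
    y[t] = in_set + rng.normal(scale=0.5)                # weakly correlated matching score

# Sparse regression: most gallery identities receive ~zero weight, while the
# query's identity should stand out as the largest coefficient.
reg = Lasso(alpha=0.01, positive=True).fit(X, y)
print("predicted identity:", int(np.argmax(reg.coef_)))  # expected: 37
```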