88 research outputs found

    How Image Degradations Affect Deep CNN-based Face Recognition?

    Face recognition approaches based on deep convolutional neural networks (CNNs) have been dominating the field. The performance improvements they have provided on so-called in-the-wild datasets are significant; however, their performance under image quality degradations has not yet been assessed. This is particularly important, since in real-world face recognition applications images may contain various kinds of degradations due to motion blur, noise, compression artifacts, color distortions, and occlusion. In this work, we address this problem and analyze the influence of these image degradations on the performance of deep CNN-based face recognition approaches using the standard LFW closed-set identification protocol. We evaluate three popular deep CNN models, namely AlexNet, VGG-Face, and GoogLeNet. The results indicate that blur, noise, and occlusion cause a significant decrease in performance, while the deep CNN models are robust to distortions such as color distortions and changes in color balance.
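    The evaluation protocol described here boils down to degrading a probe image and measuring how the model's similarity to the clean image drops. Below is a minimal sketch of that idea; the `embed` function is a placeholder for any pretrained face CNN (e.g., a VGG-Face feature extractor) and is not part of the paper's code, and the degradation parameters are illustrative.

```python
# Sketch: apply a synthetic degradation and compare embeddings.
# `embed(img) -> np.ndarray` is assumed to exist (any pretrained face CNN).
import numpy as np
from PIL import Image, ImageFilter

def degrade(img: Image.Image, kind: str) -> Image.Image:
    """Apply one of the degradation types studied in the paper."""
    if kind == "blur":                       # Gaussian blur
        return img.filter(ImageFilter.GaussianBlur(radius=3))
    if kind == "noise":                      # additive Gaussian noise
        arr = np.asarray(img, dtype=np.float32)
        arr += np.random.normal(0, 15, arr.shape)
        return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    if kind == "grayscale":                  # a simple color distortion
        return img.convert("L").convert("RGB")
    raise ValueError(kind)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# clean = Image.open("subject_01.jpg")
# for kind in ("blur", "noise", "grayscale"):
#     print(kind, cosine(embed(clean), embed(degrade(clean, kind))))
```

    Per the abstract's findings, one would expect the blur and noise scores to fall sharply while the grayscale (color-distortion) score stays close to 1.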

    A Reminiscence of "Mastermind": Iris/Periocular Biometrics by "In-Set" CNN Iterative Analysis

    Convolutional neural networks (CNNs) have emerged as the most popular classification models in biometrics research. Under the discriminative paradigm of pattern recognition, CNNs are typically used in one of two ways: 1) verification mode ("are the samples from the same person?"), where pairs of images are provided to the network to distinguish between genuine and impostor instances; and 2) identification mode ("who is this sample from?"), where appropriate feature representations that map images to identities are learned. This paper postulates a novel mode for using CNNs in biometric identification, by learning models that answer the question "is the query's identity among this set?". The insight is reminiscent of the classical Mastermind game: by iteratively analysing the network responses when multiple random samples of k gallery elements are compared to the query, we obtain weakly correlated matching scores that, altogether, provide solid cues for inferring the most likely identity. In this setting, identification is regarded as a variable selection and regularization problem, with sparse linear regression techniques used to infer the matching probability with respect to each gallery identity. As its main strength, this strategy is highly robust to outlier matching scores, which are known to be a primary error source in biometric recognition. Our experiments were carried out on the full versions of two well-known datasets, one of near-infrared iris images (CASIA-IrisV4-Thousand) and one of visible-wavelength periocular images (UBIRIS.v2), and confirm that recognition performance is solidly boosted by the proposed algorithm when compared to the traditional working modes of CNNs in biometrics.
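    The inference step described above can be sketched compactly: each random k-subset trial produces one row of a binary membership matrix and one in-set score, and a sparse regression over that system concentrates weight on the identities that best explain the scores. The sketch below assumes a hypothetical `inset_score(query, subset)` standing in for the paper's in-set CNN; the subset size, trial count, and Lasso penalty are illustrative, not the paper's settings.

```python
# Sketch of "Mastermind"-style identification via sparse regression.
import numpy as np
from sklearn.linear_model import Lasso

def identify(query, gallery, inset_score, k=10, trials=500, alpha=0.01):
    n = len(gallery)
    X = np.zeros((trials, n))   # row t: which identities were in set t
    s = np.zeros(trials)        # weakly correlated in-set matching scores
    rng = np.random.default_rng(0)
    for t in range(trials):
        subset = rng.choice(n, size=k, replace=False)
        X[t, subset] = 1.0
        s[t] = inset_score(query, [gallery[i] for i in subset])
    # Sparse regression: only a few identities should explain the scores,
    # which makes the estimate robust to occasional outlier responses.
    reg = Lasso(alpha=alpha, positive=True).fit(X, s)
    return int(np.argmax(reg.coef_))   # index of the most likely identity
```

    Because each identity appears in many trials, a few bad in-set responses are averaged out by the regression, which is the robustness property the abstract highlights.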

    Advanced Biometrics with Deep Learning

    Biometrics, such as fingerprint, iris, face, hand-print, hand-vein, speech, and gait recognition, have become commonplace as a means of identity management for various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction, and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction, and recognition based solely on the biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into 4 categories according to biometric modality: face biometrics, medical electronic signals (EEG and ECG), voice print, and others.

    UFPR-Periocular: A Periocular Dataset Collected by Mobile Devices in Unconstrained Scenarios

    Recently, ocular biometrics in unconstrained environments using images obtained at visible wavelengths has gained researchers' attention, especially with images captured by mobile devices. Periocular recognition has been shown to be an alternative when the iris trait is not available due to occlusion or low image resolution. However, the periocular trait does not offer the high uniqueness of the iris trait. Thus, datasets containing many subjects are essential to assess a biometric system's capacity to extract discriminating information from the periocular region. To address the within-class variability caused by lighting and attributes in the periocular region, it is also of paramount importance to use datasets with images of the same subject captured in distinct sessions. As the datasets available in the literature do not present all these factors, in this work we present a new periocular dataset containing samples from 1,122 subjects, acquired in 3 sessions by 196 different mobile devices. The images were captured in unconstrained environments with a single instruction to the participants: to place their eyes in a region of interest. We also perform an extensive benchmark with several Convolutional Neural Network (CNN) architectures and models that have been employed in state-of-the-art approaches based on multi-class classification, multitask learning, pairwise filter networks, and Siamese networks. The results achieved in the closed- and open-world protocols, considering both the identification and verification tasks, show that this area still needs research and development.
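    Of the benchmarked families, the Siamese setup is the easiest to illustrate: two images pass through a shared backbone and a contrastive loss pulls genuine pairs together and pushes impostor pairs apart. The sketch below is a generic example of that family, not the exact architectures or hyperparameters used in the paper; the ResNet-18 backbone, embedding size, and margin are illustrative choices.

```python
# Generic Siamese verification sketch (shared-weight backbone + contrastive loss).
import torch
import torch.nn as nn
import torchvision.models as models

class Siamese(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)          # illustrative backbone
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)
        self.backbone = backbone

    def forward(self, a, b):
        # Both branches share the same weights.
        return self.backbone(a), self.backbone(b)

def contrastive_loss(ea, eb, same, margin: float = 1.0):
    # `same` is 1 for genuine pairs (same subject), 0 for impostor pairs.
    d = torch.nn.functional.pairwise_distance(ea, eb)
    return (same * d.pow(2) +
            (1 - same) * torch.clamp(margin - d, min=0).pow(2)).mean()
```

    Multi-class classification and multitask variants replace the pairwise head with per-identity (and per-attribute) softmax outputs over the same backbone.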

    Deep-PRWIS: Periocular Recognition Without the Iris and Sclera Using Deep Learning Frameworks

    This work is based on a disruptive hypothesis for periocular biometrics: in visible-light data, recognition performance is optimized when the components inside the ocular globe (the iris and the sclera) are simply discarded, and the recognizer's response is based exclusively on information from the surroundings of the eye. As its major novelty, we describe a processing chain based on convolutional neural networks (CNNs) that defines the regions of interest in the input data that should be privileged in an implicit way, i.e., without masking out any areas in the learning/test samples. By using an ocular segmentation algorithm exclusively on the learning data, we separate the ocular from the periocular parts. Then, we produce a large set of "multi-class" artificial samples by interchanging the periocular and ocular parts from different subjects. These samples are used for data augmentation purposes and feed the learning phase of the CNN, always considering as label the ID of the periocular part. This way, for every periocular region, the CNN receives multiple samples of different ocular classes, forcing it to conclude that such regions should not be considered in its response. During the test phase, samples are provided without any segmentation mask and the network naturally disregards the ocular components, which contributes to improvements in performance. Our experiments were carried out on the full versions of two widely known datasets (UBIRIS.v2 and FRGC) and show that the proposed method consistently advances the state-of-the-art performance in the closed-world setting, reducing the EERs by about 82% (UBIRIS.v2) and 85% (FRGC) and improving the Rank-1 by over 41% (UBIRIS.v2) and 12% (FRGC).
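    The core augmentation step, pasting one subject's ocular region into another subject's periocular image while keeping the periocular subject's ID as the label, can be sketched in a few lines. This is a minimal sketch assuming images as NumPy arrays and a boolean ocular mask from a segmentation algorithm run on the training data only, as the abstract describes; array conventions and function names are illustrative.

```python
# Sketch of the Deep-PRWIS "multi-class" augmentation idea.
import numpy as np

def swap_ocular(periocular_img: np.ndarray,
                donor_img: np.ndarray,
                ocular_mask: np.ndarray) -> np.ndarray:
    """Paste the donor's ocular region into the periocular image.

    ocular_mask: boolean HxW array, True inside the ocular globe.
    The returned sample keeps the ID of the periocular (background)
    subject as its training label.
    """
    mixed = periocular_img.copy()
    mixed[ocular_mask] = donor_img[ocular_mask]
    return mixed
```

    Feeding many such samples per periocular region, each with a different donor's ocular content under the same label, is what drives the CNN to treat ocular pixels as carrying no identity signal.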