
    Techniques for Ocular Biometric Recognition Under Non-ideal Conditions

    The use of the ocular region as a biometric cue has gained considerable traction due to recent advances in automated iris recognition. However, a multitude of factors can negatively impact ocular recognition performance under unconstrained conditions (e.g., non-uniform illumination, occlusions, motion blur, and low image resolution). This dissertation develops techniques to perform iris and ocular recognition under such challenging conditions. The first contribution is an image-level fusion scheme that improves iris recognition performance in low-resolution videos. Fusion is facilitated by the Principal Components Transform (PCT) and therefore requires only modest computational effort. The proposed approach improves recognition accuracy when low-resolution iris images are compared against high-resolution iris images. The second contribution is a study demonstrating the effectiveness of the ocular region in improving face recognition after plastic surgery. A score-level fusion approach that combines information from the face and ocular regions is proposed. Unlike previous methods for this application, the approach is not learning-based and has modest computational requirements, while achieving better recognition performance. The third contribution is a study on matching ocular regions extracted from RGB face images against those extracted from near-infrared iris images; face and iris images are typically acquired with sensors operating in the visible and near-infrared wavelengths of light, respectively. To bridge this spectral gap, a sparse representation approach is designed that learns a joint dictionary from corresponding pairs of face and iris images. The joint dictionary approach is observed to outperform classical ocular recognition techniques. In summary, the techniques presented in this dissertation can be used to improve iris and ocular recognition in practical, unconstrained environments.
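    A minimal sketch of the PCT-based image-level fusion described in the first contribution is given below, assuming the low-resolution frames are already co-registered; the function name and the exact fusion rule are illustrative, not the dissertation's pipeline.

import numpy as np

def pct_fuse_frames(frames: np.ndarray) -> np.ndarray:
    """Fuse a stack of co-registered frames (N, H, W) into a single image."""
    n, h, w = frames.shape
    X = frames.reshape(n, -1).astype(np.float64)       # (N, H*W) pixel vectors
    Xc = X - X.mean(axis=0)
    # Eigen-decomposition of the small N x N covariance (N << H*W).
    cov = Xc @ Xc.T / max(n - 1, 1)
    _, eigvecs = np.linalg.eigh(cov)                    # eigenvalues in ascending order
    w1 = eigvecs[:, -1]                                 # top principal direction
    if abs(w1.sum()) < 1e-8:
        w1 = np.full(n, 1.0 / n)                        # degenerate case: plain averaging
    else:
        w1 = w1 / w1.sum()                              # weights sum to one
    fused = (w1 @ X).reshape(h, w)                      # weighted combination of frames
    return np.clip(fused, 0, 255).astype(np.uint8)

# Example: fuse ten noisy low-resolution iris frames of the same subject.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(10, 64, 64)).astype(np.float64)
print(pct_fuse_frames(frames).shape)                    # (64, 64)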

    Deep-PRWIS: Periocular Recognition Without the Iris and Sclera Using Deep Learning Frameworks

    This work is based on a disruptive hypothesis for periocular biometrics: in visible-light data, recognition performance is optimized when the components inside the ocular globe (the iris and the sclera) are simply discarded and the recogniser's response is based exclusively on information from the surroundings of the eye. As the major novelty, we describe a processing chain based on convolutional neural networks (CNNs) that defines, in an implicit way, the regions of interest in the input data that should be privileged, i.e., without masking out any areas in the learning/test samples. Using an ocular segmentation algorithm exclusively on the learning data, we separate the ocular from the periocular parts. Then we produce a large set of "multi-class" artificial samples by interchanging the periocular and ocular parts from different subjects. These samples are used for data augmentation and feed the learning phase of the CNN, always taking as label the ID of the periocular part. This way, for every periocular region, the CNN receives multiple samples of different ocular classes, forcing it to conclude that such regions should not be considered in its response. During the test phase, samples are provided without any segmentation mask and the network naturally disregards the ocular components, which contributes to improved performance. Our experiments were carried out on the full versions of two widely known data sets (UBIRIS.v2 and FRGC) and show that the proposed method consistently advances the state-of-the-art performance in the closed-world setting, reducing the EERs by about 82% (UBIRIS.v2) and 85% (FRGC) and improving the Rank-1 accuracy by over 41% (UBIRIS.v2) and 12% (FRGC).
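    The augmentation strategy can be sketched as follows, assuming roughly aligned crops and a binary ocular mask produced by the segmentation stage; the helper below is illustrative rather than the paper's exact implementation.

import numpy as np

def swap_ocular_region(periocular_img: np.ndarray,
                       periocular_label: int,
                       donor_img: np.ndarray,
                       ocular_mask: np.ndarray):
    """Return an artificial training sample (image, label).

    periocular_img, donor_img : (H, W, 3) uint8 images, roughly aligned.
    ocular_mask               : (H, W) boolean mask of the ocular globe.
    The label is always the identity of the periocular donor, so the CNN
    learns that pixels inside the mask carry no identity signal.
    """
    sample = periocular_img.copy()
    sample[ocular_mask] = donor_img[ocular_mask]    # overwrite iris/sclera pixels
    return sample, periocular_label

# Example: augment one subject's image with ocular regions from other subjects.
rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)   # subject A (label 0)
others = [rng.integers(0, 256, (128, 128, 3), dtype=np.uint8) for _ in range(3)]
mask = np.zeros((128, 128), dtype=bool)
mask[48:80, 40:88] = True                                      # stand-in ocular mask
augmented = [swap_ocular_region(img_a, 0, donor, mask) for donor in others]
print(len(augmented), augmented[0][0].shape)                   # 3 (128, 128, 3)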