
    Robust Iris Segmentation Based on Fully Convolutional Networks and Generative Adversarial Networks

    The iris can be considered one of the most important biometric traits due to its high degree of uniqueness. Iris-based biometric applications depend mainly on iris segmentation, which is often not robust across different environments such as near-infrared (NIR) and visible (VIS) ones. In this paper, two approaches for robust iris segmentation, based on Fully Convolutional Networks (FCNs) and Generative Adversarial Networks (GANs), are described. Similar to a common convolutional network but without the fully connected layers (i.e., the classification layers), an FCN combines at its end pooling layers from different convolutional stages. Based on game theory, a GAN is designed as two networks competing with each other to generate the best segmentation. The proposed segmentation networks achieved promising results on all evaluated datasets of NIR images (BioSec, CasiaI3, CasiaT4, IITD-1) and of VIS images (NICE.I, CrEye-Iris and MICHE-I), in both non-cooperative and cooperative domains, outperforming the baseline techniques, which are the best reported so far in the literature, i.e., establishing a new state of the art for these datasets. Furthermore, we manually labeled 2,431 images from the CasiaT4, CrEye-Iris and MICHE-I datasets, making the masks available for research purposes.

    Comment: Accepted for presentation at the Conference on Graphics, Patterns and Images (SIBGRAPI) 201
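    The abstract's description of an FCN, which fuses score maps from different convolutional depths instead of using fully connected classification layers, can be sketched as follows. This is a minimal, hypothetical NumPy illustration (function names and shapes are assumptions, not the paper's implementation): a coarse low-resolution score map is upsampled and summed with a finer one, in the spirit of FCN skip fusion, then thresholded into a segmentation mask.

    ```python
    import numpy as np

    def upsample2x(score):
        # Nearest-neighbour upsampling stands in for the learned
        # transposed convolution a real FCN would use.
        return score.repeat(2, axis=0).repeat(2, axis=1)

    def fuse_fcn_scores(coarse, fine):
        """Fuse per-pixel iris scores from two network depths:
        upsample the coarse map and add the higher-resolution one."""
        return upsample2x(coarse) + fine

    coarse = np.zeros((4, 4))   # scores from a deep, low-resolution layer
    coarse[1:3, 1:3] = 1.0      # rough iris location
    fine = np.zeros((8, 8))     # scores from a shallower, higher-res layer
    fine[3, 3] = 0.5            # boundary detail only visible at this scale
    fused = fuse_fcn_scores(coarse, fine)
    mask = fused > 0.5          # final per-pixel iris/background decision
    ```

    The point of the fusion is that the coarse map localizes the iris while the fine map sharpens its boundary; a trained network would learn both, but the combination rule is the same.
    
    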

    Deep-PRWIS: Periocular Recognition Without the Iris and Sclera Using Deep Learning Frameworks

    This work is based on a disruptive hypothesis for periocular biometrics: in visible-light data, recognition performance is optimized when the components inside the ocular globe (the iris and the sclera) are simply discarded, and the recogniser's response is based exclusively on information from the surroundings of the eye. As a major novelty, we describe a processing chain based on convolutional neural networks (CNNs) that defines, in an implicit way, the regions of interest in the input data that should be privileged, i.e., without masking out any areas in the learning/test samples. By using an ocular segmentation algorithm exclusively on the learning data, we separate the ocular from the periocular parts. Then, we produce a large set of "multi-class" artificial samples by interchanging the periocular and ocular parts from different subjects. These samples are used for data augmentation and feed the learning phase of the CNN, always taking as label the ID of the periocular part. This way, for every periocular region, the CNN receives multiple samples of different ocular classes, forcing it to conclude that such regions should not be considered in its response. During the test phase, samples are provided without any segmentation mask and the network naturally disregards the ocular components, which contributes to improvements in performance. Our experiments were carried out on the full versions of two widely known datasets (UBIRIS.v2 and FRGC) and show that the proposed method consistently advances the state-of-the-art performance in the closed-world setting, reducing the EERs by about 82% (UBIRIS.v2) and 85% (FRGC) and improving Rank-1 by over 41% (UBIRIS.v2) and 12% (FRGC).
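    The augmentation step the abstract describes, pasting the ocular region of one subject into the periocular image of another while keeping the periocular subject's label, can be sketched as below. This is a hypothetical NumPy illustration under assumed inputs (grayscale images and a boolean ocular mask from the segmentation step); it is not the paper's code.

    ```python
    import numpy as np

    def swap_ocular(periocular_img, donor_img, ocular_mask):
        """Paste the donor's ocular region (iris + sclera) into the
        periocular image. The training label remains the ID of the
        subject who contributed the periocular part."""
        mixed = periocular_img.copy()
        mixed[ocular_mask] = donor_img[ocular_mask]
        return mixed

    rng = np.random.default_rng(0)
    img_a = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # subject A
    img_b = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # subject B
    mask = np.zeros((64, 64), dtype=bool)
    mask[24:40, 24:40] = True   # ocular-globe region found by segmentation

    sample = swap_ocular(img_a, img_b, mask)  # labelled as subject A
    ```

    Feeding the CNN many such samples, where the same periocular region appears with many different ocular contents but always the same label, is what pushes the network to ignore the ocular globe entirely.
    
    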