
    IRDO: Iris Recognition by Fusion of DTCWT and OLBP

    Iris biometric is a physiological trait of human beings. In this paper, we propose IRDO, an Iris Recognition method based on the fusion of Dual Tree Complex Wavelet Transform (DTCWT) and Overlapping Local Binary Pattern (OLBP) features. An eye image is preprocessed to extract the iris and obtain the Region of Interest (ROI). Complex wavelet features are extracted from the iris ROI using the DTCWT. OLBP is further applied on the ROI to generate features from the magnitude coefficients. The resultant features are generated by fusing the DTCWT and OLBP features using arithmetic addition. The Euclidean Distance (ED) is used to compare the test iris features with the database iris features to identify a person. It is observed that the Total Success Rate (TSR) and Equal Error Rate (EER) values of the proposed IRDO are better than those of the state-of-the-art techniques.
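
    The pipeline above amounts to: extract DTCWT magnitude features and LBP-style texture features from the iris ROI, fuse them by arithmetic (elementwise) addition, and match with Euclidean distance. A minimal sketch in Python follows, assuming the dtcwt and scikit-image packages, a plain per-pixel LBP as a stand-in for OLBP, and an illustrative fixed feature size; it is not the authors' implementation.

    import numpy as np
    import dtcwt
    from skimage.feature import local_binary_pattern
    from skimage.transform import resize

    def iris_features(roi_u8, nlevels=3, feat_shape=(64, 64)):
        """Fuse DTCWT magnitude features and LBP features by arithmetic addition."""
        # Complex wavelet features: average the 6 oriented highpass magnitudes
        # of the first DTCWT level.
        pyramid = dtcwt.Transform2d().forward(roi_u8.astype(np.float64), nlevels=nlevels)
        dtcwt_mag = np.abs(pyramid.highpasses[0]).mean(axis=-1)

        # Texture features: LBP computed at every pixel of the ROI (stand-in for OLBP).
        lbp = local_binary_pattern(roi_u8, P=8, R=1, method='uniform')

        # Bring both feature maps to a common size and fuse by elementwise addition.
        f1 = resize(dtcwt_mag, feat_shape, anti_aliasing=True).ravel()
        f2 = resize(lbp, feat_shape, anti_aliasing=True).ravel()
        return f1 + f2

    def euclidean_distance(f_test, f_db):
        return np.linalg.norm(f_test - f_db)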

    IRHDF: Iris Recognition using Hybrid Domain Features

    Iris biometric is a unique, noninvasive physiological trait of human beings that remains stable over a person's life. In this paper, we propose Iris Recognition using Hybrid Domain Features (IRHDF), namely the Dual Tree Complex Wavelet Transform (DTCWT) and the Overlapping Local Binary Pattern (OLBP). An eye image is preprocessed to extract the iris and obtain the Region of Interest (ROI), from which the complex wavelet features are extracted. OLBP is further applied on the ROI to generate features from the magnitude coefficients. The resultant features are generated by fusing the DTCWT and OLBP features using arithmetic addition. The Euclidean Distance (ED) is used to match the test iris image with the database iris features to recognize a person. We observe that the Equal Error Rate (EER) and Total Success Rate (TSR) values are better than those reported in [7].
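
    Since IRHDF shares its feature-extraction and fusion stages with IRDO above, the sketch below illustrates the matching and evaluation stage instead: nearest-neighbour identification by Euclidean distance and a simple threshold-sweep estimate of the EER. The threshold grid and score conventions are illustrative assumptions, not taken from the paper.

    import numpy as np

    def identify(test_feat, db_feats, db_labels):
        """Return the label of the database entry closest in Euclidean distance."""
        dists = np.linalg.norm(db_feats - test_feat, axis=1)
        return db_labels[int(np.argmin(dists))]

    def equal_error_rate(genuine_dists, impostor_dists):
        """Approximate the EER by sweeping a distance threshold over the score range."""
        upper = max(genuine_dists.max(), impostor_dists.max())
        best_gap, eer = 1.0, None
        for t in np.linspace(0.0, upper, 1000):
            frr = np.mean(genuine_dists > t)    # genuine pairs rejected at threshold t
            far = np.mean(impostor_dists <= t)  # impostor pairs accepted at threshold t
            if abs(far - frr) < best_gap:
                best_gap, eer = abs(far - frr), (far + frr) / 2
        return eer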

    On Generative Adversarial Network Based Synthetic Iris Presentation Attack And Its Detection

    The human iris is considered a reliable and accurate modality for biometric recognition due to its unique texture information. The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national identification projects. The extensive growth of iris recognition systems has raised apprehensions about the susceptibility of these systems to various presentation attacks. In this thesis, a novel iris presentation attack using deep learning based synthetically generated iris images is presented. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, a new framework, named iDCGAN, is proposed for creating realistic-appearing synthetic iris images. An in-depth analysis is performed using the quality score distributions of real and synthetically generated iris images to understand the effectiveness of the proposed approach. We also demonstrate that synthetically generated iris images can be used to attack existing iris recognition systems. As synthetically generated iris images can be effectively deployed in iris presentation attacks, it is important to develop accurate iris presentation attack detection algorithms which can distinguish such synthetic iris images from real iris images. For this purpose, a novel structural and textural feature-based iris presentation attack detection framework (DESIST) is proposed. The key emphasis of DESIST is on developing a unified framework for detecting a medley of iris presentation attacks, including synthetic iris. Experimental evaluations showcase the efficacy of the proposed DESIST framework in detecting synthetic iris presentation attacks.
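
    To make the generative step concrete, here is a minimal DCGAN-style generator in PyTorch that maps a latent vector to a synthetic grayscale image. The layer widths and the 64x64 output size are illustrative assumptions; this is a generic DCGAN sketch, not the iDCGAN architecture described in the thesis.

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        """Generic DCGAN-style generator: latent vector -> 1x64x64 image in [-1, 1]."""
        def __init__(self, latent_dim=100, base=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(latent_dim, base * 8, 4, 1, 0, bias=False),  # -> 4x4
                nn.BatchNorm2d(base * 8), nn.ReLU(True),
                nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),    # -> 8x8
                nn.BatchNorm2d(base * 4), nn.ReLU(True),
                nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),    # -> 16x16
                nn.BatchNorm2d(base * 2), nn.ReLU(True),
                nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),        # -> 32x32
                nn.BatchNorm2d(base), nn.ReLU(True),
                nn.ConvTranspose2d(base, 1, 4, 2, 1, bias=False),               # -> 64x64
                nn.Tanh(),
            )

        def forward(self, z):
            return self.net(z.view(z.size(0), -1, 1, 1))

    # Usage: g = Generator(); fake = g(torch.randn(8, 100))  # shape (8, 1, 64, 64)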

    An efficient multiscale scheme using local Zernike moments for face recognition

    In this study, we propose a face recognition scheme using local Zernike moments (LZM), which can be used for both identification and verification. In this scheme, local patches around the landmarks are extracted from the complex components obtained by the LZM transformation. Then, phase-magnitude histograms are constructed within these patches to create descriptors for face images. An image pyramid is utilized to extract features at multiple scales, and the descriptors are constructed for each image in this pyramid. We used three public datasets to examine the performance of the proposed method: Face Recognition Technology (FERET), Labeled Faces in the Wild (LFW), and Surveillance Cameras Face (SCface). The results revealed that the proposed method is robust against variations such as illumination, facial expression, and pose. Aside from this, it can be used for low-resolution face images acquired in uncontrolled environments or in the infrared spectrum. Experimental results show that our method outperforms state-of-the-art methods on the FERET and SCface datasets.
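
    The multi-scale descriptor construction can be sketched as follows: build a Gaussian image pyramid and, at each level, histogram the phase of a complex filter response weighted by its magnitude. In the sketch below a Gabor response stands in for the LZM complex components, and the landmark-centred patch extraction is omitted; it illustrates only the descriptor idea, not the authors' implementation.

    import numpy as np
    from skimage.filters import gabor
    from skimage.transform import pyramid_gaussian

    def phase_magnitude_histogram(image, bins=16):
        # Complex response (Gabor here as a stand-in for an LZM component).
        real, imag = gabor(image, frequency=0.25)
        magnitude = np.hypot(real, imag)
        phase = np.arctan2(imag, real)
        hist, _ = np.histogram(phase, bins=bins, range=(-np.pi, np.pi), weights=magnitude)
        return hist / (hist.sum() + 1e-12)  # normalised phase-magnitude histogram

    def multiscale_descriptor(image, levels=3, bins=16):
        # Concatenate histograms computed on each level of a Gaussian pyramid.
        pyramid = pyramid_gaussian(image, max_layer=levels - 1, downscale=2)
        return np.concatenate([phase_magnitude_histogram(level, bins) for level in pyramid])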

    Complex-valued Iris Recognition Network

    In this work, we design a complex-valued neural network for the task of iris recognition. Unlike the problem of general object recognition, where real-valued neural networks can be used to extract pertinent features, iris recognition depends on the extraction of both phase and amplitude information from the input iris texture in order to better represent its stochastic content. This necessitates the extraction and processing of phase information that cannot be effectively handled by a real-valued neural network. In this regard, we design a complex-valued neural network that can better capture the multi-scale, multi-resolution, and multi-orientation phase and amplitude features of the iris texture. We show a strong correspondence of the proposed complex-valued iris recognition network with the Gabor wavelets that are used to generate the classical IrisCode; however, the proposed method enables automatic complex-valued feature learning that is tailored for iris recognition. Experiments conducted on three benchmark datasets - ND-CrossSensor-2013, CASIA-Iris-Thousand and UBIRIS.v2 - show the benefit of the proposed network for the task of iris recognition. Further, the generalization capability of the proposed network is demonstrated by training and testing it across different datasets. Finally, visualization schemes are used to convey the type of features being extracted by the complex-valued network in comparison to classical real-valued networks. The results of this work are likely to be applicable in other domains where complex Gabor filters are used for texture modeling.
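
    The basic building block such a network needs is a complex-valued convolution, which can be realised with four real convolutions. A minimal PyTorch sketch is shown below; the interface and sizes are illustrative, and this is not the published network definition.

    import torch
    import torch.nn as nn

    class ComplexConv2d(nn.Module):
        """Complex convolution (W_r + i W_i) * (x_r + i x_i) via four real convolutions."""
        def __init__(self, in_ch, out_ch, kernel_size, **kw):
            super().__init__()
            self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)  # real part of kernel
            self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)  # imaginary part

        def forward(self, x_r, x_i):
            # (W_r + iW_i)(x_r + ix_i) = (W_r x_r - W_i x_i) + i(W_r x_i + W_i x_r)
            return (self.conv_r(x_r) - self.conv_i(x_i),
                    self.conv_r(x_i) + self.conv_i(x_r))

    # A real-valued iris image can be fed in as (x, torch.zeros_like(x)); the magnitude
    # and phase of the output then play the role of the amplitude/phase features above.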

    Signal-Level Information Fusion for Less Constrained Iris Recognition using Sparse-Error Low Rank Matrix Factorization

    Iris recognition systems working in less constrained environments, with the subject at-a-distance and on-the-move, suffer from noise and degradations in the iris captures. This noise and these degradations significantly deteriorate iris recognition performance. In this paper, we propose a novel signal-level information fusion method to mitigate the influence of noise and degradations for less constrained iris recognition systems. The proposed method is based on low rank approximation (LRA). Given multiple noisy captures of the same eye, we assume that: 1) the potential noiseless images lie in a low rank subspace and 2) the noise is spatially sparse. Based on these assumptions, we seek an LRA of the noisy captures to separate the noiseless images from the noise for information fusion. Specifically, we propose a sparse-error low rank matrix factorization model to perform the LRA, decomposing the noisy captures into a low rank component and a sparse error component. The low rank component estimates the potential noiseless images, while the error component models the noise. Then, the low rank and error components are utilized to perform signal-level fusion separately, producing two individually fused images. Finally, we combine the two fused images at the code level to produce one iris code as the final fusion result. Experiments on benchmark datasets demonstrate that the proposed signal-level fusion method achieves generally improved iris recognition performance in less constrained environments, in comparison with existing iris recognition algorithms, especially for iris captures with heavy noise and low quality.
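
    The low-rank-plus-sparse split at the heart of the method can be illustrated with a generic robust-PCA-style solver: stack the noisy captures as columns of a matrix D and decompose D into a low rank part L (the underlying clean images) and a sparse part S (the noise). The sketch below uses inexact augmented-Lagrangian updates with singular-value and soft thresholding; it is a stand-in for, not a reproduction of, the paper's sparse-error low rank matrix factorization model.

    import numpy as np

    def soft_threshold(X, tau):
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def low_rank_sparse_split(D, max_iter=200, tol=1e-7):
        m, n = D.shape
        lam = 1.0 / np.sqrt(max(m, n))      # standard robust-PCA weight
        mu = 1.25 / np.linalg.norm(D, 2)    # step size from the spectral norm
        norm_D = np.linalg.norm(D)          # Frobenius norm for the stopping rule
        S = np.zeros_like(D)
        Y = np.zeros_like(D)
        for _ in range(max_iter):
            # Low rank update: singular-value thresholding.
            U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
            L = (U * soft_threshold(sig, 1.0 / mu)) @ Vt
            # Sparse update: elementwise soft thresholding.
            S = soft_threshold(D - L + Y / mu, lam / mu)
            # Dual variable update.
            Y += mu * (D - L - S)
            if np.linalg.norm(D - L - S) / norm_D < tol:
                break
        return L, S

    # Usage: D = np.stack([img.ravel() for img in noisy_captures], axis=1)
    #        L, S = low_rank_sparse_split(D)  # columns of L are the denoised captures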

    IRINA: Iris Recognition (even) in Inaccurately Segmented Data

    The effectiveness of current iris recognition systems depends on the accurate segmentation and parameterisation of the iris boundaries, as failures at this point misalign the coefficients of the biometric signatures. This paper describes IRINA, an algorithm for Iris Recognition that is robust against INAccurately segmented samples, which makes it a good candidate to work in poor-quality data. The process is based on the concept of a "corresponding" patch between pairs of images, which is used to estimate the posterior probabilities that patches regard the same biological region, even in case of segmentation errors and non-linear texture deformations. Such information enables the inference of a free-form deformation field (2D registration vectors) between images, whose first- and second-order statistics provide effective biometric discriminating power. Extensive experiments were carried out on four datasets (CASIA-IrisV3-Lamp, CASIA-IrisV4-Lamp, CASIA-IrisV4-Thousand and WVU) and show that IRINA not only achieves state-of-the-art performance on good-quality data, but also handles effectively severe segmentation errors and large differences in pupillary dilation/constriction.
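
    As an illustration of the deformation-field idea, the sketch below estimates a dense 2D registration field between two normalised iris images and summarises it with first- and second-order statistics; Farneback optical flow (OpenCV) is used here as a stand-in for IRINA's patch-posterior-based registration, so this is only an approximation of the approach.

    import cv2
    import numpy as np

    def deformation_statistics(iris_a, iris_b):
        """Mean and covariance of the 2D registration vectors between two 8-bit images."""
        # Dense flow with pyr_scale=0.5, 3 pyramid levels, window 15, 3 iterations,
        # poly_n=5, poly_sigma=1.2, no flags.
        flow = cv2.calcOpticalFlowFarneback(iris_a, iris_b, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        vectors = flow.reshape(-1, 2)       # one (dx, dy) vector per pixel
        return vectors.mean(axis=0), np.cov(vectors, rowvar=False)

    # Per the abstract, these first- and second-order statistics of the field carry
    # the discriminating information used to decide genuine vs. impostor pairs.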