1,028 research outputs found

    A novel method for low-constrained iris boundary localization

    Iris recognition systems are strongly dependent on their segmentation processes, which have traditionally assumed rigid experimental constraints to achieve good performance but are now moving towards less constrained environments. This work presents a novel method for iris segmentation that covers the localization of the pupillary and limbic iris boundaries. The method consists of an energy minimization procedure posed as a multilabel, one-directional graph, followed by a model fitting process and the use of physiological priors. Accurate segmentations are achieved even in the presence of clutter, lenses, glasses, motion blur, and variable illumination. The contributions of this paper are a fast and reliable method for the accurate localization of the iris boundaries in low-constrained conditions, and a novel database for iris segmentation incorporating challenging iris images, which has been publicly released to the research community. The proposed method has been evaluated over three different databases, showing higher performance in comparison to traditional techniques.
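
    The energy-minimization step described above can be illustrated with a small sketch: unwrap the eye region to polar coordinates and treat each angular column as one layer of a one-directional, multilabel graph, so the boundary becomes a minimal-cost path found by dynamic programming. This only illustrates that class of formulation, not the paper's implementation; the array name polar_grad and the smoothness and penalty parameters are assumptions.

```python
import numpy as np

def polar_boundary_dp(polar_grad, smoothness=2, penalty=1.0):
    """Pick one radius per angular column of a polar-unwrapped gradient image
    so that the total (negative gradient + smoothness) cost is minimal.
    A minimal dynamic-programming sketch of a multilabel, one-directional
    boundary search; not the authors' exact formulation.

    polar_grad : 2-D array (n_radii x n_angles) of radial gradient magnitude.
    smoothness : maximum allowed radius jump between neighbouring columns.
    penalty    : cost per pixel of radius change (regularisation weight).
    """
    n_r, n_a = polar_grad.shape
    cost = -polar_grad.astype(float)           # strong edges -> low cost
    acc = cost[:, 0].copy()                    # accumulated cost, column 0
    back = np.zeros((n_r, n_a), dtype=int)     # backpointers

    for a in range(1, n_a):
        new_acc = np.full(n_r, np.inf)
        for r in range(n_r):
            lo, hi = max(0, r - smoothness), min(n_r, r + smoothness + 1)
            prev = acc[lo:hi] + penalty * np.abs(np.arange(lo, hi) - r)
            j = int(np.argmin(prev))
            new_acc[r] = prev[j] + cost[r, a]
            back[r, a] = lo + j
        acc = new_acc

    # Trace the cheapest path back from the last column.
    radii = np.zeros(n_a, dtype=int)
    radii[-1] = int(np.argmin(acc))
    for a in range(n_a - 1, 0, -1):
        radii[a - 1] = back[radii[a], a]
    return radii                               # one boundary radius per angle
```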

    Enhanced iris recognition: Algorithms for segmentation, matching and synthesis

    This thesis addresses the issues of segmentation, matching, fusion and synthesis in the context of irises and makes a four-fold contribution. The first contribution is a post-matching algorithm that observes the structure of the differences in feature templates to enhance recognition accuracy. The significance of the scheme is its robustness to inaccuracies in the iris segmentation process. Experimental results on the CASIA database indicate the efficacy of the proposed technique. The second contribution is a novel iris segmentation scheme that employs Geodesic Active Contours to extract the iris from the surrounding structures. The proposed scheme elicits the iris texture in an iterative fashion depending upon both the local and global conditions of the image. The performance of an iris recognition algorithm on both the WVU non-ideal and CASIA iris databases is observed to improve upon application of the proposed segmentation algorithm. The third contribution is the fusion of multiple instances of the same iris and of multiple iris units of the eye, i.e., the left and right irises, at the match score level. Using a simple sum rule, it is demonstrated that both multi-instance and multi-unit fusion can lead to a significant improvement in matching accuracy. The final contribution is a technique to create a large database of digital renditions of iris images that can be used to evaluate the performance of iris recognition algorithms. This scheme is implemented in two stages. In the first stage, a Markov Random Field model is used to generate a background texture representing the global iris appearance. In the next stage, a variety of iris features, viz., radial and concentric furrows, collarette and crypts, are generated and embedded in the texture field. Experimental results confirm the validity of the synthetic irises generated using this technique.
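
    The sum-rule fusion mentioned in the third contribution is simple enough to sketch directly: normalise the match scores from each source (each iris instance, or the left and right irises) and add them. The min-max normalisation below is an assumed choice made for illustration; the thesis may normalise differently.

```python
import numpy as np

def sum_rule_fusion(score_sets):
    """Fuse match scores from several iris samples (multi-instance) or from
    the left and right irises (multi-unit) with the simple sum rule.
    Scores are min-max normalised per source before summing; that
    normalisation is an assumption, not taken from the thesis.

    score_sets : list of 1-D arrays, one array of match scores per source.
    Returns the fused score for each comparison.
    """
    fused = np.zeros_like(np.asarray(score_sets[0], dtype=float))
    for scores in score_sets:
        s = np.asarray(scores, dtype=float)
        rng = s.max() - s.min()
        s = (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
        fused += s
    return fused

# e.g. fusing left- and right-iris match scores for the same comparisons:
# fused = sum_rule_fusion([left_iris_scores, right_iris_scores])
```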

    A hybrid method for accurate iris segmentation on at-a-distance visible-wavelength images

    This work describes a new hybrid method for accurate iris segmentation from full-face images, independently of the ethnicity of the subject. It is based on a combination of three methods: facial key-point detection, the integro-differential operator (IDO), and mathematical morphology. First, facial landmarks are extracted by means of the Chehra algorithm in order to obtain the eye location. Then, the IDO is applied to the extracted sub-image containing only the eye in order to locate the iris. Once the iris is located, a series of mathematical morphological operations is performed in order to accurately segment it. Results are obtained and compared among four different ethnicities (Asian, Black, Latino and White) as well as with two other iris segmentation algorithms. In addition, robustness against rotation, blurring and noise is also assessed. Our method obtains state-of-the-art performance and remains robust to small amounts of blur, noise and/or rotation. Furthermore, it is fast, accurate, and its code is publicly available. Fuentes-Hurtado, FJ.; Naranjo Ornedo, V.; Diego-Mas, JA.; Alcañiz Raya, ML. (2019). A hybrid method for accurate iris segmentation on at-a-distance visible-wavelength images. EURASIP Journal on Image and Video Processing (Online). 2019(1):1-14. https://doi.org/10.1186/s13640-019-0473-0
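
    The integro-differential operator (IDO) used in the second step searches over candidate circle centres and radii for the circle whose smoothed radial derivative of the circular contour integral of intensity is largest. The discretised sketch below only illustrates the operator; the sampling density, smoothing window and function names are assumptions, not the paper's implementation.

```python
import numpy as np

def integro_differential(image, centers, radii, n_angles=128, sigma=2.0):
    """Coarse, discretised sketch of Daugman's integro-differential operator:
    for candidate centres and radii, compute the mean intensity on each
    circle, take the radial derivative of that contour integral, smooth it,
    and keep the (centre, radius) with the strongest response. Parameter
    names and the simple box smoothing are assumptions for illustration.
    """
    best, best_resp = None, -np.inf
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for (cx, cy) in centers:
        means = []
        for r in radii:
            xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, image.shape[1] - 1)
            ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, image.shape[0] - 1)
            means.append(image[ys, xs].mean())     # contour integral / (2*pi*r)
        deriv = np.abs(np.diff(means))             # radial derivative
        k = max(1, int(sigma))                     # crude smoothing window
        smooth = np.convolve(deriv, np.ones(k) / k, mode="same")
        i = int(np.argmax(smooth))
        if smooth[i] > best_resp:
            best_resp, best = smooth[i], (cx, cy, radii[i])
    return best                                    # (x0, y0, r) of the located iris
```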

    Techniques for Ocular Biometric Recognition Under Non-ideal Conditions

    The use of the ocular region as a biometric cue has gained considerable traction due to recent advances in automated iris recognition. However, a multitude of factors can negatively impact ocular recognition performance under unconstrained conditions (e.g., non-uniform illumination, occlusions, motion blur, image resolution, etc.). This dissertation develops techniques to perform iris and ocular recognition under challenging conditions. The first contribution is an image-level fusion scheme to improve iris recognition performance in low-resolution videos. Information fusion is facilitated by the use of the Principal Components Transform (PCT), thereby requiring modest computational effort. The proposed approach provides improved recognition accuracy when low-resolution iris images are compared against high-resolution iris images. The second contribution is a study demonstrating the effectiveness of the ocular region in improving face recognition under plastic surgery. A score-level fusion approach that combines information from the face and ocular regions is proposed. The proposed approach, unlike previous methods in this application, is not learning-based and has modest computational requirements while resulting in better recognition performance. The third contribution is a study on matching ocular regions extracted from RGB face images against those extracted from near-infrared iris images. Face and iris images are typically acquired using sensors operating in visible and near-infrared wavelengths of light, respectively. To this end, a sparse representation approach which generates a joint dictionary from corresponding pairs of face and iris images is designed. The proposed joint dictionary approach is observed to outperform classical ocular recognition techniques. In summary, the techniques presented in this dissertation can be used to improve iris and ocular recognition in practical, unconstrained environments.
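
    The image-level fusion in the first contribution weights registered low-resolution frames by a Principal Components Transform. A minimal sketch of PCT-style fusion follows, assuming the frames are already aligned and taking the leading eigenvector of the inter-frame covariance as the weights; the dissertation's exact pipeline may differ.

```python
import numpy as np

def pct_fuse(frames):
    """Fuse several registered low-resolution iris frames into one image with
    a Principal Components Transform: weight each frame by the leading
    eigenvector of the inter-frame covariance matrix. A minimal sketch of
    PCT-style image-level fusion, not the dissertation's exact method;
    frames are assumed to be pre-aligned and of equal size.
    """
    X = np.stack([f.astype(float).ravel() for f in frames])  # n_frames x n_pixels
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / X.shape[1]                              # n_frames x n_frames
    vals, vecs = np.linalg.eigh(cov)
    w = vecs[:, -1]                                           # leading eigenvector
    w = np.abs(w) / np.abs(w).sum()                           # non-negative weights
    fused = (w[:, None] * X).sum(axis=0)
    return fused.reshape(frames[0].shape)
```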

    Measuring aberrations in lithographic projection systems with phase wheel targets

    A significant factor in the degradation of nanolithographic image fidelity is optical wavefront aberration. Aerial image sensitivity to aberrations is currently much greater than in earlier lithographic technologies, a consequence of increased resolution requirements. Optical wavefront tolerances are dictated by the dimensional tolerances of features printed, which require lens designs with a high degree of aberration correction. In order to increase lithographic resolution, lens numerical aperture (NA) must continue to increase and imaging wavelength must decrease. Not only do aberration magnitudes scale inversely with wavelength, but high-order aberrations increase at a rate proportional to NA² or greater, as do aberrations across the image field. Achieving lithographic-quality diffraction-limited performance from an optical system, where the relatively low image contrast is further reduced by aberrations, requires the development of highly accurate in situ aberration measurement. In this work, phase wheel targets are used to generate an optical image, which can then be used to both describe and monitor aberrations in lithographic projection systems. The use of lithographic images is critical in this approach, since it ensures that optical system measurements are obtained during the system's standard operation. A mathematical framework is developed that translates image errors into the Zernike polynomial representation, commonly used in the description of optical aberrations. The wavefront is decomposed into a set of orthogonal basis functions, and coefficients for the set are estimated from image-based measurements. A solution is deduced from multiple image measurements by using a combination of different image sets. Correlations between aberrations and phase wheel image characteristics are modeled based on physical simulation and statistical analysis. The approach uses a well-developed rigorous simulation tool to model significant aspects of lithography processes to assess how aberrations affect the final image. The aberration impact on resulting image shapes is then examined and approximations identified so the aberration computation can be made into a fast compact model form. Wavefront reconstruction examples are presented together with corresponding numerical results. The detailed analysis is given along with empirical measurements and a discussion of measurement capabilities. Finally, the impact of systematic errors in exposure tool parameters is measurable from empirical data and can be removed in the calibration stage of wavefront analysis.
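
    The coefficient estimation described above amounts to a linear fit once the wavefront is expressed in a Zernike basis: build a design matrix of basis terms evaluated at the measurement locations and solve a least-squares problem for the coefficients. The sketch below assumes this plain linear-model form for illustration; the thesis derives its own calibrated model from simulation and statistical analysis.

```python
import numpy as np

def fit_wavefront_coefficients(basis_matrix, measurements):
    """Estimate Zernike coefficients from image-based measurements with a
    linear least-squares fit. `basis_matrix` holds each Zernike polynomial
    evaluated at the measurement locations (one column per term), and
    `measurements` holds the corresponding image-derived wavefront samples.
    The linear-model form is an assumption made for illustration.
    """
    coeffs, residuals, rank, _ = np.linalg.lstsq(basis_matrix, measurements, rcond=None)
    return coeffs

# Example with the first few terms over a unit pupil (piston, tilt x, tilt y, defocus):
# rho, theta = ...measurement coordinates on the pupil...
# Z = np.column_stack([np.ones_like(rho),
#                      rho * np.cos(theta),
#                      rho * np.sin(theta),
#                      2 * rho**2 - 1])
# c = fit_wavefront_coefficients(Z, measured_wavefront)
```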

    Panoramic optical and near-infrared SETI instrument: optical and structural design concepts

    We propose a novel instrument design to greatly expand the current optical and near-infrared SETI search parameter space by monitoring the entire observable sky during all observable time. This instrument aims to search for technosignatures by detecting nano- to microsecond light pulses that could have been emitted, for instance, for the purpose of interstellar communications or energy transfer. We present an instrument conceptual design based upon an assembly of 198 refracting 0.5-m telescopes tessellating two geodesic domes. This design produces a regular layout of hexagonal collecting apertures that optimizes the instrument footprint, aperture diameter, instrument sensitivity and total field-of-view coverage. We also present the optical performance of some Fresnel lenses envisaged to develop a dedicated panoramic SETI (PANOSETI) observatory that will dramatically increase the sky area searched (π steradians per dome), wavelength range covered, number of stellar systems observed, interstellar space examined and duration of time monitored with respect to previous optical and near-infrared technosignature finders.
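
    As a rough check of the quoted numbers, the per-telescope field of view implied by covering π steradians per dome can be estimated as below, assuming the 198 apertures are split evenly between the two domes and tile the sky without overlap (both assumptions, not statements from the paper).

```python
import math

# Back-of-the-envelope estimate of the per-telescope field of view implied by
# the quoted design figures, under the assumptions stated in the text above.
telescopes_per_dome = 198 // 2                        # assumed even split: 99 per dome
solid_angle_per_dome = math.pi                        # steradians covered per dome
omega = solid_angle_per_dome / telescopes_per_dome    # solid angle per telescope

# Convert a circular field's solid angle to its half-angle:
# omega = 2*pi*(1 - cos(theta))
theta = math.acos(1.0 - omega / (2.0 * math.pi))
print(f"per-telescope field: {omega:.4f} sr, "
      f"~{math.degrees(2 * theta):.1f} deg full cone")
```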

    Methods for Ellipse Detection from Edge Maps of Real Images


    Human-Centric Machine Vision

    Recently, algorithms for processing visual information have evolved greatly, providing efficient and effective solutions to cope with the variability and complexity of real-world environments. These achievements have led to the development of Machine Vision systems that go beyond typical industrial applications, where environments are controlled and tasks are very specific, towards innovative solutions that address the everyday needs of people. Human-Centric Machine Vision can help to solve problems raised by the needs of our society, e.g. security and safety, health care, medical imaging, and human-machine interfaces. In such applications it is necessary to handle changing, unpredictable and complex situations, and to account for the presence of humans.