
    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.

    Deep Neural Network and Data Augmentation Methodology for off-axis iris segmentation in wearable headsets

    A data augmentation methodology is presented and applied to generate a large dataset of off-axis iris regions and to train a low-complexity deep neural network. Although of low complexity, the resulting network achieves a high level of accuracy in iris region segmentation for challenging off-axis eye patches. Interestingly, this network is also shown to achieve high levels of performance for regular, frontal segmentation of iris regions, comparing favorably with state-of-the-art techniques of significantly higher complexity. Owing to its lower complexity, this network is well suited for deployment in embedded applications such as augmented and mixed reality headsets.
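    The augmentation pipeline itself is not detailed in the abstract; as a rough illustration of one plausible ingredient, the sketch below (Python with OpenCV and NumPy; all names are hypothetical) synthesizes approximate off-axis views from a frontal eye patch by warping it with a rotation homography:

```python
import cv2
import numpy as np

def synthesize_off_axis(eye_patch, yaw_deg, pitch_deg, f=500.0):
    """Warp a frontal eye patch with a homography that approximates an
    off-axis camera viewpoint (illustrative augmentation only)."""
    h, w = eye_patch.shape[:2]
    # Intrinsics of a hypothetical camera centred on the patch.
    K = np.array([[f, 0, w / 2.0],
                  [0, f, h / 2.0],
                  [0, 0, 1.0]])
    yaw, pitch = np.deg2rad(yaw_deg), np.deg2rad(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    # Homography induced by a rotation about the camera centre: H = K R K^-1.
    H = K @ (Ry @ Rx) @ np.linalg.inv(K)
    return cv2.warpPerspective(eye_patch, H, (w, h), flags=cv2.INTER_LINEAR)

# Example: generate several synthetic off-axis views from one frontal patch.
# patch = cv2.imread("frontal_eye.png")
# views = [synthesize_off_axis(patch, yaw, pitch)
#          for yaw in (-30, -15, 15, 30) for pitch in (-10, 0, 10)]
```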

    Cursor control by point-of-regard estimation for a computer with integrated webcam

    This work forms part of the project Eye-Communicate funded by the Malta Council for Science and Technology through the National Research & Innovation Programme (2012) under Research Grant No. R&I-2012-057. The problem of eye-gaze tracking by video-oculography has received extensive interest over the years owing to the wide range of applications associated with this technology. Nonetheless, the emergence of a new paradigm, referred to as pervasive eye-gaze tracking, introduces new challenges that go beyond the typical conditions for which classical video-based eye-gaze tracking methods have been developed. In this paper, we address the problem of point-of-regard estimation from low-quality images acquired by an integrated camera inside a notebook computer. The proposed method detects the iris region in low-resolution eye region images by its intensity values rather than its shape, ensuring that the region can also be detected at different angles of rotation and under partial occlusion by the eyelids. Following the calculation of the point-of-regard from the estimated iris center coordinates, a number of Kalman filters improve upon the noisy point-of-regard estimates to smooth the trajectory of the mouse cursor on the monitor screen. Quantitative results obtained from a validation procedure reveal a low mean error that is within the footprint of the average on-screen icon.
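    As a rough illustration of the smoothing step, the sketch below applies a constant-velocity Kalman filter to noisy on-screen point-of-regard estimates (NumPy assumed; the state layout and noise parameters are illustrative, not those of the paper):

```python
import numpy as np

class PoRKalman:
    """Constant-velocity Kalman filter that smooths noisy on-screen
    point-of-regard (x, y) estimates (illustrative parameters)."""

    def __init__(self, dt=1 / 30.0, process_var=50.0, meas_var=400.0):
        # State vector: [x, y, vx, vy]
        self.x = np.zeros(4)
        self.P = np.eye(4) * 1e3
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * process_var
        self.R = np.eye(2) * meas_var

    def update(self, measured_xy):
        # Predict the next state from the motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the new (noisy) point-of-regard measurement.
        z = np.asarray(measured_xy, dtype=float)
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]  # smoothed cursor position

# kf = PoRKalman()
# smoothed = [kf.update(p) for p in noisy_por_estimates]
```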

    Improving Iris Recognition through Quality and Interoperability Metrics

    The ability to identify individuals based on their iris is known as iris recognition. Over the past decade iris recognition has garnered much attention because of its strong performance in comparison with other mainstream biometrics such as fingerprint and face recognition. The performance of iris recognition systems is driven by application scenario requirements. Standoff distance, subject cooperation, underlying optics, and illumination are a few examples of these requirements, which dictate the nature of the images an iris recognition system has to process. Traditional iris recognition systems, dubbed "stop and stare", operate under highly constrained conditions. This ensures that the captured image is of sufficient quality so that the success of the subsequent processing stages (segmentation, encoding, and matching) is not compromised. When acquisition constraints are relaxed, such as for surveillance or iris on the move, the fidelity of these subsequent processing steps degrades. In this dissertation we propose a multi-faceted framework for mitigating the difficulties associated with non-ideal iris images. We develop and investigate a comprehensive iris image quality metric that is predictive of iris matching performance. The metric is composed of photometric measures such as defocus, motion blur, and illumination, but also contains domain-specific measures such as occlusion and gaze angle. These measures are then combined through a fusion rule based on Dempster-Shafer theory. Related to iris segmentation, which is arguably one of the most important tasks in iris recognition, we develop metrics that evaluate the precision of the pupil and iris boundaries. Furthermore, we illustrate three methods which take advantage of the proposed segmentation metrics for rectifying incorrect segmentation boundaries. Finally, we look at the issue of iris image interoperability and demonstrate that techniques from the field of hardware fingerprinting can be utilized to improve iris matching performance when images captured from distinct sensors are involved.
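    As an illustration of how such a fusion rule operates, the sketch below combines hypothetical mass functions from two quality measures over the frame {good, poor} using Dempster's rule of combination; the frame, measure names and mass values are invented for the example and are not taken from the dissertation:

```python
from functools import reduce

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions over the frame
    {'good', 'poor'}; subsets are encoded as frozensets (illustrative)."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    # Normalise by the non-conflicting mass.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

GOOD, POOR = frozenset({"good"}), frozenset({"poor"})
THETA = GOOD | POOR  # the whole frame, i.e. ignorance

# Hypothetical masses produced by two quality measures (e.g. defocus, occlusion).
m_defocus = {GOOD: 0.7, POOR: 0.1, THETA: 0.2}
m_occlusion = {GOOD: 0.5, POOR: 0.3, THETA: 0.2}

fused = reduce(dempster_combine, [m_defocus, m_occlusion])
print(fused)  # combined belief that the iris sample is usable
```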

    On-screen point-of-regard estimation under natural head movement for a computer with integrated webcam

    Recent developments in the field of eye-gaze tracking by vidoeoculography indicate a growing interest towards unobtrusive tracking in real-life scenarios, a new paradigm referred to as pervasive eye-gaze tracking. Among the challenges associated with this paradigm, the capability of a tracking platform to integrate well into devices with in-built imaging hardware and to permit natural head movement during tracking is of importance in less constrained scenarios. The work presented in this paper builds on our earlier work, which addressed the problem of estimating on-screen point-of-regard from iris center movements captured by an integrated camera inside a notebook computer, by proposing a method to approximate the head movements in conjunction with the iris movements in order to alleviate the requirement for a stationary head pose. Following iris localization by an appearance-based method, linear mapping functions for the iris and head movement are computed during a brief calibration procedure permitting the image information to be mapped to a point-of-regard on the monitor screen. Following the calculation of the point-of-regard as a function of the iris and head movement, separate Kalman filters improve upon the noisy point-of-regard estimates to smoothen the trajectory of the mouse cursor on the monitor screen. Quantitative and qualitative results obtained from two validation procedures reveal an improvement in the estimation accuracy under natural head movement, over our previous results achieved from earlier work.peer-reviewe
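    A brief calibration that fits linear mapping functions can be sketched as a least-squares problem; the feature layout below (iris-centre offset plus a head-position estimate) is illustrative and not necessarily the paper's exact formulation:

```python
import numpy as np

def fit_gaze_mapping(features, screen_points):
    """Least-squares fit of a linear (affine) mapping from per-frame features
    (e.g. iris-centre offset and a head-position estimate) to on-screen
    coordinates collected during calibration (illustrative)."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias term
    # Solve X @ W ≈ screen_points in the least-squares sense.
    W, *_ = np.linalg.lstsq(X, screen_points, rcond=None)
    return W

def map_to_screen(feature, W):
    """Map one feature vector to an on-screen point-of-regard estimate."""
    return np.append(feature, 1.0) @ W

# Hypothetical calibration: each feature is [iris_dx, iris_dy, head_dx, head_dy]
# recorded while the user fixates known on-screen calibration targets.
# W = fit_gaze_mapping(calib_features, calib_targets)
# por = map_to_screen(current_feature, W)
```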

    A hybrid method for accurate iris segmentation on at-a-distance visible-wavelength images

    This work describes a new hybrid method for accurate iris segmentation from full-face images, independently of the ethnicity of the subject. It is based on a combination of three methods: facial key-point detection, the integro-differential operator (IDO) and mathematical morphology. First, facial landmarks are extracted by means of the Chehra algorithm in order to obtain the eye location. Then, the IDO is applied to the extracted sub-image containing only the eye in order to locate the iris. Once the iris is located, a series of mathematical morphological operations is performed in order to accurately segment it. Results are obtained and compared among four different ethnicities (Asian, Black, Latino and White), as well as against two other iris segmentation algorithms. In addition, robustness against rotation, blurring and noise is also assessed. Our method obtains state-of-the-art performance and proves robust to small amounts of blur, noise and/or rotation. Furthermore, it is fast, accurate, and its code is publicly available.
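    The exact sequence of morphological operations is not given in the abstract; the sketch below shows one plausible post-processing step of this kind, cleaning a rough binary iris mask with OpenCV morphology and keeping the largest connected component (illustrative only, not the authors' operator sequence):

```python
import cv2
import numpy as np

def refine_iris_mask(rough_mask, open_radius=3, close_radius=7):
    """Clean a rough binary iris mask with morphological opening/closing
    and keep the largest connected component (illustrative sketch)."""
    mask = (rough_mask > 0).astype(np.uint8) * 255
    k_open = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (2 * open_radius + 1,) * 2)
    k_close = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                        (2 * close_radius + 1,) * 2)
    # Opening removes small noise blobs; closing fills gaps left by
    # eyelashes or specular reflections inside the iris region.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, k_open)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, k_close)
    # Keep only the largest component, assumed to be the iris.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return mask
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```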

    An efficient framework for visible-infrared cross modality person re-identification

    Visible-infrared cross-modality person re-identification (VI-ReId) is an essential task for video surveillance in poorly illuminated or dark environments. Despite many recent studies on person re-identification in the visible domain (ReId), there are few studies dealing specifically with VI-ReId. Besides challenges that are common to both ReId and VI-ReId, such as pose/illumination variations, background clutter and occlusion, VI-ReId has additional challenges because color information is not available in infrared images. As a result, the performance of VI-ReId systems is typically lower than that of ReId systems. In this work, we propose a four-stream framework to improve VI-ReId performance. We train a separate deep convolutional neural network in each stream using a different representation of the input images, expecting different and complementary features to be learned from each stream. In our framework, grayscale and infrared input images are used to train the ResNet in the first stream. In the second stream, RGB and three-channel infrared images (created by repeating the infrared channel) are used. In the remaining two streams, we use local pattern maps as input images; these maps are generated using a local Zernike moments transformation. Local pattern maps are obtained from grayscale and infrared images in the third stream, and from RGB and three-channel infrared images in the last stream. We further improve the performance of the proposed framework by employing a re-ranking algorithm for post-processing. Our results indicate that the proposed framework outperforms the current state of the art by a large margin, improving Rank-1/mAP by 29.79%/30.91% on the SYSU-MM01 dataset and by 9.73%/16.36% on the RegDB dataset.
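    The abstract does not spell out how the four streams are combined at test time; one common approach, sketched below with NumPy, is to compute a cross-modal distance matrix per stream and fuse them by a weighted sum before re-ranking. This is purely illustrative of the multi-stream idea, not the authors' exact procedure:

```python
import numpy as np

def cross_modal_distances(query_feats, gallery_feats):
    """Pairwise Euclidean distances between query (e.g. infrared) and
    gallery (e.g. visible) feature matrices produced by one stream."""
    q2 = (query_feats ** 2).sum(1, keepdims=True)
    g2 = (gallery_feats ** 2).sum(1, keepdims=True).T
    d2 = q2 + g2 - 2.0 * query_feats @ gallery_feats.T
    return np.sqrt(np.maximum(d2, 0.0))

def fuse_streams(stream_distances, weights=None):
    """Fuse per-stream distance matrices by a (weighted) sum; a re-ranking
    step would operate on the fused matrix afterwards (illustrative)."""
    if weights is None:
        weights = [1.0] * len(stream_distances)
    return sum(w * d for w, d in zip(weights, stream_distances))

# Hypothetical usage with four streams (grayscale/IR, RGB/3-channel IR,
# and their local-pattern-map counterparts), each producing features:
# dists = [cross_modal_distances(q_feats[s], g_feats[s]) for s in range(4)]
# fused = fuse_streams(dists)
# rank1 = (fused.argmin(1) == true_gallery_index).mean()
```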

    Hyperspectral Data Acquisition and Its Application for Face Recognition

    Current face recognition systems face serious challenges in uncontrolled conditions: e.g., unrestrained lighting, pose variations, accessories, etc. Hyperspectral imaging (HI) is typically employed to counter many of these challenges by incorporating the spectral information within different bands. Although numerous methods based on hyperspectral imaging have been developed for face recognition with promising results, three fundamental challenges remain: 1) low signal-to-noise ratios and low intensity values in the bands of the hyperspectral image, particularly near the blue bands; 2) the high dimensionality of hyperspectral data; and 3) inter-band misalignment (IBM) correlated with subject motion during data acquisition. This dissertation concentrates mainly on addressing these challenges in HI. First, to address the low quality of the hyperspectral bands, we utilize a custom light source that has more radiant power at shorter wavelengths, and properly adjust camera exposure times to compensate for the lower transmittance of the filter and the lower radiant power of our light source. Second, the high dimensionality of spectral data imposes limitations on numerical analysis, creating a demand for robust data compression techniques that discard less relevant information in order to manage real spectral data. To cope with this, we describe a reduced-order data modeling technique based on local proper orthogonal decomposition, which computes low-dimensional models by projecting high-dimensional clusters onto subspaces spanned by local reduced-order bases. Third, we investigate 11 leading alignment approaches to address IBM correlated with subject motion during data acquisition. To overcome the limitations of the considered alignment approaches, we propose an accurate alignment approach (A3) that incorporates the strengths of point correspondence and a low-rank model. In addition, we develop two qualitative prediction models to assess the alignment quality of hyperspectral images and to determine the best alignment among the conducted approaches. Finally, we show that the proposed alignment approach leads to a promising improvement in the face recognition performance of a probabilistic linear discriminant analysis approach.
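    A local proper orthogonal decomposition of this kind can be sketched by clustering the spectral samples and computing a truncated SVD basis per cluster (NumPy and scikit-learn assumed; cluster count, rank and function names are illustrative, not the dissertation's exact method):

```python
import numpy as np
from sklearn.cluster import KMeans

def local_pod_bases(spectra, n_clusters=4, rank=8, seed=0):
    """Cluster spectral samples, then build a truncated POD (SVD) basis per
    cluster; projecting onto these local bases yields low-dimensional
    representations (illustrative sketch)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(spectra)
    models = {}
    for c in range(n_clusters):
        X = spectra[labels == c]
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        models[c] = (mean, Vt[:rank])  # local mean + leading modes
    return km, models

def project(sample, km, models):
    """Assign a sample to its cluster and return its low-dimensional
    coefficients in that cluster's reduced-order basis."""
    c = int(km.predict(sample[None, :])[0])
    mean, basis = models[c]
    return c, basis @ (sample - mean)

# spectra: (n_samples, n_bands) hyperspectral measurements (hypothetical).
# km, models = local_pod_bases(spectra)
# cluster_id, coeffs = project(spectra[0], km, models)
```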