
    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed to bring out their strengths and weaknesses, and to identify limitations, within the context of pervasive eye-gaze tracking, that the computer vision community has yet to consider. (Peer reviewed)

    Cursor control by point-of-regard estimation for a computer with integrated webcam

    This work forms part of the project Eye-Communicate, funded by the Malta Council for Science and Technology through the National Research & Innovation Programme (2012) under Research Grant No. R&I-2012-057. The problem of eye-gaze tracking by video-oculography has received extensive interest throughout the years owing to the wide range of applications associated with this technology. Nonetheless, the emergence of a new paradigm, referred to as pervasive eye-gaze tracking, introduces new challenges that go beyond the typical conditions for which classical video-based eye-gaze tracking methods have been developed. In this paper, we address the problem of point-of-regard estimation from low-quality images acquired by an integrated camera inside a notebook computer. The proposed method detects the iris region in low-resolution eye-region images by its intensity values rather than its shape, ensuring that this region can also be detected at different angles of rotation and under partial occlusion by the eyelids. Following the calculation of the point-of-regard from the estimated iris-centre coordinates, a number of Kalman filters refine the noisy point-of-regard estimates to smooth the trajectory of the mouse cursor on the monitor screen. Quantitative results obtained from a validation procedure reveal a low mean error that is within the footprint of the average on-screen icon. (Peer reviewed)
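    The abstract does not give the filter design or noise parameters, but the cursor-smoothing idea can be sketched as a standard Kalman filter over the noisy point-of-regard stream. A minimal illustration, assuming a constant-velocity state model and illustrative covariances (not the authors' actual configuration):

    ```python
    import numpy as np

    def smooth_gaze(points, dt=1.0, process_var=1e-3, meas_var=4.0):
        # Constant-velocity Kalman filter over 2D point-of-regard
        # measurements; state = [x, y, vx, vy]. Parameter values are
        # illustrative assumptions, not taken from the paper.
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)
        Q = process_var * np.eye(4)   # process noise
        R = meas_var * np.eye(2)      # measurement noise
        x = np.array([points[0][0], points[0][1], 0.0, 0.0])
        P = np.eye(4) * 10.0
        smoothed = []
        for z in points:
            # Predict the next state from the motion model
            x = F @ x
            P = F @ P @ F.T + Q
            # Correct with the noisy point-of-regard measurement
            innov = np.asarray(z, dtype=float) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ innov
            P = (np.eye(4) - K @ H) @ P
            smoothed.append((x[0], x[1]))
        return smoothed
    ```

    Feeding the raw estimates through this filter yields a visibly steadier cursor trajectory than the raw measurements alone.
    
    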

    A novel approach to a wearable eye tracker using region-based gaze estimation

    Eye tracking studies are useful to understand human behavior and reactions to visual stimuli. To conduct experiments in natural environments it is common to use mobile or wearable eye trackers. To ensure these systems do not interfere with the natural behavior of the subject during the experiment, they should be comfortable and able to collect information about the subject's point of gaze for long periods of time. Most existing mobile eye trackers are costly and complex. Furthermore, they partially obstruct the visual field of the subject by placing the eye camera directly in front of the eye. These systems are not suitable for natural outdoor environments because external ambient light interferes with the infrared illumination used to facilitate gaze estimation. To address these limitations, a new eye tracking system was developed and analyzed. The new system was designed to be light and unobtrusive. It has two high-definition cameras mounted onto headgear worn by the subject and two mirrors placed outside the visual field of the subject to capture eye images. Based on the angular perspective of the eye, a novel gaze estimation algorithm was designed and optimized to estimate the gaze of the subject in one of nine possible directions. Several methods were developed to compromise between shape-based models and appearance-based models. The eye model and features were chosen based on their correlation with the different gaze directions. The performance of this eye tracking system was then experimentally evaluated based on the accuracy of gaze estimation and the weight of the system.
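    The abstract does not specify how the nine directions are decided, but one common way to frame such region-based estimation is to quantise the iris-centre offset into a 3x3 grid. A hypothetical sketch, assuming normalised offsets and an arbitrary band threshold:

    ```python
    def gaze_direction(dx, dy, thresh=0.33):
        # dx, dy: iris-centre offset from the eye-region centre,
        # normalised to [-1, 1]. The threshold splitting each axis into
        # three bands is an illustrative assumption, not from the paper.
        col = 0 if dx < -thresh else (2 if dx > thresh else 1)
        row = 0 if dy < -thresh else (2 if dy > thresh else 1)
        labels = [["up-left",   "up",     "up-right"],
                  ["left",      "centre", "right"],
                  ["down-left", "down",   "down-right"]]
        return labels[row][col]
    ```

    For example, a small offset in both axes maps to "centre", while a large positive horizontal offset maps to "right".
    
    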

    On-screen point-of-regard estimation under natural head movement for a computer with integrated webcam

    Recent developments in the field of eye-gaze tracking by video-oculography indicate a growing interest towards unobtrusive tracking in real-life scenarios, a new paradigm referred to as pervasive eye-gaze tracking. Among the challenges associated with this paradigm, the capability of a tracking platform to integrate well into devices with in-built imaging hardware and to permit natural head movement during tracking is of importance in less constrained scenarios. The work presented in this paper builds on our earlier work, which addressed the problem of estimating the on-screen point-of-regard from iris-centre movements captured by an integrated camera inside a notebook computer, by proposing a method to approximate the head movements in conjunction with the iris movements in order to alleviate the requirement for a stationary head pose. Following iris localization by an appearance-based method, linear mapping functions for the iris and head movement are computed during a brief calibration procedure, permitting the image information to be mapped to a point-of-regard on the monitor screen. Following the calculation of the point-of-regard as a function of the iris and head movement, separate Kalman filters refine the noisy point-of-regard estimates to smooth the trajectory of the mouse cursor on the monitor screen. Quantitative and qualitative results obtained from two validation procedures reveal an improvement in estimation accuracy under natural head movement over our earlier results. (Peer reviewed)
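    The linear mapping functions computed during calibration can be illustrated as a least-squares fit from measured iris/head features to known on-screen calibration targets. A minimal sketch, assuming an affine mapping and a generic feature vector (the paper's exact features and fitting procedure are not given here):

    ```python
    import numpy as np

    def fit_gaze_mapping(features, screen_points):
        # features: (N, d) iris-centre and head-movement measurements
        # collected while the user fixates N calibration targets.
        # screen_points: (N, 2) known on-screen target coordinates.
        X = np.hstack([features, np.ones((len(features), 1))])  # affine term
        W, *_ = np.linalg.lstsq(X, screen_points, rcond=None)
        return W  # (d + 1, 2) mapping matrix

    def map_to_screen(W, feature):
        # Map one feature vector to an on-screen point-of-regard.
        return np.append(feature, 1.0) @ W
    ```

    After calibration, each new measurement is pushed through `map_to_screen` and could then be smoothed by the Kalman filters described in the abstract.
    
    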

    A Novel Authentication Method Using Multi-Factor Eye Gaze

    A method for novel, rapid and robust one-step multi-factor authentication of a user is presented, employing multi-factor eye gaze. The mobile environment presents challenges that render the conventional password model obsolete. The primary goal is to offer an authentication method that competitively replaces the password, while offering improved security and usability. This method and apparatus combine the smooth operation of biometric authentication with the protection of knowledge-based authentication to robustly authenticate a user and secure information on a mobile device, in a manner that is easily used and requires no external hardware. This work demonstrates a solution comprising a pupil segmentation algorithm, gaze estimation, and an innovative application that allows a user to authenticate using gaze as the interaction medium.

    Video-based iris feature extraction and matching using Deep Learning

    This research was initiated to enhance a video-based eye tracker's performance in detecting small eye movements. Chaudhary and Pelz (2019) [1] created an excellent foundation with their motion tracking of iris features to detect small eye movements, in which they successfully used classical handcrafted feature extraction methods such as the Scale Invariant Feature Transform (SIFT) to match features across iris image frames. They extracted features from the eye-tracking videos and then used a patented approach [2] of tracking the geometric median of the feature distribution. This approach [2] excludes outliers, and the velocity is approximated by scaling by the sampling rate. To detect microsaccades (small, rapid fixational eye movements), thresholding of the estimated velocity was used in [1]. Our goal is to create a robust mathematical model of the 2D feature distribution described in the patent [2]. To this end, we worked in two steps. First, we studied a large number of recent deep learning approaches, along with classical hand-crafted feature extractors such as SIFT, to extract features from the eye tracker videos collected at the Multidisciplinary Vision Research Lab (MVRL), and then identified the best matching process for the RIT-Eyes dataset [3]. The goal is to make the feature extraction as robust as possible. Secondly, we show that deep learning methods can detect more feature points in the iris images, and that frame-by-frame matching of the extracted features is more accurate than with the classical approach.
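    Whatever extractor produces the descriptors (SIFT or a deep network), the frame-to-frame matching step is typically a nearest-neighbour search with Lowe's ratio test. A minimal, library-free sketch of that matching stage, assuming descriptors are already available as arrays (the papers' actual pipelines are not reproduced here):

    ```python
    import numpy as np

    def match_descriptors(d1, d2, ratio=0.75):
        # d1: (N, k) descriptors from frame t; d2: (M, k) from frame t+1.
        # Lowe's ratio test: accept a match only if the best neighbour is
        # clearly closer than the second best, which suppresses ambiguous
        # correspondences. The 0.75 ratio is a common default, assumed here.
        matches = []
        for i, d in enumerate(d1):
            dists = np.linalg.norm(d2 - d, axis=1)
            order = np.argsort(dists)
            if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
                matches.append((i, int(order[0])))
        return matches
    ```

    The displacement of the matched points between consecutive frames is what the velocity-based microsaccade detection then thresholds.
    
    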

    Pupil Localisation and Eye Centre Estimation using Machine Learning and Computer Vision

    Various methods have been used to estimate the pupil location within an image or a real-time video frame in many fields. However, these methods lack performance, specifically in low-resolution images and under varying background conditions. We propose a coarse-to-fine pupil localisation method using a composite of machine learning and image processing algorithms. First, a pre-trained model is employed for facial landmark identification to extract the desired eye-frames within the input image. We then use multi-stage convolution to find the optimal horizontal and vertical coordinates of the pupil within the identified eye-frames. For this purpose, we define an adaptive kernel to deal with the varying resolution and size of input images. Furthermore, a dynamic threshold is calculated recursively for reliable identification of the best-matched candidate. We evaluated our method using various statistical and standard metrics, along with a standardized distance metric introduced for the first time in this study. The proposed method outperforms previous works in terms of accuracy and reliability when benchmarked on multiple standard datasets. The work has diverse artificial intelligence and industrial applications, including human-computer interfaces, emotion recognition, psychological profiling, healthcare and automated deception detection.
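    The coarse stage of such a pipeline can be illustrated by scoring each position in the eye-frame with a disk-shaped darkness kernel, since the pupil is usually the darkest roughly circular region. A simplified, brute-force sketch under that assumption (the paper's adaptive kernel and recursive threshold are not reproduced):

    ```python
    import numpy as np

    def locate_pupil(gray, radius=5):
        # gray: 2D greyscale eye-frame. Scores every valid position by the
        # summed darkness under a disk kernel and returns the best (row, col).
        # The fixed kernel radius is an illustrative stand-in for the
        # adaptive kernel described in the abstract.
        yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        disk = (xx**2 + yy**2 <= radius**2).astype(float)
        darkness = float(gray.max()) - gray.astype(float)  # invert intensities
        h, w = gray.shape
        best_score, best_pos = -1.0, (radius, radius)
        for r in range(radius, h - radius):
            for c in range(radius, w - radius):
                patch = darkness[r - radius:r + radius + 1,
                                 c - radius:c + radius + 1]
                score = float((patch * disk).sum())
                if score > best_score:
                    best_score, best_pos = score, (r, c)
        return best_pos
    ```

    A finer stage would then refine this coarse estimate within a small neighbourhood, e.g. with sub-pixel interpolation or a smaller kernel.
    
    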