
    Directional Sensitivity of Gaze-Collinearity Features in Liveness Detection

    Get PDF
    To increase the trust in using face recognition systems, these need to be capable of differentiating between face images captured from a real person and those captured from photos or similar artifacts presented at the sensor. Methods have been published for face liveness detection that measure the gaze of a user while the user tracks an object on the screen that appears randomly at pre-defined places. In this paper we explore the sensitivity of such a system to different stimulus alignments. The aim is to establish whether there is such sensitivity and, if so, to explore how it may be exploited to improve the design of the stimulus. The results suggest that collecting feature points along the horizontal direction is more effective than along the vertical direction for liveness detection.
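
    As a minimal sketch of the kind of feature the abstract describes (not the authors' implementation; the function name, PCA-based score, and sample points are all illustrative), one could score how collinear a set of gaze points is while the stimulus moves along one axis:

```python
import numpy as np

def collinearity_score(points: np.ndarray) -> float:
    """Return a 0..1 score for how collinear a set of 2-D gaze points is.

    Uses PCA: for perfectly collinear points the smaller eigenvalue of the
    covariance matrix is zero, so 1 - (lambda_min / lambda_max) approaches 1.
    """
    pts = points - points.mean(axis=0)            # centre the points
    cov = np.cov(pts.T)                           # 2x2 covariance matrix
    eigvals = np.linalg.eigvalsh(cov)             # ascending: [lambda_min, lambda_max]
    if eigvals[1] == 0:                           # degenerate: all points identical
        return 0.0
    return 1.0 - eigvals[0] / eigvals[1]

# Hypothetical gaze samples captured while the stimulus moved along one axis;
# a live user tracking the dot should yield a high score, a photo attack not.
horizontal_gaze = np.array([[0.1, 0.50], [0.3, 0.52], [0.6, 0.49], [0.9, 0.51]])
vertical_gaze   = np.array([[0.50, 0.1], [0.52, 0.3], [0.49, 0.6], [0.51, 0.9]])

print(collinearity_score(horizontal_gaze))
print(collinearity_score(vertical_gaze))
```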

    Learning to Personalize in Appearance-Based Gaze Tracking

    Full text link
    Personal variations severely limit the performance of appearance-based gaze tracking. Adapting to these variations using standard neural network model adaptation methods is difficult, with problems ranging from overfitting, due to small amounts of training data, to underfitting, due to restrictive model architectures. We tackle these problems by introducing the SPatial Adaptive GaZe Estimator (SPAZE). By modeling personal variations as a low-dimensional latent parameter space, SPAZE provides just enough adaptability to capture the range of personal variations without being prone to overfitting. Calibrating SPAZE for a new person reduces to solving a small optimization problem. SPAZE achieves an error of 2.70 degrees with 9 calibration samples on MPIIGaze, improving on the state of the art by 14%. We contribute to gaze tracking research by empirically showing that personal variations are well modeled by a 3-dimensional latent parameter space for each eye. We show that this low dimensionality is expected by examining model-based approaches to gaze tracking. We also show that accurate head-pose-free gaze tracking is possible.
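
    The calibration idea can be sketched in a few lines of PyTorch. This is not the SPAZE architecture; GazeNet, its feature dimension, and the optimizer settings are stand-in assumptions. The point it illustrates is the one the abstract makes: the network weights stay frozen, and calibrating to a new person means optimizing only a low-dimensional personal latent vector over a handful of samples.

```python
import torch

# Hypothetical stand-in for a trained gaze network: image features plus a
# per-person 3-D latent vector are mapped to a 2-D gaze direction.
class GazeNet(torch.nn.Module):
    def __init__(self, feat_dim=128, latent_dim=3):
        super().__init__()
        self.head = torch.nn.Linear(feat_dim + latent_dim, 2)

    def forward(self, features, z):
        z_rep = z.expand(features.shape[0], -1)    # broadcast latent to the batch
        return self.head(torch.cat([features, z_rep], dim=1))

def calibrate(net, features, targets, latent_dim=3, steps=200, lr=0.1):
    """Fit only the personal latent z; the network weights stay frozen."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(net(features, z), targets)
        loss.backward()
        opt.step()
    return z.detach()

net = GazeNet()
feats = torch.randn(9, 128)   # e.g. 9 calibration samples, as in the paper
gaze = torch.randn(9, 2)      # dummy gaze targets for illustration
z_person = calibrate(net, feats, gaze)
```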

    Gaze-tracking-based interface for robotic chair guidance

    Get PDF
    This research focuses on enhancing the quality of life of wheelchair users, specifically by applying a gaze-tracking-based interface to the guidance of a robotized wheelchair. The interface was applied in two different approaches to the wheelchair control system. The first was an assisted control in which the user was continuously involved in controlling the movement of the wheelchair in the environment and the inclination of the different parts of the seat, through the user's gaze and eye blinks obtained with the interface. The second approach took the first steps towards an autonomous wheelchair control in which the wheelchair moves autonomously, avoiding collisions, towards the position defined by the user. To this end, the basis for obtaining the gaze position relative to the wheelchair and for object detection was developed in this project, so that the optimal route along which the wheelchair should move can be calculated in the future. The integration of a robotic arm into the wheelchair to manipulate different objects was also considered: this work identifies, among the detected objects, the object of interest indicated by the user's gaze, so that in the future the robotic arm could select and pick up the object the user wants to manipulate. In addition to the two approaches, an attempt was also made to estimate the user's gaze without the software interface. For this purpose, the gaze is obtained from pupil detection libraries, a calibration, and a mathematical model that relates pupil positions to gaze. The results of these implementations are analysed in this work, including some limitations encountered, and future improvements are proposed with the aim of increasing the independence of wheelchair users.
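
    The abstract does not specify its mathematical model, but a common choice for relating pupil positions to gaze is a second-order polynomial map fitted by least squares from calibration targets. The sketch below assumes that form; the coordinates and target layout are illustrative.

```python
import numpy as np

def poly_features(p):
    """Second-order polynomial terms of a normalized pupil position (x, y)."""
    x, y = p
    return np.array([1.0, x, y, x * y, x**2, y**2])

def fit_pupil_to_gaze(pupil_pts, screen_pts):
    """Least-squares fit of a polynomial map from pupil to screen coordinates."""
    A = np.array([poly_features(p) for p in pupil_pts])           # (N, 6)
    coeffs, *_ = np.linalg.lstsq(A, np.array(screen_pts), rcond=None)
    return coeffs                                                 # (6, 2)

def estimate_gaze(coeffs, pupil_pt):
    return poly_features(pupil_pt) @ coeffs

# Hypothetical calibration: the user fixates known screen targets while the
# pupil centre is recorded (e.g. from a pupil-detection library). In practice
# more targets (e.g. a 3x3 grid) would be used for a well-posed fit.
pupil = [(0.40, 0.45), (0.60, 0.45), (0.40, 0.60),
         (0.60, 0.60), (0.50, 0.52), (0.50, 0.40)]
screen = [(0, 0), (1920, 0), (0, 1080), (1920, 1080), (960, 540), (960, 100)]
coeffs = fit_pupil_to_gaze(pupil, screen)
print(estimate_gaze(coeffs, (0.55, 0.50)))
```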

    Fast and Accurate Algorithm for Eye Localization for Gaze Tracking in Low Resolution Images

    Full text link
    Iris centre localization in low-resolution visible images is a challenging problem in the computer vision community due to noise, shadows, occlusions, pose variations, eye blinks, etc. This paper proposes an efficient method for determining the iris centre in low-resolution images in the visible spectrum, so that even low-cost consumer-grade webcams can be used for gaze tracking without any additional hardware. A two-stage algorithm is proposed for iris centre localization that uses the geometrical characteristics of the eye. In the first stage, a fast convolution-based approach is used to obtain a coarse location of the iris centre (IC). The IC location is then refined in the second stage using boundary tracing and ellipse fitting. The algorithm has been evaluated on public databases such as BioID and Gi4E and is found to outperform state-of-the-art methods.
    Comment: 12 pages, 10 figures, IET Computer Vision, 201
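
    The two-stage pattern can be sketched with standard OpenCV primitives. This is not the authors' exact filters or boundary tracer, just an illustration of the coarse-then-refine structure: a dark-disc convolution to locate the iris roughly, then a contour traced around it and an ellipse fitted to refine the centre.

```python
import cv2
import numpy as np

def coarse_iris_centre(eye_gray, radius=10):
    """Stage 1: convolve with a disc template; the iris is the darkest
    roughly circular region, so the response peaks near its centre."""
    k = 2 * radius + 1
    kernel = np.zeros((k, k), np.float32)
    cv2.circle(kernel, (radius, radius), radius, 1.0, -1)
    kernel /= kernel.sum()
    response = cv2.filter2D(255 - eye_gray.astype(np.float32), -1, kernel)
    _, _, _, max_loc = cv2.minMaxLoc(response)
    return max_loc                                    # (x, y) coarse estimate

def refine_iris_centre(eye_gray, coarse, win=25):
    """Stage 2: trace the iris boundary near the coarse estimate and fit an
    ellipse; the ellipse centre is the refined iris centre."""
    x, y = coarse
    x0, y0 = max(x - win, 0), max(y - win, 0)
    roi = eye_gray[y0:y + win, x0:x + win]
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return coarse
    boundary = max(contours, key=cv2.contourArea)
    if len(boundary) < 5:                             # fitEllipse needs >= 5 points
        return coarse
    (cx, cy), _, _ = cv2.fitEllipse(boundary)
    return (int(cx) + x0, int(cy) + y0)

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)     # hypothetical eye-region crop
centre = refine_iris_centre(eye, coarse_iris_centre(eye))
```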

    A Novel approach to a wearable eye tracker using region-based gaze estimation

    Get PDF
    Eye tracking studies are useful for understanding human behavior and reactions to visual stimuli. To conduct experiments in natural environments it is common to use mobile or wearable eye trackers. To ensure these systems do not interfere with the natural behavior of the subject during the experiment, they should be comfortable and able to collect information about the subject's point of gaze for long periods of time. Most existing mobile eye trackers are costly and complex. Furthermore, they partially obstruct the visual field of the subject by placing the eye camera directly in front of the eye, and they are not suitable for natural outdoor environments because external ambient light interferes with the infrared illumination used to facilitate gaze estimation. To address these limitations, a new eye tracking system was developed and analyzed. The new system was designed to be light and unobtrusive. It has two high-definition cameras mounted onto headgear worn by the subject and two mirrors placed outside the visual field of the subject to capture eye images. Based on the angular perspective of the eye, a novel gaze estimation algorithm was designed and optimized to estimate the gaze of the subject in one of nine possible directions. Several methods were developed to compromise between shape-based models and appearance-based models, and the eye model and features were chosen based on their correlation with the different gaze directions. The performance of this eye tracking system was then experimentally evaluated based on the accuracy of gaze estimation and the weight of the system.
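
    A crude geometric stand-in for the nine-direction output (not the paper's correlation-based feature selection) is to split the eye region into a 3x3 grid and report which cell the iris centre falls in; the box and coordinates below are made up for illustration:

```python
import numpy as np

DIRECTIONS = np.array([
    ["up-left",   "up",     "up-right"],
    ["left",      "centre", "right"],
    ["down-left", "down",   "down-right"],
])

def gaze_direction(iris_xy, eye_box):
    """Map the iris centre to one of nine gaze regions by splitting the
    eye bounding box (x, y, w, h) into a 3x3 grid."""
    x0, y0, w, h = eye_box
    col = min(int(3 * (iris_xy[0] - x0) / w), 2)   # clamp to the last column
    row = min(int(3 * (iris_xy[1] - y0) / h), 2)   # clamp to the last row
    return DIRECTIONS[row, col]

print(gaze_direction((85, 42), eye_box=(40, 30, 60, 30)))  # -> "right"
```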

    Dummy eye measurements of microsaccades: testing the influence of system noise and head movements on microsaccade detection in a popular video-based eye tracker

    Get PDF
    Whereas early studies of microsaccades predominantly relied on custom-built eye trackers and manual tagging of microsaccades, more recent work tends to use video-based eye tracking and automated algorithms for microsaccade detection. While data from these newer studies suggest that microsaccades can be reliably detected with video-based systems, this has not been systematically evaluated. I here present a method and data examining microsaccade detection in an often-used video-based system (the Eyelink II system) with a commonly used detection algorithm (Engbert & Kliegl, 2003; Engbert & Mergenthaler, 2006). Recordings from human participants were compared with recordings obtained from a pair of dummy eyes mounted on a pair of glasses worn either by a human participant (i.e., with head motion) or by a dummy head (no head motion). Three experiments were conducted. The first experiment suggests that when microsaccade measurements use the pupil detection mode, microsaccade detections in the absence of eye movements are sparse in the absence of head movements, but frequent with head movements (despite the use of a chin rest). A second experiment demonstrates that with measurements that rely on a combination of corneal reflection and pupil detection, false microsaccade detections can be largely avoided as long as a binocular criterion is used. A third experiment examines whether past results may have been affected by incorrect detections due to small head movements. It shows that despite the many detections due to head movements, the typical modulation of microsaccade rate after stimulus onset is found only when recording from the participants' eyes.
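
    The cited detection algorithm (Engbert & Kliegl, 2003) is velocity-based: gaze positions are converted to smoothed velocities, an elliptic threshold is set at a multiple of a median-based velocity standard deviation per axis, and supra-threshold runs are taken as microsaccades. The NumPy sketch below follows that recipe, with a simple temporal-overlap implementation of the binocular criterion the abstract refers to; parameter values (lambda = 6, minimum duration of 3 samples) are conventional choices, not taken from this paper.

```python
import numpy as np

def velocities(pos, dt):
    """Smoothed velocity (Engbert & Kliegl, 2003): 5-sample moving window.
    pos is an (N, 2) array of horizontal/vertical gaze positions."""
    v = np.zeros_like(pos)
    v[2:-2] = (pos[4:] + pos[3:-1] - pos[1:-3] - pos[:-4]) / (6 * dt)
    return v

def detect(pos, dt, lam=6.0, min_len=3):
    """Return (start, end) sample-index pairs where velocity exceeds an
    elliptic threshold of lam median-based standard deviations per axis."""
    v = velocities(pos, dt)
    sigma = np.sqrt(np.median(v**2, axis=0) - np.median(v, axis=0)**2)
    radius = lam * sigma
    outside = (v[:, 0] / radius[0])**2 + (v[:, 1] / radius[1])**2 > 1
    events, start = [], None
    for i, flag in enumerate(outside):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    if start is not None and len(outside) - start >= min_len:
        events.append((start, len(outside)))
    return events

def binocular(left_events, right_events):
    """Keep only events that overlap in time in both eyes - the criterion
    the abstract reports as suppressing false detections."""
    return [(ls, le) for ls, le in left_events
            if any(ls < re and rs < le for rs, re in right_events)]
```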