
    A model-based gaze-tracking system


    A Novel approach to a wearable eye tracker using region-based gaze estimation

    Eye tracking studies are useful for understanding human behavior and reactions to visual stimuli. To conduct experiments in natural environments, it is common to use mobile or wearable eye trackers. To ensure these systems do not interfere with the subject's natural behavior during the experiment, they should be comfortable and able to collect information about the subject's point of gaze for long periods of time. Most existing mobile eye trackers are costly and complex. Furthermore, they partially obstruct the subject's visual field by placing the eye camera directly in front of the eye. These systems are not suitable for natural outdoor environments because external ambient light interferes with the infrared illumination used to facilitate gaze estimation. To address these limitations, a new eye tracking system was developed and analyzed. The new system was designed to be light and unobtrusive. It has two high-definition cameras mounted on headgear worn by the subject and two mirrors placed outside the subject's visual field to capture eye images. Based on the angular perspective of the eye, a novel gaze estimation algorithm was designed and optimized to estimate the subject's gaze in one of nine possible directions. Several methods were developed as a compromise between shape-based models and appearance-based models. The eye model and features were chosen based on their correlation with the different gaze directions. The performance of this eye tracking system was then evaluated experimentally based on gaze estimation accuracy and the weight of the system.
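The nine-direction scheme described above can be illustrated with a minimal sketch: a nearest-centroid classifier over a normalized pupil-center offset, assuming per-direction centroids are learned during a calibration phase. The centroid values and direction names below are illustrative assumptions, not the thesis's actual model or features.

```python
# Hypothetical sketch of nine-direction gaze classification from a
# normalized pupil-center offset (x, y). Centroids are assumed to come
# from a calibration step; the values here are placeholders.
import math

CENTROIDS = {
    "up-left": (-1.0, 1.0),    "up": (0.0, 1.0),     "up-right": (1.0, 1.0),
    "left": (-1.0, 0.0),       "center": (0.0, 0.0), "right": (1.0, 0.0),
    "down-left": (-1.0, -1.0), "down": (0.0, -1.0),  "down-right": (1.0, -1.0),
}

def classify_gaze(offset):
    """Return the direction whose calibrated centroid is nearest."""
    x, y = offset
    return min(CENTROIDS, key=lambda d: math.hypot(x - CENTROIDS[d][0],
                                                   y - CENTROIDS[d][1]))
```

In practice the offset would be derived from the eye model's features; any correlation-based feature selection would happen before this final nearest-centroid step.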

    Implementing a Gaze Tracking Algorithm for Improving Advanced Driver Assistance Systems

    Car accidents are one of the top ten causes of death and are produced mainly by driver distraction. ADAS (Advanced Driver Assistance Systems) can warn the driver of dangerous scenarios, improving road safety and reducing the number of traffic accidents. However, a system that continuously sounds alarms can be overwhelming or confusing, or both, and can be counterproductive. Using the driver's attention to build an efficient ADAS is the main contribution of this work. To obtain this "attention value", the use of gaze tracking is proposed. The driver's gaze direction is a crucial factor in understanding fatal distractions, as well as in discerning when it is necessary to warn the driver about risks on the road. In this paper, a real-time gaze tracking system is proposed as part of the development of an ADAS that obtains and communicates the driver's gaze information. The developed ADAS uses gaze information to determine whether drivers are looking at the road with their full attention. This work takes a step forward in driver-centered ADAS, building an ADAS that warns the driver only in case of distraction. The gaze tracking system was implemented as a model-based system using a Kinect v2.0 sensor; it was adjusted in a set-up environment and tested in a driving simulation environment with suitable features. The average results obtained are promising, with hit ratios between 81.84% and 96.37%. This work has been supported by the Spanish Government under projects TRA2016-78886-C3-1-R, PID2019-104793RB-C31, RTI2018-096036-B-C22, PEAVAUTO-CM-UC3M and by the Region of Madrid Excellence Program (EPUC3M17).
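The warn-only-on-distraction idea can be sketched as a simple rule: raise a warning only when the driver's gaze has been off the road region for longer than a tolerance window. This is an illustrative sketch, not the paper's implementation; the zone names and the two-second threshold are assumptions.

```python
# Illustrative sketch: trigger an ADAS warning only when gaze has been
# off the road region for longer than a tolerance window, instead of
# sounding continuous alarms. Zones and threshold are assumed values.
ON_ROAD = {"front", "front-left", "front-right"}
DISTRACTION_LIMIT_S = 2.0  # assumed tolerance before warning

def should_warn(gaze_log):
    """gaze_log: list of (timestamp_s, zone) samples, oldest first."""
    off_road_since = None
    warn = False
    for t, zone in gaze_log:
        if zone in ON_ROAD:
            off_road_since = None       # gaze back on the road: reset
        elif off_road_since is None:
            off_road_since = t          # start of an off-road interval
        elif t - off_road_since >= DISTRACTION_LIMIT_S:
            warn = True                 # sustained distraction detected
    return warn
```

A real system would feed this from the per-frame gaze direction estimated by the model-based tracker, and would likely also weight zones by the current driving context.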

    Statistical Methods to Measure Reading Progression Using Eye-Gaze Fixation Points

    In this thesis, we investigate methods to accurately track reading progression by analyzing eye-gaze fixation points, using commercially available eye tracking devices and without the imposition of unnatural movement constraints. In order to obtain the most accurate eye-gaze fixation point data possible, the current state of the art relies on expensive, cumbersome apparatuses. Eye-gaze tracking using less expensive hardware, and without constraints imposed on the individual whose gaze is being tracked, results in less reliable, noise-corrupted data that proves difficult to interpret. Extending the accessibility of accurate reading progression tracking beyond its current limits, and enabling its feasibility in a real-world, constraint-free environment, will enable a multitude of future functionalities for educational, enterprise, and consumer technologies. We first discuss the "Line Detection System" (LDS), a Kalman filter and hidden Markov model based algorithm designed to infer from noisy data the line of text associated with each eye-gaze fixation point reported every few milliseconds during reading. This system is shown to yield an average line detection accuracy of 88.1%. Next, we discuss a "Horizontal Saccade Tracking System" (HSTS), which aims to track horizontal progression within each line, using a least squares approach to filter out noise. Finally, we discuss a novel "Slip-Kalman" filter custom designed to track the progression of reading. This method improves upon the original LDS, performing at an average line detection accuracy of 97.8%, and offers advanced capability in horizontal tracking compared to the HSTS. The performance of each method is demonstrated using 25 pages worth of data collected during reading.
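The core idea behind line detection from noisy fixations can be sketched in miniature: smooth the noisy vertical fixation coordinate with a one-dimensional Kalman filter, then snap each smoothed estimate to the nearest line of text. This is a simplified illustration under assumed parameters, not the thesis's tuned LDS (which additionally uses a hidden Markov model over line transitions).

```python
# Minimal sketch of the smoothing step behind line detection: a 1-D
# Kalman filter over vertical fixation positions, then snapping to the
# nearest text line. Noise parameters and line spacing are assumptions.

LINE_HEIGHT = 30.0  # assumed pixel spacing between lines of text

def kalman_smooth(ys, q=1.0, r=100.0):
    """1-D Kalman filter: process noise q, measurement noise r."""
    x, p = ys[0], r                 # initialize state at first sample
    out = []
    for y in ys:
        p += q                      # predict: uncertainty grows
        k = p / (p + r)             # Kalman gain
        x += k * (y - x)            # update with measurement y
        p *= (1 - k)
        out.append(x)
    return out

def detect_lines(ys):
    """Map smoothed vertical positions to 0-indexed text lines."""
    return [round(y / LINE_HEIGHT) for y in kalman_smooth(ys)]
```

The full LDS couples this kind of filtering with an HMM whose states are text lines, so that line transitions (return sweeps) are inferred jointly rather than by independent per-sample rounding.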

    Benefits of temporal information for appearance-based gaze estimation

    State-of-the-art appearance-based gaze estimation methods, usually based on deep learning techniques, mainly rely on static features. However, the temporal trace of eye gaze contains useful information for estimating a given gaze point. For example, approaches leveraging sequential eye gaze information are showing promising results when applied to remote or low-resolution image scenarios with off-the-shelf cameras. The magnitude of the contribution from the temporal gaze trace is still unclear for higher-resolution, higher-frame-rate imaging systems, in which more detailed information about the eye is captured. In this paper, we investigate whether temporal sequences of eye images, captured using a high-resolution, high-frame-rate head-mounted virtual reality system, can be leveraged to enhance the accuracy of an end-to-end appearance-based deep-learning model for gaze estimation. Performance is compared against a static-only version of the model. Results demonstrate statistically significant benefits of temporal information, particularly for the vertical component of gaze. Comment: In ACM Symposium on Eye Tracking Research & Applications (ETRA), 202