
    Idiosyncratic Feature-Based Gaze Mapping

    It is argued that the polynomial expressions normally used in remote, video-based, low-cost eye tracking systems are not always ideal for accommodating individual differences in eye cleft, position of the eye in the socket, corneal bulge, astigmatism, etc. A procedure is proposed to identify the set of polynomial expressions that provides the best possible accuracy for a specific individual. It is also proposed that regression coefficients be recalculated in real time, based on a subset of calibration points in the region of the current gaze, and that a real-time correction be applied based on the offsets from calibration targets close to the estimated point of regard. It was found that if no correction is applied, the choice of polynomial is critically important to achieve even barely acceptable accuracy. Previously identified polynomial sets were confirmed to provide good results in the absence of any correction procedure. With real-time correction, the accuracy of any given polynomial improves while the choice of polynomial becomes less critical. Identifying the best polynomial set per participant, in combination with the aforementioned correction techniques, led to an average error of 0.32° (sd = 0.10°) over 134 participant recordings. The proposed improvements could lead to low-cost systems that are accurate and fast enough for reading research and other studies where high accuracy is expected at frame rates in excess of 200 Hz.
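    As a rough illustration of the kind of mapping the abstract describes, the sketch below fits a second-order polynomial from pupil coordinates to screen coordinates and then re-fits it with weights concentrated on calibration points near the current gaze estimate. The feature set, the Gaussian weighting, and the `sigma` parameter are illustrative assumptions, not the paper's actual polynomial sets or correction procedure.

```python
import numpy as np

def poly_features(px, py):
    """Second-order polynomial terms of pupil coordinates (one common choice;
    the paper searches over many candidate polynomial sets)."""
    px, py = np.atleast_1d(px), np.atleast_1d(py)
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

def fit_gaze_mapping(calib_pupil, calib_targets, weights=None):
    """Least-squares fit of polynomial coefficients mapping pupil -> screen."""
    X = poly_features(calib_pupil[:, 0], calib_pupil[:, 1])
    y = np.asarray(calib_targets, dtype=float)
    if weights is not None:
        sw = np.sqrt(weights)[:, None]  # weighted least squares
        X, y = X * sw, y * sw
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # shape (n_terms, 2): one column per screen axis

def estimate_gaze(pupil_xy, calib_pupil, calib_targets, sigma=0.1):
    """Locally re-weighted estimate: calibration points near a rough gaze
    estimate get higher weight, loosely mimicking the paper's real-time
    recalculation of regression coefficients (sigma is a free parameter)."""
    X = poly_features(pupil_xy[0], pupil_xy[1])
    rough = (X @ fit_gaze_mapping(calib_pupil, calib_targets))[0]
    d = np.linalg.norm(calib_targets - rough, axis=1)
    w = np.exp(-(d / sigma) ** 2)  # Gaussian weighting is an assumption
    local = fit_gaze_mapping(calib_pupil, calib_targets, weights=w)
    return (X @ local)[0]
```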

    Perceptual Visibility Model for Temporal Contrast Changes in Periphery

    Modeling perception is critical for many applications and developments in computer graphics to optimize and evaluate content generation techniques. Most of the work to date has focused on central (foveal) vision. However, this is insufficient for novel wide-field-of-view display devices, such as virtual and augmented reality headsets. Furthermore, the perceptual models proposed for the fovea do not readily extend to the off-center, peripheral visual field, where human perception is drastically different. In this paper, we focus on modeling the temporal aspect of visual perception in the periphery. We present new psychophysical experiments that measure the sensitivity of human observers to different spatio-temporal stimuli across a wide field of view. We use the collected data to build a perceptual model for the visibility of temporal changes at different eccentricities in complex video content. Finally, we discuss, demonstrate, and evaluate several problems that can be addressed using our technique. First, we show how our model enables injecting new content into the periphery without distracting the viewer, and we discuss the link between the model and human attention. Second, we demonstrate how foveated rendering methods can be evaluated and optimized to limit the visibility of temporal aliasing.
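    The paper's model is fitted to psychophysical data that the abstract does not reproduce, so nothing below is the authors' model. As a purely illustrative toy, one could threshold a temporal-contrast map by a sensitivity that is assumed to fall off with eccentricity:

```python
import numpy as np

def visible_change(temporal_contrast, eccentricity_deg, s0=1.0, k=0.05):
    """Toy predictor (NOT the paper's fitted model): assume sensitivity to
    temporal contrast falls off as 1/(1 + k*eccentricity); a change is
    flagged visible where contrast times sensitivity exceeds threshold 1.
    s0 and k are made-up constants standing in for fitted parameters."""
    sensitivity = s0 / (1.0 + k * np.asarray(eccentricity_deg))
    return np.asarray(temporal_contrast) * sensitivity > 1.0
```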

    Improving eye-tracking calibration accuracy using symbolic regression

    Eye tracking systems have recently seen a diversity of novel calibration procedures, including smooth pursuit and vestibulo-ocular reflex based calibrations. These approaches allow more data to be collected than the standard 9-point calibration. However, the computation of the mapping function that converts input pupil features into planar gaze positions is mostly based on polynomial regression, and little work has investigated alternative approaches. This paper fills that gap with a new calibration computation method based on symbolic regression. Instead of making prior assumptions about the polynomial transfer function between input and output records, symbolic regression seeks an optimal model among different types of functions and their combinations, which offers an interesting perspective in terms of flexibility and accuracy. We therefore designed two experiments in which we collected ground-truth data to compare vestibulo-ocular and smooth pursuit calibrations based on symbolic regression, each using either a marker or a finger as the target, resulting in four different calibrations. As a result, we improved calibration accuracy by more than 30%, with reasonable extra computation time.
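    One way to experiment with this idea, as a sketch rather than the paper's implementation: the open-source gplearn package provides a genetic-programming SymbolicRegressor that searches over combinations of elementary functions. gplearn models are single-output, so a separate model is fitted per screen axis; the function set and hyperparameters below are arbitrary choices, and the variable names are illustrative.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # one off-the-shelf option

def fit_symbolic_calibration(pupil_features, targets):
    """pupil_features: (n_samples, n_features) pupil descriptors collected
    during a smooth-pursuit or VOR calibration; targets: (n_samples, 2)
    on-screen target positions."""
    models = []
    for axis in range(2):  # one single-output regressor per screen axis
        sr = SymbolicRegressor(
            population_size=1000,
            generations=20,
            function_set=('add', 'sub', 'mul', 'div', 'sqrt', 'sin', 'cos'),
            parsimony_coefficient=0.001,  # penalize overly complex formulas
            random_state=0,
        )
        sr.fit(pupil_features, targets[:, axis])
        models.append(sr)
    return models

def predict_gaze(models, pupil_features):
    """Evaluate the two evolved formulas to get planar gaze positions."""
    return np.column_stack([m.predict(pupil_features) for m in models])
```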

    Saccade Landing Point Prediction Based on Fine-Grained Learning Method

    The landing point of a saccade defines the new fixation region, the new region of interest. We asked whether it is possible to predict the saccade landing point early in this very fast eye movement. This work proposes a new algorithm, based on LSTM networks and a fine-grained loss function, for saccade landing point prediction in real-world scenarios. Predicting the landing point is a critical milestone toward reducing the problems caused by display-update latency in gaze-contingent systems, which make real-time changes in the display based on eye tracking. Saccadic eye movements are among the fastest human neuro-motor activities, with angular velocities of up to 1,000°/s. We present a comprehensive analysis of the performance of our method using a database of almost 220,000 saccades from 75 participants, captured during natural viewing of videos, and include a comparison with state-of-the-art saccade landing point prediction algorithms. Our proposed method outperformed existing approaches, with error reductions of up to 50%. Finally, we analyzed factors that affect prediction error, including saccade duration and length, age, and user-intrinsic characteristics.

    This work was supported in part by the Project BIBECA through MINECO/FEDER under Grant RTI2018-101248-B-100, in part by the Jose Castillejo Program through MINECO under Grant CAS17/00117, and in part by the National Institutes of Health (NIH) under Grant P30EY003790 and Grant R21EY023724.
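    A minimal sketch of the general idea, not the authors' architecture: an LSTM consumes the first samples of an in-flight saccade and regresses the 2-D landing point. The layer sizes are arbitrary, and a plain MSE loss stands in for the paper's fine-grained loss, which is not reproduced here.

```python
import torch
import torch.nn as nn

class LandingPointLSTM(nn.Module):
    """Sketch only: regress a saccade's landing point from its first samples."""
    def __init__(self, input_size=2, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, partial_trajectory):
        # partial_trajectory: (batch, time, 2) gaze samples from saccade onset
        _, (h, _) = self.lstm(partial_trajectory)
        return self.head(h[-1])  # predicted (x, y) landing point

model = LandingPointLSTM()
loss_fn = nn.MSELoss()  # placeholder for the paper's fine-grained loss
pred = model(torch.randn(8, 10, 2))      # 8 saccades, first 10 samples each
loss = loss_fn(pred, torch.randn(8, 2))  # dummy targets, for illustration only
```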

    Gaze-tracking-based interface for robotic chair guidance

    This research focuses on solutions to enhance the quality of life of wheelchair users, specifically by applying a gaze-tracking-based interface to the guidance of a robotized wheelchair. The interface was applied in two different approaches to the wheelchair control system. The first was an assisted control in which the user was continuously involved in steering the wheelchair through the environment and adjusting the inclination of the different parts of the seat by means of gaze and eye blinks obtained with the interface. The second approach took the first steps towards an autonomous wheelchair control in which the wheelchair moves autonomously, avoiding collisions, towards a position defined by the user. To this end, this project developed the basis for obtaining the gaze position relative to the wheelchair and for object detection, so that the optimal route for the wheelchair can be calculated in the future. The integration of a robotic arm into the wheelchair to manipulate objects was also considered: this work identifies the object of interest indicated by the user's gaze among the detected objects, so that in the future the robotic arm could select and pick up the object the user wants to manipulate. Beyond these two approaches, the user's gaze was also estimated without the software interface, using pupil detection libraries, a calibration, and a mathematical model that relates pupil positions to gaze. The results of these implementations are analysed in this work, including some limitations encountered, and future improvements are proposed with the aim of increasing the independence of wheelchair users.
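    To make the assisted-control idea concrete, here is a minimal, hypothetical mapping from a calibrated gaze point and blink state to discrete wheelchair commands. The region layout, dead zone, and command names are assumptions for illustration; the abstract does not specify the interface at this level.

```python
def gaze_to_command(gaze_x, gaze_y, blink, dead_zone=0.15):
    """Map a gaze point (normalized to [-1, 1] about the screen centre) and a
    blink flag to a discrete wheelchair command. Purely illustrative: the
    thresholds and the blink-as-stop rule are assumptions, not the paper's."""
    if blink:
        return "STOP"        # e.g. a blink could act as an emergency stop
    if abs(gaze_x) < dead_zone and abs(gaze_y) < dead_zone:
        return "HOLD"        # looking near the centre: keep current state
    if abs(gaze_y) >= abs(gaze_x):
        return "FORWARD" if gaze_y > 0 else "BACKWARD"
    return "TURN_RIGHT" if gaze_x > 0 else "TURN_LEFT"

# Example: gaze well above centre, no blink -> move forward
assert gaze_to_command(0.1, 0.8, blink=False) == "FORWARD"
```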