Motion tracking of iris features to detect small eye movements
The inability of current video-based eye trackers to reliably detect very small eye movements has led to confusion about the prevalence, or even the existence, of monocular microsaccades (small, rapid eye movements that occur in only one eye at a time). Because existing methods rely on precisely localizing the pupil and/or corneal reflection on successive frames, current microsaccade-detection algorithms often suffer from signal artifacts and a low signal-to-noise ratio. We describe a new video-based eye-tracking methodology that can reliably detect small eye movements over 0.2 degrees (12 arcmin) with very high confidence. Our method tracks the motion of iris features to estimate velocity rather than position, yielding a better record of microsaccades. By relying on more stable, higher-order features (such as local features of iris texture) instead of lower-order features (such as the pupil center and corneal reflection), which are sensitive to noise and drift, we provide a more robust, detailed record of miniature eye movements.
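The velocity-based idea can be sketched as follows: per-frame displacements of tracked iris features give a velocity trace, and frames whose velocity exceeds a threshold are flagged as saccade candidates. This is a minimal illustration, not the authors' algorithm; the function name, the 500 Hz frame rate, and the 10 deg/s threshold are all assumptions.

```python
import numpy as np

def detect_saccades(positions_deg, fps=500, threshold_deg_s=10.0):
    """Flag frames whose frame-to-frame eye velocity exceeds a threshold.

    positions_deg : per-frame eye position in degrees (e.g. the median
                    displacement of tracked iris features), shape (n,).
    Returns a boolean array marking candidate saccade frames.
    """
    velocity = np.gradient(positions_deg) * fps   # deg/frame -> deg/s
    return np.abs(velocity) > threshold_deg_s

# Synthetic trace: fixation noise plus one 0.2-degree microsaccade.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.005, 200)   # ~0.005 deg RMS position noise
trace[100:] += 0.2                    # 0.2 deg step at frame 100
flags = detect_saccades(trace, fps=500)
```

Working in velocity space is what makes the 0.2-degree step stand out: position noise of a few millidegrees stays well below the velocity threshold, while the step produces a brief, large spike.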
Privacy-Preserving Eye Videos using Rubber Sheet Model
Video-based eye trackers estimate gaze from eye images/videos. As security and privacy concerns loom over technological advancements, tackling such challenges is crucial. We present a new approach to handling privacy issues in eye videos by replacing the identifiable iris texture with a different iris template in the video capture pipeline, based on the Rubber Sheet Model. We extend the approach to image blending and median-value representations to demonstrate that videos can be manipulated without significantly degrading segmentation and pupil-detection accuracy.
Comment: Will be published in ETRA '20 Short Papers, June 2-5, 2020, Stuttgart, Germany. Copyright 2020 Association for Computing Machinery.
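The Rubber Sheet Model referenced here is Daugman's normalization, which maps the iris annulus onto a fixed-size polar grid; replacing iris texture amounts to resampling a substitute template through the same coordinates. A minimal sketch of the coordinate mapping (the function name and grid dimensions are assumptions, and concentric circular boundaries are assumed for simplicity):

```python
import numpy as np

def rubber_sheet_coords(pupil_center, pupil_r, iris_r,
                        n_radial=32, n_angular=128):
    """Daugman-style rubber sheet: sample points along radial lines
    between the pupil and iris boundaries, producing a fixed-size
    (n_radial, n_angular) polar grid.

    Returns x, y arrays holding the image coordinates of each
    normalized (r, theta) sample.
    """
    cx, cy = pupil_center
    theta = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)
    r = np.linspace(0.0, 1.0, n_radial)[:, None]   # 0 = pupil, 1 = limbus
    radius = pupil_r + r * (iris_r - pupil_r)      # linear interpolation
    x = cx + radius * np.cos(theta)
    y = cy + radius * np.sin(theta)
    return x, y

x, y = rubber_sheet_coords((120.0, 90.0), pupil_r=25.0, iris_r=60.0)
```

Because the grid is defined relative to the detected pupil and iris boundaries, a replacement template written back through these coordinates deforms consistently with pupil dilation, which is what lets the downstream pipeline keep segmenting and detecting the pupil.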
EllSeg: An Ellipse Segmentation Framework for Robust Gaze Tracking
Ellipse fitting, an essential component in video oculography based on pupil or iris tracking, is performed on previously segmented eye parts generated using various computer vision techniques. Several factors, such as occlusions due to eyelid shape, camera position, or eyelashes, frequently break ellipse-fitting algorithms that rely on well-defined pupil or iris edge segments. In this work, we propose training a convolutional neural network to directly segment entire elliptical structures and demonstrate that such a framework is robust to occlusions and offers superior pupil and iris tracking performance (at least a 10% and 24% increase in pupil and iris center detection rate, respectively, within a two-pixel error margin) compared to standard eye-parts segmentation, on multiple publicly available synthetic segmentation datasets.
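The detection-rate figure quoted in parentheses can be read as the fraction of frames whose predicted center falls within the pixel margin of ground truth. A sketch of that metric, with hypothetical names and made-up example coordinates:

```python
import numpy as np

def center_detection_rate(pred_centers, true_centers, margin_px=2.0):
    """Fraction of frames whose predicted (x, y) center lies within
    margin_px (Euclidean distance) of the ground-truth center."""
    pred = np.asarray(pred_centers, dtype=float)
    true = np.asarray(true_centers, dtype=float)
    err = np.linalg.norm(pred - true, axis=1)
    return float(np.mean(err <= margin_px))

# Three frames: errors of ~1.12 px, 5 px, and ~1.12 px.
true = np.array([[100.0, 100.0], [50.0, 60.0], [80.0, 40.0]])
pred = np.array([[101.0, 100.5], [55.0, 60.0], [80.5, 41.0]])
rate = center_detection_rate(pred, true, margin_px=2.0)
```

Here two of three predictions fall inside the two-pixel margin, so the detection rate is 2/3.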
RITnet: Real-time Semantic Segmentation of the Eye for Gaze Tracking
Accurate eye segmentation can improve eye-gaze estimation and support interactive computing based on visual attention; however, existing eye segmentation methods suffer from issues such as person-dependent accuracy, lack of robustness, and an inability to run in real time. Here, we present the RITnet model, a deep neural network that combines U-Net and DenseNet. RITnet is under 1 MB and achieves 95.3% accuracy on the 2019 OpenEDS Semantic Segmentation challenge. Using a GeForce GTX 1080 Ti, RITnet tracks at 300 Hz, enabling real-time gaze-tracking applications. Pre-trained models and source code are available at https://bitbucket.org/eye-ush/ritnet/.
Comment: This model is the winning submission for the OpenEDS Semantic Segmentation Challenge for eye images (https://research.fb.com/programs/openeds-challenge/). To appear in ICCVW 2019.
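Semantic-segmentation quality of this kind is typically scored per class with intersection-over-union. A generic sketch of a mean-IoU computation over label maps (the four-class labeling shown is an assumption for illustration, not the OpenEDS specification):

```python
import numpy as np

def mean_iou(pred, target, n_classes=4):
    """Mean per-class intersection-over-union for integer label maps.
    Classes present in neither map are skipped. A labeling such as
    0=background, 1=sclera, 2=iris, 3=pupil is assumed here."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny 2x3 example: one pixel of class 0 mislabeled as class 1.
target = np.array([[0, 0, 1], [2, 2, 3]])
pred   = np.array([[0, 1, 1], [2, 2, 3]])
score = mean_iou(pred, target)
```

In this toy case classes 2 and 3 are segmented perfectly (IoU 1.0) while the single mislabeled pixel drops classes 0 and 1 to IoU 0.5 each, giving a mean of 0.75.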