Comparison of head gaze and head and eye gaze within an immersive environment
For efficient collaboration between participants, eye gaze is seen as critical for interaction. Teleconferencing systems such as the AccessGrid allow users to meet across geographically disparate rooms, but there is still no substitute for face-to-face meetings. This paper gives an overview of preliminary work towards integrating eye gaze into an immersive Collaborative Virtual Environment and assessing the impact this would have on interaction between the users of such a system.
An experiment was conducted to assess users' ability to judge which objects an avatar is looking at when only head gaze is displayed versus when both eye and head gaze data are displayed. The results show that eye gaze is of vital importance for subjects to correctly identify what a person is looking at in an immersive virtual environment. This is followed by a description of how the eye tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
Robust Real-Time Multi-View Eye Tracking
Despite significant advances in improving the gaze tracking accuracy under
controlled conditions, the tracking robustness under real-world conditions,
such as large head pose and movements, use of eyeglasses, illumination and eye
type variations, remains a major challenge in eye tracking. In this paper, we
revisit this challenge and introduce a real-time multi-camera eye tracking
framework to improve the tracking robustness. First, unlike previous work, we design a multi-view tracking setup that acquires multiple eye appearances simultaneously. Leveraging multi-view appearances enables more reliable detection of gaze features under challenging conditions, particularly when they are obstructed in a conventional single-view appearance due to large head movements or eyewear. The features extracted from the various appearances are then used to estimate multiple gaze outputs. Second, we
propose to combine estimated gaze outputs through an adaptive fusion mechanism
to compute the user's overall point of regard. The proposed mechanism first determines the estimation reliability of each gaze output according to the user's momentary head pose and predicted gazing behavior, and then performs a
reliability-based weighted fusion. We demonstrate the efficacy of our framework
with extensive simulations and user experiments on a collected dataset
featuring 20 subjects. Our results show that in comparison with
state-of-the-art eye trackers, the proposed framework provides not only a significant improvement in accuracy but also notably greater robustness. Our prototype system runs at 30 frames per second (fps) and achieves 1-degree accuracy under challenging experimental scenarios, making it suitable for
applications demanding high accuracy and robustness.
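As a rough illustration of the reliability-based weighted fusion described above, the sketch below combines per-camera gaze estimates using weights derived from head pose. The Gaussian falloff model, the camera angles and the function names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def fuse_gaze_estimates(gaze_points, reliabilities):
    """Combine per-camera gaze estimates into one point of regard.

    gaze_points   : (N, 2) array of gaze estimates (e.g. screen coordinates),
                    one per camera view.
    reliabilities : (N,) array of non-negative reliability scores.
    """
    gaze_points = np.asarray(gaze_points, dtype=float)
    w = np.asarray(reliabilities, dtype=float)
    if w.sum() <= 0:              # no view is trusted; fall back to a plain mean
        w = np.ones_like(w)
    w = w / w.sum()               # normalise weights
    return (w[:, None] * gaze_points).sum(axis=0)

def reliability_from_head_pose(head_yaw_deg, camera_yaw_deg, sigma_deg=25.0):
    """Toy reliability model: a view is trusted more when the head is
    oriented towards that camera (Gaussian falloff with angular offset)."""
    offset = head_yaw_deg - camera_yaw_deg
    return float(np.exp(-0.5 * (offset / sigma_deg) ** 2))

# Example: three cameras at -30, 0 and +30 degrees, head turned 20 degrees right.
cams = [-30.0, 0.0, 30.0]
weights = [reliability_from_head_pose(20.0, c) for c in cams]
estimates = [[412.0, 300.0], [405.0, 296.0], [401.0, 295.0]]
print(fuse_gaze_estimates(estimates, weights))
```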
OpenEDS: Open Eye Dataset
We present a large-scale dataset, OpenEDS: Open Eye Dataset, of eye images captured using a virtual-reality (VR) head-mounted display fitted with two synchronized eye-facing cameras at a frame rate of 200 Hz under controlled illumination. The dataset is compiled from video capture of the eye region collected from 152 individual participants and is divided into four subsets: (i) 12,759 images with pixel-level annotations for key eye regions (iris, pupil and sclera); (ii) 252,690 unlabelled eye images; (iii) 91,200 frames from randomly selected video sequences of 1.5 seconds in duration; and (iv) 143 pairs of left and right point-cloud data compiled from corneal topography of the eye region, collected from a subset of 143 of the 152 participants in the study. A
baseline experiment has been evaluated on OpenEDS for the task of semantic segmentation of pupil, iris, sclera and background, achieving a mean intersection-over-union (mIoU) of 98.3%. We anticipate that OpenEDS will create opportunities for researchers in the eye tracking community and the broader
machine learning and computer vision community to advance the state of
eye-tracking for VR applications. The dataset is available for download upon
request at https://research.fb.com/programs/openeds-challenge.
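For readers unfamiliar with the reported metric, the minimal sketch below shows how a mean intersection-over-union (mIoU) score can be computed for eye-region segmentation masks; the class list and label encoding are assumptions for illustration only.

```python
import numpy as np

CLASSES = ["background", "sclera", "iris", "pupil"]

def mean_iou(pred, gt, num_classes=len(CLASSES)):
    """Mean intersection-over-union between two label maps of equal shape."""
    ious = []
    for c in range(num_classes):
        p, g = (pred == c), (gt == c)
        union = np.logical_or(p, g).sum()
        if union == 0:          # class absent in both masks; skip it
            continue
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Tiny synthetic example: two 4x4 label maps.
gt   = np.array([[0, 0, 1, 1], [0, 2, 2, 1], [0, 2, 3, 1], [0, 0, 1, 1]])
pred = np.array([[0, 0, 1, 1], [0, 2, 2, 1], [0, 2, 2, 1], [0, 0, 1, 1]])
print(f"mIoU = {mean_iou(pred, gt):.3f}")
```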
A Computer Vision System for Attention Mapping in SLAM based 3D Models
The study of human factors in the context of interaction studies has been relevant to usability engineering and ergonomics for decades. Today, with the advent of wearable eye tracking and devices such as Google Glass, monitoring of human factors will soon become ubiquitous. This work describes a computer vision system that
enables pervasive mapping and monitoring of human attention. The key contribution is that our methodology enables full 3D recovery of the gaze pointer, the human view frustum and associated human-centred measurements directly into an automatically computed 3D model in real time. We apply RGB-D SLAM and descriptor-matching methodologies for the 3D modelling, localization and fully automated annotation of ROIs (regions of interest) within the acquired 3D
model. This innovative methodology will open new avenues for attention studies in real-world environments, bringing new potential to automated processing
for human factors technologies.
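A minimal sketch of the core geometric step, mapping a 2D gaze point into a SLAM-reconstructed 3D model and testing it against annotated ROIs, might look as follows. The pinhole back-projection, the nearest-centre ROI test and all function names are simplifying assumptions rather than the authors' exact pipeline.

```python
import numpy as np

def gaze_ray_from_pixel(gaze_px, K, R, t):
    """Back-project a 2D gaze point into a world-space ray.

    gaze_px : (u, v) gaze position in the scene-camera image.
    K       : 3x3 camera intrinsics.
    R, t    : camera-to-world rotation (3x3) and translation (3,)
              as estimated by the SLAM system.
    Returns (origin, direction) of the ray in world coordinates.
    """
    uv1 = np.array([gaze_px[0], gaze_px[1], 1.0])
    d_cam = np.linalg.inv(K) @ uv1              # direction in the camera frame
    d_world = R @ d_cam
    return t, d_world / np.linalg.norm(d_world)

def nearest_roi(origin, direction, roi_centres, max_dist=0.1):
    """Return the index of the ROI centre closest to the gaze ray,
    or None if no ROI lies within max_dist metres of the ray."""
    centres = np.asarray(roi_centres, dtype=float)
    v = centres - origin
    t_along = v @ direction                              # projection onto the ray
    closest = origin + np.outer(np.clip(t_along, 0, None), direction)
    dists = np.linalg.norm(centres - closest, axis=1)
    i = int(np.argmin(dists))
    return i if dists[i] <= max_dist else None
```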
Eye Gaze Controlled Interfaces for Head Mounted and Multi-Functional Displays in Military Aviation Environment
Eye gaze controlled interfaces allow us to directly manipulate a graphical
user interface just by looking at it. This technology has great potential in
military aviation, in particular for operating different displays in situations where pilots' hands are occupied with flying the aircraft. This paper reports studies analyzing the accuracy of an eye gaze controlled interface inside an aircraft undertaking representative flying missions. We found that pilots can complete representative pointing and selection tasks in less than 2 seconds on average. Further, we evaluated the accuracy of eye gaze tracking glasses under
various G-conditions and analyzed its failure modes. We observed that the
accuracy of an eye tracker is less than 5 degrees of visual angle up to +3G, although it is less accurate at -1G and +5G. We observed that the eye tracker may fail to track under high external illumination. We also infer that an eye tracker to be used in military aviation needs a larger vertical field of view than presently available systems offer. We used this analysis to develop eye gaze trackers for Multi-Functional Displays and Head
Mounted Display System. We obtained a significant reduction in pointing and selection times using our proposed HMDS system compared to a traditional TDS.
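To put accuracy figures expressed in degrees of visual angle in context, the small sketch below converts such an error into an on-display offset; the 0.6 m viewing distance is an assumed, illustrative value.

```python
import math

def visual_angle_to_offset(angle_deg, viewing_distance_m):
    """On-display offset (metres) subtended by a visual angle at a given
    viewing distance: offset = 2 * d * tan(angle / 2)."""
    return 2.0 * viewing_distance_m * math.tan(math.radians(angle_deg) / 2.0)

# Example: a 5-degree error at an assumed 0.6 m distance to a multi-function display.
print(f"{visual_angle_to_offset(5.0, 0.6) * 100:.1f} cm")   # about 5.2 cm
```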
Free-View, 3D Gaze-Guided, Assistive Robotic System for Activities of Daily Living
Patients suffering from quadriplegia have limited body motion, which prevents
them from performing daily activities. We have developed an assistive robotic
system with an intuitive free-view gaze interface. The user's point of regard
is estimated in 3D space while allowing free head movement and is combined with
object recognition and trajectory planning. This framework allows the user to
interact with objects using fixations. Two operational modes have been
implemented to cater for different eventualities. The automatic mode performs a
pre-defined task associated with a gaze-selected object, while the manual mode
allows gaze control of the robot's end-effector position in the user's frame of
reference. User studies reported effortless operation in automatic mode. A
manual pick and place task achieved a success rate of 100% on the users' first
attempt.
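A simplified sketch of how a gaze-selected object could be identified from an estimated 3D point of regard is shown below; the dwell time, selection radius and function names are illustrative assumptions, not the authors' published parameters.

```python
import numpy as np

def select_object_by_fixation(gaze_points_3d, timestamps, object_centres,
                              dwell_s=0.8, radius_m=0.05):
    """Return the index of the object the user has fixated on, or None.

    A fixation is declared when the 3D point of regard stays within
    `radius_m` of an object centre for at least `dwell_s` seconds.
    """
    gaze_points_3d = np.asarray(gaze_points_3d, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    for i, centre in enumerate(np.asarray(object_centres, dtype=float)):
        near = np.linalg.norm(gaze_points_3d - centre, axis=1) <= radius_m
        start = None
        for t, hit in zip(timestamps, near):
            if hit:
                start = t if start is None else start
                if t - start >= dwell_s:      # dwell threshold reached
                    return i
            else:
                start = None                  # gaze left the object; reset
    return None
```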
Noninvasive Corneal Image-Based Gaze Measurement System
Gaze tracking is an important technology, as it can reveal information about a person from what and where they are looking. There have been many attempts to make robust and accurate gaze trackers using either monitor-based or wearable devices. However, these devices often require fine individual calibration per session and/or require the person to wear a device, which may not be suitable in certain situations. In this paper, we propose a robust and
completely noninvasive gaze tracking system that involves neither complex
calibrations nor the use of wearable devices. We achieve this through direct analysis of eye reflections, building a real-time system that makes this practical. We also show several interesting applications of our system, including experiments with young children.
Gaze-based, Context-aware Robotic System for Assisted Reaching and Grasping
Assistive robotic systems endeavour to support those with movement
disabilities, enabling them to move again and regain functionality. The main issue with these systems is the complexity of their low-level control, and how to translate this into simpler, higher-level commands that are easy and intuitive
for a human user to interact with. We have created a multi-modal system,
consisting of different sensing, decision making and actuating modalities,
leading to intuitive, human-in-the-loop assistive robotics. The system takes
its cue from the user's gaze, to decode their intentions and implement
low-level motion actions to achieve high-level tasks. As a result, the user simply has to look at the objects of interest for the robotic system to assist them in reaching for those objects, grasping them, and using them to
interact with other objects. We present our method for 3D gaze estimation, and
grammars-based implementation of sequences of action with the robotic system.
The 3D gaze estimation is evaluated with 8 subjects, showing an overall accuracy of . The full system is tested with 5 subjects, showing successful implementation of  of reach-to-gaze-point actions and full implementation of pick-and-place tasks in 96%, and pick-and-pour tasks in  of cases. Finally, we present a discussion of our results and what future
work is needed to improve the system.
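The grammars-based sequencing of actions could, in spirit, resemble the toy production grammar below, which expands a high-level gaze-selected task into low-level robot primitives; the symbols and primitives are invented for illustration and do not reproduce the authors' actual grammar.

```python
# A toy production grammar: a high-level gaze-selected task expands into an
# ordered sequence of low-level robot primitives.
GRAMMAR = {
    "pick_and_place": ["pick", "place"],
    "pick_and_pour":  ["pick", "pour", "place"],
    "pick":  ["approach(target)", "open_gripper", "grasp(target)", "lift"],
    "place": ["move_to(destination)", "lower", "open_gripper", "retract"],
    "pour":  ["move_to(destination)", "tilt", "untilt"],
}

def expand(task):
    """Recursively expand a task symbol into terminal robot actions."""
    if task not in GRAMMAR:          # terminal action, emit as-is
        return [task]
    actions = []
    for sub in GRAMMAR[task]:
        actions.extend(expand(sub))
    return actions

print(expand("pick_and_pour"))
```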
Eyemotion: Classifying facial expressions in VR using eye-tracking cameras
One of the main challenges of social interaction in virtual reality settings
is that head-mounted displays occlude a large portion of the face, blocking
facial expressions and thereby restricting social engagement cues among users.
Hence, auxiliary means of sensing and conveying these expressions are needed.
We present an algorithm to automatically infer expressions by analyzing only a
partially occluded face while the user is engaged in a virtual reality
experience. Specifically, we show that images of the user's eyes captured from
an IR gaze-tracking camera within a VR headset are sufficient to infer a select
subset of facial expressions without the use of any fixed external camera.
Using these inferences, we can generate dynamic avatars in real time that
function as an expressive surrogate for the user. We propose a novel data
collection pipeline as well as a novel approach for increasing CNN accuracy via
personalization. Our results show a mean accuracy of 74% ( of 0.73) among 5 'emotive' expressions and a mean accuracy of 70% ( of 0.68) among 10
distinct facial action units, outperforming human raters.
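One simple form of per-user personalization for such a classifier is to subtract a user-specific reference image before inference, as sketched below. Whether this matches the paper's exact personalization scheme is an assumption, and the array shapes and function name are illustrative.

```python
import numpy as np

def personalize(eye_images, neutral_images):
    """Illustrative per-user personalization: subtract the user's mean
    'neutral' eye image so the classifier sees expression-induced changes
    rather than identity-specific appearance.

    eye_images     : (N, H, W) array of IR eye images to classify.
    neutral_images : (M, H, W) array of the same user's neutral frames.
    """
    mean_neutral = neutral_images.astype(np.float32).mean(axis=0)
    return eye_images.astype(np.float32) - mean_neutral
```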
OpenEDS2020: Open Eyes Dataset
We present the second edition of the OpenEDS dataset, OpenEDS2020, a novel
dataset of eye-image sequences captured at a frame rate of 100 Hz under
controlled illumination, using a virtual-reality head-mounted display mounted
with two synchronized eye-facing cameras. The dataset, which is anonymized to
remove any personally identifiable information on participants, consists of 80
participants of varied appearance performing several gaze-elicited tasks, and
is divided into two subsets: 1) the Gaze Prediction Dataset, with up to 66,560
sequences containing 550,400 eye-images and respective gaze vectors, created to
foster research in spatio-temporal gaze estimation and prediction approaches;
and 2) Eye Segmentation Dataset, consisting of 200 sequences sampled at 5 Hz,
with up to 29,500 images, of which 5% contain a semantic segmentation label,
devised to encourage the use of temporal information to propagate labels to
contiguous frames. Baseline experiments have been evaluated on OpenEDS2020, one for each task, with an average angular error of 5.37 degrees when predicting gaze 1 to 5 frames into the future, and a mean intersection-over-union score of 84.1% for semantic segmentation. As with its predecessor, the OpenEDS dataset, we anticipate that this new dataset will continue to create opportunities for researchers in the eye tracking, machine learning and computer vision communities to advance the state of the art for virtual reality applications. The dataset
is available for download upon request at
http://research.fb.com/programs/openeds-2020-challenge/.
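The gaze-prediction baseline is scored by angular error; a minimal sketch of that metric for 3D gaze vectors is given below (the example vectors are arbitrary).

```python
import numpy as np

def angular_error_deg(pred, gt):
    """Angle in degrees between predicted and ground-truth 3D gaze vectors."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(pred, gt) / (np.linalg.norm(pred) * np.linalg.norm(gt))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Example: two nearly parallel gaze directions differ by a few degrees.
print(angular_error_deg([0.0, 0.0, -1.0], [0.05, 0.02, -0.998]))
```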