Advantages of Eye-Gaze over Head-Gaze-Based Selection in Virtual and Augmented Reality under Varying Field of Views
Blattgerste J, Renner P, Pfeiffer T. Advantages of Eye-Gaze over Head-Gaze-Based Selection in Virtual and Augmented Reality under Varying Field of Views. In: COGAIN '18. Proceedings of the Symposium on Communication by Gaze Interaction. New York: ACM; 2018.

The current best practice for hands-free selection using Virtual and Augmented Reality (VR/AR) head-mounted displays is to use head-gaze for aiming and dwell-time or clicking for triggering the selection. There is an observable trend for new VR and AR devices to come with integrated eye-tracking units to improve rendering, to provide means for attention analysis or for social interactions. Eye-gaze has been successfully used for human-computer interaction in other domains, primarily on desktop computers. In VR/AR systems, aiming via eye-gaze could be significantly faster and less exhausting than via head-gaze.
To evaluate the benefits of eye-gaze-based interaction methods in VR and AR, we compared aiming via head-gaze and aiming via eye-gaze. We show that eye-gaze outperforms head-gaze in terms of speed, task load, required head movement and user preference. We furthermore show that the advantages of eye-gaze further increase with larger FOV sizes.
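The "aim with gaze, trigger with dwell" pattern the abstract names as current best practice can be sketched in a few lines. This is an illustrative assumption of how such a trigger works, not the authors' implementation; the class name, API, and the 0.8 s default threshold are all hypothetical.

```python
class DwellSelector:
    """Hypothetical dwell-time trigger: a selection fires once the gaze
    (or head-gaze) ray has rested on the same target for a fixed duration."""

    def __init__(self, dwell_time_s=0.8):
        self.dwell_time_s = dwell_time_s  # assumed threshold; real systems tune this
        self._target = None
        self._elapsed = 0.0

    def update(self, hovered_target, dt_s):
        """Call once per frame with the currently gazed-at target (or None).
        Returns the target once dwell completes, else None."""
        if hovered_target != self._target:
            # Gaze moved to a different target (or off all targets): restart timer.
            self._target = hovered_target
            self._elapsed = 0.0
            return None
        if hovered_target is None:
            return None
        self._elapsed += dt_s
        if self._elapsed >= self.dwell_time_s:
            self._elapsed = 0.0  # reset so the same target can be re-selected
            return hovered_target
        return None
```

At a 90 Hz frame rate, `update` would be fed `dt_s ≈ 1/90`; whether aiming comes from head- or eye-gaze only changes which ray determines `hovered_target`, which is exactly the comparison the study makes.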
Eye Tracking Support for Visual Analytics Systems
Visual analytics (VA) research provides helpful solutions for interactive visual data analysis when exploring large and complex datasets. Due to recent advances in eye tracking technology, promising opportunities arise to extend these traditional VA approaches. Therefore, we discuss foundations for eye tracking support in VA systems. We first review and discuss the structure and range of typical VA systems. Based on a widely used VA model, we present five comprehensive examples that cover a wide range of usage scenarios. Then, we demonstrate that the VA model can be used to systematically explore how concrete VA systems could be extended with eye tracking, to create supportive and adaptive analytics systems. This allows us to identify general research and application opportunities, and classify them into research themes. In a call for action, we map the road for future research to broaden the use of eye tracking and advance visual analytics.
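A common first step when feeding eye-tracking data into an adaptive analytics system is grouping raw gaze samples into fixations. The sketch below uses the standard dispersion-threshold (I-DT) scheme; it is a generic illustration, not code from the paper, and the thresholds are assumptions.

```python
def detect_fixations(samples, max_dispersion=0.05, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection, sketched for illustration.
    samples: list of (t_seconds, x, y) gaze points in normalized screen coords.
    Returns a list of fixations as (start_t, end_t, center_x, center_y)."""

    def dispersion(window):
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow an initial window until it spans at least min_duration.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= n:
            break
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # Fixation candidate: extend while the points stay tightly clustered.
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            window = samples[i:j + 1]
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            i = j + 1
        else:
            i += 1  # too dispersed: this sample belongs to a saccade
    return fixations
```

A VA system could map the resulting fixation centers onto chart regions to infer which views the analyst is attending to, and adapt accordingly.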
A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms
In this paper a review is presented of the research on eye gaze estimation techniques and applications that has progressed in diverse ways over the past two decades. Several generic eye gaze use-cases are identified: desktop, TV, head-mounted, automotive and handheld devices. Analysis of the literature leads to the identification of several platform-specific factors that influence gaze tracking accuracy. A key outcome from this review is the realization of a need to develop standardized methodologies for performance evaluation of gaze tracking systems and achieve consistency in their specification and comparative evaluation. To address this need, the concept of a methodological framework for practical evaluation of different gaze tracking systems is proposed.

Comment: 25 pages, 13 figures, Accepted for publication in IEEE Access in July 201
BimodalGaze: Seamlessly Refined Pointing with Gaze and Filtered Gestural Head Movement
Eye gaze is a fast and ergonomic modality for pointing but limited in precision and accuracy. In this work, we introduce BimodalGaze, a novel technique for seamless head-based refinement of a gaze cursor. The technique leverages eye-head coordination insights to separate natural from gestural head movement. This allows users to quickly shift their gaze to targets over larger fields of view with naturally combined eye-head movement, and to refine the cursor position with gestural head movement. In contrast to an existing baseline, head refinement is invoked automatically, and only if a target is not already acquired by the initial gaze shift. Study results show that users reliably achieve fine-grained target selection, but we observed a higher rate of initial selection errors affecting overall performance. An in-depth analysis of user performance provides insight into the classification of natural versus gestural head movement, for improvement of BimodalGaze and other potential applications.
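The natural-versus-gestural distinction rests on eye-head coordination: in a natural gaze shift, eyes and head rotate together and the world-space gaze point moves, whereas during deliberate gestural refinement the eyes counter-rotate (via the vestibulo-ocular reflex) so the gaze point stays nearly fixed while the head moves. A crude per-frame heuristic along those lines could look as follows; the function, thresholds, and labels are illustrative assumptions, not the paper's classifier.

```python
def classify_head_movement(head_vel_deg_s, gaze_in_world_vel_deg_s,
                           head_thresh=10.0, gaze_stable_thresh=5.0):
    """Hypothetical per-frame heuristic for natural vs. gestural head movement.
    head_vel_deg_s: angular speed of the head.
    gaze_in_world_vel_deg_s: angular speed of the world-space gaze point."""
    if abs(head_vel_deg_s) < head_thresh:
        return "still"      # head essentially stationary
    if abs(gaze_in_world_vel_deg_s) < gaze_stable_thresh:
        # Head moves but gaze stays anchored (VOR): deliberate gestural input,
        # so head motion can drive cursor refinement.
        return "gestural"
    # Head and gaze move together: a natural gaze shift, ignored for refinement.
    return "natural"
```

A real classifier would filter velocities over a window and handle saccades explicitly, but the decision structure is the same.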
Beyond Halo and Wedge: Visualizing out-of-view objects on head-mounted virtual and augmented reality devices
Head-mounted devices (HMDs) for Virtual and Augmented Reality (VR/AR) enable us to alter our visual perception of the world. However, current devices suffer from a limited field of view (FOV), which becomes problematic when users need to locate out-of-view objects (e.g., locating points-of-interest during sightseeing). To address this, we developed and evaluated in two studies HaloVR, WedgeVR, HaloAR and WedgeAR, which are inspired by usable 2D off-screen object visualization techniques (Halo, Wedge). While our techniques resulted in overall high usability, we found the choice of AR or VR impacts mean search time (VR: 2.25s, AR: 3.92s) and mean direction estimation error (VR: 21.85°, AR: 32.91°). Moreover, while adding more out-of-view objects significantly affects search time across VR and AR, direction estimation performance remains unaffected. We provide implications and discuss the challenges of designing for VR and AR HMDs.
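Any Halo/Wedge-style cue first needs to decide whether a target lies outside the FOV and, if so, where on the view border the cue should intrude. A minimal yaw-only (2D) sketch of that geometry, under assumed names and a symmetric FOV, could be:

```python
import math

def offscreen_cue(target_yaw_deg, half_fov_deg=45.0):
    """Yaw-only sketch for off-screen cue placement (names are hypothetical).
    Returns None if the target is inside the FOV; otherwise a tuple of
    (border_yaw, overshoot): the FOV edge where a Halo/Wedge cue would sit,
    and the angular distance beyond the edge the cue must encode."""
    # Normalize the target direction to [-180, 180) relative to view forward.
    yaw = (target_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(yaw) <= half_fov_deg:
        return None                               # visible: no cue needed
    border = math.copysign(half_fov_deg, yaw)     # left or right FOV edge
    return border, abs(yaw) - half_fov_deg        # edge anchor, distance cue
```

Halo would encode the overshoot as the radius of an arc intruding at the border; Wedge would encode it in the legs of a triangle. The FOV comparison in the study corresponds to varying `half_fov_deg`.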
VRDoc: Gaze-based Interactions for VR Reading Experience
Virtual reality (VR) offers the promise of an infinite office and remote collaboration; however, existing interactions in VR do not strongly support one of the most essential tasks for most knowledge workers: reading. This paper presents VRDoc, a set of gaze-based interaction methods designed to improve the reading experience in VR. We introduce three key components: Gaze Select-and-Snap for document selection, Gaze MagGlass for enhanced text legibility, and Gaze Scroll for ease of document traversal. We implemented each of these tools using a commodity VR headset with eye-tracking. In a series of user studies with 13 participants, we show that VRDoc makes VR reading both more efficient (p < 0.01) and less demanding (p < 0.01), and when given a choice, users preferred to use our tools over the current VR reading methods.

Comment: 8 pages, 4 figures, ISMAR 202
Deformable Beamsplitters: Enhancing Perception with Wide Field of View, Varifocal Augmented Reality Displays
An augmented reality head-mounted display with full environmental awareness could present data in new ways and provide a new type of experience, allowing seamless transitions between real life and virtual content. However, creating a light-weight, optical see-through display providing both focus support and wide field of view remains a challenge. This dissertation describes a new dynamic optical element, the deformable beamsplitter, and its applications for wide field of view, varifocal, augmented reality displays. Deformable beamsplitters combine a traditional deformable membrane mirror and a beamsplitter into a single element, allowing reflected light to be manipulated by the deforming membrane mirror, while transmitted light remains unchanged. This research enables both single element optical design and correct focus while maintaining a wide field of view, as demonstrated by the description and analysis of two prototype hardware display systems which incorporate deformable beamsplitters. As a user changes the depth of their gaze when looking through these displays, the focus of virtual content can quickly be altered to match the real world by simply modulating air pressure in a chamber behind the deformable beamsplitter; thus ameliorating vergence–accommodation conflict. Two user studies verify the display prototypes' capabilities and show the potential of the display in enhancing human performance at quickly perceiving visual stimuli. This work shows that near-eye displays built with deformable beamsplitters allow for simple optical designs that enable wide field of view and comfortable viewing experiences with the potential to enhance user perception.

Doctor of Philosophy
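Driving such a varifocal display from gaze depth relies on standard binocular geometry: the fixation distance follows from the vergence angle between the two eyes as d = IPD / (2 · tan(θ/2)). The sketch below shows only this geometric step; the mapping from the estimated depth to membrane air pressure is device-specific and not described here, and the function name and default IPD are assumptions.

```python
import math

def vergence_to_depth_m(vergence_deg, ipd_m=0.063):
    """Estimate fixation depth from the binocular vergence angle using
    d = ipd / (2 * tan(theta / 2)). ipd_m defaults to an assumed average
    interpupillary distance of 63 mm; an eye tracker supplies the angle."""
    theta = math.radians(vergence_deg)
    return ipd_m / (2.0 * math.tan(theta / 2.0))
```

As vergence shrinks toward zero (eyes parallel), the estimated depth grows without bound, matching fixation at optical infinity; a real system would clamp the output to the display's supported focal range.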