    GazeCast: Using Mobile Devices to Allow Gaze-based Interaction on Public Displays

    Gaze is promising for natural and spontaneous interaction with public displays, but current gaze-enabled displays require movement-hindering stationary eye trackers or cumbersome head-mounted eye trackers. We propose and evaluate GazeCast – a novel system that leverages users’ handheld mobile devices to allow gaze-based interaction with surrounding displays. In a user study (N = 20), we compared GazeCast to a standard webcam for gaze-based interaction using Pursuits. We found that while selection using GazeCast requires more time and physical demand, participants value GazeCast’s high accuracy and flexible positioning. We conclude by discussing how mobile computing can facilitate the adoption of gaze interaction with pervasive displays.
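
    GazeCast builds on the published Pursuits technique, which selects a target by correlating the user's gaze trajectory with the known trajectories of moving on-screen targets. The Python sketch below illustrates that core idea; the per-axis correlation combination and the 0.8 threshold are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def pursuits_select(gaze_xy, target_trajs, threshold=0.8):
    """Return the index of the target the user is following, or None.

    gaze_xy:      (N, 2) array of recent gaze samples.
    target_trajs: list of (N, 2) arrays of target positions,
                  time-aligned with the gaze samples.
    """
    best_idx, best_score = None, threshold
    for idx, traj in enumerate(target_trajs):
        # Pearson correlation between gaze and target motion, per axis.
        rx = np.corrcoef(gaze_xy[:, 0], traj[:, 0])[0, 1]
        ry = np.corrcoef(gaze_xy[:, 1], traj[:, 1])[0, 1]
        # Require both axes to follow the target (one common variant;
        # published Pursuits implementations differ in this detail).
        score = min(rx, ry)
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx
```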

    Emerging ExG-based NUI Inputs in Extended Realities : A Bottom-up Survey

    Incremental and quantitative improvements of two-way interactions with extended realities (XR) are contributing toward a qualitative leap into a state of XR ecosystems being efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR; among them are the computational and power limitations of portable hardware, the social acceptance of novel interaction protocols, and the usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures employing electromyography, and gaze-tracking electrooculography input interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural, intuitive actions, enriching interactions with XR. This survey provides a bottom-up overview starting from (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) a discussion on the social acceptance of such applications and technologies, as well as (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based Natural User Interface inputs can introduce to the area of XR.
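
    As a concrete taste of one ExG modality the survey covers, the sketch below classifies a short window of a single horizontal electrooculography (EOG) channel as a left saccade, right saccade, or fixation. The sampling rate, filter band, and amplitude threshold are illustrative assumptions, not values from the survey.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def classify_eog_window(eog_uv, fs=250.0, threshold_uv=50.0):
    """Classify one EOG window (microvolts) as 'left', 'right', or 'fixation'."""
    # Eye-movement content is low frequency; band-pass to strip slow
    # electrode drift and higher-frequency EMG noise (assumed band).
    b, a = butter(2, [0.5, 20.0], btype="band", fs=fs)
    filtered = filtfilt(b, a, eog_uv)  # window should exceed ~15 samples
    # The corneo-retinal potential swings the signal positive when the
    # eyes move toward the positive electrode and negative otherwise.
    peak = filtered[np.argmax(np.abs(filtered))]
    if peak > threshold_uv:
        return "right"
    if peak < -threshold_uv:
        return "left"
    return "fixation"
```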

    Investigating privacy perceptions and subjective acceptance of eye tracking on handheld mobile devices

    Although eye tracking brings many benefits to users of mobile devices and developers of mobile applications, it poses significant privacy risks to both the users of mobile devices and the bystanders around them who fall within the front-facing camera's field of view. Recent research demonstrates that tracking an individual's gaze reveals personal and sensitive information. This paper presents an investigation of the privacy perceptions and the subjective acceptance of users towards eye tracking on handheld mobile devices. In a four-phase user study (N=17), participants used a smartphone eye tracking app, were interviewed before and after viewing a video showing the amount of sensitive and personal data that could be derived from eye movements, and had their privacy concerns measured. Our findings 1) show factors that influence users' and bystanders' attitudes toward eye tracking on mobile devices, such as the algorithms' transparency and the developers' credibility, and 2) support designing mechanisms to allow for privacy-aware eye tracking solutions on mobile devices.

    An end-to-end review of gaze estimation and its interactive applications on handheld mobile devices

    In recent years we have witnessed an increasing number of interactive systems on handheld mobile devices which utilise gaze as a single or complementary interaction modality. This trend is driven by the enhanced computational power of these devices, the higher resolution and capacity of their cameras, and improved gaze estimation accuracy obtained from advanced machine learning techniques, especially deep learning. As the literature is progressing fast, there is a pressing need to review the state of the art, delineate the boundary, and identify the key research challenges and opportunities in gaze estimation and interaction. This paper aims to serve this purpose by presenting an end-to-end holistic view of this area, from gaze capturing sensors, to gaze estimation workflows, to deep learning techniques, and to gaze interactive applications.
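
    To make the deep-learning stage of such a workflow concrete, here is a minimal PyTorch sketch of a CNN that regresses a normalised on-screen gaze point from a front-camera eye crop. The architecture and the 64x64 input size are illustrative assumptions, not a model from any specific paper in the review.

```python
import torch.nn as nn

class GazeNet(nn.Module):
    """Toy appearance-based gaze estimator: eye crop -> (x, y) on screen."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 2),  # gaze point in normalised screen coordinates
        )

    def forward(self, eye_crop):  # eye_crop: (batch, 3, 64, 64)
        return self.head(self.features(eye_crop))

# Training would minimise a regression loss, e.g. nn.MSELoss(), against
# ground-truth gaze points collected during a calibration phase.
```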

    A Review of Interaction Techniques for Immersive Environments

    The recent proliferation of immersive technology has led to the rapid adoption of consumer-ready hardware for Augmented Reality (AR) and Virtual Reality (VR). While this increase has resulted in a variety of platforms that can offer a richer interactive experience, the advances in technology bring more variability in display types, interaction sensors and use cases. This provides a spectrum of device-specific interaction possibilities, with each offering a tailor-made solution for delivering immersive experiences to users, but often with an inherent lack of standardisation across devices and applications. To address this, a systematic review and an evaluation of explicit, task-based interaction methods in immersive environments are presented in this paper. A corpus of papers published between 2013 and 2020 is reviewed to thoroughly explore state-of-the-art user studies, which investigate input methods and their implementation for immersive interaction tasks (pointing, selection, translation, rotation, scale, viewport, menu-based and abstract). Focus is given to how input methods have been applied within the spectrum of immersive technology (AR, VR, XR). This is achieved by categorising findings based on display type, input method, study type, use case and task. Results illustrate key trends surrounding the benefits and limitations of each interaction technique and highlight the gaps in current research. The review provides a foundation for understanding the current and future directions for interaction studies in immersive environments, which, at this pivotal point in XR technology adoption, provides routes forward for achieving more valuable, intuitive and natural interactive experiences.
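
    Ray-cast pointing is one of the canonical pointing and selection techniques such studies evaluate: a ray from the head or controller is intersected with candidate objects, and the nearest hit is selected. A minimal sketch, assuming targets are approximated by bounding spheres:

```python
import numpy as np

def raycast_select(origin, direction, targets):
    """origin, direction: (3,) arrays, direction normalised.
    targets: list of (center, radius) pairs with center a (3,) array.
    Returns the index of the nearest intersected target, or None."""
    best_idx, best_t = None, np.inf
    for idx, (center, radius) in enumerate(targets):
        oc = origin - center
        # Solve |origin + t*direction - center|^2 = radius^2 for t
        # (a quadratic in t with leading coefficient 1).
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            continue  # the ray misses this sphere
        t = (-b - np.sqrt(disc)) / 2.0  # nearer of the two intersections
        if 0.0 < t < best_t:
            best_idx, best_t = idx, t
    return best_idx
```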

    Computational Analysis of Eye-Strain for Digital Screens based on Eye Tracking Studies

    Computer vision syndrome (CVS) comprises multiple eye and vision problems caused by the prolonged use of digital displays, including tablets and smartphones. These problems have been shown to affect visual comfort as well as work productivity in both adults and teenagers. CVS causes eye and vision symptoms such as eye strain, eye burn, dry eyes, double vision, and blurred vision; the severe vision and muscular problems it produces through repeated eye movements and excessive eye focus on computer screens also make it a cause of work-related stress. In this thesis, we address this problem and present three general-purpose mathematical compound models for assessing eye strain in eye-tracking applications, namely (1) the Fixation-based Eye fatigue Load Index (FELiX), (2) the Index of Difficulty for Eye-tracking Applications (IDEA), and (3) the Eye-Strain Probation Model (ESPiM), based on eye-tracking parameters and subjective ratings to measure, predict, and compare the amount of fatigue or cognitive workload during target selection tasks for different user groups or interaction techniques. The ESPiM model is the outcome of both FELiX and IDEA, which benefit from direct subjective ratings and can therefore be applied to assess the ESPiM model's efficacy. We present experiments and user studies showing that these models can measure potential eye-strain levels for individuals based on physical circumstances such as screen resolution and target positions over time.
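
    The abstract does not state the FELiX, IDEA, or ESPiM formulas, so the following is a purely hypothetical stand-in that only illustrates the general shape of a compound index combining eye-tracking parameters with a subjective rating; the terms and weights are invented for illustration.

```python
def fatigue_index(fixation_durations_s, subjective_rating,
                  w_count=0.4, w_dur=0.4, w_subj=0.2):
    """Hypothetical compound eye-strain score (NOT a formula from the thesis).

    fixation_durations_s: durations of detected fixations, in seconds.
    subjective_rating:    self-reported strain, assumed on a 0-10 scale.
    The weights are arbitrary placeholders.
    """
    n = len(fixation_durations_s)
    mean_duration = sum(fixation_durations_s) / n if n else 0.0
    return w_count * n + w_dur * mean_duration + w_subj * subjective_rating
```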

    Investigating New Forms of Single-handed Physical Phone Interaction with Finger Dexterity

    With phones becoming more powerful and such an essential part of our lives, manufacturers are creating new device forms and interactions to better support even more diverse functions. A common goal is to enable a larger input space and expand the input vocabulary using new physical phone interactions beyond touchscreen input. This thesis explores how our hand and finger dexterity can expand physical phone interactions. To understand how we can physically manipulate a phone using the fine motor skills of the fingers, we identify and evaluate single-handed "dexterous gestures". Four manipulations are defined: shift, spin (yaw axis), rotate (roll axis), and flip (pitch axis), with a formative survey showing that all except flip have been performed for various reasons. A controlled experiment examines the speed, behaviour, and preference of these manipulations in the form of dexterous gestures, considering two directions and two movement magnitudes. Using a heuristic recognizer for spin, rotate, and flip, a one-week usability experiment finds that increased practice and familiarity improve the speed and comfort of dexterous gestures. Having confirmed that users can loosen their grip and perform gestures with finger dexterity, we investigate the performance of one-handed touch input on the side of a mobile phone. An experiment examines grip change and subjective preference when reaching for side targets using different fingers. Two follow-up experiments examine taps and flicks using the thumb and index finger in a new two-dimensional input space. We simulate a side-touch sensor with a combination of capacitive sensing and motion tracking to distinguish touches on the lower, middle, or upper edges. We further explore physical phone interaction with a new form factor by evaluating single-handed folding interactions suitable for "modern flip phones": smartphones with a bendable full-screen touch display. Three categories of interactions are identified: only-fold, touch-enhanced fold, and fold-enhanced touch, in which gestures are created using fold direction, fold magnitude, and touch position. A prototype evaluation device is built to resemble current flip phones, but with a modified spring system that enables folding in both directions. A study investigates performance and preference for 30 fold gestures, revealing which are most promising. Overall, our exploration shows that users can loosen their grip to physically interact with phones in new ways, and that these interactions could be practically integrated into daily phone applications.
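
    The axis definitions above (spin on yaw, rotate on roll, flip on pitch) suggest the shape of a gyroscope-driven heuristic recogniser like the one the thesis mentions. A minimal sketch; the axis column ordering, thresholds, and windowing are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

YAW, PITCH, ROLL = 0, 1, 2  # assumed column order of the gyroscope samples

def recognise_gesture(gyro_window, rate_thresh=3.0, dominance=2.0):
    """gyro_window: (N, 3) angular velocities in rad/s as (yaw, pitch, roll).
    Returns 'spin', 'flip', 'rotate', or None."""
    # Accumulate absolute angular velocity per axis over the window.
    motion = np.abs(gyro_window).sum(axis=0)
    axis = int(np.argmax(motion))
    peak = np.abs(gyro_window[:, axis]).max()
    others = np.delete(motion, axis).max()
    # Require a fast movement that is clearly dominated by a single axis.
    if peak < rate_thresh or motion[axis] < dominance * max(others, 1e-6):
        return None
    return {YAW: "spin", PITCH: "flip", ROLL: "rotate"}[axis]
```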