
    DynamicRead: Exploring Robust Gaze Interaction Methods for Reading on Handheld Mobile Devices under Dynamic Conditions

    Enabling real-time gaze interaction on handheld mobile devices has attracted significant attention in recent years. An increasing number of research projects have focused on sophisticated appearance-based deep learning models to improve the precision of gaze estimation on smartphones. This raises important research questions, including how gaze can be used in a real-time application, and which gaze interaction methods are preferable under dynamic conditions in terms of both user acceptance and reliable performance. To address these questions, we design four types of gaze scrolling techniques: three explicit techniques based on Gaze Gesture, Dwell time, and Pursuit, and one implicit technique based on reading speed, to support touch-free page scrolling in a reading application. We conduct a 20-participant user study under both sitting and walking settings. Our results reveal that the Gaze Gesture and Dwell time-based interfaces are more robust while walking, and that Gaze Gesture achieved consistently good usability scores without causing a high cognitive workload. (Accepted by ETRA 2023 as a full paper and as a journal paper in Proceedings of the ACM on Human-Computer Interaction.)
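
    The abstract does not give implementation details for any of the four techniques; the sketch below illustrates how a dwell-time scroll trigger might work. The 0.8 s threshold, the trigger-region geometry, and the DwellScroller API are illustrative assumptions, not values from the paper.

    import time

    DWELL_THRESHOLD_S = 0.8  # assumed dwell time; not specified in the abstract

    class DwellScroller:
        """Fires a page scroll when gaze rests inside a trigger region long enough."""

        def __init__(self, region, on_scroll):
            self.region = region        # (x, y, width, height) of the trigger region
            self.on_scroll = on_scroll  # callback that scrolls the page
            self._enter_time = None     # when gaze entered the region

        def update(self, gaze_x, gaze_y):
            x, y, w, h = self.region
            inside = x <= gaze_x <= x + w and y <= gaze_y <= y + h
            if not inside:
                self._enter_time = None
                return
            if self._enter_time is None:
                self._enter_time = time.monotonic()
            elif time.monotonic() - self._enter_time >= DWELL_THRESHOLD_S:
                self.on_scroll()
                self._enter_time = None  # re-arm after each scroll

    # Example: a trigger region along the bottom of a 1080x2340 phone screen.
    scroller = DwellScroller((0, 2100, 1080, 240), lambda: print("scroll down"))
    scroller.update(540, 2200)  # gaze enters the region; the dwell timer starts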

    Using Eye Gaze for Interactive Computing

    User input to desktop and laptop computers is largely via the keyboard, pointing devices such as a mouse or trackpad, and to some extent the touchscreen. Thus far, using eye gaze as an input based on data from on-device cameras (e.g., laptop or desktop cameras) requires calibration steps; even so, since such cameras are relatively far from the user, the resulting input lacks precision. This disclosure describes techniques for human-computer interaction based on eye gaze derived from the user's smart glasses. With user permission, the user's eye movements, as captured by the camera on the smart glasses, provide an additional interactive channel to the user's other devices (e.g., laptop, desktop, etc.) to enable gaze-based actions such as scroll focus, text focus, notification dismissal, window focus, auto-scrolling of text, etc.
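
    The disclosure describes this gaze channel at a high level only; as a rough sketch of the plumbing it implies, gaze events recognised on the glasses could be routed to actions on a paired device. The event names, router class, and actions below are hypothetical.

    from typing import Callable, Dict

    class GazeEventRouter:
        """Maps gaze events recognised on smart glasses to actions on a paired device."""

        def __init__(self) -> None:
            self._handlers: Dict[str, Callable[[], None]] = {}

        def register(self, event: str, handler: Callable[[], None]) -> None:
            self._handlers[event] = handler

        def dispatch(self, event: str) -> None:
            handler = self._handlers.get(event)
            if handler is not None:
                handler()

    router = GazeEventRouter()
    router.register("gaze_on_notification", lambda: print("dismiss notification"))
    router.register("gaze_past_last_line", lambda: print("auto-scroll text"))
    router.dispatch("gaze_past_last_line")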

    Javardeye: Gaze input for cursor control in a structured editor

    Programmers spend considerable time jumping between editing positions in source code, often using the mouse and/or arrow keys to place the cursor at the desired position. We developed Javardeye, a prototype code editor for Java integrated with eye-tracking technology for controlling the editing cursor. Our implementation is based on a structured editor, leveraging its particular characteristics and augmenting it with a secondary, latent cursor controlled by eye gaze. This paper describes the main design decisions and tradeoffs of our approach.
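
    As a minimal illustration of the latent-cursor idea described above (the data model and promotion rule are assumptions, not Javardeye's actual implementation): eye gaze silently moves a secondary cursor, which becomes the editing cursor only when the programmer starts typing.

    from dataclasses import dataclass

    @dataclass
    class EditorState:
        edit_cursor: int = 0    # position where keystrokes are inserted
        latent_cursor: int = 0  # position tracked by eye gaze

        def on_gaze(self, position: int) -> None:
            self.latent_cursor = position  # gaze repositions the latent cursor only

        def on_keypress(self, char: str, buffer: list) -> None:
            # Promote the latent cursor before inserting, so typing lands where
            # the programmer is looking rather than where they last clicked.
            self.edit_cursor = self.latent_cursor
            buffer.insert(self.edit_cursor, char)
            self.edit_cursor += 1
            self.latent_cursor = self.edit_cursor

    buffer = list("return x")
    state = EditorState()
    state.on_gaze(position=7)       # the programmer looks just before "x"
    state.on_keypress("-", buffer)
    print("".join(buffer))          # return -x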

    Reading with a Loss of Central Vision


    Reading with peripheral vision: A comparison of reading dynamic scrolling and static text with a simulated central scotoma

    Horizontally scrolling text is, in theory, ideally suited to the viewing strategies recommended to improve reading performance under conditions of central vision loss, such as macular disease, although it is largely unproven in this regard. This study investigated whether scrolling text produced an observable improvement in reading performed under eccentric viewing in an artificial scotoma paradigm. Participants (n=17) read scrolling and static text with a central artificial scotoma controlled by an eye tracker. Scrolling text, compared with static text, improved measures of reading accuracy and adherence to eccentric viewing strategies. These findings illustrate the potential of scrolling text as a reading aid for those with central vision loss.
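
    As a toy illustration of the gaze-contingent paradigm used here (the character-grid rendering and two-cell radius are assumptions, not the study's parameters): each frame, a mask is redrawn at the current gaze position so that central vision is always occluded and the participant must read with peripheral vision.

    SCOTOMA_RADIUS = 2  # radius of the simulated central scotoma, in character cells (assumed)

    def render_with_scotoma(line: str, gaze_col: int) -> str:
        """Mask the characters within SCOTOMA_RADIUS of the gazed-at column."""
        return "".join(
            "#" if abs(col - gaze_col) <= SCOTOMA_RADIUS else ch
            for col, ch in enumerate(line)
        )

    # The eye tracker reports where the reader is fixating; the mask follows.
    print(render_with_scotoma("Scrolling text may aid eccentric viewing.", gaze_col=10))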

    Head-mounted displays and dynamic text presentation to aid reading in macular disease

    The majority of individuals living with significant sight loss have residual vision that can be enhanced using low vision aids. Smart glasses and smartphone-based headsets, both increasing in prevalence, are proposed as a low vision aid platform. Three novel tests for measuring the visibility of displays to partially sighted users are described, along with a questionnaire for assessing subjective preference. Most individuals tested, save those with the weakest vision, were able to see and read from both a smart glasses screen and a smartphone screen mounted in a headset. A scheme for biomimetic scrolling, a text presentation strategy that translates natural eye movement into text movement, is described. It enables normally sighted readers to read at five times the rate of continuous scrolling and is faster than rapid serial visual presentation for individuals with macular disease. With text presentation on the smart glasses optimised to the user, individuals with macular disease read on average 65% faster than with their habitual optical aid. It is concluded that this aid demonstrates clear benefit over commonly used devices and is recommended for further development towards widespread availability.
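
    The thesis describes biomimetic scrolling only as translating natural eye movement into text movement; one plausible reading is a control loop in which the text shifts so that the currently read word stays near a fixed locus. The gain and dead zone below are assumptions for illustration.

    GAIN = 0.5         # fraction of the gaze offset converted into text movement (assumed)
    DEAD_ZONE_PX = 30  # gaze jitter inside this band produces no movement (assumed)

    def text_shift(gaze_x: float, locus_x: float) -> float:
        """Return how far to shift the text left so reading stays near locus_x."""
        offset = gaze_x - locus_x
        if abs(offset) <= DEAD_ZONE_PX:
            return 0.0
        return GAIN * offset  # eye moves right -> text scrolls left proportionally

    print(text_shift(gaze_x=420.0, locus_x=300.0))  # 60.0 px of leftward text movement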

    VRDoc: Gaze-based Interactions for VR Reading Experience

    Virtual reality (VR) offers the promise of an infinite office and remote collaboration; however, existing VR interactions do not strongly support reading, one of the most essential tasks for knowledge workers. This paper presents VRDoc, a set of gaze-based interaction methods designed to improve the reading experience in VR. We introduce three key components: Gaze Select-and-Snap for document selection, Gaze MagGlass for enhanced text legibility, and Gaze Scroll for ease of document traversal. We implemented each of these tools using a commodity VR headset with eye tracking. In a series of user studies with 13 participants, we show that VRDoc makes VR reading both more efficient (p < 0.01) and less demanding (p < 0.01), and, when given a choice, users preferred our tools over current VR reading methods. (8 pages, 4 figures; ISMAR 2022.)
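
    The abstract does not spell out Gaze Scroll's mapping; a plausible sketch is edge-triggered scrolling, where looking near the top or bottom of the document viewport scrolls in that direction. The band width and maximum speed are assumptions, not VRDoc's parameters.

    EDGE_BAND = 0.15  # top/bottom band of the viewport that triggers scrolling (assumed)
    MAX_SPEED = 2.0   # lines per update when gazing at the very edge (assumed)

    def scroll_velocity(gaze_y: float) -> float:
        """Map a normalised gaze height (0 = top, 1 = bottom) to a scroll velocity."""
        if gaze_y < EDGE_BAND:        # looking near the top: scroll up
            return -MAX_SPEED * (EDGE_BAND - gaze_y) / EDGE_BAND
        if gaze_y > 1.0 - EDGE_BAND:  # looking near the bottom: scroll down
            return MAX_SPEED * (gaze_y - (1.0 - EDGE_BAND)) / EDGE_BAND
        return 0.0                    # mid-viewport reading: no scrolling

    print(round(scroll_velocity(0.95), 2))  # fast downward scroll near the bottom edge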

    The feasibility of capturing learner interactions based on logs informed by eye-tracking and remote observation studies

    Two small studies, one an eye-tracking study and the other a remote observation study, were conducted to investigate ways to identify two kinds of online learner interaction: users flicking through web pages in a "browsing" action, and users engaging with the content of a page in a "learning" action. Video data from four participants of the two studies, using OpenLearn open educational resource materials, offers some evidence for differentiating between "browsing" and "learning". Further analysis of the data considered possible ways of identifying similar browsing and learning actions based on automatic user logs. This research provides a specification for researching the pedagogical value of capturing and transforming logs of user interactions into external representations, and the paper examines the feasibility and challenges of doing so, giving examples of external representations such as sequence flow charts, timelines, and tables of logs. The objective user information these representations offer has potential for understanding user interactions, both to aid design and to improve feedback, and should therefore be given greater consideration alongside more subjective ways of researching user experience.
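
    One plausible way to operationalise the browsing/learning distinction from automatic logs, consistent with the paper's aims though not taken from it, is to threshold dwell time and engagement signals per page visit; the thresholds below are assumptions.

    from dataclasses import dataclass

    @dataclass
    class PageVisit:
        url: str
        dwell_seconds: float
        scroll_events: int

    def classify(visit: PageVisit, min_dwell: float = 45.0, min_scrolls: int = 3) -> str:
        """Short, shallow visits count as browsing; long, engaged ones as learning."""
        if visit.dwell_seconds >= min_dwell and visit.scroll_events >= min_scrolls:
            return "learning"
        return "browsing"

    log = [PageVisit("openlearn/unit1", 8.0, 1), PageVisit("openlearn/unit2", 130.0, 12)]
    print([(v.url, classify(v)) for v in log])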

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games, followed by a brief discussion of why the cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified to advance current understanding of multimodal serious game production and to explore possible areas for new applications.

    Utilizing a Human Computer Interaction Technique for Enabling Non-Disruptive Exploration of App Contents and Capabilities in a Query Recommendation System

    Conventional techniques for launching apps do not provide any facility to quickly launch app contents or app capabilities. This disclosure describes techniques for quickly launching app capabilities and contents, surfacing those that yield a high user interaction rate (UIR) without harming the total-clicks metric. In contrast to conventional techniques, app contents and capabilities with high potential UIR are adaptively determined using heuristics and user-permitted interaction data, at low computational and UI cost. A quick scroll button provides a scroll interface with suggestions that would otherwise be hidden behind a virtual keyboard or not displayed at all. By directly enabling scrolling through and selection of relevant, popular, or personalized app contents and capabilities, the user interface provides enhanced convenience and speed of operation.
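
    As a hypothetical sketch of how suggestions might be ranked by potential UIR (the data model and smoothing constants are assumptions, not the disclosure's method): estimate each suggestion's interaction rate per impression, smoothed so that rarely shown items are not over-ranked.

    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        label: str
        impressions: int
        interactions: int

        def estimated_uir(self, prior: float = 0.1, weight: int = 10) -> float:
            # Smoothed interaction rate; the prior keeps rarely shown
            # suggestions from dominating on tiny samples.
            return (self.interactions + prior * weight) / (self.impressions + weight)

    suggestions = [
        Suggestion("order ride home", impressions=200, interactions=60),
        Suggestion("open playlist", impressions=50, interactions=5),
    ]
    for s in sorted(suggestions, key=lambda s: s.estimated_uir(), reverse=True):
        print(s.label, round(s.estimated_uir(), 3))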