6,948 research outputs found

    GazePrompt: Enhancing Low Vision People's Reading Experience with Gaze-Aware Augmentations

    Reading is a challenging task for low vision people. While conventional low vision aids (e.g., magnification) offer some support, they cannot fully address the difficulties low vision users face, such as locating the next line and distinguishing similar words. To fill this gap, we present GazePrompt, a gaze-aware reading aid that provides timely, targeted visual and audio augmentations based on users' gaze behaviors. GazePrompt includes two key features: (1) a Line-Switching support that highlights the line a reader intends to read; and (2) a Difficult-Word support that magnifies or reads aloud a word the reader hesitates over. Through a study with 13 low vision participants who performed well-controlled reading-aloud tasks with and without GazePrompt, we found that GazePrompt significantly reduced participants' line-switching time, reduced word recognition errors, and improved their subjective reading experiences. A follow-up silent-reading study showed that GazePrompt can enhance users' concentration and perceived comprehension of the reading content. We further derive design considerations for future gaze-based low vision aids.
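    As a thought experiment, here is a minimal sketch of how the Line-Switching support might detect an intended switch from raw gaze samples; the thresholds, field names, and logic are illustrative assumptions, not GazePrompt's published implementation:

        # Hypothetical line-switch detector: a large leftward gaze sweep
        # combined with a downward jump of roughly one line height is taken
        # as the reader moving to the next line. All thresholds illustrative.
        from dataclasses import dataclass

        @dataclass
        class GazeSample:
            x: float  # horizontal screen position (px)
            y: float  # vertical screen position (px)
            t: float  # timestamp (s)

        def detect_line_switch(prev: GazeSample, curr: GazeSample,
                               line_height: float) -> bool:
            leftward_sweep = prev.x - curr.x > 200  # px, illustrative
            downward_jump = 0.5 * line_height < (curr.y - prev.y) < 1.5 * line_height
            return leftward_sweep and downward_jump

        def line_index_at(y: float, first_line_top: float, line_height: float) -> int:
            """Map a vertical gaze position to the index of the line to highlight."""
            return int((y - first_line_top) // line_height)

    The Difficult-Word support could analogously track dwell time inside a word's bounding box and trigger magnification or text-to-speech once a hesitation threshold is exceeded.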

    Guiding pre-service teachers' visual attention through instructional settings: an eye-tracking study

    In complex classroom situations, pre-service teachers often struggle to identify relevant information. Consequently, classroom videos are widely used to support pre-service teachers' professional vision. However, pre-service teachers need instructional guidance to attend to relevant information in classroom videos. Previous studies identified a specific task instruction and prompts as promising instructions for enhancing pre-service teachers' professional vision. This mixed-methods eye-tracking study compared pre-service teachers' visual attention to information relevant to classroom management under one of three instructional conditions. Participants viewed two classroom videos and clicked a button whenever they identified situations relevant to classroom management in the videos. They received either (1) a specific task instruction before video viewing (n = 45), (2) attention-guiding prompts during video viewing (n = 45), or (3) a general task instruction before video viewing (n = 45) as a control group. We expected a specific task instruction and prompts to guide participants' visual attention better than a general task instruction before video viewing, because both experimental conditions contained informational cues to focus on specific dimensions of classroom management. As both a specific task and prompts were assumed to activate cognitive schemata, resulting in knowledge-based processing of visual information, we expected the specific task instruction to have an attention-guiding effect similar to that of prompts during video viewing. Measurements were conducted on an outcome level (mouse clicks) and on a process level (eye tracking). Findings confirmed our hypotheses on the outcome level and in part on the process level regarding participants' gaze relational index. Nevertheless, in a disruptive classroom situation, participants in the prompting condition showed better attentional performance than participants in the other conditions, with a higher number of fixations and a shorter time to first fixation on disruptive students. Further qualitative analyses revealed that, when observing classroom videos without instructional guidance, pre-service teachers were less likely to identify disruptive situations in the video and more likely to attend to other aspects of classroom management concerning the teacher's actions. We discuss the advantages of both attention-guiding instructions for pre-service teacher education in terms of economy of implementation and salience of situations.
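    The two process-level measures highlighted here, number of fixations and time to first fixation on an area of interest (AOI), fall out directly from a fixation list; a minimal sketch, with hypothetical data layouts rather than the study's actual pipeline:

        # Illustrative AOI measures from eye-tracking data: fixation count
        # and time to first fixation. Data layout is assumed for this sketch.
        from typing import List, Optional, Tuple

        Fixation = Tuple[float, float, float]    # (x_px, y_px, onset_s)
        AOI = Tuple[float, float, float, float]  # (left, top, right, bottom) in px

        def in_aoi(fix: Fixation, aoi: AOI) -> bool:
            x, y, _ = fix
            left, top, right, bottom = aoi
            return left <= x <= right and top <= y <= bottom

        def fixation_count(fixations: List[Fixation], aoi: AOI) -> int:
            return sum(in_aoi(f, aoi) for f in fixations)

        def time_to_first_fixation(fixations: List[Fixation], aoi: AOI,
                                   stimulus_onset_s: float) -> Optional[float]:
            """Seconds from stimulus onset to the first fixation inside the
            AOI; None if the AOI was never fixated."""
            for f in fixations:
                if in_aoi(f, aoi):
                    return f[2] - stimulus_onset_s
            return None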

    In the user's eyes we find trust: Using gaze data as a predictor of trust in an artificial intelligence

    Trust is essential for our interactions with others, but also with artificial intelligence (AI) based systems. To understand whether a user trusts an AI, researchers need reliable measurement tools. However, currently discussed markers mostly rely on expensive and invasive sensors, like electroencephalograms, which may cause discomfort. The analysis of gaze data has been suggested as a convenient tool for trust assessment. However, the relationship between trust and several aspects of gaze behaviour is not yet fully understood. To provide more insight into this relationship, we propose an exploratory study in virtual reality in which participants perform a sorting task together with a simulated AI in the form of a simulated robotic arm embedded in a gaming environment. We discuss the potential benefits of this approach and outline our study design in this submission.
    Comment: Workshop submission of a proposed research project at TRAIT 2023 (held at CHI 2023 in Hamburg).
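    To illustrate what such a gaze-based trust measure could look like downstream, here is a sketch that predicts a binary trust rating from simple gaze features; the feature set, synthetic data, and model choice are assumptions for illustration, not the study's design:

        # Hypothetical sketch: a binary trust rating predicted from gaze
        # features (dwell time on the AI agent, fixation count, mean fixation
        # duration). Data are synthetic stand-ins; the features are assumed.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # One row per trial: [dwell_s, fixation_count, mean_fix_dur_s]
        X = rng.normal(loc=[2.0, 12.0, 0.25], scale=[0.8, 4.0, 0.05], size=(100, 3))
        # Synthetic labels: 1 = participant reported trusting the AI
        y = (X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.5, 100) > 3.2).astype(int)

        model = LogisticRegression().fit(X, y)
        print("coefficients:", model.coef_)  # feature weights in this toy model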

    Visuomotor control, eye movements, and steering: A unified approach for incorporating feedback, feedforward, and internal models

    The authors present an approach to the coordination of eye movements and locomotion in naturalistic steering tasks. It is based on recent empirical research, in particular on driver eye movements, that poses challenges for existing accounts of how we visually steer a course. They first analyze how the ideas of feedback and feedforward processes and internal models are treated in control-theoretic steering models within vision science and engineering, which share an underlying architecture but have historically developed in very separate ways. The authors then show how these traditions can be naturally (re)integrated with each other and with contemporary neuroscience, to better understand the skill and gaze strategies involved. They then propose a conceptual model that (a) gives a unified account of the coordination of gaze and steering control, (b) incorporates higher-level path planning, and (c) draws on the literature on paired forward and inverse models in predictive control. Although each of these (a–c) has been considered before (also in the context of driving), integrating them into a single framework and the authors' multiple-waypoint identification hypothesis within that framework are novel. The proposed hypothesis is relevant to all forms of visually guided locomotion.
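    The feedback/feedforward distinction at the core of the argument can be made concrete with a textbook steering law: a feedforward curvature command read off the planned path (e.g., toward the next waypoint), corrected by feedback on lateral and heading error. The gains and kinematics below are standard conventions, not the authors' model:

        # Minimal feedback + feedforward steering sketch on unicycle
        # kinematics. Gains, signs, and geometry are illustrative assumptions.
        import math

        def steering_command(path_curvature: float, lateral_error: float,
                             heading_error: float,
                             k_lat: float = 0.5, k_head: float = 1.5) -> float:
            """Curvature command = feedforward (planned path curvature)
            + proportional feedback on lateral and heading error."""
            feedforward = path_curvature
            feedback = -k_lat * lateral_error - k_head * heading_error
            return feedforward + feedback

        def step(x: float, y: float, heading: float, speed: float,
                 curvature: float, dt: float = 0.05):
            """One Euler step of the vehicle under a curvature command."""
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
            heading += speed * curvature * dt
            return x, y, heading

    In this framing, a forward model would predict the sensory consequences of the commanded curvature, and gaze to the next waypoint would supply the feedforward term.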

    Understanding How Low Vision People Read Using Eye Tracking

    While low vision people can read with screen magnifiers, their reading experience is slow and unpleasant. Eye tracking has the potential to improve this experience by recognizing fine-grained gaze behaviors and providing more targeted enhancements. To inspire gaze-based low vision technology, we investigate a suitable method for collecting low vision users' gaze data via commercial eye trackers and thoroughly explore their challenges in reading based on their gaze behaviors. With an improved calibration interface, we collected the gaze data of 20 low vision participants and 20 sighted controls who performed reading tasks on a computer screen; low vision participants were also asked to read with different screen magnifiers. We found that, with an accessible calibration interface and data collection method, commercial eye trackers can collect gaze data of comparable quality from low vision and sighted people. Our study identified low vision people's unique gaze patterns during reading, building upon which we propose design implications for gaze-based low vision technology.
    Comment: In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23).
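    The gaze data quality comparison described here is conventionally reported as accuracy (mean angular offset from a validation target) and precision (sample-to-sample dispersion); a sketch of those two metrics, with screen geometry values that are placeholder assumptions:

        # Illustrative gaze-quality metrics: accuracy as mean angular offset
        # from a known validation target, precision as RMS sample-to-sample
        # angular difference. Screen geometry values are assumptions.
        import numpy as np

        def px_to_deg(offset_px, px_per_mm: float, distance_mm: float):
            """Convert on-screen offsets (px) to degrees of visual angle."""
            return np.degrees(np.arctan2(offset_px / px_per_mm, distance_mm))

        def accuracy_deg(gaze_px: np.ndarray, target_px: np.ndarray,
                         px_per_mm: float = 3.5, distance_mm: float = 600.0) -> float:
            offsets = np.linalg.norm(gaze_px - target_px, axis=1)
            return float(np.mean(px_to_deg(offsets, px_per_mm, distance_mm)))

        def precision_rms_deg(gaze_px: np.ndarray,
                              px_per_mm: float = 3.5, distance_mm: float = 600.0) -> float:
            diffs = np.linalg.norm(np.diff(gaze_px, axis=0), axis=1)
            return float(np.sqrt(np.mean(px_to_deg(diffs, px_per_mm, distance_mm) ** 2)))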