    Integrative visual augmentation content and its optimization based on human visual processing

    In many daily visual tasks, our brain is remarkably good at prioritizing visual information. Nonetheless, it cannot always perform optimally, all the more so in an ever-evolving, demanding world. Supplementary visual guidance could enrich our lives in many ways, at both the individual and the population scale. Thanks to rapid technological advancements such as VR and AR systems, diverse visual cues show powerful potential to deliberately guide attention and improve users’ performance in daily tasks. Existing solutions, however, face the challenge of overloading the user with excessive visual information and overruling their natural strategy once digital content is superimposed on the real-world environment. Augmentation content of a subtle nature, designed around human visual processing, is an essential milestone towards AR systems that are adaptive and supportive rather than overwhelming. The focus of the present thesis was, thus, to investigate how the manipulation of spatial and temporal properties of visual cues affects human performance. Based on the findings of three studies published in peer-reviewed journals, I consider various challenging everyday settings and propose perceptually optimal augmentation solutions. I furthermore discuss possible extensions of the present work and recommendations for future research in this exciting field.

    In the user's eyes we find trust: Using gaze data as a predictor of trust in an artificial intelligence

    Trust is essential for our interactions with others, but also with artificial intelligence (AI) based systems. To understand whether a user trusts an AI, researchers need reliable measurement tools. However, currently discussed markers mostly rely on expensive and invasive sensors, such as electroencephalograms, which may cause discomfort. The analysis of gaze data has been suggested as a convenient tool for trust assessment. However, the relationship between trust and several aspects of gaze behaviour is not yet fully understood. To provide more insights into this relationship, we propose an exploratory study in virtual reality in which participants perform a sorting task together with a simulated AI, presented as a robotic arm embedded in a gaming environment. We discuss the potential benefits of this approach and outline our study design in this submission.
    Comment: Workshop submission of a proposed research project at TRAIT 2023 (held at CHI 2023 in Hamburg).
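
    Gaze-based markers in a study like this are typically derived from fixations. As a minimal sketch of how raw gaze samples could be reduced to such features, the Python snippet below implements a standard dispersion-based (I-DT) fixation detector; the thresholds, the sample format, and the choice of features are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: dispersion-based (I-DT) fixation detection on raw gaze
# samples, yielding fixation count and mean duration -- two gaze features
# often discussed in relation to trust. All thresholds are assumptions.
import numpy as np

def detect_fixations(t, x, y, max_dispersion=1.0, min_duration=0.1):
    """Return (start, end) times of fixations.

    t: timestamps in seconds; x, y: gaze position in degrees of visual angle.
    max_dispersion: max (x-range + y-range) within a fixation, in degrees.
    min_duration: minimum fixation duration in seconds.
    """
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        # Grow the window while the dispersion stays below the threshold.
        while j + 1 < n:
            xs, ys = x[i:j + 2], y[i:j + 2]
            if (xs.max() - xs.min()) + (ys.max() - ys.min()) > max_dispersion:
                break
            j += 1
        if t[j] - t[i] >= min_duration:
            fixations.append((t[i], t[j]))
            i = j + 1
        else:
            i += 1
    return fixations

# Example with synthetic 60 Hz data: one gaze shift between two targets.
t = np.arange(0, 2, 1 / 60)
x = np.where(t < 1, 0.0, 5.0) + np.random.normal(0, 0.05, t.size)
y = np.random.normal(0, 0.05, t.size)
fix = detect_fixations(t, x, y)
print(len(fix), np.mean([e - s for s, e in fix]))
```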

    Assessing the relationship between subjective trust, confidence measurements, and mouse trajectory characteristics in an online task

    Trust is essential for our interactions with others, but also with artificial intelligence (AI) based systems. To understand whether a user trusts an AI, researchers need reliable measurement tools. However, currently discussed markers mostly rely on expensive and invasive sensors, such as electroencephalograms, which may cause discomfort. The analysis of mouse trajectories has been suggested as a convenient tool for trust assessment. However, the relationship between trust, confidence, and mouse trajectories is not yet fully understood. To provide more insights into this relationship, we asked participants (n = 146) to rate whether several tweets were offensive while an AI suggested its assessment. Our results reveal which aspects of the mouse trajectory are affected by the users' subjective trust and confidence ratings; yet they indicate that these measures might not explain sufficient variance to be used on their own. This work examines a potential low-cost trust assessment in AI systems.
    Comment: Submitted to CHI 2023 and rejected.
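
    For readers unfamiliar with mouse-trajectory characteristics, the sketch below computes two features commonly used in this literature: maximum absolute deviation and area under the curve relative to the ideal straight path. The feature set and data format here are assumptions, not necessarily the measures analyzed in the paper.

```python
# Minimal sketch of two common mouse-trajectory features: maximum absolute
# deviation (MAD) and area under the curve (AUC), both measured relative to
# the straight line from the trajectory's start to its end point.
import numpy as np

def trajectory_features(points):
    """points: (n, 2) array of cursor positions sampled over one trial."""
    p = np.asarray(points, dtype=float)
    start, end = p[0], p[-1]
    d = end - start
    length = np.linalg.norm(d)
    rel = p - start
    # Perpendicular distance of each sample from the ideal straight path
    # (z-component of the 2D cross product, normalized by path length).
    deviations = (d[0] * rel[:, 1] - d[1] * rel[:, 0]) / length
    mad = np.max(np.abs(deviations))
    # Scalar projection along the ideal path, used as the integration axis.
    progress = rel @ d / length
    auc = np.trapz(np.abs(deviations), progress)
    return {"max_deviation": mad, "auc": auc}

print(trajectory_features([(0, 0), (1, 3), (4, 5), (10, 10)]))
```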

    Scene Context in Pick-and-Place Task: Raw Data

    Augmentation impacts strategy and gaze distribution in a dual-task interleaving scenario

    Not so sure? It's OK, just let me know! The influence of disclosing the AI's potential error to the user on the efficiency of user-AI collaboration

    This repository contains the raw data for the respective publication.

    Saliency-Aware Subtle Augmentation Improves Human Visual Search Performance in VR

    Visual search becomes challenging when the time to find the target is limited. Here we focus on how performance in visual search can be improved via a subtle, saliency-aware modulation of the scene. Specifically, we investigate whether blurring salient regions of the scene can improve participants' ability to find the target faster when the target is located in non-salient areas. A set of real-world omnidirectional images was displayed in virtual reality with a search target overlaid on the visual scene at a pseudorandom location. Participants performed a visual search task in three conditions defined by blur strength, where the task was to find the target as fast as possible. The mean search time and the proportion of trials in which participants failed to find the target were compared across conditions. Furthermore, the number and duration of fixations were evaluated. A significant effect of blur on behavioral and fixation metrics was found using linear mixed models. This study shows that it is possible to improve performance through a subtle, saliency-aware scene modulation in a challenging, realistic visual search scenario. The current work provides insight into potential visual augmentation designs aiming to improve users' performance in everyday visual search tasks.
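
    The core manipulation, blurring the salient regions of a scene, can be prototyped in a few lines. The sketch below uses OpenCV's spectral-residual saliency model (from opencv-contrib) to build a soft mask and blend a blurred copy of the image into the salient areas; the model choice, threshold, and blending scheme are assumptions, not the study's actual stimulus pipeline.

```python
# Minimal sketch: estimate a saliency map, then blur only the salient
# regions so attention is nudged toward non-salient areas. Threshold and
# kernel size are illustrative assumptions.
import cv2
import numpy as np

def blur_salient_regions(image, blur_ksize=31, threshold=0.5):
    """image: BGR uint8 array. Returns the image with salient areas blurred."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(image)  # float32 map in [0, 1]
    assert ok, "saliency computation failed"
    # Soft mask: 1 where salient, falling off smoothly elsewhere.
    mask = cv2.GaussianBlur((sal_map > threshold).astype(np.float32),
                            (blur_ksize, blur_ksize), 0)[..., None]
    blurred = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0)
    out = mask * blurred + (1 - mask) * image
    return out.astype(np.uint8)

# Usage: result = blur_salient_regions(cv2.imread("scene.png"))
```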

    Context matters during pick-and-place in VR: Impact on search and transport phases

    When considering external assistive systems for people with motor impairments, gaze has been shown to be a powerful tool: it anticipates motor actions and is promising for inferring an individual's intentions even before the action begins. Until now, the vast majority of studies investigating coordinated eye and hand movements in grasping tasks have focused on the manipulation of single objects that were not placed in a meaningful scene. Very little is known about the impact of scene context on how we manipulate objects in an interactive task. The present study investigated how scene context affects human object manipulation in a pick-and-place task in a realistic scenario implemented in VR. During the experiment, participants were instructed to find the target object in a room, pick it up, and transport it to a predefined final location. The impact of the scene context on the different stages of the task was then examined using head and hand movement data as well as eye tracking. As the main result, the scene context had a significant effect on the search and transport phases, but not on the reach phase of the task. The present work provides insights into the development of potential intention-predicting support systems, revealing the dynamics of pick-and-place behavior in a realistic, context-rich scenario.
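
    A phase-wise analysis like this requires splitting each trial into its stages. As a minimal sketch under assumed logging (timestamps, hand positions, and grasp/release events from the VR controller), the snippet below segments a trial into search, reach, and transport phases using a hand-speed threshold for reach onset; the event names, threshold, and log format are illustrative assumptions.

```python
# Minimal sketch: split one pick-and-place trial into search, reach, and
# transport phases from logged controller events and hand speed.
import numpy as np

def segment_trial(t, hand_pos, grasp_time, release_time, speed_onset=0.1):
    """t: timestamps (s); hand_pos: (n, 3) hand positions (m);
    grasp_time / release_time: controller event times (s)."""
    # Frame-to-frame hand speed, aligned with t[:-1].
    speed = np.linalg.norm(np.diff(hand_pos, axis=0), axis=1) / np.diff(t)
    # Reach onset: first pre-grasp sample where hand speed exceeds threshold
    # (a deliberately simple criterion for this sketch).
    pre_grasp = np.where((t[:-1] < grasp_time) & (speed > speed_onset))[0]
    reach_start = t[pre_grasp[0]] if pre_grasp.size else grasp_time
    return {
        "search":    (t[0], reach_start),        # locating the target
        "reach":     (reach_start, grasp_time),  # moving the hand to it
        "transport": (grasp_time, release_time), # carrying it to the goal
    }

# Example: phases = segment_trial(t, pos, grasp_time=2.4, release_time=4.1)
```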