Human-Computer Interaction in Extended Reality: Exploring the Impact of Visual Guidance on User Performance and Human Factors

Abstract

Extended Reality (XR) technologies, such as Augmented Reality (AR) and Virtual Reality (VR), are increasingly used for workforce augmentation in industrial environments. The ability of XR technologies to overlay visual cues within a user’s field of view enables unique forms of instructional design with the potential to improve operational performance. Yet usability challenges are reported as a major obstacle to XR adoption, pointing towards deficits in how Human-Computer Interaction (HCI) is designed. XR Visual Guidance (XRVG) appears to be a particularly promising avenue for enhancing usability. Currently, however, the literature lacks both reliable effect sizes quantifying the impact of XRVG on user performance and human factors and a structured approach to guide its practical application. To address this gap, this PhD thesis investigates how XRVG may be used to improve user performance and human factors, drawing on three complementary mixed-method studies: one exploratory user study and two between-subjects experiments, featuring both AR and VR implementations and a total of 258 participants from a variety of backgrounds to increase generalisability. The research is guided by a novel XRVG framework, a concept developed in this thesis, to support the implementation of visual cues in XR. The experiments reveal a mixed impact of XRVG: despite a consistent reduction in task completion time and significant increases in usability and perceived helpfulness, an anticipated reduction in cognitive load could not be confirmed. Furthermore, the two experiments provide mixed evidence on the number of mistakes made. This conflicting evidence, likely stemming from a reduction in placement accuracy, is discussed in the context of depth of processing. Moreover, the investigation reveals a detrimental effect of occlusion, defined as the obstruction of the user’s field of view by visual cues. Occlusion is quantified and successfully mitigated by implementing a new feature designed to avoid it. Overall, this thesis contributes the most comprehensive empirical study of Visual Guidance in XR to date: a new framework for the more systematic study of XRVG, and comprehensive new empirical insights into the effectiveness of Visual Guidance in XR for procedural industrial tasks.
