CollabAR - Investigating the Mediating Role of Mobile AR Interfaces on Co-Located Group Collaboration
Mobile Augmented Reality (AR) technology is enabling new applications in domains including architecture, education, and medical work. Because AR interfaces project digital data, information, and models into the real world, they allow for new forms of collaborative work. However, despite the wide availability of AR applications, very little is known about how AR interfaces mediate and shape collaborative practices. This paper presents a study examining how a mobile AR (M-AR) interface for inspecting and discovering AR models of varying complexity impacts co-located group practices. We contribute new insights into how current mobile AR interfaces affect co-located collaboration. Our results show that M-AR interfaces induce high mental load and frustration, cause a high number of context switches between devices and group discussion, and overall lead to a reduction in group interaction. We present design recommendations for future work on collaborative AR interfaces.
Communication, Collaboration, and Coordination in a Co-located Shared Augmented Reality Game: Perspectives From Deaf and Hard of Hearing People
Co-located collaborative shared augmented reality (CS-AR) environments have gained considerable research attention, mainly focusing on design, implementation, accuracy, and usability. Yet a gap persists in our understanding of the accessibility and inclusivity of such environments for diverse user groups, such as deaf and hard of hearing (DHH) people. To investigate this domain, we used Urban Legends, a multiplayer game in a co-located CS-AR setting. We conducted a user study followed by one-on-one interviews with 17 DHH participants. Our findings reveal that participants used multimodal communication (verbal and non-verbal) before and during the game, which shaped the amount of collaboration among them, and that their coordination with AR components, their surroundings, and other participants improved across rounds. Drawing on these data, we propose design enhancements, including on-screen visuals and speech-to-text transcription, centered on participant perspectives and our analysis.