MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration
Remote collaborative work has become pervasive in many settings, from
engineering to medical professions. Users are immersed in virtual environments
and communicate through life-sized avatars that enable face-to-face
collaboration. Within this context, users often collaboratively view and
interact with virtual 3D models, for example, to assist in designing new
devices such as customized prosthetics, vehicles, or buildings. However,
discussing shared 3D content face-to-face has various challenges, such as
ambiguities, occlusions, and different viewpoints that all decrease mutual
awareness, leading to decreased task performance and increased errors. To
address this challenge, we introduce MAGIC, a novel approach for understanding
pointing gestures in a face-to-face shared 3D space, improving mutual
understanding and awareness. Our approach distorts the remote user's gestures
to correctly reflect them in the local user's reference space when
face-to-face. We introduce a novel metric called pointing agreement to measure
what two users perceive in common when using pointing gestures in a shared 3D
space. Results from a user study suggest that MAGIC significantly improves
pointing agreement in face-to-face collaboration settings, improving
co-presence and awareness of interactions performed in the shared space. We
believe that MAGIC improves remote collaboration by enabling simpler
communication mechanisms and better mutual awareness.
Comment: Presented at IEEE VR 202
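The abstract names a pointing-agreement metric but does not define it. As a rough illustration only (the formulation, function names, and ray-to-target rule below are our assumptions, not the paper's definition), one plausible reading is the fraction of trials in which both users' pointing rays select the same target in the shared 3D space:

```python
import numpy as np

def nearest_target(origin, direction, targets):
    """Return the index of the target closest to a pointing ray.

    Hypothetical selection rule: each target is scored by its
    perpendicular distance to the ray from `origin` along `direction`.
    """
    d = direction / np.linalg.norm(direction)
    rel = targets - origin              # vectors from ray origin to each target
    t = rel @ d                         # projection lengths along the ray
    t = np.maximum(t, 0.0)              # targets behind the user stay far away
    closest = origin + np.outer(t, d)   # closest point on the ray per target
    dist = np.linalg.norm(targets - closest, axis=1)
    return int(np.argmin(dist))

def pointing_agreement(trials, targets):
    """Fraction of trials where both users' rays select the same target.

    `trials` is a list of ((origin_a, dir_a), (origin_b, dir_b)) ray pairs,
    one pair per pointing gesture.
    """
    hits = sum(
        nearest_target(oa, da, targets) == nearest_target(ob, db, targets)
        for (oa, da), (ob, db) in trials
    )
    return hits / len(trials)
```

Under this sketch, MAGIC's gesture distortion would aim to raise the agreement score by warping the rendered remote ray so that both users' rays resolve to the same target despite their mirrored viewpoints.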
Comparing visual representations of collaborative map interfaces for immersive virtual environments
Virtual reality offers unique benefits to support remote collaboration. However, the way the scenario is represented and the team interacts can influence the effectiveness of a collaborative task. In this context, this research explores the benefits and limitations of two different visual representations of the collaboration space, shared experience and shared workspace, in the specific case of map-based collaboration. Shared experience aims at reproducing face-to-face collaboration in a realistic way, whilst shared workspace translates the functionalities of 2D collaborative spaces to the virtual world. The goal is to understand whether sophisticated interfaces with realistic avatars are necessary, or if simpler solutions might be enough to support efficient collaboration. We performed a user study (n = 24, 12 pairs) through a collaborative task with two roles in an emergency crisis intervention scenario that typically uses map-based interfaces. Although a shared experience scenario might provide a better personal experience to the user in terms of realism, our study provides insights suggesting that a shared workspace could be a more effective way to represent the scenario and improve collaboration.
This work was supported by the Spanish State Research Agency (Agencia Estatal de Investigación - AEI) under Grant Sense2makeSense PID2019-109388GB-I00 and Grant CrossColab PGC2018-101884-B-I100, and by the Madrid Government (Comunidad de Madrid - Spain) under the Multiannual Agreement with UC3M in the line of Excellence of University Professors (EPUC3M17) and in the context of the V PRICIT (Regional Programme of Research and Technological Innovation).
SUPPORTING MISSION PLANNING WITH A PERSISTENT AUGMENTED ENVIRONMENT
Includes supplementary material.
The Department of the Navy relies on current naval practices such as briefs, chat, and voice reports to provide an overall operational assessment of the fleet. That includes the cyber domain, or battlespace, depicting a single snapshot of a ship’s network equipment and service statuses. However, the information can be outdated and inaccurate, creating confusion among decision-makers in understanding the service and availability of equipment in the cyber domain. We examine the ability of a persistent augmented environment (PAE) and 3D visualization to support communications and cyber network operations, reporting, and resource management decision-making. We designed and developed a PAE prototype and tested the usability of its interface. Our study examined users’ comprehension of 3D visualization of the naval cyber battlespace onboard multiple ships and evaluated the PAE’s ability to assist in effective mission planning at the tactical level. The results are highly encouraging: the participants were able to complete their tasks successfully. They found the interface easy to understand and operate, and the prototype was characterized as a valuable alternative to their current practices. Our research provides insights into the feasibility and effectiveness of this novel form of data representation and its capability to support faster and improved situational awareness and decision-making in a complex operational technology (OT) environment between diverse communities.
Lieutenant, United States Navy
Approved for public release. Distribution is unlimited.
Virtual Meeting Rooms: From Observation to Simulation
Virtual meeting rooms are used for simulation of real meeting behavior and can show how people behave: how they gesture, move their heads and bodies, and direct their gaze during conversations. They are used for visualising models of meeting behavior, and they can be used for the evaluation of these models. They are also used to show the effects of controlling certain parameters on the behavior, and in experiments to see what the effect on communication is when various channels of information - speech, gaze, gesture, posture - are switched off or manipulated in other ways. The paper presents the various stages in the development of a virtual meeting room and illustrates its uses by presenting some results of experiments to see whether human judges can infer conversational roles in a virtual meeting situation when they only see the head movements of participants in the meeting.
The Effects of Sharing Awareness Cues in Collaborative Mixed Reality
Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator's attention and actions. In a user study (n = 32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray against a baseline condition showing only virtual representations of each collaborator's head and hands. Through a collaborative object finding and placing task, the results showed that awareness cues significantly improved user performance, usability, and subjective preferences, with the combination of the FoV frustum and the Head-gaze ray being best. This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues.
On Inter-referential Awareness in Collaborative Augmented Reality
For successful collaboration to occur, a workspace must support inter-referential awareness - the ability for one participant to refer to a set of artifacts in the environment, and for that reference to be correctly interpreted by others. While referring to objects in our everyday environment is a straightforward task, the non-tangible nature of digital artifacts presents us with new interaction challenges. Augmented reality (AR) is inextricably linked to the physical world, and it is natural to believe that the re-integration of physical artifacts into the workspace makes referencing tasks easier; however, we find that these environments combine the referencing challenges from several computing disciplines, which compound across scenarios. This dissertation presents our studies of this form of awareness in collaborative AR environments. It stems from our research in developing mixed reality environments for molecular modeling, where we explored spatial and multi-modal referencing techniques. To encapsulate the myriad of factors found in collaborative AR, we present a generic, theoretical framework and apply it to analyze this domain. Because referencing is a very human-centric activity, we present the results of an exploratory study which examines the behaviors of participants and how they generate references to physical and virtual content in co-located and remote scenarios; we found that participants refer to content using physical and virtual techniques, and that shared video is highly effective in disambiguating references in remote environments. By implementing user feedback from this study, a follow-up study explores how the environment can passively support referencing, where we discovered the role that virtual referencing plays during collaboration. A third study was conducted to better understand the effectiveness of giving and interpreting references using a virtual pointer; the results suggest the need for participants to be parallel with the arrow vector (strengthening the argument for shared viewpoints), as well as the importance of shadows in non-stereoscopic environments. Our contributions include a framework for analyzing the domain of inter-referential awareness, the development of novel referencing techniques, the presentation and analysis of our findings from multiple user studies, and a set of guidelines to help designers support this form of awareness.
Informal, desktop, audio-video communication
Audio-video systems have been developed to support many aspects and modes of human communication, but there has been little support for the informal, ongoing nature of communication that occurs often in real life. Most existing systems implement a call metaphor. This presents a barrier to initiating conversation that has a consequent effect on the formality of the resulting conversation. By contrast, with informal communication the channel is never explicitly opened or closed. This paper examines the range of previous systems and seeks to build on these to develop plans for supporting informal communication in a desktop environment.