MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration
Remote collaborative work has become pervasive in many settings, from
engineering to medical professions. Users are immersed in virtual environments
and communicate through life-sized avatars that enable face-to-face
collaboration. Within this context, users often collaboratively view and
interact with virtual 3D models, for example, to assist in designing new
devices such as customized prosthetics, vehicles, or buildings. However,
discussing shared 3D content face-to-face has various challenges, such as
ambiguities, occlusions, and different viewpoints that all decrease mutual
awareness, leading to decreased task performance and increased errors. To
address this challenge, we introduce MAGIC, a novel approach for understanding
pointing gestures in a face-to-face shared 3D space, improving mutual
understanding and awareness. Our approach distorts the remote user's gestures
to correctly reflect them in the local user's reference space when
face-to-face. We introduce a novel metric called pointing agreement to measure
what two users perceive in common when using pointing gestures in a shared 3D
space. Results from a user study suggest that MAGIC significantly improves
pointing agreement in face-to-face collaboration settings, improving
co-presence and awareness of interactions performed in the shared space. We
believe that MAGIC improves remote collaboration by enabling simpler
communication mechanisms and better mutual awareness.

Comment: Presented at IEEE VR 202
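The pointing-agreement metric is only named above, so the following is a rough, hypothetical sketch rather than the paper's actual definition: treat each user's pointing gesture as a ray in the shared 3D space, and say the two users agree when the rays' closest points of approach lie within a threshold distance.

```python
import numpy as np

def ray_closest_distance(o1, d1, o2, d2):
    """Distance between the closest points of two pointing rays.
    A small distance suggests both users indicate the same spot.
    (Illustrative only; the paper's metric may be defined differently.)"""
    o1, o2 = np.asarray(o1, float), np.asarray(o2, float)
    d1 = np.asarray(d1, float); d1 /= np.linalg.norm(d1)
    d2 = np.asarray(d2, float); d2 /= np.linalg.norm(d2)
    w0 = o1 - o2
    b = d1 @ d2                       # cos of angle between directions
    d, e = d1 @ w0, d2 @ w0
    denom = 1.0 - b * b               # a = c = 1 for unit directions
    if denom < 1e-9:                  # near-parallel rays: perpendicular offset
        return float(np.linalg.norm(w0 - (w0 @ d1) * d1))
    t1 = (b * e - d) / denom
    t2 = (e - b * d) / denom
    t1, t2 = max(t1, 0.0), max(t2, 0.0)   # rays, not infinite lines
    p1, p2 = o1 + t1 * d1, o2 + t2 * d2
    return float(np.linalg.norm(p1 - p2))

def agree(o1, d1, o2, d2, threshold=0.1):
    """Binary pointing agreement at a distance threshold (scene units)."""
    return ray_closest_distance(o1, d1, o2, d2) <= threshold
```

For example, two rays cast from different origins toward the same model vertex yield a near-zero closest distance (agreement), while parallel rays offset by half a metre do not.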
Designing to Support Workspace Awareness in Remote Collaboration using 2D Interactive Surfaces
Increasing distributions of the global workforce are leading to collaborative work among remote coworkers. The emergence of such remote collaborations is essentially supported by technology advancements of screen-based devices, ranging from tablet or laptop to large displays. However, these devices, especially personal and mobile computers, still suffer from certain limitations caused by their form factors that hinder supporting workspace awareness through non-verbal communication such as bodily gestures or gaze. This thesis thus aims to design novel interfaces and interaction techniques to improve remote coworkers' workspace awareness through such non-verbal cues using 2D interactive surfaces.

The thesis starts off by exploring how visual cues support workspace awareness in facilitated brainstorming of hybrid teams of co-located and remote coworkers. Based on insights from this exploration, the thesis introduces three interfaces for mobile devices that help users maintain and convey their workspace awareness with their coworkers. The first interface is a virtual environment that allows a remote person to effectively maintain his/her awareness of his/her co-located collaborators' activities while interacting with the shared workspace. To help a person better express his/her hand gestures in remote collaboration using a mobile device, the second interface presents a lightweight add-on for capturing hand images on and above the device's screen and overlaying them on collaborators' devices to improve their workspace awareness. The third interface strategically leverages the entire screen space of a conventional laptop to better convey a remote person's gaze to his/her co-located collaborators.
Building on top of these three interfaces, the thesis envisions an interface that supports a person using a mobile device to effectively collaborate with remote coworkers working with a large display. Together, these interfaces demonstrate the possibilities to innovate on commodity devices to offer richer non-verbal communication and better support workspace awareness in remote collaboration.
Tutor In-sight: Guiding and Visualizing Students' Attention with Mixed Reality Avatar Presentation Tools
Remote conferencing systems are increasingly used to supplement or even replace in-person teaching. However, prevailing conferencing systems restrict the teacher's representation to a webcam live-stream, hamper the teacher's use of body language, and result in students' decreased sense of co-presence and participation. While Virtual Reality (VR) systems may increase student engagement, the teacher may not have the time or expertise to conduct the lecture in VR. To address this issue and bridge the requirements between students and teachers, we have developed Tutor In-sight, a Mixed Reality (MR) avatar augmented into the student's workspace based on four design requirements derived from the existing literature, namely: integrated virtual with physical space, improved teacher's co-presence through avatar, direct attention with auto-generated body language, and usable workflow for teachers. Two user studies were conducted from the perspectives of students and teachers to determine the advantages of Tutor In-sight in comparison to two existing conferencing systems, Zoom (video-based) and Mozilla Hubs (VR-based). The participants of both studies favoured Tutor In-sight. Among others, this main finding indicates that Tutor In-sight satisfied the needs of both teachers and students. In addition, the participants' feedback was used to empirically determine the four main teacher requirements and the four main student requirements in order to improve the future design of MR educational tools.
A mixed reality telepresence system for collaborative space operation
This paper presents a Mixed Reality system that results from the integration of a telepresence system and an application to improve collaborative space exploration. The system combines free viewpoint video with immersive projection technology to support non-verbal communication, including eye gaze, inter-personal distance and facial expression. Importantly, these can be interpreted together as people move around the simulation, maintaining natural social distance. The application is a simulation of Mars, within which the collaborators must come to agreement over, for example, where the Rover should land and go.
The first contribution is the creation of a Mixed Reality system supporting contextualization of non-verbal communication. Two technological contributions are prototyping a technique to subtract a person from a background that may contain physical objects and/or moving images, and a lightweight texturing method for multi-view rendering which provides balance in terms of visual and temporal quality. A practical contribution is the demonstration of pragmatic approaches to sharing space between display systems of distinct levels of immersion. A research tool contribution is a system that allows comparison of conventional authored and video-based reconstructed avatars, within an environment that encourages exploration and social interaction. Aspects of system quality, including the communication of facial expression and end-to-end latency, are reported.
Enhanced Virtuality: Increasing the Usability and Productivity of Virtual Environments
With steadily increasing display resolution, more precise tracking, and falling prices, Virtual Reality (VR) systems are on the verge of establishing themselves successfully in the market. Various tools help developers create complex multi-user interactions within adaptive virtual environments. However, the spread of VR systems also brings additional challenges: diverse input devices with unfamiliar shapes and button layouts prevent intuitive interaction. Moreover, the limited feature set of existing software forces users to fall back on conventional PC- or touch-based systems. Collaborating with other users at the same location also poses challenges regarding the calibration of different tracking systems and collision avoidance. In remote collaboration, interaction is further affected by latency and connection losses. Finally, users have different requirements for the visualization of content, e.g., size, orientation, color, or contrast, within the virtual worlds. A strict replication of real environments in VR wastes potential and will not make it possible to accommodate users' individual needs.
To address these problems, this thesis presents solutions in the areas of input, collaboration, and augmentation of virtual worlds and users, aiming to increase the usability and productivity of VR. First, PC-based hardware and software are transferred into the virtual world to preserve the familiarity and feature set of existing applications in VR. Virtual proxies of physical devices, e.g., keyboard and tablet, and a VR mode for applications allow users to carry real-world skills over into the virtual world. Furthermore, an algorithm is presented that enables the calibration of multiple co-located VR devices with high accuracy, low hardware requirements, and little effort. Since VR headsets block out the users' real environment, the relevance of full-body avatar visualization for collision avoidance and remote collaboration is demonstrated. In addition, personalized spatial or temporal modifications are presented that increase users' usability, work performance, and social presence. Discrepancies between the virtual worlds arising from personal adaptations are compensated by avatar redirection methods. Finally, some of the methods and findings are integrated into an exemplary application to illustrate their practical applicability.
This thesis shows that virtual environments can build on real-world skills and experiences to ensure familiar and simple interaction and collaboration among users. Moreover, individual augmentations of the virtual content and avatars make it possible to overcome real-world limitations and enhance the experience of VR environments.
Establishing Awareness through Pointing Gestures during Collaborative Decision-Making in a Wall-Display Environment
Sharing a physical environment, such as that of a wall-display, facilitates
gaining awareness of others' actions and intentions, thereby bringing benefits
for collaboration. Previous studies have provided first insights on awareness
in the context of tabletops or smaller vertical displays. This paper seeks to
advance the current understanding on how users share awareness information in
wall-display environments and focusses on mid-air pointing gestures as a
foundational part of communication. We present a scenario dealing with the
organization of medical supply chains in crisis situations, and report on the
results of a user study with 24 users, split into 6 groups of 4, performing
several tasks. We investigate pointing gestures and identify three subtypes
used as awareness cues during face-to-face collaboration: narrative pointing,
loose pointing, and sharp pointing. Our observations show that reliance on
gesture subtypes varies across participants and groups, and that sometimes
vague pointing is sufficient to support verbal negotiations.

Comment: © Authors | ACM 2023. This is the author's version of the
work. It is posted here for your personal use. Not for redistribution. The
definitive Version of Record was published in the CHI'23 proceedings,
http://dx.doi.org/10.1145/3544549.358583
Distant pointing in desktop collaborative virtual environments
Deictic pointing—pointing at things during conversations—is natural and ubiquitous in human communication. Deictic pointing is important in the real world; it is also important in collaborative virtual environments (CVEs) because CVEs are 3D virtual environments that resemble the real world. CVEs connect people from different locations, allowing them to communicate and collaborate remotely. However, the interaction and communication capabilities of CVEs are not as good as those in the real world. In CVEs, people interact with each other using avatars (the visual representations of users). One problem with avatars is that they are not expressive enough when compared to what we can do in the real world. In particular, deictic pointing has many limitations and is not well supported.
This dissertation focuses on improving the expressiveness of distant pointing—where referents are out of reach—in desktop CVEs. This is done by developing a framework that guides the design and development of pointing techniques; by identifying important aspects of distant pointing through observation of how people point at distant referents in the real world; by designing, implementing, and evaluating distant-pointing techniques; and by providing a set of guidelines for the design of distant pointing in desktop CVEs.
The evaluations of distant-pointing techniques examine whether pointing without extra visual effects (natural pointing) has sufficient accuracy; whether people can control free arm movement (free pointing) along with other avatar actions; and whether free and natural pointing are useful and valuable in desktop CVEs.
Overall, this research provides better support for deictic pointing in CVEs by improving the expressiveness of distant pointing. With better pointing support, gestural communication can be more effective and can ultimately enhance the primary function of CVEs—supporting distributed collaboration.
Visual Guidance for User Placement in Avatar-Mediated Telepresence between Dissimilar Spaces
Rapid advances in technology gradually realize immersive mixed-reality (MR)
telepresence between distant spaces. This paper presents a novel visual
guidance system for avatar-mediated telepresence, directing users to optimal
placements that facilitate the clear transfer of gaze and pointing contexts
through remote avatars in dissimilar spaces, where the spatial relationship
between the remote avatar and the interaction targets may differ from that of
the local user. Representing the spatial relationship between the user/avatar
and interaction targets with angle-based interaction features, we assign
recommendation scores of sampled local placements as their maximum feature
similarity with remote placements. These scores are visualized as color-coded
2D sectors to inform the users of better placements for interaction with
selected targets. In addition, virtual objects of the remote space are
overlapped with the local space for the user to better understand the
recommendations. We examine whether the proposed score measure agrees with the
actual user perception of the partner's interaction context and find a score
threshold for recommendation through user experiments in virtual reality (VR).
A subsequent user study in VR investigates the effectiveness and perceptual
overload of different combinations of visualizations. Finally, we conduct a
user study in an MR telepresence scenario to evaluate the effectiveness of our
method in real-world applications.
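The abstract names angle-based interaction features and scoring by maximum feature similarity without giving formulas, so the following Python sketch is one hypothetical reading, not the paper's implementation: a placement's feature is the set of angles subtended at the user's position by pairs of interaction targets, and a sampled local placement's recommendation score is its best similarity to any remote placement.

```python
import numpy as np
from itertools import combinations

def angle_features(pos, targets):
    """Hypothetical angle-based feature: the angle subtended at `pos`
    by every pair of interaction targets (sorted, so order-invariant)."""
    dirs = [np.asarray(t, float) - np.asarray(pos, float) for t in targets]
    dirs = [d / np.linalg.norm(d) for d in dirs]
    angles = [np.arccos(np.clip(a @ b, -1.0, 1.0))
              for a, b in combinations(dirs, 2)]
    return np.sort(angles)

def recommendation_score(local_pos, remote_positions,
                         targets_local, targets_remote):
    """Score a sampled local placement as its maximum feature similarity
    over the remote placements; similarity = 1 / (1 + feature distance)."""
    f_local = angle_features(local_pos, targets_local)
    best = 0.0
    for rp in remote_positions:
        f_remote = angle_features(rp, targets_remote)
        sim = 1.0 / (1.0 + np.linalg.norm(f_local - f_remote))
        best = max(best, sim)
    return best
```

Under this reading, a local placement that sees the targets at the same relative angles as the remote avatar scores 1.0, and the scores could then be binned into the color-coded 2D sectors the abstract describes.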