
    MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration

    Remote collaborative work has become pervasive in many settings, from engineering to medical professions. Users are immersed in virtual environments and communicate through life-sized avatars that enable face-to-face collaboration. Within this context, users often collaboratively view and interact with virtual 3D models, for example, to assist in designing new devices such as customized prosthetics, vehicles, or buildings. However, discussing shared 3D content face-to-face poses various challenges, such as ambiguities, occlusions, and differing viewpoints, all of which decrease mutual awareness, leading to decreased task performance and increased errors. To address this challenge, we introduce MAGIC, a novel approach for understanding pointing gestures in a face-to-face shared 3D space, improving mutual understanding and awareness. Our approach distorts the remote user's gestures to correctly reflect them in the local user's reference space when face-to-face. We introduce a novel metric, pointing agreement, to measure what two users perceive in common when using pointing gestures in a shared 3D space. Results from a user study suggest that MAGIC significantly improves pointing agreement in face-to-face collaboration settings, improving co-presence and awareness of interactions performed in the shared space. We believe that MAGIC improves remote collaboration by enabling simpler communication mechanisms and better mutual awareness. Comment: Presented at IEEE VR 202
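    The abstract does not give MAGIC's actual implementation, but the two core ideas, mirroring a remote pointing gesture into the local user's reference frame and scoring how often both users perceive the same target, can be sketched loosely. Everything below (the separating-plane assumption, the function names, the target-matching definition of agreement) is a hypothetical illustration, not the paper's method:

```python
import numpy as np

def reflect_pointing_ray(origin, direction, plane_normal=np.array([0.0, 0.0, 1.0])):
    """Mirror a remote user's pointing ray across an assumed plane separating
    the two face-to-face users, so the gesture reads correctly in the local
    frame. Uses the Householder reflection v' = v - 2 (v . n) n for unit n."""
    n = plane_normal / np.linalg.norm(plane_normal)
    reflect = lambda v: v - 2.0 * np.dot(v, n) * n
    return reflect(origin), reflect(direction)

def pointing_agreement(targets_a, targets_b):
    """Fraction of pointing trials in which both users identified the same
    target (one plausible reading of the 'pointing agreement' metric)."""
    assert len(targets_a) == len(targets_b) and targets_a
    matches = sum(a == b for a, b in zip(targets_a, targets_b))
    return matches / len(targets_a)
```

Under these assumptions, a ray pointing "toward" the remote user flips across the shared plane, and agreement is simply the hit rate of matched targets across trials.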

    The combination of visual communication cues in mixed reality remote collaboration

    © 2020, Springer Nature Switzerland AG. Many researchers have studied various visual communication cues (e.g. pointer, sketch, and hand gesture) in Mixed Reality remote collaboration systems for real-world tasks. However, the effect of combining them has not been well explored. We studied these cues in four combinations: hand only, hand + pointer, hand + sketch, and hand + pointer + sketch, across two user studies in which a dependent view and an independent view were supported, respectively. In the first user study, with the dependent view, the results showed that the hand gesture was the main visual communication cue and that adding the sketch cue to it helped participants complete the task faster. In the second study, with the independent view, the local worker had difficulty understanding the remote expert's hand gesture cue, and the main visual communication cue was the pointer, which yielded fast completion times and a high level of co-presence.

    Distributed-Pair Programming can work well and is not just Distributed Pair-Programming

    Background: Distributed Pair Programming can be performed via screen sharing or via a distributed IDE. The latter offers the freedom of concurrent editing (which may be helpful or damaging) and has even more awareness deficits than screen sharing. Objective: Characterize how competent distributed pair programmers may handle this additional freedom and these additional awareness deficits, and characterize the impacts on the pair programming process. Method: A revelatory case study, based on direct observation of a single, highly competent distributed pair of industrial software developers during a 3-day collaboration. We use recordings of these sessions and conceptualize the phenomena seen. Results: 1. Skilled pairs may bridge the awareness deficits without visible obstruction of the overall process. 2. Skilled pairs may use the additional editing freedom in a useful limited fashion, resulting in potentially better fluency of the process than local pair programming. Conclusion: When applied skillfully in an appropriate context, distributed-pair programming can (not will!) work at least as well as local pair programming.

    Communication in Immersive Social Virtual Reality: A Systematic Review of 10 Years' Studies

    As virtual reality (VR) technologies have improved over the past decade, more research has investigated how they could support more effective communication in various contexts to improve collaboration and social connectedness. However, no literature has summarized what VR uniquely provides or put forward guidance for designing social VR applications for better communication. To understand how VR has been designed and used to facilitate communication in different contexts, we conducted a systematic review of studies investigating communication in social VR in the past ten years, following the PRISMA guidelines. We highlight current practices and challenges and identify research opportunities to improve the design of social VR to better support communication and make social VR more accessible. Comment: Chinese CHI '22: The Tenth International Symposium of Chinese CHI (Chinese CHI 2022)

    Designing to Support Workspace Awareness in Remote Collaboration using 2D Interactive Surfaces

    Increasing distribution of the global workforce is leading to collaborative work among remote coworkers. The emergence of such remote collaboration is essentially supported by technological advancements in screen-based devices ranging from tablets and laptops to large displays. However, these devices, especially personal and mobile computers, still suffer from certain limitations caused by their form factors that hinder supporting workspace awareness through non-verbal communication such as bodily gestures or gaze. This thesis thus aims to design novel interfaces and interaction techniques to improve remote coworkers' workspace awareness through such non-verbal cues using 2D interactive surfaces.
    The thesis starts off by exploring how visual cues support workspace awareness in facilitated brainstorming by hybrid teams of co-located and remote coworkers. Based on insights from this exploration, the thesis introduces three interfaces for mobile devices that help users maintain and convey workspace awareness with their coworkers. The first interface is a virtual environment that allows a remote person to effectively maintain awareness of co-located collaborators' activities while interacting with the shared workspace. To help a person better express hand gestures in remote collaboration using a mobile device, the second interface presents a lightweight add-on for capturing hand images on and above the device's screen and overlaying them on collaborators' devices to improve their workspace awareness. The third interface strategically leverages the entire screen space of a conventional laptop to better convey a remote person's gaze to co-located collaborators.
    Building on top of these three interfaces, the thesis envisions an interface that supports a person using a mobile device to effectively collaborate with remote coworkers working with a large display. Together, these interfaces demonstrate the possibilities of innovating on commodity devices to offer richer non-verbal communication and better support workspace awareness in remote collaboration.

    Wearable performance

    This is the post-print version of the article. The official published version can be accessed from the link below - Copyright @ 2009 Taylor & Francis. Wearable computing devices worn on the body provide the potential for digital interaction in the world. A new stage of computing technology at the beginning of the 21st Century links the personal and the pervasive through mobile wearables. The convergence between the miniaturisation of microchips (nanotechnology), intelligent textile or interfacial materials production, advances in biotechnology and the growth of wireless, ubiquitous computing emphasises not only mobility but integration into clothing or the human body. In artistic contexts one expects such integrated wearable devices to have the two-way function of interface instruments (e.g. sensor data acquisition and exchange) worn for particular purposes, either for communication with the environment or various aesthetic and compositional expressions. 'Wearable performance' briefly surveys the context for wearables in the performance arts and distinguishes display and performative/interfacial garments. It then focuses on the authors' experiments with 'design in motion' and digital performance, examining prototyping at the DAP-Lab which involves transdisciplinary convergences between fashion and dance, interactive system architecture, electronic textiles, wearable technologies and digital animation. The concept of an 'evolving' garment design that is materialised (mobilised) in live performance between partners originates from DAP Lab's work with telepresence and distributed media addressing the 'connective tissues' and 'wearabilities' of projected bodies through a study of shared embodiment and perception/proprioception in the wearer (tactile sensory processing). Such notions of wearability are applied both to the immediate sensory processing on the performer's body and to the processing of the responsive, animate environment.

    Hands on Design: Comparing the use of sketching and gesturing in collaborative designing

    This study explored the remaining potential of gestures as creative tools for collaborative designing. We compared novice designers' use of sketching against gesturing in early ideation and rough visualisation. To preserve the kinesic character of gestures, we developed a detailed video analysis method, which revealed that the majority of sketching and gesturing was complementary to speech. Sketching was important for defining complicated structures, while gesturing was frequently used for all aspects of designing. Moreover, we identified that the level of collaboration - the level and immediacy of sharing one's ideas with others - is an important factor. As an underrepresented phenomenon in the design literature, the meaning of collaboration unearthed here leads to unmistakable conclusions regarding the nature of gesturing, the process of learning design, and the use of design tools. Most notably, gesturing offers a complementary creative dimension - kinaesthetic thinking - which invites us to communicate and share instantaneously. Peer reviewed

    Improving User Involvement Through Live Collaborative Creation

    Creating an artifact - such as writing a book, developing software, or performing a piece of music - is often limited to those with domain-specific experience or training. As a consequence, effectively involving non-expert end users in such creative processes is challenging. This work explores how computational systems can facilitate collaboration, communication, and participation in the context of involving users in the process of creating artifacts while mitigating the challenges inherent to such processes. In particular, the interactive systems presented in this work support live collaborative creation, in which artifact users collaboratively participate in the artifact creation process with creators in real time. In the systems that I have created, I explored liveness, the extent to which the process of creating artifacts and the state of the artifacts are immediately and continuously perceptible, for applications such as programming, writing, music performance, and UI design. Liveness helps preserve natural expressivity, supports real-time communication, and facilitates participation in the creative process. Live collaboration is beneficial for users and creators alike: making the process of creation visible encourages users to engage in the process and better understand the final artifact. Additionally, creators can receive immediate feedback in a continuous, closed loop with users. Through these interactive systems, non-expert participants help create such artifacts as GUI prototypes, software, and musical performances. 
This dissertation explores three topics: (1) the challenges inherent to collaborative creation in live settings, and computational tools that address them; (2) methods for reducing the barriers of entry to live collaboration; and (3) approaches to preserving liveness in the creative process, affording creators more expressivity in making artifacts and affording users access to information traditionally only available in real-time processes. In this work, I showed that enabling collaborative, expressive, and live interactions in computational systems allows the broader population to take part in various creative practices.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145810/1/snaglee_1.pd

    Using natural user interfaces to support synchronous distributed collaborative work

    Synchronous Distributed Collaborative Work (SDCW) occurs when group members work together at the same time from different places to achieve a common goal. Effective SDCW requires good communication, continuous coordination and shared information among group members. SDCW is possible because of groupware, a class of computer software systems that supports group work. Shared-workspace groupware systems provide a common workspace that aims to replicate aspects of a physical workspace shared among group members in a co-located environment. Shared-workspace groupware systems have failed to provide the same degree of coordination and awareness among distributed group members that exists in co-located groups, owing to the unintuitive interaction techniques that these systems have incorporated. Natural User Interfaces (NUIs) focus on reusing natural human abilities such as touch, speech, gestures and proximity awareness to allow intuitive human-computer interaction. These interaction techniques could provide solutions to the existing issues of groupware systems by breaking down the barrier between people and technology created by the interaction techniques currently utilised. The aim of this research was to investigate how NUI interaction techniques could be used to effectively support SDCW. An architecture for such a shared-workspace groupware system was proposed and a prototype, called GroupAware, was designed and developed based on this architecture. GroupAware allows multiple users in distributed locations to simultaneously view and annotate text documents, and create graphic designs in a shared workspace. Documents are represented as visual objects that can be manipulated through touch gestures. Group coordination and awareness is maintained through document updates via immediate workspace synchronization, user action tracking via user labels, and user availability identification via basic proxemic interaction.
Members can effectively communicate via audio and video conferencing. A user study was conducted to evaluate GroupAware and determine whether NUI interaction techniques effectively supported SDCW. Ten groups of three members each participated in the study. High levels of performance, user satisfaction and collaboration demonstrated that GroupAware was an effective groupware system that was easy to learn and use, and effectively supported group work in terms of communication, coordination and information sharing. Participants gave highly positive comments about the system that further supported the results. The successful implementation of GroupAware and the positive results obtained from the user evaluation provide evidence that NUI interaction techniques can effectively support SDCW.
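    The abstract describes coordination through "document updates via immediate workspace synchronization" and "user action tracking via user labels" without implementation detail. A loose in-memory sketch of that pattern follows; the class and method names (`SharedWorkspace`, `annotate`) are hypothetical and not taken from GroupAware:

```python
import json
from dataclasses import dataclass, field

@dataclass
class SharedWorkspace:
    """Minimal illustration of immediate workspace synchronization: every
    annotation is applied to the shared document state and broadcast to all
    members, tagged with its author so peers can render awareness labels."""
    documents: dict = field(default_factory=dict)
    members: list = field(default_factory=list)  # callables receiving events

    def join(self, on_event):
        # Register a member's event handler (stand-in for a network channel).
        self.members.append(on_event)

    def annotate(self, user, doc_id, text):
        # Apply the update to shared state, then notify everyone immediately.
        self.documents.setdefault(doc_id, []).append((user, text))
        event = json.dumps({"type": "annotate", "user": user,
                            "doc": doc_id, "text": text})
        for notify in self.members:
            notify(event)
```

In a real system the broadcast would go over the network and handle concurrent edits, but the essential awareness mechanism, updates pushed to all members the moment they happen with the acting user attached, is the same.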