    A mixed reality telepresence system for collaborative space operation

    This paper presents a Mixed Reality system that results from the integration of a telepresence system and an application to improve collaborative space exploration. The system combines free-viewpoint video with immersive projection technology to support non-verbal communication, including eye gaze, interpersonal distance, and facial expression. Importantly, these cues can be interpreted together as people move around the simulation, maintaining natural social distance. The application is a simulation of Mars, within which the collaborators must come to agreement over, for example, where the Rover should land and go. The first contribution is the creation of a Mixed Reality system supporting contextualization of non-verbal communication. Two technological contributions are the prototyping of a technique to subtract a person from a background that may contain physical objects and/or moving images, and a lightweight texturing method for multi-view rendering that balances visual and temporal quality. A practical contribution is the demonstration of pragmatic approaches to sharing space between display systems of distinct levels of immersion. A research-tool contribution is a system that allows comparison of conventionally authored and video-based reconstructed avatars within an environment that encourages exploration and social interaction. Aspects of system quality, including the communication of facial expression and end-to-end latency, are reported.
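    The paper itself does not detail the subtraction technique beyond this summary; as a hedged illustration only, the sketch below shows one standard approach to separating a person from a background containing moving imagery, using OpenCV's adaptive Gaussian-mixture background model (the camera index and thresholds are assumptions, not the paper's prototype).

        # Illustrative sketch, not the paper's prototype: adaptive background
        # subtraction that tolerates moving imagery behind the person.
        import cv2

        subtractor = cv2.createBackgroundSubtractorMOG2(
            history=500, varThreshold=32, detectShadows=True)

        capture = cv2.VideoCapture(0)  # hypothetical camera index
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            mask = subtractor.apply(frame)           # 0=bg, 127=shadow, 255=fg
            mask = cv2.medianBlur(mask, 5)           # suppress speckle noise
            _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
            person = cv2.bitwise_and(frame, frame, mask=mask)
            cv2.imshow("person", person)
            if cv2.waitKey(1) == 27:                 # Esc to quit
                break
        capture.release()
        cv2.destroyAllWindows()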

    Eye tracking and avatar-mediated communication in immersive collaborative virtual environments

    The research presented in this thesis concerns the use of eye tracking to both enhance and understand avatar-mediated communication (AMC) performed by users of immersive collaborative virtual environment (ICVE) systems. AMC, in which users are embodied by graphical humanoids within a shared virtual environment (VE), is rapidly emerging as a prevalent and popular form of remote interaction. However, compared with video-mediated communication (VMC), which transmits interactants’ actual appearance and behaviour, AMC fails to capture, transmit, and display many channels of nonverbal communication (NVC). This is a significant hindrance to the medium’s ability to support rich interpersonal telecommunication. In particular, oculesics (the communicative properties of the eyes), including gaze, blinking, and pupil dilation, are central nonverbal cues during unmediated social interaction. This research explores the interactive and analytical application of eye tracking to drive the oculesic animation of avatars during real-time communication, and as the primary method of experimental data collection and analysis, respectively. Three distinct but interrelated questions are addressed. First, the thesis considers the degree to which quality of communication may be improved through the use of eye tracking, to increase the nonverbal, oculesic, information transmitted during AMC. Second, the research asks whether users engaged in AMC behave and respond in a socially realistic manner in comparison with VMC. Finally, the degree to which behavioural simulations of oculesics can both enhance the realism of virtual humanoids, and complement tracked behaviour in AMC, is considered. These research questions were investigated over a series of telecommunication experiments investigating scenarios common to computer supported cooperative work (CSCW), and a further series of experiments investigating behavioural modelling for virtual humanoids. The first, exploratory, telecommunication experiment compared AMC with VMC in a three-party conversational scenario. Results indicated that users employ gaze similarly when faced with avatar and video representations of fellow interactants, and demonstrated how interaction is influenced by the technical characteristics and limitations of a medium. The second telecommunication experiment investigated the impact of varying methods of avatar gaze control on quality of communication during object-focused multiparty AMC. The main finding of the experiment was that quality of communication is reduced when avatars demonstrate misleading gaze behaviour. The final telecommunication study investigated truthful and deceptive dyadic interaction in AMC and VMC over two closely-related experiments. Results from the first experiment indicated that users demonstrate similar oculesic behaviour and response in both AMC and VMC, but that psychological arousal is greater following video-based interaction. Results from the second experiment found that the use of eye tracking to drive the oculesic behaviour of avatars during AMC increased the richness of NVC to the extent that more accurate estimation of embodied users’ states of veracity was enabled. Rather than directly investigating AMC, the second series of experiments addressed behavioural modelling of oculesics for virtual humanoids. 
Results from these experiments indicated that oculesic characteristics strongly influence the perceived realism of virtual humanoids, and that behavioural models are able to complement the use of eye tracking in AMC. The research presented in this thesis explores AMC and eye tracking over a range of collaborative and perceptual studies. The overall conclusion is that eye tracking is able to enhance AMC towards a richer medium for interpersonal telecommunication, and that users' behaviour in AMC is no less socially 'real' than that demonstrated in VMC. However, there are distinct differences between the two communication media, and it is critical to match the characteristics of a planned communication with those of the medium itself.
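    The thesis' animation pipeline is not reproduced here; as a rough sketch under stated assumptions, the snippet below shows how tracked gaze samples could drive an avatar's oculesic channels: a screen-space gaze point mapped to eye yaw/pitch, plus a simple Poisson blink model for moments when the tracker reports no blink (all names, rates, and the 30-degree range are illustrative).

        # Hedged sketch: mapping eye-tracker output to avatar oculesic animation.
        import math
        import random

        def gaze_to_eye_rotation(gaze_x, gaze_y, screen_w, screen_h, range_deg=30.0):
            """Map an on-screen gaze point to avatar eye yaw/pitch in degrees."""
            yaw = (gaze_x / screen_w - 0.5) * range_deg
            pitch = (0.5 - gaze_y / screen_h) * range_deg
            return yaw, pitch

        def should_blink(dt, rate_hz=0.25):
            """Poisson blink model, roughly 15 blinks per minute."""
            return random.random() < 1.0 - math.exp(-rate_hz * dt)

        # One frame update at 60 Hz:
        yaw, pitch = gaze_to_eye_rotation(960, 400, 1920, 1080)
        blink = should_blink(1.0 / 60.0)
        print(f"eye yaw={yaw:.1f} deg, pitch={pitch:.1f} deg, blink={blink}")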

    Effectiveness of Social Virtual Reality

    A lot of work in social virtual reality, including our own group's, has focused on the effectiveness of specific social behaviours such as eye gaze, turn-taking, gestures, and other verbal and non-verbal cues. We have built upon these to look at emergent phenomena such as co-presence, leadership, and trust. These give us good information about the usability issues of specific social VR systems, but they don't give us much information about the requirements for such systems going forward. In this short paper we discuss how we are broadening the scope of our work on social systems, moving out of the laboratory to more ecologically valid situations and studying groups that use social VR for longer periods of time.

    Effect of Avatar Anthropomorphism on Body Ownership, Attractiveness and Collaboration in Immersive Virtual Environments

    Effective collaboration in immersive virtual environments requires the ability to communicate flawlessly using both verbal and non-verbal communication. We present an experiment investigating the impact of anthropomorphism on the sense of body ownership, avatar attractiveness, and performance in an asymmetric collaborative task. Using three avatars with different facial properties, participants had to solve a construction game according to their partner's instructions. Results reveal no significant difference in body ownership, but demonstrate significant differences in attractiveness and in completion time for the collaborative task. However, the relative duration of verbal interaction does not appear to be affected by the characters' level of anthropomorphism, meaning that participants were able to interact verbally regardless of how their character physically expressed their words in the virtual environment. Unexpectedly, correlation analyses also reveal a link between attractiveness and performance: the more attractive the avatar, the shorter the completion time of the game. One could argue that, in the context of this experiment, avatar attractiveness led to an improvement in non-verbal communication, as users may have been more inclined to observe their partner, which translates into better performance in collaborative tasks. Further experiments using gaze tracking are needed to support this new hypothesis.
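    The abstract reports a correlation between avatar attractiveness and completion time without giving the analysis details; the sketch below shows the shape such an analysis could take, with entirely hypothetical data (the study's ratings and timings are not reproduced here).

        # Hedged sketch of the attractiveness/performance correlation, on
        # hypothetical data; expect r < 0 (more attractive -> faster).
        from scipy.stats import pearsonr

        attractiveness = [2.1, 3.4, 4.0, 2.8, 3.9, 4.5, 1.9, 3.1]  # 1-5 ratings
        completion_s = [310, 255, 220, 280, 230, 205, 330, 270]    # task time (s)

        r, p = pearsonr(attractiveness, completion_s)
        print(f"Pearson r = {r:.2f}, p = {p:.3f}")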

    Acting rehearsal in collaborative multimodal mixed reality environments

    This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites, and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference presented by the system to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication, meaning that moments relying on performing and reacting to consequential facial expression and subtle gesture were less successful.

    Virtual Meeting Rooms: From Observation to Simulation

    Much working time is spent in meetings and, as a consequence, meetings have become the subject of multidisciplinary research. Virtual Meeting Rooms (VMRs) are 3D virtual replicas of meeting rooms, where various modalities such as speech, gaze, distance, gestures, and facial expressions can be controlled. This allows VMRs to be used to improve remote meeting participation, to visualize multimedia data, and as an instrument for research into social interaction in meetings. This paper describes how these three uses can be realized in a VMR. We describe the process from observation through annotation to simulation, and a model that describes the relations between the annotated features of verbal and non-verbal conversational behavior. As an example of social perception research in the VMR, we describe an experiment to assess human observers' accuracy for head orientation.
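    The paper does not spell out how observer accuracy was scored; one plausible measure, sketched here purely as an assumption, is the mean absolute angular error between the presented and judged head yaw (all values hypothetical).

        # Hedged sketch: scoring head-orientation judgments as mean absolute
        # angular error, with wrap-around handled at +/-180 degrees.
        def angular_error(true_deg, judged_deg):
            """Smallest signed difference between two headings, in degrees."""
            return (judged_deg - true_deg + 180.0) % 360.0 - 180.0

        true_yaw = [0, 30, -45, 90, 135]      # presented avatar head yaw
        judged_yaw = [5, 22, -40, 80, 150]    # one observer's estimates

        errors = [angular_error(t, j) for t, j in zip(true_yaw, judged_yaw)]
        mae = sum(abs(e) for e in errors) / len(errors)
        print(f"mean absolute error = {mae:.1f} deg")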

    Negotiation of meaning via virtual exchange in immersive virtual reality environments

    This study examines how English-as-lingua-franca (ELF) learners employ semiotic resources, including head movements, gestures, facial expression, body posture, and spatial juxtaposition, to negotiate for meaning in an immersive virtual reality (VR) environment. Ten ELF learners participated in a Taiwan-Spain VR virtual exchange project and completed two VR tasks on an immersive VR platform. Multiple datasets, including recordings of the VR sessions, pre- and post-task questionnaires, observation notes, and stimulated recall interviews, were analyzed quantitatively and qualitatively with triangulation. Building on multimodal interaction analysis (Norris, 2004) and Varonis and Gass' (1985a) negotiation of meaning model, the findings indicate that ELF learners utilized different embodied semiotic resources in constructing and negotiating meaning at all primes to achieve effective communication in an immersive VR space. The avatar-mediated representations and semiotic modalities were shown to facilitate indication, comprehension, and explanation to signal and resolve instances of non-understanding. The findings show that, with spatial proxemics and object handling as the two distinct features of VR-supported environments, VR platforms transform learners' social interaction from planar to three-dimensional communication, and from verbal to embodied, which promotes embodied learning. VR thus serves as a powerful immersive interactive environment for ELF learners in distant locations to engage in situated languacultural practices that go beyond physical space. Pedagogical implications are discussed.

    Role of Gaze Cues in Interpersonal Motor Coordination: Towards Higher Affiliation in Human-Robot Interaction

    Background: The ability to follow one another's gaze plays an important role in our social cognition, especially when we synchronously perform tasks together. We investigate how gaze cues can improve performance in a simple coordination task (i.e., the mirror game), whereby two players mirror each other's hand motions. In this game, each player is either a leader or a follower. To study the effect of gaze in a systematic manner, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar either does or does not provide explicit gaze cues that indicate the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior for avatars makes them more realistic and human-like (from the user's point of view).

    Methodology/Principal Findings: 43 subjects participated in 8 trials of the mirror game. Each subject performed the game in the two conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and the avatar's realism was assessed via a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subject reaction time (RT). This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's action. An analysis of the frequency patterns of the two players' hand movements reveals that the gaze cues improve the overall temporal coordination between the two players. Finally, analysis of the subjective evaluations from the questionnaires reveals that, in the presence of gaze cues, participants found the avatar not only more human-like and realistic, but also easier to interact with.

    Conclusion/Significance: This work confirms that people can exploit gaze cues to predict another person's movements and to better coordinate their motions with their partners, even when the partner is a computer-animated avatar. Moreover, this study contributes further evidence that implementing biological features, here task-relevant gaze cues, enables the humanoid robotic avatar to appear more human-like, and thus increases the user's sense of affiliation.
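    The paper's exact synchrony analysis is not reproduced above; one standard way to quantify the follower's lag, sketched below under that assumption, is to take the lag that maximizes the cross-correlation between the leader's and follower's hand trajectories (sampling rate and signals here are synthetic).

        # Hedged sketch: follower reaction time as the cross-correlation peak
        # between leader and follower hand trajectories (synthetic data).
        import numpy as np

        fs = 100.0                              # sampling rate (Hz), assumed
        t = np.arange(0, 10, 1 / fs)
        leader = np.sin(2 * np.pi * 0.5 * t)    # leader's hand position
        true_lag = int(0.25 * fs)               # follower trails by 250 ms
        follower = np.roll(leader, true_lag) + 0.05 * np.random.randn(t.size)

        xcorr = np.correlate(follower - follower.mean(),
                             leader - leader.mean(), mode="full")
        lags = np.arange(-t.size + 1, t.size)
        rt_s = lags[np.argmax(xcorr)] / fs      # estimated reaction time (s)
        print(f"estimated follower lag: {rt_s * 1000:.0f} ms")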

    The Effects of Sharing Awareness Cues in Collaborative Mixed Reality

    Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator's attention and actions. In a user study (n = 32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray, against a baseline condition showing only virtual representations of each collaborator's head and hands. Through a collaborative object finding and placing task, the results showed that awareness cues significantly improved user performance, usability, and subjective preferences, with the combination of the FoV frustum and the Head-gaze ray being best. This work establishes the feasibility of room-scale MR collaboration and the utility of providing virtual awareness cues.
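    As a hedged illustration of what a head-gaze ray cue involves geometrically (the study's engine-side implementation is not given in the abstract, and the data layout here is an assumption), the sketch below derives the ray from a collaborator's tracked head pose.

        # Hedged sketch: a head-gaze ray cue from a tracked head pose.
        # The head's world-space forward axis is local +Z rotated by the
        # head's unit quaternion (w, x, y, z).
        import numpy as np

        def head_gaze_ray(head_pos, head_quat, length=5.0):
            """Return (origin, tip) of a ray along the head's forward axis."""
            w, x, y, z = head_quat
            forward = np.array([2 * (x * z + w * y),
                                2 * (y * z - w * x),
                                1 - 2 * (x * x + y * y)])
            origin = np.asarray(head_pos, dtype=float)
            return origin, origin + length * forward

        # Head at eye height, rotated 90 degrees about +Y (facing world +X):
        s = np.sin(np.pi / 4)
        origin, tip = head_gaze_ray([0.0, 1.7, 0.0], (np.cos(np.pi / 4), 0.0, s, 0.0))
        print(origin, tip)  # tip is approximately [5, 1.7, 0]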