
    Augmenting Immersive Telepresence Experience with a Virtual Body

    We propose augmenting immersive telepresence by adding a virtual body, representing the user's own arm motions, realized through a head-mounted display and a 360-degree camera. Previous research has shown the effectiveness of having a virtual body in simulated environments; however, research on whether seeing one's own virtual arms increases presence or preference in an immersive telepresence setup is limited. We conducted a study in which a host introduced a research lab while participants wore a head-mounted display that made them telepresent at the host's physical location via a 360-degree camera, either with or without a virtual body. We first conducted a pilot study with 20 participants, followed by a pre-registered confirmatory study with 62 participants. Whereas the pilot study showed greater presence and preference when the virtual body was present, the confirmatory study failed to replicate these results, with only behavioral measures suggesting an increase in presence. After analyzing the qualitative data and modeling interactions, we suspect that the quality and style of the virtual arms, and the contrast between animation and video, led to individual differences in reactions to the virtual body which subsequently moderated feelings of presence.

    Comment: Accepted for publication in IEEE Transactions on Visualization and Computer Graphics (TVCG), to be presented at IEEE VR 202

    Multi-party holomeetings: toward a new era of low-cost volumetric holographic meetings in virtual reality

    © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    Fueled by advances in multi-party communications, the adoption of increasingly mature immersive technologies, and the COVID-19 pandemic, a new wave of social virtual reality (VR) platforms has emerged to support socialization, interaction, and collaboration among multiple remote users who are integrated into shared virtual environments. Social VR aims to increase levels of (co-)presence and interaction quality by overcoming the limitations of 2D windowed representations in traditional multi-party video conferencing tools, although most existing solutions rely on 3D avatars to represent users. This article presents a social VR platform that supports real-time volumetric holographic representations of users based on point clouds captured by off-the-shelf RGB-D sensors, and it analyzes the platform's potential for conducting interactive holomeetings (i.e., holoconferencing scenarios). This work evaluates the platform's performance and readiness for conducting meetings with up to four users, and it provides insights into aspects of the user experience when using single-camera, low-cost capture systems in scenarios with both frontal and side viewpoints.
    Overall, the obtained results confirm the platform's maturity and the potential of holographic communications for conducting interactive multi-party meetings, even when using low-cost, single-camera capture systems in scenarios where users are seated or have limited translational movement along the X, Y, and Z axes within the 3D virtual environment (commonly known as 3 Degrees of Freedom plus, 3DoF+).

    The authors would like to thank the members of the EU H2020 VR-Together consortium for their valuable contributions, especially Marc Martos and Mohamad Hjeij for their support in developing and evaluating tasks. This work has been partially funded by: the EU's Horizon 2020 program, under agreement nº 762111 (VR-Together project); by ACCIÓ (Generalitat de Catalunya), under agreement COMRDI18-1-0008 (ViVIM project); and by Cisco Research and the Silicon Valley Community Foundation, under the grant Extended Reality Multipoint Control Unit (ID: 1779376). The work by Mario Montagud has been additionally funded by Spain's Agencia Estatal de Investigación under grant RYC2020-030679-I (AEI / 10.13039/501100011033) and by Fondo Social Europeo. The work of David Rincón was supported by Spain's Agencia Estatal de Investigación within the Ministerio de Ciencia e Innovación under Project PID2019-108713RB-C51 MCIN/AEI/10.13039/501100011033.

    Peer Reviewed. Postprint (published version)
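    The volumetric user representations above are built from point clouds captured by RGB-D sensors. As a rough illustration of that capture step (a minimal sketch with a pinhole camera model, not the platform's actual pipeline, and with made-up intrinsics), a depth image can be back-projected into a 3D point cloud:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an Nx3 point cloud
    using a pinhole camera model. Zero-depth pixels are discarded."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Tiny synthetic example: a 2x2 depth map, every pixel at 1 m.
depth = np.ones((2, 2))
pts = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

    In a real system, each of these points would also carry an RGB color sampled from the registered color image before the cloud is compressed and streamed to the other meeting participants.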

    Supporting a Closely Coupled Task between a Distributed Team: Using Immersive Virtual Reality Technology

    Collaboration and teamwork are important in many areas of our lives. People come together to share and discuss ideas, split and distribute work, or help and support each other. The sharing of information and artefacts is a central part of collaboration, and often involves the manipulation of shared objects, both sequentially and concurrently. Coordinating efficient collaboration requires communication between the team members. This can happen verbally, in the form of speech or text, and non-verbally, through gesturing, pointing, gaze, facial expressions, and the referencing and manipulation of shared objects. Collaborative Virtual Environments (CVEs) allow remote users to come together and interact with each other and with virtual objects within a computer-simulated environment. Immersive display interfaces, such as a walk-in display (e.g. a CAVE), which place a human physically into the synthetic environment, lend themselves well to supporting natural manipulation of objects as well as a range of natural non-verbal human communication, as they can both capture and display human movement. Communication of tracking data, however, can saturate the network and result in delay or loss of messages vital to the manipulation of shared objects. This paper investigates the reality of shared object manipulation between remote users collaborating through linked walk-in displays and extends our research in [27]. Various forms of shared interaction are examined through a set of structured sub-tasks within a representative construction task. We report on extensive user trials between three walk-in displays in the UK and Austria linked over the Internet using a CVE, and demonstrate such effects on a naive implementation of a benchmark application, the Gazebo building task. We then present and evaluate application-level workarounds and conclude by suggesting solutions that may be implemented within next-generation CVE infrastructures.
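    One common application-level way to keep tracking traffic from saturating the network is to thin the update stream before sending, forwarding a new head or hand pose only when it has moved a meaningful distance. This is a generic illustration of that idea, not the specific workarounds evaluated in the paper:

```python
import numpy as np

def filter_updates(positions, min_delta):
    """Keep only tracking samples that moved at least min_delta (meters)
    since the last *sent* sample -- a simple way to thin a high-rate
    head/hand tracking stream before it is put on the network."""
    sent = [positions[0]]
    for p in positions[1:]:
        if np.linalg.norm(p - sent[-1]) >= min_delta:
            sent.append(p)
    return np.array(sent)

# Synthetic tracker stream: small random steps forming a slow drift.
rng = np.random.default_rng(1)
track = np.cumsum(rng.normal(0.0, 0.0005, size=(200, 3)), axis=0)
thinned = filter_updates(track, min_delta=0.01)
```

    The trade-off is latency versus bandwidth: a larger threshold sends fewer messages but makes the remote avatar's motion choppier, which is why such filters are often combined with interpolation on the receiving side.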

    Media Presence and Inner Presence: The Sense of Presence in Virtual Reality Technologies

    Abstract. Presence is widely accepted as the key concept to be considered in any research involving human interaction with Virtual Reality (VR). Since its original description, the concept of presence has developed over the past decade to be considered by many researchers as the essence of any experience in a virtual environment. VR-generating systems comprise two main parts: a technological component and a psychological experience. The different relevance given to each has produced two different but coexisting visions of presence: the rationalist and the psychological/ecological points of view. The rationalist point of view considers a VR system as a collection of specific machines that necessitate the inclusion of the concept of presence. Researchers agreeing with this approach describe the sense of presence as a function of the experience of a given medium (Media Presence). The main result of this approach is the definition of presence as the perceptual illusion of non-mediation, produced by the disappearance of the medium from the conscious attention of the subject. At the other extreme is the psychological or ecological perspective (Inner Presence). Specifically, this perspective considers presence as a neuropsychological phenomenon, evolved from the interplay of our biological and cultural inheritance, whose goal is the control of human activity. Given its key role and the rate at which new approaches to understanding and examining presence are appearing, this chapter draws together current research on presence to provide an up-to-date overview of the most widely accepted approaches to its understanding and measurement.

    Fewer Faces Displayed Simultaneously, Less Videoconference Fatigue in Distance Learning? An Experimental Study

    In the last two years, videoconferencing tools have been used massively for distance learning all over the world. However, a feeling of fatigue has been observed among students. Researchers have proposed multiple problems in the online interaction with human faces that may contribute to videoconference fatigue (VCF). To contribute to this emerging research domain, this study investigates whether VCF can be reduced by changing the unnatural interaction with multiple enlarged faces on videoconferencing tools. We compare Zoom's "speaker view" with "gallery view", and, based on theoretical insights from the information-processing and brain-research domains, we argue that Zoom's "gallery view" leads to higher fatigue and stress levels than "speaker view". Moreover, we investigate whether the face manipulation ("gallery view" vs. "speaker view") affects learning outcome and learning satisfaction, as well as the role of fatigue and stress as mediators in this relationship.

    Modulation of P3 and the Late Positive Potential ERP Components by Standard Stimulus Restorativeness and Naturalness

    Tests of attention restoration theory (ART) consistently support that exposure to restorative environments can replenish, from a depleted state, the finite cognitive resources needed to focus attention. These environments are usually natural, but the dimensions of naturalness and restorativeness are not one and the same, and have not yet been empirically delineated. The restorative effect has been documented in children and adults; however, neuroscientists have barely begun to test for neural correlates of ART. In this dissertation, I employ electroencephalography (EEG) to record electrophysiological brain activity during an active visual oddball task in order to capture and analyze two event-related potential (ERP) components: P3 elicitation and late positive potential (LPP) activation. The P3 component is a pronounced, positive-going potential in brain activity occurring in the window between 200 and 600 milliseconds after the onset of a stimulus. Previous research has shown that the amplitude of the P3 potential is attenuated, and its latency increased, when task difficulty is high and/or attentional resources are depleted. Conversely, when task demands are low, P3 amplitude is greater without an accompanying increase in latency, suggesting cognitive efficiency. The LPP is positive activity from roughly 500 ms after stimulus onset until stimulus termination that is associated with the emotional valence of the stimulus. I hypothesized that, in an active discrimination oddball task, adults would show increased P3 amplitude for low-frequency target images occurring amidst standard (high-frequency) images of highly restorative environments versus when the standard images are of lowly restorative environments or a solid brown tile, and that naturalness would not interact with restorativeness, such that targets amidst restorative natural environments elicit P3s no stronger than targets amidst restorative built environments.
    Results showed that P3 amplitude was greater, and latency earlier, for highly restorative (HR) standard stimuli rather than for targets, which is unusual for the oddball paradigm but is explained within the framework of ART according to standard stimulus content. Also, LPP activity differed between one occipital channel and three frontal channels only between 600 ms and 1000 ms post stimulus onset, but was greater in the nature stimulus group than in the built group between 1000 ms and 2000 ms post stimulus onset. This finding is consistent with previous research and is interpreted to mean that natural stimuli are more pleasant and arousing than built stimuli. Limitations and future directions are also discussed.
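    The amplitude comparisons above rest on a standard ERP operation: averaging the voltage of an epoch within a fixed post-stimulus window. As a minimal sketch (synthetic single-channel data, not the dissertation's analysis pipeline), a P3-style mean amplitude can be computed like this:

```python
import numpy as np

def mean_amplitude(epoch, times, t_start, t_end):
    """Mean ERP amplitude (in µV) within [t_start, t_end] seconds,
    e.g. a P3 window of roughly 0.3-0.6 s post stimulus onset."""
    mask = (times >= t_start) & (times <= t_end)
    return epoch[mask].mean()

# Synthetic single-channel epoch sampled at 1000 Hz: a flat baseline
# with a 5 µV plateau from 300-600 ms standing in for a P3 deflection.
times = np.arange(-0.2, 1.0, 0.001)
epoch = np.where((times >= 0.3) & (times <= 0.6), 5.0, 0.0)
p3 = mean_amplitude(epoch, times, 0.3, 0.6)
```

    In practice such window means are computed per condition and per participant (after baseline correction and artifact rejection) and then submitted to the statistical comparisons the results describe.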

    Designing 3D scenarios and interaction tasks for immersive environments

    In today's world, immersive reality, such as virtual and mixed reality, is one of the most attractive research fields. Virtual Reality (VR) has huge potential in scientific and educational domains by providing users with real-time interaction and manipulation. The key concern in immersive technologies is to provide a high level of immersive sensation to the user, which is one of the main challenges in this field. Wearable technologies play a key role in enhancing the immersive sensation and the degree of embodiment in virtual and mixed reality interaction tasks. This project report presents an application study in which the user interacts with virtual objects, such as grabbing objects and opening or closing doors and drawers, while wearing a sensory cyberglove developed in our lab (Cyberglove-HT). Furthermore, it presents the development of a methodology that provides inertial measurement unit (IMU)-based gesture recognition. The interaction tasks and 3D immersive scenarios were designed in Unity 3D. Additionally, we developed inertial sensor-based gesture recognition by employing a Long Short-Term Memory (LSTM) network. In order to distinguish the effect of wearable technologies on the user experience in immersive environments, we conducted an experimental study comparing the Cyberglove-HT to standard VR controllers (HTC Vive Controller). The quantitative and subjective results indicate that we were able to enhance the immersive sensation and self-embodiment with the Cyberglove-HT. A publication [1] resulted from this work, which has been developed in the framework of the R&D project Human Tracking and Perception in Dynamic Immersive Rooms (HTPDI).
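    The gesture recognizer above processes IMU sequences with an LSTM. As a minimal illustration of the recurrence such a model applies at each time step (a from-scratch numpy sketch with random weights and a made-up 6-channel accelerometer/gyroscope input, not the Cyberglove-HT model), one LSTM step looks like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b, hidden):
    """One LSTM time step; the four gates are slices of one stacked
    projection. x: input sample, h/c: previous hidden/cell state."""
    z = W @ x + U @ h + b              # stacked i, f, o, g pre-activations
    i = sigmoid(z[:hidden])            # input gate
    f = sigmoid(z[hidden:2 * hidden])  # forget gate
    o = sigmoid(z[2 * hidden:3 * hidden])  # output gate
    g = np.tanh(z[3 * hidden:])        # candidate cell update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Run a short synthetic "IMU" sequence (6 channels: 3 accel + 3 gyro)
# through a 4-unit LSTM with fixed random weights.
rng = np.random.default_rng(0)
inp, hid = 6, 4
W = rng.standard_normal((4 * hid, inp)) * 0.1
U = rng.standard_normal((4 * hid, hid)) * 0.1
b = np.zeros(4 * hid)
h = c = np.zeros(hid)
for x in rng.standard_normal((10, inp)):
    h, c = lstm_step(x, h, c, W, U, b, hid)
```

    In a trained recognizer, the final hidden state h (or the sequence of hidden states) would feed a softmax layer over the gesture classes.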