    Evaluating the user experience of a photorealistic social VR Movie

    We all enjoy watching movies together. However, this is not always possible if we live apart. While we can remotely share our screens, the experience differs from being together. We present a social Virtual Reality (VR) system that captures, reconstructs, and transmits multiple users’ volumetric representations into a commercially produced 3D virtual movie, so that they have the feeling of “being there” together. We conducted a 48-user experiment in which participants experienced the virtual movie either using a Head Mounted Display (HMD) or using a 2D screen with a game controller. In addition, we invited 14 VR experts to experience both the HMD and the screen version of the movie and discussed their experiences with them in two focus groups. Our results showed that both end-users and VR experts found the way they navigated and interacted inside a 3D virtual movie to be novel. They also found that the photorealistic volumetric representations enhanced feelings of co-presence. Our study lays the groundwork for future interactive and immersive VR movie co-watching experiences.

    A collaborative VR Murder Mystery using Photorealistic User Representations

    The VRTogether project has developed a social VR platform for remote communication and collaboration. The hyper-realistic representation of users as volumetric video allows for natural interaction with others in a virtual environment. This video shows one of the use cases: an escape-room-style experience in which remote users must collaborate to solve a murder mystery. The experience takes place in the victim’s apartment, where a police team (avatars) and up to four real-time captured users (point clouds) work together to find clues and determine what happened to the victim and who the culprit was. The experience includes a layer of interaction, enabling users to touch objects in the environment and talk to the characters, and it allows navigation between the rooms of the apartment. It provides immersion and social connectedness: users are protagonists of the story, sharing the virtual environment and following the narrative. The combination of virtual reality environments (spaces and characters) with novel technologies for real-time volumetric video conferencing enables unique new experiences in areas such as healthcare, broadcasting, and gaming. The video can be watched here: https://youtu.be/Hsj1YWo55k

    Temporal Interpolation of Human Point Clouds Using Neural Networks and Body Part Segmentation

    In the context of social VR, one media format that is gaining popularity is the point cloud. Point clouds are unstructured volumetric representations consisting of individual points that together describe a 3D shape. They are easy to render but voluminous, requiring high bandwidth for transmission, so concessions have to be made in either spatial or temporal resolution. In this thesis we explore state-of-the-art solutions for temporal interpolation of dynamic point clouds, with a focus on human bodies. We find that current solutions predict rigid motions well but struggle with deformations, which is precisely the case for human bodies. We hypothesize that the performance of these architectures can be boosted by segmenting the body into parts and predicting the interpolation for each body part individually. Due to the lack of dynamic human point cloud datasets, we generate our own point cloud dataset from a publicly available image dataset, which is the first contribution of this thesis. It consists of a total of 248,080 point cloud frames representing 40 avatars (20 male and 20 female) performing 70 actions each. We adapt a state-of-the-art neural network architecture to fit our data, changing the loss function, tuning parameters in its feature extraction layers, and adding an extra layer to obtain the desired output. We obtain an architecture capable of performing temporal interpolation, which is the second contribution of this thesis. We design a set of experiments to validate our hypothesis: a series of models trained to interpolate individual body parts, and one model trained to interpolate the full body. We observe performance gains in all models trained on individual body parts, so we conclude that applying body part segmentation and predicting the interpolation of individual body parts can improve the accuracy of point cloud temporal interpolation systems.
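
    The core decomposition behind the hypothesis (segment the body, interpolate each part separately, merge the results) can be sketched with a toy baseline. Everything below, including the nearest-neighbour matching, the linear blending, and the random stand-in data, is an illustrative assumption rather than the thesis architecture:

        # Toy sketch of segment-then-interpolate (NOT the thesis network):
        # each body part is interpolated independently, then the parts merged.
        import numpy as np
        from scipy.spatial import cKDTree

        def interpolate_part(p0, p1, t):
            """Blend part p0 toward its nearest neighbours in p1 (crude correspondence)."""
            _, idx = cKDTree(p1).query(p0)
            return (1.0 - t) * p0 + t * p1[idx]

        def interpolate_frame(points0, labels0, points1, labels1, t=0.5):
            """Interpolate two labelled point cloud frames part by part."""
            parts = []
            for part in np.unique(labels0):
                p0 = points0[labels0 == part]
                p1 = points1[labels1 == part]
                if len(p0) and len(p1):
                    parts.append(interpolate_part(p0, p1, t))
            return np.vstack(parts)

        # Random stand-in data: two frames of 1000 points, 6 body-part labels.
        f0, f1 = np.random.rand(1000, 3), np.random.rand(1000, 3)
        labels = np.random.randint(0, 6, 1000)
        mid_frame = interpolate_frame(f0, labels, f1, labels, t=0.5)

    In the thesis, a learned network would replace the nearest-neighbour blending; the per-part loop is the piece the hypothesis adds on top.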

    CWIPC API, memory management, utilities

    CWI Point Cloud library: in-core storage of point clouds, reading/writing, multi-language support (C, C++, Python, C#). GitHub release of v6.4_stable from GitLab, for Zenodo archiving. The following repositories belong together and form the cwipc software suite:
        cwipc: 10.5281/zenodo.5779250
        cwipc_util: 10.5281/zenodo.5779368
        cwipc_codec: 10.5281/zenodo.5779374
        cwipc_kinect: 10.5281/zenodo.5779370
        cwipc_realsense2: 10.5281/zenodo.5779320
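
    As a rough illustration of the kind of data the suite stores in-core (per-point positions plus colour), the sketch below loads and inspects a .ply point cloud. It deliberately uses the third-party Open3D library and a hypothetical file name rather than cwipc's own bindings; see the repository documentation for the actual API:

        # Illustrative only: inspect a .ply point cloud of the kind cwipc
        # manages, via the third-party Open3D library (not cwipc's bindings).
        import numpy as np
        import open3d as o3d

        pcd = o3d.io.read_point_cloud("frame_000001.ply")  # hypothetical file
        xyz = np.asarray(pcd.points)   # (N, 3) float positions
        rgb = np.asarray(pcd.colors)   # (N, 3) floats in [0, 1]
        print(f"{len(xyz)} points, bounds {xyz.min(axis=0)} to {xyz.max(axis=0)}")
        o3d.visualization.draw_geometries([pcd])           # quick viewer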

    CWIPC Azure Kinect capture module

    CWI Point Cloud library: capture point clouds using Microsoft Azure Kinect cameras. GitHub release of v6.4_stable from GitLab, for Zenodo archiving. The following repositories belong together and form the cwipc software suite:
        cwipc: 10.5281/zenodo.5779250
        cwipc_util: 10.5281/zenodo.5779368
        cwipc_codec: 10.5281/zenodo.5779374
        cwipc_kinect: 10.5281/zenodo.5779370
        cwipc_realsense2: 10.5281/zenodo.5779320
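
    The sketch below shows the geometry underlying this kind of capture: deprojecting a depth image into a point cloud with pinhole intrinsics. The image size and intrinsic values are made-up placeholders, not Azure Kinect calibration data, and this is not the cwipc_kinect code path:

        # Generic depth-to-point-cloud deprojection with pinhole intrinsics.
        # All numbers below are placeholders, not real camera calibration.
        import numpy as np

        def depth_to_pointcloud(depth_m, fx, fy, cx, cy):
            """depth_m: (H, W) depths in metres; returns (N, 3) points."""
            h, w = depth_m.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            x = (u - cx) * depth_m / fx
            y = (v - cy) * depth_m / fy
            pts = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
            return pts[pts[:, 2] > 0]      # drop invalid zero-depth pixels

        # Synthetic 640x576 depth image at a constant 1.5 m:
        cloud = depth_to_pointcloud(np.full((576, 640), 1.5),
                                    fx=505.0, fy=505.0, cx=320.0, cy=288.0)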

    CWIPC Intel Realsense capture module

    CWI Point Cloud library: capture point clouds using Intel RealSense cameras. GitHub release of v6.4_stable from GitLab, for Zenodo archiving. The following repositories belong together and form the cwipc software suite:
        cwipc: 10.5281/zenodo.5779250
        cwipc_util: 10.5281/zenodo.5779368
        cwipc_codec: 10.5281/zenodo.5779374
        cwipc_kinect: 10.5281/zenodo.5779370
        cwipc_realsense2: 10.5281/zenodo.5779320
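
    For orientation, here is a minimal capture sketch using Intel's own pyrealsense2 bindings rather than the cwipc_realsense2 code path: it grabs a single depth frame and deprojects it to raw vertices, assuming a camera is attached and default settings apply:

        # Minimal single-frame capture with Intel's pyrealsense2 bindings
        # (a stand-in, not the cwipc_realsense2 capture pipeline).
        import numpy as np
        import pyrealsense2 as rs

        pipeline = rs.pipeline()
        pipeline.start()                      # default depth + colour config
        try:
            frames = pipeline.wait_for_frames()
            depth = frames.get_depth_frame()
            points = rs.pointcloud().calculate(depth)   # deproject to 3D
            verts = np.asanyarray(points.get_vertices())
            verts = verts.view(np.float32).reshape(-1, 3)
            print(f"captured {len(verts)} points")
        finally:
            pipeline.stop()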

    CWIPC-SXR: Point cloud dynamic human dataset for Social XR

    Real-time, immersive telecommunication systems are quickly becoming a reality, thanks to advances in acquisition, transmission, and rendering technologies. Point clouds in particular serve as a promising representation in this type of system, offering photorealistic rendering capabilities at low complexity. Further development of transmission, coding, and quality evaluation algorithms, though, is currently hindered by the lack of publicly available datasets that represent realistic scenarios of real-time remote communication between people. In this paper, we release a dynamic point cloud dataset that depicts humans interacting in social XR settings. Using commodity hardware, we capture a total of 45 unique sequences, according to several use cases for social XR. As part of our release, we provide annotated raw material, the resulting point cloud sequences, and an auxiliary software toolbox to acquire, process, encode, and visualize data, suitable for real-time applications. The dataset can be accessed via the following link: https://www.dis.cwi.nl/cwipc-sxr-dataset/
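
    Assuming the sequences are distributed as per-frame .ply files (a hypothetical layout; check the dataset page for the actual structure), a sketch like the following could walk one sequence and estimate its raw, uncompressed size, which is the figure that coding and transmission research cares about:

        # Hypothetical walk over one downloaded sequence; directory layout,
        # file naming, and the 15-byte-per-point estimate (xyz float32 +
        # rgb uint8) are assumptions, not the dataset specification.
        import glob
        import numpy as np
        import open3d as o3d

        frames = sorted(glob.glob("cwipc-sxr/sequence_01/*.ply"))
        total_points = 0
        for i, path in enumerate(frames):
            pts = np.asarray(o3d.io.read_point_cloud(path).points)
            total_points += len(pts)
            print(f"frame {i:04d}: {len(pts)} points")
        print(f"raw size ~{total_points * 15 / 1e6:.1f} MB uncompressed")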