    SILVR: A Synthetic Immersive Large-Volume Plenoptic Dataset

    In six-degrees-of-freedom light-field (LF) experiences, the viewer's freedom is limited by the extent to which the plenoptic function was sampled. Existing LF datasets represent only small portions of the plenoptic function, such that they either cover a small volume or have a limited field of view. Therefore, we propose a new LF image dataset, "SILVR", that allows for six-degrees-of-freedom navigation in much larger volumes while maintaining a full panoramic field of view. We rendered three different virtual scenes in various configurations, where the number of views ranges from 642 to 2226. One of these scenes (called Zen Garden) is a novel scene and is made publicly available. We chose to position the virtual cameras closely together in large cuboid and spherical organisations (2.2 m³ to 48 m³), equipped with 180° fish-eye lenses. Every view is rendered to a color image and depth map of 2048 px × 2048 px. Additionally, we present the software used to automate the multi-view rendering process, as well as a lens-reprojection tool that converts images with panoramic or fish-eye projection to a standard rectilinear (i.e., perspective) projection. Finally, we demonstrate how the proposed dataset and software can be used to evaluate LF coding/rendering techniques (in this case, for training NeRFs with instant-ngp). As such, we provide the first publicly available LF dataset for large volumes of light with full panoramic field of view.
    Comment: In 13th ACM Multimedia Systems Conference (MMSys '22), June 14-17, 2022, Athlone, Ireland. ACM, New York, NY, USA, 6 pages.
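    To illustrate the kind of lens reprojection the paper's tool performs, below is a minimal Python/NumPy sketch that resamples an ideal equidistant fish-eye image into a rectilinear (perspective) view along the optical axis. The function name, the nearest-neighbour sampling, and the ideal equidistant lens model (r = f·θ) are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def rectilinear_from_fisheye(fisheye, out_size, out_fov_deg, fisheye_fov_deg=180.0):
        """Resample a square equidistant fish-eye image into a rectilinear
        (perspective) view along the optical axis (illustrative sketch)."""
        h_in, w_in = fisheye.shape[:2]
        cx, cy = (w_in - 1) / 2.0, (h_in - 1) / 2.0
        # Equidistant model: pixel radius grows linearly with the angle
        # theta from the optical axis, r = f_fish * theta.
        f_fish = (w_in / 2.0) / np.radians(fisheye_fov_deg / 2.0)
        f_out = (out_size / 2.0) / np.tan(np.radians(out_fov_deg / 2.0))

        # Ray angles for every output pixel (camera looks down +z).
        xs = np.arange(out_size) - (out_size - 1) / 2.0
        x, y = np.meshgrid(xs, xs)
        theta = np.arctan2(np.hypot(x, y), f_out)  # angle from optical axis
        phi = np.arctan2(y, x)                     # azimuth in image plane

        # Map back into fish-eye pixel coordinates (nearest neighbour).
        r = f_fish * theta
        u = np.clip(np.round(cx + r * np.cos(phi)).astype(int), 0, w_in - 1)
        v = np.clip(np.round(cy + r * np.sin(phi)).astype(int), 0, h_in - 1)
        return fisheye[v, u]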

    Development and user evaluation of an immersive light field system for space exploration

    This paper presents the developmental work and user evaluation results of an immersive light field system built for the European Space Agency's (ESA) project "Light field-enhanced immersive teleoperation system for space station and ground control." The main aim of the project is to evaluate the usefulness and feasibility of light fields in space exploration and to compare them to other types of immersive content, such as 360° photos and point clouds. In the course of the project, light field data were captured with a robotically controlled camera and processed into a suitable format. The light field authoring process was performed, and a light field renderer capable of displaying immersive panoramic or planar light fields on modern virtual reality hardware was developed. The planetary-surface points of interest (POIs) were modeled in a laboratory environment, and three distinct test use cases utilizing them were developed. The user evaluation was held at the European Astronaut Centre (EAC) in the summer of 2023, involving prospective end users from various backgrounds. During the evaluation, questionnaires, interviews, and observation were used for data collection. The paper concludes with the evaluation results and a discussion of lessons learned and possible improvements to the light field system.
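    A renderer like the one described above must synthesize views between the captured camera positions. As a rough illustration of the simplest form of this, the Python/NumPy sketch below bilinearly blends the four nearest captured views of a planar light field; the function and its arguments are hypothetical, and a practical renderer would reproject rays per pixel using depth or proxy geometry rather than blending whole images.

    import numpy as np

    def blend_planar_views(views, sx, sy):
        """Approximate an in-between viewpoint of a planar light field by
        bilinearly blending its four nearest captured views (sketch).

        `views` maps integer grid coordinates (i, j) to equally sized
        float images; (sx, sy) is the desired fractional grid position.
        """
        i, j = int(np.floor(sx)), int(np.floor(sy))
        a, b = sx - i, sy - j  # fractional offsets inside the grid cell
        # Weights follow standard bilinear interpolation.
        # (Edge clamping omitted for brevity.)
        top = (1 - a) * views[(i, j)]     + a * views[(i + 1, j)]
        bot = (1 - a) * views[(i, j + 1)] + a * views[(i + 1, j + 1)]
        return (1 - b) * top + b * bot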

    Capture4VR: From VR Photography to VR Video

    Virtual reality (VR) enables the display of dynamic visual content with unparalleled realism and immersion. However, VR is also still a relatively young medium that requires new ways to author content, particularly for visual content that is captured from the real world. This course therefore provides a comprehensive overview of the latest progress in bringing photographs and video into VR. Ultimately, the techniques, approaches and systems we discuss aim to faithfully capture the visual appearance and dynamics of the real world, and to bring them into virtual reality to create unparalleled realism and immersion by providing freedom of head motion and motion parallax, which is a vital depth cue for the human visual system. In this half-day course, we take the audience on a journey from VR photography to VR video that began more than a century ago but has accelerated tremendously in the last five years. We discuss commercial state-of-the-art systems by Facebook, Google and Microsoft, as well as the latest research techniques and prototypes.

    OmniPhotos: Casual 360° VR Photography

    Virtual reality headsets are becoming increasingly popular, yet it remains difficult for casual users to capture immersive 360° VR panoramas. State-of-the-art approaches require capture times of usually far more than a minute and are often limited in their supported range of head motion. We introduce OmniPhotos, a novel approach for quickly and casually capturing high-quality 360° panoramas with motion parallax. Our approach requires a single sweep with a consumer 360° video camera as input, which takes less than 3 seconds to capture with a rotating selfie stick or 10 seconds handheld. This is, by an order of magnitude, the fastest capture time of any VR photography approach supporting motion parallax. We improve the visual rendering quality of our OmniPhotos by alleviating vertical distortion using a novel deformable proxy geometry, which we fit to a sparse 3D reconstruction of captured scenes. In addition, the 360° input views significantly expand the available viewing area, and thus the range of motion, compared to previous approaches. We have captured more than 50 OmniPhotos and show video results for a large variety of scenes.
    This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 66599
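    Rendering such a panorama requires choosing which captured frames along the circular camera sweep to combine for the current head position. The Python/NumPy sketch below shows one plausible way to pick the two angularly closest frames and a blend weight; the names and the simple angular heuristic are assumptions for illustration, and the actual system additionally warps each view through its deformable proxy geometry before blending.

    import numpy as np

    def nearest_sweep_frames(frame_angles, head_x, head_z):
        """Pick the two frames of a circular 360-degree sweep closest in
        angle to the viewer's head position, plus a blend weight (sketch).

        `frame_angles` are the capture angles (radians) of each frame on
        the circle; the head position is given in the circle's x-z plane.
        """
        target = np.arctan2(head_z, head_x)
        # Signed angular distance from every frame to the target angle,
        # wrapped into [-pi, pi).
        diff = (np.asarray(frame_angles) - target + np.pi) % (2 * np.pi) - np.pi
        order = np.argsort(np.abs(diff))
        i, j = order[0], order[1]
        # The closer frame gets the larger weight:
        # result = w * frame_i + (1 - w) * frame_j.
        w = np.abs(diff[j]) / (np.abs(diff[i]) + np.abs(diff[j]) + 1e-9)
        return i, j, w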

    3D Scene Geometry Estimation from 360° Imagery: A Survey

    This paper provides a comprehensive survey of pioneering and state-of-the-art 3D scene geometry estimation methodologies based on single, two, or multiple images captured under omnidirectional optics. We first revisit the basic concepts of the spherical camera model and review the most common acquisition technologies and representation formats suitable for omnidirectional (also called 360°, spherical, or panoramic) images and videos. We then survey monocular layout and depth inference approaches, highlighting the recent advances in learning-based solutions suited for spherical data. Classical stereo matching is then revisited on the spherical domain, where methodologies for detecting and describing sparse and dense features become crucial. The stereo matching concepts are then extrapolated to multiple-view camera setups, categorizing them among light fields, multi-view stereo, and structure from motion (or visual simultaneous localization and mapping). We also compile and discuss commonly adopted datasets and figures of merit indicated for each purpose, and list recent results for completeness. We conclude this paper by pointing out current and future trends.
    Comment: Published in ACM Computing Surveys.
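    As a concrete instance of the spherical camera model the survey revisits, the Python/NumPy sketch below maps an equirectangular pixel to a unit ray direction on the sphere. The axis and sign conventions chosen here are one common option among several used by datasets, so treat them as an assumption.

    import numpy as np

    def equirect_pixel_to_ray(u, v, width, height):
        """Map an equirectangular pixel (u, v) to a unit-length ray
        direction on the sphere (one common convention; sketch).

        Longitude spans [-pi, pi] across the width and latitude spans
        [pi/2, -pi/2] from the top row to the bottom row.
        """
        lon = (u + 0.5) / width * 2.0 * np.pi - np.pi   # azimuth
        lat = np.pi / 2.0 - (v + 0.5) / height * np.pi  # elevation
        # Spherical-to-Cartesian conversion; the result has unit norm.
        return np.array([np.cos(lat) * np.sin(lon),
                         np.sin(lat),
                         np.cos(lat) * np.cos(lon)])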