3 research outputs found

    Motion parallax for 360° RGBD video

    We present a method for adding parallax and real-time playback of 360° videos in Virtual Reality headsets. In current video players, the playback does not respond to translational head movement, which reduces the feeling of immersion and causes motion sickness for some viewers. Given a 360° video and its corresponding depth (provided by current stereo 360° stitching algorithms), a naive image-based rendering approach would use the depth to generate a 3D mesh around the viewer, then translate it appropriately as the viewer moves their head. However, this approach breaks at depth discontinuities, showing visible distortions, whereas cutting the mesh at such discontinuities leads to ragged silhouettes and holes at disocclusions. We address these issues by improving the given initial depth map to yield cleaner, more natural silhouettes. We rely on a three-layer scene representation, made up of a foreground layer and two static background layers, to handle disocclusions by propagating information from multiple frames for the first background layer and then inpainting for the second one. Our system works with input from many of today's most popular 360° stereo capture devices (e.g., Yi Halo or GoPro Odyssey), and works well even if the original video does not provide depth information. Our user studies confirm that our method provides a more compelling viewing experience than playback without parallax, increasing immersion while reducing discomfort and nausea.
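    As a rough illustration of the naive image-based rendering step the abstract refers to (a sketch under assumptions, not the authors' system), the snippet below lifts an equirectangular depth map to 3D points around the viewer and re-projects them for a translated head position. All function and variable names here are illustrative.

```python
# Minimal sketch of the naive baseline described in the abstract: lift an
# equirectangular depth map to a spherical point set, then re-project it for a
# viewer translated by a small head offset. Names/parameters are assumptions.
import numpy as np

def equirect_to_vertices(depth: np.ndarray) -> np.ndarray:
    """Lift an HxW equirectangular depth map (metres) to 3D points around the camera."""
    h, w = depth.shape
    # Pixel centres -> longitude/latitude on the viewing sphere.
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi        # [-pi, pi)
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi        # [pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    # Unit ray directions, scaled by per-pixel depth.
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return depth[..., None] * dirs                              # (H, W, 3)

def reproject(vertices: np.ndarray, head_offset: np.ndarray) -> np.ndarray:
    """Re-project world points for a viewer translated by head_offset (shape (3,))."""
    rel = vertices - head_offset                                # points relative to the new eye
    r = np.linalg.norm(rel, axis=-1)
    lon = np.arctan2(rel[..., 0], rel[..., 2])
    lat = np.arcsin(np.clip(rel[..., 1] / np.maximum(r, 1e-8), -1.0, 1.0))
    return np.stack([lon, lat, r], axis=-1)                     # new angles + range per pixel

# Usage: a toy 4x8 depth map and a 5 cm head translation to the right.
depth = np.full((4, 8), 2.0)
verts = equirect_to_vertices(depth)
print(reproject(verts, np.array([0.05, 0.0, 0.0])).shape)       # (4, 8, 3)
```

    The failure mode the abstract points out arises exactly here: neighbouring pixels that straddle a depth discontinuity map to points far apart in 3D, so a mesh built over them stretches across the gap, which is what the improved depth map and the three-layer representation are meant to avoid.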

    Discontinuity preserving stereo with small baseline multi-flash illumination

    Currently, sharp discontinuities in depth and partial occlusions in multiview imaging systems pose serious challenges for many dense correspondence algorithms. However, it is important for 3D reconstruction methods to preserve depth edges, as they correspond to important shape features like silhouettes, which are critical for understanding the structure of a scene. In this paper we show how active illumination algorithms can produce a rich set of feature maps that are useful in dense 3D reconstruction. We start by showing a method to compute a qualitative depth map from a single camera, which encodes relative object distances and can be used as a prior for stereo. In a multiview setup, we show that, along with depth edges, binocular half-occluded pixels can also be explicitly and reliably labeled. To demonstrate the usefulness of these feature maps, we show how they can be used in two different algorithms for dense stereo correspondence. Our experimental results show that our enhanced stereo algorithms are able to extract high-quality, discontinuity-preserving correspondence maps from scenes that are extremely challenging for conventional stereo methods.
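    The abstract does not spell out how the active illumination produces depth edges; the sketch below is an assumed illustration of the general multi-flash idea (flashes mounted close to the lens cast thin shadows just beyond depth discontinuities), not the paper's implementation, and all names and thresholds are made up for the example.

```python
# Rough sketch of multi-flash depth-edge detection: pixels that are dark in one
# flash image but bright in the shadow-free max-composite lie in a cast shadow,
# and the lit-to-shadow transition along that flash's direction marks a depth edge.
import numpy as np

def depth_edge_map(flash_images, flash_dirs, ratio_thresh=0.7):
    """flash_images: list of HxW float images, one per flash position.
    flash_dirs: matching (dy, dx) integer offsets pointing from a pixel toward
    its flash. Returns a boolean HxW depth-edge map."""
    imax = np.maximum.reduce(flash_images) + 1e-8              # shadow-free composite
    edges = np.zeros(imax.shape, dtype=bool)
    for img, (dy, dx) in zip(flash_images, flash_dirs):
        ratio = img / imax                                      # ~1 when lit, low in cast shadows
        shadow = ratio < ratio_thresh
        # A depth edge sits where a shadowed pixel's neighbour *toward* the flash
        # is lit: stepping away from the flash we fall off the occluder into its
        # shadow. np.roll(shadow, (-dy, -dx))[p] equals shadow[p + (dy, dx)].
        lit_toward_flash = ~np.roll(shadow, shift=(-dy, -dx), axis=(0, 1))
        edges |= shadow & lit_toward_flash
    return edges

# Usage with a toy scene: a left flash casts a shadow strip just right of an
# occluder, a right flash casts one just left of it.
left = np.ones((8, 8)); left[2:6, 5] = 0.1
right = np.ones((8, 8)); right[2:6, 1] = 0.1
print(depth_edge_map([left, right], [(0, -1), (0, 1)]).sum())   # 8 edge pixels
```

    Per the abstract, feature maps of this kind (depth edges plus half-occlusion labels) are then fed into the dense stereo correspondence algorithms as priors.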