17 research outputs found

    Parallax360: Stereoscopic 360° Scene Representation for Head-Motion Parallax

    Get PDF
    We propose a novel 360° scene representation for converting real scenes into stereoscopic 3D virtual reality content with head-motion parallax. Our image-based scene representation enables efficient synthesis of novel views with six degrees of freedom (6-DoF) by fusing motion fields at two scales: (1) disparity motion fields carry implicit depth information and are robustly estimated from multiple laterally displaced auxiliary viewpoints, and (2) pairwise motion fields enable real-time flow-based blending, which improves the visual fidelity of results by minimizing ghosting and view transition artifacts. Based on our scene representation, we present an end-to-end system that captures real scenes with a robotic camera arm, processes the recorded data, and finally renders the scene in a head-mounted display in real time (more than 40 Hz). Our approach is the first to support head-motion parallax when viewing real 360° scenes. We demonstrate compelling results that illustrate the enhanced visual experience, and hence sense of immersion, achieved with our approach compared to widely used stereoscopic panoramas.
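
    The flow-based blending step can be illustrated with a short sketch. The following is a minimal, hypothetical Python/NumPy example of synthesizing an intermediate view from two captures and a precomputed pairwise flow field; function and variable names are illustrative, and this is not the authors' implementation.

        # Minimal sketch: warp captures A and B toward an intermediate viewpoint
        # using fractions of the pairwise flow, then cross-fade. Flow alignment
        # is what suppresses ghosting and view transition artifacts.
        import numpy as np
        import cv2  # OpenCV, assumed available for dense remapping

        def flow_blend(img_a, img_b, flow_ab, w):
            # flow_ab: dense flow from A to B, shape (H, W, 2); w in [0, 1].
            h, wid = img_a.shape[:2]
            gx, gy = np.meshgrid(np.arange(wid), np.arange(h))
            # I_w(x) ~= I_A(x - w*F(x)): sample A a fraction w back along the flow.
            ax = (gx - w * flow_ab[..., 0]).astype(np.float32)
            ay = (gy - w * flow_ab[..., 1]).astype(np.float32)
            warped_a = cv2.remap(img_a, ax, ay, cv2.INTER_LINEAR)
            # I_w(x) ~= I_B(x + (1-w)*F(x)): sample B the remaining fraction ahead.
            bx = (gx + (1.0 - w) * flow_ab[..., 0]).astype(np.float32)
            by = (gy + (1.0 - w) * flow_ab[..., 1]).astype(np.float32)
            warped_b = cv2.remap(img_b, bx, by, cv2.INTER_LINEAR)
            # Cross-fade the two flow-aligned images.
            return ((1.0 - w) * warped_a + w * warped_b).astype(img_a.dtype)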

    Calibration and disparity maps for a depth camera based on a four-lens device

    Get PDF
    We propose a depth camera model based on a four-lens device. The device is used to validate alternative approaches for calibrating multiview cameras and for computing disparity or depth images. The calibration method builds on previous work in which the principles of variable homography were extended to three-dimensional (3-D) measurement. Here, calibration is performed between two contiguous views obtained on the same image sensor, which leads us to a simplified calibration procedure that exploits the properties of the variable homography. The second part addresses new principles for obtaining disparity images without any matching: a fast contour propagation algorithm is proposed that requires no structured or random pattern projection. These principles are developed in the framework of quality control by vision, for inspection under natural illumination. Because scene photometry is preserved, other standard controls, such as caliper measurements, shape recognition, or barcode reading, can be performed alongside 3-D measurements. The approaches presented here are evaluated. First, we show that rapid calibration is relevant for devices mounted with multiple lenses; second, experiments on synthetic and real data validate our method for computing depth images.
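
    As a point of reference for the disparity-to-depth step, here is a minimal, hypothetical Python sketch of the standard rectified-stereo relation Z = f * B / d; the parameter names are illustrative, and the paper's variable-homography formulation is not reproduced here.

        # Minimal sketch: convert a disparity map (pixels) between two contiguous
        # sub-views into metric depth, given focal length (pixels) and baseline (m).
        import numpy as np

        def disparity_to_depth(disparity_px, focal_px, baseline_m):
            depth = np.full(disparity_px.shape, np.inf)
            valid = disparity_px > 0  # zero disparity: infinitely far or invalid
            depth[valid] = focal_px * baseline_m / disparity_px[valid]
            return depth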

    Light ļ¬eld panorama by a plenoptic camera

    Get PDF
    The consumer-grade plenoptic camera Lytro has drawn considerable interest from both academia and industry. However, its low resolution in both the spatial and angular domains prevents it from being used for fine and detailed light field acquisition. This paper proposes to use a plenoptic camera as an image scanner and to perform light field stitching to increase the size of the acquired light field data. We consider a simplified plenoptic camera model comprising a pinhole camera moving behind a thin lens. Based on this model, we describe how to perform light field acquisition and stitching under two different scenarios: camera translation, and camera translation combined with rotation. In both cases, we assume the camera motion to be known. In the case of camera translation, we show how the acquired light fields should be resampled to increase the spatial range and ultimately obtain a wider field of view. In the case of camera translation and rotation, the camera motion is calculated such that the light fields can be directly stitched and extended in the angular domain. Simulation results verify our approach and demonstrate the potential of the motion model for further light field applications such as registration and super-resolution.
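
    To make the translation case concrete, the following is a minimal, hypothetical NumPy sketch of stitching two light fields captured with a known lateral shift, assuming for simplicity that the shift aligns with an integer number of spatial samples (the paper's resampling handles the general case); the array layout and names are assumptions, not the authors' code.

        # Minimal sketch: extend the spatial (s, t) range of a two-plane light
        # field lf[u, v, s, t, c] by placing a second, laterally shifted capture
        # next to the first and averaging where the two captures overlap.
        import numpy as np

        def stitch_translated(lf_a, lf_b, shift_s):
            nu, nv, ns, nt, c = lf_a.shape
            out = np.zeros((nu, nv, ns + shift_s, nt, c))
            out[:, :, :ns] = lf_a                   # first capture
            out[:, :, shift_s:shift_s + ns] = lf_b  # second capture, shifted
            # Overlap region: simple average of the two captures.
            out[:, :, shift_s:ns] = 0.5 * (lf_a[:, :, shift_s:ns] +
                                           lf_b[:, :, :ns - shift_s])
            return out.astype(lf_a.dtype)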

    Epipolar Plane Image Rectification and Flat Surface Detection in Light Field

    Get PDF

    3D Scene Geometry Estimation from 360° Imagery: A Survey

    Full text link
    This paper provides a comprehensive survey of pioneering and state-of-the-art 3D scene geometry estimation methodologies based on single, two, or multiple images captured under omnidirectional optics. We first revisit the basic concepts of the spherical camera model and review the most common acquisition technologies and representation formats suitable for omnidirectional (also called 360°, spherical, or panoramic) images and videos. We then survey monocular layout and depth inference approaches, highlighting recent advances in learning-based solutions suited to spherical data. Classical stereo matching is then revisited in the spherical domain, where methodologies for detecting and describing sparse and dense features become crucial. The stereo matching concepts are then extrapolated to multiple-view camera setups, categorized among light fields, multi-view stereo, and structure from motion (or visual simultaneous localization and mapping). We also compile and discuss commonly adopted datasets and figures of merit for each purpose, and list recent results for completeness. We conclude by pointing out current and future trends. (Published in ACM Computing Surveys.)
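
    As a quick illustration of the spherical camera model the survey opens with, here is a minimal, hypothetical Python sketch of the usual equirectangular pixel-to-ray mapping; axis conventions and angle origins vary across papers, so the ones below are assumptions.

        # Minimal sketch: map a pixel (u, v) of a width x height equirectangular
        # image to a unit ray direction on the viewing sphere.
        import numpy as np

        def equirect_pixel_to_ray(u, v, width, height):
            lon = (u / width) * 2.0 * np.pi - np.pi    # longitude in [-pi, pi)
            lat = np.pi / 2.0 - (v / height) * np.pi   # latitude in [-pi/2, pi/2]
            return np.array([np.cos(lat) * np.sin(lon),   # x: right
                             np.sin(lat),                 # y: up
                             np.cos(lat) * np.cos(lon)])  # z: forward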

    Improved Displaying System for HMD with Focusing on Gazing Point Using Photographed Panorama Light Field

    Get PDF
    We improve a display system for head-mounted displays (HMDs) that shows a photographed image focused on the user's gazing point. By showing an image focused on the gazing point, we display images that are closer to natural human vision. The refocused image is generated from a rendered and trimmed panoramic light field image. Our system displays a refocused image according to the depth at the gazing point, obtained from an HMD with gaze tracking. By combining our system with a depth estimation method, we generate a consistent depth map across multiple light fields, which allows the system to display an image that more correctly matches the gazing point. We also ran an experiment on whether displaying images focused on the gazing point can augment the user's depth perception, and show that the system, using gaze detection, can extend the user's depth perception compared with ordinary displays.
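
    For context on the refocusing step, the following is a minimal, hypothetical NumPy sketch of standard shift-and-add light field refocusing, where the focal depth would be chosen from the gazed depth; this is a generic formulation, not the authors' renderer, and all names are illustrative.

        # Minimal sketch: refocus a light field lf[u, v, s, t, c] of sub-aperture
        # images by shifting each view proportionally to its angular offset from
        # the center view and averaging (shift-and-add).
        import numpy as np

        def refocus(lf, alpha):
            # alpha: relative focal depth; alpha = 1.0 keeps the captured plane.
            nu, nv, ns, nt, c = lf.shape
            acc = np.zeros((ns, nt, c))
            cu, cv = (nu - 1) / 2.0, (nv - 1) / 2.0
            scale = 1.0 - 1.0 / alpha
            for iu in range(nu):
                for iv in range(nv):
                    ds = int(round((iu - cu) * scale))
                    dt = int(round((iv - cv) * scale))
                    acc += np.roll(lf[iu, iv], (ds, dt), axis=(0, 1))
            return (acc / (nu * nv)).astype(lf.dtype)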