2,959 research outputs found

    Learning to Synthesize a 4D RGBD Light Field from a Single Image

    We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point. Please see our supplementary video at https://youtu.be/yLCvWoQLnms
    Comment: International Conference on Computer Vision (ICCV) 2017
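    As a rough illustration of the Lambertian rendering stage described above, the NumPy sketch below warps a central RGB view to a neighboring angular position using a per-pixel disparity map. The function name render_lambertian, the nearest-neighbor sampling, and the use of a single central disparity map (rather than per-ray depths) are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def render_lambertian(center_rgb, disparity, u, v):
    """Warp the central view to angular position (u, v) under a
    Lambertian assumption: a point with disparity d seen at (x, y)
    in view (u, v) is sampled from the center view at
    (x + u*d, y + v*d).  Nearest-neighbor sampling keeps the sketch
    short; a real pipeline would interpolate and handle occlusions."""
    h, w, _ = center_rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + u * disparity), 0, w - 1).astype(int)
    src_y = np.clip(np.round(ys + v * disparity), 0, h - 1).astype(int)
    return center_rgb[src_y, src_x]

# Toy usage: synthesize a 3x3 grid of views from one image.
img = np.random.rand(64, 64, 3)
disp = np.full((64, 64), 2.0)                # fronto-parallel plane
views = [render_lambertian(img, disp, u, v)
         for u in (-1, 0, 1) for v in (-1, 0, 1)]
```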

    Light Field Blind Motion Deblurring

    We study the problem of deblurring light fields of general 3D scenes captured under 3D camera motion and present both theoretical and practical contributions. By analyzing the motion-blurred light field in the primal and Fourier domains, we develop intuition into the effects of camera motion on the light field, show the advantages of capturing a 4D light field instead of a conventional 2D image for motion deblurring, and derive simple methods of motion deblurring in certain cases. We then present an algorithm to blindly deblur light fields of general scenes without any estimation of scene geometry, and demonstrate that we can recover both the sharp light field and the 3D camera motion path of real and synthetically blurred light fields.
    Comment: To be presented at CVPR 2017
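    For the "simple methods in certain cases" mentioned above, one standard non-blind baseline is Wiener deconvolution applied per sub-aperture view once the camera path (and hence the blur kernel) is known. The NumPy sketch below shows that baseline with hypothetical names; it is not the paper's blind algorithm, which additionally recovers the camera motion path.

```python
import numpy as np

def wiener_deblur(blurred, kernel, snr=100.0):
    """Non-blind Wiener deconvolution in the Fourier domain:
    conj(K) / (|K|^2 + 1/SNR) applied to the blurred spectrum."""
    K = np.fft.fft2(kernel, s=blurred.shape)
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Toy usage: blur one sub-aperture view with a horizontal motion
# kernel (a 9-pixel camera translation), then recover it.
rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
kernel = np.zeros((128, 128))
kernel[0, :9] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel)))
recovered = wiener_deblur(blurred, kernel)
```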

    Accurate Light Field Depth Estimation with Superpixel Regularization over Partially Occluded Regions

    Depth estimation is a fundamental problem for light field photography applications. Numerous methods have been proposed in recent years, which either focus on crafting cost terms for more robust matching, or on analyzing the geometry of scene structures embedded in the epipolar-plane images. Significant improvements have been made in terms of overall depth estimation error; however, current state-of-the-art methods still show limitations in handling intricate occluding structures and complex scenes with multiple occlusions. To address these challenging issues, we propose a highly effective depth estimation framework which focuses on regularizing the initial label confidence map and edge strength weights. Specifically, we first detect partially occluded boundary regions (POBR) via superpixel-based regularization. A series of shrinkage/reinforcement operations is then applied to the label confidence map and edge strength weights over the POBR. We show that after these weight manipulations, even a low-complexity weighted least squares model can produce much better depth estimation than state-of-the-art methods in terms of average disparity error rate, occlusion boundary precision-recall rate, and the preservation of intricate visual features.
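    To make the closing claim concrete, a generic weighted least squares refinement solves min_d sum_i c_i (d_i - d0_i)^2 + lambda * sum_{i~j} w_ij (d_i - d_j)^2, where c is the label confidence map and w the edge strength weights. The SciPy sketch below implements that generic model on a 4-connected grid with hypothetical names; the paper's contribution is the POBR-specific manipulation of c and w, not this solver.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_refine(init_depth, confidence, edge_weight, lam=1.0):
    """Solve (C + lam*L) d = C d0, the normal equations of the WLS
    objective above; L is the graph Laplacian built from per-edge
    weights (here: the min of the two adjacent pixel weights)."""
    h, w = init_depth.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals = [], [], []

    def add_pair(a, b, wt):
        # Each term wt*(d_a - d_b)^2 contributes a 2x2 block to L.
        rows.extend([a, a, b, b]); cols.extend([a, b, a, b])
        vals.extend([wt, -wt, -wt, wt])

    wts = lam * edge_weight
    for a, b, wt in zip(idx[:, :-1].ravel(), idx[:, 1:].ravel(),
                        np.minimum(wts[:, :-1], wts[:, 1:]).ravel()):
        add_pair(a, b, wt)                   # horizontal neighbors
    for a, b, wt in zip(idx[:-1, :].ravel(), idx[1:, :].ravel(),
                        np.minimum(wts[:-1, :], wts[1:, :]).ravel()):
        add_pair(a, b, wt)                   # vertical neighbors

    L = sp.coo_matrix((vals, (rows, cols)), shape=(n, n))
    A = (L + sp.diags(confidence.ravel())).tocsc()
    d = spsolve(A, confidence.ravel() * init_depth.ravel())
    return d.reshape(h, w)
```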

    Depth Estimation Through a Generative Model of Light Field Synthesis

    Light field photography captures rich structural information that may facilitate a number of traditional image processing and computer vision tasks. A crucial ingredient in such endeavors is accurate depth recovery. We present a novel framework that allows the recovery of a high-quality continuous depth map from light field data. To this end we propose a generative model of a light field that is fully parametrized by its corresponding depth map. The model allows for the integration of powerful regularization techniques such as a non-local means prior, facilitating accurate depth map estimation.
    Comment: German Conference on Pattern Recognition (GCPR) 2016
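    A light field that is fully parametrized by its depth map can also be run "in reverse" to score depth hypotheses: render the model for each candidate disparity and keep, per pixel, the hypothesis under which all views agree best. The plane-sweep sketch below (NumPy, hypothetical names) illustrates this inverse use with winner-take-all selection; the paper instead optimizes a continuous depth map with regularizers such as a non-local means prior.

```python
import numpy as np

def plane_sweep_depth(views, uv_coords, disparities):
    """For each disparity hypothesis d, warp every sub-aperture view
    (u, v) back to the center by sampling at (x + u*d, y + v*d) and
    measure photoconsistency as the variance across the warped views;
    low variance means the generative model explains the data well."""
    disparities = np.asarray(disparities, dtype=float)
    h, w = views[0].shape
    ys, xs = np.mgrid[0:h, 0:w]
    cost = np.zeros((len(disparities), h, w))
    for k, d in enumerate(disparities):
        warped = []
        for img, (u, v) in zip(views, uv_coords):
            sx = np.clip(np.round(xs + u * d), 0, w - 1).astype(int)
            sy = np.clip(np.round(ys + v * d), 0, h - 1).astype(int)
            warped.append(img[sy, sx])
        cost[k] = np.var(np.stack(warped), axis=0)
    return disparities[np.argmin(cost, axis=0)]  # winner-take-all
```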

    Computational Schlieren Photography with Light Field Probes

    We introduce a new approach to capturing refraction in transparent media, which we call light field background oriented Schlieren photography. By optically coding the locations and directions of light rays emerging from a light field probe, we can capture changes of the refractive index field between the probe and a camera or an observer. Our prototype capture setup consists of inexpensive off-the-shelf hardware, including inkjet-printed transparencies, lenslet arrays, and a conventional camera. By carefully encoding the color and intensity variations of 4D light field probes, we show how to code both spatial and angular information of refractive phenomena. Such coding schemes are demonstrated to allow for a new, single-image approach to reconstructing transparent surfaces, such as thin solids or surfaces of fluids. The captured visual information is used to reconstruct refractive surface normals and a sparse set of control points independently from a single photograph.
    Funding: Natural Sciences and Engineering Research Council of Canada; Alfred P. Sloan Foundation; United States Defense Advanced Research Projects Agency Young Faculty Award
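    As a toy version of the decoding step: suppose (hypothetically) the probe's red channel varied linearly with a ray's horizontal angle and the green channel with its vertical angle. The observed color of each camera pixel would then directly reveal the per-ray deflection introduced by the refractive medium, from which surface normals can be recovered via Snell's law. The sketch below uses that made-up linear code and made-up names; the actual probes code spatial as well as angular information.

```python
import numpy as np

def decode_deflection(observed_rgb, max_angle_deg=5.0):
    """Invert a hypothetical linear color code: R = 0.5 maps to a
    0-degree horizontal ray angle, 0.0/1.0 to -/+ max_angle_deg,
    and G encodes the vertical angle the same way."""
    theta_x = (observed_rgb[..., 0] - 0.5) * 2.0 * max_angle_deg
    theta_y = (observed_rgb[..., 1] - 0.5) * 2.0 * max_angle_deg
    return theta_x, theta_y

# Toy usage: with no refractive medium, straight rays observe the
# code for zero deflection everywhere.
flat = np.full((4, 4, 3), 0.5)
tx, ty = decode_deflection(flat)   # both arrays are ~0
```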

    Optomotor Swimming in Larval Zebrafish Is Driven by Global Whole-Field Visual Motion and Local Light-Dark Transitions

    Stabilizing gaze and position within an environment constitutes an important task for the nervous system of many animals. The optomotor response (OMR) is a reflexive behavior, present across many species, in which animals move in the direction of perceived whole-field visual motion, thereby stabilizing themselves with respect to the visual environment. Although the OMR has been extensively used to probe visuomotor neuronal circuitry, the exact visual cues that elicit the behavior remain unidentified. In this study, we use larval zebrafish to identify spatio-temporal visual features that robustly elicit forward OMR swimming. These cues consist of a local, forward-moving OFF (light-to-dark) edge together with ON/OFF-symmetric, similarly directed global motion. Imaging experiments reveal neural units specifically activated by the forward-moving light-dark transition. We conclude that the OMR is driven not just by whole-field motion but by the interplay between global and local visual stimuli, where the latter exhibits a strong light-dark asymmetry.