
    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    A variety of techniques, such as light field, structured illumination, and time-of-flight (TOF) imaging, are commonly used for depth acquisition in consumer imaging, robotics, and many other applications. Unfortunately, each technique suffers from its own limitations, preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages, such as synthetic aperture refocusing, with TOF imaging advantages, such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single-frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function.
    Comment: 9 pages, 8 figures, accepted to 3DV 2015
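    The paper's central primitive, synthetic aperture refocusing applied to per-view TOF measurements, reduces to shift-and-sum over complex images whose phase encodes depth. Below is a minimal sketch assuming a (U, V, H, W) array of complex sub-aperture TOF images and per-view baselines in pixels; the function name, array layout, and 50 MHz modulation frequency are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def refocus_depth_field(views, baselines, slope, f_mod=50e6):
    """Shift-and-sum synthetic-aperture refocusing of complex TOF images.

    views:     (U, V, H, W) complex array, amplitude * exp(1j * phase)
    baselines: (U, V, 2) camera offsets in pixels
    slope:     disparity per unit baseline, selecting the refocus plane
    """
    U, V, H, W = views.shape
    acc = np.zeros((H, W), dtype=complex)
    for u in range(U):
        for v in range(V):
            dy, dx = slope * baselines[u, v]
            # integer shifts keep the sketch short; subpixel would interpolate
            acc += np.roll(views[u, v], (int(round(dy)), int(round(dx))),
                           axis=(0, 1))
    acc /= U * V
    c = 3e8  # speed of light, m/s
    depth = np.angle(acc) * c / (4 * np.pi * f_mod)  # TOF phase -> meters
    return depth, np.abs(acc)
```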

    Single exposure 3D imaging of dusty plasma clusters

    We have worked out the details of a single-camera, single-exposure method to perform three-dimensional imaging of a finite particle cluster. The procedure is based on the plenoptic imaging principle and utilizes a commercial Lytro light field still camera. We demonstrate the capabilities of our technique on a single-layer particle cluster in a dusty plasma, where the camera is inclined at a small angle to the particle layer. The reconstruction of the third coordinate (depth) is found to be accurate, and even shadowing particles can be identified.
    Comment: 6 pages, 7 figures, submitted to Rev. Sci. Instrum.
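    A plausible reading of the depth-recovery step is depth-from-focus: render a focal stack from the Lytro data, then assign each detected particle the slice in which it appears sharpest. A minimal sketch under that assumption; the Laplacian focus measure, window size, and function name are illustrative, not the authors' exact procedure.

```python
import numpy as np

def particle_depths(focal_stack, centers, win=7):
    """Pick the sharpest focal slice per particle as a depth proxy.

    focal_stack: (N, H, W) images refocused at N known depths
    centers:     iterable of (row, col) particle positions from 2D detection
    """
    N, _, _ = focal_stack.shape
    half = win // 2
    depths = []
    for r, c in centers:
        patch = focal_stack[:, max(r - half, 0):r + half + 1,
                               max(c - half, 0):c + half + 1]
        # discrete Laplacian as a simple, local focus measure
        lap = (np.roll(patch, 1, 1) + np.roll(patch, -1, 1)
               + np.roll(patch, 1, 2) + np.roll(patch, -1, 2) - 4 * patch)
        depths.append(int(lap.reshape(N, -1).var(axis=1).argmax()))
    return depths
```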

    Exploring plenoptic properties of correlation imaging with chaotic light

    In a setup illuminated by chaotic light, we consider different schemes that make it possible to perform imaging by measuring second-order intensity correlations. The most relevant feature of the proposed protocols is the ability to perform plenoptic imaging, namely, to reconstruct the geometrical path of light propagating through the system by imaging both the object and the focusing element. This property allows encoding, in a single data acquisition, both multi-perspective images of the scene and the light distribution in different planes between the scene and the focusing element. We unveil the plenoptic properties of three different setups, explore their refocusing potential, and discuss their practical applications.
    Comment: 9 pages, 4 figures
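    Whatever the specific setup, the underlying estimator is the correlation of intensity fluctuations between two detector planes, accumulated over many chaotic-light frames. A minimal sketch of that estimator, assuming synchronized frame stacks from the two arms and small detector arrays (the output is four-dimensional); names and shapes are illustrative.

```python
import numpy as np

def correlation_image(frames_a, frames_b):
    """Second-order correlation Gamma = <I_a I_b> - <I_a><I_b>.

    frames_a: (T, Ha, Wa) intensities on the object-arm detector
    frames_b: (T, Hb, Wb) intensities on the focusing-element-arm detector
    Returns a (Ha, Wa, Hb, Wb) array of fluctuation correlations.
    """
    da = frames_a - frames_a.mean(axis=0)   # intensity fluctuations, arm a
    db = frames_b - frames_b.mean(axis=0)   # intensity fluctuations, arm b
    return np.tensordot(da, db, axes=(0, 0)) / frames_a.shape[0]
```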

    3D Face Reconstruction from Light Field Images: A Model-free Approach

    Reconstructing 3D facial geometry from a single RGB image has recently attracted wide research interest. However, it remains an ill-posed problem, and most methods rely on prior models, which undermines the accuracy of the recovered 3D faces. In this paper, we exploit the Epipolar Plane Images (EPIs) obtained from light field cameras and learn CNN models that recover horizontal and vertical 3D facial curves from the respective horizontal and vertical EPIs. Our 3D face reconstruction network (FaceLFnet) comprises a densely connected architecture that learns accurate 3D facial curves from low-resolution EPIs. To train the proposed FaceLFnets from scratch, we synthesize photo-realistic light field images from 3D facial scans. The curve-by-curve 3D face estimation approach allows the networks to learn from only 14K images of 80 identities, which still comprise over 11 million EPIs/curves. The estimated facial curves are merged into a single point cloud, to which a surface is fitted to obtain the final 3D face. Our method is model-free, requires only a few training samples to learn FaceLFnet, and can reconstruct 3D faces with high accuracy from single light field images under varying poses, expressions, and lighting conditions. Comparisons on the BU-3DFE and BU-4DFE datasets show that our method reduces reconstruction errors by over 20% compared to the recent state of the art.
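    The EPI representation the networks consume is simply a 2D slice of the 4D light field in which scene depth shows up as the slope of lines. A minimal sketch of horizontal-EPI extraction, assuming a (U, V, H, W) grayscale light field; vertical EPIs follow symmetrically by slicing along v and image columns.

```python
import numpy as np

def horizontal_epis(lightfield, v_center=None):
    """Extract one horizontal EPI per image row.

    lightfield: (U, V, H, W) grayscale views on a regular camera grid
    Returns an (H, U, W) array: for each row y, the stack of that row
    across the U horizontal views; depth maps to line slope in each EPI.
    """
    U, V, H, W = lightfield.shape
    v = V // 2 if v_center is None else v_center  # central vertical view
    return np.transpose(lightfield[:, v, :, :], (1, 0, 2))
```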

    Structured Light-Field Focusing 3D Density Measurements of A Supersonic Cone

    This study describes three-dimensional (3D) quantitative visualization of the density field in a supersonic flow around a cone spike. Measurements of the density gradient are conducted in a supersonic wind tunnel facility at the Propulsion and Energy Research Laboratory at the University of Central Florida using Structured Light-Field Focusing Schlieren (SLLF). Conventional schlieren and shadowgraph techniques require a complicated optical system, and the visualizable area is limited by the effective diameter of the lenses and mirrors. Although SLLF belongs to the same family as schlieren photography, it is capable of non-intrusive turbulent-flow measurement with relatively low-cost, easy-to-set-up instruments. In this technique, a cross-sectional plane of the flow field parallel to the flow can be observed, whereas other schlieren methods measure density gradients along the line of sight, i.e., they record the integrated density distribution caused by discontinuous flow features. To reconstruct a 3D model of the shock structure, two-dimensional (2D) images are captured and processed in MATLAB. The ultimate goal of this study is to introduce the novel SLLF technique and to present quantitative 3D shock structures generated around a cone spike, revealing the interaction between the free-stream flow and the high-pressure region.
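    The abstract does not detail the MATLAB processing, but the reconstruction step it names, assembling 2D cross-sectional images into a 3D model, can be sketched as stacking the per-plane gradient images into a volume and integrating one gradient component. The sketch below assumes streamwise density gradients, a uniform pixel pitch, and a free-stream density as the integration constant, and it ignores calibration (e.g., via the Gladstone-Dale relation); all of this is illustrative, not the authors' pipeline.

```python
import numpy as np

def density_from_slices(grad_slices, dx, rho_ref=1.225):
    """Stack SLLF slices into a volume and integrate to relative density.

    grad_slices: (Z, H, W) per-plane d(rho)/dx images from one SLLF sweep
    dx:          streamwise pixel pitch in meters
    rho_ref:     assumed free-stream density (integration constant), kg/m^3
    """
    volume = np.asarray(grad_slices, dtype=float)   # (Z, H, W) volume
    # cumulative integration along the streamwise (W) axis
    return rho_ref + np.cumsum(volume, axis=2) * dx
```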

    Light Field Blind Motion Deblurring

    We study the problem of deblurring light fields of general 3D scenes captured under 3D camera motion and present both theoretical and practical contributions. By analyzing the motion-blurred light field in the primal and Fourier domains, we develop intuition into the effects of camera motion on the light field, show the advantages of capturing a 4D light field instead of a conventional 2D image for motion deblurring, and derive simple methods of motion deblurring in certain cases. We then present an algorithm to blindly deblur light fields of general scenes without any estimation of scene geometry, and demonstrate that we can recover both the sharp light field and the 3D camera motion path from real and synthetically blurred light fields.
    Comment: To be presented at CVPR 2017
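    The forward model the analysis rests on, in the simplest case of in-plane camera translation and a scene near a single depth plane, says the blurred light field is the time average of shifted copies of the sharp one. A minimal sketch of that simulation; the translation-only path, single disparity slope, and integer shifts are simplifying assumptions, not the paper's general 3D-motion model.

```python
import numpy as np

def blur_lightfield(lf, path, slope=1.0):
    """Simulate motion blur as a time average over a camera path.

    lf:    (U, V, H, W) sharp light field
    path:  (T, 2) in-plane camera offsets (dy, dx) over the exposure
    slope: disparity per unit camera offset (fixes the scene depth plane)
    """
    acc = np.zeros(lf.shape, dtype=float)
    for dy, dx in path:
        shift = (int(round(slope * dy)), int(round(slope * dx)))
        # each pose contributes a spatially shifted copy of every view
        acc += np.roll(lf, shift, axis=(2, 3))
    return acc / len(path)
```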