
    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    A variety of techniques such as light field, structured illumination, and time-of-flight (TOF) imaging are commonly used for depth acquisition in consumer imaging, robotics, and many other applications. Unfortunately, each technique suffers from its own limitations, preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages, such as synthetic aperture refocusing, with TOF imaging advantages, such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single-frequency TOF phase unwrapping. Utilizing spatial, angular, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function.
    Comment: 9 pages, 8 figures, Accepted to 3DV 201
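    As a rough illustration of the synthetic aperture refocusing that depth fields enable, the sketch below shift-and-adds per-view ToF measurements stored as complex values (amplitude times exp(i*phase)); the (U, V, X, Y) layout, the linear disparity model, and all names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: synthetic aperture refocusing of a depth field.
# Assumes L is a (U, V, X, Y) complex array of per-view ToF measurements,
# encoded as amplitude * exp(1j * phase); not the paper's code.
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus_depth_field(L, slope):
    """Shift-and-add refocusing: translate each angular view (u, v) by a
    disparity proportional to its offset from the central view, then average.
    `slope` selects the focal plane (pixels of disparity per unit angle)."""
    U, V, X, Y = L.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros((X, Y), dtype=complex)
    for u in range(U):
        for v in range(V):
            d = (slope * (u - u0), slope * (v - v0))
            # shift real and imaginary parts separately for broad SciPy support
            acc += nd_shift(L[u, v].real, d) + 1j * nd_shift(L[u, v].imag, d)
    acc /= U * V
    # the refocused amplitude gives the synthetic-aperture image; the
    # refocused phase still encodes depth at the chosen focal plane
    return np.abs(acc), np.angle(acc)
```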

    Light field super resolution through controlled micro-shifts of light field sensor

    Light field cameras enable new capabilities, such as post-capture refocusing and aperture control, by capturing the directional and spatial distribution of light rays in space. Micro-lens array based light field camera designs are often preferred due to their light transmission efficiency, cost-effectiveness, and compactness. One drawback of micro-lens array based light field cameras is low spatial resolution, due to the fact that a single sensor is shared to capture both spatial and angular information. To address the low spatial resolution issue, we present a light field imaging approach where multiple light fields are captured and fused to improve the spatial resolution. For each capture, the light field sensor is shifted by a pre-determined fraction of a micro-lens size using an XY translation stage for optimal performance.
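    A minimal sketch of the fusion step, assuming an exact K x K grid of captures, each shifted by 1/K of the micro-lens pitch so the samples interleave directly onto a finer grid; the function and variable names are illustrative, and a real pipeline would also need to handle registration error.

```python
# Hypothetical sketch of micro-shift fusion: interleave K x K sub-pixel
# shifted captures of the same view onto a K-times-finer sampling grid.
import numpy as np

def fuse_microshift_captures(captures, K):
    """captures[i][j]: (H, W) image taken with the sensor shifted by
    (i/K, j/K) of a micro-lens pitch; each capture fills one sub-pixel
    phase of the output grid."""
    H, W = captures[0][0].shape
    hi = np.zeros((K * H, K * W), dtype=captures[0][0].dtype)
    for i in range(K):
        for j in range(K):
            hi[i::K, j::K] = captures[i][j]
    return hi
```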

    Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks

    Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA based light field cameras is low spatial resolution, which is due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach. Both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
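    As a sketch of what such a learning-based enhancement can look like, here is a small SRCNN-style residual network in PyTorch applied to a single sub-aperture view; the layer sizes and the per-view residual design are assumptions for illustration, not the architecture from the paper.

```python
# Hypothetical sketch: per-view spatial super-resolution with a small CNN.
import torch
import torch.nn as nn

class LFSpatialSR(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),  # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x):
        # x: (N, 1, H, W) bicubically upsampled sub-aperture view;
        # the network predicts only the missing high-frequency residual
        return x + self.body(x)

net = LFSpatialSR()
enhanced = net(torch.rand(1, 1, 128, 128))  # usage on a dummy view
```

    Angular enhancement can be framed analogously, with intermediate views (rather than pixels) synthesized between captured ones.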

    Direct imaging of a digital-micromirror device for configurable microscopic optical potentials

    Programmable spatial light modulators (SLMs) have significantly advanced the configurable optical trapping of particles. Typically, these devices are utilised in the Fourier plane of an optical system, but direct imaging of an amplitude pattern can potentially result in increased simplicity and computational speed. Here we demonstrate high-resolution direct imaging of a digital micromirror device (DMD) at high numerical apertures (NA), which we apply to the optical trapping of a Bose-Einstein condensate (BEC). We utilise a (1200 x 1920) pixel DMD and commercially available 0.45 NA microscope objectives, finding that atoms confined in a hybrid optical/magnetic or all-optical potential can be patterned using repulsive blue-detuned (532 nm) light with 630(10) nm full-width at half-maximum (FWHM) resolution, within 5% of the diffraction limit. The result is near-arbitrary control of the density of the BEC without the need for expensive custom optics. We also introduce the technique of time-averaged DMD potentials, demonstrating the ability to produce multiple grayscale levels with minimal heating of the atomic cloud by utilising the high switching speed (20 kHz maximum) of the DMD. These techniques will enable the realization and control of diverse optical potentials for superfluid dynamics and atomtronics applications with quantum gases. The performance of this system in a direct imaging configuration has wider application for optical trapping at non-trivial NAs.
    Comment: 9 page
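    The time-averaged potential idea can be sketched directly: a binary DMD renders grayscale by switching each mirror on for a fraction of a fast frame sequence, with on-frames spread evenly in time so the modulation stays far above the trap's response. The scheduling rule and frame count below are illustrative assumptions, not the authors' control code.

```python
# Hypothetical sketch: time-averaged grayscale levels on a binary DMD.
import numpy as np

def grayscale_to_frames(target, n_frames=64):
    """target: (H, W) array in [0, 1] of desired time-averaged intensity.
    Returns (n_frames, H, W) boolean mirror states whose temporal mean
    approximates the target, with each pixel's on-frames spread evenly
    (Bresenham-style) across the sequence."""
    on_counts = np.round(target * n_frames).astype(int)
    frames = np.zeros((n_frames,) + target.shape, dtype=bool)
    for k in range(n_frames):
        # pixel is on in frame k iff k falls on its evenly spaced schedule;
        # this yields exactly on_counts[h, w] on-frames per pixel
        frames[k] = (k * on_counts) % n_frames < on_counts
    return frames
```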

    Learning Wavefront Coding for Extended Depth of Field Imaging

    Depth of field is an important property of imaging systems that strongly affects the quality of the acquired spatial information. Extended depth of field (EDoF) imaging is a challenging ill-posed problem that has been extensively addressed in the literature. We propose a computational imaging approach for EDoF in which we employ wavefront coding via a diffractive optical element (DOE) and achieve deblurring through a convolutional neural network. Thanks to the end-to-end differentiable modeling of optical image formation and computational post-processing, we jointly optimize the optical design, i.e., the DOE, and the deblurring through standard gradient descent methods. Based on the properties of the underlying refractive lens and the desired EDoF range, we provide an analytical expression for the search space of the DOE, which is instrumental in the convergence of the end-to-end network. We achieve superior EDoF imaging performance compared to the state of the art, demonstrating results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
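    A minimal sketch of such end-to-end optimization in PyTorch: a learnable DOE height map yields a PSF through a single-wavelength Fraunhofer model, images are blurred by that PSF, a small CNN deblurs them, and gradient descent updates optics and network jointly. The propagation model, refractive-index contrast, and tiny network are simplified stand-ins for the paper's differentiable pipeline, and all names are illustrative.

```python
# Hypothetical sketch: jointly optimizing a DOE height map and a deblurring CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

N = 65                                     # pupil/PSF grid size (odd, for 'same' conv)
wavelength = 550e-9                        # single design wavelength (simplification)
height = nn.Parameter(torch.zeros(N, N))   # learnable DOE height map
deblur = nn.Sequential(                    # tiny stand-in for the paper's CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def psf_from_height(h, dn=0.5):
    """Fraunhofer approximation: PSF = |FFT(exp(i*phase))|^2, normalized.
    dn is an assumed refractive-index contrast of the DOE material."""
    phase = 2 * torch.pi * dn * h / wavelength
    pupil = torch.exp(torch.complex(torch.zeros_like(phase), phase))
    psf = torch.fft.fftshift(torch.fft.fft2(pupil)).abs() ** 2
    return psf / psf.sum()

opt = torch.optim.Adam([height] + list(deblur.parameters()), lr=1e-3)
dataset = [torch.rand(1, 1, 128, 128) for _ in range(8)]  # stand-in sharp images
for img in dataset:
    kernel = psf_from_height(height)[None, None]          # (1, 1, N, N)
    blurred = F.conv2d(img, kernel, padding=N // 2)       # differentiable image formation
    loss = F.mse_loss(deblur(blurred), img)               # joint optics + CNN objective
    opt.zero_grad(); loss.backward(); opt.step()
```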