
    Learning Wavefront Coding for Extended Depth of Field Imaging

    Depth of field is an important property of imaging systems that strongly affects the quality of the acquired spatial information. Extended depth of field (EDoF) imaging is a challenging ill-posed problem and has been extensively addressed in the literature. We propose a computational imaging approach for EDoF, where we employ wavefront coding via a diffractive optical element (DOE) and achieve deblurring through a convolutional neural network. Thanks to the end-to-end differentiable modeling of optical image formation and computational post-processing, we jointly optimize the optical design, i.e., the DOE, and the deblurring through standard gradient descent methods. Based on the properties of the underlying refractive lens and the desired EDoF range, we provide an analytical expression for the search space of the DOE, which is instrumental in the convergence of the end-to-end network. We achieve superior EDoF imaging performance compared to the state of the art, and demonstrate results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
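The paper's key idea is that the optical forward model (DOE phase profile → point spread function → sensor image) is differentiable, so the DOE can be optimized jointly with the deblurring network. The abstract gives no code, so the following is only a minimal sketch of the differentiable forward model under a simplified Fraunhofer assumption; the function names (`doe_psf`, `capture`) and the plain FFT pupil model are illustrative stand-ins, not the paper's actual formulation:

```python
import numpy as np

def doe_psf(phase, aperture):
    """PSF from a DOE phase profile under a simplified Fraunhofer model:
    the PSF is the squared magnitude of the Fourier transform of the
    pupil function (hypothetical, simplified; no wavelength/defocus terms)."""
    pupil = aperture * np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(pupil))
    psf = np.abs(field) ** 2
    return psf / psf.sum()  # normalize to unit energy

def capture(img, psf):
    """Simulated sensor image: circular convolution of the scene with the
    PSF via FFT (noise and sensor effects omitted)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) *
                                np.fft.fft2(np.fft.ifftshift(psf))))
```

In the end-to-end setting, `phase` would be a trainable parameter and `capture`'s output would feed a deblurring CNN, with gradients flowing back through both stages.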

    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    A variety of techniques such as light field, structured illumination, and time-of-flight (TOF) are commonly used for depth acquisition in consumer imaging, robotics and many other applications. Unfortunately, each technique suffers from its individual limitations preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages such as synthetic aperture refocusing with TOF imaging advantages such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function.
    Comment: 9 pages, 8 figures, Accepted to 3DV 201
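The single-frequency TOF phase unwrapping mentioned above matters because a continuous-wave TOF camera measures depth only through a phase shift, which wraps every half wavelength of the modulation signal. The paper's coded solution is not reproduced here; the snippet below only sketches the basic phase-to-depth relation and the resulting ambiguity interval that unwrapping must resolve (function names are illustrative):

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def tof_depth(phase, f_mod):
    """Depth from the measured phase shift of a continuous-wave TOF signal.
    Valid only up to the unambiguous range; beyond it the phase wraps."""
    return C * phase / (4.0 * np.pi * f_mod)

def unambiguous_range(f_mod):
    """Maximum depth before the phase wraps: c / (2 * f_mod)."""
    return C / (2.0 * f_mod)
```

For example, at a 50 MHz modulation frequency the unambiguous range is 3 m, which is why unwrapping (here aided by the light-field angular information) is needed for larger scenes.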

    Optimal Depth Estimation and Extended Depth of Field from Single Images by Computational Imaging using Chromatic Aberrations

    The thesis presents a thorough analysis of a computational imaging approach to optimal depth estimation and extended depth of field from a single image using axial chromatic aberrations. To assist the camera design process, a digital camera simulator is developed which can efficiently simulate different kinds of lenses for a 3D scene. The main contributions of the simulator are the fast implementation of space-variant filtering and the accurate simulation of optical blur at occlusion boundaries. The simulator also includes sensor modeling and digital post-processing to facilitate a co-design of optics and digital processing algorithms. To estimate depth from color images, which are defocused by different amounts due to axial chromatic aberrations, a low-cost algorithm is developed. Because contrast varies across colors, a blur measure independent of local contrast is proposed. The normalized ratios between the blur measures of the three colors (red, green and blue) are used to estimate depth over a larger distance range. An analysis of depth errors is performed, which shows the limitations of depth from chromatic aberrations, especially for narrowband object spectra. Since the blur, and hence the estimated depth, varies over the field, a simple calibration procedure is developed to correct this field-varying behavior. A prototype lens is designed with an optimal amount of axial chromatic aberration for a focal length of 4 mm and F-number 2.4. Real captured and synthetic images show depth measurement with a root-mean-square error of 10% over the distance range of 30 cm to 2 m. Taking advantage of the chromatic aberrations and the estimated depth, a method is proposed to extend the depth of field of the captured image. An imaging sensor with white (W) pixels in addition to red, green and blue (RGB) pixels, combined with a lens exhibiting axial chromatic aberrations, is used to overcome the limitations of previous methods.
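The depth cue described above, normalized ratios between per-channel blur measures, can be illustrated with a small sketch. The thesis's actual contrast-independent blur measure is not specified in the abstract, so the gradient-energy-over-variance proxy below, and the helper names `blur_measure` and `blur_ratios`, are hypothetical stand-ins that only demonstrate the ratio idea:

```python
import numpy as np

def blur_measure(channel, eps=1e-8):
    """Sharpness proxy for one color channel: gradient energy normalized
    by channel variance, so it is roughly independent of local contrast
    (hypothetical stand-in for the thesis's measure)."""
    gy, gx = np.gradient(channel.astype(float))
    energy = (gx ** 2 + gy ** 2).mean()
    return energy / (channel.var() + eps)

def blur_ratios(r, g, b):
    """Normalized ratios of the per-channel blur measures: the depth cue.
    The channel in best focus yields the largest ratio."""
    m = np.array([blur_measure(c) for c in (r, g, b)])
    return m / (m.sum() + 1e-8)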
The proposed method first restores the white image with a depth-invariant point spread function, and then transfers the sharpness information of the sharpest color or of the white image to the blurred colors. Due to the broadband color filter responses, the blur of each RGB color at its focus position is larger with chromatic aberrations than with a chromatic-aberration-corrected lens. Therefore, the restored white image helps obtain a sharper image at these positions, as well as for objects where the sharpest color information is missing. An efficient implementation of the proposed algorithm achieves better image quality with low computational complexity. Finally, the performance of the depth estimation and the extended depth of field is studied for different camera parameters. Criteria are defined to select optimal lens and sensor parameters to acquire the desired results with the proposed digital post-processing algorithms.
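The sharpness-transfer step can be sketched in its simplest form: take the high-frequency detail of the sharpest reference channel (here via a plain box low-pass split, a simplification of whatever band split the thesis uses) and add it to a blurred channel. Function names and the box-filter choice are illustrative assumptions:

```python
import numpy as np

def box_blur(x, k=3):
    """Simple k-by-k box low-pass with edge padding (stand-in for the
    actual low-/high-frequency band split)."""
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    out = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def transfer_sharpness(blurred_channel, sharp_ref, k=3):
    """Add the high-frequency detail of the sharp reference (sharpest
    color or restored white image) to a blurred color channel."""
    detail = sharp_ref - box_blur(sharp_ref, k)
    return blurred_channel + detail
```

In this toy model, if the blurred channel were exactly the box-blurred reference, the transfer would recover the reference; in practice the channels differ, so the transfer only injects edge detail rather than reconstructing the channel.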