
    Multi-channel coded-aperture photography

    Get PDF
    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 87-89). This thesis describes multi-channel coded-aperture photography, a modified camera system that can extract an all-focus image of the scene along with a depth estimate over the scene. The modification consists of inserting a set of patterned color filters into the aperture of the camera lens. This work generalizes previous research on the single-channel coded aperture by deploying distinct filters in the three primary color channels, in order to cope better with the effect of the Bayer filter and to exploit the correlation among the channels. We derive the model and algorithms for the multi-channel coded aperture, comparing the simulated performance of the reconstruction algorithm against that of the original single-channel coded aperture. We also demonstrate a physical prototype, discussing the challenges arising from the use of multiple filters. We compare its performance with that of the single-channel coded aperture and present results on several scenes of cluttered objects at various depths. by Jongmin Baek. M.Eng.
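    As a rough illustration of the depth-from-defocus idea behind coded apertures, the hedged NumPy sketch below scores candidate depths by deconvolving each color channel with that depth's per-channel PSF and measuring the re-blur residual. The PSF inputs and all function names are hypothetical placeholders, not the thesis's actual algorithm.

    # Illustrative sketch of depth-from-coded-aperture, assuming per-channel
    # point-spread functions are known for each candidate depth and color
    # channel (hypothetical inputs; not the thesis's actual code).
    import numpy as np
    from numpy.fft import fft2, ifft2

    def wiener_deconv(blurred, psf, snr=1e-2):
        """Wiener deconvolution of one channel with a given PSF."""
        H = fft2(psf, s=blurred.shape)
        G = np.conj(H) / (np.abs(H) ** 2 + snr)
        return np.real(ifft2(G * fft2(blurred)))

    def depth_score(image, psfs_at_depth):
        """Residual when re-blurring the deconvolved channels; lower is better."""
        score = 0.0
        for c in range(3):  # distinct filters per R, G, B channel
            sharp = wiener_deconv(image[..., c], psfs_at_depth[c])
            reblur = np.real(ifft2(fft2(psfs_at_depth[c], s=sharp.shape) * fft2(sharp)))
            score += np.sum((reblur - image[..., c]) ** 2)
        return score

    def estimate_depth(image, psfs_by_depth):
        """Pick the depth whose per-channel PSFs best explain the observation."""
        scores = [depth_score(image, psfs) for psfs in psfs_by_depth]
        return int(np.argmin(scores))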

    Light Field Blind Motion Deblurring

    Full text link
    We study the problem of deblurring light fields of general 3D scenes captured under 3D camera motion and present both theoretical and practical contributions. By analyzing the motion-blurred light field in the primal and Fourier domains, we develop intuition into the effects of camera motion on the light field, show the advantages of capturing a 4D light field instead of a conventional 2D image for motion deblurring, and derive simple methods of motion deblurring in certain cases. We then present an algorithm to blindly deblur light fields of general scenes without any estimation of scene geometry, and demonstrate that we can recover both the sharp light field and the 3D camera motion path of real and synthetically blurred light fields. Comment: To be presented at CVPR 2017.
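    The forward model can be made concrete with a toy example: a motion-blurred light field as the average of the sharp light field re-parametrized along the camera path. The sketch below restricts motion to in-plane translation approximated by integer shifts of the angular axes, an illustrative assumption of ours, not the paper's full 3D model.

    # Toy forward model: blurred light field as the mean over camera-path
    # samples, each sample approximated by an integer angular shift.
    import numpy as np

    def blur_light_field(L, path):
        """L: 4D array (u, v, y, x); path: list of (du, dv) camera offsets."""
        acc = np.zeros_like(L, dtype=np.float64)
        for du, dv in path:
            acc += np.roll(L, shift=(du, dv), axis=(0, 1))
        return acc / len(path)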

    Depth Estimation Through a Generative Model of Light Field Synthesis

    Full text link
    Light field photography captures rich structural information that may facilitate a number of traditional image processing and computer vision tasks. A crucial ingredient in such endeavors is accurate depth recovery. We present a novel framework that allows the recovery of a high-quality continuous depth map from light field data. To this end, we propose a generative model of a light field that is fully parametrized by its corresponding depth map. The model allows for the integration of powerful regularization techniques such as a non-local means prior, facilitating accurate depth map estimation. Comment: German Conference on Pattern Recognition (GCPR) 2016.
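    The generative idea, that a light field is fully determined by a central view plus its depth map, can be sketched as a simple backward warp of the central view to a neighboring angular position. The names and sign conventions below are ours, not the paper's.

    # Hedged sketch: render the view at angular offset (du, dv) from a
    # central view and a per-pixel disparity map by backward warping.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def render_view(center, disparity, du, dv):
        """center: (H, W) image; disparity: (H, W) map; (du, dv): view offset."""
        H, W = center.shape
        yy, xx = np.mgrid[0:H, 0:W].astype(np.float64)
        # Pixels shift in proportion to disparity times the angular baseline.
        coords = [yy + dv * disparity, xx + du * disparity]
        return map_coordinates(center, coords, order=1, mode='nearest')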

    Pixel level optical-transfer-function design based on the surface-wave-interferometry aperture

    Get PDF
    The design of the optical transfer function (OTF) is of significant importance for optical information processing in various imaging and vision systems. Typically, OTF design relies on sophisticated bulk optical arrangements in the light path of the optical system. In this letter, we demonstrate a surface-wave-interferometry aperture (SWIA) that can be directly incorporated onto optical sensors to accomplish OTF design on the pixel level. The whole aperture design is based on the bull’s eye structure. It consists of a central hole (300 nm diameter) and a periodic groove (560 nm period) in a 340-nm-thick gold layer. We show, with both simulation and experiment, that different types of optical transfer functions (notch, high-pass, and low-pass filters) can be achieved by manipulating the interference between the direct transmission of the central hole and the surface wave (SW) component induced by the periodic groove. Pixel-level OTF design provides a low-cost, ultra-robust, highly compact method for numerous applications such as optofluidic microscopy, wavefront detection, darkfield imaging, and computational photography.
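    A minimal toy model of the two-path interference: the direct transmission through the central hole interferes with a surface-wave component that accumulates an angle-dependent phase over the groove-to-hole distance. All amplitudes, distances, and phases below are illustrative assumptions, not the fabricated device's measured parameters.

    # Toy two-path model of the SWIA pixel response vs. incidence angle.
    import numpy as np

    def angular_response(theta, wavelength=700e-9, r=1.4e-6, a_sw=0.8, phi0=np.pi):
        """|direct + surface-wave|^2 vs. incidence angle theta (radians)."""
        k0 = 2 * np.pi / wavelength
        # SW picks up an extra, angle-dependent phase over the distance r.
        phase = k0 * np.sin(theta) * r + phi0
        return np.abs(1.0 + a_sw * np.exp(1j * phase)) ** 2

    theta = np.linspace(-0.3, 0.3, 201)
    # phi0 = pi gives a notch at normal incidence (high-pass-like response);
    # phi0 = 0 gives a peak instead (low-pass-like response).
    notch = angular_response(theta, phi0=np.pi)
    peak = angular_response(theta, phi0=0.0)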

    Aperture Supervision for Monocular Depth Estimation

    Full text link
    We present a novel method to train machine learning algorithms to estimate scene depths from a single image, by using the information provided by a camera's aperture as supervision. Prior works use a depth sensor's outputs or images of the same scene from alternate viewpoints as supervision, while our method instead uses images from the same viewpoint taken with a varying camera aperture. To enable learning algorithms to use aperture effects as supervision, we introduce two differentiable aperture rendering functions that use the input image and predicted depths to simulate the depth-of-field effects caused by real camera apertures. We train a monocular depth estimation network end-to-end to predict the scene depths that best explain these finite-aperture images as defocus-blurred renderings of the input all-in-focus image. Comment: To appear at CVPR 2018 (updated to camera-ready version).
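    A simplified layer-wise defocus renderer in the spirit of the paper's differentiable aperture rendering functions: each depth layer of the all-in-focus image is blurred with a disc whose radius grows with defocus, then the layers are composited. This NumPy sketch shows the forward pass only, with names and layering of our choosing; the paper trains through such a renderer with automatic differentiation.

    # Layered depth-of-field rendering from an all-in-focus image and depth map.
    import numpy as np
    from scipy.signal import fftconvolve

    def disc_kernel(radius):
        r = max(int(np.ceil(radius)), 1)
        yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
        k = (yy ** 2 + xx ** 2 <= radius ** 2).astype(np.float64)
        return k / k.sum()

    def render_defocus(image, depth, focus_depth, aperture, n_layers=8):
        """image: (H, W); depth: (H, W) in the same units as focus_depth."""
        image = image.astype(np.float64)
        out = np.zeros(image.shape)
        weight = np.zeros(image.shape)
        edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = ((depth >= lo) & (depth <= hi)).astype(np.float64)
            mid = 0.5 * (lo + hi)
            # Blur radius grows with distance from the focal plane.
            k = disc_kernel(aperture * abs(mid - focus_depth) + 1e-3)
            out += fftconvolve(image * mask, k, mode='same')
            weight += fftconvolve(mask, k, mode='same')
        return out / np.maximum(weight, 1e-6)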

    Learning Wavefront Coding for Extended Depth of Field Imaging

    Get PDF
    Depth of field is an important property of imaging systems that strongly affects the quality of the acquired spatial information. Extended depth of field (EDoF) imaging is a challenging ill-posed problem and has been extensively addressed in the literature. We propose a computational imaging approach for EDoF, where we employ wavefront coding via a diffractive optical element (DOE) and achieve deblurring through a convolutional neural network. Thanks to the end-to-end differentiable modeling of optical image formation and computational post-processing, we jointly optimize the optical design, i.e., the DOE, and the deblurring through standard gradient descent methods. Based on the properties of the underlying refractive lens and the desired EDoF range, we provide an analytical expression for the search space of the DOE, which is instrumental in the convergence of the end-to-end network. We achieve superior EDoF imaging performance compared to the state of the art, and demonstrate results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
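    The core of such end-to-end pipelines is a differentiable image-formation model; under scalar Fourier-optics assumptions, the PSF is the squared magnitude of the Fourier transform of the pupil function, whose phase the DOE contributes. The NumPy sketch below shows that forward step only and is not the authors' code; an end-to-end pipeline would express the same steps in an autodiff framework and backpropagate into the DOE phase.

    # PSF of a pupil with a DOE phase profile, via scalar Fourier optics.
    import numpy as np

    def psf_from_pupil(doe_phase, aperture_mask):
        """doe_phase, aperture_mask: (N, N) arrays describing the pupil plane."""
        pupil = aperture_mask * np.exp(1j * doe_phase)
        field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
        psf = np.abs(field) ** 2
        return psf / psf.sum()

    N = 256
    yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
    mask = ((xx ** 2 + yy ** 2) <= (N // 3) ** 2).astype(np.float64)
    psf = psf_from_pupil(np.zeros((N, N)), mask)  # flat phase: diffraction-limited spot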

    Variational Disparity Estimation Framework for Plenoptic Image

    Full text link
    This paper presents a computational framework for accurately estimating the disparity map of plenoptic images. The proposed framework is based on the variational principle and provides intrinsic sub-pixel precision. The light-field motion tensor introduced in the framework allows us to combine advanced robust data terms and provides explicit treatment of the different color channels. A warping strategy is embedded in our framework for tackling the large-displacement problem. We also show that by applying a simple regularization term and guided median filtering, the accuracy of the displacement field in occluded areas can be greatly enhanced. We demonstrate the excellent performance of the proposed framework through extensive comparisons with the Lytro software and contemporary approaches on both synthetic and real-world datasets.
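    A simplified relative of the paper's light-field motion tensor is the structure tensor of an epipolar-plane image (EPI), whose dominant orientation encodes disparity. The sketch below shows only this local slope estimate, up to sign and axis conventions of our choosing; the paper's variational framework adds robust data terms, regularization, and warping on top of such a data term.

    # Local disparity from the structure tensor of an epipolar-plane image.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def epi_disparity(epi, sigma=2.0):
        """epi: 2D array with angular axis 0 and spatial axis 1."""
        gu, gx = np.gradient(epi.astype(np.float64))
        Juu = gaussian_filter(gu * gu, sigma)
        Jxx = gaussian_filter(gx * gx, sigma)
        Jux = gaussian_filter(gu * gx, sigma)
        # Orientation of the dominant EPI line; its slope gives disparity.
        angle = 0.5 * np.arctan2(2 * Jux, Jxx - Juu)
        return np.tan(angle)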

    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    Full text link
    A variety of techniques such as light field, structured illumination, and time-of-flight (TOF) are commonly used for depth acquisition in consumer imaging, robotics, and many other applications. Unfortunately, each technique suffers from its individual limitations, preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages such as synthetic aperture refocusing with TOF imaging advantages such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single-frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function. Comment: 9 pages, 8 figures, Accepted to 3DV 2015.
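    One operation the abstract names, synthetic aperture refocusing, carries over directly to a (u, v) grid of TOF measurements. The shift-and-add sketch below uses an integer-shift approximation and an array layout of our choosing; it is illustrative, not the authors' implementation.

    # Shift-and-add synthetic aperture refocusing over a grid of views;
    # views may be ordinary sub-aperture images or TOF phase/amplitude maps.
    import numpy as np

    def refocus(views, alpha):
        """views: (U, V, H, W) stack; alpha: refocus slope in pixels per view."""
        U, V, H, W = views.shape
        acc = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                du = int(round(alpha * (u - U // 2)))
                dv = int(round(alpha * (v - V // 2)))
                acc += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
        return acc / (U * V)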