282 research outputs found

    Toward Depth Estimation Using Mask-Based Lensless Cameras

    Full text link
    Recently, coded masks have been used to demonstrate a thin form-factor lensless camera, FlatCam, in which a mask is placed immediately on top of a bare image sensor. In this paper, we present an imaging model and algorithm to jointly estimate depth and intensity information in the scene from one or multiple FlatCams. We use a light field representation to model the mapping of a 3D scene onto the sensor, in which light rays from different depths yield different modulation patterns. We present a greedy depth pursuit algorithm that searches the 3D volume and estimates the depth and intensity of each pixel within the camera field of view. Finally, we present simulation results to analyze the performance of the proposed model and algorithm under different FlatCam settings.
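
    A minimal sketch of the greedy idea described above (not the authors' implementation): assuming per-depth system matrices A[d] that map scene pixels at candidate depth d onto the coded sensor measurement, the loop below repeatedly picks the (pixel, depth) pair whose response best explains the remaining residual. The function name, interface, and fixed iteration count are illustrative assumptions.

        import numpy as np

        def greedy_depth_pursuit(y, A, n_iters=50):
            # y: flattened sensor measurement; A: list of (m, n) matrices, one per candidate depth.
            n = A[0].shape[1]
            D = np.concatenate(A, axis=1)                    # stack per-depth dictionaries side by side
            D = D / (np.linalg.norm(D, axis=0) + 1e-12)      # unit-norm atoms
            residual = y.astype(float)
            intensity = np.zeros(n)
            depth_idx = -np.ones(n, dtype=int)
            for _ in range(n_iters):
                corr = D.T @ residual                        # correlate residual with every atom
                k = int(np.argmax(np.abs(corr)))             # best (depth, pixel) atom
                d, p = divmod(k, n)
                residual = residual - corr[k] * D[:, k]      # peel off its contribution
                intensity[p] += corr[k]
                depth_idx[p] = d
            return intensity, depth_idx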

    Reinterpretable Imager: Towards Variable Post-Capture Space, Angle and Time Resolution in Photography

    Get PDF
    We describe a novel multiplexing approach to achieve tradeoffs in space, angle and time resolution in photography. We explore the problem of mapping useful subsets of time-varying 4D lightfields into a single snapshot. Our design is based on a dynamic mask in the aperture and a static mask close to the sensor. The key idea is to exploit scene-specific redundancy along the spatial, angular and temporal dimensions and to provide a programmable or variable resolution tradeoff among them. This allows a user to reinterpret a single captured photo as either a high-spatial-resolution image, a refocusable image stack or a video for different parts of the scene in post-processing. A lightfield camera or a video camera forces an a priori choice of space-angle-time resolution. We demonstrate a single prototype that provides flexible post-capture abilities not possible with either a single-shot lightfield camera or a multi-frame video camera. We show several novel results, including digital refocusing on objects moving in depth and capturing multiple facial expressions in a single photo.
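
    The following is a simplified forward model, not the paper's exact optics, meant only to illustrate how a dynamic aperture mask and a static near-sensor mask multiplex space, angle and time into one snapshot. The array shapes and the purely multiplicative mask model are assumptions made for brevity.

        import numpy as np

        def coded_snapshot(L, aperture_masks, sensor_mask):
            # L: (T, U, X) light field samples over time, angle and sensor position.
            # aperture_masks: (T, U) dynamic aperture code; sensor_mask: (X,) static code.
            T, U, X = L.shape
            y = np.zeros(X)
            for t in range(T):
                # each sub-exposure weights the angular samples with a different aperture code
                y += (aperture_masks[t][:, None] * L[t]).sum(axis=0)
            return sensor_mask * y       # the static near-sensor mask imprints a spatial code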

    Optical sectioning microscopy through single-shot Lightfield protocol

    Get PDF
    Optical sectioning microscopy is usually performed by means of a scanning, multi-shot procedure in combination with non-uniform illumination. In this paper, we change the paradigm and report a method that is based on the light field concept and provides optical sectioning for 3D microscopy images after a single-shot capture. To do this, we first capture multiple orthographic perspectives of the sample by means of Fourier-domain integral microscopy (FiMic). The second stage of our protocol is the application of a novel refocusing algorithm that produces optical sectioning in real time, with no loss of resolution, in the case of sparse fluorescent samples. We provide the theoretical derivation of the algorithm and demonstrate its utility by applying it to simulations and to experimental data.
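
    As a rough illustration (under the assumption that FiMic delivers a regular grid of orthographic perspective views), the classic shift-and-sum refocus below shows how a single depth plane is synthesized from such views; it is a stand-in for, not a reproduction of, the paper's real-time sectioning algorithm.

        import numpy as np

        def refocus(views, shift_per_view):
            # views: (I, J, H, W) orthographic perspective images on a regular angular grid.
            # shift_per_view: pixels of shift per unit view index; its value selects the focal plane.
            I, J, H, W = views.shape
            out = np.zeros((H, W))
            ci, cj = (I - 1) / 2.0, (J - 1) / 2.0
            for i in range(I):
                for j in range(J):
                    dy = int(round((i - ci) * shift_per_view))
                    dx = int(round((j - cj) * shift_per_view))
                    out += np.roll(views[i, j], (dy, dx), axis=(0, 1))
            return out / (I * J)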

    Overcoming spatio-angular trade-off in light field acquisition using compressive sensing

    Get PDF
    In contrast to conventional cameras, which capture a 2D projection of a 3D scene by integrating over the angular domain, light field cameras preserve the angular information of individual light rays by capturing a 4D light field of a scene. On the one hand, light field photography enables powerful post-capture capabilities such as refocusing, virtual aperture, depth sensing and perspective shift. On the other hand, it has several drawbacks, namely the high dimensionality of the captured light fields and a fundamental trade-off between spatial and angular resolution in the camera design. In this paper, we propose a compressive sensing approach to light field acquisition from a sub-Nyquist number of samples. Using an off-the-shelf measurement setup consisting of a digital projector and a Lytro Illum light field camera, we demonstrate the efficiency of the compressive sensing approach by improving the spatial resolution of the acquired light field. This paper presents a proof of concept with a simplified 3D scene as the scene of interest. Results obtained by the proposed method show a significant improvement in the spatial resolution of the light field as well as preserved post-capture capabilities.
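
    For orientation, the projector-plus-Lytro setup can be summarized by the generic compressive model y = Phi x with x sparse in some basis; the iterative soft-thresholding routine below is one standard way to invert such a model and is offered as a hedged sketch, not as the reconstruction method used in the paper.

        import numpy as np

        def ista(y, Phi, lam=0.1, n_iters=200):
            # Recover a sparse x from sub-Nyquist measurements y ~ Phi @ x.
            L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the data-fit gradient
            x = np.zeros(Phi.shape[1])
            for _ in range(n_iters):
                z = x - Phi.T @ (Phi @ x - y) / L        # gradient step on ||Phi x - y||^2 / 2
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold (L1 prox)
            return x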

    Light Transport Refocusing for Unknown Scattering Medium

    Get PDF
    2014 22nd International Conference on Pattern Recognition, Stockholm, Sweden, 24-28 Aug. 2014.
    In this paper we propose a new light transport refocusing method for depth estimation as well as for investigation inside scattering media with unknown scattering properties. Visible light rays propagated through the scattering medium are utilized in our refocusing method. We use a 2D light source to illuminate the scattering medium and a 2D image sensor to capture the transported rays. The proposed method, which uses the 4D light transport, can clearly visualize both shallow and deep depth planes of the medium. We apply our light transport refocusing method to depth estimation using a conventional depth-from-focus method and to clear visualization by descattering the light rays passing through the medium. To evaluate its effectiveness, we conducted experiments using acrylic and milk-water scattering media under various optical and geometrical conditions. Finally, we present the results of depth estimation and clear visualization, together with a numerical evaluation.
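
    A much-simplified sketch of the refocusing idea, reducing the 4D transport to 1D-source by 1D-sensor slices (my simplification, not the paper's formulation): summing the transport entries whose source-to-sensor rays cross a chosen plane synthesizes focus at that depth.

        import numpy as np

        def transport_refocus(T, alpha):
            # T[s, x]: intensity at sensor pixel x when only source pixel s is lit.
            # alpha in [0, 1] picks the plane between source (alpha = 0) and sensor (alpha = 1).
            S, X = T.shape
            out = np.zeros(max(S, X))
            hits = np.zeros(max(S, X))
            for s in range(S):
                for x in range(X):
                    p = int(round((1 - alpha) * s + alpha * x))   # where ray s -> x crosses the plane
                    out[p] += T[s, x]
                    hits[p] += 1
            return out / np.maximum(hits, 1)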

    Baseline and triangulation geometry in a standard plenoptic camera

    Get PDF
    In this paper, we demonstrate light field triangulation to determine depth distances and baselines in a plenoptic camera. The advancement of micro lenses and image sensors has enabled plenoptic cameras to capture a scene from different viewpoints with sufficient spatial resolution. While object distances can be inferred from disparities in a stereo viewpoint pair using triangulation, this concept remains ambiguous when applied to plenoptic cameras. We present a geometrical light field model that allows triangulation to be applied to a plenoptic camera in order to predict object distances or to specify baselines as desired. It is shown that distance estimates from our novel method match those of real objects placed in front of the camera. Additional benchmark tests with an optical design software further validate the model's accuracy, with deviations of less than 0.33% for several main lens types and focus settings. A variety of applications in the automotive and robotics fields can benefit from this estimation model.
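
    For reference, the underlying relation the paper adapts is the standard pinhole-stereo triangulation formula; the paper's contribution lies in deriving the effective baseline and focal length for viewpoint pairs inside a plenoptic camera, which the toy function below simply takes as given.

        def depth_from_disparity(baseline_m, focal_length_px, disparity_px):
            # Classic pinhole-stereo relation: Z = b * f / d.
            # baseline_m: distance between the two (virtual) viewpoints in metres;
            # focal_length_px and disparity_px are both expressed in pixels, so Z is in metres.
            return baseline_m * focal_length_px / disparity_px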

    Fast Disparity Estimation from a Single Compressed Light Field Measurement

    Full text link
    The abundant spatial and angular information in light fields has allowed the development of multiple disparity estimation approaches. However, the acquisition of light fields requires high storage and processing costs, limiting the use of this technology in practical applications. To overcome these drawbacks, compressive sensing (CS) theory has allowed the development of optical architectures that acquire a single coded light field measurement. This measurement is decoded using an optimization algorithm or a deep neural network, which requires a high computational cost. The traditional approach to disparity estimation from compressed light fields requires first recovering the entire light field and then a post-processing step, thus requiring long processing times. In contrast, this work proposes fast disparity estimation from a single compressed measurement by omitting the recovery step required in traditional approaches. Specifically, we propose to jointly optimize an optical architecture for acquiring a single coded light field snapshot and a convolutional neural network (CNN) for estimating the disparity maps. Experimentally, the proposed method estimates disparity maps comparable with those obtained from light fields reconstructed using deep learning approaches. Furthermore, the proposed method is 20 times faster in training and inference than the best method that estimates disparity from reconstructed light fields.
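
    A schematic sketch of the joint design, with hypothetical layer sizes and a sigmoid-relaxed code rather than the authors' architecture: a trainable per-view weight collapses the angular dimension into a single coded snapshot, and a small CNN regresses disparity directly from that snapshot, so no light field reconstruction step is needed.

        import torch
        import torch.nn as nn

        class CodedDisparityNet(nn.Module):
            def __init__(self, n_views=25):
                super().__init__()
                # trainable code, one weight per angular view; sigmoid keeps it in [0, 1]
                self.code = nn.Parameter(torch.randn(n_views))
                self.cnn = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1),          # per-pixel disparity
                )

            def forward(self, light_field):
                # light_field: (B, n_views, H, W); the coded snapshot sums code-weighted views
                w = torch.sigmoid(self.code).view(1, -1, 1, 1)
                snapshot = (w * light_field).sum(dim=1, keepdim=True)
                return self.cnn(snapshot)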

    Widening Viewing Angles of Automultiscopic Displays using Refractive Inserts

    Get PDF

    BiDi screen: a thin, depth-sensing LCD for 3D interaction using light fields

    Get PDF
    We transform an LCD into a display that supports both 2D multi-touch and unencumbered 3D gestures. Our BiDirectional (BiDi) screen, capable of both image capture and display, is inspired by emerging LCDs that use embedded optical sensors to detect multiple points of contact. Our key contribution is to exploit the spatial light modulation capability of LCDs to allow lensless imaging without interfering with display functionality. We switch between a display mode showing traditional graphics and a capture mode in which the backlight is disabled and the LCD displays a pinhole array or an equivalent tiled-broadband code. A large-format image sensor is placed slightly behind the liquid crystal layer. Together, the image sensor and LCD form a mask-based light field camera, capturing an array of images equivalent to that produced by a camera array spanning the display surface. The recovered multi-view orthographic imagery is used to passively estimate the depth of scene points. Two motivating applications are described: a hybrid touch-plus-gesture interaction and a light-gun mode for interacting with external light-emitting widgets. We show a working prototype that simulates the image sensor with a camera and diffuser, allowing interaction up to 50 cm in front of a modified 20.1 inch LCD. National Science Foundation (U.S.) (Grant CCF-0729126); Alfred P. Sloan Foundation.
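
    A hedged sketch of the capture geometry only: behind a pinhole array, the sensor tile under each pinhole samples the local angular fan, so regrouping pixels by their offset within each tile yields the multi-view orthographic images mentioned above. Tile size and alignment are idealized here, ignoring the prototype's diffuser-and-camera simulation and calibration.

        import numpy as np

        def tiles_to_orthographic_views(sensor, tile):
            # sensor: (I*tile, J*tile) raw image captured behind a pinhole array, one tile per pinhole.
            # Returns views of shape (tile, tile, I, J): views[u, v] is the orthographic image
            # of the scene seen along direction (u, v).
            I, J = sensor.shape[0] // tile, sensor.shape[1] // tile
            tiles = sensor[:I * tile, :J * tile].reshape(I, tile, J, tile)
            return tiles.transpose(1, 3, 0, 2)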