
    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    A variety of techniques, such as light field, structured illumination, and time-of-flight (TOF), are commonly used for depth acquisition in consumer imaging, robotics, and many other applications. Unfortunately, each technique suffers from individual limitations that prevent robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages, such as synthetic aperture refocusing, with TOF imaging advantages, such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single-frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function.
    Comment: 9 pages, 8 figures, accepted to 3DV 201
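The single-frequency phase-unwrapping problem this abstract mentions can be illustrated with a short sketch: a single-frequency TOF sensor measures depth only modulo an unambiguous range, so targets beyond that range alias back into it. This is a minimal illustration; the 50 MHz modulation frequency is a hypothetical value, not a parameter from the paper.

```python
import numpy as np

C = 3e8          # speed of light, m/s
F_MOD = 50e6     # assumed modulation frequency, Hz (hypothetical value)

def tof_depth_from_phase(phase):
    """Convert a measured TOF phase (radians, wrapped to [0, 2*pi)) to depth."""
    return C * phase / (4 * np.pi * F_MOD)

def wrapped_phase(true_depth):
    """Phase a single-frequency TOF sensor would actually report."""
    return (4 * np.pi * F_MOD * true_depth / C) % (2 * np.pi)

# Unambiguous range for a single frequency: c / (2 * f_mod) = 3 m here.
ambiguity_range = C / (2 * F_MOD)

# A target beyond the ambiguity range aliases back into it:
true_depth = 4.0                                          # metres
measured = tof_depth_from_phase(wrapped_phase(true_depth))
# measured == true_depth - ambiguity_range, i.e. 1.0 m
```

Resolving which multiple of the ambiguity range to add back is the phase-unwrapping step that the paper's spatio-angular coding assists.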

    Correlation plenoptic imaging

    Plenoptic imaging is a promising optical modality that simultaneously captures the location and the propagation direction of light in order to enable three-dimensional imaging in a single shot. However, in classical imaging systems, the maximum spatial and angular resolutions are fundamentally linked; as a result, the maximum achievable depth of field is inversely proportional to the spatial resolution. We propose to take advantage of the second-order correlation properties of light to overcome this fundamental limitation. In this paper, we demonstrate that the momentum/position correlation of chaotic light leads to the enhanced refocusing power of correlation plenoptic imaging with respect to standard plenoptic imaging.
    Comment: 6 pages, 3 figures
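The second-order correlation at the heart of this approach can be sketched numerically. The snippet below estimates the normalized intensity correlation g2 = ⟨Ia·Ib⟩ / (⟨Ia⟩⟨Ib⟩) from synthetic, exponentially distributed intensities standing in for chaotic (thermal) light; the sample count and distribution are illustrative assumptions, not the paper's experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def g2(i_a, i_b):
    """Normalized second-order intensity correlation <Ia*Ib> / (<Ia><Ib>)."""
    return np.mean(i_a * i_b) / (np.mean(i_a) * np.mean(i_b))

# Synthetic stand-in for chaotic light: exponentially distributed speckle
# intensities. Identical speckle on both arms -> correlated; an independent
# realization on each arm -> uncorrelated.
n = 200_000
speckle = rng.exponential(1.0, n)

g2_corr = g2(speckle, speckle)                  # same realization on both detectors
g2_uncorr = g2(speckle, rng.exponential(1.0, n))

# For thermal light, identical speckle gives g2 -> 2 (the Hanbury Brown-Twiss
# bunching peak), while independent arms give g2 -> 1. The excess correlation
# is the resource that correlation plenoptic imaging exploits.
```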

    A switchable light field camera architecture with Angle Sensitive Pixels and dictionary-based sparse coding

    We propose a flexible light field camera architecture that is at the convergence of optics, sensor electronics, and applied mathematics. Through the co-design of a sensor that comprises tailored Angle Sensitive Pixels and advanced reconstruction algorithms, we show that, contrary to light field cameras today, our system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear processing, or a high-resolution light field using sparsity-constrained optimization.
    Funding: National Science Foundation (U.S.) (NSF Grant IIS-1218411); National Science Foundation (U.S.) (NSF Grant IIS-1116452); MIT Media Lab Consortium; National Science Foundation (U.S.) (NSF Graduate Research Fellowship); Natural Sciences and Engineering Research Council of Canada (NSERC Postdoctoral Fellowship); Alfred P. Sloan Foundation (Research Fellowship); United States. Defense Advanced Research Projects Agency (DARPA Young Faculty Award)
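The sparsity-constrained recovery step can be sketched with a generic iterative shrinkage-thresholding (ISTA) solver for the l1-regularized least-squares problem. The random dictionary, problem sizes, and regularization weight below are hypothetical stand-ins, not the paper's trained dictionary or actual solver.

```python
import numpy as np

def ista(y, D, lam=0.01, n_iter=2000):
    """ISTA for min_a 0.5 * ||y - D a||^2 + lam * ||a||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)                  # gradient of the quadratic term
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(1)
n_meas, n_atoms = 64, 128                         # hypothetical sizes (fewer measurements than atoms)
D = rng.standard_normal((n_meas, n_atoms))
D /= np.linalg.norm(D, axis=0)                    # unit-norm dictionary atoms

a_true = np.zeros(n_atoms)
a_true[[3, 40, 99]] = [1.0, -0.7, 0.5]            # sparse ground-truth code
y = D @ a_true                                    # underdetermined measurements

a_hat = ista(y, D)
# The recovered code is close to the sparse ground truth despite n_meas < n_atoms.
```

In the paper's setting the dictionary would be learned from light field patches and the measurements come from the Angle Sensitive Pixel responses; the l1 recovery principle is the same.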

    Light field super resolution through controlled micro-shifts of light field sensor

    Light field cameras enable new capabilities, such as post-capture refocusing and aperture control, by capturing the directional and spatial distribution of light rays in space. Micro-lens-array-based light field camera designs are often preferred for their light transmission efficiency, cost-effectiveness, and compactness. One drawback of micro-lens-array-based light field cameras is low spatial resolution, because a single sensor is shared to capture both spatial and angular information. To address the low spatial resolution issue, we present a light field imaging approach in which multiple light fields are captured and fused to improve the spatial resolution. For each capture, the light field sensor is shifted by a pre-determined fraction of a micro-lens size using an XY translation stage for optimal performance.
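The capture-and-fuse idea can be sketched for the simplest case: four captures shifted by half a lenslet pitch in x and y interleave into a grid with twice the resolution in each dimension. This is an idealized point-sampling model, not the paper's full super-resolution pipeline.

```python
import numpy as np

def fuse_micro_shifts(captures):
    """Interleave four half-lenslet-shifted captures into a 2x finer grid.

    `captures` maps (dy, dx) shifts in {0, 1} (in units of half a lenslet
    pitch) to low-resolution images of identical shape.
    """
    h, w = captures[(0, 0)].shape
    fused = np.empty((2 * h, 2 * w), dtype=captures[(0, 0)].dtype)
    for (dy, dx), img in captures.items():
        fused[dy::2, dx::2] = img                 # each capture fills one phase of the fine grid
    return fused

# Simulate: sample a fine scene at four half-pixel offsets, then fuse.
fine = np.arange(8 * 8, dtype=float).reshape(8, 8)    # stand-in high-resolution scene
caps = {(dy, dx): fine[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}
recovered = fuse_micro_shifts(caps)
# With ideal point sampling the fused image reproduces the fine grid exactly;
# real lenslet captures also need deblurring, which this sketch omits.
```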

    Correlation Plenoptic Imaging With Entangled Photons

    Plenoptic imaging is a novel optical technique for three-dimensional imaging in a single shot. It is enabled by the simultaneous measurement of both the location and the propagation direction of light in a given scene. In the standard approach, the maximum spatial and angular resolutions are inversely proportional, and so are the resolution and the maximum achievable depth of focus of the 3D image. We have recently proposed a method to overcome these fundamental limits by combining plenoptic imaging with an intriguing correlation remote-imaging technique: ghost imaging. Here, we theoretically demonstrate that correlation plenoptic imaging can be effectively achieved by exploiting the position-momentum entanglement characterizing spontaneous parametric down-conversion (SPDC) photon pairs. As a proof-of-principle demonstration, we show that correlation plenoptic imaging with entangled photons may enable the refocusing of an out-of-focus image at the same depth of focus of a standard plenoptic device, but without sacrificing diffraction-limited image resolution.
    Comment: 12 pages, 5 figures

    Overcoming spatio-angular trade-off in light field acquisition using compressive sensing

    In contrast to conventional cameras, which capture a 2D projection of a 3D scene by integrating over the angular domain, light field cameras preserve the angular information of individual light rays by capturing a 4D light field of a scene. On the one hand, light field photography enables powerful post-capture capabilities such as refocusing, virtual aperture, depth sensing, and perspective shift. On the other hand, it has several drawbacks, namely the high dimensionality of the captured light fields and a fundamental trade-off between spatial and angular resolution in the camera design. In this paper, we propose a compressive sensing approach to light field acquisition from a sub-Nyquist number of samples. Using an off-the-shelf measurement setup consisting of a digital projector and a Lytro Illum light field camera, we demonstrate the efficiency of the compressive sensing approach by improving the spatial resolution of the acquired light field. This paper presents a proof of concept with a simplified 3D scene as the scene of interest. Results obtained by the proposed method show significant improvement in the spatial resolution of the light field as well as preserved post-capture capabilities.
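The post-capture refocusing capability mentioned in several of these abstracts is classically implemented by shift-and-sum over the 4D light field: each sub-aperture view is translated in proportion to its angular coordinate and the views are averaged. Below is a minimal sketch with integer-pixel shifts and wrap-around borders via np.roll, on a hypothetical random light field; it illustrates the principle, not any specific paper's pipeline.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum refocusing of a 4D light field L[u, v, y, x].

    `alpha` scales the per-view shift and thereby selects the virtual
    focal plane; shifts are rounded to whole pixels for simplicity.
    """
    n_u, n_v, h, w = light_field.shape
    cu, cv = (n_u - 1) / 2, (n_v - 1) / 2         # angular center of the aperture
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (n_u * n_v)

# Hypothetical 5x5-view, 16x16-pixel light field:
lf = np.random.default_rng(2).random((5, 5, 16, 16))
refocused = refocus(lf, 1.0)      # virtual focal plane shifted by one pixel per view step
```

With alpha = 0 no shift is applied, so the result reduces to the plain angular average of the sub-aperture images; varying alpha sweeps the focal plane through the scene.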