
    Single view reflectance capture using multiplexed scattering and time-of-flight imaging

    This paper introduces the concept of time-of-flight reflectance estimation and demonstrates a new technique that allows a camera to rapidly acquire the reflectance properties of objects from a single viewpoint, over relatively long distances, and without encircling equipment. We measure material properties by indirectly illuminating an object with a laser source and observing its reflected light indirectly using a time-of-flight camera. The configuration acquires dense angular but low spatial sampling within a limited solid-angle range, all from a single viewpoint. Our ultra-fast imaging approach captures space-time "streak images" that can separate different bounces of light based on path length. Entanglements arise in the streak images, which mix signals from multiple paths that share the same total path length. We show how reflectances can be recovered by solving a linear system of equations under parametric material models; fitting to lower-dimensional reflectance models lets us disentangle the measurements. We demonstrate proof-of-concept results with parametric reflectance models for homogeneous and discretized heterogeneous patches, both in simulation and on experimental hardware. Compared to lengthy or highly calibrated BRDF acquisition techniques, our device captures meaningful reflectance information rapidly, on the order of seconds. We expect hardware advances to improve the portability and speed of this device.

    Funding: National Science Foundation (U.S.) (Award CCF-0644175); National Science Foundation (U.S.) (Award CCF-0811680); National Science Foundation (U.S.) (Award IIS-1011919); Intel Corporation (PhD Fellowship); Alfred P. Sloan Foundation (Research Fellowship)
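
    The disentangling step can be pictured with a toy example. The sketch below assumes a hypothetical 1-D setup in which each surface patch is viewed at a known angle and its return lands in a time bin fixed by its known total path length; patches sharing a bin mix together, yet fitting a low-dimensional parametric lobe (a simple Phong model here, our stand-in rather than the paper's exact model) still recovers the reflectance parameters. All names and values are illustrative.

        # A minimal sketch of disentangling entangled streak-image bins by
        # fitting a parametric reflectance model (toy geometry, hypothetical
        # Phong parameters; not the paper's actual pipeline).
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)
        n_patches, n_bins = 32, 8
        cos_theta = np.cos(np.linspace(0.05, 1.3, n_patches))  # known geometry
        time_bin = rng.integers(0, n_bins, size=n_patches)     # known path-length bin

        def phong(params, cos_t):
            kd, ks, n = params
            return kd * cos_t + ks * cos_t ** n                # simple 1-D lobe

        def streak(params):
            """Forward model: per-patch reflectance summed into shared time bins."""
            out = np.zeros(n_bins)
            np.add.at(out, time_bin, phong(params, cos_theta)) # entangled sums
            return out

        p_true = np.array([0.4, 0.5, 20.0])
        measured = streak(p_true) + 0.01 * rng.standard_normal(n_bins)

        fit = least_squares(lambda p: streak(p) - measured, x0=[0.2, 0.2, 5.0],
                            bounds=([0, 0, 1], [1, 1, 200]))
        print("recovered (kd, ks, n):", fit.x)

    Because the model has only three unknowns against eight bin measurements, the mixing does not prevent recovery, which mirrors the paper's argument for low-dimensional reflectance models.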

    Time-resolved reconstruction of scene reflectance hidden by a diffuser

    We use time-of-flight information in an iterative non-linear optimization algorithm to recover the reflectance properties of a three-dimensional scene hidden behind a diffuser. We demonstrate reconstruction of wide-field images without relying on diffuser correlation properties.
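
    A minimal sketch of this style of recovery, under toy assumptions: the diffuser is reduced to a fixed spatial blur, each scene point's return falls into a time bin given by its known depth, and the hidden reflectance is found by iterative non-linear optimization of this forward model with a sparsity prior. Every parameter below is hypothetical.

        # Toy time-resolved recovery through an assumed Gaussian diffuser;
        # not the authors' algorithm, just the general optimization pattern.
        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        n, n_bins = 64, 16
        depth_bin = rng.integers(0, n_bins, size=n)        # known per-point time bin

        def forward(x):
            """Blur by the diffuser, then histogram the returns into time bins."""
            y = np.zeros(n_bins)
            np.add.at(y, depth_bin, gaussian_filter1d(x, sigma=3.0))
            return y

        x_true = (rng.random(n) > 0.7).astype(float)       # sparse hidden scene
        y_meas = forward(x_true) + 0.01 * rng.standard_normal(n_bins)

        def loss(x):
            r = forward(x) - y_meas
            return 0.5 * r @ r + 0.05 * x.sum()            # data term + sparsity

        res = minimize(loss, x0=np.zeros(n), method="L-BFGS-B",
                       bounds=[(0.0, None)] * n)
        print("relative error:",
              np.linalg.norm(res.x - x_true) / np.linalg.norm(x_true))

    With the non-negativity bound in place, the L1 sparsity term reduces to a smooth sum, so a quasi-Newton solver such as L-BFGS-B applies directly.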

    Quantum-inspired computational imaging

    Computational imaging combines measurement and computational methods with the aim of forming images even when the measurement conditions are weak, few in number, or highly indirect. The recent surge in quantum-inspired imaging sensors, together with a new wave of algorithms allowing on-chip, scalable, and robust data processing, has spurred notable activity in low-light-flux imaging and sensing. We provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for imaging applications in extreme conditions. These methods exemplify the imaging solutions to come, for which the best results are expected to arise from an efficient codesign of the sensors and the data-analysis tools.

    Funding: Y.A. acknowledges support from the UK Royal Academy of Engineering under the Research Fellowship Scheme (RF201617/16/31). S.McL. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grant EP/J015180/1). V.G. acknowledges support from the U.S. Defense Advanced Research Projects Agency (DARPA) InPho program through U.S. Army Research Office award W911NF-10-1-0404, the U.S. DARPA REVEAL program through contract HR0011-16-C-0030, and the U.S. National Science Foundation through grants 1161413 and 1422034. A.H. acknowledges support from U.S. Army Research Office award W911NF-15-1-0479, U.S. Department of the Air Force grant FA8650-15-D-1845, and U.S. Department of Energy National Nuclear Security Administration grant DE-NA0002534. D.F. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grants EP/M006514/1 and EP/M01326X/1).
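
    One recurring ingredient of low-light-flux imaging is treating photon counts as Poisson draws and recovering the underlying intensity by maximum likelihood. The sketch below uses the classic Richardson-Lucy multiplicative update on a toy 1-D scene with an assumed Gaussian blur; it illustrates the general idea only and is not a specific method from the survey.

        # Toy Poisson maximum-likelihood deblurring of photon-count data via
        # Richardson-Lucy updates (assumed PSF and scene, for illustration).
        import numpy as np

        rng = np.random.default_rng(2)
        x_true = np.zeros(100)
        x_true[40:60] = 5.0                                # dim extended source
        kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
        kernel /= kernel.sum()                             # normalized blur (PSF)

        blur = lambda v: np.convolve(v, kernel, mode="same")
        counts = rng.poisson(blur(x_true))                 # photon-limited data

        x = np.ones_like(x_true)                           # flat initial guess
        for _ in range(200):                               # multiplicative MLE updates
            ratio = counts / np.maximum(blur(x), 1e-12)
            x *= blur(ratio)        # symmetric PSF: correlation equals convolution
        print("peak intensity estimate:", x.max())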

    Physics vs. Learned Priors: Rethinking Camera and Algorithm Design for Task-Specific Imaging

    Cameras were originally designed using physics-based heuristics to capture aesthetic images. In recent years, camera design has shifted from being purely physics-driven to increasingly data-driven and task-specific. In this paper, we present a framework for understanding the building blocks of this nascent field of end-to-end design of camera hardware and algorithms. As part of this framework, we show how methods that exploit both physics and data have become prevalent in imaging and computer vision, underscoring a key trend that will continue to dominate the future of task-specific camera design. Finally, we share current barriers to progress in end-to-end design and hypothesize how these barriers can be overcome.
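
    The simplest instance of such end-to-end design is fully linear, and that case fits in a short sketch: a sensing mask standing in for the optics and a linear decoder standing in for the algorithm are optimized jointly by gradient descent, so the pair, rather than either part alone, minimizes reconstruction error. The dimensions, data model, and projection step below are all our assumptions.

        # Toy end-to-end design: jointly learn a physically constrained mask M
        # (the "camera") and a linear decoder W (the "algorithm") by projected
        # gradient descent on reconstruction error over random sparse scenes.
        import numpy as np

        rng = np.random.default_rng(3)
        d, k, batch, lr = 64, 16, 256, 0.01

        M = rng.random((k, d))                     # mask weights, kept in [0, 1]
        W = 0.01 * rng.standard_normal((d, k))     # linear "reconstruction network"

        for step in range(3000):
            X = rng.standard_normal((batch, d)) * (rng.random((batch, d)) < 0.1)
            Y = X @ M.T                            # camera: k coded measurements
            R = Y @ W.T - X                        # decoder output minus truth
            gW = 2.0 / batch * R.T @ Y             # analytic MSE gradients
            gM = 2.0 / batch * W.T @ R.T @ X
            W -= lr * gW
            M = np.clip(M - lr * gM, 0.0, 1.0)     # project onto realizable masks

        print("final MSE:", float((R ** 2).mean()))

    The clipping step is the toy version of a hardware constraint: the learned optical element must remain physically realizable, which is one of the barriers that distinguishes end-to-end camera design from purely digital learning.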

    Computational periscopy with an ordinary digital camera

    Computing the amounts of light arriving from different directions enables a diffusely reflecting surface to play the part of a mirror in a periscope, that is, to perform non-line-of-sight imaging around an obstruction. Because computational periscopy has so far depended on light-travel distances being proportional to times of flight, it has mostly been performed with expensive, specialized ultrafast optical systems [1-12]. Here we introduce a two-dimensional computational periscopy technique that requires only a single photograph captured with an ordinary digital camera. Our technique recovers the position of an opaque object and the scene behind (but not completely obscured by) the object, when both the object and the scene are outside the line of sight of the camera, without requiring controlled or time-varying illumination. The recovery rests on the visible penumbra of the opaque object depending linearly on the hidden scene, a dependence that can be modelled through ray optics. Non-line-of-sight imaging using inexpensive, ubiquitous equipment may have considerable value in monitoring hazardous environments, navigation, and detecting hidden adversaries.

    Acknowledgements: We thank F. Durand, W. T. Freeman, Y. Ma, J. Rapp, J. H. Shapiro, A. Torralba, F. N. C. Wong and G. W. Wornell for discussions. This work was supported by the Defense Advanced Research Projects Agency (DARPA) REVEAL Program, contract number HR0011-16-C-0030.
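
    The linear penumbra model admits a compact demonstration. The sketch below assumes a toy 1-D layout: each wall pixel sees each hidden-scene point through a ray-optics visibility factor determined by the opaque occluder, so the photograph is linear in the hidden scene and can be inverted with regularized least squares. All positions, sizes, and the falloff term are hypothetical.

        # Toy penumbra-based inversion: build a ray-optics visibility matrix A,
        # then solve the linear system with Tikhonov regularization.
        import numpy as np

        n_wall, n_scene = 120, 40
        wall = np.linspace(-1, 1, n_wall)              # observed wall coordinates
        scene = np.linspace(-1, 1, n_scene)            # hidden scene coordinates
        occ_lo, occ_hi, occ_depth = -0.15, 0.15, 0.5   # occluder edges and plane

        A = np.zeros((n_wall, n_scene))
        for i, w in enumerate(wall):
            for j, s in enumerate(scene):
                cross = s + (w - s) * occ_depth        # ray at the occluder plane
                if not (occ_lo <= cross <= occ_hi):    # ray not blocked
                    A[i, j] = 1.0 / (1.0 + (w - s) ** 2)   # toy falloff

        rng = np.random.default_rng(4)
        x_true = np.maximum(rng.standard_normal(n_scene), 0.0)
        y = A @ x_true + 0.001 * rng.standard_normal(n_wall)   # one photograph

        lam = 1e-3                                     # Tikhonov regularization
        x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)
        print("correlation with truth:", np.corrcoef(x_hat, x_true)[0, 1])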

    Non-line-of-sight transient rendering

    The capture and analysis of light in flight, or light in a transient state, has enabled applications such as range imaging, reflectance estimation, and especially non-line-of-sight (NLOS) imaging, in which hidden geometry is reconstructed from time-resolved measurements of indirect diffuse light emitted by a laser. Transient rendering, significantly more challenging than its steady-state counterpart, is a key tool for developing such applications. In this work, we introduce a set of simple yet effective subpath sampling techniques targeting transient light-transport simulation in occluded scenes. We analyze the usual capture setups of NLOS scenes, where both the camera and the light sources are focused on particular points in the scene and the hidden geometry can be difficult to sample with conventional techniques, and we leverage this configuration to reduce the integration path space. We implement our techniques in a modified version of Mitsuba 2 adapted for transient light transport, which allows us to support parallelization, polarization, and differentiable rendering. © 2022 The Author(s)
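
    The bookkeeping that separates transient from steady-state rendering fits in a few lines. In the sketch below, each Monte Carlo light path deposits its throughput into a time bin indexed by total optical path length instead of into a single steady-state pixel value; the one-bounce scene is a toy stand-in, not the authors' Mitsuba 2 implementation.

        # Toy transient renderer: histogram path contributions by time of flight.
        import numpy as np

        rng = np.random.default_rng(5)
        c = 3e8                                   # speed of light, m/s
        bin_width = 1e-10                         # 100 ps temporal resolution
        hist = np.zeros(256)                      # transient response, one pixel

        laser = np.array([0.0, 0.0, 0.0])
        sensor = np.array([0.1, 0.0, 0.0])

        for _ in range(50_000):
            # sample a one-bounce point on a diffuse wall at z = 1 (toy scene)
            p = np.array([rng.uniform(-1, 1), rng.uniform(-1, 1), 1.0])
            d1 = np.linalg.norm(p - laser)
            d2 = np.linalg.norm(sensor - p)
            b = int((d1 + d2) / c / bin_width)    # time of flight -> time bin
            if b < hist.size:
                hist[b] += 1.0 / (d1 ** 2 * d2 ** 2)   # throughput, binned

        print("peak arrival:", hist.argmax() * bin_width * 1e9, "ns")

    Summing the histogram over all bins recovers the steady-state pixel value, which is why transient rendering strictly generalizes the conventional case.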