
    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    A variety of techniques such as light field, structured illumination, and time-of-flight (TOF) are commonly used for depth acquisition in consumer imaging, robotics and many other applications. Unfortunately, each technique suffers from its individual limitations preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages such as synthetic aperture refocusing with TOF imaging advantages such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function. (9 pages, 8 figures; accepted to 3DV 2015.)
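
    As a rough illustration of the single-frequency phase-wrapping problem this abstract mentions, the sketch below (plain NumPy, with an assumed 50 MHz modulation frequency) shows how a continuous-wave TOF measurement aliases depths beyond the unambiguous range; the paper resolves this ambiguity with spatio-angular cues rather than a second modulation frequency.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def cw_tof_depth(phase, f_mod):
    """Convert a measured CW-ToF phase (radians) to depth (m).

    The returned modulation phase wraps every 2*pi, so depths are
    only known modulo the unambiguous range c / (2 * f_mod).
    """
    return C * phase / (4 * np.pi * f_mod)

f_mod = 50e6                   # assumed 50 MHz modulation frequency
unambiguous = C / (2 * f_mod)  # 3.0 m unambiguous range
true_depth = 4.2               # a depth beyond the unambiguous range
phase = (4 * np.pi * f_mod * true_depth / C) % (2 * np.pi)
print(cw_tof_depth(phase, f_mod))  # 1.2 m: wrapped (aliased) depth
# Candidate true depths differ by multiples of the unambiguous range;
# a depth field picks among them using spatio-angular cues.
print([cw_tof_depth(phase, f_mod) + k * unambiguous for k in (0, 1, 2)])
```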

    A trillion frames per second: the techniques and applications of light-in-flight photography

    Cameras capable of capturing videos at a trillion frames per second allow us to freeze light in motion, a very counterintuitive capability when compared with our everyday experience, in which light appears to travel instantaneously. By combining this capability with computational imaging techniques, new imaging opportunities emerge, such as three-dimensional imaging of scenes that are hidden behind a corner, the study of relativistic distortion effects, imaging through diffusive media and imaging of ultrafast optical processes such as laser ablation, supercontinuum and plasma generation. We provide an overview of the main techniques that have been developed for ultra-high-speed photography, with a particular focus on `light-in-flight' imaging, i.e. applications where the key element is the imaging of light itself at frame rates that allow us to freeze its motion and therefore extract information that would otherwise be blurred out and lost. (Published in Reports on Progress in Physics.)
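
    A quick back-of-the-envelope calculation makes the "freezing light" claim concrete: at a trillion frames per second, light advances only a fraction of a millimetre between consecutive frames.

```python
C = 299_792_458.0            # speed of light in vacuum (m/s)
fps = 1e12                   # one trillion frames per second
frame_interval = 1.0 / fps   # 1 ps between successive frames
print(C * frame_interval)    # ~3e-4 m: light moves ~0.3 mm per frame
```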

    Non-line-of-sight imaging using a time-gated single photon avalanche diode

    By using time-of-flight information encoded in multiply scattered light, it is possible to reconstruct images of objects hidden from the camera’s direct line of sight. Here, we present a non-line-of-sight imaging system that uses a single-pixel, single-photon avalanche diode (SPAD) to collect time-of-flight information. Compared to earlier systems, this modification provides significant improvements in terms of power requirements, form factor, cost, and reconstruction time, while maintaining a comparable time resolution. The potential for further size and cost reduction of this technology makes this system a good base for developing a practical system that can be used in real-world applications.
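
    The reconstruction step such systems rely on is commonly some form of ellipsoidal backprojection; the sketch below is a generic NumPy version of that baseline (not necessarily the exact estimator used in this paper), in which every hidden-volume voxel accumulates the histogram counts consistent with its round-trip path length.

```python
import numpy as np

def backproject(histograms, laser_pts, sensor_pt, voxels, bin_width, c=3e8):
    """Ellipsoidal backprojection for NLOS reconstruction (a common
    baseline, not necessarily the method of the paper above).

    histograms : (n_laser, n_bins) photon counts vs. arrival time,
                 with the fixed source/detector-to-wall legs removed.
    laser_pts  : (n_laser, 3) illuminated points on the relay wall.
    sensor_pt  : (3,) wall point observed by the single-pixel SPAD.
    voxels     : (n_vox, 3) candidate positions in the hidden volume.
    bin_width  : histogram bin width in seconds.
    """
    score = np.zeros(len(voxels))
    for hist, lp in zip(histograms, laser_pts):
        # Path: wall laser spot -> hidden voxel -> observed wall spot.
        path = (np.linalg.norm(voxels - lp, axis=1)
                + np.linalg.norm(voxels - sensor_pt, axis=1))
        bins = np.clip((path / (c * bin_width)).astype(int), 0, len(hist) - 1)
        score += hist[bins]  # each voxel votes with its matching time bin
    return score  # large values mark likely hidden-surface locations
```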

    Non-line-of-sight tracking of people at long range

    A remote-sensing system that can determine the position of hidden objects has applications in many critical real-life scenarios, such as search and rescue missions and safe autonomous driving. Previous work has shown the ability to range and image objects hidden from the direct line of sight, employing advanced optical imaging technologies aimed at small objects at short range. In this work we demonstrate a long-range tracking system based on single laser illumination and single-pixel single-photon detection. This enables us to track one or more people hidden from view at a stand-off distance of over 50 m. These results pave the way towards next-generation LiDAR systems that will reconstruct not only the direct-view scene but also the main elements hidden behind walls or corners.
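
    One simple way to see how single-pixel time-of-flight returns can localize a hidden person is multilateration: each measured round-trip time constrains the target to a locus around the corresponding wall point, and several such loci intersect at the target. The sketch below is a hypothetical simplification (co-located illumination and observation points, SciPy least squares), not the retrieval algorithm of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

C = 3e8  # speed of light (m/s)

def locate_hidden_target(wall_pts, flight_times, x0=(1.0, 1.0)):
    """Estimate a hidden target's 2D floor position from round-trip
    times measured at several wall points (hypothetical setup: each
    wall point both scatters the laser and is observed, so a time t_i
    at point p_i constrains the target x to |x - p_i| = c * t_i / 2).
    """
    def residuals(x):
        return [np.linalg.norm(x - p) - C * t / 2
                for p, t in zip(wall_pts, flight_times)]
    return least_squares(residuals, x0).x

# Synthetic check: a target at (3, 4) m seen from three wall points.
target = np.array([3.0, 4.0])
wall_pts = [np.array(p, dtype=float) for p in [(0, 0), (1, 0), (2, 0)]]
times = [2 * np.linalg.norm(target - p) / C for p in wall_pts]
print(locate_hidden_target(wall_pts, times))  # ~[3. 4.]
```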

    Picosecond time-resolved imaging using SPAD cameras

    The recent development of 2D arrays of single-photon avalanche diodes (SPADs) has driven the development of applications based on the ability to capture light in motion. Such arrays are typically composed of 32 × 32 SPAD detectors, each having the ability to detect single photons and measure their time of arrival with a resolution of about 100 ps. Thanks to the single-photon sensitivity and the high temporal resolution of these detectors, it is now possible to image light as it is travelling on a centimetre scale. This opens the door to the direct observation and study of dynamics evolving over picosecond and nanosecond timescales, such as laser propagation in air, laser-induced plasma and laser propagation in optical fibres. Another interesting application enabled by the ability to image light in motion is the detection of objects hidden from view, based on the recording of scattered waves originating from objects hidden by an obstacle. As in LiDAR systems, the temporal information acquired at every pixel of a SPAD array, combined with the spatial information it provides, allows us to pinpoint the position of an object located outside the line of sight of the detector. Non-line-of-sight tracking can be a valuable asset in many scenarios, including search and rescue missions and safer autonomous driving.
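
    The basic measurement underlying these applications is time-correlated single-photon counting: repeated laser pulses yield a per-pixel histogram of photon arrival times, and the peak bin gives a range estimate. A minimal sketch, assuming the ~100 ps bins quoted above:

```python
import numpy as np

def peak_distance(arrival_times, bin_width=100e-12, c=3e8):
    """Estimate a range from SPAD photon time stamps (seconds).

    Repeated laser pulses give a time-of-arrival histogram (TCSPC);
    the peak bin corresponds to the dominant return.  With ~100 ps
    bins, one bin spans c * 100 ps / 2 = 1.5 cm of one-way range.
    """
    n_bins = int(np.ceil(arrival_times.max() / bin_width)) + 1
    hist, edges = np.histogram(arrival_times, bins=n_bins,
                               range=(0, n_bins * bin_width))
    t_peak = edges[np.argmax(hist)] + bin_width / 2
    return c * t_peak / 2  # one-way distance of the dominant return

# Synthetic photons: a return at 10 ns round trip, plus timing jitter.
rng = np.random.default_rng(0)
stamps = 10e-9 + rng.normal(0, 50e-12, size=2000)
print(peak_distance(stamps))  # ~1.5 m
```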

    Femto-photography: capturing and visualizing the propagation of light

    We present femto-photography, a novel imaging technique to capture and visualize the propagation of light. With an effective exposure time of 1.85 picoseconds (ps) per frame, we reconstruct movies of ultrafast events at an equivalent resolution of about one half trillion frames per second. Because cameras with this shutter speed do not exist, we re-purpose modern imaging hardware to record an ensemble average of repeatable events that are synchronized to a streak sensor, in which the time of arrival of light from the scene is coded in one of the sensor's spatial dimensions. We introduce reconstruction methods that allow us to visualize the propagation of femtosecond light pulses through macroscopic scenes; at such fast resolution, we must consider the notion of time-unwarping between the camera's and the world's space-time coordinate systems to take into account effects associated with the finite speed of light. We apply our femto-photography technique to visualizations of very different scenes, which allow us to observe the rich dynamics of time-resolved light transport effects, including scattering, specular reflections, diffuse interreflections, diffraction, caustics, and subsurface scattering. Our work has potential applications in artistic, educational, and scientific visualizations; industrial imaging to analyze material properties; and medical imaging to reconstruct subsurface elements. In addition, our time-resolved technique may motivate new forms of computational photography. (Funding: MIT Media Lab Consortium; Lincoln Laboratory; MIT Institute for Soldier Nanotechnologies; Alfred P. Sloan Foundation Research Fellowship; DARPA Young Faculty Award.)
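
    The time-unwarping step mentioned above can be summarized in one line: an event at depth z is recorded z/c late, so world time is recovered by subtracting the per-pixel propagation delay. A minimal sketch of that correction, simplified to assume a known per-pixel depth map:

```python
import numpy as np

def unwarp_camera_time(t_camera, depth_map, c=3e8):
    """Map camera time to world time for light-in-flight frames.

    The sensor records each event delayed by the travel time from the
    scene point to the camera, so per-pixel world time is
    t_world = t_camera - z / c, with z the scene depth at that pixel
    (a simplified form of the paper's time-unwarping step).
    """
    return t_camera - depth_map / c

depths = np.array([[1.0, 1.3], [1.1, 1.6]])    # scene depths (m)
print(unwarp_camera_time(8e-9, depths) * 1e9)  # world time in ns
```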

    Computational periscopy with an ordinary digital camera

    Computing the amounts of light arriving from different directions enables a diffusely reflecting surface to play the part of a mirror in a periscope—that is, perform non-line-of-sight imaging around an obstruction. Because computational periscopy has so far depended on light-travel distances being proportional to the times of flight, it has mostly been performed with expensive, specialized ultrafast optical systems [1-12]. Here we introduce a two-dimensional computational periscopy technique that requires only a single photograph captured with an ordinary digital camera. Our technique recovers the position of an opaque object and the scene behind (but not completely obscured by) the object, when both the object and scene are outside the line of sight of the camera, without requiring controlled or time-varying illumination. Such recovery is based on the visible penumbra of the opaque object having a linear dependence on the hidden scene, which can be modelled through ray optics. Non-line-of-sight imaging using inexpensive, ubiquitous equipment may have considerable value in monitoring hazardous environments, navigation and detecting hidden adversaries. (Acknowledgements: we thank F. Durand, W. T. Freeman, Y. Ma, J. Rapp, J. H. Shapiro, A. Torralba, F. N. C. Wong and G. W. Wornell for discussions. This work was supported by the DARPA REVEAL Program, contract number HR0011-16-C-0030. Accepted manuscript.)
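
    Because the penumbra depends linearly on the hidden scene, recovery reduces to a linear inverse problem, photo ≈ A @ scene. The sketch below uses a generic ridge-regularized least-squares solve as a stand-in for the paper's estimator; the forward operator A here is random for the toy check, whereas the paper builds it from the ray-optics penumbra model.

```python
import numpy as np

def recover_hidden_scene(photo, A, lam=1e-2):
    """Solve the linear penumbra model photo ≈ A @ scene for the
    hidden scene via ridge-regularized least squares (a generic
    baseline, not the paper's exact estimator).

    A encodes how much each hidden-scene patch contributes to each
    camera pixel, given the occluder's position.
    """
    n = A.shape[1]
    # Closed-form ridge solution: (A^T A + lam I) x = A^T y.
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ photo)

# Toy check with a random forward operator and a known scene.
rng = np.random.default_rng(1)
A = rng.random((500, 64))
scene = rng.random(64)
photo = A @ scene + rng.normal(0, 1e-3, size=500)
print(np.max(np.abs(recover_hidden_scene(photo, A) - scene)))  # small
```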

    Non-line-of-sight transient rendering

    The capture and analysis of light in flight, or light in a transient state, has enabled applications such as range imaging, reflectance estimation and, especially, non-line-of-sight (NLOS) imaging. In the latter case, hidden geometry can be reconstructed using time-resolved measurements of indirect diffuse light emitted by a laser. Transient rendering, significantly more challenging than its steady-state counterpart, is a key tool for developing such new applications. In this work, we introduce a set of simple yet effective subpath sampling techniques targeting transient light transport simulation in occluded scenes. We analyze the usual capture setups of NLOS scenes, where both the camera and light sources are focused on particular points in the scene, and where the hidden geometry can be difficult to sample using conventional techniques. We leverage that configuration to reduce the integration path space. We implement our techniques in a modified version of Mitsuba 2 adapted for transient light transport, allowing us to support parallelization, polarization, and differentiable rendering. © 2022 The Author(s).
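
    The core bookkeeping of transient rendering is simple: each sampled light path deposits its Monte Carlo throughput into a time bin determined by its total optical length. A minimal sketch of that binning step (not the paper's subpath sampling strategy):

```python
import numpy as np

def bin_transient(path_lengths, throughputs, n_bins, bin_width, c=3e8):
    """Accumulate sampled light-transport paths into a transient
    histogram (the generic binning step of transient rendering).

    Each path contributes its Monte Carlo throughput to the time bin
    given by its total optical length divided by the speed of light.
    """
    transient = np.zeros(n_bins)
    bins = (path_lengths / (c * bin_width)).astype(int)
    for b, w in zip(bins, throughputs):
        if 0 <= b < n_bins:  # drop paths arriving outside the window
            transient[b] += w
    return transient

# Two paths with 3 m and 4.5 m optical lengths, 100 ps time bins.
t = bin_transient(np.array([3.0, 4.5]), np.array([0.7, 0.2]),
                  n_bins=400, bin_width=100e-12)
print(np.nonzero(t)[0])  # bins 100 and 150: arrivals at 10 ns and 15 ns
```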