27 research outputs found

    Non-line-of-sight tracking of people at long range

    Get PDF
    A remote-sensing system that can determine the position of hidden objects has applications in many critical real-life scenarios, such as search and rescue missions and safe autonomous driving. Previous work has shown the ability to range and image objects hidden from the direct line of sight, employing advanced optical imaging technologies aimed at small objects at short range. In this work we demonstrate a long-range tracking system based on single laser illumination and single-pixel single-photon detection. This enables us to track one or more people hidden from view at a stand-off distance of over 50 m. These results pave the way towards next-generation LiDAR systems that will reconstruct not only the direct-view scene but also the main elements hidden behind walls or corners.
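    The localization step behind such a tracker can be sketched as a time-of-flight inversion: each laser spot on a relay wall defines an ellipse of possible hidden positions (constant path length from laser spot, via the target, to the detector's observed spot), and several spots pin down a unique position. The geometry below (spot coordinates, grid extent) is hypothetical, not the paper's setup; this is a minimal noise-free sketch.

```python
import numpy as np

C = 3e8  # speed of light, m/s

# Hypothetical relay-wall geometry (plan view, wall along y = 0):
# three laser illumination spots and one detector field-of-view spot.
laser_spots = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]])
detector_spot = np.array([1.0, 0.0])

def flight_times(targets):
    """Round-trip times: laser spot -> hidden target -> detector spot."""
    d_det = np.linalg.norm(targets - detector_spot, axis=-1)
    return np.stack(
        [(np.linalg.norm(targets - s, axis=-1) + d_det) / C for s in laser_spots],
        axis=-1)

true_target = np.array([2.5, 3.0])          # hidden person's position (toy)
measured = flight_times(true_target[None, :])[0]

# Grid search over candidate hidden positions for the best time match.
xs, ys = np.meshgrid(np.linspace(0.0, 5.0, 251), np.linspace(0.1, 5.0, 246))
cands = np.stack([xs.ravel(), ys.ravel()], axis=1)
err = np.sum((flight_times(cands) - measured) ** 2, axis=1)
best = cands[np.argmin(err)]
print(best)  # ≈ [2.5, 3.0]
```

    Tracking over time would repeat this fit per acquisition frame; a real system must also handle timing jitter and the weak multiply-scattered return, which this sketch ignores.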

    Computational periscopy with an ordinary digital camera

    Full text link
    Computing the amounts of light arriving from different directions enables a diffusely reflecting surface to play the part of a mirror in a periscope—that is, perform non-line-of-sight imaging around an obstruction. Because computational periscopy has so far depended on light-travel distances being proportional to the times of flight, it has mostly been performed with expensive, specialized ultrafast optical systems [1-12]. Here we introduce a two-dimensional computational periscopy technique that requires only a single photograph captured with an ordinary digital camera. Our technique recovers the position of an opaque object and the scene behind (but not completely obscured by) the object, when both the object and scene are outside the line of sight of the camera, without requiring controlled or time-varying illumination. Such recovery is based on the visible penumbra of the opaque object having a linear dependence on the hidden scene that can be modelled through ray optics. Non-line-of-sight imaging using inexpensive, ubiquitous equipment may have considerable value in monitoring hazardous environments, navigation and detecting hidden adversaries.
    We thank F. Durand, W. T. Freeman, Y. Ma, J. Rapp, J. H. Shapiro, A. Torralba, F. N. C. Wong and G. W. Wornell for discussions. This work was supported by the Defense Advanced Research Projects Agency (DARPA) REVEAL Program, contract number HR0011-16-C-0030.
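    The key idea, that the penumbra depends linearly on the hidden scene, can be illustrated with a 1-D toy model: each camera-visible wall point sees the hidden scene points whose rays are not blocked by the occluder, so the photograph is y = A x with a visibility matrix A built from ray optics. All geometry below (heights, occluder extent, sampling) is invented for illustration and is not the paper's model.

```python
import numpy as np

# 1-D toy of the penumbra model: wall intensity y = A @ x, where A encodes
# which hidden-scene points each wall point can see past an opaque occluder.
# Hypothetical geometry: scene line at height H, wall at height 0,
# occluder segment [occ_lo, occ_hi] at height h.
H, h = 2.0, 1.0
occ_lo, occ_hi = 0.9, 1.1
scene_x = np.linspace(0.0, 1.4, 8)    # hidden-scene sample positions
wall_x = np.linspace(0.0, 2.4, 49)    # camera-visible wall positions

# A ray from scene point sx to wall point wx crosses height h at
# wx + (sx - wx) * h / H; it is blocked if that lies inside the occluder.
cross = wall_x[:, None] + (scene_x[None, :] - wall_x[:, None]) * (h / H)
A = 1.0 - ((cross >= occ_lo) & (cross <= occ_hi)).astype(float)

x_true = np.array([0.0, 1.0, 0.2, 0.8, 0.0, 0.5, 0.9, 0.1])  # hidden radiosity
y = A @ x_true                                 # photographed penumbra

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)  # invert the linear model
print(np.max(np.abs(x_hat - x_true)))          # ~0 in this noise-free toy
```

    In this noise-free toy the occluder edges sweep across the wall as the scene point moves, so A has full column rank and least squares recovers the scene exactly; with real photographs, noise and model mismatch make the inversion far more delicate.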

    A single-shot non-line-of-sight range-finder

    Get PDF
    The ability to locate a target around a corner is crucial in situations where it is impractical or unsafe to physically move around the obstruction. However, current techniques require long acquisition times, as they rely on single-photon counting for precise arrival-time measurements. Here, we demonstrate a single-shot non-line-of-sight range-finding method operating at 10 Hz and capable of detecting a moving human target up to distances of 3 m around a corner. Due to the potential data acquisition speeds, this technique will find applications in search and rescue and autonomous vehicles.
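    The core of any such range-finder is converting the arrival time of the return pulse into a distance via d = c·t/2. A minimal sketch, with an entirely simulated single-shot waveform and a hypothetical 100 ps bin width (not the paper's instrument parameters):

```python
import numpy as np

C = 3e8              # speed of light, m/s
bin_width = 100e-12  # hypothetical 100 ps timing bins

# Simulated single-shot return waveform: a low noise floor plus one strong
# return pulse from the hidden target in a single time bin.
rng = np.random.default_rng(0)
waveform = rng.poisson(0.2, size=400).astype(float)
true_bin = 180
waveform[true_bin] += 50.0

peak = int(np.argmax(waveform))       # arrival-time bin of the return
round_trip = peak * bin_width
# Half the round-trip path gives the range; a real around-the-corner system
# would first subtract the known laser-to-wall and wall-to-detector legs.
range_m = C * round_trip / 2
print(peak, range_m)
```

    The single-shot advantage is that one such waveform per laser pulse suffices, rather than accumulating a photon-counting histogram over many pulses.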

    Occlusion-based computational periscopy with consumer cameras

    Full text link
    The ability to form images of scenes hidden from direct view would be advantageous in many applications – from improved motion planning and collision avoidance in autonomous navigation to enhanced danger anticipation for first-responders in search-and-rescue missions. Recent techniques for imaging around corners have mostly relied on time-of-flight measurements of light propagation, necessitating the use of expensive, specialized optical systems. In this work, we demonstrate how to form images of hidden scenes from intensity-only measurements of the light reaching a visible surface from the hidden scene. Our approach exploits the penumbra cast by an opaque occluding object onto a visible surface. Specifically, we present a physical model that relates the measured photograph to the radiosity of the hidden scene and the visibility function due to the opaque occluder. For a given scene–occluder setup, we characterize the parts of the hidden region for which the physical model is well-conditioned for inversion – i.e., the computational field of view (CFOV) of the imaging system. This concept of CFOV is further verified through the Cramér–Rao bound of the hidden-scene estimation problem. Finally, we present a two-step computational method for recovering the occluder and the scene behind it. We demonstrate the effectiveness of the proposed method using both synthetic and experimentally measured data.
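    The CFOV idea can be probed numerically: restrict the forward visibility matrix to the columns of a candidate hidden region and check its conditioning. Scene points whose rays graze the occluder cast informative shadow edges on the wall; points the occluder never blocks contribute identical all-ones columns and cannot be distinguished. The geometry and the conditioning threshold below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Sketch of the computational-field-of-view idea: for a fixed scene-occluder
# geometry, test how well-conditioned the linear forward model is when
# restricted to a candidate hidden region. Geometry is hypothetical.
H, h = 2.0, 1.0
occ_lo, occ_hi = 0.9, 1.1
scene_x = np.linspace(-1.0, 3.0, 40)  # candidate hidden-scene positions
wall_x = np.linspace(0.0, 2.4, 49)    # camera-visible wall positions

# Visibility matrix: a ray from scene point sx to wall point wx is blocked
# if it crosses the occluder segment at height h.
cross = wall_x[:, None] + (scene_x[None, :] - wall_x[:, None]) * (h / H)
A = 1.0 - ((cross >= occ_lo) & (cross <= occ_hi)).astype(float)

def region_condition(lo, hi):
    """Condition number of the forward model restricted to scene_x in [lo, hi]."""
    cols = (scene_x >= lo) & (scene_x <= hi)
    return np.linalg.cond(A[:, cols])

print(region_condition(0.2, 1.6))    # modest: this region is inside the CFOV
print(region_condition(-1.0, -0.5))  # enormous: outside the CFOV
```

    A Cramér–Rao analysis refines this picture by bounding per-point estimation variance rather than summarizing the whole region with one condition number.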