
    Non-line-of-sight imaging using phasor-field virtual wave optics

    Non-line-of-sight imaging allows objects to be observed when partially or fully occluded from direct view, by analysing indirect diffuse reflections off a secondary relay surface. Despite many potential applications, existing methods lack practical usability because of limitations including the assumption of single scattering only, ideal diffuse reflectance and lack of occlusions within the hidden scene. By contrast, line-of-sight imaging systems do not impose any assumptions about the imaged scene, while relying only on the mathematically simple processes of linear diffractive wave propagation. Here we show that the problem of non-line-of-sight imaging can also be formulated as one of diffractive wave propagation, by introducing a virtual wave field that we term the phasor field. Non-line-of-sight scenes can be imaged from raw time-of-flight data by applying the mathematical operators that model wave propagation in a conventional line-of-sight imaging system. Our method yields a new class of imaging algorithms that mimic the capabilities of line-of-sight cameras. To demonstrate our technique, we derive three imaging algorithms, modelled after three different line-of-sight systems. These algorithms rely on solving a wave diffraction integral, namely the Rayleigh–Sommerfeld diffraction integral. Fast solutions to Rayleigh–Sommerfeld diffraction and its approximations are readily available, benefiting our method. We demonstrate non-line-of-sight imaging of complex scenes with strong multiple scattering and ambient light, arbitrary materials, large depth range and occlusions. Our method handles these challenging cases without explicitly inverting a light-transport model. We believe that our approach will help to unlock the potential of non-line-of-sight imaging and promote the development of relevant applications not restricted to laboratory conditions.
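    The Rayleigh–Sommerfeld diffraction integral at the heart of this approach sums spherical wavelets e^{ikr}/r radiated from every source point. A minimal numerical sketch of that operator (the discretization and function names are my own, not the authors' code):

```python
import numpy as np

def rsd_propagate(field_src, xs, xd, wavelength):
    """Discretized Rayleigh-Sommerfeld propagation of a monochromatic
    phasor field from source points xs to destination points xd.

    field_src : complex amplitudes at the N source points, shape (N,)
    xs, xd    : 3D coordinates, shapes (N, 3) and (M, 3)
    """
    k = 2.0 * np.pi / wavelength
    # Pairwise distance from every destination point to every source point.
    r = np.linalg.norm(xd[:, None, :] - xs[None, :, :], axis=-1)
    # Huygens sum: each source point radiates a spherical wavelet e^{ikr}/r.
    return (field_src[None, :] * np.exp(1j * k * r) / r).sum(axis=1)
```

    In the phasor-field formulation, `field_src` would be the virtual wavefront synthesized on the relay wall from time-of-flight data; the same integral then "focuses" it into the hidden volume.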

    Reconstruction of hidden 3D shapes using diffuse reflections

    We analyze multi-bounce propagation of light in an unknown hidden volume and demonstrate that the reflected light contains sufficient information to recover the 3D structure of the hidden scene. We formulate the forward and inverse theory of secondary and tertiary scattering reflection using ideas from energy front propagation and tomography. We show that a careful choice of approximations, such as the Fresnel approximation, greatly simplifies this problem and that the inversion can be achieved via a backpropagation process. We provide a theoretical analysis of the invertibility, uniqueness and choices of space-time-angle dimensions using synthetic examples. We show that a 2D streak camera can be used to discover and reconstruct hidden geometry. Using a 1D high-speed time-of-flight camera, we show that our method can be used to recover 3D shapes of objects "around the corner".
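    The backpropagation inversion described above can be illustrated with a naive ellipsoidal back-projection: each photon arrival time constrains the hidden point to an ellipsoid with foci at the laser spot and the detection point, and time bins vote into a voxel grid. A toy sketch (the uniform time binning and all names are assumptions, not the authors' implementation):

```python
import numpy as np

def backproject(transients, wall_pts, laser_pt, voxels, c=1.0, dt=1.0):
    """Naive ellipsoidal back-projection of time-resolved measurements.

    transients : (P, T) photon counts at P wall points over T time bins
    wall_pts   : (P, 3) sample positions on the relay wall
    laser_pt   : (3,) illuminated wall spot
    voxels     : (V, 3) candidate hidden-scene positions
    """
    heat = np.zeros(len(voxels))
    for p, w in enumerate(wall_pts):
        # Total path length: laser spot -> voxel -> wall detection point.
        d = (np.linalg.norm(voxels - laser_pt, axis=1)
             + np.linalg.norm(voxels - w, axis=1))
        bins = np.round(d / (c * dt)).astype(int)
        valid = bins < transients.shape[1]
        # Each voxel accumulates the counts from its matching time bin.
        heat[valid] += transients[p, bins[valid]]
    return heat
```

    Voxels on the true surface receive consistent votes from every wall point, so the heat map peaks at the hidden geometry.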

    SAR Image Formation via Subapertures and 2D Backprojection

    Radar imaging requires the use of wide bandwidth and a long coherent processing interval, resulting in range and Doppler migration throughout the observation period. This migration must be compensated in order to properly image a scene of interest at full resolution, and many algorithms with various strengths and weaknesses are available for the task. Here, a subaperture-based imaging algorithm is proposed, which first forms range-Doppler (RD) images from slow-time sub-intervals, and then coherently integrates over the resulting coarse-resolution RD maps to produce a full-resolution SAR image. A two-dimensional backprojection-style approach is used to perform distortion-free integration of these RD maps. This technique offers many of the same benefits as traditional backprojection; however, the architecture of the algorithm is chosen such that several steps are shared with typical target detection algorithms. These steps are chosen such that no compromises need to be made to data quality, allowing for high-quality imaging while also preserving data for implementation of detection algorithms. Additionally, the algorithm benefits from computational savings that make it an excellent imaging algorithm for implementation in a simultaneous SAR-GMTI architecture.
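    The first stage of the two-stage structure described above, coarse range-Doppler maps formed from slow-time subapertures, can be sketched as follows (a toy model that assumes already range-compressed data with one pulse per row; names and layout are my own, not the paper's implementation):

```python
import numpy as np

def subaperture_rd_maps(phase_history, n_sub):
    """Split slow-time pulses into n_sub subapertures and form a coarse
    range-Doppler map from each via a slow-time FFT.

    phase_history : (pulses, range_bins) complex range-compressed data
    """
    subs = np.array_split(phase_history, n_sub, axis=0)
    # FFT over slow time within each subaperture: shorter dwell, hence
    # coarser Doppler resolution, but far less migration per map.
    return [np.fft.fftshift(np.fft.fft(s, axis=0), axes=0) for s in subs]
```

    The second stage would then coherently sum these maps pixel-by-pixel with backprojection-style phase alignment to recover full-aperture resolution.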

    Comparison of Image Processing Techniques Using Random Noise Radar

    Radar imaging is a tool used by our military to provide information to enhance situational awareness for both war fighters on the front lines and military leaders planning and forming strategies from afar. Noise radar technology is especially exciting as it has properties of covertness as well as the ability to see through walls, foliage, and other types of cover. In this thesis, AFIT's NoNet was used to generate images utilizing a random noise radar waveform as the transmission signal. The NoNet was arranged in four configurations: arc, line, cluster, and surround. Images were formed using three algorithms: multilateration and the SAR imaging techniques, convolution backprojection, and polar format algorithm. Each configuration was assessed based on image quality, in terms of its resolution, and computational complexity, in terms of its execution time. Experiments revealed tradeoffs between computational complexity and achieving fine resolutions. Depending on image size, the multilateration algorithm was approximately 6 to 35 times faster than polar format and 16 to 26 times faster than convolution backprojection. Backprojection yielded images with resolutions up to approximately 11 times finer in range and 18 times finer in cross-range for the surround configuration, over multilateration images. The pixel size in polar format images made resolution comparisons infeasible. This thesis provides information on the performance of imaging algorithms given a configuration of nodes. The information will provide groundwork for future use of the AFIT NoNet as a covertly operating imaging radar in dynamic applications.
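    Of the three algorithms compared, multilateration is the simplest to sketch: range measurements from known node positions are linearized by differencing against a reference node and solved by least squares. A textbook formulation (illustrative only, not the AFIT NoNet code):

```python
import numpy as np

def multilaterate(nodes, ranges):
    """Least-squares position estimate from ranges at known node positions.

    nodes  : (K, D) node coordinates (D = 2 or 3)
    ranges : (K,) measured distances from each node to the target
    """
    n0, r0 = nodes[0], ranges[0]
    # Subtracting the reference-node equation cancels the quadratic term,
    # leaving a linear system A p = b in the unknown position p.
    A = 2.0 * (nodes[1:] - n0)
    b = (np.sum(nodes[1:]**2, axis=1) - np.sum(n0**2)
         + r0**2 - ranges[1:]**2)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol
```

    Its low cost comes from solving one small linear system per target rather than integrating over every pixel, which is consistent with the speedups reported above.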

    Non-line-of-sight 3D imaging with a single-pixel camera

    Real-time, high-resolution 3D reconstruction of scenes hidden from the direct field of view is a challenging field of research with applications in real-life situations related, e.g., to surveillance, self-driving cars and rescue missions. Most current techniques recover the 3D structure of a non-line-of-sight (NLOS) static scene by detecting the return signal from the hidden object on a scattering observation area. Here, we demonstrate the full colour retrieval of the 3D shape of a hidden scene by coupling back-projection imaging algorithms with the high-resolution time-of-flight information provided by a single-pixel camera. By using a high-efficiency Single-Photon Avalanche Diode (SPAD) detector, this technique provides the advantage of imaging with no mechanical scanning parts, with acquisition times down to sub-seconds.

    Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging

    The recovery of objects obscured by scattering is an important goal in imaging and has been approached by exploiting, for example, coherence properties, ballistic photons or penetrating wavelengths. Common methods use scattered light transmitted through an occluding material, although these fail if the occluder is opaque. Light is scattered not only by transmission through objects, but also by multiple reflection from diffuse surfaces in a scene. This reflected light contains information about the scene that becomes mixed by the diffuse reflections before reaching the image sensor. This mixing is difficult to decode using traditional cameras. Here we report the combination of a time-of-flight technique and computational reconstruction algorithms to untangle image information mixed by diffuse reflection. We demonstrate a three-dimensional range camera able to look around a corner using diffusely reflected light that achieves sub-millimetre depth precision and centimetre lateral precision over 40 cm × 40 cm × 40 cm of hidden space.
    Funding: MIT Media Lab Consortium; United States Defense Advanced Research Projects Agency Young Faculty Award; Massachusetts Institute of Technology Institute for Soldier Nanotechnologies (Contract W911NF-07-D-0004).

    Computational Light Transport for Forward and Inverse Problems.

    Computational light transport comprises all the techniques used to compute the flow of light in a virtual scene. Its use is ubiquitous across applications, from entertainment and advertising to product design, engineering and architecture, including the generation of validated data for image-based techniques. However, simulating light transport accurately is a costly process. As a consequence, a balance must be struck between the fidelity of the physical simulation and its computational cost. For example, it is common to assume geometric optics or an infinite speed of light propagation, or to simplify reflectance models by ignoring certain phenomena. In this thesis we introduce several contributions to light-transport simulation, aimed both at improving the efficiency of its computation and at expanding the range of its practical applications. We pay special attention to removing the assumption of an infinite propagation speed, generalizing light transport to its transient state. Regarding efficiency, we present a method for computing the flow of light arriving directly from luminaires in a Monte Carlo image-synthesis system, significantly reducing the variance of the resulting images at equal execution time. We also introduce a density-estimation technique in the transient state that allows temporal samples to be reused more effectively in a participating medium. In the application domain, we also introduce two new uses of light transport: a model for simulating a special class of goniochromatic pigments that exhibit pearlescent appearance, with the goal of providing an intuitive editing workflow for manufacturing, and a non-line-of-sight imaging technique using time-of-flight information, built on a wave-based model of light propagation.
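    The variance reduction for direct illumination mentioned above rests on a general Monte Carlo principle: sampling from a density that resembles the integrand flattens the ratio f/pdf and shrinks the estimator's variance. A toy importance-sampling sketch (illustrative only, unrelated to the thesis code):

```python
import numpy as np

def mc_estimate(f, sample, pdf, n, rng):
    """Monte Carlo estimator of an integral: average f(x) / pdf(x)
    over n samples drawn from the given sampling routine."""
    x = sample(n, rng)
    return np.mean(f(x) / pdf(x))

rng = np.random.default_rng(0)
f = lambda x: 3.0 * x**2  # integral of 3x^2 over [0, 1] is exactly 1

# Uniform sampling: high variance where the integrand is large.
est_u = mc_estimate(f, lambda n, r: r.random(n),
                    lambda x: np.ones_like(x), 20000, rng)

# Importance sampling with pdf 2x (draw x = sqrt(u)): f/pdf = 1.5x is
# much flatter, so variance drops at the same sample count.
est_i = mc_estimate(f, lambda n, r: np.sqrt(r.random(n)),
                    lambda x: 2.0 * x, 20000, rng)
```

    Sampling luminaires directly plays the same role in a renderer: the sampling density is concentrated where the direct-light contribution is, so fewer samples suffice for the same image quality.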