
    Non-line-of-sight imaging using phasor-field virtual-wave optics

    Non-line-of-sight imaging allows objects to be observed when partially or fully occluded from direct view, by analysing indirect diffuse reflections off a secondary relay surface. Despite many potential applications [1–9], existing methods lack practical usability because of limitations including the assumption of single scattering only, ideal diffuse reflectance and lack of occlusions within the hidden scene. By contrast, line-of-sight imaging systems do not impose any assumptions about the imaged scene, relying instead on the mathematically simple processes of linear diffractive wave propagation. Here we show that the problem of non-line-of-sight imaging can also be formulated as one of diffractive wave propagation, by introducing a virtual wave field that we term the phasor field. Non-line-of-sight scenes can be imaged from raw time-of-flight data by applying the mathematical operators that model wave propagation in a conventional line-of-sight imaging system. Our method yields a new class of imaging algorithms that mimic the capabilities of line-of-sight cameras. To demonstrate our technique, we derive three imaging algorithms, modelled after three different line-of-sight systems. These algorithms rely on solving a wave diffraction integral, namely the Rayleigh–Sommerfeld diffraction integral. Fast solutions to Rayleigh–Sommerfeld diffraction and its approximations are readily available, benefiting our method. We demonstrate non-line-of-sight imaging of complex scenes with strong multiple scattering and ambient light, arbitrary materials, large depth range and occlusions. Our method handles these challenging cases without explicitly inverting a light-transport model. We believe that our approach will help to unlock the potential of non-line-of-sight imaging and promote the development of relevant applications not restricted to laboratory conditions.
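    The diffraction operator the abstract mentions can be illustrated with a discretized Rayleigh–Sommerfeld sum: each sampled relay-wall point contributes a spherical wave e^{ikd}/d to each reconstruction point. The sketch below is our own minimal NumPy illustration of that sum, not the authors' implementation; the function name and sampling scheme are assumptions.

```python
import numpy as np

def rsd_propagate(field, src_pts, dst_pts, wavelength):
    """Discretized Rayleigh-Sommerfeld diffraction: propagate a complex
    phasor field sampled at src_pts (N, 3) to dst_pts (M, 3) by summing
    spherical-wave contributions exp(i*k*d)/d over the aperture samples."""
    k = 2.0 * np.pi / wavelength
    # pairwise distances between every source and destination sample, (N, M)
    d = np.linalg.norm(dst_pts[None, :, :] - src_pts[:, None, :], axis=-1)
    return (field[:, None] * np.exp(1j * k * d) / d).sum(axis=0)
```

    For a single unit-amplitude source at distance 2 with wavelength 1, the propagated value is exp(i·4π)/2 = 0.5, which makes the kernel easy to sanity-check.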

    Virtual mirrors: non-line-of-sight imaging beyond the third bounce

    Non-line-of-sight (NLOS) imaging methods are capable of reconstructing complex scenes that are not visible to an observer using indirect illumination. However, they assume only third-bounce illumination, so they are currently limited to single-corner configurations, and present limited visibility when imaging surfaces at certain orientations. To reason about and tackle these limitations, we make the key observation that planar diffuse surfaces behave specularly at wavelengths used in the computational wave-based NLOS imaging domain. We call such surfaces virtual mirrors. We leverage this observation to expand the capabilities of NLOS imaging using illumination beyond the third bounce, addressing two problems: imaging single-corner objects at limited visibility angles, and imaging objects hidden behind two corners. To image objects at limited visibility angles, we first analyze the reflections of the known illuminated point on surfaces of the scene as an estimator of the position and orientation of objects with limited visibility. We then image those limited visibility objects by computationally building secondary apertures at other surfaces that observe the target object from a direct visibility perspective. Beyond single-corner NLOS imaging, we exploit the specular behavior of virtual mirrors to image objects hidden behind a second corner by imaging the space behind such virtual mirrors, where the mirror image of objects hidden around two corners is formed. No specular surfaces were involved in the making of this paper.
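    The mirror-image geometry the abstract relies on reduces to reflecting hidden-object points across the virtual mirror's plane. A minimal sketch of that reflection, assuming the virtual mirror is modelled as a plane given by a point and a normal (an illustrative helper of our own, not the authors' code):

```python
import numpy as np

def mirror_point(p, plane_point, plane_normal):
    """Reflect a 3D point across a plane -- the location where a planar
    'virtual mirror' forms the image of an object hidden behind a second
    corner."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_point, n) * n
```

    Reflecting the point (1, 2, 3) across the z = 0 plane yields (1, 2, -3): the mirror image appears "behind" the plane, which is the space the method images.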

    A machine learning approach for transient image reconstruction

    The recent advances in non-line-of-sight imaging have made it possible to reconstruct scenes hidden around a corner, with potential applications in e.g. autonomous driving or medical imaging. By operating at time scales fine enough to resolve the propagation of light, recent virtual-wave propagation methods leverage the temporal footprint of indirect light transport at a visible auxiliary surface to take virtual photos of objects hidden from the observer. Despite these advances, these methods have a critical computational bottleneck: reconstruction quality and computational performance are highly dependent on the resolution of the capture grid, which is typically discretized in space and time, leading to high processing and memory requirements. Inspired by recent machine learning techniques, in this work we propose a new computational imaging method to address these limitations. For this purpose we learn implicit representations of the captured data using neural networks, allowing us to convert the discrete space of the captured data into a continuous one. However, working directly with the captured data is a complex task due to its huge size and its high dynamic range. To avoid these problems, we leverage recent wave-based phasor-field imaging methods to transform the time-resolved captured data into sets of 2D complex-valued fields (i.e. phasor fields) at different frequencies, which provides a more favorable representation for machine learning methods. Under our implicit representation formulation, we analyze the performance of different neural network models in representing the complex structure of phasor fields, starting from simpler representations and iteratively providing more powerful models to support the complexity of the data.
We demonstrate how recent machine learning techniques based on multilayer perceptrons with sine activation functions are capable of representing phasor fields analytically in both the spatial and temporal frequency domains, and integrate them into the phasor-field framework to reconstruct hidden geometry. We finally test this neural model on different scenes, and measure its performance at resolutions higher than those of the captured data. We show how the model is able to analytically upsample all dimensions, and demonstrate how our implicit representation additionally works as a denoiser of the source discretized phasor field.
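    The sine-activated multilayer perceptron described above (a SIREN-style network) can be sketched as a plain NumPy forward pass mapping 2D coordinates to the real and imaginary parts of a phasor field. The layer sizes, frequency factor, and initialization below are illustrative assumptions, not the thesis's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer(x, w, b, omega=30.0):
    """One SIREN layer: sine activation applied to an affine map."""
    return np.sin(omega * (x @ w + b))

def siren_forward(coords, weights):
    """Tiny MLP with sine activations mapping (x, y) coordinates to a
    2-vector read as the real/imaginary parts of a phasor field sample."""
    h = coords
    for w, b in weights[:-1]:
        h = siren_layer(h, w, b)
    w, b = weights[-1]
    return h @ w + b  # linear output layer

# hypothetical 2 -> 16 -> 2 network with SIREN-style scaled initialization
weights = [(rng.normal(0.0, 0.5, (2, 16)), np.zeros(16)),
           (rng.normal(0.0, np.sqrt(6.0 / 16) / 30.0, (16, 2)), np.zeros(2))]
```

    Because the representation is a continuous function of the input coordinates, it can be queried at any resolution, which is the property the thesis exploits for analytic upsampling.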

    Computational Light Transport for Forward and Inverse Problems

    Computational light transport comprises all the techniques used to compute the flow of light in a virtual scene. Its use is ubiquitous across applications, from entertainment and advertising to product design, engineering and architecture, including the generation of validated data for computer-imaging techniques. However, simulating light transport accurately is a costly process, so a balance must be struck between the fidelity of the physical simulation and its computational cost. For example, it is common to assume geometric optics or an infinite speed of light propagation, or to simplify reflectance models by ignoring certain phenomena. In this thesis we introduce several contributions to light-transport simulation, aimed both at improving the efficiency of its computation and at expanding the range of its practical applications. We pay special attention to removing the assumption of an infinite propagation speed, generalizing light transport to its transient state. Regarding efficiency, we present a method to compute the flux of light arriving directly from luminaires in a Monte Carlo image-generation system, significantly reducing the variance of the resulting images for the same execution time. We also introduce a density-estimation technique in the transient state that allows temporal samples in a participating medium to be reused more effectively. On the applications side, we introduce two new uses of light transport: a model to simulate a special type of goniochromatic pigments that exhibit pearlescent appearance, with the goal of providing an intuitive form of editing for manufacturing, and a non-line-of-sight imaging technique that uses time-of-flight information, built on a wave-based model of light propagation.
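    The direct-from-luminaires computation mentioned above is commonly implemented as next-event estimation: sampling points on the light source and averaging their geometric contribution. A toy sketch under our own simplifications (rectangular light, no occlusion test, shading normal fixed to +z; this is not the thesis's estimator):

```python
import numpy as np

rng = np.random.default_rng(1)

def direct_light_mc(shade_p, light_corner, light_u, light_v, emitted, n=256):
    """Toy next-event estimation: Monte Carlo estimate of irradiance at
    shade_p from a rectangular area light spanned by light_u and light_v,
    ignoring occlusion and the cosine at the light for brevity."""
    u, v = rng.random(n), rng.random(n)
    pts = light_corner + u[:, None] * light_u + v[:, None] * light_v
    area = np.linalg.norm(np.cross(light_u, light_v))
    d = pts - shade_p
    r2 = (d ** 2).sum(axis=1)
    cos_shade = np.clip(d[:, 2] / np.sqrt(r2), 0.0, None)  # normal = +z
    return emitted * area * np.mean(cos_shade / r2)
```

    Sampling the light directly like this keeps variance low compared with hoping that hemisphere samples happen to hit a small luminaire, which is the general motivation behind dedicated direct-light estimators.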

    PERISCOPE: PERIapsis Subsurface Cave Optical Explorer

    The PERISCOPE study focuses primarily on lunar caves, due to their potential for being imaged in orbital scenarios. In the intervening years, from 2012-2015, scientists developed further rationales and interest in the scientific value of lunar caves. The caves do not appear likely to be sinks for water ice, because their relatively warm temperatures (~ -20 degrees Celsius) lead to geologically rapid migration of unbound water through sublimation, and inevitable loss through any skylights. However, the skylights themselves reveal apparent complex layering, which may speak to a more complex multi-stage evolution of mare flood basalts than previously considered, so their examination may provide even more insight into the lunar mare, which in turn provides a primary record of early solar system crustal formation and evolution processes. These insights extend to the exoplanet research community, who find the information useful for calibrating star formation and planetary evolution models. In addition, catalogues of lunar and martian skylights, "caves" or "atypical pit craters" have been developed, with numbers for both bodies now in the low hundreds thanks to additional high-resolution surveys and revisiting of the existing image databases.

    Self-Calibrating, Fully Differentiable NLOS Inverse Rendering

    Existing time-resolved non-line-of-sight (NLOS) imaging methods reconstruct hidden scenes by inverting the optical paths of indirect illumination measured at visible relay surfaces. These methods are prone to reconstruction artifacts due to inversion ambiguities and capture noise, which are typically mitigated through the manual selection of filtering functions and parameters. We introduce a fully differentiable end-to-end NLOS inverse rendering pipeline that self-calibrates the imaging parameters during the reconstruction of hidden scenes, using as input only the measured illumination while working both in the time and frequency domains. Our pipeline extracts a geometric representation of the hidden scene from NLOS volumetric intensities and estimates the time-resolved illumination at the relay wall produced by such geometric information using differentiable transient rendering. We then use gradient descent to optimize imaging parameters by minimizing the error between our simulated time-resolved illumination and the measured illumination. Our end-to-end differentiable pipeline couples diffraction-based volumetric NLOS reconstruction with path-space light transport and a simple ray marching technique to extract detailed, dense sets of surface points and normals of hidden scenes. We demonstrate the robustness of our method to consistently reconstruct geometry and albedo, even under significant noise levels.
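    In its simplest one-parameter form, the self-calibration loop described above is gradient descent on the mismatch between simulated and measured illumination. The sketch below stands in for the paper's differentiable renderer with an arbitrary user-supplied `simulate` function and a finite-difference gradient; all names and the optimization schedule are our own assumptions.

```python
import numpy as np

def calibrate(measured, simulate, theta0, lr=0.1, steps=200, eps=1e-4):
    """Toy self-calibration: optimize a scalar imaging parameter theta by
    gradient descent on the squared error between simulate(theta) and the
    measured data, using a central finite difference instead of autodiff."""
    theta = theta0
    for _ in range(steps):
        loss = lambda t: np.mean((simulate(t) - measured) ** 2)
        grad = (loss(theta + eps) - loss(theta - eps)) / (2.0 * eps)
        theta -= lr * grad
    return theta
```

    With a linear toy forward model, the loop recovers the parameter that generated the "measured" data; the paper's contribution is making the full transient-rendering forward model differentiable so that the same idea scales to many parameters at once.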