6 research outputs found

    Cohesive framework for non-line-of-sight imaging based on Dirac notation

    Get PDF
    The non-line-of-sight (NLOS) imaging field encompasses both experimental and computational frameworks that focus on imaging elements that are out of the direct line-of-sight, for example, imaging elements that are around a corner. Current NLOS imaging methods offer a compromise between accuracy and reconstruction time as experimental setups have become more reliable, faster, and more accurate. However, all these imaging methods implement different assumptions and light transport models that are only valid under particular circumstances. This paper lays down the foundation for a cohesive theoretical framework which provides insights about the limitations and virtues of existing approaches in a rigorous mathematical manner. In particular, we adopt Dirac notation and concepts borrowed from quantum mechanics to define a set of simple equations that enable: i) the derivation of other NLOS imaging methods from such single equation (we provide examples of the three most used frameworks in NLOS imaging: back-propagation, phasor fields, and f-k migration); ii) the demonstration that the Rayleigh-Sommerfeld diffraction operator is the propagation operator for wave-based imaging methods; and iii) the demonstration that back-propagation and wave-based imaging formulations are equivalent since, as we show, propagation operators are unitary. We expect that our proposed framework will deepen our understanding of the NLOS field and expand its utility in practical cases by providing a cohesive intuition on how to image complex NLOS scenes independently of the underlying reconstruction method

    Virtual mirrors: non-line-of-sight imaging beyond the third bounce

    Get PDF
    Non-line-of-sight (NLOS) imaging methods are capable of reconstructing complex scenes that are not visible to an observer using indirect illumination. However, they assume only third-bounce illumination, so they are currently limited to single-corner configurations, and present limited visibility when imaging surfaces at certain orientations. To reason about and tackle these limitations, we make the key observation that planar diffuse surfaces behave specularly at wavelengths used in the computational wave-based NLOS imaging domain. We call such surfaces virtual mirrors. We leverage this observation to expand the capabilities of NLOS imaging using illumination beyond the third bounce, addressing two problems: imaging single-corner objects at limited visibility angles, and imaging objects hidden behind two corners. To image objects at limited visibility angles, we first analyze the reflections of the known illuminated point on surfaces of the scene as an estimator of the position and orientation of objects with limited visibility. We then image those limited visibility objects by computationally building secondary apertures at other surfaces that observe the target object from a direct visibility perspective. Beyond single-corner NLOS imaging, we exploit the specular behavior of virtual mirrors to image objects hidden behind a second corner by imaging the space behind such virtual mirrors, where the mirror image of objects hidden around two corners is formed. No specular surfaces were involved in the making of this paper

    Reconstrucción de transporte de luz transitorio en escenas ocultas [Reconstruction of transient light transport in hidden scenes]

    Get PDF
    Computational imaging is a set of digital techniques that form images from measurements taken by sensors of different types, in contrast to the optical processes that produce images in a traditional camera, and that replace or augment the capabilities of some of those optical processes. One of the most striking recent advances involves ultrafast computational cameras, which can capture the transport of light through a scene over time. One of the new applications that has emerged from these cameras is the ability to see objects hidden around corners, within the field generally known as non-line-of-sight (NLOS) imaging. Doing so requires analyzing the indirect light reflected off visible objects and, from it, computationally reconstructing the hidden scene. This can lead to many practical applications, such as avoiding collisions between vehicles or visualizing regions that are difficult to access. However, most existing algorithms are limited to recovering the geometry of the hidden scene, discarding the temporal information of the light transport. If we could recover this information in an NLOS setting, we could, for example, analyze which materials the objects in the hidden scene are made of, or even see around two corners. The goal of this work is to develop a method to reconstruct transient light transport in hidden scenes, and to analyze the behavior of different filtering functions with a view to potential novel applications of hidden-scene analysis. Specifically, we first implemented the classic hidden-scene reconstruction algorithm, filtered backprojection, which estimates the positions of hidden objects by triangulating from the light's propagation time. We then extended this algorithm into a new one, which we call time-resolved filtered backprojection, which provides information about light transport in the hidden scene over time. We analyzed the effect of different filtering techniques on both algorithms, as well as the information provided by the reconstruction obtained with the new algorithm in different scenes with variations in material, range, and complexity. We expect this second algorithm to be used in future research on NLOS.
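    As a minimal sketch of the triangulation step behind (unfiltered) backprojection, assuming a confocal capture where each relay-wall point is both illuminated and sensed (the array names, the confocal assumption, and the time binning below are ours, not necessarily the thesis setup):

        import numpy as np

        # H        : (S, T) array, one transient histogram per relay-wall point
        # wall_pts : (S, 3) array of relay-wall point coordinates (meters)
        # voxels   : (V, 3) array of candidate hidden-scene point coordinates
        # t_bin    : temporal bin width of the histograms (seconds)

        C = 3e8  # speed of light, m/s

        def backproject(H, wall_pts, voxels, t_bin):
            n_bins = H.shape[1]
            G = np.zeros(len(voxels))                    # accumulated intensity per voxel
            for s, xw in enumerate(wall_pts):
                d = np.linalg.norm(voxels - xw, axis=1)            # wall point -> voxel
                t_idx = np.round(2.0 * d / C / t_bin).astype(int)  # round-trip time, in bins
                valid = t_idx < n_bins
                G[valid] += H[s, t_idx[valid]]           # vote with the measured intensity
            return G  # filtered backprojection then applies a filter (e.g. a Laplacian) to G

    A time-resolved variant, in the spirit of the extension described above, would accumulate votes into a (V, T) array indexed by arrival time instead of collapsing them into a single value per voxel.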