
    Correction of Errors in Time of Flight Cameras

    This thesis addresses the correction of errors in time-of-flight (ToF) depth cameras. Among recent technologies, continuous wave modulation (CWM) ToF cameras are a promising alternative for building compact and fast depth sensors. However, a wide variety of errors significantly affect the depth measurements, compromising potential applications, and correcting these errors poses a demanding challenge. Two main sources of error are currently considered: i) systematic and ii) non-systematic. Whereas the former can be calibrated, the latter depends on the geometry and the relative motion of the scene. This thesis proposes methods that address i) the systematic depth distortion and two of the most relevant non-systematic error sources: ii.a) multipath interference (MpI) and ii.b) motion artifacts. Systematic depth distortion in ToF cameras arises mainly from the use of imperfect sinusoidal modulation signals. As a result, the depth measurements appear distorted, and this distortion can be reduced with a calibration stage. This thesis proposes a calibration method based on showing the camera a plane at different positions and orientations. The method does not require calibration patterns and can therefore use the planes that naturally appear in the scene. It finds a function that yields the depth correction for each pixel, improving on existing methods in terms of accuracy, efficiency, and suitability. Multipath interference arises from the superposition of signal reflected along different paths with the direct reflection, producing distortions that are most noticeable on convex surfaces. MpI causes significant depth estimation errors in CWM ToF cameras. This thesis proposes a method that removes MpI from a single depth map. The approach requires no information about the scene beyond the ToF measurements themselves; it is based on a radiometric model of the measurements, which is used to estimate the undistorted depth map very accurately. One of the leading technologies for ToF depth imaging is based on the Photonic Mixer Device (PMD), which obtains depth by sequentially sampling the correlation between the modulation signal and the signal returning from the scene at different phase shifts. Under motion, PMD pixels capture different depths at each sampling stage, producing motion artifacts. The correction method proposed in this thesis stands out for its speed and simplicity, so it can easily be included in the camera hardware. The depth of each pixel is recovered by exploiting the consistency between the correlation samples of the PMD pixel and its local neighborhood. The method produces accurate corrections, greatly reducing motion artifacts, and as a by-product it yields the optical flow at moving contours from a single capture.
Despite being a very promising alternative for depth acquisition, ToF cameras still have to overcome challenging problems concerning the correction of systematic and non-systematic errors. This thesis proposes effective methods to deal with these errors.
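As background for the errors discussed above, the following sketch shows the standard four-phase (four-bucket) depth computation used in CWM/PMD sensors; the array names and the 20 MHz modulation frequency are illustrative assumptions rather than details taken from the thesis.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def amcw_depth(a0, a1, a2, a3, f_mod=20e6):
    """Standard four-phase AMCW depth estimate (a sketch, not the thesis method).

    a0..a3 are correlation samples taken at 0, 90, 180 and 270 degree phase
    offsets. Because they are acquired sequentially, a moving scene can make
    the four samples inconsistent, which is the origin of motion artifacts.
    """
    a0, a1, a2, a3 = (np.asarray(a, dtype=np.float64) for a in (a0, a1, a2, a3))
    phase = np.arctan2(a3 - a1, a0 - a2)       # wrapped phase
    phase = np.mod(phase, 2.0 * np.pi)         # map to [0, 2*pi)
    depth = C * phase / (4.0 * np.pi * f_mod)  # unambiguous range = C / (2 * f_mod)
    amplitude = 0.5 * np.hypot(a3 - a1, a0 - a2)
    return depth, amplitude
```

This closed-form estimate assumes perfectly sinusoidal correlation; harmonics in the real modulation signal are what produce the systematic depth distortion that the plane-based calibration above compensates per pixel.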

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    Experimental Procedure for the Metrological Characterization of Time-of-Flight Cameras for Human Body 3D Measurements

    Time-of-flight cameras are widely adopted in a variety of indoor applications ranging from industrial object measurement to human activity recognition. However, the available products may differ in terms of the quality of the acquired point cloud, and the datasheets provided by the manufacturers may not be enough to guide researchers in choosing the most suitable device for their application. Hence, this work details an experimental procedure to assess the error sources of time-of-flight cameras that should be considered when designing an application involving time-of-flight technology, such as bias correction and the influence of temperature on point cloud stability. This is a first step towards standardizing the metrological characterization procedure, which would ensure the robustness and comparability of results across tests and devices. The procedure was conducted on the Kinect Azure, Basler Blaze 101, and Basler ToF 640 cameras. Moreover, we compared the devices in the task of 3D reconstruction following a procedure involving the measurement of both an object and a human upper-body-shaped mannequin. The experiment highlighted that, despite the results of the previously conducted metrological characterization, some devices showed evident difficulties in reconstructing the target objects. Thus, we proved that performing a rigorous evaluation procedure similar to the one proposed in this paper is always necessary when choosing the right device.
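As an illustration of the kind of bias and precision assessment such a characterization involves, the sketch below fits a plane to the point cloud of a flat target and summarizes the residuals; the least-squares plane fit and the function names are assumptions for illustration, not the paper's protocol.

```python
import numpy as np

def plane_fit_residuals(points):
    """Fit a plane to an (N, 3) point cloud of a flat target and return the
    signed point-to-plane distances (a simple sketch of a precision check,
    not the exact procedure of the paper)."""
    centroid = points.mean(axis=0)
    # Smallest right singular vector of the centered cloud = plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return (points - centroid) @ normal

def depth_bias_report(points, nominal_distance, measured_distance):
    """Summarize precision (spread around the fitted plane) and trueness
    (offset of the measured distance from a reference distance)."""
    residuals = plane_fit_residuals(points)
    return {
        "precision_std_m": float(residuals.std()),
        "bias_m": float(measured_distance - nominal_distance),
    }
```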

    Investigation of jitter on full-field amplitude modulated continuous wave time-of-flight range imaging cameras

    Time-of-flight (ToF) range imaging cameras indirectly measure the time taken by light to travel from the modulated light source to the scene and back to the camera, and it is this principle that depth cameras use to perform depth measurements. This thesis focuses on ToF cameras based on amplitude modulated continuous wave (AMCW) lidar techniques, which measure the phase difference between the emitted and reflected light signals. Due to their compact size, practical design, low weight and low energy consumption, these cameras are in high demand in many applications. Commercially available AMCW ToF cameras have relatively high noise levels due to electronic sources such as shot noise, reset noise, amplifier noise, crosstalk, analogue-to-digital converter quantization and multipath light interference. Many noise sources in these cameras, such as harmonic contamination, non-linearity, multipath interference and light scattering, are well investigated. In contrast, the effect of electronic jitter as a noise source in ranging cameras has barely been studied. Jitter is defined as any timing deviation with respect to an ideal signal. An investigation of the effect of jitter on range imaging is important because timing errors could cause errors in the measured phase, and thus in range. The purpose of this research is to investigate the effect of jitter on range measurement in AMCW ToF range imaging. This is achieved through three main contributions: the development of a common algorithm for measuring the jitter present in signals from depth cameras; the proposal of a cost-effective alternative method to measure jitter using a software defined radio receiver; and an analysis of the influence of jitter on range measurement. First, an algorithm for extracting the jitter of a signal without access to a reference clock signal is proposed. The algorithm is based on Fourier analysis combined with signal processing techniques and can be used for real-time jitter extraction on a modulated signal of any shape (sinusoidal, triangular, rectangular). The method is used to measure the amount of jitter in the light signals of two AMCW ToF range imaging cameras, namely the MESA Imaging SwissRanger 4000 and the SoftKinetic DepthSense 325. Periodic and random jitter were found to be present in the light sources of both cameras, with the MESA camera notably worse, showing random jitter of (159.6 +/- 0.1) ps RMS in amplitude. Next, in a novel approach, an inexpensive software defined radio (SDR) USB dongle is used with the proposed algorithm to extract the jitter in the light signal of the two ToF cameras above. This is a cost-effective alternative to an expensive real-time medium-speed digital oscilloscope. However, the method has two significant limitations: (1) it can measure jitter only up to half of the intermediate frequency obtained by down-shifting the amplified radio frequency with the local oscillator, which is less than the Nyquist frequency of the dongle; and (2) if the number of samples per cycle captured from the dongle is insufficient, the jitter extraction fails because the signal is not represented smoothly enough.
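The abstract does not spell out the extraction algorithm itself; as a rough illustration of reference-free jitter estimation, the sketch below recovers the instantaneous phase of a captured modulation waveform with a Hilbert transform and reads the timing deviation from its departure from a fitted linear phase ramp. The Hilbert-transform route and all names here are assumptions, not the thesis's Fourier-based method.

```python
import numpy as np
from scipy.signal import hilbert

def jitter_rms(samples, sample_rate):
    """Estimate RMS timing jitter of a roughly sinusoidal modulation signal
    without a reference clock: unwrap the analytic-signal phase, fit the
    ideal linear phase ramp, and convert the residual phase error to time.

    A sketch under simplifying assumptions (single dominant tone, noise
    removed beforehand); not the algorithm proposed in the thesis."""
    phase = np.unwrap(np.angle(hilbert(samples)))
    t = np.arange(len(samples)) / sample_rate
    # Ideal signal: phase grows linearly with time; fitted slope = 2*pi*f0.
    slope, intercept = np.polyfit(t, phase, 1)
    phase_error = phase - (slope * t + intercept)
    f0 = slope / (2.0 * np.pi)
    return np.sqrt(np.mean(phase_error**2)) / (2.0 * np.pi * f0)
```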
Finally, the influence of periodic and random jitter on range measurements made with AMCW range imaging cameras is studied. An analytical model for the effect of periodic jitter on range measurements under heterodyne and homodyne operation of AMCW ToF range imaging cameras is derived in the frequency domain and tested with simulated data over a range of system parameters. The product of the camera's angular modulation frequency and the amplitude of the periodic jitter is a characteristic parameter for the phase error caused by periodic jitter. We found that for currently available AMCW cameras (modulation frequencies below 100 MHz), neither periodic nor random jitter has a measurable effect on range measurement. However, with the increases in modulation frequency and decreases in integration period that are likely in the near future, periodic jitter may have a measurable effect on ranging. The influence of random jitter is also investigated by deriving an analytical model based on stochastic calculus, fundamental statistics and Fourier analysis, under the assumption that the random jitter follows a Gaussian distribution. A Monte Carlo simulation of this model is performed for a 1 ms integration period. We found that increasing the modulation frequency above approximately 400 MHz, with random jitter of 140 ps, has a measurable effect on ranging.
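For reference, the conversion from a phase error to the corresponding range error in an AMCW camera follows directly from the standard depth formula; the relation below restates it together with the characteristic parameter mentioned above, and is general background rather than a result specific to this thesis.

```latex
% Range error produced by a phase error \Delta\varphi at modulation
% frequency f_m (c is the speed of light):
\Delta d \;=\; \frac{c\,\Delta\varphi}{4\pi f_m}
% For periodic jitter of amplitude A_j (in seconds), the induced phase
% excursion scales with the characteristic parameter
\omega_m A_j \;=\; 2\pi f_m A_j ,
% which is why raising f_m makes a fixed timing jitter more visible in range.
```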

    Efficient Methods for Computational Light Transport

    In this thesis we present contributions to different challenges of computational light transport. Light transport algorithms are present in many modern applications, from image generation for visual effects to real-time object detection. Light is a rich source of information that allows us to understand and represent our surroundings, but obtaining and processing this information presents many challenges due to its complex interactions with matter. This thesis provides advances in this subject from two different perspectives: steady-state algorithms, where the speed of light is assumed infinite, and transient-state algorithms, which deal with light as it travels not only through space but also time. Our steady-state contributions address problems in both offline and real-time rendering. We target variance reduction in offline rendering by proposing a new efficient method for participating media rendering. In real-time rendering, we target the energy constraints of mobile devices by proposing a power-efficient rendering framework for real-time graphics applications. In transient state we first formalize light transport simulation under this domain, and present new efficient sampling methods and algorithms for transient rendering. We finally demonstrate the potential of simulated data to correct multipath interference in Time-of-Flight cameras, one of the pathological problems in transient imaging.
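As background for the participating-media contribution, the sketch below shows the textbook way of sampling a free-flight distance proportionally to transmittance in a homogeneous medium; it illustrates the kind of baseline estimator such variance-reduction work builds on, and the function names are assumptions, not the method proposed in the thesis.

```python
import math
import random

def sample_free_flight(sigma_t, rng=random.random):
    """Sample a propagation distance t with pdf p(t) = sigma_t * exp(-sigma_t * t),
    i.e. proportionally to the transmittance of a homogeneous medium.
    Textbook baseline, not the method proposed in the thesis."""
    return -math.log(1.0 - rng()) / sigma_t

def free_flight_pdf(sigma_t, t):
    """Probability density of the sampled distance, used to weight the estimator."""
    return sigma_t * math.exp(-sigma_t * t)
```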

    MODELING THE SYSTEMATIC DISTANCE ERROR IN MEASUREMENTS MADE WITH THE PMD CAMCUBE 3.0 CAMERA

    Range cameras are able to measure the distance between the sensor and the surface of objects for every pixel of the image. Compared with laser scanning equipment, they have the advantage of obtaining the distance to many points in a single instant, without a scanning mechanism. The measurements obtained by the camera contain systematic errors that must be minimized. Factors such as the integration time, the distance being measured, and the illumination of the scene influence the measurement. In this study, the influence of varying the integration time and the camera-to-target distance on the accuracy of the computed distance was analyzed, in order to model the systematic errors of measurements made with a PMD Camcube 3.0 camera. The modeling was done by means of the Discrete Fourier Transform and reduced the root mean square error (RMSE) from 15.01 cm to 5.05 cm for observations made with an integration time of 4000 µs. It was also verified that the amplitude of the error is directly proportional to the integration time used.
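To make this kind of correction concrete, the sketch below fits a low-order Fourier (harmonic) series to the distance error as a function of the measured distance and subtracts the fitted model from new measurements; the number of harmonics and the function names are illustrative assumptions, not the model actually fitted in the paper.

```python
import numpy as np

def fit_harmonic_error_model(measured, reference, period, n_harmonics=3):
    """Least-squares fit of error(d) = a0 + sum_k [a_k cos(2*pi*k*d/period)
    + b_k sin(2*pi*k*d/period)] to observed distance errors.
    A sketch of a Fourier-series ("wiggling") error model with an assumed
    number of harmonics; not the exact model of the paper."""
    error = measured - reference
    cols = [np.ones_like(measured, dtype=np.float64)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * measured / period))
        cols.append(np.sin(2 * np.pi * k * measured / period))
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, error, rcond=None)
    return coeffs

def correct_distance(measured, coeffs, period):
    """Subtract the modeled systematic error from new measurements."""
    n_harmonics = (len(coeffs) - 1) // 2
    model = np.full_like(measured, coeffs[0], dtype=np.float64)
    for k in range(1, n_harmonics + 1):
        model += coeffs[2 * k - 1] * np.cos(2 * np.pi * k * measured / period)
        model += coeffs[2 * k] * np.sin(2 * np.pi * k * measured / period)
    return measured - model
```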

    Handling Artifacts in Dynamic Depth Sequences

    Image sequences of dynamic scenes recorded with various depth imaging devices, and the handling of the artifacts that arise within them, are the main scope of this work. First, a framework for range flow estimation from Microsoft's multi-modal imaging device Kinect is presented. All essential stages of the flow computation pipeline are discussed, starting from camera calibration, followed by the alignment of the range and color channels, and finally the introduction of a novel multi-modal range flow algorithm that is robust against typical (technology-dependent) range estimation artifacts. Second, regarding Time-of-Flight data, motion artifacts arise in recordings of dynamic scenes, caused by the sequential nature of the raw image acquisition process. While many methods for compensating such errors have been proposed so far, there is still a lack of proper comparison. This gap is bridged here by not only evaluating all proposed methods, but also by providing additional insight into the technical properties and depth correction of the recorded data as a baseline for future research. Exchanging the tap calibration model required by these methods with a model closer to reality improves the results of all related methods without any loss of performance.
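As an illustration of why the sequential raw acquisition produces motion artifacts, the sketch below applies a common plausibility check on the four raw correlation images (for an ideal static scene A0 + A2 equals A1 + A3) and flags pixels where motion breaks that consistency; the threshold and names are assumptions, not the evaluation protocol of this work.

```python
import numpy as np

def motion_artifact_mask(a0, a1, a2, a3, rel_threshold=0.05):
    """Flag pixels whose four sequentially acquired correlation samples are
    inconsistent. For a static scene with sinusoidal correlation,
    a0 + a2 == a1 + a3; motion between the sub-frames breaks this equality.
    The relative threshold is an illustrative assumption."""
    s02 = a0.astype(np.float64) + a2
    s13 = a1.astype(np.float64) + a3
    denom = np.maximum(np.abs(s02) + np.abs(s13), 1e-12)
    inconsistency = np.abs(s02 - s13) / denom
    return inconsistency > rel_threshold
```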