11 research outputs found

    Tackling 3D ToF Artifacts Through Learning and the FLAT Dataset

    Scene motion, multiple reflections, and sensor noise introduce artifacts in the depth reconstruction performed by time-of-flight cameras. We propose a two-stage, deep-learning approach to address all of these sources of artifacts simultaneously. We also introduce FLAT, a synthetic dataset of 2000 ToF measurements that captures all of these nonidealities and allows different camera hardware to be simulated. Using the Kinect 2 camera as a baseline, we show improved reconstruction errors over state-of-the-art methods on both simulated and real data. Comment: ECCV 201

    Determination of Chern numbers with a phase retrieval algorithm

    Ultracold atoms in optical lattices form a clean quantum-simulator platform that can be used to examine topological phenomena and test exotic topological materials. Here we propose an experimental scheme to measure the Chern numbers of two-dimensional multiband topological insulators with bosonic atoms. We show how to extract the topological invariants from a sequence of time-of-flight images by applying a phase retrieval algorithm to matter waves. We illustrate the advantages of using bosonic atoms, as well as the efficiency and robustness of the method, with two prominent examples: the Harper-Hofstadter model with an arbitrary commensurate magnetic flux and the Haldane model on a brick-wall lattice. Comment: Version accepted for publication in Phys. Rev. A (11 pages, 8 figures)
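The phase-retrieval step itself is beyond a short sketch, but once band eigenstates are available on a discretized Brillouin zone, Chern numbers are routinely computed with the Fukui-Hatsugai-Suzuki lattice method. The sketch below applies it to the Harper-Hofstadter model at flux 1/3 (the model is named in the abstract; the FHS method and all code details are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def chern_numbers(hk, kx_period, ky_period, nk=24):
    """Chern number of each band via the Fukui-Hatsugai-Suzuki lattice
    method: sum the Berry flux (phase of a plaquette Wilson loop) over a
    discretized Brillouin zone."""
    kxs = np.linspace(0.0, kx_period, nk, endpoint=False)
    kys = np.linspace(0.0, ky_period, nk, endpoint=False)
    # Eigenvector columns at every grid point (eigh sorts bands ascending)
    v = np.array([[np.linalg.eigh(hk(kx, ky))[1] for ky in kys] for kx in kxs])
    nb = v.shape[-1]
    c = np.zeros(nb)
    for i in range(nk):
        for j in range(nk):
            ip, jp = (i + 1) % nk, (j + 1) % nk
            for b in range(nb):
                loop = (np.vdot(v[i, j, :, b], v[ip, j, :, b])
                        * np.vdot(v[ip, j, :, b], v[ip, jp, :, b])
                        * np.vdot(v[ip, jp, :, b], v[i, jp, :, b])
                        * np.vdot(v[i, jp, :, b], v[i, j, :, b]))
                c[b] += np.angle(loop)  # gauge-invariant plaquette flux
    return np.round(c / (2 * np.pi)).astype(int)

def hofstadter(kx, ky, p=1, q=3):
    """Harper-Hofstadter magnetic Bloch Hamiltonian at flux p/q in the Landau
    gauge; kx lives in the reduced magnetic zone [0, 2*pi/q)."""
    h = np.zeros((q, q), dtype=complex)
    for m in range(q):
        h[m, m] = 2.0 * np.cos(ky - 2.0 * np.pi * p * m / q)
    for m in range(q - 1):
        h[m, m + 1] = h[m + 1, m] = 1.0
    h[q - 1, 0] += np.exp(1j * q * kx)   # wraparound hopping carries the kx phase
    h[0, q - 1] += np.exp(-1j * q * kx)
    return h

bands = chern_numbers(hofstadter, 2 * np.pi / 3, 2 * np.pi)  # one integer per band
```

At flux 1/3 the three bands carry Chern numbers of magnitude (1, 2, 1) summing to zero, which is the invariant such a time-of-flight analysis would need to recover.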

    Effect of Location, Clone, and Measurement Season on the Propagation Velocity of Poplar Trees Using the Akaike Information Criterion for Arrival Time Determination

    The purchase price of any forest plantation depends on the quality of its raw wood, specifically on variables such as density, fiber orientation, bending strength, and bending MoE (Modulus of Elasticity). The propagation velocity of elastic waves has become one of the most popular parameters for evaluating wood in standing trees. This study had two objectives: (1) show how this velocity is clearly affected by the clone, the location of the crop, and the measurement season of poplar crops; and (2) apply the Akaike information criterion to determine the arrival time of the waves, based on the entropy of the signals recorded by piezoelectric sensors placed on the trunk of the tree. This work was made possible by the financial support of the COMPOP_Timber project “Desarrollo de productos de ingeniería elaborados a base de tablones y chapas de chopo con inserciones de material compuesto para su uso en construcción”, BIA2017-82650-R. The authors thank Antonio Aguilar and Chihab Abarkane for their help during field measurements, and Esther Merlo from MADERAS PLUS for the analysis of results and methods.
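The abstract does not spell out the AIC formulation; a commonly used variance-based AIC picker (Maeda's form) selects the sample that best splits the trace into a pre-arrival and post-arrival window. The sketch below, including the synthetic trace, is an illustrative assumption, not the study's implementation:

```python
import numpy as np

def aic_arrival_index(x):
    """Variance-based AIC picker (Maeda's formulation): the arrival is the
    sample k minimizing
        AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(2, n - 2)  # keep both windows non-degenerate
    aic = np.array([k * np.log(np.var(x[:k]) + 1e-20)
                    + (n - k - 1) * np.log(np.var(x[k:]) + 1e-20)
                    for k in ks])
    return int(ks[np.argmin(aic)])

# Synthetic trace: sensor noise, then a wave arriving at sample 500
rng = np.random.default_rng(0)
trace = np.concatenate([
    0.05 * rng.standard_normal(500),
    np.sin(0.2 * np.arange(300)) + 0.05 * rng.standard_normal(300),
])
arrival = aic_arrival_index(trace)
```

The AIC minimum falls where the variance (and hence the entropy) of the signal changes abruptly, which is what makes the criterion a natural arrival-time detector for piezoelectric records.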

    Background Subtraction for Time of Flight Imaging

    A time-of-flight camera provides two types of images simultaneously: depth and intensity. In this paper, a computational method for background subtraction that combines both images, or fast sequences of images, is proposed. The background model is based on unbalanced or semi-supervised classifiers, in particular support vector machines. A brief review of one-class support vector machines is first given. A method that combines the range and intensity data in two operational modes is then provided. Finally, experimental results are presented and discussed. Facultad de Informática
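The paper's model uses one-class support vector machines over the joint range and intensity data; as a dependency-free stand-in, the sketch below uses a per-pixel Gaussian background model on the same (depth, intensity) features. All names and the toy data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fit_background(frames):
    """Fit a per-pixel background model from a stack of (depth, intensity)
    frames; `frames` has shape (T, H, W, 2)."""
    mu = frames.mean(axis=0)            # (H, W, 2) per-pixel means
    sigma = frames.std(axis=0) + 1e-6   # (H, W, 2) per-pixel spreads
    return mu, sigma

def foreground_mask(frame, mu, sigma, thresh=6.0):
    """A pixel is foreground if either channel deviates from the background
    model by more than `thresh` standard deviations."""
    z = np.abs(frame - mu) / sigma      # (H, W, 2) deviation in sigmas
    return (z > thresh).any(axis=-1)    # (H, W) boolean foreground mask

# Toy data: a static scene (depth 3 m, mid intensity), then an object appears
rng = np.random.default_rng(1)
bg = np.stack([np.full((8, 8), 3.0), np.full((8, 8), 0.5)], axis=-1)
frames = bg + 0.01 * rng.standard_normal((20, 8, 8, 2))
mu, sigma = fit_background(frames)
test_frame = bg + 0.01 * rng.standard_normal((8, 8, 2))
test_frame[:3, :3, 0] = 1.0             # an object two meters closer
mask = foreground_mask(test_frame, mu, sigma)
```

Treating depth and intensity jointly is the point the paper makes: an object with background-like intensity can still be caught by its range, and vice versa.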

    Background suppression in time-of-flight images

    This article presents a computational method for detecting and extracting the background plane from data obtained by time-of-flight cameras. A variant of a classification method based on support vector machines is used. Considering the particular characteristics of the cameras employed, range and intensity information is incorporated appropriately, and the ability to capture fast data sequences in a particular modality is exploited. The article reviews the specific pattern recognition techniques used, presents the proposed solution, and shows preliminary experimental results of the proposed method. VII Workshop Procesamiento de Señales y Sistemas de Tiempo Real (WPSTR). Red de Universidades con Carreras en Informática (RedUNCI)

    Real-time video-plus-depth content creation utilizing time-of-flight sensor - from capture to display

    Recent developments in 3D camera technologies, display technologies, and other related fields have aimed to provide a 3D experience for home users and to establish services such as Three-Dimensional Television (3DTV) and Free-Viewpoint Television (FTV). Emerging multiview autostereoscopic displays do not require any eyewear and can be watched by multiple users at the same time, which makes them very attractive for home use. To provide a natural 3D impression, autostereoscopic 3D displays have been designed to synthesize multi-perspective virtual views of a scene using Depth-Image-Based Rendering (DIBR) techniques. One key issue of DIBR is that scene depth information, in the form of a depth map, is required in order to synthesize virtual views. Acquiring this information is a complex and challenging task, and it remains an active research topic. In this thesis, the problem of dynamic 3D video content creation for real-world visual scenes is addressed. The work assumes a data acquisition setting comprising a Time-of-Flight (ToF) depth sensor and a single conventional video camera. The main objective of the work is to develop efficient algorithms for the stages of synchronous data acquisition, color and ToF data fusion, and final view-plus-depth frame formatting and rendering. The outcome of this thesis is a prototype 3DTV system capable of rendering live 3D video on a 3D autostereoscopic display. The presented system makes extensive use of the processing capabilities of modern Graphics Processing Units (GPUs) to achieve real-time processing rates while providing acceptable visual quality. Furthermore, the issue of arbitrary view synthesis is investigated in the context of DIBR, and a novel approach based on depth layering is proposed. The proposed approach is applicable to general virtual view synthesis, i.e., to different camera parameters such as position, orientation, and focal length, and to varying sensor spatial resolutions. The experimental results demonstrate the real-time capability of the proposed method even for CPU-based implementations. It compares favorably to other view synthesis methods in terms of visual quality, while being more computationally efficient.
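The thesis's depth-layering renderer is GPU-based and beyond a short excerpt, but the core DIBR idea it builds on, forward-warping pixels into a virtual view using per-pixel depth, can be sketched for the simplest case of a purely horizontal camera shift. Everything below (function name, toy scene, integer disparities) is an illustrative assumption:

```python
import numpy as np

def render_virtual_view(color, depth, f, baseline):
    """Forward-warp a view-plus-depth frame to a horizontally shifted virtual
    camera: each pixel moves by disparity d = f * baseline / depth; a z-buffer
    keeps the nearest surface, and disocclusions remain holes (-1)."""
    h, w = depth.shape
    out = np.full((h, w), -1.0)
    zbuf = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            xv = int(round(x - f * baseline / depth[y, x]))
            if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                zbuf[y, xv] = depth[y, x]   # nearer surface wins
                out[y, xv] = color[y, x]
    return out

# Toy scene: a bright near object (depth 1 m) over a dark far wall (depth 4 m);
# f and baseline are chosen so disparities are exactly 4 px (near) and 1 px (far)
color = np.zeros((4, 8)); color[:, 4:6] = 1.0
depth = np.full((4, 8), 4.0); depth[:, 4:6] = 1.0
view = render_virtual_view(color, depth, f=4.0, baseline=1.0)
```

The holes left behind the near object are exactly the disocclusions that hole-filling stages (and, in the thesis, the depth-layering approach) must handle.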

    Analysis and Modeling of Passive Stereo and Time-of-Flight Imaging

    This thesis is concerned with the analysis and modeling of effects that cause errors in passive stereo and Time-of-Flight imaging systems. The main topics are covered in four chapters: I commence with a treatment of a system combining Time-of-Flight imaging with passive stereo, and show how commonly used fusion models relate to the measurements of the individual modalities. In addition, I present novel fusion techniques capable of improving the depth reconstruction over that obtained separately by either modality. Next, I present a pipeline and uncertainty analysis for the generation of large amounts of reference data for quantitative stereo evaluation. The resulting datasets not only contain reference geometry, but also per-pixel measures of reference data uncertainty. The next two parts deal with individual effects observed: Time-of-Flight cameras suffer from range ambiguity if the scene extends beyond a certain distance. I show that it is possible to extend the valid range by changing design parameters of the underlying measurement system. Finally, I present methods that make it possible to amend model-violation errors in stereo due to reflections. This is done by modeling a limited level of light transport and material properties in the scene.
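The range ambiguity mentioned above follows directly from the measurement principle of continuous-wave ToF cameras: phase is only determined modulo 2π, so distance is only determined modulo half the modulation wavelength, d_max = c / (2·f_mod). A minimal illustration (the 16 MHz modulation frequency is an assumed example value, not taken from the thesis):

```python
# Unambiguous range of a continuous-wave ToF camera: the measured phase wraps
# every half modulation wavelength, so d_max = c / (2 * f_mod).
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(f_mod_hz):
    return C / (2.0 * f_mod_hz)

def wrapped_depth(true_depth_m, f_mod_hz):
    """Depth the camera actually reports: the true depth modulo d_max."""
    return true_depth_m % unambiguous_range(f_mod_hz)

d_max = unambiguous_range(16e6)  # ~9.37 m at an assumed 16 MHz modulation
```

Lowering the modulation frequency is one of the design-parameter changes that extends the valid range, at the cost of depth resolution.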