
    Tackling 3D ToF Artifacts Through Learning and the FLAT Dataset

    Scene motion, multiple reflections, and sensor noise introduce artifacts in the depth reconstruction performed by time-of-flight cameras. We propose a two-stage, deep-learning approach to address all of these sources of artifacts simultaneously. We also introduce FLAT, a synthetic dataset of 2000 ToF measurements that captures all of these nonidealities and allows different camera hardware to be simulated. Using the Kinect 2 camera as a baseline, we show improved reconstruction errors over state-of-the-art methods, on both simulated and real data. Comment: ECCV 2018
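
    As background for why these nonidealities corrupt depth: a continuous-wave ToF camera such as the Kinect 2 recovers depth from the phase of a modulated illumination signal, commonly estimated from four correlation samples. The minimal sketch below shows this standard single-frequency, four-phase relationship; the function name and formulation are illustrative assumptions, not the paper's pipeline. Multipath adds extra components to the correlation samples, and motion makes the four samples mutually inconsistent, which is why both corrupt the recovered phase and hence the depth.

```python
import numpy as np

def tof_depth_from_correlations(c0, c1, c2, c3, mod_freq_hz):
    """Depth from four correlation samples (0, 90, 180, 270 degree phase steps)
    of a continuous-wave ToF sensor. Standard textbook formulation, shown for
    illustration only."""
    c_light = 299_792_458.0                                   # speed of light, m/s
    phase = np.mod(np.arctan2(c3 - c1, c0 - c2), 2 * np.pi)   # wrapped phase shift
    return c_light * phase / (4 * np.pi * mod_freq_hz)        # halves the round-trip distance
```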

    A review of three-dimensional imaging technologies for pavement distress detection and measurements

    With the ever-increasing emphasis on maintaining road assets to a high standard, the need for fast, accurate inspection of road distresses is becoming extremely important. Surface distresses on roads are essentially three-dimensional (3-D) in nature, and automated visual surveys are the best option available. However, imaging conditions, such as lighting, vary widely. For example, measuring the volume of a pothole requires a large field of view with a reasonable spatial resolution, whereas microtexture evaluation requires very accurate imaging. Between these two extremes, there is a range of situations that require 3-D imaging. Three-dimensional imaging encompasses a number of techniques, such as interferometry and depth from focus. Of these, laser imagers are mainly used for road surface distress inspection. Many other techniques are relatively unknown among the transportation community, and industrial products are rare. The main impetus for this paper is the rarity of industrial 3-D imagers that employ alternative techniques for use in transportation. In addition, the need for this work is highlighted by a lack of literature evaluating the relative merits and demerits of various imaging methods for different distress measurement situations in relation to pavements. This overview creates awareness of available 3-D imaging methods in order to help make a fast initial technology selection and deployment. The review is expected to be helpful for researchers, practicing engineers, and decision makers in transportation engineering.

    Decomposing global light transport using time of flight imaging

    Global light transport is composed of direct and indirect components. In this paper, we take the first steps toward analyzing light transport using high temporal resolution information via time of flight (ToF) images. The time profile at each pixel encodes complex interactions between the incident light and the scene geometry with spatially-varying material properties. We exploit the time profile to decompose light transport into its constituent direct, subsurface scattering, and interreflection components. We show that the time profile is well modelled using a Gaussian function for the direct and interreflection components, and a decaying exponential function for the subsurface scattering component. We use our direct, subsurface scattering, and interreflection separation algorithm for four computer vision applications: recovering projective depth maps, identifying subsurface scattering objects, measuring parameters of analytical subsurface scattering models, and performing edge detection using ToF images. Funding: United States Army Research Office (contract W911NF-07-D-0004); United States Defense Advanced Research Projects Agency (YFA grant); MIT Media Laboratory (Consortium Members); MIT Institute for Soldier Nanotechnologies.
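
    To make the per-pixel model concrete, the time profile described above can be written as the sum of two Gaussians (direct and interreflection) and a decaying exponential (subsurface scattering), and fit per pixel. The sketch below is a minimal illustration of that fitting step under an assumed parameterization; the use of scipy.optimize.curve_fit and all variable names are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def time_profile(t, a_d, mu_d, s_d, a_i, mu_i, s_i, a_s, tau, t0):
    """Per-pixel transient model: direct + interreflection Gaussians plus a
    decaying exponential (subsurface scattering) switched on at time t0."""
    direct = a_d * np.exp(-((t - mu_d) ** 2) / (2 * s_d ** 2))
    inter = a_i * np.exp(-((t - mu_i) ** 2) / (2 * s_i ** 2))
    sss = a_s * np.exp(-(t - t0) / tau) * (t >= t0)
    return direct + inter + sss

# Synthetic example: sample a known profile with noise, then recover the parameters.
t = np.linspace(0.0, 10.0, 400)                            # time axis (arbitrary units)
true_params = (1.0, 2.0, 0.2, 0.4, 5.0, 0.4, 0.3, 1.5, 2.2)
measured = time_profile(t, *true_params) + 0.01 * np.random.randn(t.size)
fitted, _ = curve_fit(time_profile, t, measured,
                      p0=(1, 2, 0.3, 0.5, 5, 0.5, 0.3, 1, 2), maxfev=20000)
```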

    Image-guided ToF depth upsampling: a survey

    Recently, there has been remarkable growth of interest in the development and applications of time-of-flight (ToF) depth cameras. Despite continuous improvement in their characteristics, the practical applicability of ToF cameras is still limited by the low resolution and quality of depth measurements. This has motivated many researchers to combine ToF cameras with other sensors in order to enhance and upsample depth images. In this paper, we review the approaches that couple ToF depth images with high-resolution optical images. Other classes of upsampling methods are also briefly discussed. Finally, we provide an overview of performance evaluation tests presented in the related studies.
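
    One representative technique in this family, named here only as an illustration rather than taken from the survey, is joint bilateral upsampling: a high-resolution intensity image guides the interpolation of the low-resolution depth map, so that depth discontinuities follow intensity edges. The brute-force sketch below assumes a grayscale guide in [0, 1] and integer scale factors; the kernel radius and sigmas are arbitrary.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Upsample a (h, w) depth map to the (H, W) resolution of a grayscale
    guide image using a joint bilateral filter (unoptimized reference sketch)."""
    H, W = guide_hr.shape
    h, w = depth_lr.shape
    sy, sx = H / h, W / w
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / sy, x / sx                      # position in the low-res grid
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = int(round(yl)) + dy, int(round(xl)) + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        # Spatial weight, measured in low-resolution pixel units.
                        ws = np.exp(-((yy - yl) ** 2 + (xx - xl) ** 2) / (2 * sigma_s ** 2))
                        # Range weight from the high-resolution guide image.
                        g = guide_hr[min(int(yy * sy), H - 1), min(int(xx * sx), W - 1)]
                        wr = np.exp(-((guide_hr[y, x] - g) ** 2) / (2 * sigma_r ** 2))
                        num += ws * wr * depth_lr[yy, xx]
                        den += ws * wr
            out[y, x] = num / den if den > 0 else depth_lr[int(yl), int(xl)]
    return out
```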

    Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect

    Recently, the new Kinect One has been released by Microsoft, providing the next generation of real-time range sensing devices based on the Time-of-Flight (ToF) principle. As the first Kinect version used a structured-light approach, one would expect various differences in the characteristics of the range data delivered by the two devices. This paper presents a detailed and in-depth comparison between both devices. In order to conduct the comparison, we propose a framework of seven different experimental setups, which is a generic basis for evaluating range cameras such as Kinect. The experiments have been designed with the goal of capturing the individual effects of the Kinect devices in as isolated a manner as possible, and in a way that they can also be adapted to any other range sensing device. The overall goal of this paper is to provide a solid insight into the pros and cons of either device. Thus, scientists interested in using Kinect range sensing cameras in their specific application scenario can directly assess the expected benefits and potential problems of either device. Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).