
    SPLODE: Semi-Probabilistic Point and Line Odometry with Depth Estimation from RGB-D Camera Motion

    Active depth cameras suffer from several limitations that cause incomplete and noisy depth maps and may consequently affect the performance of RGB-D odometry. To address this issue, this paper presents a visual odometry method based on point and line features that leverages both measurements from a depth sensor and depth estimates from camera motion. Depth estimates are generated continuously by a probabilistic depth estimation framework for both types of features, to compensate for the lack of depth measurements and for inaccurate feature depth associations. The framework explicitly models the uncertainty of triangulating depth from both point and line observations in order to validate and obtain precise estimates. Furthermore, depth measurements are exploited by propagating them through a depth map registration module and by using a frame-to-frame motion estimation method that considers 3D-to-2D and 2D-to-3D reprojection errors independently. Results on RGB-D sequences captured in large indoor and outdoor scenes, where depth sensor limitations are critical, show that combining depth measurements and estimates with our approach overcomes the absence and inaccuracy of depth measurements. Comment: IROS 201
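    The two reprojection-error terms mentioned in the abstract can be illustrated with a small sketch. The Python snippet below is illustrative only (function and variable names are assumptions, not taken from the paper's code): the 3D-to-2D term projects 3D points known in the reference frame into the current image, while the 2D-to-3D term projects 3D points measured in the current frame back into the reference image.

        import numpy as np

        def project(K, T, X):
            # Project 3D points X (N,3) into pixels using intrinsics K (3x3)
            # and a 4x4 rigid transform T mapping points into the camera frame.
            Xh = np.hstack([X, np.ones((X.shape[0], 1))])   # homogeneous coordinates
            Xc = (T @ Xh.T).T[:, :3]                        # points in the camera frame
            uv = (K @ Xc.T).T                               # apply intrinsics
            return uv[:, :2] / uv[:, 2:3]                   # perspective division

        def error_3d_to_2d(K, T_cur_from_ref, X_ref, uv_cur):
            # 3D points from the reference frame vs. 2D observations in the current frame.
            return np.linalg.norm(project(K, T_cur_from_ref, X_ref) - uv_cur, axis=1)

        def error_2d_to_3d(K, T_ref_from_cur, X_cur, uv_ref):
            # 3D points from the current frame vs. 2D observations in the reference frame.
            return np.linalg.norm(project(K, T_ref_from_cur, X_cur) - uv_ref, axis=1)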

    Evaluation of CNN-based Single-Image Depth Estimation Methods

    While interest in deep models for single-image depth estimation is increasing, established schemes for their evaluation are still limited. We propose a set of novel quality criteria that allow for a more detailed analysis by focusing on specific characteristics of depth maps. In particular, we address the preservation of edges and planar regions, depth consistency, and absolute distance accuracy. In order to employ these metrics to evaluate and compare state-of-the-art single-image depth estimation approaches, we provide a new high-quality RGB-D dataset: we used a DSLR camera together with a laser scanner to acquire high-resolution images and highly accurate depth maps. Experimental results show the validity of our proposed evaluation protocol.
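    For context, the sketch below shows three widely used single-image depth-estimation error measures (absolute relative error, RMSE, and the δ < 1.25 threshold accuracy). The edge-preservation and planarity criteria proposed in the paper are more specialized and are not reproduced here; the function name and masking convention are assumptions.

        import numpy as np

        def depth_metrics(pred, gt, eps=1e-6):
            # Compute basic error metrics over pixels with valid ground truth.
            mask = gt > eps
            pred, gt = pred[mask], gt[mask]
            abs_rel = np.mean(np.abs(pred - gt) / gt)       # absolute relative error
            rmse = np.sqrt(np.mean((pred - gt) ** 2))       # root-mean-square error
            ratio = np.maximum(pred / gt, gt / pred)
            delta1 = np.mean(ratio < 1.25)                  # threshold accuracy
            return {"abs_rel": abs_rel, "rmse": rmse, "delta1": delta1}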

    Probabilistic RGB-D Odometry based on Points, Lines and Planes Under Depth Uncertainty

    This work proposes a robust visual odometry method for structured environments that combines point features with line and plane segments extracted from an RGB-D camera. Noisy depth maps are processed by a probabilistic depth fusion framework based on Mixtures of Gaussians to denoise them and derive the depth uncertainty, which is then propagated throughout the visual odometry pipeline. Probabilistic 3D plane and line fitting solutions are used to model the uncertainties of the feature parameters, and pose is estimated by combining the three types of primitives based on their uncertainties. Performance evaluation on RGB-D sequences collected in this work and on two public RGB-D datasets (TUM and ICL-NUIM) shows the benefit of using the proposed depth fusion framework and of combining the three feature types, particularly in scenes with low-textured surfaces, dynamic objects and missing depth measurements. Comment: Major update: more results, depth filter released as open source, 34 pages
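    As a rough illustration of per-pixel probabilistic depth fusion: the paper models each pixel with a Mixture of Gaussians, whereas this simplified sketch keeps a single Gaussian (mean depth and variance) per pixel and fuses new measurements by inverse-variance weighting; the names and the convention of marking unset pixels with infinite variance are assumptions.

        import numpy as np

        def fuse_depth(mu, var, z, z_var):
            # mu, var: current per-pixel depth mean and variance (var = np.inf where unset)
            # z, z_var: new depth measurement and its variance (e.g. growing with depth)
            new_var = 1.0 / (1.0 / var + 1.0 / z_var)
            new_mu = new_var * (np.where(np.isfinite(var), mu / var, 0.0) + z / z_var)
            return new_mu, new_var

    Pixels without a prior estimate simply adopt the new measurement, since their prior contributes zero precision.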

    Intelligent surveillance of indoor environments based on computer vision and 3D point cloud fusion

    A real-time detection algorithm for intelligent surveillance is presented. The system, based on 3D change detection with respect to a complex scene model, allows intruder monitoring and detection of added and missing objects under different illumination conditions. The proposed system has two independent stages. First, a mapping application provides an accurate 3D wide model of the scene using a view registration approach. This registration is based on computer vision and 3D point clouds: fusion of visual features with 3D descriptors is used to identify corresponding points in two consecutive views. The matching of these two views is first estimated by a pre-alignment stage based on the tilt movement of the sensor; the views are then accurately aligned by an Iterative Closest Point variant (Levenberg-Marquardt ICP), whose performance is improved by a preceding filter based on geometrical assumptions. The second stage provides accurate intruder and object detection by means of a 3D change detection approach, based on an octree volumetric representation and followed by a cluster analysis. The whole scene is continuously scanned, and every capture is compared with the corresponding part of the wide model thanks to the previous analysis of the sensor movement parameters; for this purpose, a tilt-axis calibration method has been developed. Tests performed show the reliable performance of the system under real conditions and the improvements provided by each stage independently. Moreover, the main goal of this application, reliable intruder detection, is enhanced by tilting the sensor with its built-in motor to increase the size of the monitored area. This work was supported by the Spanish Government through the CICYT projects TRA2013-48314-C3-1-R and TRA2011-29454-C03-02.
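    The change-detection stage can be illustrated with a much simpler stand-in for the octree. The sketch below (hypothetical names, an assumed 5 cm voxel size) hashes points into voxels and reports capture points falling into voxels that are absent from the reference model; the actual system additionally clusters these points before declaring a detection.

        import numpy as np

        def voxel_keys(points, voxel_size=0.05):
            # Quantize 3D points (N,3) into integer voxel indices.
            return set(map(tuple, np.floor(points / voxel_size).astype(int)))

        def detect_changes(model_points, capture_points, voxel_size=0.05):
            # Return capture points that fall into voxels not occupied by the model.
            model_voxels = voxel_keys(model_points, voxel_size)
            keys = np.floor(capture_points / voxel_size).astype(int)
            changed = np.array([tuple(k) not in model_voxels for k in keys])
            return capture_points[changed]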

    Exploiting 2D Floorplan for Building-scale Panorama RGBD Alignment

    This paper presents a novel algorithm that utilizes a 2D floorplan to align panorama RGBD scans. While effective panorama RGBD alignment techniques exist, such a system requires extremely dense RGBD image sampling. Our approach can significantly reduce the number of necessary scans with the aid of a floorplan image. We formulate a novel Markov Random Field inference problem as a scan placement over the floorplan, as opposed to the conventional scan-to-scan alignment. The technical contributions lie in multi-modal image correspondence cues (between scans and the schematic floorplan) as well as a novel coverage potential avoiding an inherent stacking bias. The proposed approach has been evaluated on five challenging large indoor spaces. To the best of our knowledge, we present the first effective system that utilizes a 2D floorplan image for building-scale 3D pointcloud alignment. The source code and the data will be shared with the community to further enhance indoor mapping research.
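    As an illustration of the placement formulation, the toy sketch below evaluates only a unary data term for a single scan: how well its occupied cells, under a candidate translation and rotation, line up with wall pixels of a binarized floorplan image. The full method instead solves a Markov Random Field over all scans jointly, with multi-modal correspondence cues and a coverage potential; all names here are assumptions.

        import numpy as np

        def placement_score(floorplan_walls, scan_occupancy, dx, dy, theta):
            # floorplan_walls, scan_occupancy: boolean 2D grids (same cell size);
            # (dx, dy, theta): candidate translation in cells and rotation in radians.
            ys, xs = np.nonzero(scan_occupancy)
            c, s = np.cos(theta), np.sin(theta)
            xr = np.round(c * xs - s * ys + dx).astype(int)
            yr = np.round(s * xs + c * ys + dy).astype(int)
            h, w = floorplan_walls.shape
            inside = (xr >= 0) & (xr < w) & (yr >= 0) & (yr < h)
            hits = floorplan_walls[yr[inside], xr[inside]].sum()   # matched wall cells
            return hits / max(inside.sum(), 1)

    A brute-force search over candidate (dx, dy, theta) values would then keep the highest-scoring placement for each scan.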