
    Learning a Bias Correction for Lidar-only Motion Estimation

    This paper presents a novel technique to correct for bias in a classical estimator using a learning approach. We apply a learned bias correction to a lidar-only motion estimation pipeline. Our technique trains a Gaussian process (GP) regression model on data with ground truth. The inputs to the model are high-level features derived from the geometry of the point clouds, and the outputs are the predicted biases between the poses computed by the estimator and the ground truth. The predicted biases are applied as a correction to the estimator's poses. Our technique is evaluated on over 50 km of lidar data, including the KITTI odometry benchmark and lidar datasets collected around the University of Toronto campus. After applying the learned bias correction, we obtained significant improvements to lidar odometry on all datasets tested: roughly a 10% reduction in error over an already accurate lidar odometry algorithm, at a run-time cost of less than a 1% increase in computation. Comment: 15th Conference on Computer and Robot Vision (CRV 2018).
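The bias-correction idea above can be sketched in a few lines (a minimal illustration under assumed choices; the kernel, feature design, and function names are ours, not the paper's): fit a GP from geometric features to pose error on ground-truth data, then subtract the predicted bias from new estimates.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel between two sets of feature vectors."""
    diff = a[:, None, :] - b[None, :, :]
    return np.exp(-0.5 * (diff ** 2).sum(-1) / length_scale ** 2)

def gp_predict_bias(X_train, bias_train, X_query, noise=1e-3):
    """GP regression posterior mean: features -> predicted estimator bias."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, bias_train)
    return rbf_kernel(X_query, X_train) @ alpha

# Toy example: the estimator has a feature-dependent bias of 0.5*sin(2x).
X = np.linspace(0.0, 3.0, 15)[:, None]       # geometric features (1-D here)
bias = 0.5 * np.sin(2.0 * X[:, 0])           # pose error vs. ground truth
Xq = np.array([[0.7], [1.9]])                # features of two new scans
estimated_pose = np.array([10.0, 20.0]) + 0.5 * np.sin(2.0 * Xq[:, 0])
corrected_pose = estimated_pose - gp_predict_bias(X, bias, Xq)
```

The correction itself is a single kernel matrix-vector product at query time, which is consistent with the paper's point that the run-time overhead is small.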

    A 3D descriptor to detect task-oriented grasping points in clothing

    © 2016. This manuscript version is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
    Manipulating textile objects with a robot is a challenging task, especially because garment perception is difficult due to the endless configurations a garment can adopt, coupled with a large variety of colors and designs. Most current approaches follow a multiple re-grasp strategy, in which clothes are sequentially grasped from different points until one of them yields a recognizable configuration. In this work we propose a method that combines 3D and appearance information to directly select a suitable grasping point for the task at hand, which in our case consists of hanging a shirt or a polo shirt from a hook. Our method follows a coarse-to-fine approach in which, first, the collar of the garment is detected and, next, a grasping point on the lapel is chosen using a novel 3D descriptor. In contrast to current 3D descriptors, ours can run in real time, even when it needs to be densely computed over the input image. Our central idea is to take advantage of the structured nature of the range images that most depth sensors provide and, by exploiting integral imaging, achieve speed-ups of two orders of magnitude with respect to competing approaches, while maintaining performance. This makes it especially adequate for robotic applications, as we thoroughly demonstrate in the experimental section.
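The integral-imaging trick the abstract relies on can be illustrated generically (our sketch, not the paper's descriptor; function names are assumptions): precompute cumulative sums over the range image once, after which the sum over any rectangular patch costs four lookups regardless of patch size, which is what makes dense descriptor computation cheap.

```python
import numpy as np

def integral_image(depth):
    """Cumulative sum over rows then columns: ii[r, c] = sum of depth[:r+1, :c+1]."""
    return depth.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of the patch with inclusive corners (r0, c0)..(r1, c1) in O(1)."""
    s = ii[r1, c1]
    if r0 > 0:
        s -= ii[r0 - 1, c1]
    if c0 > 0:
        s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s
```

A descriptor built from such box sums is O(1) per region instead of O(patch area), which accounts for the two-orders-of-magnitude speed-up claimed for dense computation.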

    Accurate Localization of 3D Objects from RGB-D Data Using Segmentation Hypotheses

    In this paper we focus on the problem of detecting objects in 3D from RGB-D images. We propose a novel framework that explores the compatibility between segmentation hypotheses of the object in the image and the corresponding 3D map. Our framework allows us to discover the optimal location of the object using a generalization of the structural latent SVM formulation to 3D, as well as a new loss function defined over the 3D space used in training. We evaluate our method using two existing RGB-D datasets. Extensive quantitative and qualitative experimental results show that our proposed approach outperforms state-of-the-art methods as well as a number of baseline approaches for both 3D and 2D object recognition tasks.
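The two ingredients named above can be sketched in toy form (illustrative code only, not the authors' formulation; all names are ours): inference selects the segmentation hypothesis with the highest linear score, and training would penalize hypotheses with a loss over 3D space, here a 3D box intersection-over-union.

```python
import numpy as np

def iou_3d(a, b):
    """Intersection-over-union of two axis-aligned 3D boxes,
    each given as (xmin, ymin, zmin, xmax, ymax, zmax)."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol = lambda box: np.prod(box[3:] - box[:3])
    return inter / (vol(a) + vol(b) - inter)

def best_hypothesis(w, feats):
    """Linear scoring of candidate hypothesis feature vectors; return argmax."""
    scores = feats @ w
    return int(np.argmax(scores)), scores
```

In a structural SVM, a 3D loss such as `1 - iou_3d(...)` would drive the margin during training, so that hypotheses far from the ground-truth 3D location are scored down.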

    Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries

    In this paper we address the topic of feature extraction in 3D point cloud data for object recognition and pose identification. We present a novel interest keypoint extraction method that operates on range images generated from arbitrary 3D point clouds and explicitly considers the borders of the objects identified by transitions from foreground to background. We furthermore present a feature descriptor that takes the same information into account. We have implemented our approach and present rigorous experiments in which we analyze the individual components with respect to their repeatability and matching capabilities, and evaluate their usefulness for point-feature-based object detection methods.
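The foreground-to-background transitions the method builds on show up as depth discontinuities in a range image. A minimal sketch (the jump threshold and names are assumptions, not from the paper): mark a pixel as an object border when a horizontal or vertical neighbour lies much farther away.

```python
import numpy as np

def object_borders(depth, jump=0.3):
    """Mark foreground pixels whose 4-neighbour is farther by more than
    `jump` metres, i.e. pixels on a foreground-to-background transition."""
    b = np.zeros(depth.shape, dtype=bool)
    right = depth[:, 1:] - depth[:, :-1]   # neighbour-to-the-right minus pixel
    down = depth[1:, :] - depth[:-1, :]    # neighbour-below minus pixel
    b[:, :-1] |= right > jump   # right neighbour is much farther
    b[:, 1:] |= right < -jump   # left neighbour is much farther
    b[:-1, :] |= down > jump    # lower neighbour is much farther
    b[1:, :] |= down < -jump    # upper neighbour is much farther
    return b
```

Only the near (foreground) side of each jump is marked, so the mask traces object silhouettes rather than the occluded background.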

    ENHANCING THE RESOLUTION OF URBAN DIGITAL TERRAIN MODELS USING MOBILE MAPPING SYSTEMS

    Digital Terrain Models (DTMs) are essential surveying products for terrain-based analyses, especially for overland flow modelling. Nowadays, many high-resolution DTM products are generated by Airborne Laser Scanning (ALS). However, DTMs with even higher resolution are of great interest for more precise overland flow modelling in urban areas. With the help of mobile mapping techniques, we can obtain much denser measurements of the ground in the vicinity of roads. In this research, a study area in Hannover, Germany was measured by a mobile mapping system. Point clouds from 485 scan strips were aligned and a DTM was extracted. To achieve a complete product, this mobile-mapping-derived DTM was then merged with a 0.5 m resolution DTM product from a mapping agency. Systematic evaluations have been conducted with respect to the height accuracy of the DTM products. The results show that the final DTM product achieves a higher resolution (0.1 m) near the roads while essentially maintaining its height accuracy.
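The merging step can be sketched as follows (an illustrative fill-in strategy under assumed conventions, not the authors' exact procedure): resample the 0.5 m agency DTM onto the 0.1 m grid (factor 5, here by nearest neighbour), then keep mobile-mapping heights wherever they exist and fall back to the coarse DTM elsewhere.

```python
import numpy as np

def upsample_nn(coarse, factor):
    """Nearest-neighbour upsampling of a coarse DTM grid."""
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

def merge_dtms(fine, coarse, factor=5):
    """Prefer fine (mobile-mapping) heights; fall back to the coarse
    (agency/ALS) DTM where the fine grid has no data (NaN)."""
    filled = upsample_nn(coarse, factor)
    return np.where(np.isnan(fine), filled, fine)
```

In practice the two surfaces would also be height-adapted at the seams before merging; this sketch shows only the grid-level combination.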