
    Single-tree detection in high-density LiDAR data from UAV-based survey

    UAV-based LiDAR surveys provide very-high-density point clouds that carry rich information about detailed forest structure, enabling the detection of individual trees, but they also demand a high computational load. Single-tree detection is of great interest for forest management and ecology, and the task is relatively well solved for forests composed of a single or largely dominant species whose trees have a clearly pointed upper crown (conifers in particular). Most authors have proposed methods based wholly or partially on the search for local maxima in the canopy, which performs poorly for species with flat or irregular upper canopies and for mixed forests, especially where taller trees hide smaller ones. These considerations apply in particular to Mediterranean hardwood forests. In such a context it is essential to use the whole volume of the point cloud while keeping the computational load tractable. The authors propose a methodology based on modelling the 3D shape of the tree, which improves performance with respect to maxima-based models. A case study on a hazel grove documents the performance improvement on a relatively simple but significant case.
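    To make the abstract's critique concrete, here is a minimal Python sketch of the maxima-based baseline it refers to, run on a rasterized canopy height model (CHM); the function name, window size, and height threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_treetops(chm, window=5, min_height=2.0):
    """Local-maxima treetop detection on a canopy height model (CHM).

    chm: 2D array of canopy heights in metres; window: neighborhood size
    in pixels; min_height: discard ground/shrub returns below this height.
    Illustrative baseline only: as the abstract notes, it performs poorly
    on flat or irregular crowns and on smaller trees hidden by taller ones.
    """
    local_max = maximum_filter(chm, size=window)       # per-pixel neighborhood max
    peaks = (chm == local_max) & (chm >= min_height)   # pixels equal to their own max
    rows, cols = np.nonzero(peaks)
    return list(zip(rows, cols, chm[rows, cols]))      # (row, col, height) per treetop
```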

    Mining Point Cloud Local Structures by Kernel Correlation and Graph Pooling

    Unlike images, semantic learning on 3D point clouds with deep networks is challenging because the data structure is naturally unordered. Among existing works, PointNet has achieved promising results by learning directly on point sets. However, it does not take full advantage of a point's local neighborhood, which contains fine-grained structural information that turns out to be helpful for better semantic learning. In this regard, we present two new operations that improve PointNet with a more efficient exploitation of local structures. The first focuses on local 3D geometric structures. In analogy to a convolution kernel for images, we define a point-set kernel as a set of learnable 3D points that jointly respond to a set of neighboring data points according to their geometric affinities, measured by kernel correlation and adapted from a similar technique for point cloud registration. The second exploits local high-dimensional feature structures by recursive feature aggregation on a nearest-neighbor graph computed from 3D positions. Experiments show that our network efficiently captures local information and robustly achieves better performance on major datasets. Our code is available at http://www.merl.com/research/license#KCNet. Comment: Accepted at CVPR'18. * indicates equal contribution.
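    As a rough illustration of the first operation, the following PyTorch sketch computes a Gaussian kernel-correlation response between a learnable point-set kernel and one point's neighborhood; the normalization, the bandwidth sigma, and the per-point (rather than batched) formulation are simplifying assumptions, not the paper's exact layer.

```python
import torch

def kernel_correlation(kernel_pts, neighbor_offsets, sigma=0.1):
    """Gaussian kernel correlation of a learnable point-set kernel with
    the local neighborhood of one point.

    kernel_pts:       (M, 3) learnable kernel point positions
    neighbor_offsets: (N, 3) neighbor coordinates relative to the center point
    Returns a scalar that grows as the neighborhood's geometry matches the
    kernel's shape, so gradients can shape the kernel points during training.
    """
    d2 = torch.cdist(kernel_pts, neighbor_offsets) ** 2    # (M, N) squared distances
    return torch.exp(-d2 / sigma ** 2).sum() / neighbor_offsets.shape[0]

# Usage: kernel points are ordinary parameters optimized with the network.
kernel = torch.nn.Parameter(torch.randn(16, 3) * 0.1)
response = kernel_correlation(kernel, torch.randn(32, 3) * 0.1)
```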

    Fast and robust 3D feature extraction from sparse point clouds

    Matching 3D point clouds, a critical operation in map building and localization, is difficult with Velodyne-type sensors due to the sparse and non-uniform point clouds they produce. Standard methods designed for dense 3D point clouds are generally not effective. In this paper, we describe a feature-based approach using Principal Component Analysis (PCA) of point neighborhoods, which yields mathematically principled line and plane features. The key contribution of this work is to show how this type of feature extraction can be done efficiently and robustly even on non-uniformly sampled point clouds. The resulting detector runs in real time and can easily be tuned to a low false-positive rate, simplifying data association. We evaluate the performance of our algorithm on an autonomous car at the MCity Test Facility using a Velodyne HDL-32E, and we compare our results against the state-of-the-art NARF keypoint detector. © 2016 IEEE.
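    The PCA step the abstract builds on fits in a few lines of Python; the saliency scores below are the standard eigenvalue-based linearity/planarity measures, and the efficiency and robustness machinery that is the paper's actual contribution is omitted, so treat this as a sketch of the underlying idea only.

```python
import numpy as np

def pca_features(neighborhood):
    """Score a point neighborhood as line-like or plane-like via PCA.

    neighborhood: (N, 3) array of 3D points. With covariance eigenvalues
    l1 >= l2 >= l3, one dominant eigenvalue indicates a line and two
    dominant eigenvalues indicate a plane.
    """
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    evals, evecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    l3, l2, l1 = evals
    linearity = (l1 - l2) / l1             # near 1 for line features
    planarity = (l2 - l3) / l1             # near 1 for plane features
    line_dir = evecs[:, 2]                 # principal axis = line direction
    plane_normal = evecs[:, 0]             # least-variance axis = plane normal
    return linearity, planarity, line_dir, plane_normal
```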

    Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer

    Semantic annotations are vital for training models for object recognition, semantic segmentation, or scene understanding. Unfortunately, pixel-wise annotation of images at very large scale is labor-intensive, and little labeled data is available, particularly at the instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels. Comment: 10 pages, Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
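    The geometric core of such a 3D-to-2D transfer is a camera projection with z-buffering, sketched below in Python; the paper develops a much richer transfer model, and every name, matrix convention, and the nearest-point tie-breaking here are assumptions for illustration only.

```python
import numpy as np

def transfer_labels(points, labels, K, R, t, image_shape):
    """Project labeled 3D points into an image to form a 2D label map.

    points: (N, 3) world coordinates; labels: (N,) semantic/instance ids;
    K: (3, 3) intrinsics; R, t: world-to-camera rotation and translation.
    The closest point wins wherever projections collide (simple z-buffer).
    """
    cam = points @ R.T + t                         # world -> camera frame
    front = cam[:, 2] > 0                          # keep points in front of camera
    cam, ids = cam[front], labels[front]
    proj = cam @ K.T
    uv = (proj[:, :2] / proj[:, 2:3]).astype(int)  # perspective division
    h, w = image_shape
    label_map = np.full((h, w), -1, dtype=int)     # -1 marks unlabeled pixels
    depth = np.full((h, w), np.inf)
    for (u, v), z, lab in zip(uv, cam[:, 2], ids):
        if 0 <= v < h and 0 <= u < w and z < depth[v, u]:
            depth[v, u], label_map[v, u] = z, lab
    return label_map
```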