
    Fast 3-D Urban Object Detection on Streaming Point Clouds

    Efficient and fast object detection from continuously streamed 3-D point clouds has a major impact on many related research tasks, such as autonomous driving, self-localization and mapping, and understanding of large-scale environments. This paper presents a LIDAR-based framework that provides fast detection of 3-D urban objects from point cloud sequences of a Velodyne HDL-64E terrestrial LIDAR scanner installed on a moving platform. The pipeline of our framework receives raw streams of 3-D data and produces distinct groups of points that belong to different urban objects. In the proposed framework we present a simple yet efficient hierarchical grid data structure and corresponding algorithms that significantly improve the processing speed of the object detection task. Furthermore, we show that this approach confidently handles streaming data and provides a speedup of two orders of magnitude, with increased detection accuracy, compared to a baseline connected component analysis algorithm.
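    The two-level grid idea can be sketched as follows. This is a minimal illustration, not the paper's exact design: the cell sizes, the 2-D (XY) projection, and the flood-fill merge rule are assumptions, and for brevity candidate objects are not merged across coarse-cell borders.

    ```python
    import math
    from collections import defaultdict

    def hierarchical_grid_segment(points, coarse=2.0, fine=0.25):
        """Group 3-D points into object candidates with a two-level 2-D grid.

        `points` is a list of (x, y, z) tuples; `coarse` and `fine` are the
        cell sizes (metres) of the two grid levels (illustrative values).
        """
        # Level 1: hash points into coarse cells, so empty space costs nothing.
        coarse_cells = defaultdict(list)
        for p in points:
            key = (math.floor(p[0] / coarse), math.floor(p[1] / coarse))
            coarse_cells[key].append(p)

        # Level 2: inside each occupied coarse cell, refine to fine cells and
        # merge 8-connected occupied fine cells into one object candidate.
        objects = []
        for cell_points in coarse_cells.values():
            fine_cells = defaultdict(list)
            for p in cell_points:
                key = (math.floor(p[0] / fine), math.floor(p[1] / fine))
                fine_cells[key].append(p)
            seen = set()
            for start in fine_cells:
                if start in seen:
                    continue
                # Flood fill over neighbouring occupied fine cells.
                stack, group = [start], []
                seen.add(start)
                while stack:
                    cx, cy = stack.pop()
                    group.extend(fine_cells[(cx, cy)])
                    for dx in (-1, 0, 1):
                        for dy in (-1, 0, 1):
                            nb = (cx + dx, cy + dy)
                            if nb in fine_cells and nb not in seen:
                                seen.add(nb)
                                stack.append(nb)
                objects.append(group)
        return objects
    ```

    The point of the hierarchy is that the expensive connectivity analysis runs only inside occupied coarse cells, which is where the speedup over a flat connected-component pass comes from.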

    Ground Extraction from 3D Lidar Point Clouds

    © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Pomares, A., Martínez, J.L., Mandow, A., Martínez, M.A., Morán, M., Morales, J.: Ground extraction from 3D lidar point clouds with the Classification Learner App (2018). 26th Mediterranean Conference on Control and Automation, Zadar, Croatia, June 2018, pp. 400-405. DOI: pending.
    Ground extraction from three-dimensional (3D) range data is a relevant problem for outdoor navigation of unmanned ground vehicles. Although this problem has received attention through specific heuristics and segmentation approaches, identification of ground and non-ground points can benefit from state-of-the-art classification methods, such as those included in the Matlab Classification Learner App. This paper proposes a comparative study of the machine learning methods included in this tool, in terms of both training times and predictive performance. For this purpose, we combined three suitable features for ground detection and applied them to an urban dataset with several labeled 3D point clouds. Most of the analyzed techniques achieve good classification results, but only a few offer low training and prediction times. This work was partially supported by the Spanish project DPI2015-65186-R. The publication has received support from Universidad de Málaga, Campus de Excelencia Andalucía Tech.
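    As an illustration of the feature-extraction step that precedes any such classifier, here is a minimal sketch of three per-point ground-detection features computed over a 2-D grid. These three features (absolute height, height above the lowest point in the cell, and the cell's height spread) are hypothetical stand-ins; the abstract does not name the paper's actual features.

    ```python
    import math
    from collections import defaultdict

    def ground_features(points, cell=1.0):
        """Return one (z, z - cell_min_z, cell_max_z - cell_min_z) tuple per point.

        Points near the cell minimum with small local spread are typical
        ground candidates; the tuples can be fed to any classifier
        (e.g. a random forest) once labels are available.
        """
        cells = defaultdict(list)
        for x, y, z in points:
            cells[(math.floor(x / cell), math.floor(y / cell))].append(z)
        # Per-cell height statistics, computed once.
        stats = {k: (min(v), max(v)) for k, v in cells.items()}
        feats = []
        for x, y, z in points:
            zmin, zmax = stats[(math.floor(x / cell), math.floor(y / cell))]
            feats.append((z, z - zmin, zmax - zmin))
        return feats
    ```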

    SEGCloud: Semantic Segmentation of 3D Point Clouds

    3D semantic scene labeling is fundamental to agents operating in the real world. In particular, labeling raw 3D point sets from sensors provides fine-grained semantics. Recent works leverage the capabilities of Neural Networks (NNs), but are limited to coarse voxel predictions and do not explicitly enforce global consistency. We present SEGCloud, an end-to-end framework to obtain 3D point-level segmentation that combines the advantages of NNs, trilinear interpolation (TI) and fully connected Conditional Random Fields (FC-CRF). Coarse voxel predictions from a 3D Fully Convolutional NN are transferred back to the raw 3D points via trilinear interpolation. Then the FC-CRF enforces global consistency and provides fine-grained semantics on the points. We implement the latter as a differentiable Recurrent NN to allow joint optimization. We evaluate the framework on two indoor and two outdoor 3D datasets (NYU V2, S3DIS, KITTI, Semantic3D.net), and show performance comparable or superior to the state-of-the-art on all datasets.
    Comment: Accepted as a spotlight at the International Conference on 3D Vision (3DV 2017).
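    The voxel-to-point transfer step can be sketched as plain trilinear interpolation of per-voxel class scores, assuming scores live at voxel centres. Names, data layout, and the zero contribution of empty voxels are illustrative assumptions, not SEGCloud's actual implementation.

    ```python
    import math

    def trilinear_transfer(voxel_scores, point, voxel_size=1.0):
        """Interpolate per-voxel class scores at a 3-D point.

        `voxel_scores` maps an (i, j, k) voxel index to a list of class
        scores located at the voxel centre; missing voxels contribute zero.
        """
        # Shift to centre-aligned voxel coordinates.
        x, y, z = (c / voxel_size - 0.5 for c in point)
        i0, j0, k0 = math.floor(x), math.floor(y), math.floor(z)
        fx, fy, fz = x - i0, y - j0, z - k0
        n_classes = len(next(iter(voxel_scores.values())))
        out = [0.0] * n_classes
        # Blend the 8 surrounding voxels with the standard trilinear weights.
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    w = ((fx if di else 1 - fx) *
                         (fy if dj else 1 - fy) *
                         (fz if dk else 1 - fz))
                    scores = voxel_scores.get((i0 + di, j0 + dj, k0 + dk))
                    if scores:
                        for c in range(n_classes):
                            out[c] += w * scores[c]
        return out
    ```

    Because the weights are differentiable in the point coordinates and linear in the scores, the same operation can sit inside an end-to-end trainable pipeline, which is what makes the joint NN + CRF optimization possible.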

    Open software and standards in the realm of laser scanning technology

    This review introduces laser scanning technology and provides an overview of the contribution of open source projects to the utilization and analysis of laser scanning data. Lidar technology is pushing mapping and surveying of topographic data to new frontiers. The open source community has supported this by providing libraries, standards, interfaces, and modules, all the way to full software packages. Such open solutions provide scientists and end-users with valuable tools to access and work with lidar data, fostering new cutting-edge investigation and improvement of existing methods. The first part of this work introduces laser scanning principles, with references for further reading. It is followed by sections reporting on open standards and formats for lidar data, on tools, and finally on web-based solutions for accessing lidar data. The review is not intended as a thorough survey of the state of the art in lidar technology itself, but as an overview of the open source toolkits available to the community to access, visualize, edit, and process point clouds. A range of open source features for lidar data access and analysis is presented, showing what can be done with alternatives to commercial end-to-end solutions. Data standards and formats are also discussed, highlighting the challenges of storing and accessing massive point clouds. The aim is to give scientists who have not yet worked with lidar data an overview of how this technology works and of the open source tools that can be a valid solution for their needs in analyzing such data. Researchers already involved with lidar data will hopefully find ideas for integrating and improving their workflows through open source solutions.

    Automatic 3D Building Detection and Modeling from Airborne LiDAR Point Clouds

    Urban reconstruction, with an emphasis on man-made structure modeling, is an active research area with broad impact on several potential applications. It combines photogrammetry, remote sensing, computer vision, and computer graphics. Even though a huge volume of work has been done, many problems remain unsolved; automation is one of the key focus areas in this research. In this work, a fast, completely automated method to create 3D watertight building models from airborne LiDAR (Light Detection and Ranging) point clouds is presented. The developed method analyzes the scene content and produces multi-layer rooftops, with complex rigorous boundaries and vertical walls that connect the rooftops to the ground. The graph cuts algorithm is used to separate vegetative elements from the rest of the scene content, based on local analysis of the properties of the local implicit surface patch. The ground terrain and building rooftop footprints are then extracted using the developed strategy, a two-step hierarchical Euclidean clustering. The method adopts a divide-and-conquer scheme: once the building footprints are segmented from the terrain and vegetative areas, the whole scene is divided into individual independent processing units that represent potential points on the rooftops. For each building region, significant rooftop features are further detected using a specifically designed region-growing algorithm with surface smoothness constraints. The principal orientation of each building rooftop feature is calculated using a minimum bounding box fitting technique and is used to guide the refinement of the shapes and boundaries of the rooftop parts. The boundaries of all of these features are refined to produce a strict description.
    Once the description of the rooftops is achieved, polygonal mesh models are generated by creating surface patches with outlines defined by the detected vertices, producing triangulated mesh models. These models are suitable for many applications, such as 3D mapping, urban planning and augmented reality.
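    The minimum-bounding-box orientation step can be sketched as a coarse angle sweep over the patch's XY footprint: rotate the points, measure the axis-aligned box area, and keep the angle that minimizes it. The sweep resolution and the brute-force search are assumptions for illustration; the paper's exact fitting procedure may differ (e.g. rotating calipers on the convex hull).

    ```python
    import math

    def principal_orientation(points_2d, steps=90):
        """Estimate a rooftop patch's principal orientation in radians.

        Sweeps angles over [0, 90) degrees (a box's orientation is
        periodic in 90 degrees) and returns the rotation whose
        axis-aligned bounding box has minimum area.
        """
        best_angle, best_area = 0.0, float("inf")
        for s in range(steps):
            a = math.pi / 2 * s / steps
            ca, sa = math.cos(a), math.sin(a)
            # Rotate the footprint by -a and measure the aligned box.
            xs = [ca * x + sa * y for x, y in points_2d]
            ys = [-sa * x + ca * y for x, y in points_2d]
            area = (max(xs) - min(xs)) * (max(ys) - min(ys))
            if area < best_area:
                best_angle, best_area = a, area
        return best_angle
    ```

    The recovered angle is what lets the boundary-refinement stage snap rooftop edges to two dominant perpendicular directions, which is where the "rigorous boundaries" of the models come from.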