Deep Generative Modeling of LiDAR Data
Building models capable of generating structured output is a key challenge
for AI and robotics. While generative models have been explored on many types
of data, little work has been done on synthesizing lidar scans, which play a
key role in robot mapping and localization. In this work, we show that one can
adapt deep generative models for this task by unravelling lidar scans into a 2D
point map. Our approach can generate high-quality samples while
simultaneously learning a meaningful latent representation of the data. We
demonstrate significant improvements over state-of-the-art point cloud
generation methods. Furthermore, we propose a novel data representation that
augments the 2D signal with absolute positional information. We show that this
improves robustness to noisy and imputed input; the learned model can recover
the underlying lidar scan from seemingly uninformative data.
Comment: Presented at IROS 201
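To make the unravelling step concrete, the sketch below projects an unordered point set onto a beams-by-azimuth grid, the kind of 2D point map the abstract describes. The grid size, field-of-view limits, and function name are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def unravel_scan(points, n_beams=64, n_azimuth=512,
                 fov_up_deg=2.0, fov_down_deg=-24.8):
    """Project an (N, 3) lidar scan onto a (n_beams, n_azimuth) grid.

    The vertical field-of-view limits roughly match a Velodyne HDL-64E
    and are illustrative. Returns an (n_beams, n_azimuth, 3) array of
    xyz coordinates, zero where no point fell into a cell.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8

    yaw = np.arctan2(y, x)            # azimuth angle in [-pi, pi]
    pitch = np.arcsin(z / r)          # elevation angle

    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = ((yaw + np.pi) / (2 * np.pi) * n_azimuth).astype(int) % n_azimuth
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down)
                 * n_beams).astype(int), 0, n_beams - 1)

    grid = np.zeros((n_beams, n_azimuth, 3), dtype=np.float32)
    grid[v, u] = points               # last point per cell wins
    return grid
```

Storing the absolute xyz coordinates in each cell, rather than range alone, is one plausible reading of the augmented positional representation mentioned above.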
Deep Semantic Classification for 3D LiDAR Data
Robots are expected to operate autonomously in dynamic environments.
Understanding the underlying dynamic characteristics of objects is a key
enabler for achieving this goal. In this paper, we propose a method for
pointwise semantic classification of 3D LiDAR data into three classes:
non-movable, movable and dynamic. We concentrate on understanding these
specific semantics because they characterize important information required for
an autonomous system. Non-movable points in the scene belong to unchanging
segments of the environment, whereas the remaining classes correspond to the
changing parts of the scene. The difference between the movable and dynamic
class is their motion state. The dynamic points can be perceived as moving,
whereas movable objects can move, but are perceived as static. To learn the
distinction between movable and non-movable points in the environment, we
introduce an approach based on a deep neural network, and for detecting the
dynamic points, we estimate pointwise motion. We propose a Bayes filter
framework for combining the learned semantic cues with the motion cues to infer
the required semantic classification. In extensive experiments, we compare our
approach with other methods on a standard benchmark dataset and report
competitive results in comparison to the existing state-of-the-art.
Furthermore, we show an improvement in the classification of points by
combining the semantic cues retrieved from the neural network with the motion
cues.
Comment: 8 pages, to be published in IROS 201
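A minimal sketch of how one Bayes-filter step might fuse the two cues for a single point, assuming for illustration that the semantic and motion likelihoods are conditionally independent given the class; the numbers and names are invented for the example, not the paper's exact formulation.

```python
import numpy as np

CLASSES = ["non-movable", "movable", "dynamic"]

def bayes_update(belief, semantic_likelihood, motion_likelihood):
    """One recursive update of a single point's class belief.

    belief:              prior class probabilities, shape (3,)
    semantic_likelihood: p(network cue | class), shape (3,)
    motion_likelihood:   p(motion cue | class), shape (3,)
    """
    posterior = belief * semantic_likelihood * motion_likelihood
    return posterior / posterior.sum()

# Example: a point the network deems movable that is also observed moving.
belief = np.array([1 / 3, 1 / 3, 1 / 3])   # uniform prior
belief = bayes_update(belief,
                      semantic_likelihood=np.array([0.1, 0.6, 0.3]),
                      motion_likelihood=np.array([0.2, 0.2, 0.6]))
print(dict(zip(CLASSES, belief.round(3))))  # mass shifts toward 'dynamic'
```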
Riparian vegetation classification from airborne laser scanning data with an emphasis on cottonwood trees
The high point density of airborne laser mapping systems enables a detailed description of geographic objects and the terrain. Growing experience indicates, however, that extracting useful information directly from the data can be difficult. In this study, small-footprint lidar data were used to differentiate between young, mature, and old cottonwood trees in the San Pedro River Basin near Benson, Arizona, USA. The lidar data were acquired in June 2003, using the Optech Incorporated ALTM 1233 (Optech Incorporated, Toronto, Ont.), during flyovers conducted at an altitude of 750 m. The lidar data were preprocessed to create a two-band image of the study site: a high-accuracy canopy altitude model band and a near-infrared intensity band. These lidar-derived images provided the basis for supervised classification of cottonwood age categories using a maximum likelihood algorithm. The classification results illustrate the potential of airborne lidar data to differentiate age classes of cottonwood trees in riparian areas quickly and accurately. © 2006, Taylor & Francis Group, LLC. All rights reserved.
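As a sketch of how such a two-band maximum likelihood classification can be set up, the snippet below fits one Gaussian per age class over (canopy altitude, near-infrared intensity) pixels and labels each pixel with the most likely class; the band pairing and function names are assumptions for illustration, not the study's exact procedure.

```python
import numpy as np

def fit_gaussian(samples):
    """Fit one class model from (N, 2) training pixels of
    (canopy altitude, near-infrared intensity)."""
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def ml_classify(pixel, class_models):
    """Assign a 2-band pixel to the class with the highest
    Gaussian log-likelihood (constant terms dropped)."""
    best, best_ll = None, -np.inf
    for name, (mu, cov) in class_models.items():
        d = pixel - mu
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))
        if ll > best_ll:
            best, best_ll = name, ll
    return best

# Usage with hypothetical training pixels per age class:
# models = {c: fit_gaussian(train[c]) for c in ("young", "mature", "old")}
# label = ml_classify(np.array([12.5, 0.8]), models)
```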
Generation of attenuation corrected images from lidar data
The interpretation of data generated by aerosol backscatter lidars is often facilitated by presentation of RHI (range-height indicator) and PPI (plan position indicator) images. These images are especially useful in studies of atmospheric boundary layer structure, where convective elements, stratifications, and aerosol-laden plumes can be easily delineated. Procedures used at the University of Wisconsin to generate lidar images on a color-enhanced raster-scan display are described.
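For orientation, a minimal sketch of gridding one RHI sweep onto a Cartesian image via the standard polar-to-Cartesian mapping (x, z) = (r cos θ, r sin θ); image extents and bin sizes are illustrative, and the attenuation-correction step itself is omitted.

```python
import numpy as np

def rhi_image(ranges, elev_deg, backscatter, nx=400, nz=100,
              dx=25.0, dz=25.0):
    """Grid one range-height-indicator sweep onto an (nz, nx) image.

    ranges (m), elev_deg, and backscatter are parallel 1-D arrays;
    dx, dz are bin sizes in metres (values are illustrative).
    """
    theta = np.radians(elev_deg)
    xs = ranges * np.cos(theta)        # horizontal distance
    zs = ranges * np.sin(theta)        # height above the lidar
    img = np.full((nz, nx), np.nan)    # NaN marks unsampled cells
    i, j = (zs / dz).astype(int), (xs / dx).astype(int)
    ok = (i >= 0) & (i < nz) & (j >= 0) & (j < nx)
    img[i[ok], j[ok]] = backscatter[ok]
    return img
```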
Towards online mobile mapping using inhomogeneous lidar data
In this paper we present a novel approach to quickly obtain detailed 3D reconstructions of large-scale environments. The method is based on the consecutive registration of 3D point clouds generated by modern lidar scanners such as the Velodyne HDL-32E or HDL-64E. The main contribution of this work is that the proposed system specifically deals with the sparsity and inhomogeneity of the point clouds typically produced by these scanners. More specifically, we combine the simplicity of the traditional iterative closest point (ICP) algorithm with an analysis of the underlying surface around each point in a local neighbourhood. The algorithm was evaluated on our own dataset, collected with accurate ground truth. The experiments demonstrate that the system produces highly detailed 3D maps at a rate of 10 sensor frames per second.
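The per-point surface analysis that the paper pairs with ICP is in the spirit of PCA-based normal estimation, sketched below; the neighbourhood size k and helper names are illustrative assumptions, and a full pipeline would feed such normals into a point-to-plane ICP variant.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_normals(points, k=16):
    """Estimate one normal per point from the PCA of its k nearest
    neighbours: the eigenvector of the smallest eigenvalue of the local
    covariance approximates the underlying surface normal."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        q = points[nbrs] - points[nbrs].mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(q.T @ q)  # ascending eigenvalues
        normals[i] = eigvecs[:, 0]
    return normals
```

Fixed metric-radius neighbourhoods can come up empty in sparse regions, whereas k-nearest-neighbour queries adapt their extent automatically, which is one simple way to cope with inhomogeneous point density.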
CNN for Very Fast Ground Segmentation in Velodyne LiDAR Data
This paper presents a novel method for ground segmentation in Velodyne point
clouds. We propose an encoding of sparse 3D data from the Velodyne sensor
suitable for training a convolutional neural network (CNN). This general
purpose approach is used for segmentation of the sparse point cloud into ground
and non-ground points. The LiDAR data are represented as a multi-channel 2D
signal where the horizontal axis corresponds to the rotation angle and the
vertical axis indexes the channels (i.e. laser beams). Multiple topologies of
relatively shallow CNNs (i.e. 3-5 convolutional layers) are trained and
evaluated using a manually annotated dataset we prepared. The results show a
significant speed improvement over the state-of-the-art method of Zhang et
al., along with minor improvements in accuracy.
Comment: ICRA 2018 submission
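A minimal sketch of a shallow fully-convolutional network over such a multi-channel 2D encoding; the input channels (e.g. depth, height, intensity), layer widths, and frame size are assumptions for illustration rather than the paper's evaluated topologies.

```python
import torch
import torch.nn as nn

class GroundSegNet(nn.Module):
    """Three convolutional layers over a (C, beams, azimuth) lidar image,
    producing one ground/non-ground logit per cell."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

# Usage on one frame: 64 beams x 360 azimuth bins (sizes assumed).
logits = GroundSegNet()(torch.randn(1, 3, 64, 360))
ground_mask = torch.sigmoid(logits) > 0.5   # boolean per-cell ground mask
```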
Mesh-based 3D Textured Urban Mapping
In the era of autonomous driving, urban mapping represents a core step to let
vehicles interact with the urban context. Successful mapping algorithms have
been proposed in the last decade, building the map by leveraging data from a
single sensor. The focus of the system presented in this paper is twofold: the
joint estimation of a 3D map from lidar data and images, based on a 3D mesh,
and its texturing. Indeed, even though most surveying vehicles used for
mapping are equipped with both cameras and lidar, existing mapping algorithms
usually rely on either images or lidar data; moreover, both image-based and
lidar-based systems
often represent the map as a point cloud, while a continuous textured mesh
representation would be useful for visualization and navigation purposes. In
the proposed framework, we combine the accuracy of the 3D lidar data with the
dense appearance information carried by the images: we estimate a
visibility-consistent map from the lidar measurements and refine it
photometrically using the acquired images. We evaluate the proposed framework
on the KITTI dataset and show the performance improvement with respect to two
state-of-the-art urban mapping algorithms and two widely used surface
reconstruction algorithms from computer graphics.
Comment: accepted at IROS 201
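To make the texturing idea concrete, the sketch below colours mesh vertices by projecting them into a single calibrated image; it is a simplified stand-in that omits the visibility-consistency and photometric-refinement stages described above, and all names are illustrative.

```python
import numpy as np

def sample_vertex_colors(vertices, image, K, R, t):
    """Colour (N, 3) world-frame mesh vertices from one image.

    K is the 3x3 camera intrinsic matrix; (R, t) map world to camera
    coordinates. Vertices behind the camera are assigned zero colour.
    """
    cam = vertices @ R.T + t                          # world -> camera frame
    in_front = cam[:, 2] > 0
    pix = cam @ K.T
    pix = pix[:, :2] / np.maximum(pix[:, 2:3], 1e-9)  # perspective division
    h, w = image.shape[:2]
    u = np.clip(pix[:, 0].astype(int), 0, w - 1)
    v = np.clip(pix[:, 1].astype(int), 0, h - 1)
    colors = image[v, u].astype(float)
    colors[~in_front] = 0.0
    return colors
```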