Robust Moving Objects Detection in Lidar Data Exploiting Visual Cues
Detecting moving objects in dynamic scenes from sequences of lidar scans is an important task in object tracking, mapping, localization, and navigation. Many works focus on change detection in previously observed scenes, while only a limited body of literature addresses moving object detection. The state-of-the-art method exploits Dempster-Shafer theory to evaluate the occupancy of a lidar scan and to discriminate points belonging to the static scene from moving ones. In this paper we improve both the speed and the accuracy of this method by discretizing the occupancy representation and by removing false positives through visual cues. Many false positives lying on the ground plane are also removed thanks to a novel ground-plane removal algorithm, and efficiency is further improved through an octree indexing strategy. Experimental evaluation on the public KITTI dataset shows the effectiveness of our approach, both qualitatively and quantitatively, with respect to the state of the art.
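The Dempster-Shafer occupancy reasoning this abstract refers to can be illustrated with Dempster's rule of combination over a tiny frame of discernment. This is a minimal sketch under assumed mass values and a simplified {free, occupied, unknown} representation, not the paper's actual model:

```python
# Dempster's rule of combination for a two-hypothesis occupancy frame.
# Masses are assigned to {free}, {occupied}, and the full set {free, occupied}
# (the "unknown" mass). All names and numbers here are illustrative assumptions.

def combine(m1, m2):
    """Fuse two mass functions given as dicts with keys 'free', 'occ', 'unk'."""
    # Conflict mass K: one source says free while the other says occupied.
    k = m1["free"] * m2["occ"] + m1["occ"] * m2["free"]
    norm = 1.0 - k
    fused = {
        "free": (m1["free"] * m2["free"]
                 + m1["free"] * m2["unk"]
                 + m1["unk"] * m2["free"]) / norm,
        "occ": (m1["occ"] * m2["occ"]
                + m1["occ"] * m2["unk"]
                + m1["unk"] * m2["occ"]) / norm,
    }
    fused["unk"] = 1.0 - fused["free"] - fused["occ"]
    return fused

scan_t0 = {"free": 0.7, "occ": 0.1, "unk": 0.2}  # cell looked empty earlier
scan_t1 = {"free": 0.1, "occ": 0.7, "unk": 0.2}  # now a return lands there
fused = combine(scan_t0, scan_t1)
```

High conflict between consecutive scans, as in this example, is exactly the signal such methods use to separate points on moving objects from the static scene.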
Mesh-based 3D Textured Urban Mapping
In the era of autonomous driving, urban mapping represents a core step in
letting vehicles interact with the urban context. Successful mapping
algorithms have been proposed in the last decade that build the map by
leveraging data from a single sensor. The focus of the system presented in
this paper is twofold: the joint estimation of a 3D map from lidar data and
images, based on a 3D mesh, and its texturing. Indeed, even if most surveying
vehicles for mapping are equipped with both cameras and lidar, existing
mapping algorithms usually rely on either images or lidar data; moreover,
both image-based and lidar-based systems often represent the map as a point
cloud, while a continuous textured mesh representation would be useful for
visualization and navigation purposes. In the proposed framework, we combine
the accuracy of the 3D lidar data with the dense information and appearance
carried by the images, estimating a visibility-consistent map from the lidar
measurements and refining it photometrically through the acquired images. We
evaluate the proposed framework on the KITTI dataset and show the performance
improvement with respect to two state-of-the-art urban mapping algorithms and
two widely used surface reconstruction algorithms from computer graphics.
Comment: accepted at IROS 201
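The photometric refinement mentioned in this abstract typically minimizes intensity differences of mesh vertices projected into overlapping images. A minimal sketch of such a residual, assuming a pinhole camera model and nearest-pixel sampling (the function names and parameters are illustrative, not the paper's implementation):

```python
import numpy as np

def project(K, T_cw, X):
    """Project world point X (3,) to pixel coordinates via intrinsics K (3x3)
    and a 4x4 world-to-camera transform T_cw."""
    Xc = T_cw[:3, :3] @ X + T_cw[:3, 3]
    uvw = K @ Xc
    return uvw[:2] / uvw[2]

def photometric_residual(img_a, img_b, K, T_a, T_b, X):
    """Intensity difference of one mesh vertex observed in two images
    (nearest-pixel lookup here; real systems interpolate bilinearly)."""
    ua = np.round(project(K, T_a, X)).astype(int)
    ub = np.round(project(K, T_b, X)).astype(int)
    return float(img_a[ua[1], ua[0]]) - float(img_b[ub[1], ub[0]])

# Toy setup: two co-located cameras, constant-intensity images.
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
T = np.eye(4)
img_a = np.full((100, 100), 10.0)   # brighter view
img_b = np.full((100, 100), 7.0)    # darker view
r = photometric_residual(img_a, img_b, K, T, T, np.array([0.0, 0.0, 2.0]))
```

Summing squared residuals of this kind over all vertices and image pairs gives an objective whose minimization moves vertices (or adjusts texture) toward photometric consistency.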
Long-Term Localization using Semantic Cues in Floor Plan Maps
Lifelong localization in a given map is an essential capability for
autonomous service robots. In this paper, we consider the task of long-term
localization in a changing indoor environment given sparse CAD floor plans. The
commonly used pre-built maps from the robot sensors may increase the cost and
time of deployment. Furthermore, their detailed nature requires that they are
updated when significant changes occur. We address the difficulty of
localization when the correspondence between the map and the observations is
low due to the sparsity of the CAD map and the changing environment. To
overcome both challenges, we propose to exploit semantic cues that are commonly
present in human-oriented spaces. These semantic cues can be detected using RGB
cameras by utilizing object detection, and are matched against an
easy-to-update, abstract semantic map. The semantic information is integrated
into a Monte Carlo localization framework using a particle filter that operates
on 2D LiDAR scans and camera data. We provide a long-term localization
solution and a semantic map format for environments that undergo changes to
their interior structure and for which detailed geometric maps are not
available. We evaluate our localization framework on multiple challenging
indoor scenarios in an office environment, recorded weeks apart. The
experiments suggest that our approach is robust to structural changes and can
run on an onboard computer. We release our open-source C++ implementation
together with a ROS wrapper.
Comment: Under review for RA-
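The semantic measurement update inside a Monte Carlo localization loop, as described in this abstract, can be sketched as re-weighting particles by how well a detected object class matches the nearest landmark of that class in the semantic map. The map format, scoring function, and all values below are assumptions for illustration, not the paper's implementation:

```python
import math

# Hypothetical abstract semantic map: object class -> 2D landmark positions.
semantic_map = {"door": [(2.0, 0.0), (5.0, 0.0)], "extinguisher": [(3.5, 1.0)]}

def semantic_likelihood(particle, detected_class, sigma=0.5):
    """Likelihood that a detection of `detected_class` is explained by the
    landmark of that class nearest to the particle's pose (x, y, theta)."""
    px, py, _theta = particle
    d2 = min((px - lx) ** 2 + (py - ly) ** 2
             for lx, ly in semantic_map[detected_class])
    return math.exp(-d2 / (2.0 * sigma ** 2))

def update_weights(particles, weights, detected_class):
    """One particle-filter measurement update followed by normalization."""
    w = [wi * semantic_likelihood(p, detected_class)
         for p, wi in zip(particles, weights)]
    total = sum(w)
    return [wi / total for wi in w]

particles = [(2.1, 0.1, 0.0), (8.0, 3.0, 0.0)]   # near a door / far from any
weights = update_weights(particles, [0.5, 0.5], "door")
```

After the update, the particle consistent with a mapped door dominates the weight mass, which is why sparse but distinctive semantic cues can disambiguate pose even when the geometric match to a CAD floor plan is weak.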
Mapless Online Detection of Dynamic Objects in 3D Lidar
This paper presents a model-free, setting-independent method for online
detection of dynamic objects in 3D lidar data. We explicitly compensate for the
moving-while-scanning operation (motion distortion) of present-day 3D spinning
lidar sensors. Our detection method uses a motion-compensated freespace
querying algorithm and assigns dynamic (currently moving) and static
(currently stationary) labels at the point level. For a quantitative
analysis, we establish a benchmark with motion-distorted lidar data using
CARLA, an open-source simulator for autonomous driving research. We also
provide a qualitative analysis on real data from a Velodyne HDL-64E in
driving scenarios. Compared to existing model-free 3D lidar methods, our
method is unique in its setting independence and its compensation for
point-cloud motion distortion.
Comment: 7 pages, 8 figure
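The motion-distortion compensation this abstract mentions ("deskewing") re-expresses each point in the frame at the end of the sweep by accounting for how far the sensor moves after that point was captured. A minimal 2D sketch assuming constant-velocity planar motion (not the paper's actual algorithm):

```python
import numpy as np

def deskew(points, stamps, v_xy, omega, t_end):
    """points: (N,2) sensor-frame points, stamps: (N,) capture times,
    v_xy: linear velocity expressed in the end-of-sweep frame, omega: yaw
    rate, t_end: sweep end time. Returns points in the end-of-sweep frame."""
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, stamps)):
        dt = t_end - t                 # motion remaining after capture
        ang = -omega * dt              # undo the rotation still to come
        c, s = np.cos(ang), np.sin(ang)
        R = np.array([[c, -s], [s, c]])
        # Rotate into the end frame, then subtract the translation the
        # sensor still travels during the rest of the sweep.
        out[i] = R @ p - np.asarray(v_xy) * dt
    return out

# A point 5 m ahead at sweep start; the sensor drives 1 m toward it by the
# sweep's end, so the deskewed point lies 4 m ahead in the final frame.
deskewed = deskew(np.array([[5.0, 0.0]]), np.array([0.0]),
                  (1.0, 0.0), 0.0, 1.0)
```

Without this per-point correction, a spinning lidar on a moving vehicle smears the scene, and freespace queries against raw points would produce spurious dynamic labels.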
Dynablox: Real-time Detection of Diverse Dynamic Objects in Complex Environments
Real-time detection of moving objects is an essential capability for robots
acting autonomously in dynamic environments. We thus propose Dynablox, a novel
online mapping-based approach for robust moving object detection in complex
environments. The central idea of our approach is to incrementally estimate
high confidence free-space areas by modeling and accounting for sensing, state
estimation, and mapping limitations during online robot operation. The
spatio-temporally conservative free space estimate enables robust detection of
moving objects without making any assumptions on the appearance of objects or
environments. This allows deployment in complex scenes such as multi-storied
buildings or staircases, and for diverse moving objects such as people carrying
various items, doors swinging or even balls rolling around. We thoroughly
evaluate our approach on real-world data sets, achieving 86% IoU at 17 FPS in
typical robotic settings. The method outperforms a recent appearance-based
classifier and approaches the performance of offline methods. We demonstrate
its generality on a novel data set with rare moving objects in complex
environments. We make our efficient implementation and the novel data set
available as open-source.
Comment: Code released at https://github.com/ethz-asl/dynablo
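The core idea described in this abstract, labeling as dynamic any lidar return that lands inside space previously observed as free with high confidence, can be sketched with a simple voxel counter. The voxel size, confirmation threshold, and key scheme below are illustrative assumptions, not Dynablox's actual implementation:

```python
# Free-space-based moving-object detection sketch: voxels repeatedly seen
# free become high-confidence free space; a later return inside such a voxel
# implies a moving object. All parameters here are illustrative.

VOXEL = 0.2       # voxel edge length in meters (assumed)
CONFIRM = 3       # observations needed before a voxel counts as free (assumed)

free_counts = {}  # voxel key -> number of times observed free

def key(p):
    """Quantize a 3D point to its voxel index."""
    return tuple(int(c // VOXEL) for c in p)

def observe_free(points_along_ray):
    """Mark voxels traversed by a lidar ray (before its endpoint) as free."""
    for p in points_along_ray:
        k = key(p)
        free_counts[k] = free_counts.get(k, 0) + 1

def classify(endpoint):
    """A return inside confidently-free space implies a moving object."""
    return "dynamic" if free_counts.get(key(endpoint), 0) >= CONFIRM else "static"

# Three scans see the corridor empty at one spot...
for _ in range(3):
    observe_free([(1.0, 0.0, 0.5)])
# ...so a new return at that spot is flagged as a moving object.
label = classify((1.0, 0.0, 0.5))
```

Because the decision rests only on accumulated free-space evidence, no assumption about the object's appearance is needed, which matches the abstract's claim of generality to people, doors, or rolling balls.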