2,073 research outputs found
Efficient Continuous-Time SLAM for 3D Lidar-Based Online Mapping
Modern 3D laser-range scanners have a high data rate, making online
simultaneous localization and mapping (SLAM) computationally challenging.
Recursive state estimation techniques are efficient but commit to a state
estimate immediately after a new scan is made, which may lead to misalignments
of measurements. We present a 3D SLAM approach that allows for refining
alignments during online mapping. Our method is based on efficient local
mapping and a hierarchical optimization back-end. Measurements of a 3D laser
scanner are aggregated in local multiresolution maps by means of surfel-based
registration. The local maps are used in a multi-level graph for allocentric
mapping and localization. In order to incorporate corrections when refining the
alignment, the individual 3D scans in the local map are modeled as a sub-graph
and graph optimization is performed to account for drift and misalignments in
the local maps. Furthermore, in each sub-graph, a continuous-time
representation of the sensor trajectory allows correcting measurements between
scan poses. We evaluate our approach in multiple experiments by showing
qualitative results. Furthermore, we quantify the map quality by an
entropy-based measure.
Comment: In: Proceedings of the International Conference on Robotics and Automation (ICRA) 201
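The continuous-time trajectory idea above can be sketched in a few lines. This is a minimal sketch, assuming planar (x, y, theta) poses and linear interpolation between scan poses; `interpolate_pose` and `deskew` are hypothetical helper names, not the paper's API, and the paper's actual representation is more general:

```python
import numpy as np

def interpolate_pose(t, t0, pose0, t1, pose1):
    """Linearly interpolate a planar pose (x, y, theta) between two scan
    poses, standing in for a continuous-time trajectory representation."""
    a = (t - t0) / (t1 - t0)
    x = (1 - a) * pose0[0] + a * pose1[0]
    y = (1 - a) * pose0[1] + a * pose1[1]
    # Interpolate the heading along the shortest arc.
    dth = np.arctan2(np.sin(pose1[2] - pose0[2]), np.cos(pose1[2] - pose0[2]))
    return np.array([x, y, pose0[2] + a * dth])

def deskew(points, stamps, t0, pose0, t1, pose1):
    """Re-project each point using the pose interpolated at its own
    timestamp, correcting measurements taken between scan poses."""
    out = []
    for p, t in zip(points, stamps):
        pose = interpolate_pose(t, t0, pose0, t1, pose1)
        c, s = np.cos(pose[2]), np.sin(pose[2])
        # Point measured in the moving sensor frame -> map frame.
        world = pose[:2] + np.array([c * p[0] - s * p[1], s * p[0] + c * p[1]])
        out.append(world)
    return np.array(out)
```

Because every raw point gets its own interpolated pose, a correction to either scan pose propagates to all measurements between them, which is what makes refining alignments after the fact possible.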
Tightly Coupled 3D Lidar Inertial Odometry and Mapping
Ego-motion estimation is a fundamental requirement for most mobile robotic
applications. By sensor fusion, we can compensate for the deficiencies of
stand-alone sensors and provide more reliable estimations. We introduce a
tightly coupled lidar-IMU fusion method in this paper. By jointly minimizing
the cost derived from lidar and IMU measurements, the lidar-IMU odometry (LIO)
can perform well with acceptable drift after long-term experiments, even in
challenging cases where the lidar measurements can be degraded. In addition, to
obtain more reliable estimations of the lidar poses, a rotation-constrained
refinement algorithm (LIO-mapping) is proposed to further align the lidar poses
with the global map. The experimental results demonstrate that the proposed
method can estimate the poses of the sensor pair at the IMU update rate with
high precision, even under fast motion conditions or with insufficient
features.
Comment: Accepted by ICRA 201
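The tightly coupled formulation above jointly minimizes a cost built from both sensors' residuals. A deliberately minimal scalar sketch: `joint_cost` and `fuse` are illustrative names, and the closed-form information-weighted mean stands in for the full nonlinear optimization over poses:

```python
def joint_cost(x, lidar_z, imu_z, w_lidar, w_imu):
    """Weighted sum of squared lidar and IMU residuals for a scalar state."""
    return w_lidar * (x - lidar_z) ** 2 + w_imu * (x - imu_z) ** 2

def fuse(lidar_z, imu_z, w_lidar, w_imu):
    """Closed-form minimizer of joint_cost: the information-weighted mean.
    With a degraded lidar (small w_lidar), the estimate leans on the IMU."""
    return (w_lidar * lidar_z + w_imu * imu_z) / (w_lidar + w_imu)
```

The point of the tight coupling is visible even in this toy: neither measurement is committed to alone, and down-weighting a degraded lidar term gracefully shifts trust to the IMU instead of failing outright.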
RadarSLAM: Radar based Large-Scale SLAM in All Weathers
Numerous Simultaneous Localization and Mapping (SLAM) algorithms have been
presented in the last decade using different sensor modalities. However, robust
SLAM in extreme weather conditions is still an open research problem. In this
paper, RadarSLAM, a full radar based graph SLAM system, is proposed for
reliable localization and mapping in large-scale environments. It is composed
of pose tracking, local mapping, loop closure detection and pose graph
optimization, enhanced by novel feature matching and probabilistic point cloud
generation on radar images. Extensive experiments are conducted on a public
radar dataset and several self-collected radar sequences, demonstrating the
state-of-the-art reliability and localization accuracy in various adverse
weather conditions, such as dark night, dense fog, and heavy snowfall.
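The pose graph optimization stage common to graph SLAM systems like the one above can be sketched as a least-squares problem; `optimize_pose_graph` is a hypothetical helper, and the 1D pose formulation is a toy stand-in for the SE(2)/SE(3) graphs used in practice:

```python
import numpy as np

def optimize_pose_graph(odom, loops, n):
    """Solve a 1D pose graph by linear least squares.
    odom:  list of (i, j, meas) relative odometry constraints p_j - p_i = meas.
    loops: list of (i, j, meas) loop-closure constraints of the same form.
    Pose 0 is fixed at the origin; returns poses 0..n-1."""
    edges = list(odom) + list(loops)
    A = np.zeros((len(edges), n - 1))  # columns for the free poses p1..p_{n-1}
    b = np.zeros(len(edges))
    for row, (i, j, meas) in enumerate(edges):
        if j > 0:
            A[row, j - 1] += 1.0
        if i > 0:
            A[row, i - 1] -= 1.0
        b[row] = meas
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate(([0.0], sol))
```

Without the loop closure, drift in the odometry chain accumulates unchecked; adding a single loop edge redistributes the accumulated error over the whole trajectory, which is exactly the correction pose graph optimization provides.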
Probabilistic Surfel Fusion for Dense LiDAR Mapping
With the recent development of high-end LiDARs, more and more systems are
able to continuously map the environment while moving and producing spatially
redundant information. However, none of the previous approaches were able to
effectively exploit this redundancy in a dense LiDAR mapping problem. In this
paper, we present a new approach for dense LiDAR mapping using probabilistic
surfel fusion. The proposed system is capable of reconstructing a high-quality
dense surface element (surfel) map from spatially redundant multiple views.
This is achieved by the proposed probabilistic surfel fusion along with a
geometry-aware data association. The proposed surfel data association
method considers surface resolution as well as the high measurement uncertainty
along the beam direction, which enables the mapping system to control
surface resolution without introducing spatial digitization. The proposed
fusion method successfully suppresses the map noise level by considering
measurement noise caused by laser beam incident angle and depth distance in a
Bayesian filtering framework. Experimental results with simulated and real data
for the dense surfel mapping prove the ability of the proposed method to
accurately find the canonical form of the environment without further
post-processing.
Comment: Accepted in Multiview Relationships in 3D Data 2017 (IEEE International Conference on Computer Vision Workshops).
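The Bayesian fusion of noisy depth measurements described above can be sketched as a scalar Kalman-style update. This is a minimal sketch: `measurement_variance` uses a hypothetical noise model (the coefficients are invented) in the spirit of the incidence-angle and range dependence the abstract mentions:

```python
import numpy as np

def measurement_variance(rng, incidence, s_range=0.01, s_angle=0.05):
    """Hypothetical noise model: variance grows with range and with the
    laser beam's incidence angle on the surface (coefficients made up)."""
    return (s_range * rng) ** 2 + (s_angle * np.tan(incidence)) ** 2

def fuse_surfel(mean, var, z, z_var):
    """One Bayesian update of a scalar surfel depth: precise measurements
    pull the estimate strongly; noisy ones barely move it."""
    k = var / (var + z_var)
    new_mean = mean + k * (z - mean)
    new_var = (1.0 - k) * var
    return new_mean, new_var
```

Each fusion step shrinks the surfel's variance, which is how redundant views suppress map noise rather than merely piling up points.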
A General Framework for Flexible Multi-Cue Photometric Point Cloud Registration
The ability to build maps is a key functionality for the majority of mobile
robots. A central ingredient to most mapping systems is the registration or
alignment of the recorded sensor data. In this paper, we present a general
methodology for photometric registration that can deal with multiple different
cues. We provide examples for registering RGBD as well as 3D LIDAR data. In
contrast to popular point cloud registration approaches such as ICP our method
does not rely on explicit data association and exploits multiple modalities
such as raw range and image data streams. Color, depth, and normal information
are handled in a uniform manner and the registration is obtained by minimizing
the pixel-wise difference between two multi-channel images. We developed a
flexible and general framework and implemented our approach inside that
framework. We also released our implementation as open source C++ code. The
experiments show that our approach allows for an accurate registration of the
sensor data without requiring an explicit data association or model-specific
adaptations to datasets or sensors. Our approach exploits the different cues in
a natural and consistent way and the registration can be done at framerate for
a typical range or imaging sensor.
Comment: 8 pages.
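Minimizing a pixel-wise difference over multiple cues can be sketched with a toy one-dimensional, integer-shift "warp". `photometric_cost` and `register` are illustrative names, and a real implementation optimizes a full 6-DoF warp with gradients rather than exhaustive search:

```python
import numpy as np

def photometric_cost(ref, cur, shift, weights):
    """Weighted sum over cues (channels) of squared differences on the
    overlapping region after shifting `cur` by `shift` samples.
    ref, cur: arrays of shape (channels, samples)."""
    n = ref.shape[1]
    diff = ref[:, : n - shift] - cur[:, shift:]
    return float(np.sum(weights[:, None] * diff ** 2))

def register(ref, cur, max_shift, weights):
    """Pick the shift minimizing the multi-cue photometric cost. No point
    correspondences are ever computed: only raw channel differences."""
    costs = [photometric_cost(ref, cur, s, weights) for s in range(max_shift + 1)]
    return int(np.argmin(costs))
```

The per-channel weights are how multiple cues (color, depth, normals) enter one consistent objective, in contrast to ICP-style methods that need explicit data association.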
LocNet: Global localization in 3D point clouds for mobile vehicles
Global localization in 3D point clouds is a challenging problem of estimating
the pose of vehicles without any prior knowledge. In this paper, a solution to
this problem is presented by achieving place recognition and metric pose
estimation in the global prior map. Specifically, we present a semi-handcrafted
representation learning method for LiDAR point clouds using siamese LocNets,
which casts the place recognition problem as a similarity modeling problem.
With the final learned representations by LocNet, a global localization
framework with range-only observations is proposed. To demonstrate the
performance and effectiveness of our global localization system, we compare
against other algorithms on the KITTI dataset and evaluate on our long-term
multi-session datasets. The results show that our system achieves high
accuracy.
Comment: 6 pages, accepted at IV 2018.
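Casting place recognition as similarity modeling reduces, at query time, to nearest-neighbor search in the learned descriptor space. A minimal sketch, where `descriptor_distance` and `recognize` are hypothetical helpers operating on descriptors that a siamese network such as LocNet would produce:

```python
import numpy as np

def descriptor_distance(d1, d2):
    """Euclidean distance between two learned place descriptors; small
    distance means 'likely the same place' in the learned metric space."""
    return float(np.linalg.norm(d1 - d2))

def recognize(query, database, threshold):
    """Return the index of the closest database descriptor, or -1 when
    even the best match exceeds the similarity threshold (unknown place)."""
    dists = [descriptor_distance(query, d) for d in database]
    i = int(np.argmin(dists))
    return i if dists[i] < threshold else -1
```

The threshold is what separates a genuine place match from a vehicle that is simply somewhere new, after which a metric pose estimate can be refined within the matched map region.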