Learning a Bias Correction for Lidar-only Motion Estimation
This paper presents a novel technique to correct for bias in a classical
estimator using a learning approach. We apply a learned bias correction to a
lidar-only motion estimation pipeline. Our technique trains a Gaussian process
(GP) regression model using data with ground truth. The inputs to the model are
high-level features derived from the geometry of the point-clouds, and the
outputs are the predicted biases between poses computed by the estimator and
the ground truth. The predicted biases are applied as a correction to the poses
computed by the estimator.
Our technique is evaluated on over 50 km of lidar data, including the
KITTI odometry benchmark and lidar datasets collected around the University of
Toronto campus. After applying the learned bias correction, we obtained
significant improvements to lidar odometry on all datasets tested: roughly a
10% reduction in error over an already accurate lidar odometry algorithm, at
the cost of less than a 1% increase in run-time computation.
Comment: 15th Conference on Computer and Robot Vision (CRV 2018)
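As a rough illustration of the GP-based correction the abstract describes (not the authors' implementation; the features, kernel, and data below are invented), a minimal one-dimensional sketch in NumPy:

```python
import numpy as np

def rbf(A, B, length=0.5, var=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(0)

# Hypothetical training data: two geometric features per scan pair
# (e.g. planarity, point density) and the observed translational bias
# (estimator pose minus ground truth) along one axis.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 0.05 * X[:, 0] - 0.02 * X[:, 1] + rng.normal(0.0, 0.002, 200)

# GP regression: the posterior mean at a query x* is k(x*, X) (K + nI)^-1 y.
noise = 1e-4
K = rbf(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)

# Run-time correction: predict the bias from the new scan's features
# and subtract it from the estimator's pose component.
x_query = np.array([[0.6, 0.3]])
predicted_bias = rbf(x_query, X) @ alpha
estimated_pose_x = 1.234
corrected_pose_x = estimated_pose_x - predicted_bias[0]
print(f"bias={predicted_bias[0]:+.4f}  corrected x={corrected_pose_x:.4f}")
```

In practice one GP would be trained per pose dimension, and the kernel hyperparameters fit to the ground-truth data rather than fixed as here.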
Four years of multi-modal odometry and mapping on the rail vehicles
Precise, seamless, and efficient train localization and long-term railway
environment monitoring are essential for reliability, availability,
maintainability, and safety (RAMS) engineering of railroad systems.
Simultaneous localization and mapping (SLAM) is right at the core of solving
the two problems concurrently. To this end, this paper proposes a
high-performance and versatile multi-modal framework targeted at the odometry
and mapping task for various rail vehicles. Our system is built atop an
inertial-centric state estimator that tightly couples light detection and
ranging (LiDAR), visual, and optionally satellite-navigation and map-based
localization information, with the convenience and extendibility of loosely
coupled methods. The inertial sensors (IMU and wheel encoder) are treated as
the primary sensors, and observations from the subsystems are used to
constrain the accelerometer and gyroscope biases. Compared to point-only
LiDAR-inertial methods, our approach leverages more geometry information by
introducing both track plane and electric power pillars into state estimation.
The visual-inertial subsystem also exploits environmental structure by
employing both line and point features. In addition, the method handles sensor
failures by automatically reconfiguring itself to bypass failed modules. Our
proposed method has been extensively tested in railway environments over four
years, covering general-speed, high-speed, and metro lines, with both
passenger and freight traffic. Furthermore, we aim to share, in an open way,
the experience, problems, and successes of our group with the robotics
community, so that those who work in such environments can avoid these errors.
To this end, we open-source some of our datasets to benefit the research
community.
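The automatic-reconfiguration idea, bypassing failed modules so the estimator keeps running, can be sketched as follows; the subsystem names, report format, and staleness threshold are illustrative assumptions, not details from the paper:

```python
# Hypothetical subsystem reports: each entry is (timestamp, position estimate)
# or None when the module has failed outright.
STALE_AFTER = 0.5  # seconds; illustrative threshold

def healthy(report, now):
    """A subsystem is usable only if it produced a recent, valid report."""
    return report is not None and (now - report[0]) <= STALE_AFTER

def fuse(reports, now):
    """Average the positions from all healthy subsystems; the estimator
    keeps running as long as at least one module survives."""
    usable = {name: r for name, r in reports.items() if healthy(r, now)}
    if not usable:
        raise RuntimeError("all subsystems failed")
    estimate = sum(r[1] for r in usable.values()) / len(usable)
    return estimate, sorted(usable)

now = 100.0
reports = {
    "lidar":  (99.9, 12.30),   # fresh -> used
    "visual": (99.1, 12.10),   # stale -> bypassed
    "gnss":   None,            # failed -> bypassed
    "map":    (99.8, 12.50),   # fresh -> used
}
estimate, used = fuse(reports, now)
print(used, round(estimate, 2))
```

A real tightly coupled estimator would re-weight factors rather than average positions, but the health-gating structure is the same: failed or stale modules simply stop contributing constraints.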
3D-SeqMOS: A Novel Sequential 3D Moving Object Segmentation in Autonomous Driving
For a SLAM system in robotics and autonomous driving, the accuracy of
front-end odometry and back-end loop-closure detection determines the
performance of the whole intelligent system. However, LiDAR SLAM can be
disturbed by moving objects in the current scene, resulting in drift errors
and even loop-closure failure. Thus, the ability to detect and segment moving
objects is essential
for high-precision positioning and building a consistent map. In this paper, we
address the problem of moving object segmentation from 3D LiDAR scans to
improve the odometry and loop-closure accuracy of SLAM. We propose a novel 3D
Sequential Moving-Object-Segmentation (3D-SeqMOS) method that can accurately
segment the scene into moving and static objects, such as moving and static
cars. Unlike existing projected-image methods, we process the raw 3D point
cloud and build a 3D convolutional neural network for the MOS task. In
addition, to make full use of the spatio-temporal information in the point
cloud, we propose a point-cloud residual mechanism that combines the spatial
features of the current scan with the temporal features of previous residual
scans. Besides, we
build a complete SLAM framework to verify the effectiveness and accuracy of
3D-SeqMOS. Experiments on SemanticKITTI dataset show that our proposed
3D-SeqMOS method can effectively detect moving objects and improve the accuracy
of LiDAR odometry and loop-closure detection. The test results show that our
3D-SeqMOS outperforms the state-of-the-art method by 12.4%. We further
submitted the proposed method to the SemanticKITTI Moving Object Segmentation
competition, achieving 2nd place on the leaderboard and demonstrating its
effectiveness.
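The residual idea underlying such MOS pipelines can be illustrated with a toy range-image comparison; the projection, threshold, and values below are invented, and the paper's actual method operates on raw 3D point clouds with a 3D CNN:

```python
import numpy as np

# Toy range images (rows = elevation, cols = azimuth), in metres. A real
# system would project consecutive LiDAR scans into this form.
prev_scan = np.full((4, 8), 10.0)
curr_scan = prev_scan.copy()
curr_scan[1:3, 2:4] = 7.0  # an object has moved 3 m closer in this region

# Normalized range residual: large values flag pixels whose range changed
# between scans, i.e. likely moving objects; static structure cancels out.
eps = 1e-6
residual = np.abs(curr_scan - prev_scan) / (curr_scan + eps)

moving_mask = residual > 0.1  # illustrative threshold
print(int(moving_mask.sum()))  # number of pixels flagged as moving
```

In the paper's formulation, residuals from several previous scans serve as temporal features that are fused with the spatial features of the current scan inside the network, rather than being thresholded directly as here.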
LiDAR-Inertial Localization with Ground Constraint in a Point Cloud Map
Real-time localization is a crucial task in applications such as autonomous vehicles (AVs), robotics, and smart cities. This study proposes a framework for map-aided LiDAR-inertial localization, with the objective of accurately estimating the trajectory within a point cloud map. The framework addresses the localization problem through factor graph optimization (FGO), enabling sensor fusion of measurements together with designed absolute and relative constraints. Specifically, the framework estimates light detection and ranging (LiDAR) odometry by leveraging an inertial measurement unit (IMU) and registering corresponding feature points. To eliminate accumulated error, this paper employs a ground-plane distance and a map-matching error to constrain the positioning error along the trajectory. Finally, local odometry and constraints, including LiDAR odometry, IMU pre-integration, ground constraints, map-matching constraints, and loop closure, are integrated using FGO. Experimental results were evaluated on an open-source dataset, UrbanNav, with an overall localization accuracy of 2.29 m (root-mean-square error, RMSE).
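How FGO fuses relative odometry with absolute, map-matching-style constraints can be illustrated by a toy 1-D problem solved as weighted linear least squares; all values and noise levels below are invented and this is not the paper's actual graph:

```python
import numpy as np

# Toy 1-D trajectory with three poses x0..x2. Factors:
#   prior:        x0 = 0                        (absolute)
#   odometry:     x1 - x0 = 1.1, x2 - x1 = 1.05 (relative, accumulates drift)
#   map matching: x2 = 2.0                      (absolute, removes drift)
# Each factor contributes one row of a linear system A x = b, weighted by
# 1/sigma, so low-noise factors dominate the solution.
rows, rhs, sigmas = [], [], []

def add_factor(row, value, sigma):
    rows.append(row); rhs.append(value); sigmas.append(sigma)

add_factor([1, 0, 0], 0.0, 0.01)    # prior on x0
add_factor([-1, 1, 0], 1.10, 0.1)   # odometry x1 - x0
add_factor([0, -1, 1], 1.05, 0.1)   # odometry x2 - x1
add_factor([0, 0, 1], 2.0, 0.05)    # map-matching constraint on x2

A = np.array(rows, float) / np.array(sigmas)[:, None]
b = np.array(rhs) / np.array(sigmas)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))
```

The raw odometry chain would place x2 at 2.15; the absolute map-matching factor pulls it back near 2.0, with the residual distributed over the trajectory, which is exactly the drift-elimination role the constraints play in the paper's graph.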
SegMap: 3D Segment Mapping using Data-Driven Descriptors
When performing localization and mapping, working at the level of structure
can be advantageous in terms of robustness to environmental changes and
differences in illumination. This paper presents SegMap: a map representation
solution to the localization and mapping problem based on the extraction of
segments in 3D point clouds. In addition to facilitating the computationally
intensive task of processing 3D point clouds, working at the level of segments
addresses the data compression requirements of real-time single- and
multi-robot systems. While current methods extract descriptors for the single
task of localization, SegMap leverages a data-driven descriptor in order to
extract meaningful features that can also be used for reconstructing a dense 3D
map of the environment and for extracting semantic information. This is
particularly interesting for navigation tasks and for providing visual feedback
to end-users such as robot operators, for example in search and rescue
scenarios. These capabilities are demonstrated in multiple urban driving and
search and rescue experiments. Our method yields a 28.3% increase in area
under the ROC curve over the current state of the art using eigenvalue-based
features. We also obtain reconstruction capabilities very similar to those of
a model trained specifically for this task. The SegMap implementation will be
made available open-source, along with easy-to-run demonstrations, at
www.github.com/ethz-asl/segmap. A video demonstration is available at
https://youtu.be/CMk4w4eRobg
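As a sketch of the classical eigenvalue-based features that SegMap is compared against (not SegMap's learned descriptor; the synthetic segment below is invented), computed from the covariance of a 3D segment:

```python
import numpy as np

def eigenvalue_features(points):
    """Classical eigenvalue-based descriptor for a 3D segment: the sorted
    eigenvalues of the point covariance give linearity / planarity /
    sphericity measures."""
    cov = np.cov(points.T)
    e1, e2, e3 = np.sort(np.linalg.eigvalsh(cov))[::-1]  # e1 >= e2 >= e3
    return {
        "linearity":  (e1 - e2) / e1,
        "planarity":  (e2 - e3) / e1,
        "sphericity": e3 / e1,
    }

rng = np.random.default_rng(0)
# Synthetic planar segment (e.g. a building facade): wide in x/y, thin in z.
plane = rng.normal(0.0, [5.0, 5.0, 0.05], size=(500, 3))
feats = eigenvalue_features(plane)
print({k: round(v, 2) for k, v in feats.items()})
```

Hand-crafted features like these describe only a segment's coarse shape, which is one motivation for replacing them with a data-driven descriptor that also supports reconstruction and semantics.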