Radar-on-Lidar: metric radar localization on prior lidar maps
Radar and lidar are two different range sensors, each with pros and cons for
various perception tasks in mobile robotics and autonomous driving. In this
paper, a Monte Carlo system is used to localize a robot equipped with a
rotating radar sensor on 2D lidar maps. We first train a conditional generative
adversarial network to translate raw radar data into lidar-like data, obtaining
reliable radar points from the generator. An efficient radar odometry is then
incorporated into the Monte Carlo system. Combining the initial guess from
odometry, a measurement model is proposed to match the radar data against the
prior lidar maps for final 2D positioning. We demonstrate the effectiveness of
the proposed localization framework on a public multi-session dataset. The
experimental results show that our system achieves high accuracy for long-term
localization in outdoor scenes.
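The pipeline above — predict with radar odometry, then weight particles by matching sensor points against the prior map — follows the standard Monte Carlo localization cycle. A minimal sketch, not the paper's implementation: the measurement model, noise levels, and the `score_fn` placeholder are illustrative assumptions.

```python
import numpy as np

def mcl_step(particles, weights, odom, score_fn, noise=(0.05, 0.05, 0.01)):
    """One predict-weight-resample cycle of Monte Carlo localization.

    particles: (n, 3) array of (x, y, theta) pose hypotheses.
    odom:      (dx, dy, dtheta) increment in the robot frame (e.g. from
               radar odometry). score_fn stands in for the measurement
               model that matches sensor points to the prior lidar map.
    """
    n = len(particles)
    rng = np.random.default_rng(0)
    # Predict: apply the odometry increment in each particle's frame,
    # perturbed by motion noise (placeholder standard deviations).
    dx, dy, dth = odom
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    step = np.stack([c * dx - s * dy, s * dx + c * dy, np.full(n, dth)], axis=1)
    particles = particles + step + rng.normal(0.0, noise, size=(n, 3))
    # Weight: the measurement model scores each predicted pose.
    weights = weights * np.array([score_fn(p) for p in particles])
    weights = weights / weights.sum()
    # Systematic resampling concentrates particles at likely poses.
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```

In the paper's setting, `score_fn` would transform the GAN-refined radar points by the particle pose and compare them against the 2D lidar map.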
LocNet: Global localization in 3D point clouds for mobile vehicles
Global localization in 3D point clouds is the challenging problem of estimating
the pose of a vehicle without any prior knowledge. In this paper, a solution is
presented that achieves place recognition and metric pose estimation in a
global prior map. Specifically, we present a semi-handcrafted representation
learning method for LiDAR point clouds using siamese LocNets, which casts the
place recognition problem as a similarity modeling problem. With the
representations learned by LocNet, a global localization framework with
range-only observations is proposed. To demonstrate the performance and
effectiveness of our global localization system, we compare against other
algorithms on the KITTI dataset and evaluate on our long-term multi-session
datasets. The results show that our system achieves high accuracy. Comment: 6
pages, accepted to IV 2018
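To make "casting place recognition as similarity modeling" concrete, here is a toy rotation-invariant scan signature in the spirit of LocNet's semi-handcrafted ring representation. The ring count, range, and plain L2 similarity are illustrative assumptions; LocNet learns the embedding with a siamese network.

```python
import numpy as np

def ring_signature(points, n_rings=8, max_range=40.0):
    """Normalized histogram of point ranges over concentric rings.

    Because only each point's range is used, the signature is invariant
    to the yaw of the scan (an illustrative stand-in for LocNet's
    per-ring statistics).
    """
    r = np.linalg.norm(points[:, :2], axis=1)
    hist, _ = np.histogram(r, bins=n_rings, range=(0.0, max_range))
    return hist / max(hist.sum(), 1)

def similarity(sig_a, sig_b):
    """Place similarity as negative L2 distance between signatures; a
    learned siamese embedding would replace the raw signatures here."""
    return -np.linalg.norm(sig_a - sig_b)
```

Two scans of the same place taken at different headings map to (nearly) identical signatures, so similarity search over a database of signatures yields place recognition.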
One RING to Rule Them All: Radon Sinogram for Place Recognition, Orientation and Translation Estimation
LiDAR-based global localization is a fundamental problem for mobile robots.
It consists of two stages, place recognition and pose estimation, and yields
the current orientation and translation, using only the current scan as query
and a database of map scans. Inspired by the definition of a recognized place,
we consider that a good global localization solution should maintain pose
estimation accuracy even at a lower place density. Following this idea, we
propose a novel framework for sparse place-based global localization, which
utilizes a unified and learning-free representation, the Radon sinogram (RING),
for all sub-tasks. Based on a theoretical derivation, a translation-invariant
descriptor and an orientation-invariant metric are proposed for place
recognition, achieving certifiable robustness against arbitrary orientation and
large translation between query and map scan. In addition, we utilize the
properties of RING to propose a globally convergent solver for both orientation
and translation estimation, arriving at global localization. Evaluation of the
proposed RING-based framework validates its feasibility and demonstrates
superior performance even at a lower place density.
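The key property the framework exploits — rotating a scan circularly shifts its sinogram along the angle axis — can be demonstrated directly on a point cloud. The discretization below (angle and range bins, projection-histogram sinogram) is an illustrative choice, not the paper's configuration.

```python
import numpy as np

def sinogram(points, n_angles=72, n_bins=64, max_r=30.0):
    """Row i is the histogram of the points projected onto direction
    angle_i; rotating the cloud circularly shifts the rows."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    rows = []
    for a in angles:
        proj = points[:, 0] * np.cos(a) + points[:, 1] * np.sin(a)
        hist, _ = np.histogram(proj, bins=n_bins, range=(-max_r, max_r))
        rows.append(hist)
    return np.array(rows, dtype=float)

def estimate_rotation(sino_query, sino_map):
    """Recover the yaw between two scans by circular cross-correlation
    of their sinograms over the angle axis."""
    n = sino_query.shape[0]
    scores = [np.sum(sino_query * np.roll(sino_map, k, axis=0))
              for k in range(n)]
    return 2.0 * np.pi * int(np.argmax(scores)) / n
```

Because the correlation is evaluated over all circular shifts, the best shift is a global optimum of this discretized score rather than the result of a local search, which mirrors the spirit of the globally convergent orientation solver.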
Communication constrained cloud-based long-term visual localization in real time
Visual localization is one of the primary capabilities for mobile robots.
Long-term visual localization in real time is particularly challenging: the
robot must efficiently localize itself using visual data whose appearance may
change significantly over time. In this paper, we propose a cloud-based visual
localization system targeting long-term localization in
real time. On the robot, we employ two estimators to achieve accurate and
real-time performance. One is a sliding-window based visual inertial odometry,
which integrates constraints from consecutive observations and self-motion
measurements, as well as the constraints induced by localization on the cloud.
This estimator builds a local visual submap as the virtual observation which is
then sent to the cloud as new localization constraints. The other is a
delayed-state Extended Kalman Filter that fuses the robot pose localized on
the cloud, the local odometry, and the high-frequency inertial
measurements. On the cloud, we propose a longer sliding-window based
localization method to aggregate the virtual observations for a larger field of
view, leading to more robust alignment between the virtual observations and the
map. Under this architecture, the robot can achieve drift-free, real-time
localization using onboard resources even over a network with limited
bandwidth, high latency, and packet loss, which enables autonomous navigation
in real-world environments. We evaluate the effectiveness of our system on a
dataset with challenging seasonal and illumination variations. We
further validate the robustness of the system under challenging network
conditions.
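The essence of fusing a delayed cloud result with high-rate local estimates can be shown with a toy 1D example: buffer the odometry increments, and when the cloud returns a pose fix for a past timestamp, correct the buffered past state and replay the newer increments. The class, timestamps, and scalar blend gain below are illustrative assumptions standing in for the paper's delayed-state Extended Kalman Filter.

```python
from collections import deque

class DelayedFuser:
    """Toy 1D delayed-measurement fusion (illustrative, not the paper's EKF)."""

    def __init__(self, x0=0.0):
        self.x = x0
        self.log = deque()  # (timestamp, odometry increment) history

    def propagate(self, t, dx):
        """High-rate update from local odometry."""
        self.x += dx
        self.log.append((t, dx))

    def correct(self, t_fix, x_fix, gain=0.8):
        """Fuse a cloud pose fix that refers to past time t_fix.

        Roll back to the fix time by subtracting the increments logged
        since, blend in the fix (gain stands in for the Kalman gain),
        then replay the newer increments on top of the corrected state.
        """
        replay = [dx for (t, dx) in self.log if t > t_fix]
        x_past = self.x - sum(replay)
        x_past = x_past + gain * (x_fix - x_past)
        self.x = x_past + sum(replay)
        # Drop history at or before the fix time; it is now absorbed.
        while self.log and self.log[0][0] <= t_fix:
            self.log.popleft()
```

The same replay idea is what lets the robot keep publishing real-time poses while cloud results arrive seconds late over a slow or lossy network.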