Communication constrained cloud-based long-term visual localization in real time
Visual localization is one of the primary capabilities for mobile robots.
Long-term visual localization in real time is particularly challenging: the
robot must localize itself efficiently from visual data whose appearance may
change significantly over time. In this paper, we propose a cloud-based visual
localization system targeting long-term localization in real time. On the
robot, we employ two estimators to achieve accurate, real-time performance.
One is a sliding-window visual-inertial odometry estimator, which integrates
constraints from consecutive observations and self-motion measurements, as
well as constraints induced by localization on the cloud. This estimator
builds a local visual submap as a virtual observation, which is then sent to
the cloud to provide new localization constraints. The other is a
delayed-state extended Kalman filter that fuses the robot pose localized on
the cloud, the local odometry, and the high-frequency inertial measurements.
On the cloud, we propose a longer-sliding-window localization method that
aggregates the virtual observations into a larger field of view, leading to
more robust alignment between the virtual observations and the map. Under this
architecture, the robot achieves drift-free, real-time localization using
onboard resources, even over a network with limited bandwidth, high latency,
and packet loss, which enables autonomous navigation in real-world
environments. We evaluate the effectiveness of our system on a dataset with
challenging seasonal and illumination variations, and we further validate its
robustness under challenging network conditions.
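The delayed-state fusion is the part that a short sketch clarifies best: a cloud result is computed for a past timestamp, so the filter must update a buffered past state and then re-propagate to the present. Below is a minimal illustration of that pattern, assuming a simplified planar (x, y, yaw) state and odometry inputs; the class and function names are ours, not the paper's implementation.

import numpy as np
from collections import deque

class DelayedStateEKF:
    """Sketch of a delayed-state EKF: a short history of stamped states
    is kept so a cloud pose computed for a past timestamp can still be
    fused, after which the buffered inputs are replayed to 'now'."""

    def __init__(self, history_len=200):
        self.x = np.zeros(3)                 # planar pose: x, y, yaw
        self.P = np.eye(3) * 1e-3            # pose covariance
        self.history = deque(maxlen=history_len)   # (t, x, P, u, dt)

    def predict(self, t, u, dt):
        """Propagate with odometry input u = (v, omega) over dt."""
        self.history.append((t, self.x.copy(), self.P.copy(), u, dt))
        v, w = u
        th = self.x[2]
        self.x = self.x + np.array([v * np.cos(th) * dt,
                                    v * np.sin(th) * dt,
                                    w * dt])
        F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                      [0.0, 1.0,  v * np.cos(th) * dt],
                      [0.0, 0.0,  1.0]])
        Q = np.diag([1e-3, 1e-3, 1e-4]) * dt   # illustrative noise values
        self.P = F @ self.P @ F.T + Q

    def fuse_delayed_pose(self, t_meas, z, R_meas):
        """Fuse an absolute pose z = (x, y, yaw) computed on the cloud
        for the past time t_meas, then re-propagate to current time."""
        # Rewind: pop the history entries newer than the measurement.
        replay = []
        while self.history and self.history[-1][0] >= t_meas:
            replay.append(self.history.pop())
        if replay:
            _, x0, P0, _, _ = replay[-1]     # state nearest to t_meas
            self.x, self.P = x0.copy(), P0.copy()
        # Standard EKF update with identity measurement model H = I
        # (yaw wrap-around is ignored for brevity).
        H = np.eye(3)
        K = self.P @ H.T @ np.linalg.inv(H @ self.P @ H.T + R_meas)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P
        # Replay the stored inputs, oldest first, back to the present.
        for t_i, _, _, u_i, dt_i in reversed(replay):
            self.predict(t_i, u_i, dt_i)

Replaying the buffered inputs after the delayed update is what lets the onboard odometry run at full rate even when the cloud result arrives with seconds of latency.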
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map) and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and a tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions at robotics conferences: Do robots need SLAM? And is
SLAM solved?
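For readers new to the area, the de facto standard formulation referenced above is maximum-a-posteriori (MAP) estimation over a factor graph. The sketch below uses our own notation rather than the survey's: X stacks the robot states and map variables, z_k are the measurements with models h_k and Gaussian noise covariances Sigma_k.

% MAP formulation of SLAM over a factor graph (notation ours):
\begin{equation*}
  X^{\ast} \;=\; \operatorname*{arg\,max}_{X}\, p(X \mid Z)
           \;=\; \operatorname*{arg\,min}_{X} \sum_{k}
                 \bigl\lVert h_k(X_k) - z_k \bigr\rVert_{\Sigma_k}^{2}
\end{equation*}

The second equality holds under Gaussian noise assumptions, turning MAP inference into a nonlinear least-squares problem that is typically solved with iterative methods such as Gauss-Newton or Levenberg-Marquardt.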
CARLA-Loc: Synthetic SLAM Dataset with Full-stack Sensor Setup in Challenging Weather and Dynamic Environments
The robustness of SLAM algorithms under challenging environmental conditions
is crucial for autonomous driving, yet the impact of these conditions remains
largely unknown, given the difficulty of arbitrarily varying the relevant
environmental parameters of the same environment in the real world. We
therefore propose CARLA-Loc, a synthetic dataset of challenging and dynamic
environments built on the CARLA simulator. We integrate multiple sensors into
the dataset with strict calibration, synchronization, and precise
timestamping. The dataset comprises 7 maps and 42 sequences with different
dynamic levels and weather conditions. Objects in both the stereo images and
the point clouds are well segmented with their class labels. We evaluate 5
visual-based and 4 LiDAR-based approaches on various sequences and analyze the
effect of challenging environmental factors on localization accuracy,
demonstrating the applicability of the proposed dataset for validating SLAM
algorithms.
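Localization accuracy in evaluations like this is commonly reported as absolute trajectory error (ATE) after rigidly aligning the estimated trajectory to ground truth. A minimal sketch, assuming the two trajectories are already timestamp-associated as N x 3 position arrays; the function name is ours, and the alignment follows the common Kabsch/Umeyama closed form.

import numpy as np

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error between two
    timestamp-associated N x 3 position arrays, after a rigid
    least-squares alignment of est onto gt."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(E.T @ G)          # 3x3 cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # rotation est -> gt
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))

Without the alignment step, a constant offset between the estimator's and the ground-truth coordinate frames would dominate the reported error.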
Benchmarking Visual-Inertial Deep Multimodal Fusion for Relative Pose Regression and Odometry-aided Absolute Pose Regression
Visual-inertial localization is a key problem in computer vision and robotics
applications such as virtual reality, self-driving cars, and aerial vehicles.
The goal is to estimate an accurate pose of an object when either the
environment or the dynamics are known. Recent methods directly regress the pose
using convolutional and spatio-temporal networks. Absolute pose regression
(APR) techniques predict the absolute camera pose from an image input in a
known scene. Odometry methods perform relative pose regression (RPR) that
predicts the relative pose from a known object dynamic (visual or inertial
inputs). The localization task can be improved by retrieving information of
both data sources for a cross-modal setup, which is a challenging problem due
to contradictory tasks. In this work, we conduct a benchmark to evaluate deep
multimodal fusion based on PGO and attention networks. Auxiliary and Bayesian
learning are integrated for the APR task. We show accuracy improvements for the
RPR-aided APR task and for the RPR-RPR task for aerial vehicles and hand-held
devices. We conduct experiments on the EuRoC MAV and PennCOSYVIO datasets, and
record a novel industry dataset.
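Of the fusion mechanisms benchmarked, cross-modal attention is the easiest to make concrete. The following is an illustrative sketch, assuming precomputed per-frame visual and inertial feature sequences; the module layout, names, and dimensions are ours, not the benchmark's actual architecture.

import torch
import torch.nn as nn

class AttentionFusionPoseNet(nn.Module):
    """Sketch: visual tokens attend to inertial tokens, and the pooled
    fused feature feeds separate translation and orientation heads."""

    def __init__(self, d_vis=512, d_imu=128, d_model=256):
        super().__init__()
        self.proj_vis = nn.Linear(d_vis, d_model)
        self.proj_imu = nn.Linear(d_imu, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4,
                                          batch_first=True)
        self.head_t = nn.Linear(d_model, 3)   # translation
        self.head_q = nn.Linear(d_model, 4)   # orientation quaternion

    def forward(self, f_vis, f_imu):
        # f_vis: (B, T_v, d_vis), f_imu: (B, T_i, d_imu)
        q = self.proj_vis(f_vis)
        kv = self.proj_imu(f_imu)
        fused, _ = self.attn(q, kv, kv)       # cross-modal attention
        z = fused.mean(dim=1)                 # pool over time
        t = self.head_t(z)
        q_raw = self.head_q(z)
        quat = q_raw / q_raw.norm(dim=-1, keepdim=True)  # unit quaternion
        return t, quat

Splitting the regression into a translation head and a normalized quaternion head is a common design for pose regression networks; the normalization keeps the orientation output on the unit sphere.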