A Factor Graph Approach to Multi-Camera Extrinsic Calibration on Legged Robots
Legged robots are becoming popular not only in research, but also in
industry, where they can demonstrate their superiority over wheeled machines in
a variety of applications. Whether acting as mobile manipulators or simply as
all-terrain ground vehicles, these machines need to precisely track the desired
base and end-effector trajectories, perform Simultaneous Localization and
Mapping (SLAM), and move in challenging environments, all while keeping
balance. A crucial aspect for these tasks is that all onboard sensors must be
properly calibrated and synchronized to provide consistent signals for all the
software modules they feed. In this paper, we focus on the problem of
calibrating the relative pose between a set of cameras and the base link of a
quadruped robot. This pose is fundamental to successfully perform sensor
fusion, state estimation, mapping, and any other task requiring visual
feedback. To solve this problem, we propose an approach based on factor graphs
that jointly optimizes the mutual position of the cameras and the robot base
using kinematics and fiducial markers. We also quantitatively compare its
performance with other state-of-the-art methods on the hydraulic quadruped
robot HyQ. The proposed approach is simple, modular, and independent of
external devices other than the fiducial marker.
Comment: To appear in "The Third IEEE International Conference on Robotic Computing" (IEEE IRC 2019).
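The joint optimization described in this abstract can be sketched as an ordinary nonlinear least-squares problem over the unknown base-to-camera extrinsic, given base poses from kinematics and camera-to-marker poses from a fiducial detector. This is a minimal illustrative stand-in for a full factor-graph solver, not the authors' implementation; all poses and synthetic data below are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def pose_to_mat(rvec, t):
    """Build a 4x4 homogeneous transform from a rotation vector and translation."""
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(rvec).as_matrix()
    T[:3, 3] = t
    return T

def residuals(x, base_poses, marker_obs, T_wm):
    """Stacked pose errors between predicted and observed marker poses."""
    T_bc = pose_to_mat(x[:3], x[3:])  # unknown base-to-camera extrinsic
    res = []
    for T_wb, T_cm_meas in zip(base_poses, marker_obs):
        T_cm_pred = np.linalg.inv(T_wb @ T_bc) @ T_wm  # predicted camera-to-marker pose
        dT = np.linalg.inv(T_cm_meas) @ T_cm_pred      # relative pose error
        res.append(np.concatenate([R.from_matrix(dT[:3, :3]).as_rotvec(), dT[:3, 3]]))
    return np.concatenate(res)

# Synthetic example: a fixed world-to-marker pose and base poses from "kinematics".
rng = np.random.default_rng(0)
T_wm = pose_to_mat([0.0, 0.0, 0.3], [2.0, 0.5, 1.0])
x_true = np.array([0.1, -0.2, 0.05, 0.2, 0.0, 0.3])  # ground-truth extrinsic
T_bc_true = pose_to_mat(x_true[:3], x_true[3:])
base_poses = [pose_to_mat(0.1 * rng.standard_normal(3), rng.standard_normal(3))
              for _ in range(8)]
marker_obs = [np.linalg.inv(T_wb @ T_bc_true) @ T_wm for T_wb in base_poses]

sol = least_squares(residuals, np.zeros(6), args=(base_poses, marker_obs, T_wm))
```

With noise-free observations the optimizer recovers the ground-truth extrinsic; a real pipeline would add robust losses and measurement noise models on each factor.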
Incremental Visual-Inertial 3D Mesh Generation with Structural Regularities
Visual-Inertial Odometry (VIO) algorithms typically rely on a point cloud
representation of the scene that does not model the topology of the
environment. A 3D mesh instead offers a richer, yet lightweight, model.
Nevertheless, building a 3D mesh out of the sparse and noisy 3D landmarks
triangulated by a VIO algorithm often results in a mesh that does not fit the
real scene. In order to regularize the mesh, previous approaches decouple state
estimation from the 3D mesh regularization step, and either limit the 3D mesh
to the current frame or let the mesh grow indefinitely. We propose instead to
tightly couple mesh regularization and state estimation by detecting and
enforcing structural regularities in a novel factor-graph formulation. We also
propose to incrementally build the mesh by restricting its extent to the
time-horizon of the VIO optimization; the resulting 3D mesh covers a larger
portion of the scene than a per-frame approach while its memory usage and
computational complexity remain bounded. We show that our approach successfully
regularizes the mesh, while improving localization accuracy, when structural
regularities are present, and remains operational in scenes without
regularities.
Comment: 7 pages, 5 figures, accepted at ICRA.
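The idea of restricting the mesh to the time horizon of the VIO optimization can be illustrated with a small sketch: landmarks older than the horizon are dropped, and the surviving points are re-triangulated. The sliding-window container and the 2.5D Delaunay triangulation below are illustrative assumptions, not the paper's method.

```python
from collections import deque
import numpy as np
from scipy.spatial import Delaunay

class SlidingWindowMesh:
    """Keep only landmarks inside a fixed time horizon and re-triangulate them."""
    def __init__(self, horizon):
        self.horizon = horizon
        self.landmarks = deque()  # (timestamp, xyz) pairs, oldest first

    def add(self, t, points):
        for p in points:
            self.landmarks.append((t, np.asarray(p, float)))
        # drop landmarks that left the optimization horizon
        while self.landmarks and self.landmarks[0][0] < t - self.horizon:
            self.landmarks.popleft()

    def triangulate(self):
        pts = np.array([p for _, p in self.landmarks])
        # 2.5D mesh: triangulate the x-y projection, keep z on the vertices
        tri = Delaunay(pts[:, :2])
        return pts, tri.simplices
```

Because the window is bounded, both the number of vertices and the re-triangulation cost stay bounded, mirroring the memory/complexity claim in the abstract.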
Incrementally Learned Mixture Models for GNSS Localization
GNSS localization is an important part of today's autonomous systems,
although it suffers from non-Gaussian errors caused by non-line-of-sight
effects. Recent methods are able to mitigate these effects by including the
corresponding distributions in the sensor fusion algorithm. However, these
approaches require prior knowledge about the sensor's distribution, which is
often not available. We introduce a novel sensor fusion algorithm based on
variational Bayesian inference that is able to approximate the true
distribution with a Gaussian mixture model and to learn its parametrization
online. The proposed Incremental Variational Mixture algorithm automatically
adapts the number of mixture components to the complexity of the measurement's
error distribution. We compare the proposed algorithm against current
state-of-the-art approaches using a collection of open-access real-world
datasets and demonstrate its superior localization accuracy.
Comment: 8 pages, 5 figures, published in the proceedings of the IEEE Intelligent Vehicles Symposium (IV) 201
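The core mechanism, variational Bayesian inference pruning unused mixture components so the model complexity adapts to the error distribution, can be demonstrated with scikit-learn's `BayesianGaussianMixture`. This is an offline stand-in for the paper's incremental algorithm, and the bimodal "pseudorange error" data below is synthetic.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# Bimodal error model: a tight main mode plus an NLOS-like offset mode
errors = np.concatenate([rng.normal(0.0, 0.5, 300),
                         rng.normal(10.0, 0.5, 300)]).reshape(-1, 1)

# A small Dirichlet concentration prior drives the weights of
# unneeded components toward zero, so only the modes actually
# present in the data keep significant mass.
gmm = BayesianGaussianMixture(n_components=8,
                              weight_concentration_prior=1e-3,
                              random_state=0).fit(errors)
active = int((gmm.weights_ > 0.2).sum())
```

Although eight components are allocated, only the two that the data supports retain noticeable weight; the rest are effectively switched off by the variational prior.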
Factor graph fusion of raw GNSS sensing with IMU and lidar for precise robot localization without a base station
Accurate localization is a core component of a robot's navigation system. To this end, global navigation satellite systems (GNSS) can provide absolute measurements outdoors and, therefore, eliminate long-term drift. However, fusing GNSS data with other sensor data is not trivial, especially when a robot moves between areas with and without sky view. We propose a robust approach that tightly fuses raw GNSS receiver data with inertial measurements and, optionally, lidar observations for precise and smooth mobile robot localization. A factor graph with two types of GNSS factors is proposed. First, factors based on pseudoranges, which allow for global localization on Earth. Second, factors based on carrier phases, which enable highly accurate relative localization, which is useful when other sensing modalities are challenged. Unlike traditional differential GNSS, this approach does not require a connection to a base station. On a public urban driving dataset, our approach achieves accuracy comparable to a state-of-the-art algorithm that fuses visual-inertial odometry with GNSS data, despite our approach using only inertial and GNSS data, without the camera. We also demonstrate the robustness of our approach using data from a car and a quadruped robot moving in environments with little sky visibility, such as a forest. The accuracy in the global Earth frame is still 1–2 m, while the estimated trajectories are discontinuity-free and smooth. We also show how lidar measurements can be tightly integrated. We believe this is the first system that fuses raw GNSS observations (as opposed to fixes) with lidar in a factor graph.
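To make the pseudorange factor concrete: a pseudorange measures the satellite-to-receiver range plus a common receiver clock bias, so with four or more satellites the receiver position and clock bias can be solved jointly. The batch least-squares sketch below (with synthetic satellite geometry) illustrates only this single-epoch residual, not the full factor graph with inertial, carrier-phase, and lidar factors.

```python
import numpy as np
from scipy.optimize import least_squares

def pseudorange_residuals(x, sat_pos, rho):
    """x = [receiver x, y, z, clock bias]; all quantities in meters."""
    p, b = x[:3], x[3]
    # predicted pseudorange = geometric range + receiver clock bias
    return np.linalg.norm(sat_pos - p, axis=1) + b - rho

# Synthetic satellite positions (meters) and a true receiver state
sat_pos = np.array([[ 2.0e7,  0.0,    1.0e7],
                    [-1.5e7,  1.0e7,  1.2e7],
                    [ 0.0,   -2.0e7,  1.1e7],
                    [ 1.0e7,  1.5e7,  0.9e7],
                    [-0.5e7, -1.0e7,  2.0e7]])
p_true = np.array([1.2e6, -0.4e6, 0.1e6])
b_true = 150.0  # receiver clock bias expressed in meters
rho = np.linalg.norm(sat_pos - p_true, axis=1) + b_true

sol = least_squares(pseudorange_residuals, np.zeros(4), args=(sat_pos, rho))
```

In the factor-graph setting, one such residual per tracked satellite is attached to each pose node, instead of being solved in isolation per epoch.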
Integrating Visual Foundation Models for Enhanced Robot Manipulation and Motion Planning: A Layered Approach
This paper presents a novel layered framework that integrates visual
foundation models to improve robot manipulation tasks and motion planning. The
framework consists of five layers: Perception, Cognition, Planning, Execution,
and Learning. Using visual foundation models, we enhance the robot's perception
of its environment, enabling more efficient task understanding and accurate
motion planning. This approach allows for real-time adjustments and continual
learning, leading to significant improvements in task execution. Experimental
results demonstrate the effectiveness of the proposed framework in various
robot manipulation tasks and motion planning scenarios, highlighting its
potential for practical deployment in dynamic environments.
Comment: 3 pages, 2 figures, IEEE Workshop
Efficient Global Occupancy Mapping for Mobile Robots using OpenVDB
In this work, we present a fast occupancy map building approach based on the
VDB data structure. Existing log-odds-based occupancy mapping systems are often
unable to keep up with the high point densities and frame rates of modern
sensors. We therefore propose a highly optimized approach based on a modern
data structure originating in computer graphics. A multithreaded insertion
scheme allows occupancy map building at unprecedented speed, and multiple
optimizations allow for a customizable tradeoff between runtime and map
quality. We first demonstrate the effectiveness of the approach quantitatively
on a set of ablation studies and typical benchmark sets, before practically
demonstrating the system on a legged robot and a UAV.
Comment: 6 pages, presented at the Agile Robotics Workshop at IROS202
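The log-odds update at the heart of such systems is simple: each ray decrements the log-odds of traversed voxels and increments the endpoint voxel, with clamping. The sketch below uses a plain hash map of voxel keys as a stand-in for the hierarchical VDB tree; the parameter values are illustrative defaults, not the paper's.

```python
import math
from collections import defaultdict

class SparseOccupancyGrid:
    """Minimal log-odds occupancy map over a sparse voxel hash."""
    def __init__(self, voxel_size=0.1, l_hit=0.85, l_miss=-0.4,
                 l_min=-2.0, l_max=3.5):
        self.voxel_size = voxel_size
        self.l_hit, self.l_miss = l_hit, l_miss
        self.l_min, self.l_max = l_min, l_max
        self.log_odds = defaultdict(float)  # unknown voxels default to 0.0

    def key(self, p):
        return tuple(int(math.floor(c / self.voxel_size)) for c in p)

    def update(self, key, delta):
        v = self.log_odds[key] + delta
        self.log_odds[key] = min(max(v, self.l_min), self.l_max)  # clamp

    def insert_ray(self, origin, endpoint, steps=32):
        # free-space carving along the ray, then a hit at the endpoint
        for i in range(steps):
            t = i / steps
            p = [o + t * (e - o) for o, e in zip(origin, endpoint)]
            self.update(self.key(p), self.l_miss)
        self.update(self.key(endpoint), self.l_hit)

    def occupancy(self, p):
        l = self.log_odds.get(self.key(p), 0.0)
        return 1.0 / (1.0 + math.exp(-l))  # log-odds -> probability
```

A production system would replace the sampled ray with exact voxel traversal (e.g., a DDA) and shard insertions across threads, which is where the VDB tree's lock-friendly layout pays off.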
GTP-SLAM: Game-Theoretic Priors for Simultaneous Localization and Mapping in Multi-Agent Scenarios
Robots operating in complex, multi-player settings must simultaneously model
the environment and the behavior of human or robotic agents who share that
environment. Environmental modeling is often approached using Simultaneous
Localization and Mapping (SLAM) techniques; however, SLAM algorithms usually
neglect multi-player interactions. In contrast, a recent branch of the motion
planning literature uses dynamic game theory to explicitly model noncooperative
interactions of multiple agents in a known environment with perfect
localization. In this work, we fuse ideas from these disparate communities to
solve SLAM problems with game theoretic priors. We present GTP-SLAM, a novel,
iterative best response-based SLAM algorithm that accurately performs state
localization and map reconstruction in an uncharted scene, while capturing the
inherent game-theoretic interactions among multiple agents in that scene. By
formulating the underlying SLAM problem as a potential game, we inherit a
strong convergence guarantee. Empirical results indicate that, when deployed in
a realistic traffic simulation, our approach performs localization and mapping
more accurately than a standard bundle adjustment algorithm across a wide range
of noise levels.
Comment: 6 pages, 3 figures.
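The convergence guarantee mentioned above rests on a general property: in a potential game, every best-response step decreases a single shared potential function, so iterative best response cannot cycle. The toy two-player game below (quadratic costs with a coupling term, all values invented for illustration) shows this mechanism outside the SLAM setting.

```python
def best_response_step(x, targets, c):
    """One round of iterative best response for a two-player potential game
    with costs J_i = (x_i - a_i)^2 + c * (x_1 - x_2)^2."""
    x1, x2 = x
    a1, a2 = targets
    x1 = (a1 + c * x2) / (1.0 + c)  # argmin of J_1 over x_1, x_2 fixed
    x2 = (a2 + c * x1) / (1.0 + c)  # argmin of J_2 over x_2, x_1 fixed
    return x1, x2

def potential(x, targets, c):
    """Shared potential: each player's best response strictly decreases it."""
    x1, x2 = x
    a1, a2 = targets
    return (x1 - a1) ** 2 + (x2 - a2) ** 2 + c * (x1 - x2) ** 2

x, targets, c = (0.0, 0.0), (1.0, -1.0), 0.5
values = [potential(x, targets, c)]
for _ in range(50):
    x = best_response_step(x, targets, c)
    values.append(potential(x, targets, c))
```

The recorded potential values are monotonically non-increasing, and the iterates converge to the game's Nash equilibrium; GTP-SLAM inherits the same argument by casting the joint SLAM-and-interaction problem as a potential game.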