Increasing the Efficiency of 6-DoF Visual Localization Using Multi-Modal Sensory Data
Localization is a key requirement for mobile robot autonomy and human-robot
interaction. Vision-based localization is accurate and flexible, however, it
incurs a high computational burden which limits its application on many
resource-constrained platforms. In this paper, we address the problem of
performing real-time localization in large-scale 3D point cloud maps of
ever-growing size. While most systems using multi-modal information reduce
localization time by employing side-channel information in a coarse manner (e.g.,
WiFi for a rough prior position estimate), we propose to interweave the map
with rich sensory data. This multi-modal approach achieves two key goals
simultaneously. First, it enables us to harness additional sensory data to
localise against a map covering a vast area in real time; second, it allows us
to roughly localise devices that are not equipped with a camera. The
key to our approach is a localization policy based on a sequential Monte Carlo
estimator. The localiser uses this policy to attempt point-matching only in
nodes where it is likely to succeed, significantly increasing the efficiency of
the localization process. The proposed multi-modal localization system is
evaluated extensively in a large museum building. The results show that our
multi-modal approach not only increases the localization accuracy but also
significantly reduces computation time.
Comment: Presented at IEEE-RAS International Conference on Humanoid Robots (Humanoids) 201
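The sequential Monte Carlo localization policy described above can be illustrated with a minimal sketch. The function name, the node representation, and all parameters below are illustrative assumptions, not the paper's implementation: particles are spread over map nodes, weighted by a cheap side-channel likelihood, and expensive visual point-matching is attempted only where posterior mass concentrates.

```python
import random

def smc_localization_policy(nodes, signal_likelihood, n_particles=200, threshold=0.2):
    """Sketch of a sequential Monte Carlo policy: spread particles over
    map nodes, weight them by a cheap side-channel likelihood, and return
    only the nodes with enough posterior mass to justify expensive
    visual point-matching."""
    # Sample particles uniformly over the map nodes (the prior).
    particles = [random.choice(nodes) for _ in range(n_particles)]
    # Accumulate side-channel likelihood per node.
    weights = {}
    for p in particles:
        weights[p] = weights.get(p, 0.0) + signal_likelihood(p)
    total = sum(weights.values()) or 1.0
    posterior = {n: w / total for n, w in weights.items()}
    # Attempt point-matching only where the posterior is concentrated.
    return [n for n, w in posterior.items() if w >= threshold]
```

In the full system the particle set would also be propagated over time with a motion model and resampled; this sketch shows only the node-selection step that makes point-matching cheap.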
AlphaPilot: Autonomous Drone Racing
This paper presents a novel system for autonomous, vision-based drone racing
combining learned data abstraction, nonlinear filtering, and time-optimal
trajectory planning. The system has successfully been deployed at the first
autonomous drone racing world championship: the 2019 AlphaPilot Challenge.
Contrary to traditional drone racing systems, which only detect the next gate,
our approach makes use of any visible gate and takes advantage of multiple,
simultaneous gate detections to compensate for drift in the state estimate and
build a global map of the gates. The global map and drift-compensated state
estimate allow the drone to navigate through the race course even when the
gates are not immediately visible and further enable it to plan a near
time-optimal path through the race course in real time based on approximate
drone dynamics. The proposed system has been demonstrated to successfully guide
the drone through tight race courses, reaching speeds of up to 8 m/s, and ranked
second at the 2019 AlphaPilot Challenge.
Comment: Accepted at Robotics: Science and Systems 2020, associated video at https://youtu.be/DGjwm5PZQT
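The drift compensation from multiple simultaneous gate detections could look roughly like the following 2D sketch (a hypothetical helper, not the AlphaPilot code): each detection of a mapped gate yields one estimate of the accumulated odometry drift, and averaging over all visible gates gives the correction applied to the raw state estimate.

```python
def drift_compensated_state(raw_position, detections, gate_map):
    """Sketch of drift compensation from gate detections: each detection
    of a mapped gate gives an estimate of the accumulated odometry drift;
    averaging over all visible gates yields a correction applied to the
    raw 2D state estimate."""
    if not detections:
        return raw_position  # no visible gate: fly on the drifting estimate
    # Drift estimate per detection = where the gate was perceived minus
    # where the global map says it actually is.
    drifts = [(seen[0] - gate_map[gid][0], seen[1] - gate_map[gid][1])
              for gid, seen in detections]
    dx = sum(d[0] for d in drifts) / len(drifts)
    dy = sum(d[1] for d in drifts) / len(drifts)
    return (raw_position[0] - dx, raw_position[1] - dy)
```

The real system fuses these measurements in a nonlinear filter rather than averaging, but the averaging already conveys why multiple simultaneous detections help.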
Flight Dynamics-based Recovery of a UAV Trajectory using Ground Cameras
We propose a new method to estimate the 6-dof trajectory of a flying object
such as a quadrotor UAV within a 3D airspace monitored using multiple fixed
ground cameras. It is based on a new structure from motion formulation for the
3D reconstruction of a single moving point with known motion dynamics. Our main
contribution is a new bundle adjustment procedure which in addition to
optimizing the camera poses, regularizes the point trajectory using a prior
based on motion dynamics (or specifically flight dynamics). Furthermore, we can
infer the underlying control input sent to the UAV's autopilot that determined
its flight trajectory.
Our method requires neither perfect single-view tracking nor appearance
matching across views. For robustness, we allow the tracker to generate
multiple detections per frame in each video. The true detections and the data
association across videos are estimated using robust multi-view triangulation
and subsequently refined during our bundle adjustment procedure. Quantitative
evaluation on simulated data and experiments on real videos from indoor and
outdoor scenes demonstrate the effectiveness of our method.
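The dynamics-based regularization of the point trajectory can be illustrated in one dimension (a constant-velocity surrogate for the actual flight-dynamics prior; the function name and parameters are assumptions): the bundle-adjustment cost combines a data term pulling the trajectory toward the triangulated observations with a prior term penalizing accelerations.

```python
def smooth_trajectory(observations, lam=2.0, iters=2000, step=0.01):
    """Sketch of a dynamics prior in trajectory estimation: minimize
    sum_t (x_t - z_t)^2 + lam * sum_t (x_{t+1} - 2 x_t + x_{t-1})^2
    by gradient descent, i.e. a data term plus an acceleration penalty."""
    x = list(observations)  # initialize at the noisy triangulated points
    n = len(x)
    for _ in range(iters):
        grad = [2.0 * (x[t] - observations[t]) for t in range(n)]
        for t in range(1, n - 1):
            a = x[t + 1] - 2.0 * x[t] + x[t - 1]  # acceleration residual
            grad[t - 1] += 2.0 * lam * a
            grad[t] += -4.0 * lam * a
            grad[t + 1] += 2.0 * lam * a
        x = [x[t] - step * grad[t] for t in range(n)]
    return x
```

The paper's bundle adjustment additionally optimizes camera poses and uses actual flight dynamics rather than a generic smoothness term; this sketch isolates only the regularization idea.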
Real-Time Motion Planning of Legged Robots: A Model Predictive Control Approach
We introduce a real-time, constrained, nonlinear Model Predictive Control (MPC)
framework for the motion planning of legged robots. The proposed approach uses a constrained
optimal control algorithm known as Sequential Linear Quadratic (SLQ) control. We
improve the efficiency of this algorithm by introducing a multi-processing scheme
for estimating the value function in its backward pass, which has conventionally
been computed as a single sequential process. This parallel SLQ algorithm can
optimize longer time horizons without a proportional increase in computation
time. Thus, our MPC algorithm can
generate optimized trajectories for the next few phases of the motion within
only a few milliseconds. This outperforms the state of the art by at least one
order of magnitude. The performance of the approach is validated on a quadruped
robot for generating dynamic gaits such as trotting.
Comment: 8 page
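The value-function backward pass that this paper parallelizes is, at its core, a Riccati-style recursion. A scalar sketch (illustrative only, not the authors' SLQ implementation) shows both the sequential sweep and how the horizon can be cut into independently seeded segments:

```python
def riccati_backward(a, b, q, r, v_terminal, horizon):
    """One backward sweep of the scalar discrete Riccati recursion,
    the core of an LQ-style value-function pass."""
    v = v_terminal
    gains = []
    for _ in range(horizon):
        k = (a * b * v) / (r + b * b * v)    # feedback gain at this step
        v = q + a * a * v - k * (a * b * v)  # value-function update
        gains.append(k)
    return v, gains[::-1]  # gains reversed into forward-time order

def parallel_value_estimate(a, b, q, r, segments, v_guesses):
    """Sketch of the parallelization idea: cut the horizon into segments
    and run each segment's backward pass independently, seeding every
    segment with a guessed terminal value (e.g. from the previous MPC
    iteration). A process pool could map over segments; a plain list
    comprehension is used here to keep the sketch self-contained."""
    return [riccati_backward(a, b, q, r, vg, seg)
            for seg, vg in zip(segments, v_guesses)]
```

Because the recursion contracts toward a fixed point for a stabilizable system, moderately wrong terminal guesses are washed out within a segment, which is what makes the segment-wise scheme viable.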
Tightly Coupled 3D Lidar Inertial Odometry and Mapping
Ego-motion estimation is a fundamental requirement for most mobile robotic
applications. By sensor fusion, we can compensate for the deficiencies of
stand-alone sensors and provide more reliable estimations. We introduce a
tightly coupled lidar-IMU fusion method in this paper. By jointly minimizing
the cost derived from lidar and IMU measurements, the lidar-IMU odometry (LIO)
can maintain acceptable drift over long-term experiments, even in
challenging cases where the lidar measurements are degraded. Furthermore, to
obtain more reliable estimations of the lidar poses, a rotation-constrained
refinement algorithm (LIO-mapping) is proposed to further align the lidar poses
with the global map. The experimental results demonstrate that the proposed
method can estimate the poses of the sensor pair at the IMU update rate with
high precision, even under fast motion conditions or with insufficient
features.
Comment: Accepted by ICRA 201
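Tightly coupled fusion as joint cost minimization can be reduced to its simplest instance (a 1D sketch, not the LIO formulation, which jointly optimizes full poses over a sliding window): minimizing the sum of a lidar residual and an IMU residual has the inverse-variance-weighted mean as its closed form.

```python
def fuse_lidar_imu(lidar_estimate, imu_estimate, lidar_var, imu_var):
    """Minimal sketch of tightly coupled fusion as joint cost minimization:
    argmin_x (x - z_lidar)^2 / var_lidar + (x - z_imu)^2 / var_imu,
    whose closed form is the inverse-variance weighted mean. The fused
    variance is always smaller than either input variance."""
    w_l = 1.0 / lidar_var
    w_i = 1.0 / imu_var
    x = (w_l * lidar_estimate + w_i * imu_estimate) / (w_l + w_i)
    var = 1.0 / (w_l + w_i)
    return x, var
```

The shrinking fused variance is the quantitative sense in which fusion "compensates for the deficiencies of stand-alone sensors": a degraded lidar (large variance) simply contributes less weight rather than corrupting the estimate.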
A Bayesian approach to simultaneously recover camera pose and non-rigid shape from monocular images
This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper we bring the tools of the Simultaneous Localization and Map Building (SLAM) problem from a rigid to a deformable domain and use them to simultaneously recover the 3D shape of non-rigid surfaces and the sequence of poses of a moving camera. Under the assumption that the surface shape may be represented as a weighted sum of deformation modes, we show that the problem of estimating the modal weights along with the camera poses can be probabilistically formulated as a maximum a posteriori estimate and solved using an iterative least squares optimization. In addition, the probabilistic formulation we propose is very general and allows introducing different constraints without requiring any extra complexity. As a proof of concept, we show that local inextensibility constraints that prevent the surface from stretching can be easily integrated.
An extensive evaluation on synthetic and real data demonstrates that our method has several advantages over current non-rigid shape from motion approaches. In particular, we show that our solution is robust to large amounts of noise and outliers and that it does not need to track points over the whole sequence nor to use an initialization close to the ground truth.
Peer Reviewed. Postprint (author's final draft)
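The MAP estimation of modal weights can be sketched for a single deformation mode (illustrative names; the paper estimates many modes jointly with the camera poses via iterative least squares): under the model shape = rest_shape + w * mode, a Gaussian prior on w combined with Gaussian observation noise reduces to ridge-regularized least squares.

```python
def map_modal_weight(observed, rest_shape, mode, prior_var, noise_var):
    """Sketch of the MAP estimate for one deformation-mode weight w in
    the model: observed = rest_shape + w * mode + noise. A zero-mean
    Gaussian prior on w with variance prior_var adds the ridge term
    noise_var / prior_var to the ordinary least-squares denominator."""
    num = sum(m * (y - s0) for y, s0, m in zip(observed, rest_shape, mode))
    den = sum(m * m for m in mode) + noise_var / prior_var
    return num / den
```

As prior_var grows the estimate approaches the ordinary least-squares fit; a tight prior shrinks the weight toward zero, which is how the probabilistic formulation absorbs extra constraints "without requiring any extra complexity".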