Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and a tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
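As context for the "de-facto standard formulation" mentioned above, SLAM is commonly stated as maximum-a-posteriori estimation over a factor graph. The following is a sketch in standard notation; the symbols are illustrative, not taken from this abstract:

```latex
\mathcal{X}^{\star}
  = \arg\max_{\mathcal{X}} \; p(\mathcal{X} \mid \mathcal{Z})
  = \arg\min_{\mathcal{X}} \sum_{k} \big\| h_k(\mathcal{X}_k) - z_k \big\|^{2}_{\Omega_k}
```

where $\mathcal{X}$ collects the robot poses and landmark positions, $z_k$ are the measurements, $h_k$ the corresponding measurement models, and $\Omega_k$ the measurement information matrices; the second equality assumes Gaussian measurement noise.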
Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs
We address the problem of making human motion capture in the wild more
practical by using a small set of inertial sensors attached to the body. Since
the problem is heavily under-constrained, previous methods either use a large
number of sensors, which is intrusive, or they require additional video input.
We take a different approach and constrain the problem by: (i) making use of a
realistic statistical body model that includes anthropometric constraints and
(ii) using a joint optimization framework to fit the model to orientation and
acceleration measurements over multiple frames. The resulting tracker Sparse
Inertial Poser (SIP) enables 3D human pose estimation using only 6 sensors
(attached to the wrists, lower legs, back and head) and works for arbitrary
human motions. Experiments on the recently released TNT15 dataset show that,
using the same number of sensors, SIP achieves higher accuracy than the dataset
baseline without using any video data. We further demonstrate the effectiveness
of SIP on newly recorded challenging motions in outdoor scenarios such as
climbing or jumping over a wall.
Comment: 12 pages, Accepted at Eurographics 201
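The joint optimization over multiple frames described above can be sketched as minimizing an energy of roughly the following form. This is illustrative only; the symbols, weights, and term structure are assumptions, not taken from the paper:

```latex
E(\theta_{1:T}) = \sum_{t=1}^{T} \sum_{s=1}^{6}
  \Big[ \big\| \mathrm{Log}\big( \hat{R}_{s,t}^{\top}\, R_s(\theta_t) \big) \big\|^{2}
      + \lambda \, \big\| \hat{a}_{s,t} - a_s(\theta_{t-1:t+1}) \big\|^{2} \Big]
  + E_{\mathrm{prior}}(\theta_{1:T})
```

where $\theta_t$ are the body-model pose parameters at frame $t$, $\hat{R}_{s,t}$ and $\hat{a}_{s,t}$ are the measured orientation and acceleration of sensor $s$, the model acceleration $a_s$ is obtained by finite differences of predicted sensor positions over adjacent frames, and $E_{\mathrm{prior}}$ encodes the statistical body model's pose prior.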
On-Manifold Preintegration for Real-Time Visual-Inertial Odometry
Current approaches for visual-inertial odometry (VIO) are able to attain
highly accurate state estimation via nonlinear optimization. However, real-time
optimization quickly becomes infeasible as the trajectory grows over time. This
problem is exacerbated by the fact that inertial measurements arrive at a high
rate, leading to rapid growth of the number of variables in the
optimization. In this paper, we address this issue by preintegrating inertial
measurements between selected keyframes into single relative motion
constraints. Our first contribution is a \emph{preintegration theory} that
properly addresses the manifold structure of the rotation group. We formally
discuss the generative measurement model as well as the nature of the rotation
noise and derive the expression for the \emph{maximum a posteriori} state
estimator. Our theoretical development enables the computation of all necessary
Jacobians for the optimization and a-posteriori bias correction in analytic
form. The second contribution is to show that the preintegrated IMU model can
be seamlessly integrated into a visual-inertial pipeline under the unifying
framework of factor graphs. This enables the application of
incremental-smoothing algorithms and the use of a \emph{structureless} model
for visual measurements, which avoids optimizing over the 3D points, further
accelerating the computation. We perform an extensive evaluation of our
monocular VIO pipeline on real and simulated datasets. The results confirm
that our modelling effort leads to accurate state estimation in real-time,
outperforming state-of-the-art approaches.
Comment: 20 pages, 24 figures, accepted for publication in IEEE Transactions
on Robotics (TRO) 201
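For reference, the preintegrated relative-motion constraints between keyframes $i$ and $j$ have the following standard form (a sketch assuming gyroscope and accelerometer readings $\tilde{\omega}_k$, $\tilde{a}_k$, biases $b_g$, $b_a$, sampling interval $\Delta t$, and the $SO(3)$ exponential map $\mathrm{Exp}$):

```latex
\Delta R_{ij} = \prod_{k=i}^{j-1} \mathrm{Exp}\!\big((\tilde{\omega}_k - b_g)\,\Delta t\big), \qquad
\Delta v_{ij} = \sum_{k=i}^{j-1} \Delta R_{ik}\,(\tilde{a}_k - b_a)\,\Delta t, \qquad
\Delta p_{ij} = \sum_{k=i}^{j-1} \Big[ \Delta v_{ik}\,\Delta t
  + \tfrac{1}{2}\,\Delta R_{ik}\,(\tilde{a}_k - b_a)\,\Delta t^{2} \Big]
```

These quantities depend only on the IMU readings and bias estimates between the two keyframes, so they can be computed once and reused throughout the optimization instead of re-integrating at every iteration.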
The wake dynamics and flight forces of the fruit fly Drosophila melanogaster
We have used flow visualizations and instantaneous force measurements of tethered fruit flies (Drosophila melanogaster) to study the dynamics of force generation during flight. During each complete stroke cycle, the flies generate one single vortex loop consisting of vorticity shed during the downstroke and ventral flip. This gross pattern of wake structure in Drosophila is similar to those described for hovering birds and some other insects. The wake structure differed from those previously described, however, in that the vortex filaments shed during ventral stroke reversal did not fuse to complete a circular ring, but rather attached temporarily to the body to complete an inverted heart-shaped vortex loop. The attached ventral filaments of the loop subsequently slide along the length of the body and eventually fuse at the tip of the abdomen. We found no evidence for the shedding of wing-tip vorticity during the upstroke, and argue that this is due to an extreme form of the Wagner effect acting at that time. The flow visualizations predicted that maximum flight forces would be generated during the downstroke and ventral reversal, with little or no force generated during the upstroke. The instantaneous force measurements using laser-interferometry verified the periodic nature of force generation. Within each stroke cycle, there was one plateau of high force generation followed by a period of low force, which roughly correlated with the upstroke and downstroke periods. However, the fluctuations in force lagged behind their expected occurrence within the wing-stroke cycle by approximately 1 ms or one-fifth of the complete stroke cycle. This temporal discrepancy exceeds the range of expected inaccuracies and artifacts in the measurements, and we tentatively discuss the potential retarding effects within the underlying fluid mechanics
A Low Cost UWB Based Solution for Direct Georeferencing UAV Photogrammetry
Thanks to their flexibility and availability at reduced costs, Unmanned Aerial Vehicles (UAVs) have been recently used on a wide range of applications and conditions. Among these, they can play an important role in monitoring critical events (e.g., disaster monitoring) when the presence of humans close to the scene shall be avoided for safety reasons, in precision farming and surveying. Despite the very large number of possible applications, their usage is mainly limited by the availability of the Global Navigation Satellite System (GNSS) in the considered environment: indeed, GNSS is of fundamental importance in order to reduce positioning error derived by the drift of (low-cost) Micro-Electro-Mechanical Systems (MEMS) internal sensors. In order to make the usage of UAVs possible even in critical environments (when GNSS is not available or not reliable, e.g., close to mountains or in city centers, close to high buildings), this paper considers the use of a low cost Ultra Wide-Band (UWB) system as the positioning method. Furthermore, assuming the use of a calibrated camera, UWB positioning is exploited to achieve metric reconstruction on a local coordinate system. Once the georeferenced position of at least three points (e.g., positions of three UWB devices) is known, then georeferencing can be obtained, as well. The proposed approach is validated on a specific case study, the reconstruction of the façade of a university building. Average error on 90 check points distributed over the building façade, obtained by georeferencing by means of the georeferenced positions of four UWB devices at fixed positions, is 0.29 m. For comparison, the average error obtained by using four ground control points is 0.18 m
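Given the georeferenced positions of at least three non-collinear points, the georeferencing step amounts to estimating a similarity transform (scale, rotation, translation) between the local reconstruction frame and the global frame. A minimal sketch using the SVD-based Umeyama/Horn method follows; this is a generic illustration, not the authors' implementation:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding points, N >= 3, non-collinear.
    Returns (s, R, t) such that dst_i ~= s * R @ src_i + t.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    H = B.T @ A / len(src)            # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:     # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    var_src = (A ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying `s * R @ p + t` to every point of the local model then yields the georeferenced reconstruction; using more than the minimum three points (e.g., four UWB devices, as in the paper) averages out per-point positioning error.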
Surveyor spacecraft system - Surveyor 6 flight performance Final report
Surveyor 6 spacecraft flight performance characteristics, including data on television equipment, alpha scattering experiment, and powered flight translation.
Trajectory Representation and Landmark Projection for Continuous-Time Structure from Motion
This paper revisits the problem of continuous-time structure from motion, and
introduces a number of extensions that improve convergence and efficiency. The
formulation with a $C^2$-continuous spline for the trajectory
naturally incorporates inertial measurements, as derivatives of the sought
trajectory. We analyse the behaviour of split interpolation on $SO(3)$
and on $\mathbb{R}^3$, and a joint interpolation on $SE(3)$, and show
that the latter implicitly couples the direction of translation and rotation.
Such an assumption can make good sense for a camera mounted on a robot arm, but
not for hand-held or body-mounted cameras. Our experiments show that split
interpolation on $SO(3)$ and on $\mathbb{R}^3$ is preferable over $SE(3)$
interpolation in all tested cases. Finally, we investigate the
problem of landmark reprojection on rolling shutter cameras, and show that the
tested reprojection methods give similar quality, while their computational
load varies by a factor of 2.
Comment: Submitted to IJR
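A minimal sketch of the split-interpolation idea (illustrative only, not the paper's spline formulation): the rotation is interpolated along the $SO(3)$ geodesic via the exponential and logarithm maps, while the translation is interpolated linearly in $\mathbb{R}^3$, so the two components stay decoupled:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def so3_log(R):
    """Rotation matrix -> axis-angle vector (principal branch)."""
    th = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if th < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return th / (2.0 * np.sin(th)) * w

def split_interp(R0, p0, R1, p1, u):
    """Split interpolation at fraction u in [0, 1]:
    geodesic on SO(3) for rotation, linear on R^3 for translation."""
    R = R0 @ so3_exp(u * so3_log(R0.T @ R1))
    p = (1.0 - u) * p0 + u * p1
    return R, p
```

In a joint $SE(3)$ interpolation, by contrast, the interpolated translation follows a screw motion determined by the rotation, which is the implicit coupling of translation and rotation that the abstract refers to.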