Keyframe-based visual–inertial odometry using nonlinear optimization
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both stereo and monocular versions of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic-cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
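To make the structure of such an objective concrete, here is a minimal sketch of a combined visual–inertial cost (the residual definitions and weights below are generic assumptions, not the paper's exact formulation):

J(\mathbf{x}) = \sum_{k=1}^{K} \sum_{j \in \mathcal{J}(k)} \mathbf{e}_{r}^{k,j\,\top} \mathbf{W}_{r}^{k,j} \, \mathbf{e}_{r}^{k,j} \;+\; \sum_{k=1}^{K-1} \mathbf{e}_{s}^{k\,\top} \mathbf{W}_{s}^{k} \, \mathbf{e}_{s}^{k},

where \mathbf{e}_{r}^{k,j} is the reprojection residual of landmark j observed in keyframe k, \mathbf{e}_{s}^{k} is the inertial residual linking consecutive keyframes, and the weights \mathbf{W} are the corresponding inverse measurement covariances, so that minimizing J is a maximum-likelihood estimate under Gaussian noise.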
Cooperative monocular-based SLAM for multi-UAV systems in GPS-denied environments
This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This effect is especially noticeable when compared with other related visual SLAM configurations. In order to improve the observability properties, measurements of the relative distances between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimate of the aerial vehicles flying in formation.
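As an illustration of the kind of extra constraint involved (a sketch under assumed notation; the paper's actual measurement model may differ), a vision-derived relative-distance observation between UAVs i and j can be written as

h_{ij}(\mathbf{x}) = \lVert \mathbf{p}_i - \mathbf{p}_j \rVert + \eta_{ij},

where \mathbf{p}_i and \mathbf{p}_j are the vehicle positions in the common frame and \eta_{ij} is measurement noise. Rows of this form couple the per-vehicle states in the observation model, and it is precisely this coupling that the nonlinear observability analysis evaluates.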
Inertial navigation aided by simultaneous localization and mapping
Unmanned aerial vehicle technologies are becoming smaller and cheaper to use, and the challenges of payload limitation in unmanned aerial vehicles are being overcome. Integrated navigation system design requires the selection of a set of sensors and computation power that provides reliable and accurate navigation parameters (position, velocity and attitude) with high update rates and bandwidth in a small and cost-effective manner. Many of today's operational unmanned aerial vehicle navigation systems rely on inertial sensors as a primary measurement source. Inertial Navigation alone, however, suffers from slow divergence with time. This divergence is often compensated for by employing some additional source of navigation information external to Inertial Navigation. From the 1990s to the present day, the Global Positioning System has been the dominant navigation aid for Inertial Navigation. In a number of scenarios, however, Global Positioning System measurements may be completely unavailable, or they may simply not be precise (or reliable) enough to adequately update the Inertial Navigation; hence, alternative methods have seen great attention. Aiding Inertial Navigation with vision sensors has been the favoured solution over the past several years. Inertial and vision sensors, with their complementary characteristics, have the potential to meet the requirements for reliable and accurate navigation parameters.
In this thesis we address Inertial Navigation position divergence. The information for updating the position comes from a combination of vision and motion. When using such a combination, many of the difficulties of vision sensors (relative depth, geometry and size of objects, image blur, etc.) can be circumvented. Motion grants the vision sensors many cues that help to better acquire information about the environment, for instance creating a precise map of the environment and localizing within it.
We propose changes to the Simultaneous Localization and Mapping augmented state vector in order to take repeated measurements of the map point. We show that these repeated measurements, combined with certain manoeuvres (motion) around or by the map point, are crucial for constraining the Inertial Navigation position divergence (bounded estimation error) while manoeuvring in the vicinity of the map point. This eliminates some of the uncertainty of the map point estimates, i.e. it reduces the covariance of the map point estimates. This concept brings a different parameterization (feature initialisation) of the map points in Simultaneous Localization and Mapping, and we refer to it as the concept of aiding Inertial Navigation by Simultaneous Localization and Mapping.
We show that building such an integrated navigation system requires coordination with the guidance and control measurements and the vehicle task itself, in order to perform the required vehicle manoeuvres (motion) and achieve better navigation accuracy. This fact brings new challenges to the practical design of these modern jam-proof, Global Positioning System-free autonomous navigation systems.
Further to the concept of aiding Inertial Navigation by Simultaneous Localization and Mapping, we have investigated how a bearing-only sensor, such as a single camera, can be used for aiding Inertial Navigation. The results of the concept of Inertial Navigation aided by Simultaneous Localization and Mapping were used. A new parameterization of the map point in Bearing Only Simultaneous Localization and Mapping is proposed. Because of the number of significant problems that appear when implementing the Extended Kalman Filter in Inertial Navigation aided by Bearing Only Simultaneous Localization and Mapping, other algorithms such as the Iterated Extended Kalman Filter, the Unscented Kalman Filter and Particle Filters were implemented. From the results obtained, the conclusion can be drawn that nonlinear filters should be the estimators of choice for this application.
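The abstract does not give the form of the proposed map-point parameterization. A common choice for bearing-only SLAM with a single camera, shown below purely as an illustrative assumption (function names are hypothetical), is the inverse-depth form, which keeps the filter well-behaved for distant and newly initialised features:

import numpy as np

def init_inverse_depth_point(cam_pos, bearing_world, rho0=0.1):
    """Initialise a bearing-only map point in inverse-depth form.

    Illustrative assumption, not necessarily the thesis's proposal:
    store the camera position at first observation, the bearing
    angles of the ray, and an inverse depth rho, which remains
    numerically well-behaved even for very distant features.
    """
    x0, y0, z0 = cam_pos
    bx, by, bz = bearing_world / np.linalg.norm(bearing_world)
    theta = np.arctan2(bx, bz)                # azimuth of the ray
    phi = np.arctan2(-by, np.hypot(bx, bz))   # elevation of the ray
    return np.array([x0, y0, z0, theta, phi, rho0])

def inverse_depth_to_xyz(p):
    """Convert the 6-vector inverse-depth point to a Euclidean point."""
    x0, y0, z0, theta, phi, rho = p
    m = np.array([np.cos(phi) * np.sin(theta),   # unit ray direction
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.array([x0, y0, z0]) + m / rho

Repeated bearing measurements taken while manoeuvring around the point then tighten the estimate of rho, which is how such a parameterization lets the map constrain the inertial position drift.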
A factorization approach to inertial affine structure from motion
We consider the problem of reconstructing a 3-D scene from a moving camera with a high frame rate, using the affine projection model. This problem is traditionally known as Affine Structure from Motion (Affine SfM), and can be solved using an elegant low-rank factorization formulation. In this paper, we assume that an accelerometer and gyro are rigidly mounted with the camera, so that synchronized linear acceleration and angular velocity measurements are available together with the image measurements. We extend the standard Affine SfM algorithm to integrate these measurements through the use of image derivatives.
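For reference, the low-rank structure exploited by standard Affine SfM (the classical factorization, not the paper's inertial extension) stacks the centered image measurements of P points over F frames into a 2F x P matrix that factors as

\mathbf{W} = \mathbf{M}\,\mathbf{S}, \qquad \operatorname{rank}(\mathbf{W}) \le 3,

where \mathbf{M} (2F x 3) collects the affine camera matrices and \mathbf{S} (3 x P) holds the 3-D structure; both are recovered up to an affine ambiguity from a rank-3 SVD of \mathbf{W}.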
Attention and Anticipation in Fast Visual-Inertial Navigation
We study a Visual-Inertial Navigation (VIN) problem in which a robot needs to
estimate its state using an on-board camera and an inertial sensor, without any
prior knowledge of the external environment. We consider the case in which the
robot can allocate limited resources to VIN, due to tight computational
constraints. Therefore, we answer the following question: under limited
resources, what are the most relevant visual cues to maximize the performance
of visual-inertial navigation? Our approach has four key ingredients. First, it
is task-driven, in that the selection of the visual cues is guided by a metric
quantifying the VIN performance. Second, it exploits the notion of
anticipation, since it uses a simplified model for forward-simulation of robot
dynamics, predicting the utility of a set of visual cues over a future time
horizon. Third, it is efficient and easy to implement, since it leads to a
greedy algorithm for the selection of the most relevant visual cues. Fourth, it
provides formal performance guarantees: we leverage submodularity to prove that
the greedy selection cannot be far from the optimal (combinatorial) selection.
Simulations and real experiments on agile drones show that our approach ensures
state-of-the-art VIN performance while maintaining a lean processing time. In
the easy scenarios, our approach outperforms appearance-based feature selection
in terms of localization errors. In the most challenging scenarios, it enables
accurate visual-inertial navigation while appearance-based feature selection
fails to track robot's motion during aggressive maneuvers.Comment: 20 pages, 7 figures, 2 table
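A minimal sketch of the greedy scheme the abstract alludes to (the utility callable here is a placeholder for the paper's task-driven, forward-simulated VIN metric, which is not reproduced): for a monotone submodular utility, picking the cue with the largest marginal gain at each step yields a set within a (1 - 1/e) factor of the best subset of the same size.

def greedy_select(candidates, utility, budget):
    """Greedily pick up to `budget` visual cues.

    `utility(S)` scores a subset S of cues (e.g. a metric predicting
    VIN accuracy over a future horizon; here just an abstract
    callable). If `utility` is monotone submodular, the greedy
    result is within (1 - 1/e) of the optimal same-size subset.
    """
    selected = []
    remaining = set(candidates)
    for _ in range(budget):
        base = utility(selected)
        best, best_gain = None, float("-inf")
        for f in remaining:
            gain = utility(selected + [f]) - base   # marginal gain of f
            if gain > best_gain:
                best, best_gain = f, gain
        if best is None:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

In practice the inner loop can be sped up with lazy evaluation, since submodularity guarantees that marginal gains only shrink as the selected set grows.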