Keyframe-based visual–inertial odometry using nonlinear optimization
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
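To make the shape of such an objective concrete, the following is a minimal sketch of how weighted landmark reprojection errors and inter-keyframe inertial error terms are typically combined; the notation is an illustrative assumption, not quoted from the paper:

```latex
% Sketch of a combined visual-inertial cost over a window of K keyframes:
% e_r: reprojection error of landmark j seen by camera i at keyframe k,
% e_s: inertial error term linking consecutive keyframes,
% W_r, W_s: information (inverse-covariance) weights for each term.
J(\mathbf{x}) =
    \sum_{i} \sum_{k=1}^{K} \sum_{j \in \mathcal{J}(i,k)}
        {\mathbf{e}_r^{i,j,k}}^{\top} \mathbf{W}_r^{i,j,k} \, \mathbf{e}_r^{i,j,k}
  + \sum_{k=1}^{K-1}
        {\mathbf{e}_s^{k}}^{\top} \mathbf{W}_s^{k} \, \mathbf{e}_s^{k}
```

Marginalizing keyframes that drop out of the window then turns their contribution into a prior on the remaining states, which is what keeps the optimization bounded without simply discarding past information.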
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Single and multiple stereo view navigation for planetary rovers
© Cranfield University
This thesis deals with the challenge of autonomous navigation of the ExoMars rover. The absence of global positioning systems (GPS) in space, added to the limitations of wheel odometry, makes autonomous navigation based on these two techniques, as done in the literature, an unviable solution and necessitates the use of other approaches. That, among other reasons, motivates this work to use solely visual data to solve the robot's egomotion problem.
The homogeneity of Mars's terrain makes the robustness of the low-level image processing techniques a critical requirement. In the first part of the thesis, novel solutions are presented to tackle this specific problem. Detecting features that are robust against illumination changes, and matching and associating them uniquely, are sought-after capabilities. A solution for robustness of features against illumination variation is proposed, combining Harris corner detection with moment image representation: whereas the former provides a technique for efficient feature detection, the moment images add the necessary brightness invariance. Moreover, a bucketing strategy is used to guarantee that features are homogeneously distributed within the images. Then, the addition of local feature descriptors guarantees the unique identification of image cues.
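As an illustration of the corner-detection-plus-bucketing idea, the sketch below uses OpenCV's Harris-based corner detector and keeps at most a fixed quota of corners per grid cell; the grid size and quota are assumed parameters, and the moment-image and descriptor stages of the thesis are omitted:

```python
import cv2
import numpy as np

def bucketed_harris_corners(gray, grid=(8, 8), per_cell=5):
    """Detect Harris corners, then enforce a homogeneous spatial
    distribution by capping the number of corners per grid cell.
    Illustrative sketch only; all parameters are assumptions."""
    h, w = gray.shape
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=2000, qualityLevel=0.01, minDistance=5,
        useHarrisDetector=True, k=0.04)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)

    kept = []
    counts = np.zeros(grid, dtype=int)
    cell_h, cell_w = h / grid[0], w / grid[1]
    # Corners arrive sorted by strength, so each cell keeps its best ones.
    for x, y in corners.reshape(-1, 2):
        r = min(int(y // cell_h), grid[0] - 1)
        c = min(int(x // cell_w), grid[1] - 1)
        if counts[r, c] < per_cell:
            counts[r, c] += 1
            kept.append((x, y))
    return np.array(kept, dtype=np.float32)
```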
In the second part, reliable and precise motion estimation for the Mars rover is studied. A number of successful approaches are thoroughly analysed. Visual Simultaneous Localisation And Mapping (VSLAM) is investigated, proposing enhancements and integrating it with the robust feature methodology. Then, linear and nonlinear optimisation techniques are explored, and alternative photogrammetry reprojection concepts are tested. Lastly, data fusion techniques are proposed to deal with the integration of multiple stereo view data.
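As a concrete instance of the nonlinear optimisation step mentioned above, a camera pose can be refined by minimising landmark reprojection error with an off-the-shelf least-squares solver. The sketch below is illustrative only: `pts3d`, `pts2d`, the intrinsics `K` and the Rodrigues pose parameterisation are assumptions, not the thesis's formulation:

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(pose, pts3d, pts2d, K):
    """Residuals for pose refinement: transform the 3D landmarks by the
    candidate pose, project them with intrinsics K, and compare against
    the observed pixels (pose = rotation vector + translation)."""
    rvec, tvec = pose[:3], pose[3:]
    proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)
    return (proj.reshape(-1, 2) - pts2d).ravel()

# Hypothetical usage, starting from an initial guess pose0:
# result = least_squares(reprojection_residuals, pose0,
#                        args=(pts3d, pts2d, K), method="lm")
# refined_pose = result.x
```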
Our robust visual scheme allows good feature repeatability. Because of this, dimensionality reduction of the feature data can be used without compromising the overall performance of the proposed solutions for motion estimation. The developed egomotion techniques have also been extensively validated using both simulated and real data collected at ESA-ESTEC facilities. Multiple stereo view solutions for robot motion estimation are introduced, presenting interesting benefits. The obtained results show the proposed methods to be accurate and reliable approaches, capable of solving the egomotion problem in a Mars environment.
Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots
Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past: detecting obstacles of very thin structure, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrated that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions.
Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision
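To illustrate the general idea of reconstructing edge pixels in 3D (not the paper's method; this sketch substitutes off-the-shelf block matching on a rectified stereo pair for its edge-based visual odometry, and every threshold is an assumption):

```python
import cv2
import numpy as np

def edge_points_3d(img_l, img_r, P_l, P_r, max_disp=64):
    """Detect edge pixels in the left image, sample a stereo disparity
    map at those pixels, and triangulate them into 3D points. Inputs are
    assumed rectified 8-bit grayscale images and 3x4 projection matrices."""
    edges = cv2.Canny(img_l, 50, 150)
    ys, xs = np.nonzero(edges)

    sbm = cv2.StereoBM_create(numDisparities=max_disp, blockSize=9)
    disp = sbm.compute(img_l, img_r).astype(np.float32) / 16.0

    d = disp[ys, xs]
    valid = d > 0  # keep edge pixels with a valid disparity estimate
    pts_l = np.stack([xs[valid], ys[valid]]).astype(np.float32)
    pts_r = np.stack([xs[valid] - d[valid], ys[valid]]).astype(np.float32)
    pts4 = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)
    return (pts4[:3] / pts4[3]).T  # N x 3 points in the left-camera frame
```

Thin structures are exactly where dense block matching tends to fail, which is why the paper reconstructs edges via visual odometry over a video sequence instead; the sketch only conveys the edge-sampling-plus-triangulation pipeline.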
Flexible Stereo: Constrained, Non-rigid, Wide-baseline Stereo Vision for Fixed-wing Aerial Platforms
This paper proposes a computationally efficient method to estimate the time-varying relative pose between two visual-inertial sensor rigs mounted on the flexible wings of a fixed-wing unmanned aerial vehicle (UAV). The estimated relative poses are used to generate highly accurate depth maps in real time and can be employed for obstacle avoidance in low-altitude flights or landing maneuvers. The approach is structured as follows: initially, a wing model is identified by fitting a probability density function to measured deviations from the nominal relative baseline transformation. At run time, the prior knowledge about the wing model is fused in an Extended Kalman filter (EKF) together with relative pose measurements obtained from solving a relative perspective-n-point (PnP) problem, and with the linear accelerations and angular velocities measured by the two inertial measurement units (IMUs) rigidly attached to the cameras. Results obtained from extensive synthetic experiments demonstrate that our proposed framework is able to estimate highly accurate baseline transformations and depth maps.
Comment: Accepted for publication in IEEE International Conference on Robotics and Automation (ICRA), 2018, Brisbane
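A minimal sketch of the fusion idea follows. It reduces the problem to a linear-Gaussian filter over a 6-DoF deviation from the nominal baseline; the state parameterisation, the random-walk prediction and the identity measurement model are all simplifying assumptions rather than the paper's actual EKF:

```python
import numpy as np

class RelativePoseFilter:
    """Toy filter: track the deviation of the wing baseline from its
    nominal transformation, with process noise shaped by the identified
    wing model and corrections from relative-pose (PnP) measurements."""

    def __init__(self, Q_wing, R_pnp):
        self.x = np.zeros(6)      # [rotation; translation] deviation
        self.P = Q_wing.copy()    # start from the wing-model prior
        self.Q = Q_wing           # process noise from the fitted wing model
        self.R = R_pnp            # PnP measurement noise

    def predict(self):
        # Random walk around the nominal baseline (the real system
        # would drive the prediction with the two IMUs instead).
        self.P = self.P + self.Q

    def update(self, z_pnp):
        # The PnP solution measures the deviation directly, so H = I.
        K = self.P @ np.linalg.inv(self.P + self.R)
        self.x = self.x + K @ (z_pnp - self.x)
        self.P = (np.eye(6) - K) @ self.P
```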