A Comprehensive Introduction of Visual-Inertial Navigation
In this article, a tutorial introduction to visual-inertial navigation (VIN)
is presented. Visual and inertial perception are two complementary sensing
modalities. Cameras and inertial measurement units (IMUs) are the corresponding sensors for these two modalities. The low cost and light weight of camera-IMU sensor combinations make them ubiquitous in robotic navigation. Visual-inertial navigation is a state estimation problem that estimates the ego-motion of the sensor platform and its local environment. This paper presents visual-inertial
navigation in the classical state estimation framework. It first illustrates the estimation problem in terms of state variables and system models, including the representations (parameterizations) of the relevant quantities, the IMU dynamic and camera measurement models, and the corresponding probabilistic graphical model (factor graph). Secondly, we investigate existing model-based estimation methodologies, which involve filter-based and optimization-based frameworks and the related on-manifold operations. We also discuss the calibration of relevant parameters and the initialization of the states of interest in optimization-based frameworks. Then the evaluation and improvement of VIN in terms of accuracy, efficiency, and robustness are discussed. Finally, we briefly mention recent developments in learning-based methods that may become alternatives to traditional model-based methods. Comment: 35 pages, 10 figures
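The IMU dynamic model mentioned in this abstract can be illustrated with a minimal planar dead-reckoning sketch. This is a hypothetical function and state layout, with gravity assumed compensated and noise/bias terms omitted; real VIN systems integrate the full 3D state on-manifold.

```python
import math

def propagate_imu(p, v, theta, a_body, omega, dt):
    """Dead-reckon a planar IMU state one step forward.

    p, v    : 2D position and velocity (x, y)
    theta   : heading (rad); a_body : body-frame acceleration (ax, ay)
    omega   : yaw rate (rad/s); dt : timestep (s)
    Gravity is assumed already compensated (planar case).
    """
    c, s = math.cos(theta), math.sin(theta)
    # Rotate body-frame acceleration into the world frame.
    a_world = (c * a_body[0] - s * a_body[1],
               s * a_body[0] + c * a_body[1])
    # Euler integration of position and velocity.
    p_new = (p[0] + v[0] * dt + 0.5 * a_world[0] * dt * dt,
             p[1] + v[1] * dt + 0.5 * a_world[1] * dt * dt)
    v_new = (v[0] + a_world[0] * dt, v[1] + a_world[1] * dt)
    theta_new = theta + omega * dt
    return p_new, v_new, theta_new
```

In a real filter or factor graph, each such propagation step also carries the associated uncertainty; open-loop integration like this drifts quickly, which is exactly why the camera measurements are fused in.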
A Comprehensive Review on Autonomous Navigation
The field of autonomous mobile robots has undergone dramatic advancements
over the past decades. Despite achieving important milestones, several
challenges are yet to be addressed. Aggregating the achievements of the robotic
community in survey papers is vital to keep track of the current state of the art and of the challenges that must be tackled in the future. This
paper tries to provide a comprehensive review of autonomous mobile robots
covering topics such as sensor types, mobile robot platforms, simulation tools,
path planning and following, sensor fusion methods, obstacle avoidance, and
SLAM. The motivation for presenting this survey is twofold. First, the field of autonomous navigation evolves quickly, so survey papers must be written regularly to keep the research community well aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation. This paper therefore also gives an appropriate treatment of the role of deep learning in autonomous navigation. Future work and research gaps are discussed as well.
Control and visual navigation for unmanned underwater vehicles
Ph.D. thesis. Control and navigation systems are key for any autonomous robot. Due to environmental disturbances, model uncertainties, and nonlinear dynamics, reliable control is essential, and improvements in controller design can significantly benefit the overall performance of Unmanned Underwater Vehicles (UUVs). Analogously, due to electromagnetic
attenuation in underwater environments, the navigation of UUVs is always a challenging
problem.
In this thesis, control and navigation systems for UUVs are investigated. In the control field,
four different control strategies have been considered: Proportional-Integral-Derivative Control
(PID), Improved Sliding Mode Control (SMC), Backstepping Control (BC) and customised
Fuzzy Logic Control (FLC). The performances of these four controllers were initially simulated
and subsequently evaluated by practical experiments in different conditions using an underwater
vehicle in a tank. The results show that the improved SMC is more robust than the others, with smaller settling time, overshoot, and error.
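For reference, the baseline PID strategy compared above follows the textbook discrete form below. This is an illustrative sketch driving a hypothetical first-order plant, not the thesis controller or vehicle model.

```python
class PID:
    """Minimal discrete PID controller (illustrative, not the thesis code)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        # Backward-difference derivative; zero on the first call.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Drive a toy first-order plant x' = -x + u toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.update(1.0, x)
    x += (-x + u) * 0.01
```

The integral term is what removes the steady-state offset a pure proportional controller would leave; SMC and backstepping designs instead shape the error dynamics directly, which is where the robustness gains reported above come from.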
In the navigation field, three underwater visual navigation systems have been developed in the
thesis: an ArUco-based underwater navigation system, a novel Integrated Visual Odometry with
Monocular camera (IVO-M), and a novel Integrated Visual Odometry with Stereo camera
(IVO-S). Compared with conventional underwater navigation, these methods are relatively low-cost solutions, and unlike other visual or visual-inertial navigation methods, they work well in sparse-feature underwater environments. The results show the following: the
ArUco underwater navigation system does not suffer from cumulative error, but some segments
in the estimated trajectory are not consistent; IVO-M suffers from cumulative error (error ratio is
about 3-4%) and is limited by the assumption that the "seabed is locally flat"; IVO-S suffers
from small cumulative errors (error ratio is less than 2%).
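The error ratio quoted here is commonly computed as final position drift divided by the trajectory length travelled; a minimal sketch of that metric (a hypothetical helper, not the thesis implementation):

```python
import math

def drift_error_ratio(estimated, ground_truth):
    """Final-position drift as a fraction of path length travelled.

    estimated, ground_truth : equal-length lists of (x, y) waypoints.
    Returns e.g. 0.03 for a 3% cumulative-drift trajectory.
    """
    # Path length of the ground-truth trajectory.
    length = sum(math.dist(a, b) for a, b in zip(ground_truth, ground_truth[1:]))
    # Euclidean error between the final estimated and true positions.
    final_error = math.dist(estimated[-1], ground_truth[-1])
    return final_error / length
```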
Overall, this thesis contributes to the control and navigation systems of UUVs, presenting the
comparison between controllers, the improved SMC, and low-cost underwater visual navigation
methods.
Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis
The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and control to meet key criteria for autonomous driving. In this study, the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality-analysis methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online localization solution with 4-5 cm accuracy can be achieved based on this pre-generated map, using online Lidar scan matching tightly fused with an inertial system.
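The Lidar scan matching step mentioned above can be reduced, for illustration, to its simplest case: with known point correspondences and pure translation, the least-squares alignment is just the centroid difference. This is a toy sketch; production pipelines iterate nearest-neighbour matching and also solve for rotation (ICP/NDT variants).

```python
def estimate_translation(scan_prev, scan_curr):
    """Translation-only scan alignment by centroid matching.

    scan_prev, scan_curr : lists of corresponding (x, y) points.
    Returns the (dx, dy) that maps scan_prev onto scan_curr in the
    least-squares sense under a pure-translation model.
    """
    n = len(scan_prev)
    # Centroid of each scan.
    cx_p = sum(p[0] for p in scan_prev) / n
    cy_p = sum(p[1] for p in scan_prev) / n
    cx_c = sum(p[0] for p in scan_curr) / n
    cy_c = sum(p[1] for p in scan_curr) / n
    return (cx_c - cx_p, cy_c - cy_p)
```

Chaining such per-scan increments is what accumulates drift; fusing them tightly with an inertial system and a pre-generated map, as in the road test above, is what keeps the online solution at centimetre level.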
An Online Self-calibrating Refractive Camera Model with Application to Underwater Odometry
This work presents a camera model for refractive media such as water and its
application in underwater visual-inertial odometry. The model is
self-calibrating in real-time and is free of known correspondences or
calibration targets. It is separable into a distortion model (dependent on the refractive index and the radial pixel coordinate) and a virtual pinhole model (as a function of the refractive index). We derive the self-calibration formulation leveraging
epipolar constraints to estimate the refractive index and subsequently correct
for distortion. Through experimental studies using an underwater robot
integrating cameras and inertial sensing, the model is validated regarding the
accurate estimation of the refractive index and its benefits for robust
odometry estimation in an extended envelope of conditions. Lastly, we show the
transition between media and the estimation of the varying refractive index
online, thus allowing computer vision tasks across refractive media. Comment: 7 pages, 6 figures, submitted to the IEEE International Conference on Robotics and Automation, 202
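The distortion component of such a flat-port model ultimately rests on Snell's law at the air-water interface; a minimal sketch (hypothetical function name and parameterization, not the paper's formulation):

```python
import math

def refracted_angle(theta_air, n_water=1.33, n_air=1.0):
    """Snell's law across a flat port: n_air*sin(theta_air) = n_water*sin(theta_water).

    Returns the in-water ray angle (rad) for a ray leaving the camera at
    theta_air from the optical axis. Near the axis, the apparent radial
    pixel coordinate is magnified by roughly n_water / n_air, which is
    why the refractive index appears directly in the distortion model.
    """
    s = (n_air / n_water) * math.sin(theta_air)
    return math.asin(s)
```

With n_water = n_air = 1 the mapping is the identity (camera in air), which matches the paper's observation that the same model spans the transition between media as the estimated index varies online.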