Visual 3-D SLAM from UAVs
The aim of this paper is to present, test, and discuss the implementation of Visual SLAM techniques on images taken from Unmanned Aerial Vehicles (UAVs) flying outdoors in partially structured environments. Each stage of the process is discussed with the goal of obtaining more accurate localization and mapping from UAV flights. First, the issues related to the visual features of objects in the scene, their distance to the UAV, and the image acquisition system and its calibration are evaluated. Further issues concern the image processing techniques, such as interest point detection, the matching procedure, and the scaling factor. The whole system has been tested using the COLIBRI mini UAV in partially structured environments. The localization results, evaluated against the GPS information of the flights, show that Visual SLAM delivers localization and mapping reliable enough for some outdoor UAV applications.
Mixed marker-based/marker-less visual odometry system for mobile robots
When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. However, the lack of absolute references often introduces severe drift into the computed estimates, making autonomous operation difficult. This paper proposes a solution that alleviates these issues by combining two vision-based pose estimation techniques working in relative and absolute coordinate systems, respectively. In particular, the unknown ground features in the images captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm, which estimates the relative frame-to-frame movements. The errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment. The markers are framed from time to time, which allows the robot to keep the drift bounded while additionally providing the navigation commands needed for autonomous flight. The accuracy and robustness of the designed technique are demonstrated on an off-the-shelf quadrotor via extensive experimental tests.
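The two-stage scheme the abstract describes (relative frame-to-frame odometry, periodically corrected by absolute marker fixes) can be sketched in a minimal 2-D form. This is an illustration only, not the paper's actual formulation: the pose representation, the blending weight `alpha`, and the function names `compose` and `fuse` are assumptions.

```python
import numpy as np

def compose(pose, delta):
    """Compose a 2-D pose (x, y, theta) with a frame-to-frame increment."""
    x, y, th = pose
    dx, dy, dth = delta
    c, s = np.cos(th), np.sin(th)
    return (x + c * dx - s * dy, y + s * dx + c * dy, th + dth)

def fuse(odometry_deltas, marker_fixes, alpha=0.8):
    """Dead-reckon with relative odometry; snap toward absolute marker poses.

    odometry_deltas: per-frame (dx, dy, dtheta) increments from visual odometry.
    marker_fixes: dict mapping frame index -> absolute (x, y, theta) obtained
                  from a marker at a known position.
    alpha: blending weight for the absolute fix (1.0 = trust the marker fully).
    """
    pose = (0.0, 0.0, 0.0)
    trajectory = [pose]
    for k, delta in enumerate(odometry_deltas, start=1):
        pose = compose(pose, delta)
        if k in marker_fixes:  # marker framed this frame: bound the drift
            absolute = marker_fixes[k]
            pose = tuple((1 - alpha) * p + alpha * a
                         for p, a in zip(pose, absolute))
        trajectory.append(pose)
    return trajectory
```

Between marker sightings the estimate drifts like pure odometry; each sighting pulls it back toward an absolute reference, which is what keeps the error bounded.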
Cooperative monocular-based SLAM for multi-UAV systems in GPS-denied environments
This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. To improve the observability properties, measurements of the relative distance between the UAVs are included in the system; these relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimate of the aerial vehicles flying in formation.
Robust UAV Position and Attitude Estimation using Multiple GNSS Receivers for Laser-based 3D Mapping
Small-sized unmanned aerial vehicles (UAVs) have been widely investigated for use in a variety of applications such as remote sensing and aerial surveying. Direct three-dimensional (3D) mapping using a small-sized UAV equipped with a laser scanner is required for numerous remote sensing applications, and it depends on precise information about the position and attitude of the UAV. In this study, we propose a novel and robust technique for estimating the position and attitude of small-sized UAVs by employing multiple low-cost, light-weight global navigation satellite system (GNSS) antennas/receivers. Using the redundancy of multiple GNSS receivers, we enhance the performance of real-time kinematic (RTK) GNSS based on single-frequency receivers. The method consists of two approaches: hybrid GNSS fix solutions and a consistency check of the GNSS signal strength. The fix rate of single-frequency RTK-GNSS can be greatly improved by combining the fix solutions obtained from the multiple antennas. In addition, positioning accuracy and fix rate can be further enhanced by using the multiple GNSS antennas to detect multipath signals. We developed a prototype UAV equipped with six GNSS antennas/receivers. The static test results show that the proposed technique enhances the accuracy of position and attitude estimation in multipath environments, and in the flight test the proposed system generated a 3D map with an accuracy of 5 cm. Comment: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 201
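The two heuristics the abstract names, hybrid fix solutions and a signal-strength consistency check, can be sketched as a simple per-epoch selection rule. This is only an illustration of the idea as summarized above: the function name, the report tuple layout, and the C/N0 threshold value are assumptions, and a real system would also translate each antenna's position through its known lever arm to a common body point before averaging.

```python
import numpy as np

def hybrid_fix(reports, cn0_floor=38.0):
    """Combine several single-frequency RTK receivers into one epoch solution.

    reports: list of (has_rtk_fix, enu_position, carrier_to_noise_dBHz),
             one entry per receiver (layout is a hypothetical example).
    Heuristic 1 (hybrid fix): average the positions of all receivers that
        report an RTK fix, raising the effective fix rate.
    Heuristic 2 (consistency check): discard receivers whose C/N0 falls
        below a floor, treating them as likely multipath-contaminated.
    """
    good = [pos for fixed, pos, cn0 in reports if fixed and cn0 >= cn0_floor]
    if not good:
        return None  # no trustworthy fix this epoch
    return np.mean(np.asarray(good, dtype=float), axis=0)
```

With six receivers, an epoch needs only one trustworthy fixed receiver to yield a solution, which is why the combined fix rate can exceed that of any single receiver.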
On-board and Ground Visual Pose Estimation Techniques for UAV Control
In this paper, two techniques for controlling UAVs (Unmanned Aerial Vehicles) based on visual information are presented. The first is based on the detection and tracking of planar structures from an onboard camera, while the second is based on the detection and 3D reconstruction of the position of the UAV using an external camera system. Both strategies are tested with a VTOL (vertical take-off and landing) UAV, and the results show good behavior of the visual systems (precision of the estimation and frame rate) when estimating the helicopter's position and using the extracted information to control the UAV.
Recent advances in monocular model-based tracking: a systematic literature review
In this paper, we review the advances in monocular model-based tracking over the ten-year period up to 2014. In 2005, Lepetit et al. [19] reviewed the status of monocular model-based rigid-body tracking. Since then, direct 3D tracking has become quite a popular research area, but monocular model-based tracking should still not be forgotten. We mainly focus on tracking applicable to augmented reality, but some other applications are covered as well. Given the wide subject area, this paper tries to give a broad view of the research that has been conducted, introducing the reader to the different disciplines that are tightly related to model-based tracking. The work was conducted by searching well-known academic search databases in a systematic manner and by selecting certain publications for closer examination. We analyze the results by dividing the found papers into categories according to their implementation approach. Issues that have not yet been solved are discussed, as are emerging model-based methods, such as fusing different types of features and region-based pose estimation, which could show the way for future research on this subject.
Uncertainty Estimation of Dense Optical Flow for Robust Visual Navigation.
This paper presents a novel dense optical-flow algorithm to solve the monocular simultaneous localisation and mapping (SLAM) problem for ground or aerial robots. Dense optical flow can effectively provide the ego-motion of the vehicle while enabling collision avoidance with potential obstacles. Existing research has not fully utilised the uncertainty of the optical flow; at most, an isotropic Gaussian density model has been used. We estimate the full uncertainty of the optical flow and propose a new eight-point algorithm based on the statistical Mahalanobis distance. Combined with pose-graph optimisation, the proposed method demonstrates enhanced robustness and accuracy on the public autonomous car dataset (KITTI) and an aerial monocular dataset.
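The core idea of weighting epipolar constraints by flow uncertainty can be sketched with a covariance-weighted eight-point algorithm. This is a simplified stand-in, not the paper's method: it reduces each match's 2x2 flow covariance to a scalar confidence via its trace, whereas the paper uses the full Mahalanobis distance; the function name and interface are assumptions.

```python
import numpy as np

def weighted_eight_point(pts1, pts2, covs):
    """Eight-point estimate of the fundamental matrix with per-match weights.

    pts1, pts2: (N, 2) matched pixel coordinates in the two frames.
    covs: (N, 2, 2) optical-flow covariance for each match.
    Each epipolar constraint row x2^T F x1 = 0 is down-weighted when the
    flow estimate for that match is uncertain (large covariance trace).
    """
    rows = []
    for (x1, y1), (x2, y2), c in zip(pts1, pts2, covs):
        w = 1.0 / np.sqrt(np.trace(c))  # scalar confidence from uncertainty
        rows.append(w * np.array([x2 * x1, x2 * y1, x2,
                                  y2 * x1, y2 * y1, y2,
                                  x1, y1, 1.0]))
    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A)         # null vector = stacked F entries
    F = vt[-1].reshape(3, 3)
    u, s, vt2 = np.linalg.svd(F)        # enforce the rank-2 constraint
    s[2] = 0.0
    return u @ np.diag(s) @ vt2
```

Uncertain matches then contribute less to the motion estimate, which is the robustness mechanism the abstract refers to (the paper additionally feeds the result into pose-graph optimisation).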