908 research outputs found
Survey of computer vision algorithms and applications for unmanned aerial vehicles
This paper presents a comprehensive review of the computer vision algorithms and vision-based intelligent applications developed for Unmanned Aerial Vehicles (UAVs) over the last decade. During this period, advances in technologies relevant to UAVs, such as component miniaturization, increased computational capability, and progress in computer vision techniques, have enabled substantial advances in UAV technologies and applications. In particular, computer vision technologies integrated into UAVs make it possible to address the difficulties of aerial perception with cutting-edge techniques such as visual navigation, obstacle detection and avoidance, and aerial decision-making. These technologies have opened a wide spectrum of UAV applications beyond classic military and defense purposes. Unmanned Aerial Vehicles and computer vision are common topics in expert systems, and recent advances in perception have enabled modern intelligent applications such as autonomous UAV positioning and automatic aerial collision avoidance. The presented survey therefore focuses on artificial perception applications that represent important recent advances in the expert systems field related to UAVs. The paper presents the most significant advances able to overcome fundamental technical limitations, such as visual odometry, obstacle detection, and mapping and localization; analyzes them according to their capabilities and potential utility; and categorizes the applications and UAVs according to different criteria. This research is supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2013-48314-C3-1-R).
Vision-Based navigation system for unmanned aerial vehicles
Mención Internacional en el título de doctor (International Mention in the doctoral degree).
The main objective of this dissertation is to provide Unmanned Aerial Vehicles
(UAVs) with a robust navigation system that allows them to perform complex tasks
autonomously and in real time. The proposed algorithms address the navigation
problem in both outdoor and indoor environments, relying mainly on visual
information captured by monocular cameras. In addition, this dissertation shows
the advantages of using visual sensors, either as the main source of data or as
a complement to other sensors, to improve the accuracy and robustness of sensing.
The dissertation covers several research topics based on computer vision
techniques: (I) Pose Estimation, which provides a solution for estimating the 6D
pose of the UAV. The algorithm combines the SIFT detector with the FREAK
descriptor, preserving feature-matching performance while reducing computation
time; the pose estimation problem is then solved by decomposing the
world-to-frame and frame-to-frame homographies.
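The plane-induced homography decomposition mentioned above can be sketched in a few lines. This is the generic textbook decomposition, not the dissertation's implementation; it assumes feature points lying on a single world plane Z = 0 and known camera intrinsics K:

```python
import numpy as np

def pose_from_plane_homography(H, K):
    """Recover a camera pose (R, t) from a world-plane-to-image
    homography H and camera intrinsics K.

    For points on the world plane Z = 0, H ~ K [r1 r2 t], so the first
    two rotation columns and the translation can be read off K^-1 H
    up to a common scale.
    """
    A = np.linalg.inv(K) @ H
    # Fix the scale by forcing r1 to unit norm; keep the sign that
    # puts the plane in front of the camera (t_z > 0).
    lam = np.linalg.norm(A[:, 0])
    if A[2, 2] < 0:
        lam = -lam
    A = A / lam
    r1, r2, t = A[:, 0], A[:, 1], A[:, 2]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    # Project R onto SO(3): with noisy correspondences the raw
    # columns are not exactly orthonormal.
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        R[:, 2] *= -1
    return R, t
```

In practice the homography itself would be estimated from the SIFT/FREAK matches (e.g. with a RANSAC-based fit) before this decomposition step.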
(II) Obstacle Detection and Collision Avoidance, in which the UAV senses and
detects frontal obstacles situated in its path. The detection algorithm mimics
human behavior for detecting approaching obstacles: it analyzes the size changes
of the detected feature points, combined with the expansion ratio of the convex
hull constructed around those points in consecutive frames. By comparing the
area ratio of the obstacle with the position of the UAV, the method decides
whether the detected obstacle may cause a collision. Finally, the algorithm
extracts the collision-free zones around the obstacle and, combined with the
tracked waypoints, the UAV performs the avoidance maneuver.
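A minimal sketch of the hull-expansion cue described above, assuming the feature points belonging to the same obstacle have already been matched across two consecutive frames (the threshold value is illustrative, not taken from the dissertation):

```python
import numpy as np

def _cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; points is an (N, 2) array, N >= 3,
    not all collinear. Returns hull vertices in CCW order."""
    pts = sorted(map(tuple, points))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return np.array(lower[:-1] + upper[:-1])

def hull_area(points):
    """Shoelace area of the convex hull of the feature points."""
    h = convex_hull(points)
    x, y = h[:, 0], h[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def approaching(prev_pts, curr_pts, ratio_thresh=1.2):
    """Flag a frontal obstacle when the convex hull of its tracked
    feature points expands faster than the threshold between frames."""
    return bool(hull_area(curr_pts) / hull_area(prev_pts) > ratio_thresh)
```

The full method additionally weighs the per-feature size changes and the UAV's position before declaring a collision risk; this sketch shows only the hull-area ratio.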
(III) Navigation Guidance, which generates the waypoints that define the flight
path according to the environment and the detected obstacles, and provides a
strategy to follow the path segments efficiently and perform the flight
maneuvers smoothly. (IV) Visual Servoing, which offers different control
solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual
information, in order to achieve flight stability, perform the correct
maneuvers, avoid possible collisions, and track the waypoints.
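Of the two controller families named, the PID option is the easier to sketch. The gains and the simulated first-order plant below are illustrative placeholders, not the dissertation's tuned values or vehicle model:

```python
class PID:
    """Textbook discrete PID acting on a visual error signal,
    e.g. the pixel offset between a waypoint and the image center."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, err, dt):
        self.integral += err * dt
        # No derivative kick on the very first sample.
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In a visual-servoing loop, `err` would come from the image-processing front end each frame and the output would be mapped to an attitude or velocity command.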
All the proposed algorithms have been verified in real flights in both indoor
and outdoor environments, taking visual conditions such as illumination and
texture into consideration. The obtained results have been validated against
other systems, such as the VICON motion capture system and DGPS in the case of
the pose estimation algorithm. In addition, the proposed algorithms have been
compared with several previous works in the state of the art, and the results
show improvements in accuracy and robustness.
Finally, this dissertation concludes that visual sensors, being lightweight,
low-power, and able to provide reliable information, are a powerful tool in
navigation systems for increasing the autonomy
of the UAVs for real-world applications.
Doctoral program: Programa Oficial de Doctorado en Ingeniería Eléctrica, Electrónica y Automática. Committee: Chair, Carlo Regazzoni; Secretary, Fernando García Fernández; Member, Pascual Campoy Cerver
Inertial Navigation Meets Deep Learning: A Survey of Current Trends and Future Directions
Inertial sensing is used in many applications and platforms, ranging from
day-to-day devices such as smartphones to very complex ones such as autonomous
vehicles. In recent years, the development of machine learning and deep
learning techniques has increased significantly in the field of inertial
sensing and sensor fusion. This is due to the development of efficient
computing hardware and the accessibility of publicly available sensor data.
These data-driven approaches mainly aim to empower model-based inertial sensing
algorithms. To encourage further research in integrating deep learning with
inertial navigation and fusion and to leverage their capabilities, this paper
provides an in-depth review of deep learning methods for inertial sensing and
sensor fusion. We discuss learning methods for calibration and denoising as
well as approaches for improving pure inertial navigation and sensor fusion.
The latter is done by learning some of the fusion filter parameters. The
reviewed approaches are classified by the environment in which the vehicles
operate: land, air, and sea. In addition, we analyze trends and future
directions in deep learning-based navigation and provide statistical data on
commonly used approaches.
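As a minimal sketch of what "learning some of the fusion filter parameters" means in the simplest case, here is a 1D Kalman filter whose process noise is supplied by an external callable. In the reviewed approaches that callable would be a trained network conditioned on recent sensor data; here it is a hypothetical stand-in:

```python
import numpy as np

def kalman_1d(zs, q_of_step, r=1.0):
    """1D constant-state Kalman filter. The per-step process noise q
    comes from a callable -- the slot a learned model would fill when
    some of the fusion filter parameters are learned rather than
    hand-tuned. r is the measurement noise variance."""
    x, p = zs[0], 1.0
    for k, z in enumerate(zs[1:], start=1):
        p += q_of_step(k)           # predict: inflate variance by q_k
        kgain = p / (p + r)         # Kalman gain
        x += kgain * (z - x)        # update with measurement z
        p *= (1.0 - kgain)
    return x, p
```

The same pattern extends to learning the measurement noise, the gain itself, or the full update step, which is how the surveyed methods are grouped.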
Cooperative monocular-based SLAM for multi-UAV systems in GPS-denied environments
This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. To improve the observability properties, measurements of the relative distance between the UAVs, also obtained from visual information, are included in the system. The proposed approach is theoretically validated by means of a nonlinear observability analysis, and an extensive set of computer simulations is presented to validate it further. The numerical simulation results show that the proposed system provides good position and orientation estimates for the aerial vehicles flying in formation. Peer reviewed. Postprint (published version).
A novel distributed architecture for UAV indoor navigation
Abstract. In the last decade, different indoor flight navigation systems for small Unmanned Aerial Vehicles (UAVs) have been investigated, with a special focus on different configurations and sensor technologies. This paper proposes a distributed Guidance, Navigation and Control (GNC) system architecture, based on the Robot Operating System (ROS), for lightweight UAV autonomous indoor flight. The proposed framework is shown to be more robust and flexible than common configurations. A flight controller and a companion computer running ROS for control and navigation are also described, and both hardware and software diagrams are given to show the complete architecture. Further work will address the experimental validation of the proposed configuration through indoor flight tests.
Collaborative navigation as a solution for PNT applications in GNSS challenged environments: report on field trials of a joint FIG / IAG working group
PNT stands for Positioning, Navigation, and Timing. Space-based PNT refers to the capabilities enabled by GNSS, enhanced by Ground- and Space-Based Augmentation Systems (GBAS and SBAS), which provide position, velocity, and timing information to an unlimited number of users around the world, allowing every user to operate in the same reference system and timing standard. Such information has become increasingly critical to the security, safety, prosperity, and overall quality of life of many citizens. As a result, space-based PNT is now widely recognized as an essential element of the global information infrastructure. This paper discusses the importance of the availability and continuity of PNT information, whose application, scope, and significance have exploded in the past 10 to 15 years. A paradigm shift in the navigation solution has been observed in recent years, manifested as an evolution from traditional single-sensor solutions to multi-sensor solutions and ultimately to collaborative navigation and layered sensing, using non-traditional sensors and techniques, the so-called signals of opportunity. A joint working group under the auspices of the International Federation of Surveyors (FIG) and the International Association of Geodesy (IAG), entitled "Ubiquitous Positioning Systems", investigated the use of Collaborative Positioning (CP) through several field trials over the past four years. In this paper, the concept of CP is discussed in detail and selected results of these experiments are presented. It is demonstrated that CP is a viable solution when a "network" or "neighbourhood" of users is to be positioned and navigated together, as it increases the accuracy, integrity, availability, and continuity of the PNT information for all users.
Monocular Vision Localization Using a Gimbaled Laser Range Sensor
There have been great advances in recent years in the area of indoor navigation. Many of these new navigation systems rely on digital images to aid inertial navigation estimates. The Air Force Institute of Technology (AFIT) has been conducting research in this area for a number of years. Its image-aiding techniques center on tracking stationary features in order to improve inertial navigation estimates. Previous research has used stereo vision systems, or monocular systems with terrain constraints, to estimate feature locations. While these methods have shown good results, they have drawbacks. First, as unmanned exploration vehicles become smaller, the available baseline between two cameras shrinks, reducing ranging accuracy. Second, when using a monocular system, terrain data might not be available in an unexplored environment. This research explores the use of a small gimbaled laser range sensor and a monocular camera to estimate feature locations. The gimbaled system consists of a commercial off-the-shelf range sensor, a pair of hobby-style servos, and a microcontroller that accepts azimuth and elevation commands. The system is approximately 15x8x12 cm and weighs less than 120 grams. This novel approach, called laser-aided image inertial navigation, provides precise depth measurements to key features. The locations of these key features are then calculated from the current state estimates of an Extended Kalman Filter. This method of estimating feature locations is tested with both simulated and real-world imagery, and navigation experiments are presented that compare it with previous image-aided filters. While only a limited number of tests were conducted, simulated and real-world flight tests show that the monocular laser-aided filter can estimate the trajectory of a vehicle to within a few tenths of a meter, without terrain constraints or any prior knowledge of the operational area.
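The core geometric step, turning a camera pose, a pixel, and a laser range into a feature location, can be sketched as follows. The frame names and intrinsics are illustrative assumptions, not AFIT's actual filter states:

```python
import numpy as np

def feature_location(cam_pos, R_cam_to_nav, pixel, K, laser_range):
    """Locate a tracked feature: back-project the pixel through the
    intrinsics K into a unit ray, rotate it into the navigation frame,
    and scale it by the gimbaled laser range measurement."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_nav = R_cam_to_nav @ (ray_cam / np.linalg.norm(ray_cam))
    return np.asarray(cam_pos) + laser_range * ray_nav
```

In the filter described above, `cam_pos` and `R_cam_to_nav` would come from the Extended Kalman Filter's current state estimate, so the feature position inherits the filter's uncertainty as well as the range sensor's.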