Evaluation of machine vision techniques for use within flight control systems
In this thesis, two of the main technical obstacles to large-scale deployment of Unmanned Aerial Vehicles (UAVs) are considered. The Aerial Refueling problem is analyzed in the first section. A solution based on the integration of 'conventional' GPS/INS and a Machine Vision sensor is proposed for measuring the relative distance between a refueling tanker and the UAV. In this effort, Point Matching (PM) algorithms and Pose Estimation (PE) algorithms are compared in order to improve the performance of the Machine Vision sensor. An Extended Kalman Filter (EKF) is also developed to integrate the GPS/INS and Machine Vision measurements, with the goal of reducing the tracking error in the 'pre-contact' to contact and refueling phases. The second section of the thesis addresses Collision Identification (CI). The proposed solution uses Optical Flow (OF) algorithms to detect possible collisions within the field of view of a single camera. The effort includes a study of the performance of different Optical Flow algorithms in different scenarios, as well as a method to compute the ideal optical flow against which the algorithms are evaluated. The suitability of all the analyzed algorithms for a future real-time implementation is also assessed. Test results show that Machine Vision technology can improve performance in the Aerial Refueling problem; in the Collision Identification problem, Machine Vision must be integrated with standard sensors before it can be used within the Flight Control System.
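The GPS/INS and Machine Vision integration described above can be sketched as a simple Kalman filter update (the linear special case of the EKF); all numeric values below are hypothetical, not taken from the thesis:

```python
import numpy as np

dt = 0.1                                  # filter step in seconds (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
H = np.array([[1.0, 0.0]])                # both sensors observe range only
Q = np.diag([1e-3, 1e-2])                 # process noise (assumed)
R_gps, R_mv = 4.0, 0.25                   # GPS noisier than machine vision (assumed)

x = np.array([50.0, -1.0])                # state: [range m, closure rate m/s]
P = np.eye(2)                             # state covariance

def predict(x, P):
    """Propagate the state one step through the motion model."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    """Fuse one range measurement z with noise variance R."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T / S                       # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = predict(x, P)
x, P = update(x, P, 49.9, R_gps)          # GPS/INS range measurement
x, P = update(x, P, 49.85, R_mv)          # machine-vision range measurement
```

Because the machine-vision measurement carries the smaller assumed variance, the fused estimate is weighted toward it, which is the effect the thesis exploits in the pre-contact phase.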
Stereo Vision: A Comparison of Synthetic Imagery vs. Real World Imagery for the Automated Aerial Refueling Problem
Missions using unmanned aerial vehicles (UAVs) have increased in the past decade, yet there is currently no way to refuel these aircraft in flight. Automated aerial refueling can be made possible by a stereo vision system on the tanker. Real-world experiments for the automated aerial refueling problem are expensive and time-consuming, whereas simulations performed in a virtual world have shown promising results using computer vision, suggesting that the virtual world can serve as a substitute environment for the real world. This research compares the performance of stereo vision algorithms on synthetic and real-world imagery.
Addressing corner detection issues for machine vision based UAV aerial refueling
The need to develop autonomous aerial refueling (AAR) capabilities for Unmanned Aerial Vehicles (UAVs) has arisen from the growing importance of UAVs in military and non-military applications. AAR would improve the range and loiter time of UAVs. A number of AAR techniques have been proposed, based on GPS measurements and on Machine Vision (MV) measurements. GPS-based measurements suffer from distorted data in the wake of the tanker. Previously proposed MV-based techniques rely on optical markers which, when detected, are used to determine the relative position and orientation of the tanker and the UAV. Their drawback is the assumption that all the optical markers are always visible and functional. This research effort proposes an alternative approach in which the pose estimation depends not on optical markers but on feature extraction methods. The thesis describes the results of an analysis of specific 'corner detection' algorithms within a Machine Vision-based approach to the aerial refueling problem for UAVs. Specifically, the performance of the SUSAN and Harris corner detection algorithms is compared, with special emphasis on their accuracy, required computational effort, and robustness to different sources of noise. Closed-loop simulations of docking maneuvers with the US Air Force refueling boom were performed in a detailed Simulink-based simulation environment.
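As an illustration of the kind of detector compared above, here is a minimal Harris corner response computed with plain NumPy on a synthetic image (not the thesis code; the window size and sensitivity k are conventional defaults):

```python
import numpy as np

def harris_response(img, k=0.04, win=2):
    """Harris response R = det(M) - k*trace(M)^2 over (2*win+1)^2 windows."""
    iy, ix = np.gradient(img.astype(float))
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    h, w = img.shape
    R = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win, w - win):
            sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
            sxx, syy, sxy = ixx[sl].sum(), iyy[sl].sum(), ixy[sl].sum()
            det = sxx * syy - sxy * sxy
            R[y, x] = det - k * (sxx + syy) ** 2
    return R

img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0                 # bright square: four strong corners
R = harris_response(img)
y, x = np.unravel_index(R.argmax(), R.shape)   # strongest corner location
```

Edges score near zero (one dominant gradient direction) while corners score high, which is the property both SUSAN and Harris exploit, differing mainly in computational cost and noise robustness.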
Mitigating the Effects of Boom Occlusion on Automated Aerial Refueling through Shadow Volumes
In-flight refueling of Unmanned Aerial Vehicles (UAVs) is critical to the United States Air Force (USAF). However, the large communication latency between a ground-based operator and the remote UAV makes docking with a refueling tanker unsafe. This latency may be mitigated by leveraging a tanker-centric stereo vision system, which observes an approaching receiver and computes its relative position and orientation, offering a low-latency, high-frequency docking solution. Unfortunately, the boom -- an articulated refueling arm responsible for physically pumping fuel into the receiver -- occludes large portions of the receiver, especially as the receiver approaches and docks with the tanker. The vision system must compensate for the boom's occlusion of the receiver aircraft. We present a novel algorithm for mitigating the negative effects of boom occlusion in stereo-based aerial environments. Our algorithm dynamically compensates for occluded receiver geometry by transforming the occluded areas into shadow volumes, which are then used to cull hidden geometry that would otherwise be consumed, in error, by the vision-processing and point-registration pipeline. Our algorithm improves computer-vision pose estimates by an average of 74% over a naive approach without shadow-volume culling.
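A much-simplified sketch of the shadow-volume cull, assuming the boom is approximated by an axis-aligned rectangle at a known depth in the camera frame (the geometry and values are hypothetical, not the paper's algorithm): any point farther than the boom whose line of sight passes through the rectangle lies inside the shadow volume and is dropped before point registration.

```python
import numpy as np

def cull_shadowed(points, rect_min, rect_max, d):
    """Drop points inside the shadow volume cast by a rectangle at depth d.

    points: (N, 3) array in the camera frame, +z forward.
    """
    z = points[:, 2]
    behind = z > d                                  # farther than the boom
    scale = np.where(z != 0, d / z, np.inf)
    px = points[:, 0] * scale                       # point projected back
    py = points[:, 1] * scale                       # onto the boom plane
    inside = ((px >= rect_min[0]) & (px <= rect_max[0]) &
              (py >= rect_min[1]) & (py <= rect_max[1]))
    return points[~(behind & inside)]

pts = np.array([[0.0, 0.0, 10.0],    # directly behind the boom: culled
                [3.0, 0.0, 10.0],    # off to the side: kept
                [0.0, 0.0, 2.0]])    # in front of the boom: kept
kept = cull_shadowed(pts, rect_min=(-1.0, -1.0), rect_max=(1.0, 1.0), d=5.0)
```

The paper's method works with the boom's actual articulated geometry rather than a flat rectangle, but the culling decision has this same shape: occluder silhouette plus depth ordering defines the volume to discard.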
Infrared and Electro-Optical Stereo Vision for Automated Aerial Refueling
Currently, Unmanned Aerial Vehicles (UAVs) are unsafe to refuel in flight due to the communication latency between the UAV's ground operator and the UAV. Providing UAVs with an in-flight refueling capability would extend their flight duration and increase their payload. Our solution to this problem is Automated Aerial Refueling (AAR) using stereo vision from electro-optical and infrared stereo cameras on a refueling tanker. To simulate a refueling scenario, we use ground vehicles as a pseudo tanker and a pseudo receiver UAV. Imagery of the receiver is collected by the cameras on the tanker and processed by a stereo block-matching algorithm to calculate a position and orientation estimate of the receiver. GPS and IMU truth data are then used to validate these results.
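The block-matching step described above can be sketched as follows, on synthetic imagery; the focal length and baseline are assumed values for illustration, not the paper's calibration:

```python
import numpy as np

def sad_disparity(left, right, block=3, max_disp=8):
    """Per-pixel disparity by sum-of-absolute-differences block matching."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

tex = np.arange(1.0, 5.0)[None, :]          # horizontal texture so matches are unique
left = np.zeros((12, 24));  left[4:8, 12:16] = tex
right = np.zeros((12, 24)); right[4:8, 8:12] = tex   # target shifted 4 px

disp = sad_disparity(left, right)
d = disp[5, 13]                             # disparity on the target
f, B = 400.0, 0.5                           # focal length (px), baseline (m): assumed
Z = f * B / d if d > 0 else np.inf          # depth via Z = f*B/d
```

Disparity maps like this one are what the pipeline converts into the receiver's position estimate; production systems use optimized matchers, but the disparity-to-depth relation Z = f·B/d is the same.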
Vision-Based Navigation System for Unmanned Aerial Vehicles
International Mention in the doctoral degree.
The main objective of this dissertation is to provide Unmanned Aerial Vehicles (UAVs) with a robust navigation system that allows them to perform complex tasks autonomously and in real time. The proposed algorithms address the navigation problem in outdoor as well as indoor environments, relying mainly on visual information captured by monocular cameras. In addition, this dissertation presents the advantages of using visual sensors as the main source of data, or as a complement to other sensors, in order to improve the accuracy and robustness of sensing.
The dissertation mainly covers several research topics based on computer vision techniques: (I) Pose Estimation, which provides a solution for estimating the 6D pose of the UAV. The algorithm combines the SIFT detector with the FREAK descriptor, which maintains feature-matching performance while decreasing computational time; the pose estimation problem is then solved by decomposing the world-to-frame and frame-to-frame homographies. (II) Obstacle Detection and Collision Avoidance, in which the UAV senses and detects frontal obstacles situated in its path. The detection algorithm mimics human behavior in detecting approaching obstacles: it analyzes the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around those points in consecutive frames. By comparing the obstacle's area ratio with the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, the algorithm extracts the collision-free zones around the obstacle and, combining these with the tracked waypoints, the UAV performs the avoidance maneuver. (III) Navigation Guidance, which generates the waypoints that determine the flight path based on the environment and the obstacles present, and provides a strategy to follow the path segments efficiently and perform the flight maneuver smoothly. (IV) Visual Servoing, which offers different control solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual information, in order to achieve flight stability, perform the correct maneuver, avoid possible collisions, and track the waypoints.
All the proposed algorithms have been verified with real flights in both indoor and outdoor environments, taking into consideration visual conditions such as illumination and texture. The obtained results have been validated against other systems, such as a VICON motion-capture system and DGPS in the case of the pose estimation algorithm. In addition, the proposed algorithms have been compared with several previous works in the state of the art, and the results demonstrate improvements in accuracy and robustness.
Finally, this dissertation concludes that visual sensors have the advantages of light weight and low power consumption while providing reliable information, which makes them a powerful tool in navigation systems to increase the autonomy
of the UAVs for real-world applications.
Official Doctoral Program in Electrical, Electronic and Automatic Engineering. Chair: Carlo Regazzoni; Secretary: Fernando García Fernández; Member: Pascual Campoy Cerver
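The expansion-ratio test in topic (II) can be sketched as follows; the hull construction is standard, but the threshold and the feature coordinates are assumptions for illustration, not values from the dissertation:

```python
def convex_hull(points):
    """Andrew's monotone chain; points is a list of (x, y) tuples."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(points):
    """Shoelace area of the convex hull of the tracked feature points."""
    h = convex_hull(points)
    return 0.5 * abs(sum(h[i][0] * h[(i + 1) % len(h)][1] -
                         h[(i + 1) % len(h)][0] * h[i][1]
                         for i in range(len(h))))

def approaching(prev_pts, curr_pts, ratio_thresh=1.2):
    """Flag an obstacle whose feature hull grows faster than the threshold."""
    return hull_area(curr_pts) / hull_area(prev_pts) > ratio_thresh

frame1 = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2)]        # matched features, frame t
frame2 = [(-1, -1), (5, -1), (5, 5), (-1, 5), (2, 2)]    # same features, frame t+1
```

An obstacle on a collision course appears to expand in the image, so its feature hull grows between consecutive frames; a receding or lateral obstacle does not trip the threshold.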
Towards Automated Aerial Refueling: Real Time Position Estimation with Stereo Vision
Aerial refueling is essential to the United States Air Force (USAF) core mission of rapid global mobility. However, in-flight refueling is not available to remotely piloted aircraft (RPA) or unmanned aerial systems (UAS). As reliance on drones for intelligence, surveillance, and reconnaissance (ISR) and other USAF core missions grows, the ability to automate aerial refueling for such systems becomes increasingly critical. New refueling platforms include sensors that could be used to estimate the relative position of an approaching aircraft, and relative position estimation is a key component of solving the automated aerial refueling (AAR) problem. Analysis of data from a one-seventh scale, real-world refueling scenario demonstrates that the relative position of an approaching aircraft can be estimated at rates between 10 Hz and 30 Hz using stereo vision. Linear regression models of position-estimate accuracy predict the results reported by other research in the simulation domain, suggesting that real-world accuracies are comparable to those reported in simulation. Further, by seeding the position estimation algorithm with previous position estimates, subsequent position estimation errors are reduced.
Use of LiDAR in Automated Aerial Refueling To Improve Stereo Vision Systems
The United States Air Force (USAF) executes five Core Missions, four of which depend on increased aircraft range. To better achieve global strike and reconnaissance, unmanned aerial vehicles (UAVs) require aerial refueling for extended missions. However, current aerial refueling capabilities are limited to manned aircraft due to the technical difficulty of refueling UAVs mid-flight: the latency between a UAV operator and the UAV is too large to respond adequately during such an operation. To overcome this limitation, the USAF wants to create a capability to guide the refueling boom into the refueling receptacle. This research explores the use of light detection and ranging (LiDAR) to create a relative pose estimate of the UAV and compares it to previous stereo vision results. Researchers at the Air Force Institute of Technology (AFIT) developed an algorithm to automate the refueling operation based on a stereo-vision system. While the system works, it requires a large amount of processing: it must detect an aircraft, compose an image from the two cameras' points of view, create a point cloud of the image, and run a point-cloud alignment algorithm to match the point cloud to a reference model. These complex steps require a large amount of processing power and are subject to noise and processing artifacts.
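The point-cloud alignment step described above can be illustrated with a minimal point-to-point ICP sketch (not AFIT's implementation; the synthetic data, offsets, and iteration count are arbitrary assumptions):

```python
import numpy as np

def best_rigid(src, dst):
    """Kabsch: least-squares rotation R, translation t with dst ~ R @ src + t."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=10):
    """Point-to-point ICP with brute-force nearest-neighbour matching."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]       # nearest model point for each sensed point
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(0)
model = rng.uniform(-1, 1, (40, 3))       # reference model point cloud
theta = 0.1                               # small yaw offset, radians
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
sensed = model @ Rz.T + np.array([0.05, -0.02, 0.01])   # simulated sensed cloud
aligned = icp(sensed, model)
err = np.abs(aligned - model).max()       # residual after registration
```

The brute-force nearest-neighbour search is the quadratic-cost step that makes this pipeline expensive at realistic cloud sizes, which motivates exploring LiDAR, whose direct range returns could shortcut the disparity-to-point-cloud stage.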