
    Survey of computer vision algorithms and applications for unmanned aerial vehicles

    This paper presents a comprehensive review of computer vision algorithms and vision-based intelligent applications developed for Unmanned Aerial Vehicles (UAVs) over the last decade. During this time, the evolution of technologies relevant to UAVs, such as component miniaturization, increased computational capability, and advances in computer vision techniques, has enabled significant progress in UAV technologies and applications. In particular, computer vision technologies integrated into UAVs make it possible to develop cutting-edge systems that cope with the difficulties of aerial perception, such as visual navigation, obstacle detection and avoidance, and aerial decision-making. These expert technologies have opened a wide spectrum of UAV applications beyond classic military and defense purposes. Unmanned Aerial Vehicles and computer vision are common topics in expert systems, and recent advances in perception technologies have enabled modern intelligent applications such as autonomous UAV positioning and automatic collision avoidance. The presented survey therefore focuses on artificial perception applications that represent important recent advances in the expert-system field related to Unmanned Aerial Vehicles. The paper presents the most significant advances in this field, those able to address fundamental technical limitations such as visual odometry, obstacle detection, and mapping and localization, and analyzes them according to their capabilities and potential utility. Moreover, the applications and UAVs are categorized according to different criteria. This research is supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2013-48314-C3-1-R).

    Infrared and Electro-Optical Stereo Vision for Automated Aerial Refueling

    Currently, Unmanned Aerial Vehicles (UAVs) are unsafe to refuel in flight due to the communication latency between the UAV's ground operator and the UAV. Providing UAVs with an in-flight refueling capability would improve their functionality by extending their flight duration and increasing their flight payload. Our solution to this problem is Automated Aerial Refueling (AAR) using stereo vision from electro-optical and infrared cameras mounted on a refueling tanker. To simulate a refueling scenario, we use ground vehicles as a pseudo tanker and a pseudo receiver UAV. Imagery of the receiver is collected by the cameras on the tanker and processed by a stereo block-matching algorithm to calculate a position and orientation estimate of the receiver. GPS and IMU truth data are then used to validate these results.
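The stereo approach above ultimately rests on recovering depth from the disparity between the two camera views. A minimal sketch of that geometric step for a rectified stereo pair; the focal length, baseline, and disparity values here are illustrative, not flight-test parameters:

```python
# Hedged sketch: depth from stereo disparity (Z = f * B / d), the core
# geometric relation behind stereo block-matching pose estimation.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a feature seen in both cameras of a rectified stereo rig.

    focal_px: focal length in pixels; baseline_m: camera separation in
    meters; disparity_px: horizontal pixel offset between the two views.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature on the receiver seen 8 px apart by cameras 0.5 m apart:
z = depth_from_disparity(focal_px=1000.0, baseline_m=0.5, disparity_px=8.0)
print(z)  # 62.5 (meters)
```

Small disparities mean large, noise-sensitive depths, which is why long-range stereo pose estimation is hard.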

    Optical Tracking for Relative Positioning in Automated Aerial Refueling

    An algorithm is designed to extract features from video of an air refueling tanker for use in determining the precise relative position of a receiver aircraft. The algorithm is based on receiving a known estimate of the tanker aircraft's position and attitude. The algorithm then uses a known feature model of the tanker to predict the location of those features on a video frame. A corner detector is used to extract features from the video. The measured corners are then associated with known features and tracked from frame to frame. For each frame, the associated features are used to calculate three-dimensional pointing vectors to the features of the tanker. These vectors are passed to a navigation algorithm, which uses extended Kalman filters, as well as data-linked INS data, to solve for the relative position of the tanker. The algorithms were tested using data from a flight test conducted by the USAF Test Pilot School using a C-12C as a simulated tanker and a Learjet LJ-24 as the simulated receiver. The system was able to provide at least a dozen useful measurements per frame, with and without projection error.
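The pointing vectors mentioned above can be formed from tracked corner locations with a standard pinhole camera model. A minimal sketch; the intrinsic parameters (focal lengths, principal point) are illustrative assumptions, not the calibration used in the flight test:

```python
import math

def pointing_vector(u, v, fx, fy, cx, cy):
    """Unit 3D pointing vector toward the feature seen at pixel (u, v).

    Assumes a pinhole camera: fx, fy are focal lengths in pixels and
    (cx, cy) is the principal point.
    """
    x = (u - cx) / fx          # normalized image coordinates
    y = (v - cy) / fy
    n = math.sqrt(x * x + y * y + 1.0)
    return (x / n, y / n, 1.0 / n)

# A corner detected exactly at the principal point points straight ahead:
print(pointing_vector(640, 480, 800.0, 800.0, 640, 480))  # (0.0, 0.0, 1.0)
```

Each tracked tanker feature yields one such vector per frame, which is what the extended Kalman filter consumes as a measurement.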

    Vision-Based navigation system for unmanned aerial vehicles

    International Mention in the doctoral degree. The main objective of this dissertation is to provide Unmanned Aerial Vehicles (UAVs) with a robust navigation system that allows them to perform complex tasks autonomously and in real time. The proposed algorithms address the navigation problem in outdoor as well as indoor environments, based mainly on visual information captured by monocular cameras. In addition, this dissertation presents the advantages of using visual sensors as the main source of data, or as a complement to other sensors, in order to improve the accuracy and robustness of sensing. The dissertation covers several research topics based on computer vision techniques: (I) Pose Estimation, which provides a solution for estimating the 6D pose of the UAV. This algorithm combines the SIFT detector with the FREAK descriptor, which maintains feature-point matching performance while decreasing computational time. The pose estimation problem is then solved through the decomposition of world-to-frame and frame-to-frame homographies. (II) Obstacle Detection and Collision Avoidance, in which the UAV senses and detects frontal obstacles situated in its path. The detection algorithm mimics human behavior for detecting approaching obstacles by analyzing the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around those feature points in consecutive frames. By comparing the area ratio of the obstacle with the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, the algorithm extracts the collision-free zones around the obstacle and, combined with the tracked waypoints, the UAV performs the avoidance maneuver.
    (III) Navigation Guidance, which generates the waypoints that determine the flight path based on the environment and the situated obstacles. It then provides a strategy to follow the path segments efficiently and perform the flight maneuver smoothly. (IV) Visual Servoing, which offers different control solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual information, in order to achieve flight stability, perform the correct maneuver, avoid possible collisions, and track the waypoints. All the proposed algorithms have been verified in real flights in both indoor and outdoor environments, taking into consideration visual conditions such as illumination and texture. The obtained results have been validated against other systems, such as the VICON motion capture system and DGPS in the case of the pose estimation algorithm. In addition, the proposed algorithms have been compared with several previous works in the state of the art, and the results prove the improvement in accuracy and robustness of the proposed algorithms. Finally, this dissertation concludes that visual sensors have the advantages of low weight and low power consumption while providing reliable information, making them a powerful tool in navigation systems for increasing the autonomy of UAVs in real-world applications. Official Doctoral Program in Electrical, Electronic and Automatic Engineering. President: Carlo Regazzoni. Secretary: Fernando García Fernández. Vocal: Pascual Campoy Cerver
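The expansion-ratio obstacle test described in topic (II) can be illustrated with a minimal sketch: if the convex hull around the tracked feature points grows between consecutive frames, the obstacle is likely approaching. The threshold and the hull areas below are hypothetical, not values from the dissertation:

```python
# Hedged sketch of an expansion-ratio check: a frontal obstacle that is
# closing appears larger frame over frame, so its feature-point convex
# hull area grows. The 1.2 threshold is an assumed illustrative value.

def is_approaching(hull_area_prev, hull_area_curr, expansion_threshold=1.2):
    """Flag an obstacle whose convex hull area grew enough between frames."""
    if hull_area_prev <= 0:
        raise ValueError("previous hull area must be positive")
    return hull_area_curr / hull_area_prev >= expansion_threshold

print(is_approaching(1500.0, 2100.0))  # True: hull grew by 40%
print(is_approaching(1500.0, 1550.0))  # False: near-constant size
```

In a full pipeline this per-pair decision would be smoothed over several frames before triggering an avoidance maneuver.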

    Toward Automated Aerial Refueling: Relative Navigation with Structure from Motion

    The USAF's use of UAS has expanded from reconnaissance to hunter/killer missions. As the UAS mission further expands into aerial combat, better performance and larger payloads will have a negative correlation with range and loiter times. Additionally, the Air Force Future Operating Concept calls for "formations of uninhabited refueling aircraft... [that] enable refueling operations partway inside threat areas." However, a lack of accurate relative positioning information prevents safely maintaining close formation flight and contact between a tanker and a UAS. The inclusion of cutting-edge vision systems on present refueling platforms may provide the information necessary to support an AAR mission by estimating the position of a trailing aircraft, providing inputs to a UAS controller capable of maintaining a given position. This research examines the ability of structure from motion (SfM) to generate relative navigation information. Previous AAR research efforts involved the use of differential GPS, LiDAR, and vision systems. This research aims to leverage current and future imaging technology to complement these solutions. The algorithm used in this thesis generates a point cloud by determining 3D structure from a sequence of 2D images. The algorithm then utilizes principal component analysis (PCA) to register the point cloud to a reference model. The algorithm was tested in a real-world environment using a 1:7 scale F-15 model. Additionally, this thesis studies common 3D rigid registration algorithms in an effort to characterize their performance in the AAR domain. Three algorithms are tested for runtime and registration accuracy with four data sets.

    Object Detection with Deep Learning to Accelerate Pose Estimation for Automated Aerial Refueling

    Remotely piloted aircraft (RPAs) cannot currently refuel during flight because the latency between the pilot and the aircraft is too great to safely perform aerial refueling maneuvers. However, an AAR system removes this limitation by allowing the tanker to directly control the RPA. Quickly finding the relative position and orientation (pose) of the approaching aircraft is the first step in creating an AAR system. Previous work at AFIT demonstrates that stereo camera systems provide robust pose estimation capability. This thesis first extends that work by examining the effects of the cameras' resolution on the quality of pose estimation. Next, it demonstrates a deep learning approach to accelerate the pose estimation process. The results show that this pose estimation process is precise and fast enough to safely perform AAR.
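One way a detector can accelerate a stereo pose pipeline, consistent with the approach described above, is by restricting expensive matching to the detected bounding box rather than the full frame. A minimal sketch of the resulting workload reduction; the frame size and box coordinates are illustrative assumptions:

```python
# Hedged sketch: fraction of the frame a stereo matcher must process
# once an object detector has localized the approaching aircraft.

def crop_fraction(image_w, image_h, bbox):
    """Area of the detector's bounding box as a fraction of the frame.

    bbox is (x0, y0, x1, y1) in pixels, with x1 > x0 and y1 > y0.
    """
    x0, y0, x1, y1 = bbox
    if x1 <= x0 or y1 <= y0:
        raise ValueError("bounding box must have positive extent")
    return ((x1 - x0) * (y1 - y0)) / (image_w * image_h)

# A detection covering a 400 x 300 px region of a 1920 x 1080 frame:
frac = crop_fraction(1920, 1080, (800, 400, 1200, 700))
print(round(frac, 3))  # 0.058: roughly 94% of the pixels can be skipped
```

The speedup of the downstream pose stages scales roughly with this fraction, which is why even a fast, coarse detector pays for itself.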

    Use of LiDAR in Automated Aerial Refueling To Improve Stereo Vision Systems

    The United States Air Force (USAF) executes five Core Missions, four of which depend on increased aircraft range. To better achieve global strike and reconnaissance, unmanned aerial vehicles (UAVs) require aerial refueling for extended missions. However, current aerial refueling capabilities are limited to manned aircraft due to the technical difficulty of refueling UAVs mid-flight: the latency between a UAV operator and the UAV is too large to respond adequately for such an operation. To overcome this limitation, the USAF wants to create a capability to guide the refueling boom into the refueling receptacle. This research explores the use of light detection and ranging (LiDAR) to create a relative pose estimate of the UAV and compares it to previous stereo vision results. Researchers at the Air Force Institute of Technology (AFIT) developed an algorithm to automate the refueling operation based on a stereo-vision system. While the system works, it requires a large amount of processing; it must detect an aircraft, compose an image between the two cameras' points of view, create a point cloud of the image, and run a point cloud alignment algorithm to match the point cloud to a reference model. These complex steps require a large amount of processing power and are subject to noise and processing artifacts.
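Unlike the stereo pipeline above, a LiDAR measures 3D structure directly: each return is a range along a known beam direction, which converts to a Cartesian point without any image matching. A minimal sketch of that conversion, using the usual spherical-to-Cartesian convention (an assumption; real sensors each define their own frame):

```python
import math

def lidar_to_xyz(range_m, azimuth_rad, elevation_rad):
    """Convert one LiDAR return (range, azimuth, elevation) to Cartesian.

    Convention assumed here: x forward, y left, z up; azimuth measured
    in the x-y plane, elevation up from that plane.
    """
    cos_el = math.cos(elevation_rad)
    x = range_m * cos_el * math.cos(azimuth_rad)
    y = range_m * cos_el * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# A return 30 m dead ahead lands on the forward axis:
print(lidar_to_xyz(30.0, 0.0, 0.0))  # (30.0, 0.0, 0.0)
```

Because these points come straight from the sensor, the stereo stages of aircraft detection, view composition, and depth reconstruction are skipped; only the point cloud alignment against the reference model remains.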

    Estimation algorithm for autonomous aerial refueling using a vision based relative navigation system

    A new impetus to develop autonomous aerial refueling has arisen out of the growing demand to expand the capabilities of unmanned aerial vehicles (UAVs). With autonomous aerial refueling, UAVs can retain the advantages of being small, inexpensive, and expendable, while offering superior range and loiter-time capabilities. VisNav, a vision-based sensor, offers the accuracy and reliability needed to provide relative navigation information for autonomous probe-and-drogue aerial refueling for UAVs. This thesis develops a Kalman filter to be used in combination with the VisNav sensor to improve the quality of the relative navigation solution during autonomous probe-and-drogue refueling. The performance of the Kalman filter is examined in a closed-loop autonomous aerial refueling simulation which includes models of the receiver aircraft, VisNav sensor, Reference Observer-based Tracking Controller (ROTC), and atmospheric turbulence. The Kalman filter is tuned and evaluated for four aerial refueling scenarios which simulate docking behavior in the absence of turbulence, and with light, moderate, and severe turbulence intensity. The docking scenarios demonstrate that, for a sample rate of 100 Hz, the tuning and performance of the filter do not depend on the intensity of the turbulence, and the Kalman filter improves the relative navigation solution from VisNav by as much as 50% during the early stages of the docking maneuver. For the aerial refueling scenarios modeled in this thesis, the addition of the Kalman filter to the VisNav/ROTC structure resulted in a small improvement in docking accuracy and precision. The Kalman filter did not, however, significantly improve the probability of a successful docking in turbulence for the simulated aerial refueling scenarios.
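The role the Kalman filter plays above, smoothing noisy sensor measurements into a steadier relative-position estimate, can be shown with a minimal scalar sketch. The noise variances and measurement values are illustrative assumptions, not the thesis tuning; the actual filter operates on the full relative state at 100 Hz:

```python
# Hedged sketch: a scalar (1D) Kalman filter smoothing noisy
# relative-position readings. q and r are assumed illustrative variances.

def kalman_1d(measurements, q=0.01, r=1.0):
    """Filter a sequence of scalar measurements.

    q: process noise variance (how much the true state may drift);
    r: measurement noise variance (how noisy the sensor is).
    Returns the filtered estimate after each measurement.
    """
    x, p = measurements[0], 1.0      # initial state and covariance
    estimates = [x]
    for z in measurements[1:]:
        p += q                       # predict: covariance grows
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update toward measurement z
        p *= (1.0 - k)               # covariance shrinks after update
        estimates.append(x)
    return estimates

# Noisy readings scattered around a true separation of 10 m:
est = kalman_1d([10.4, 9.7, 10.2, 9.9, 10.1])
print(all(abs(e - 10.0) < 0.5 for e in est))  # True
```

The filtered sequence varies less than the raw readings, which is the behavior that improves the early-approach navigation solution in the simulation.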

    INVESTIGATION INTO SPEED VS ACCURACY FOR AN AUTOMATED VEHICLE CHARGING SYSTEM

    Recent advances in energy storage technology have finally allowed electric vehicles to enter the mainstream market. The suite of electronics used on these vehicles for power management and driving assistance opens the possibility of these vehicles operating autonomously. An autonomous vehicle must be recharged for it to drive to a destination beyond the range of its battery system or to operate continuously. To extend autonomous operation, an autonomous charging system was developed. A design requirement was that the system be built from consumer-grade components common to the DIY IoT movement to decrease system cost. The design and manufacture of the autonomous charging system are briefly discussed but are not the focus of this thesis. The focus of this thesis is the investigation into the relationship between the operating speed and the accuracy of the automation algorithm. Initial development focused on delivering the best performance, but the run time of the automation algorithm was more than ten minutes, which was too lengthy. The only portions of the code that could be improved were the hunt cycles for the port cover and the port detent. During the hunt cycles, the algorithm uses closed-loop feedback between a vision system and the kinematics of the robot. The feedback loop compares the BB centroid to the center of the camera's FOV, and the hunt is complete when the comparison drops below a defined threshold. For the hunts, accuracy was decreased by increasing the threshold. Three thresholds were chosen for the Port hunt and the Detent hunt, representing high, medium, and low accuracy. An experiment was conducted using different combinations of accuracy for each hunt. The hypothesis was that the cycle time could be reduced by decreasing accuracy without sacrificing system performance. Test results validated the hypothesis, and the cycle time was reduced by 16% without impacting system performance.
    This was done by using the lowest-accuracy parameter for the charging Port hunt and the medium accuracy for the Detent hunt. During the process of conducting the DOE, additional areas of improvement were identified for both the software and the mechanical systems. The proposed improvements were developed and implemented prior to outdoor, full-cycle testing. Outdoor tests were then completed and verified that the implemented improvements, along with the accuracy parameters that were the outputs of the test results, decreased the full cycle time by 16%. M.S.
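The hunt-cycle termination test described above, comparing the BB centroid against the FOV center until the error falls below a threshold, can be sketched minimally. The pixel coordinates and thresholds here are hypothetical, not the thesis DOE values:

```python
# Hedged sketch of the hunt termination check: the closed-loop hunt is
# done when the tracked BB centroid lies within threshold_px of the
# camera FOV center. A looser threshold ends the hunt sooner (less
# accuracy, shorter cycle time), which is the speed/accuracy trade-off.

def hunt_complete(bb_centroid, fov_center, threshold_px):
    """True when the centroid-to-center pixel error is within threshold."""
    dx = bb_centroid[0] - fov_center[0]
    dy = bb_centroid[1] - fov_center[1]
    return (dx * dx + dy * dy) ** 0.5 <= threshold_px

center = (320, 240)
print(hunt_complete((325, 243), center, threshold_px=10))  # True: loose
print(hunt_complete((325, 243), center, threshold_px=3))   # False: tight
```

With the same centroid error, the loose threshold terminates the hunt while the tight one keeps the robot iterating, illustrating why relaxing the thresholds cut the cycle time.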