
    Infrared and Electro-Optical Stereo Vision for Automated Aerial Refueling

    Currently, Unmanned Aerial Vehicles (UAVs) are unsafe to refuel in flight because of the communication latency between the UAV and its ground operator. Providing UAVs with an in-flight refueling capability would improve their functionality by extending their flight duration and increasing their flight payload. Our solution to this problem is Automated Aerial Refueling (AAR) using stereo vision from electro-optical and infrared stereo cameras mounted on a refueling tanker. To simulate a refueling scenario, we use ground vehicles as a pseudo tanker and a pseudo receiver UAV. Imagery of the receiver is collected by the cameras on the tanker and processed by a stereo block matching algorithm to calculate a position and orientation estimate of the receiver. GPS and IMU truth data are then used to validate these results.
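    The abstract's core step is stereo block matching followed by reprojection of the disparity map to 3D points. The following is a minimal sketch of that idea using OpenCV, not the authors' pipeline: the image file names, the saved reprojection matrix Q, the matcher parameters, and the centroid-based position estimate are all illustrative assumptions.

```python
# Minimal sketch of stereo block matching + 3D reprojection.
# Assumes a rectified left/right pair and a 4x4 disparity-to-depth
# matrix Q from stereo calibration; file names and parameters are
# placeholders, not the pipeline described in the abstract.
import cv2
import numpy as np

left = cv2.imread("left_eo.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_eo.png", cv2.IMREAD_GRAYSCALE)
Q = np.load("reprojection_Q.npy")  # hypothetical calibration output

# Block matcher: numDisparities must be a multiple of 16, blockSize odd.
matcher = cv2.StereoBM_create(numDisparities=128, blockSize=15)
# compute() returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Reproject valid disparities to 3D points in the left-camera frame.
points_3d = cv2.reprojectImageTo3D(disparity, Q)
valid = disparity > 0
receiver_points = points_3d[valid]

# Crude relative-position estimate: centroid of the reconstructed points
# (a real system would segment the receiver and fit a full pose).
rel_position = receiver_points.mean(axis=0)
print("Estimated receiver offset (calibration units):", rel_position)
```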

    Toward Automated Aerial Refueling: Relative Navigation with Structure from Motion

    The USAF's use of UAS has expanded from reconnaissance to hunter/killer missions. As the UAS mission further expands into aerial combat, better performance and larger payloads come at the cost of range and loiter time. Additionally, the Air Force Future Operating Concept calls for "formations of uninhabited refueling aircraft... [that] enable refueling operations partway inside threat areas." However, a lack of accurate relative positioning information prevents safe close formation flight and contact between a tanker and a UAS. Adding cutting-edge vision systems to present refueling platforms may provide the information necessary to support an AAR mission by estimating the position of a trailing aircraft and providing inputs to a UAS controller capable of maintaining a given position. This research examines the ability of Structure from Motion (SfM) to generate relative navigation information. Previous AAR research efforts used differential GPS, LiDAR, and vision systems; this research aims to leverage current and future imaging technology to complement those solutions. The algorithm used in this thesis generates a point cloud by determining 3D structure from a sequence of 2D images, then utilizes PCA to register the point cloud to a reference model. The algorithm was tested in a real-world environment using a 1:7 scale F-15 model. Additionally, this thesis studies common 3D rigid registration algorithms in an effort to characterize their performance in the AAR domain; three algorithms are tested for runtime and registration accuracy on four data sets.
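    The PCA registration step the abstract mentions can be illustrated with a short sketch: align the centroids and principal axes of the SfM cloud with those of the reference model to get a rigid transform. This is a generic PCA-alignment sketch under simplifying assumptions (it ignores the axis sign and ordering ambiguities a real implementation must resolve), not the thesis's exact algorithm.

```python
# Sketch of PCA-based rigid registration: match centroids and principal
# axes of an SfM point cloud to a reference model. Arrays are (N, 3).
import numpy as np

def pca_axes(points):
    """Return centroid and principal axes (columns, largest variance first)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    return centroid, eigvecs[:, order]

def register_pca(cloud, model):
    """Rigid transform (R, t) mapping `cloud` roughly onto `model`."""
    c_cloud, axes_cloud = pca_axes(cloud)
    c_model, axes_model = pca_axes(model)
    R = axes_model @ axes_cloud.T            # rotate cloud axes onto model axes
    if np.linalg.det(R) < 0:                 # enforce a proper rotation
        axes_model[:, -1] *= -1
        R = axes_model @ axes_cloud.T
    t = c_model - R @ c_cloud
    return R, t

# Usage: aligned_cloud = (R @ cloud.T).T + t
```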

    Docking control for probe-drogue refueling: An additive-state-decomposition-based output feedback iterative learning control method

    Designing a controller for the docking maneuver in Probe-Drogue Refueling (PDR) is an important but challenging task, due to the complex system model and the high precision requirement. To overcome the limitations of feedback-only control, a feedforward control scheme known as Iterative Learning Control (ILC) is adopted in this paper. First, Additive State Decomposition (ASD) is used to address the tight coupling of input saturation, nonlinearity, and the Non-Minimum Phase (NMP) property by separating these features into two subsystems (a primary system and a secondary system). After the decomposition, an adjoint-type ILC is applied to the Linear Time-Invariant (LTI) primary system with NMP to achieve tracking of the entire output trajectory, whereas state feedback is used to stabilize the secondary system with input saturation. The two controllers designed for the two subsystems are then combined to achieve the original control goal of the PDR system. Furthermore, to compensate for receiver-independent uncertainties, a correction action based on the terminal docking error is proposed, which leads to a smaller docking error at the docking moment. Simulation tests demonstrate the performance of the proposed control method, which has advantages over the traditional derivative-type ILC and adjoint-type ILC in the docking control of PDR.
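    For readers unfamiliar with adjoint-type ILC, the sketch below shows the generic update on a lifted discrete LTI system y = G u, namely u_{k+1} = u_k + gamma * G^T e_k (gradient descent on the squared tracking error across trials). The plant impulse response, gains, and reference trajectory here are toy placeholders, not the paper's PDR model or its ASD-decomposed controller.

```python
# Generic adjoint-type ILC on a lifted discrete LTI system y = G u.
# Illustrates the update u_{k+1} = u_k + gamma * G^T e_k; all numbers
# below are illustrative, not the PDR docking model from the paper.
import numpy as np

def lifted_matrix(impulse_response, n):
    """Lower-triangular Toeplitz matrix G so that y = G @ u."""
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            k = i - j
            if k < len(impulse_response):
                G[i, j] = impulse_response[k]
    return G

n = 50
h = np.array([0.0, 0.5, 0.3, 0.1])       # toy impulse response (one-step delay)
G = lifted_matrix(h, n)
r = np.sin(np.linspace(0, np.pi, n))     # toy reference docking trajectory

u = np.zeros(n)
gamma = 0.5                               # learning gain; must satisfy a contraction condition
for trial in range(200):
    y = G @ u                             # run one docking "trial"
    e = r - y                             # trial tracking error
    u = u + gamma * (G.T @ e)             # adjoint-type update (steepest descent on ||e||^2)

print("final RMS tracking error:", np.sqrt(np.mean((r - G @ u) ** 2)))
```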

    Cooperative Virtual Sensor for Fault Detection and Identification in Multi-UAV Applications

    This paper considers the problem of fault detection and identification (FDI) in applications carried out by a group of unmanned aerial vehicles (UAVs) with visual cameras. In many cases, the UAVs have cameras mounted onboard for other applications, and these cameras can be used as bearing-only sensors to estimate the relative orientation of another UAV. The idea is to exploit the redundant information provided by these sensors onboard each of the UAVs to increase safety and reliability, detecting faults on UAV internal sensors that cannot be detected by the UAVs themselves. Fault detection is based on the generation of residuals which compare the expected position of a UAV, considered as target, with the measurements taken by one or more UAVs acting as observers that are tracking the target UAV with their cameras. Depending on the available number of observers and the way they are used, a set of strategies and policies for fault detection are defined. When the target UAV is being visually tracked by two or more observers, it is possible to obtain an estimation of its 3D position that could replace damaged sensors. Accuracy and reliability of this vision-based cooperative virtual sensor (CVS) have been evaluated experimentally in a multivehicle indoor testbed with quadrotors, injecting faults on data to validate the proposed fault detection methods.
    Funding: Comisión Europea H2020 644271; Comisión Europea FP7 288082; Ministerio de Economía, Industria y Competitividad DPI2015-71524-R; Ministerio de Economía, Industria y Competitividad DPI2014-5983-C2-1-R; Ministerio de Educación, Cultura y Deporte FP
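    The residual idea described above can be sketched as follows: an observer UAV predicts where the target should appear from the target's own reported position and compares that with the bearing its camera actually measures; a persistently large angular mismatch flags a suspected sensor fault. The geometry is simplified (world-frame bearings, no camera model) and the class, threshold, and persistence count are illustrative assumptions, not the paper's strategies.

```python
# Sketch of bearing-residual fault detection for a bearing-only observer.
# Poses, thresholds, and persistence counts are illustrative only.
import numpy as np

def unit_bearing(from_pos, to_pos):
    """Unit line-of-sight vector from `from_pos` to `to_pos`."""
    v = np.asarray(to_pos, float) - np.asarray(from_pos, float)
    return v / np.linalg.norm(v)

def bearing_residual(observer_pos, target_reported_pos, measured_bearing):
    """Angle (rad) between predicted and measured line-of-sight directions."""
    predicted = unit_bearing(observer_pos, target_reported_pos)
    measured = np.asarray(measured_bearing, float)
    measured = measured / np.linalg.norm(measured)
    cosang = np.clip(predicted @ measured, -1.0, 1.0)
    return np.arccos(cosang)

class BearingResidualMonitor:
    """Flags a fault when the residual stays above a threshold persistently."""
    def __init__(self, angle_threshold_rad=np.radians(5.0), fault_after=10):
        self.threshold = angle_threshold_rad
        self.fault_after = fault_after
        self.consecutive = 0

    def update(self, observer_pos, target_reported_pos, measured_bearing):
        r = bearing_residual(observer_pos, target_reported_pos, measured_bearing)
        self.consecutive = self.consecutive + 1 if r > self.threshold else 0
        return self.consecutive >= self.fault_after
```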

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity of ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications such as spatial ecology, pest detection, reef monitoring, forestry, volcanology, precision agriculture, wildlife species tracking, search and rescue, target tracking, atmosphere monitoring, chemical, biological, and natural disaster phenomena, fire prevention, flood prevention, volcanic monitoring, pollution monitoring, microclimates, and land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.