59 research outputs found

    Object Detection with Deep Learning to Accelerate Pose Estimation for Automated Aerial Refueling

    Get PDF
    Remotely piloted aircraft (RPAs) cannot currently refuel during flight because the latency between the pilot and the aircraft is too great to safely perform aerial refueling maneuvers. However, an automated aerial refueling (AAR) system removes this limitation by allowing the tanker to directly control the RPA. Quickly finding the relative position and orientation (pose) of the approaching aircraft from the tanker is the first step in creating an AAR system. Previous work at AFIT demonstrates that stereo camera systems provide robust pose estimation capability. This thesis first extends that work by examining the effects of the cameras' resolution on the quality of pose estimation. Next, it demonstrates a deep learning approach to accelerate the pose estimation process. The results show that this pose estimation process is precise and fast enough to safely perform AAR.
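
    To make the acceleration idea concrete: a detector's bounding box can be used to crop the stereo frames so the expensive pose estimation step runs on far fewer pixels. The crop helper and the (x, y, w, h) box format below are illustrative assumptions, not the thesis's actual pipeline.

        # Illustrative sketch: crop a stereo frame to a detected bounding box
        # (plus a safety margin) before pose estimation runs on it.
        def crop_to_detection(frame, box, margin=0.1):
            """frame: HxWxC array; box: hypothetical (x, y, w, h) detector output."""
            x, y, w, h = box
            dx, dy = int(w * margin), int(h * margin)
            x0, y0 = max(x - dx, 0), max(y - dy, 0)
            x1 = min(x + w + dx, frame.shape[1])
            y1 = min(y + h + dy, frame.shape[0])
            return frame[y0:y1, x0:x1]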

    Stereo Camera Calibrations with Optical Flow

    Get PDF
    Remotely Piloted Aircraft (RPA) are currently unable to refuel mid-air due to the large communication delays between their operators and the aircraft. Automated aerial refueling (AAR) seeks to address this problem by reducing the communication delay to a fast line-of-sight signal between the tanker and the RPA. Current proposals for AAR use stereo cameras to estimate where the receiving aircraft is relative to the tanker, but they require accurate calibrations to locate the receiver accurately. This paper improves the accuracy of this calibration by improving three of its components: increasing the quantity of intrinsic calibration data with CNN preprocessing, improving the quality of the intrinsic calibration data through a novel linear regression filter, and reducing the epipolar error of the stereo calibration with optical flow for feature matching and alignment. A combination of all three approaches yielded significant epipolar error improvements over OpenCV's stereo calibration while also providing significant precision improvements.
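
    As an illustration of the third component, optical flow can supply the left/right feature correspondences that a stereo calibration consumes. The sketch below uses OpenCV's pyramidal Lucas-Kanade flow; it is a minimal example of the general technique, not the paper's exact pipeline.

        # Match features from the left image into the right image with
        # pyramidal Lucas-Kanade optical flow, keeping only tracks that
        # converged; the resulting point pairs can feed a stereo calibration.
        import cv2

        def flow_correspondences(left_gray, right_gray, max_corners=500):
            pts_left = cv2.goodFeaturesToTrack(left_gray, max_corners, 0.01, 10)
            pts_right, status, _err = cv2.calcOpticalFlowPyrLK(
                left_gray, right_gray, pts_left, None)
            good = status.ravel() == 1
            return pts_left[good].reshape(-1, 2), pts_right[good].reshape(-1, 2)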

    A dataset for autonomous aircraft refueling on the ground (AGR)

    Get PDF
    Automatic aircraft ground refueling (AAGR) can improve the safety, efficiency, and cost-effectiveness of aircraft ground refueling (AGR), a critical and frequent operation on almost all aircraft. Recent AAGR relies on machine vision, artificial intelligence, and robotics to implement automation. An essential step for automation is AGR scene recognition, which can support further component detection, tracking, process monitoring, and environmental awareness. As in many practical and commercial applications, aircraft refueling data is usually confidential, and no standardized workflow or definition is available. These are the prerequisites and critical challenges for deploying and benefiting from advanced data-driven AGR. This study presents a dataset (the AGR Dataset) for AGR scene recognition, built with image crawling, augmentation, and classification, which has been made available to the community. The AGR Dataset comprises over 3k images crawled from 13 databases (over 26k images after augmentation), covering different aircraft, illumination, and environmental conditions. The ground-truth labeling was conducted manually using a proposed tree-formed decision workflow and six specific AGR tags. Various professionals have independently reviewed the AGR Dataset to keep it unbiased. This study proposes the first aircraft refueling image dataset, along with image labeling software with a UI to automate the labeling workflow.
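
    The abstract does not spell out the six AGR tags, but a tree-formed decision workflow can be sketched as a chain of scene-level questions that terminates in exactly one tag. The questions and tag names below are invented placeholders for illustration only.

        # Hypothetical tree-formed labeling decision: each answer either
        # assigns a tag or descends to the next question, so every image
        # receives exactly one label.
        def label_agr_scene(has_aircraft, has_refueler, hose_connected):
            if not has_aircraft:
                return "no-aircraft"            # placeholder tag name
            if not has_refueler:
                return "aircraft-only"          # placeholder tag name
            if not hose_connected:
                return "refueler-approaching"   # placeholder tag name
            return "refueling-in-progress"      # placeholder tag name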

    Ship recognition on the sea surface using aerial images taken by UAV: a deep learning approach

    Get PDF
    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.

    Oceans are very important for mankind: they are a major source of food, they have a large impact on the global environmental equilibrium, and most of the world's commerce moves over them. Thus, maritime surveillance and monitoring, in particular identifying the ships in use, is of great importance for overseeing activities like fishing, marine transportation, navigation in general, illegal border encroachment, and search and rescue operations. In this thesis, we used images obtained with Unmanned Aerial Vehicles (UAVs) over the Atlantic Ocean to identify what type of ship (if any) is present in a given location. Images generated from UAV cameras suffer from camera motion, scale variability, variability in the sea surface, and sun glare. Extracting information from these images is challenging and is mostly done by human operators, but advances in computer vision and the development of deep learning techniques in recent years have made it possible to do so automatically. We used four state-of-the-art pretrained deep learning network models, namely VGG16, Xception, ResNet, and InceptionResNet, trained on the ImageNet dataset, modified their original structure using transfer-learning-based fine-tuning techniques, and then trained them on our dataset to create new models. We achieved very high accuracy (99.6 to 99.9% correct classifications) when classifying the ships that appear in the images of our dataset. With such a high success rate (albeit at the cost of high computing power), we can proceed to implement these algorithms on maritime patrol UAVs, and thus improve Maritime Situational Awareness.
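
    The transfer-learning setup described above (an ImageNet-pretrained backbone with a replaced, retrained head) looks roughly like the following Keras sketch. The input size, head layout, and class count are assumptions; the thesis's exact fine-tuning configuration is not given in the abstract.

        # Transfer learning sketch: freeze an ImageNet-pretrained VGG16
        # backbone and train a new classification head on the ship dataset.
        import tensorflow as tf
        from tensorflow.keras import layers, models

        NUM_SHIP_CLASSES = 5  # placeholder; the thesis's class count is not stated

        base = tf.keras.applications.VGG16(
            weights="imagenet", include_top=False, input_shape=(224, 224, 3))
        base.trainable = False  # train only the new head at first

        model = models.Sequential([
            base,
            layers.GlobalAveragePooling2D(),
            layers.Dense(256, activation="relu"),
            layers.Dense(NUM_SHIP_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="categorical_crossentropy",
                      metrics=["accuracy"])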

    Improving Deep Learning with Generic Data Augmentation

    Get PDF
    Deep artificial neural networks require a large corpus of training data in order to learn effectively, and collecting such training data is often expensive and laborious. Data augmentation overcomes this issue by artificially inflating the training set with label-preserving transformations. Recently there has been extensive use of generic data augmentation to improve Convolutional Neural Network (CNN) task performance. This study benchmarks various popular data augmentation schemes to allow researchers to make informed decisions as to which training methods are most appropriate for their data sets. Various geometric and photometric schemes are evaluated on a coarse-grained data set using a relatively simple CNN. Experimental results, run using 4-fold cross-validation and reported in terms of Top-1 and Top-5 accuracy, indicate that cropping in geometric augmentation significantly increases CNN task performance.
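
    A typical pipeline mixing the two scheme families reads like the sketch below; the crop size and jitter strengths are illustrative defaults, not the paper's settings.

        # Generic augmentation sketch: geometric schemes (random crop, the
        # one the study found most effective, plus a horizontal flip) and a
        # photometric scheme (color jitter), applied to each training image.
        from torchvision import transforms

        augment = transforms.Compose([
            transforms.RandomCrop(224),              # geometric
            transforms.RandomHorizontalFlip(),       # geometric
            transforms.ColorJitter(brightness=0.2,   # photometric
                                   contrast=0.2,
                                   saturation=0.2),
            transforms.ToTensor(),
        ])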

    Virtual Testbed for Monocular Visual Navigation of Small Unmanned Aircraft Systems

    Full text link
    Monocular visual navigation methods have seen significant advances in the last decade, recently producing several real-time solutions for autonomously navigating small unmanned aircraft systems without relying on GPS. This is critical for military operations which may involve environments where GPS signals are degraded or denied. However, testing and comparing visual navigation algorithms remains a challenge since visual data is expensive to gather. Conducting flight tests in a virtual environment is an attractive solution prior to committing to outdoor testing. This work presents a virtual testbed for conducting simulated flight tests over real-world terrain and analyzing the real-time performance of visual navigation algorithms at 31 Hz. This tool was created to ultimately find a visual odometry algorithm appropriate for further GPS-denied navigation research on fixed-wing aircraft, even though all of the algorithms were designed for other modalities. This testbed was used to evaluate three current state-of-the-art, open-source monocular visual odometry algorithms on a fixed-wing platform: Direct Sparse Odometry, Semi-Direct Visual Odometry, and ORB-SLAM2 (with loop closures disabled).
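
    A harness in the spirit of this testbed only needs to feed frames to an odometry implementation and check that it keeps pace with the camera. The vo.track(frame, t) interface below is invented for illustration; each of the three evaluated systems exposes its own API.

        # Hypothetical timing harness: run a visual odometry algorithm over
        # a frame sequence and report whether it sustains the 31 Hz camera rate.
        import time

        def measure_rate(vo, frames, timestamps, camera_hz=31.0):
            start = time.perf_counter()
            for frame, t in zip(frames, timestamps):
                vo.track(frame, t)  # pose estimate ignored; we only time it
            hz = len(frames) / (time.perf_counter() - start)
            verdict = "real-time" if hz >= camera_hz else "too slow"
            print(f"{len(frames)} frames at {hz:.1f} Hz ({verdict} at {camera_hz} Hz)")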

    Air Force Institute of Technology Research Report 2019

    Get PDF
    This Research Report presents the FY19 research statistics and contributions of the Graduate School of Engineering and Management (EN) at AFIT. AFIT research interests and faculty expertise cover a broad spectrum of technical areas related to USAF needs, as reflected by the range of topics addressed in the faculty and student publications listed in this report. In most cases, the research work reported herein is directly sponsored by one or more USAF or DOD agencies. AFIT welcomes the opportunity to conduct research on additional topics of interest to the USAF, DOD, and other federal organizations when adequate manpower and financial resources are available and/or provided by a sponsor. In addition, AFIT provides research collaboration and technology transfer benefits to the public through Cooperative Research and Development Agreements (CRADAs). Interested individuals may discuss ideas for new research collaborations, potential CRADAs, or research proposals with individual faculty using the contact information in this document.

    Air Force Institute of Technology Research Report 2020

    Get PDF
    This Research Report presents the FY20 research statistics and contributions of the Graduate School of Engineering and Management (EN) at AFIT. AFIT research interests and faculty expertise cover a broad spectrum of technical areas related to USAF needs, as reflected by the range of topics addressed in the faculty and student publications listed in this report. In most cases, the research work reported herein is directly sponsored by one or more USAF or DOD agencies. AFIT welcomes the opportunity to conduct research on additional topics of interest to the USAF, DOD, and other federal organizations when adequate manpower and financial resources are available and/or provided by a sponsor. In addition, AFIT provides research collaboration and technology transfer benefits to the public through Cooperative Research and Development Agreements (CRADAs). Interested individuals may discuss ideas for new research collaborations, potential CRADAs, or research proposals with individual faculty using the contact information in this document.

    Semi-Automatic Data Annotation guided by Feature Space Projection

    Full text link
    Data annotation using visual inspection (supervision) of each training sample can be laborious. Interactive solutions alleviate this by helping experts propagate labels from a few supervised samples to unlabeled ones based solely on the visual analysis of their feature space projection (with no further sample supervision). We present a semi-automatic data annotation approach based on suitable feature space projection and semi-supervised label estimation. We validate our method on the popular MNIST dataset and on images of human intestinal parasites with and without fecal impurities, a large and diverse dataset that makes classification very hard. We evaluate two approaches for semi-supervised learning from the latent and projection spaces, to choose the one that best reduces user annotation effort and also increases classification accuracy on unseen data. Our results demonstrate the added value of visual analytics tools that combine complementary abilities of humans and machines for more effective machine learning.

    Comment: 28 pages, 10 figures
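
    One concrete instance of label estimation from a projection is to project the features to 2-D and run a graph-based semi-supervised propagator there. The sketch below uses t-SNE and scikit-learn's LabelSpreading as stand-ins; the paper's own projection and label-estimation methods may differ.

        # Propagate a few supervised labels to all samples through a 2-D
        # projection of the feature space (-1 marks unlabeled samples).
        from sklearn.manifold import TSNE
        from sklearn.semi_supervised import LabelSpreading

        def propagate_labels(features, partial_labels):
            projection = TSNE(n_components=2).fit_transform(features)
            spreader = LabelSpreading(kernel="knn", n_neighbors=7)
            spreader.fit(projection, partial_labels)
            return spreader.transduction_  # estimated label for every sample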