7 research outputs found

    Towards Explainability of UAV-Based Convolutional Neural Networks for Object Classification

    Autonomy Teaming and TRAjectories for Complex Trusted Operational Reliability (ATTRACTOR), a new NASA Convergent Aeronautical Solutions (CAS) project, focuses on trust and trustworthiness of autonomous systems. One critical research element of ATTRACTOR is explainability of the decision-making across relevant subsystems of an autonomous system. The ability to explain why an autonomous system makes a decision is needed to establish a basis of trustworthiness to safely complete a mission. Convolutional Neural Networks (CNNs) are popular visual object classifiers that have achieved high classification performance without clear insight into the mechanisms of their internal layers and features. To explore the explainability of the internal components of CNNs, we reviewed three feature visualization methods in a layer-by-layer approach using aviation-related images as inputs. Our approach is to analyze the key components of a classification event in order to generate component labels for features of the classified image at different layer depths. For example, an airplane has wings, engines, and landing gear. These components may be identifiable somewhere in the hidden layers, and the resulting descriptive labels could be provided to a human or machine teammate during a shared mission to engender trust. Each descriptive feature may also be decomposed into a combination of primitives such as shapes and lines. We expect that knowing the combination of shapes and parts that produce a classification will engender trust in the system and provide insight into designing better CNN structures.
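
The layer-by-layer inspection described above can be sketched in miniature: a toy forward pass that records every intermediate activation so each depth can be examined or visualized. This is a generic illustration in plain NumPy, not the paper's code; the kernels, image size, and two-layer depth are all hypothetical.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def forward_with_activations(image, kernels):
    """Run the image through stacked conv+ReLU layers, keeping every
    intermediate activation so each layer's features can be inspected."""
    activations = []
    x = image
    for k in kernels:
        x = np.maximum(conv2d(x, k), 0.0)  # ReLU nonlinearity
        activations.append(x)
    return activations

# Toy 8x8 "image" and two hand-made gradient kernels (stand-ins for learned features).
rng = np.random.default_rng(0)
img = rng.random((8, 8))
kernels = [np.array([[1.0, -1.0]]),      # horizontal gradient
           np.array([[1.0], [-1.0]])]    # vertical gradient
acts = forward_with_activations(img, kernels)
for depth, a in enumerate(acts, start=1):
    print(f"layer {depth}: shape {a.shape}")
```

In a real CNN the same idea is applied by hooking each layer's output; the stored activations are then what the visualization methods render and what component labels would be attached to.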

    Optical Geolocation for Small Unmanned Aerial Systems

    This paper presents an airborne optical geolocation system that uses four optical targets to provide position and attitude estimation for an sUAS supporting the NASA Acoustic Research Mission (ARM), whose goal is to reduce nuisance airframe noise during approach and landing. A large, precision-positioned microphone array captures the airframe noise for multiple passes of a Gulfstream III aircraft. For health monitoring of the microphone array, the Acoustic Calibration Vehicle (ACV) sUAS completes daily flights with an onboard speaker emitting tones at frequencies optimized for determining microphone functionality. An accurate estimate of the ACV's position relative to the array is needed for microphone health monitoring. To this end, an optical geolocation system using a downward-facing camera mounted on the ACV was developed. The 3D position of the ACV is computed using the pinhole camera model. A novel optical geolocation algorithm first detects the targets; a recursive algorithm then tightens their localization. Finally, the position of the sUAS is computed from the image coordinates of the targets, the 3D world coordinates of the targets, and the camera matrix. A Real-Time Kinematic (RTK) GPS system serves as a reference for evaluating the optical geolocation system.
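
The final position step can be sketched under a simplifying assumption: a level, nadir-pointing camera over flat ground (the paper's full algorithm also recovers attitude). In that case each ground target contributes two equations linear in the camera position, so four targets give a small least-squares problem. The focal length, principal point, and target layout below are hypothetical.

```python
import numpy as np

def geolocate_nadir(world_xy, image_uv, f, cx, cy):
    """Estimate camera position (Cx, Cy, h) from ground targets seen by a
    level, downward-facing pinhole camera over flat ground (Z = 0).

    Projection model per target i at camera height h:
        u_i - cx = f * (X_i - Cx) / h
        v_i - cy = f * (Y_i - Cy) / h
    Rearranging gives, per target, two equations linear in (Cx, Cy, h):
        f*Cx + (u_i - cx)*h = f*X_i
        f*Cy + (v_i - cy)*h = f*Y_i
    """
    rows, rhs = [], []
    for (X, Y), (u, v) in zip(world_xy, image_uv):
        rows.append([f, 0.0, u - cx]); rhs.append(f * X)
        rows.append([0.0, f, v - cy]); rhs.append(f * Y)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol  # (Cx, Cy, h)

# Synthetic check: four targets, a known camera pose, and exact projections.
world = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]
C = np.array([2.0, 3.0, 10.0])           # true camera position (m)
f, cx, cy = 1000.0, 960.0, 540.0          # hypothetical intrinsics
uv = [(cx + f * (X - C[0]) / C[2], cy + f * (Y - C[1]) / C[2]) for X, Y in world]
print(geolocate_nadir(world, uv, f, cx, cy))
```

With four targets the system is overdetermined (8 equations, 3 unknowns), which is what makes the least-squares solution robust to small detection errors.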

    Sense and Avoid for Small Unmanned Aircraft Systems

    The ability of small Unmanned Aircraft Systems (sUAS) to safely operate beyond line of sight is of great interest to consumers, businesses, and scientific research. In this work, we investigate Sense and Avoid (SAA) algorithms for sUAS encounters using three 4k cameras at separation distances between 200 m and 2000 m. Video was recorded of different sUAS platforms designed to appear similar to expected air traffic, under varying weather conditions and flight encounter scenarios. Both university partners and NASA developed the SAA methods presented in this report.

    Safe2Ditch Autonomous Crash Management System for Small Unmanned Aerial Systems: Concept Definition and Flight Test Results

    Small unmanned aerial systems (sUAS) have the potential for a wide array of highly beneficial applications, too numerous to list comprehensively, including search and rescue, fire spotting, and precision agriculture. sUAS vehicles typically weigh less than 55 lbs and will perform flight operations in the National Airspace System (NAS). Certain sUAS applications, such as package delivery, will include operations in close proximity to the general public. The full benefit of sUAS is contingent upon resolving several technological areas in order to provide an acceptable level of risk for widespread sUAS operations. sUAS operations pose risks to people and property on the ground as well as to manned aviation. The more significant technological areas include, but are not limited to: autonomous sense and avoid and deconfliction of sUAS from other sUAS and manned aircraft; communications and interfaces between the vehicle and human operators; and the overall reliability of the sUAS and its constituent subsystems. While all of these areas contribute significantly to the safe execution of sUAS flight operations, contingency or emergency systems can greatly mitigate risk in situations where the vehicle is in distress. The Safe2Ditch (S2D) system is an autonomous crash management system for sUAS. Its function is to enable sUAS to execute emergency landings that avoid injuring people on the ground and damaging property and, lastly, that preserve the sUAS and payload. A flight test effort was performed to test the integration of sub-elements of the S2D system with a representative multi-rotor sUAS.

    Standardized evaluation of algorithms for computer-aided diagnosis of dementia based on structural MRI: The CADDementia challenge

    Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n = 30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org
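
The two headline metrics above, multi-class accuracy and AUC, can be sketched generically; this is an illustration of the standard definitions, not the challenge's evaluation code, and the labels in the usage example are hypothetical. The AUC here is computed one class vs. the rest via the Mann-Whitney U statistic.

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the reference diagnosis."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def auc_one_vs_rest(y_true, scores, positive):
    """AUC for one class vs. the rest: the probability that a randomly
    chosen positive case outscores a randomly chosen negative (ties half)."""
    y = np.asarray(y_true)
    s = np.asarray(scores, dtype=float)
    pos, neg = s[y == positive], s[y != positive]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical 3-class example (e.g., 0=AD, 1=MCI, 2=control).
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(accuracy(y_true, y_pred))
```

A multi-class AUC can then be formed by averaging the one-vs-rest values over the three diagnostic groups.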

    Monocular Ranging for Small Unmanned Aerial Systems in the Far-Field

    The recent proliferation of small Unmanned Aerial Systems (sUAS) applications requires onboard collision avoidance systems to mitigate the risk of collision with non-cooperative aircraft and manned aircraft, which may not see sUAS in time to perform an avoidance maneuver. An attractive avenue for onboard collision avoidance is the use of machine vision cameras, due to their low size, weight, and power (SWaP) requirements. In this paper, we characterize the range performance of a machine vision system developed in-house and mounted on an sUAS. The technique was designed to estimate the performance of a sense-and-avoid system and to ensure that the sensing components meet the well-clear requirements for the chosen platform and avoidance strategy. Experimental flight-test data were acquired from test flights flown along multiple collision geometries for two intruders: a General Aviation (GA) aircraft and a fixed-wing sUAS. The ownship and both intruders were instrumented with inertial navigation systems (INS) recording position and attitude information. The range at first detection, R_0, was extracted from in-flight imagery of head-on collision course geometry, synchronized with INS data from both aircraft and ground-truth values extracted from the raw imagery. This initial detection distance, R_0, scales with atmospheric attenuation; therefore, under clear-sky conditions, the derived R_0 value represents the upper bound on the detection range achievable by the test configuration of the detector. Results indicate that the maximum initial detection distance for a 4k-resolution action camera fitted with a 41° Field of View (FOV) lens is 2.763 ± 0.037 km for a GA aircraft and 0.881 ± 0.061 km for a fixed-wing sUAS. The results of this study suggest that a vision-based detect-and-track system may be analyzed using the sensor characterization and contextualized within aircraft well-clear volumes.
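
The reported geometry can be checked with simple pinhole arithmetic: the focal length in pixels follows from the 4k sensor width and the 41° FOV, and the pixel subtense of a target follows from its physical size and range. The wingspan values below (~11 m for a GA aircraft, ~2 m for a fixed-wing sUAS) are assumptions for illustration, not figures from the paper.

```python
import math

def focal_length_px(image_width_px, hfov_deg):
    """Pinhole focal length in pixels from sensor width and horizontal FOV."""
    return (image_width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)

def pixels_on_target(focal_px, target_size_m, range_m):
    """Approximate pixel subtense of a target of given physical size."""
    return focal_px * target_size_m / range_m

f_px = focal_length_px(3840, 41.0)  # 4k camera, 41 deg FOV (from the paper)
# Hypothetical spans: ~11 m for a GA aircraft, ~2 m for a fixed-wing sUAS.
print(round(pixels_on_target(f_px, 11.0, 2763.0), 1))
print(round(pixels_on_target(f_px, 2.0, 881.0), 1))
```

This kind of back-of-the-envelope subtense estimate is how a detection range can be translated into a requirement on detector sensitivity for a given camera configuration.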

    Asymptotic Methods in the Theory of Ordinary Differential Equations
