7 research outputs found

    Below Horizon Aircraft Detection Using Deep Learning for Vision-Based Sense and Avoid

    Commercial operation of unmanned aerial vehicles (UAVs) would benefit from an onboard ability to sense and avoid (SAA) potential mid-air collision threats. In this paper we present a new approach for detection of aircraft below the horizon. We address some of the challenges faced by existing vision-based SAA methods, such as detecting stationary aircraft (which have no relative motion to the background), rejecting moving ground vehicles, and simultaneously detecting multiple aircraft. We propose a multi-stage, vision-based aircraft detection system which utilises deep learning to produce candidate aircraft that we track over time. We evaluate the performance of our proposed system on real flight data, where we demonstrate detection ranges comparable to the state of the art with the additional capability of detecting stationary aircraft, rejecting moving ground vehicles, and tracking multiple aircraft.
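    The candidate-then-track idea described in this abstract can be illustrated with a minimal sketch (not the authors' actual pipeline): a per-frame detector, such as a deep network, is assumed to return candidate image coordinates, and a candidate is confirmed as an aircraft only after it persists across several frames, which keeps stationary aircraft while discarding transient false positives. All names, gates, and thresholds below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A candidate aircraft followed across consecutive frames."""
    x: float
    y: float
    hits: int = 1  # consecutive frames in which the candidate was re-detected

def update_tracks(tracks, detections, gate_px=25.0, confirm_after=5):
    """Greedy nearest-neighbour association of per-frame detections with tracks.

    `detections` is a list of (x, y) pixel coordinates from any per-frame
    candidate detector (e.g. a CNN).  A track is reported as a confirmed
    aircraft only after `confirm_after` consecutive hits, which suppresses
    one-off false positives while retaining stationary targets.
    """
    updated = []
    for dx, dy in detections:
        best = min(tracks,
                   key=lambda t: (t.x - dx) ** 2 + (t.y - dy) ** 2,
                   default=None)
        if best is not None and (best.x - dx) ** 2 + (best.y - dy) ** 2 <= gate_px ** 2:
            best.x, best.y, best.hits = dx, dy, best.hits + 1
            updated.append(best)
        else:
            updated.append(Track(dx, dy))
    confirmed = [t for t in updated if t.hits >= confirm_after]
    return updated, confirmed
```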

    Radar/electro-optical data fusion for non-cooperative UAS sense and avoid

    This paper focuses on the hardware/software implementation and flight results of a multi-sensor obstacle detection and tracking system based on radar/electro-optical (EO) data fusion. The sensing system was installed onboard an optionally piloted very light aircraft (VLA). Test flights with a single intruder plane of the same class were carried out to evaluate the level of achievable situational awareness and the capability to support autonomous collision avoidance. The system architecture is presented, with special emphasis on the adopted solutions for real-time integration of sensor and navigation measurements and high-accuracy estimation of sensor alignment. On the basis of Global Positioning System (GPS) navigation data gathered simultaneously with the multi-sensor tracking flight experiments, the potential of radar/EO fusion is compared with standalone radar tracking. Flight results demonstrate a significant improvement in collision detection performance, mostly due to improved angular rate estimation accuracy, and confirm the effectiveness of data fusion for addressing EO detection issues. Relative sensor alignment, performance of the navigation unit, and cross-sensor cueing are found to be key factors in fully exploiting the potential of multi-sensor architectures.
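    A rough illustration of why this fusion helps: radar supplies an accurate range while the EO sensor supplies a much finer bearing, so an inverse-variance weighted bearing combined with the radar range yields a better relative position (and hence angular rate) than radar alone. The sketch below is a simplified planar example; the sensor accuracies are assumed for illustration and are not the paper's values, and the actual system uses a full tracking filter.

```python
import numpy as np

def fuse_radar_eo(radar_range_m, radar_az_rad, eo_az_rad,
                  sigma_radar_az=np.deg2rad(2.0), sigma_eo_az=np.deg2rad(0.1)):
    """Inverse-variance fusion of radar and EO azimuth for one intruder.

    Radar provides the range directly; the fused azimuth is the weighted
    average of the coarse radar bearing and the fine EO bearing.  The
    standard deviations are illustrative assumptions, not measured values.
    """
    w_radar = 1.0 / sigma_radar_az ** 2
    w_eo = 1.0 / sigma_eo_az ** 2
    az = (w_radar * radar_az_rad + w_eo * eo_az_rad) / (w_radar + w_eo)
    sigma_az = (w_radar + w_eo) ** -0.5
    # Planar relative position in the sensor frame (2-D for brevity)
    x = radar_range_m * np.cos(az)
    y = radar_range_m * np.sin(az)
    return (x, y), az, sigma_az
```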

    Characterization of sky-region morphological-temporal airborne collision detection

    Automated airborne collision-detection systems are a key enabling technology for facilitating the integration of unmanned aerial vehicles (UAVs) into the national airspace. These safety-critical systems must be sensitive enough to provide timely warnings of genuine airborne collision threats, but not so sensitive as to cause excessive false alarms. Hence, an accurate characterisation of detection and false-alarm sensitivity is essential for understanding performance trade-offs, and system designers can exploit this characterisation to help achieve a desired balance in system performance. In this paper we experimentally evaluate a sky-region, image-based aircraft collision detection system based on morphological and temporal processing techniques. (Note that the examined detection approaches are not suitable for detecting potential collision threats against a ground-clutter background.) A novel methodology for collecting realistic airborne collision-course target footage in both head-on and tail-chase engagement geometries is described. Under (hazy) blue-sky conditions, our proposed system achieved detection ranges greater than 1540 m in 3 flight test cases with no false-alarm events in 14.14 hours of non-target data (under cloudy conditions, the system achieved detection ranges greater than 1170 m in 4 flight test cases with no false-alarm events in 6.63 hours of non-target data). Importantly, this paper is the first documented presentation of detection range versus false-alarm curves generated from airborne target and non-target image data.
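    One common instantiation of morphological-plus-temporal processing for small sky targets is a "close minus open" (CMO) filter followed by accumulation of the response over a short window. The sketch below shows that generic pattern, not the paper's exact pipeline; the kernel size, window length, and threshold are assumed values.

```python
import cv2
import numpy as np

def close_minus_open(gray, kernel_size=5):
    """Close-minus-open (CMO) morphological filter on a grayscale frame.

    Emphasises small, point-like features against a smooth sky background,
    a common front end for this class of detector.
    """
    k = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, k)
    opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, k)
    return cv2.subtract(closed, opened)

def temporal_detect(cmo_frames, per_frame_threshold=30):
    """Accumulate CMO responses over a short window and threshold the sum.

    A genuine target on a near-collision course moves slowly in the image,
    so its response builds up across frames while single-frame noise does
    not.  The threshold here is an illustrative assumption.
    """
    acc = np.sum(np.stack(cmo_frames).astype(np.float32), axis=0)
    return acc > per_frame_threshold * len(cmo_frames)
```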
