499 research outputs found

    Vision-based Real-Time Aerial Object Localization and Tracking for UAV Sensing System

    This paper focuses on vision-based obstacle detection and tracking for unmanned aerial vehicle navigation. A real-time object localization and tracking strategy for monocular image sequences is developed by integrating object detection and tracking into a dynamic Kalman model. At the detection stage, the object of interest is automatically detected and localized from a saliency map computed via the image background connectivity cue at each frame; at the tracking stage, a Kalman filter provides a coarse prediction of the object state, which is further refined via a local detector incorporating the saliency map and the temporal information between consecutive frames. Compared to existing methods, the proposed approach requires no manual initialization for tracking, runs much faster than state-of-the-art trackers of its kind, and achieves competitive tracking performance on a large number of image sequences. Extensive experiments demonstrate the effectiveness and superior performance of the proposed approach. (Comment: 8 pages, 7 figures)
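The detect-then-track loop described above, a Kalman prediction giving a coarse object state that is then refined by a local detection, can be sketched with a constant-velocity Kalman filter over image coordinates. The motion model and noise magnitudes below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

# Constant-velocity Kalman filter over 2-D image coordinates.
# State: [x, y, vx, vy]; measurement: [x, y] (e.g. a saliency-map peak).
class KalmanTracker:
    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                   # initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt            # position += velocity * dt
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0           # we observe position only
        self.Q = np.eye(4) * 0.01                   # process noise (assumed)
        self.R = np.eye(2) * 1.0                    # measurement noise (assumed)

    def predict(self):
        """Coarse prediction of the object state for the next frame."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Refine the prediction with a detection (e.g. a local saliency peak)."""
        z = np.asarray(z, float)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

Feeding the filter detections from an object moving at a constant velocity, the state estimate converges to the true position and velocity within a few frames.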

    Vision based strategies for implementing Sense and Avoid capabilities onboard Unmanned Aerial Systems

    Current research activities aim to develop fully autonomous unmanned platforms equipped with Sense and Avoid technologies, in order to gain access to the National Airspace System (NAS) alongside manned aircraft. The TECVOL project is set in this framework, aiming at developing a prototypal autonomous Unmanned Aerial Vehicle that performs Detect Sense and Avoid functionalities by means of an integrated sensor package, composed of a pulsed radar and four electro-optical cameras, two visible and two infrared. The project is carried out by the Italian Aerospace Research Center in collaboration with the Department of Aerospace Engineering of the University of Naples “Federico II”, which has been involved in the development of the Obstacle Detection and IDentification system. This thesis concerns the image processing techniques customized for the Sense and Avoid applications in the TECVOL project, where the electro-optical system plays an auxiliary role to the radar, which is the main sensor. In particular, the panchromatic camera aids object detection in order to increase the accuracy and data-rate performance of the radar system. The thesis therefore describes the steps implemented to evaluate the most suitable panchromatic-camera image processing technique for these applications, the test strategies adopted to study its performance, and the analysis conducted to optimize it in terms of false alarms, missed detections and detection range. Finally, test results are presented, demonstrating that the electro-optical sensor is beneficial to the overall Detect Sense and Avoid system, improving its object detection and tracking performance.
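A minimal sketch of the kind of point-target detection a panchromatic camera can contribute, and of the false-alarm versus missed-detection trade-off mentioned above: flag pixels that deviate strongly from their local neighbourhood. The window size and threshold `k` are hypothetical tuning knobs, not values from the thesis.

```python
import numpy as np

def detect_point_targets(image, window=5, k=4.0):
    """Toy small-target detector: flag pixels whose intensity deviates from
    the mean of their surrounding ring by more than k local standard
    deviations. Raising k reduces false alarms at the cost of missed
    detections; the window and k here are illustrative assumptions."""
    img = np.asarray(image, float)
    pad = window // 2
    h, w = img.shape
    hits = []
    for r in range(pad, h - pad):
        for c in range(pad, w - pad):
            patch = img[r - pad:r + pad + 1, c - pad:c + pad + 1]
            ring = patch.copy()
            ring[pad, pad] = np.nan          # exclude the pixel under test
            mean, std = np.nanmean(ring), np.nanstd(ring)
            if std > 0 and abs(img[r, c] - mean) > k * std:
                hits.append((r, c))
    return hits
```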

    LIDAR obstacle warning and avoidance system for unmanned aerial vehicle sense-and-avoid

    The demand for reliable obstacle warning and avoidance capabilities to ensure safe low-level flight operations has led to the development of various practical systems suitable for fixed- and rotary-wing aircraft. State-of-the-art Light Detection and Ranging (LIDAR) technology, employing eye-safe laser sources, advanced electro-optics and mechanical beam-steering components, delivers the highest angular resolution and accuracy in a wide range of operational conditions. The LIDAR Obstacle Warning and Avoidance System (LOWAS) is thus becoming a mature technology with several potential applications to manned and unmanned aircraft. This paper specifically addresses its employment in Unmanned Aircraft Systems (UAS) Sense-and-Avoid (SAA). Small-to-medium-size Unmanned Aerial Vehicles (UAVs) are particularly targeted, since they frequently operate in proximity to the ground, and the possibility of a collision is further aggravated by the very limited see-and-avoid capabilities of the remote pilot. After a brief description of the system architecture, mathematical models and algorithms for avoidance trajectory generation are provided. Key aspects of the Human Machine Interface and Interaction (HMI2) design for the UAS obstacle avoidance system are also addressed. Additionally, a comprehensive simulation case study of the avoidance trajectory generation algorithms is presented. It is concluded that LOWAS obstacle detection and trajectory optimisation algorithms can ensure safe avoidance of all classes of obstacles (i.e., wire, extended and point objects) in a wide range of weather and geometric conditions, providing a pathway for possible integration of this technology into future UAS SAA architectures.
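A basic building block of obstacle warning and avoidance-trajectory generation is predicting the time and distance of closest approach between the aircraft and a detected obstacle. A minimal constant-velocity sketch, not the LOWAS formulation itself:

```python
import numpy as np

def closest_point_of_approach(p_own, v_own, p_obs, v_obs=np.zeros(3)):
    """Time and distance of closest approach between own aircraft and an
    obstacle, both modelled as constant-velocity points. Static obstacles
    (e.g. wires or terrain points) simply have zero velocity."""
    dp = np.asarray(p_obs, float) - np.asarray(p_own, float)  # relative position
    dv = np.asarray(v_obs, float) - np.asarray(v_own, float)  # relative velocity
    dv2 = dv @ dv
    # Minimise |dp + dv*t|; clamp to t >= 0 (past approaches are irrelevant).
    t_cpa = 0.0 if dv2 == 0 else max(0.0, -(dp @ dv) / dv2)
    d_cpa = float(np.linalg.norm(dp + dv * t_cpa))
    return t_cpa, d_cpa
```

An avoidance system would compare `d_cpa` against the required obstacle clearance and trigger trajectory generation when `t_cpa` falls below a reaction-time threshold.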

    Assessing avionics-based GNSS integrity augmentation performance in UAS mission- and safety-critical tasks

    The integration of Global Navigation Satellite System (GNSS) integrity augmentation functionalities in Unmanned Aerial Systems (UAS) has the potential to provide an integrity-augmented Sense-and-Avoid (SAA) solution suitable for cooperative and non-cooperative scenarios. In this paper, we evaluate the opportunities offered by this integration, proposing a novel approach that maximizes the synergies between Avionics Based Integrity Augmentation (ABIA) and UAS cooperative/non-cooperative SAA architectures. When the specified collision-risk thresholds are exceeded, an avoidance manoeuvre is performed by implementing heading-based differential geometry or pseudospectral optimization to generate a set of optimal trajectory solutions free of mid-air conflicts. The optimal trajectory is selected using a cost function with minimum-time constraints and fuel penalty criteria weighted by separation distance. The optimal avoidance trajectory also considers the constraints imposed by ABIA in terms of UAS platform dynamics and GNSS satellite elevation angles (plus jamming avoidance when applicable), thus preventing degradation or loss of navigation data during the Track, Decision and Avoidance (TDA) process. The performance of this Integrity-Augmented SAA (IAS) architecture was evaluated in simulation case studies involving cooperative and non-cooperative platforms. Simulation results demonstrate that the proposed IAS architecture is capable of performing high-integrity conflict detection and resolution when GNSS is used as the primary source of navigation data.
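The trajectory-selection step described above can be illustrated with a toy cost function combining flight time and a fuel penalty weighted by the achieved separation. The weights, the 500 m separation scale and the candidate format are invented for illustration; they are not the paper's actual coefficients.

```python
def trajectory_cost(time_s, fuel_kg, min_separation_m,
                    w_time=1.0, w_fuel=2.0, safe_sep_m=500.0):
    """Illustrative cost: flight time plus a fuel penalty that grows as the
    trajectory's minimum separation shrinks below a nominal safe distance."""
    sep_weight = max(safe_sep_m / max(min_separation_m, 1.0), 1.0)
    return w_time * time_s + w_fuel * fuel_kg * sep_weight

def select_trajectory(candidates):
    """Pick the candidate trajectory with the lowest cost. Each candidate is
    a dict with keys time_s, fuel_kg and min_separation_m (all assumed)."""
    return min(candidates, key=lambda c: trajectory_cost(
        c["time_s"], c["fuel_kg"], c["min_separation_m"]))
```

With this weighting, a slightly slower trajectory that keeps a comfortable separation can beat a faster one that cuts the separation margin.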

    Avionics-based GNSS integrity augmentation for unmanned aerial systems sense-and-avoid

    This paper investigates the synergies between a GNSS Avionics Based Integrity Augmentation (ABIA) system and a novel Unmanned Aerial System (UAS) Sense-and-Avoid (SAA) architecture for cooperative and non-cooperative scenarios. The integration of ABIA with SAA has the potential to provide an integrity-augmented SAA solution that will allow the safe and unrestricted access of UAS to commercial airspace. The candidate SAA system uses Forward-Looking Sensors (FLS) for the non-cooperative case and Automatic Dependent Surveillance-Broadcast (ADS-B) for the cooperative case. In the non-cooperative scenario, the system employs navigation-based image stabilization with image morphology operations and a multi-branch Viterbi filter for obstacle detection, which allows heading estimation. The system utilizes a Track-to-Track (T3) algorithm for data fusion, combining data from different tracks obtained with FLS and/or ADS-B depending on the scenario. Subsequently, it utilizes an Interacting Multiple Model (IMM) algorithm to estimate the state vector, allowing a prediction of the intruder trajectory over a specified time horizon. In both the cooperative and non-cooperative cases, the risk of collision is evaluated by setting a threshold on the Probability Density Function (PDF) of a Near Mid-Air Collision (NMAC) event over the separation area. If the specified threshold is exceeded, an avoidance manoeuvre is performed based on a heading-based Differential Geometry (DG) algorithm and optimized utilizing a cost function with minimum-time constraints and fuel penalty criteria weighted as a function of separation distance. Additionally, the optimised avoidance trajectory considers the constraints imposed by ABIA in terms of GNSS constellation satellite elevation angles, preventing degradation or loss of navigation data during the whole SAA loop. This integration scheme allows real-time trajectory corrections to re-establish the Required Navigation Performance (RNP) when actual GNSS accuracy degradations and/or data losses take place (e.g., due to aircraft-satellite relative geometry, GNSS receiver tracking, interference, jamming or other external factors). Various simulation case studies were carried out to evaluate the performance of this Integrity-Augmented SAA (IAS) architecture. The selected host platform was the AEROSONDE Unmanned Aerial Vehicle (UAV), and the simulation cases addressed a variety of cooperative and non-cooperative scenarios in a representative cross-section of the AEROSONDE operational flight envelope. The simulation results show that the proposed IAS architecture is an excellent candidate to perform high-integrity Collision Detection and Resolution (CD&R) utilizing GNSS as the primary source of navigation data, providing a solid foundation for future research and development in this domain.
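The collision-risk test described above, thresholding the probability of an NMAC event over the separation area, can be sketched as a Monte Carlo estimate over a Gaussian relative-position prediction. The 150 m NMAC radius, the Gaussian model and the threshold usage are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def nmac_probability(rel_mean, rel_cov, nmac_radius_m=150.0,
                     n_samples=20000, seed=0):
    """Monte Carlo estimate of the probability that the predicted horizontal
    miss distance falls inside the NMAC radius, given a Gaussian estimate of
    the relative position (e.g. from an IMM trajectory prediction)."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(rel_mean, rel_cov, size=n_samples)
    miss = np.linalg.norm(samples, axis=1)       # sampled miss distances
    return float(np.mean(miss < nmac_radius_m))  # fraction inside the radius
```

An avoidance manoeuvre would be triggered when this probability exceeds the specified risk threshold.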

    Multi-sensor data fusion techniques for RPAS detect, track and avoid

    Accurate and robust tracking of objects is of growing interest in the computer vision scientific community. The ability of a multi-sensor system to detect and track objects, and to accurately predict their future trajectory, is critical in the context of mission- and safety-critical applications. Remotely Piloted Aircraft Systems (RPAS) are currently not equipped to routinely access all classes of airspace, since certified Detect-and-Avoid (DAA) systems are yet to be developed. Such capabilities can be achieved by incorporating both cooperative and non-cooperative DAA functions, as well as providing enhanced communications, navigation and surveillance (CNS) services. DAA is highly dependent on the performance of CNS systems for Detection, Tracking and Avoidance (DTA) tasks and maneuvers. In order to perform effective detection of objects, a number of high-performance, reliable and accurate avionics sensors and systems are adopted, including non-cooperative sensors (visual and thermal cameras, laser radar (LIDAR) and acoustic sensors) and cooperative systems (Automatic Dependent Surveillance-Broadcast (ADS-B) and Traffic Collision Avoidance System (TCAS)). In this paper the candidate sensors and system information sources are fully exploited in a Multi-Sensor Data Fusion (MSDF) architecture. An Unscented Kalman Filter (UKF) and a more advanced Particle Filter (PF) are adopted to estimate the state vector of the objects for maneuvering and non-maneuvering DTA tasks. Furthermore, an artificial neural network is conceptualised to exploit statistical learning methods, combining the information obtained from the UKF and PF. After describing the MSDF architecture, the key mathematical models for data fusion are presented. Conceptual studies are carried out on visual and thermal image fusion architectures.
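Of the two estimators named above, the bootstrap Particle Filter is the easier to sketch in a few lines. A minimal 1-D constant-velocity version with a Gaussian measurement likelihood and systematic resampling; the noise levels are illustrative, not tuned values from the paper.

```python
import numpy as np

def particle_filter(measurements, n_particles=2000, dt=1.0,
                    proc_std=0.5, meas_std=1.0, seed=0):
    """Bootstrap particle filter for a 1-D constant-velocity target.
    Each particle is (position, velocity); weights follow a Gaussian
    measurement likelihood; resampling is systematic."""
    rng = np.random.default_rng(seed)
    parts = np.zeros((n_particles, 2))
    parts[:, 0] = rng.normal(measurements[0], meas_std, n_particles)
    parts[:, 1] = rng.normal(0.0, 5.0, n_particles)   # broad velocity prior
    estimates = []
    for z in measurements:
        # Predict: propagate particles through the motion model plus noise.
        parts[:, 0] += parts[:, 1] * dt
        parts += rng.normal(0.0, proc_std, parts.shape)
        # Update: weight particles by the measurement likelihood.
        w = np.exp(-0.5 * ((z - parts[:, 0]) / meas_std) ** 2)
        w /= w.sum()
        estimates.append(float(w @ parts[:, 0]))      # weighted mean position
        # Systematic resampling to avoid weight degeneracy.
        idx = np.searchsorted(
            np.cumsum(w),
            (rng.random() + np.arange(n_particles)) / n_particles)
        parts = parts[np.minimum(idx, n_particles - 1)]
    return estimates
```

Unlike the UKF, the particle set can represent non-Gaussian posteriors, which is why the PF is described as the more advanced option for maneuvering targets.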

    Automated taxiing for unmanned aircraft systems

    Over the last few years, the concept of civil Unmanned Aircraft Systems (UAS) has been realised, with small UAS commonly used in industries such as law enforcement, agriculture and mapping. With increased development in other areas, such as logistics and advertisement, the size and range of civil UAS are likely to grow. Taken to the logical conclusion, it is likely that large-scale UAS will be operating in civil airspace within the next decade. Although the airborne operations of civil UAS have already gathered much research attention, work is also required to determine how UAS will function when on the ground. Motivated by the assumption that large UAS will share ground facilities with manned aircraft, this thesis describes the preliminary development of an Automated Taxiing System (ATS) for UAS operating at civil aerodromes. To allow the ATS to function on the majority of UAS without the need for additional hardware, a visual sensing approach has been chosen, with the majority of the work focusing on monocular image processing techniques. The purpose of the computer vision system is to provide direct sensor data which can be used to validate the vehicle's position, in addition to detecting potential collision risks. As aerospace regulations require the most robust and reliable algorithms for control, any methods which are not fully definable or explainable are not suitable for real-world use. Therefore, non-deterministic methods and algorithms with hidden components (such as Artificial Neural Networks (ANNs)) have not been used. Instead, visual sensing is achieved through semantic segmentation, with separate segmentation and classification stages. Segmentation is performed using superpixels and reachability clustering to divide the image into single-content clusters. Each cluster is then classified using multiple types of image data, probabilistically fused within a Bayesian network. The dataset for testing has been provided by BAE Systems, allowing the system to be trained and tested on real-world aerodrome data. The system has demonstrated good performance on this limited dataset, accurately detecting both collision risks and terrain features for use in navigation.
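The probabilistic fusion step, combining per-cluster evidence from several image-data channels into one class posterior, can be sketched as a naive-Bayes combination under an assumed conditional-independence model. The class names, channels and likelihood values below are invented for illustration.

```python
def fuse_cluster_evidence(priors, likelihoods):
    """Naive-Bayes fusion for one image cluster: multiply the class prior by
    each feature channel's likelihood P(feature | class), then normalize.
    `likelihoods` is a list of dicts (class -> likelihood), one per channel
    (e.g. colour, texture, edge density); all values here are illustrative."""
    posterior = dict(priors)
    for channel in likelihoods:
        for cls in posterior:
            posterior[cls] *= channel[cls]
    total = sum(posterior.values())
    return {cls: p / total for cls, p in posterior.items()}
```

Because every factor is an explicit probability table, the classification remains fully inspectable, in line with the deterministic design constraint stated above.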

    Radar/electro-optical data fusion for non-cooperative UAS sense and avoid

    This paper focuses on the hardware/software implementation and flight results of a multi-sensor obstacle detection and tracking system based on radar/electro-optical (EO) data fusion. The sensing system was installed onboard an optionally piloted very light aircraft (VLA). Test flights with a single intruder plane of the same class were carried out to evaluate the level of achievable situational awareness and the capability to support autonomous collision avoidance. The system architecture is presented, and special emphasis is given to the solutions adopted for real-time integration of sensor and navigation measurements and for high-accuracy estimation of sensor alignment. On the basis of Global Positioning System (GPS) navigation data gathered simultaneously with the multi-sensor tracking flight experiments, the potential of radar/EO fusion is compared with standalone radar tracking. Flight results demonstrate a significant improvement in collision detection performance, mostly due to the change in angular-rate estimation accuracy, and confirm the effectiveness of data fusion for addressing EO detection issues. Relative sensor alignment, performance of the navigation unit, and cross-sensor cueing are found to be key factors in fully exploiting the potential of multi-sensor architectures.
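The reported improvement in angular-rate accuracy follows from the camera's much finer angular resolution compared with radar. A minimal inverse-variance fusion of two independent bearing measurements illustrates the idea; the noise figures used below are assumptions, not the flight-test values.

```python
def fuse_bearings(radar_bearing_deg, radar_std_deg,
                  eo_bearing_deg, eo_std_deg):
    """Inverse-variance (maximum-likelihood) fusion of two independent
    bearing measurements of the same intruder. With a radar sigma of ~1 deg
    and an EO sigma of ~0.05 deg (illustrative figures), the fused estimate
    is dominated by the camera, and its uncertainty shrinks accordingly."""
    w_r = 1.0 / radar_std_deg ** 2
    w_e = 1.0 / eo_std_deg ** 2
    fused = (w_r * radar_bearing_deg + w_e * eo_bearing_deg) / (w_r + w_e)
    fused_std = (w_r + w_e) ** -0.5
    return fused, fused_std
```

The radar still contributes what the camera cannot: direct range and range-rate, which is why the fused track outperforms either sensor alone.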