10 research outputs found

    An Evaluation of COTS-Based Radar for Very Small Drone Sense and Avoid Application

    The use of very small unmanned aerial vehicles (UAVs) is increasingly common, but their applications are limited to the pilot's line of sight. To extend their use beyond the pilot's view, UAVs need to be equipped with a sense and avoid (SAA) system to avoid potential collisions. However, the development of SAA for very small drones is still in its infancy, mainly due to the high cost of designing and developing reliable range sensors. Recent developments in very small, lightweight commercial off-the-shelf (COTS) radar systems may become a crucial element in very small drone applications. These radars are primarily developed for industrial sensing but can be adapted for applications such as SAA. This paper therefore contributes a survey of a miniature, lightweight radar sensor to assist SAA development. The focus of this paper is to analyse the eligibility of a COTS-based radar for detecting very small drones. For this purpose, we used a frequency-modulated continuous-wave (FMCW) radar developed by Infineon Technologies. Field test results show the real-time capability of the radar sensor to detect very small drones to within ±0.5 meters in static and dynamic conditions.
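
    In an FMCW radar, range comes from the beat frequency between the transmitted chirp and its echo. As a rough illustration of the processing behind a figure like the reported ±0.5 m, here is a minimal sketch of single-chirp range estimation; the chirp bandwidth, duration, and sampling rate are assumed illustrative values, since the Infineon radar's actual configuration is not given in the abstract.

```python
import numpy as np

# Assumed chirp parameters, for illustration only; the paper's radar
# configuration is not specified in the abstract.
C = 3e8            # speed of light (m/s)
BANDWIDTH = 200e6  # chirp sweep bandwidth B (Hz)
CHIRP_TIME = 1e-3  # chirp duration T (s)
FS = 1e6           # IF (beat) signal sampling rate (Hz)

def range_from_beat(if_signal):
    """Estimate target range from one chirp's IF (beat) signal.

    For a linear chirp, a target at range R produces a beat frequency
    f_b = 2 * R * B / (c * T), hence R = c * f_b * T / (2 * B). We take
    the strongest non-DC FFT bin as the beat frequency estimate.
    """
    spectrum = np.abs(np.fft.rfft(if_signal))
    freqs = np.fft.rfftfreq(len(if_signal), d=1.0 / FS)
    f_beat = freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin
    return C * f_beat * CHIRP_TIME / (2 * BANDWIDTH)

# The range resolution c / (2 * B) for these assumed values is 0.75 m,
# the same order as the ±0.5 m accuracy reported above.
print(f"range resolution: {C / (2 * BANDWIDTH):.2f} m")
```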

    Error analysis of algorithms for camera rotation calculation in GPS/IMU/camera fusion for UAV sense and avoid systems

    In this paper, four camera pose estimation algorithms are investigated in simulation. The aim of the investigation is to show the strengths and weaknesses of these algorithms in the aircraft attitude estimation task. The work is part of a research project in which a low-cost UAV is developed that can be integrated into the national airspace. Two main issues are addressed with these measurements: one is the sense-and-avoid capability of the aircraft, and the other is sensor redundancy. Both can benefit from a good attitude estimate, so it is important to use the appropriate algorithm for the camera rotation estimation. Results show that in many cases even the simplest algorithm can perform with acceptable precision for the sensor fusion. © 2014 IEEE
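
    The abstract does not name the four algorithms compared. As a point of reference, a common way to recover relative camera rotation from matched image points is essential-matrix decomposition; the sketch below uses OpenCV's five-point RANSAC solver and is only one plausible candidate for such a comparison, not necessarily one of the paper's four.

```python
import cv2

def relative_rotation(pts1, pts2, K):
    """Estimate the camera rotation between two views from matched points.

    pts1 and pts2 are Nx2 float arrays of corresponding pixel
    coordinates, K the 3x3 intrinsic matrix. Essential-matrix
    decomposition is one standard option for this task; the paper's
    actual algorithms may differ.
    """
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R  # 3x3 rotation from the first camera frame to the second
```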

    Flying Objects Detection from a Single Moving Camera

    We propose an approach to detect flying objects such as UAVs and aircraft when they occupy a small portion of the field of view, possibly move against complex backgrounds, and are filmed by a camera that itself moves. Solving such a difficult problem requires combining both appearance and motion cues. To this end we propose a regression-based approach to motion stabilization of local image patches that allows us to achieve effective classification on spatio-temporal image cubes and outperform state-of-the-art techniques. As the problem is relatively new, we collected two challenging datasets, one for UAVs and one for aircraft, which can be used as benchmarks for flying object detection and vision-guided collision avoidance.
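
    To make the spatio-temporal cube idea concrete, the sketch below crops a fixed window across consecutive frames to form a classifier input. It deliberately omits the paper's learned motion stabilization: naive fixed-window cropping is the baseline that the regression-based alignment is designed to improve on. The patch size and cube length are assumed values.

```python
import numpy as np

def extract_st_cube(frames, center, patch=40, length=4):
    """Cut a spatio-temporal cube of patches around a candidate object.

    `frames` is a list of consecutive grayscale frames (2D arrays) and
    `center` the (row, col) of the candidate, assumed at least patch/2
    pixels from every border. In the paper, per-frame patches are first
    aligned by a learned regressor compensating object motion; here we
    simply crop the same window in every frame.
    """
    r, c = center
    h = patch // 2
    cube = np.stack([f[r - h:r + h, c - h:c + h]
                     for f in frames[-length:]], axis=0)
    return cube  # shape (length, patch, patch): the classifier input
```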

    Vision-Based Unmanned Aerial Vehicle Detection and Tracking for Sense and Avoid Systems

    We propose an approach for online detection of small Unmanned Aerial Vehicles (UAVs) and estimation of their relative positions and velocities in the 3D environment from a single moving camera, in the context of sense and avoid systems. This problem is challenging both from a detection point of view, as there are no markers on the targets, and from a tracking perspective, due to misdetections and false positives. Furthermore, the methods need to be computationally light, despite the complexity of computer vision algorithms, to be usable on UAVs with limited payload. To address these issues we propose a multi-stage framework that combines fast object detection using an AdaBoost-based approach with an online visual tracking algorithm and a recent sensor fusion and state estimation method. Our framework achieves real-time performance with accurate object detection and tracking, without any need for markers or customized, high-performance hardware.
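
    The abstract does not specify which state estimation method the fusion stage uses. As a minimal illustration of that role in the pipeline, the sketch below runs a constant-velocity Kalman filter over noisy 3D position detections to produce relative position and velocity estimates; the frame rate and noise covariances are assumed tuning values.

```python
import numpy as np

# Minimal constant-velocity Kalman filter fusing noisy 3D position
# detections into relative position/velocity estimates. This only
# illustrates the state-estimation step; the paper's actual method
# is not named in the abstract.
DT = 1.0 / 30.0                               # assumed frame period (s)
F = np.eye(6)
F[:3, 3:] = DT * np.eye(3)                    # position += velocity * DT
H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only
Q = 1e-2 * np.eye(6)                          # process noise (a guess)
R = 1e-1 * np.eye(3)                          # measurement noise (a guess)

def kf_step(x, P, z):
    """One predict/update cycle; x = [px,py,pz,vx,vy,vz], z = detection."""
    x, P = F @ x, F @ P @ F.T + Q             # predict
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)                   # correct with the detection
    P = (np.eye(6) - K @ H) @ P
    return x, P
```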

    Deep Learning-Based Detection of Pipes in Industrial Environments

    Robust perception is generally produced through complex multimodal perception pipelines, but such methods are unsuitable for autonomous UAV deployment given the restrictions of those platforms. This chapter describes developments and experimental results aimed at new deep learning (DL) solutions for industrial perception problems. An earlier solution combining camera, LiDAR, GPS, and IMU sensors to produce high-rate, accurate, and robust detection and positioning of pipes in industrial environments is to be replaced by a single-camera, computationally lightweight convolutional neural network (CNN) perception technique. Developing DL solutions requires large image datasets with ground-truth labels, so the previous multimodal technique is adapted to capture and label datasets. The labeling method developed automatically computes the labels, when possible, for the images captured with the UAV platform. To validate the automated dataset generator, a dataset is produced and used to train a lightweight AlexNet-based fully convolutional network (FCN). To produce a comparison point, a weakened version of the multimodal approach, without using prior data, is evaluated with the same DL-based metrics.
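
    The chapter's exact AlexNet-based FCN is not reproduced here. The sketch below shows the general pattern such a lightweight segmentation network follows: a small convolutional encoder, a 1x1 classification head, and upsampling back to input resolution. All layer sizes are chosen for illustration, not taken from the chapter.

```python
import torch.nn as nn

class TinyFCN(nn.Module):
    """Illustrative lightweight fully convolutional network for binary
    pipe segmentation. Treat this as a sketch of the FCN pattern, not
    the chapter's actual AlexNet-based architecture."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )  # overall downsampling factor: 8
        # 1x1 conv gives per-location class scores (pipe vs background);
        # bilinear upsampling restores the input resolution.
        self.head = nn.Conv2d(128, 2, 1)
        self.up = nn.Upsample(scale_factor=8, mode="bilinear",
                              align_corners=False)

    def forward(self, x):
        return self.up(self.head(self.encoder(x)))  # (N, 2, H, W) logits
```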

    Detecting Flying Objects using a Single Moving Camera

    We propose an approach for detecting flying objects such as Unmanned Aerial Vehicles (UAVs) and aircraft when they occupy a small portion of the field of view, possibly move against complex backgrounds, and are filmed by a camera that itself moves. We argue that solving such a difficult problem requires combining both appearance and motion cues. To this end we propose a regression-based approach for object-centric motion stabilization of image patches that allows us to achieve effective classification on spatio-temporal image cubes and outperform state-of-the-art techniques. As this problem has not yet been extensively studied, no test datasets are publicly available. We therefore built our own, both for UAVs and aircraft, and will make them publicly available so they can be used to benchmark future flying object detection and collision avoidance algorithms.

    Vision-based detection of aircrafts and UAVs

    Unmanned Aerial Vehicles are becoming increasingly popular for a broad variety of tasks ranging from aerial imagery to object delivery. As the areas where drones can be used efficiently expand, the risk of collision with other flying objects increases. Avoiding such collisions would be relatively easy if all the aircraft in the neighboring airspace could communicate with each other and share their location information. However, it is often the case that either location information is unavailable (e.g., flying in GPS-denied environments) or communication is not possible (e.g., different communication channels or a non-cooperative flight scenario). To ensure flight safety in such situations, drones need a way to autonomously detect other objects intruding into the neighboring airspace. Vision-based collision avoidance is of particular interest because cameras generally consume less power and are more lightweight than active sensor alternatives such as radars and lasers. We have therefore developed a set of increasingly sophisticated algorithms to provide drones with a visual collision avoidance capability.

    First, we present a novel method for detecting flying objects such as drones and planes that occupy a small part of the camera field of view, possibly move in front of complex backgrounds, and are filmed by a moving camera. Solving this problem requires combining motion and appearance information, as neither of the two alone is capable of providing reliable enough detections. We therefore propose a machine learning technique that operates on spatio-temporal cubes of image intensities in which individual patches are aligned using an object-centric regression-based motion stabilization algorithm.

    Second, to reduce the need to collect a large training dataset and to annotate it manually, we introduce a way to generate realistic synthetic images. Given only a small set of real examples and a coarse 3D model of the object, synthetic data can be generated in arbitrary quantities and used to supplement real examples for training a detector. The key ingredient of our method is that the synthetically generated images need to be as close as possible to the real ones, not in terms of image quality, but in terms of the features used by the machine learning algorithm.

    Third, although the aforementioned approach yields a substantial increase in performance when using AdaBoost and DPM detectors, it does not generalize well to Convolutional Neural Networks (CNNs), which have become the state of the art. This happens because, as we add more and more synthetic data, the CNNs begin to overfit to the synthetic images at the expense of the real ones. We therefore propose a novel deep domain adaptation technique that allows real and synthetic images to be combined efficiently without overfitting to either of the two. Whereas most adaptation techniques aim at learning features that are invariant to the differences between images coming from different sources (real and synthetic), we instead suggest modeling this difference with a special two-stream architecture. We evaluate our approach on three different datasets and show its effectiveness for various classification and regression tasks.
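
    As a rough illustration of the two-stream idea in the third contribution, the sketch below instantiates two identically shaped networks, one per domain, and couples their weights with a soft L2 penalty rather than forcing them to be equal. The stand-in backbone, the exact form of the regularizer, and the hyperparameter lam are all assumptions; the thesis's actual architecture and loss may differ.

```python
import copy
import torch.nn as nn

def make_stream():
    """Small stand-in backbone; the thesis's detector network differs."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))

# One stream per domain, starting from the same initialization but
# free to drift apart during training.
real_stream = make_stream()
synth_stream = copy.deepcopy(real_stream)

def weight_tie_loss(a, b, lam=1e-3):
    """Soft L2 coupling between corresponding parameters of the two
    streams. lam is an assumed hyperparameter; the thesis's actual
    regularizer may take a different form."""
    return lam * sum(((pa - pb) ** 2).sum()
                     for pa, pb in zip(a.parameters(), b.parameters()))

# A training step would then combine, per batch, something like:
#   task_loss(real_stream(x_real), y_real)
#   + task_loss(synth_stream(x_synth), y_synth)
#   + weight_tie_loss(real_stream, synth_stream)
```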

    Visual detection and implementation aspects of a UAV see and avoid system

    No full text available. 2011, Linköping, Sweden.