5 research outputs found

    Vision-Based Unmanned Aerial Vehicle Detection and Tracking for Sense and Avoid Systems

    We propose an approach for online detection of small Unmanned Aerial Vehicles (UAVs) and estimation of their relative positions and velocities in the 3D environment from a single moving camera, in the context of sense and avoid systems. This problem is challenging both from a detection point of view, as no markers are available on the targets, and from a tracking perspective, due to misdetections and false positives. Furthermore, despite the complexity of computer vision algorithms, the methods need to be computationally light to run on UAVs with limited payload. To address these issues we propose a multi-stage framework that combines fast object detection using an AdaBoost-based approach with an online visual tracking algorithm and a recent sensor fusion and state estimation method. Our framework achieves real-time performance with accurate object detection and tracking, without the need for markers or customized, high-performance hardware.
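
    As a rough illustration of such a detect-then-track pipeline, the sketch below couples an AdaBoost-trained cascade detector with a constant-velocity Kalman filter using OpenCV. The cascade file name is hypothetical, and the paper's actual tracker and sensor fusion stages are more sophisticated than this minimal stand-in.

        import cv2
        import numpy as np

        # Hypothetical AdaBoost-trained cascade model for UAV silhouettes.
        detector = cv2.CascadeClassifier("uav_cascade.xml")

        # Constant-velocity Kalman filter: state = [x, y, vx, vy], measurement = [x, y].
        kf = cv2.KalmanFilter(4, 2)
        kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                        [0, 1, 0, 1],
                                        [0, 0, 1, 0],
                                        [0, 0, 0, 1]], np.float32)
        kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                         [0, 1, 0, 0]], np.float32)
        kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

        cap = cv2.VideoCapture(0)  # onboard camera stream
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            kf.predict()  # propagate the target state between detections
            boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
            if len(boxes) > 0:
                x, y, w, h = boxes[0]  # fuse the first detection into the filter
                kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
        cap.release()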

    A review of UAV Visual Detection and Tracking Methods

    This paper presents a review of techniques used for the detection and tracking of UAVs or drones. The techniques differ in the measurements they rely on, such as the position, velocity, and image of the UAV, which are then used for detection and tracking. Hybrid detection techniques are also presented. The paper serves as a quick reference for the wide spectrum of methods used in the drone detection process.

    Localization of UAVs from Camera Image

    The goal of this work is to test the possibility of using neural networks to localize UAVs. The solution lies in a thorough study of the subject and in the design of an algorithm able to detect and localize flying quadrocopters in a video stream using neural networks. The work analyzes neural networks, selects a network suited to the problem, designs it, and develops a functional algorithm capable of detecting and marking objects in real time. Besides preparing the algorithm for a live feed, its functionality is evaluated on a test set of images to obtain a complete picture of its accuracy. The test results are then discussed and possible suggestions for improvement are drawn from them.
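
    The thesis does not name its exact network in the abstract, but a live-feed detection loop of the kind it describes could look like the following sketch, assuming an off-the-shelf YOLO-style detector from the ultralytics package; "uav_weights.pt" is a hypothetical model file fine-tuned on UAV images.

        import cv2
        from ultralytics import YOLO

        model = YOLO("uav_weights.pt")  # hypothetical weights fine-tuned on UAVs

        cap = cv2.VideoCapture(0)  # live camera feed
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            results = model(frame, verbose=False)[0]
            for box in results.boxes:  # mark every detected UAV in the frame
                x1, y1, x2, y2 = map(int, box.xyxy[0])
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.imshow("UAV detection", frame)
            if cv2.waitKey(1) == 27:  # Esc quits
                break
        cap.release()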

    Vision-based detection of aircrafts and UAVs

    Unmanned Aerial Vehicles are becoming increasingly popular for a broad variety of tasks ranging from aerial imagery to object delivery. As the areas where drones can be used efficiently expand, so does the risk of collision with other flying objects. Avoiding such collisions would be relatively easy if all aircraft in the neighboring airspace could communicate with each other and share their location information. However, location information is often unavailable (e.g. when flying in GPS-denied environments) or communication is not possible (e.g. different communication channels or a non-cooperative flight scenario). To ensure flight safety in such situations, drones need a way to autonomously detect other objects intruding into the neighboring airspace. Vision-based collision avoidance is of particular interest because cameras generally consume less power and are more lightweight than active-sensor alternatives such as radars and lasers. We have therefore developed a set of increasingly sophisticated algorithms to provide drones with a visual collision avoidance capability.

    First, we present a novel method for detecting flying objects such as drones and planes that occupy a small part of the camera field of view, possibly move in front of complex backgrounds, and are filmed by a moving camera. Solving this problem requires combining motion and appearance information, as neither alone provides reliable enough detections. We therefore propose a machine learning technique that operates on spatio-temporal cubes of image intensities in which individual patches are aligned using an object-centric, regression-based motion stabilization algorithm.

    Second, to reduce the need to collect a large training dataset and annotate it manually, we introduce a way to generate realistic synthetic images. Given only a small set of real examples and a coarse 3D model of the object, synthetic data can be generated in arbitrary quantities and used to supplement the real examples when training a detector. The key ingredient of our method is that the synthetically generated images must be as close as possible to the real ones not in terms of image quality, but in terms of the features used by the machine learning algorithm.

    Third, although this approach yields a substantial increase in performance for AdaBoost and DPM detectors, it does not generalize well to Convolutional Neural Networks, which have become the state of the art: as more and more synthetic data is added, the CNNs begin to overfit to the synthetic images at the expense of the real ones. We therefore propose a novel deep domain adaptation technique that efficiently combines real and synthetic images without overfitting to either. Whereas most adaptation techniques aim at learning features that are invariant to the differences between images coming from different sources (real and synthetic), we instead model this difference with a special two-stream architecture. We evaluate our approach on three different datasets and show its effectiveness for various classification and regression tasks.
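
    To make the two-stream idea concrete, the following PyTorch sketch passes real and synthetic images through separate streams whose weights are penalized for drifting apart rather than being tied together; the layer sizes and regularization weight are illustrative assumptions, not the thesis architecture.

        import torch
        import torch.nn as nn

        def make_stream():
            # Small binary classifier: flying object vs. background.
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, 2),
            )

        real_stream, synth_stream = make_stream(), make_stream()
        criterion = nn.CrossEntropyLoss()

        def two_stream_loss(real_x, real_y, synth_x, synth_y, lam=0.1):
            # Each domain is classified by its own stream...
            loss = criterion(real_stream(real_x), real_y)
            loss = loss + criterion(synth_stream(synth_x), synth_y)
            # ...while corresponding weights are kept close, not identical,
            # so neither domain's images dominate the learned features.
            for p_r, p_s in zip(real_stream.parameters(), synth_stream.parameters()):
                loss = loss + lam * (p_r - p_s).pow(2).sum()
            return loss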