
    Evidential Detection and Tracking Collaboration: New Problem, Benchmark and Algorithm for Robust Anti-UAV System

    Unmanned Aerial Vehicles (UAVs) have been widely used in many areas, including transportation, surveillance, and the military. However, their potential for safety and privacy violations is a growing concern that severely limits their broader application, underscoring the critical importance of UAV perception and defense (anti-UAV). Previous works have simplified the anti-UAV task to a tracking problem in which prior information about the UAV is always provided; such a scheme fails in real-world anti-UAV settings (i.e., complex scenes, UAVs that appear and reappear unpredictably, and real-time UAV surveillance). In this paper, we first formulate a new and practical anti-UAV problem: UAV perception in complex scenes without prior UAV information. To benchmark this challenging task, we propose AntiUAV600, the largest UAV dataset to date, together with a new evaluation metric. AntiUAV600 comprises 600 video sequences of challenging scenes with random, fast, and small-scale UAVs, totalling over 723K thermal infrared frames densely annotated with bounding boxes. Finally, we develop a novel anti-UAV approach via an evidential collaboration of global UAV detection and local UAV tracking, which effectively tackles the proposed problem and can serve as a strong baseline for future research. Extensive experiments show that our method outperforms state-of-the-art approaches and validate that AntiUAV600, owing to its large scale and complexity, enhances UAV perception performance. Our dataset, pretrained models, and source code will be released publicly.
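
    The abstract gives no implementation details, but the detection-tracking collaboration can be pictured as a control loop: a local tracker follows the UAV while its evidence is strong, and a global detector re-acquires the target when confidence collapses (e.g., after the UAV disappears and reappears). The Python sketch below is a loose illustration under assumed Detector/Tracker interfaces and a hypothetical confidence threshold tau; it is not the authors' released algorithm.

```python
# Hypothetical sketch of a detector-tracker collaboration loop; the
# GlobalDetector / LocalTracker interfaces and the threshold rule are
# illustrative assumptions, not the paper's released code.
from dataclasses import dataclass
from typing import Optional, Tuple

Box = Tuple[float, float, float, float]  # (x, y, w, h)

@dataclass
class Hypothesis:
    box: Optional[Box]   # None when no UAV is believed present
    confidence: float    # in [0, 1]; low values signal high uncertainty

class GlobalDetector:
    """Stands in for a full-frame UAV detector (assumed interface)."""
    def detect(self, frame) -> Hypothesis:
        ...  # run detection over the whole frame

class LocalTracker:
    """Stands in for a template tracker searching near the last box."""
    def init(self, frame, box: Box) -> None:
        ...  # build the appearance template
    def track(self, frame) -> Hypothesis:
        ...  # search a local window around the previous box

def anti_uav_loop(frames, detector, tracker, tau=0.5):
    """Fall back to global detection whenever the tracker's evidence is
    weak, so UAVs that vanish and reappear can be recovered."""
    state: Optional[Box] = None
    for frame in frames:
        if state is not None:
            hyp = tracker.track(frame)
            if hyp.confidence >= tau:          # tracker evidence is strong
                state = hyp.box
                yield state
                continue
        hyp = detector.detect(frame)           # global re-detection
        state = hyp.box if hyp.confidence >= tau else None
        if state is not None:
            tracker.init(frame, state)         # (re)start local tracking
        yield state
```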

    Drone model classification using convolutional neural network trained on synthetic data

    We present a convolutional neural network (CNN) that identifies drone models in real-life videos. The neural network is trained on synthetic images and tested on a real-life dataset of drone videos. To create the training and validation datasets, we show a method of generating synthetic drone images. Domain randomization is used to vary simulation parameters such as model textures, background images, and orientation. Three common drone models are classified: DJI Phantom, DJI Mavic, and DJI Inspire. To test the performance of the neural network model, Anti-UAV, a real-life dataset of flying drones, is used. The proposed method reduces the time cost associated with manually labelling drones, and we show that it transfers to real-life videos. The CNN achieves an overall accuracy of 92.4%, a precision of 88.8%, a recall of 88.6%, and an F1 score of 88.7% when tested on the real-life dataset.
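
    A minimal sketch of the domain-randomization idea follows, assuming a transparent drone render and a folder of background photos; the file names, output size, and parameter ranges are illustrative stand-ins, not the paper's settings.

```python
# Composite a drone cut-out onto a random background with random
# orientation, scale, and position (domain randomization); inputs are
# assumed placeholders, not the paper's assets.
import random
from pathlib import Path
from PIL import Image

def synthesize(drone_png: str, background_dir: str, out_size=(416, 416)) -> Image.Image:
    bg = Image.open(random.choice(list(Path(background_dir).glob("*.jpg"))))
    bg = bg.convert("RGB").resize(out_size)

    drone = Image.open(drone_png).convert("RGBA")
    drone = drone.rotate(random.uniform(0, 360), expand=True)  # random orientation
    scale = random.uniform(0.1, 0.4)                           # random apparent size
    w = max(1, int(out_size[0] * scale))
    h = max(1, int(drone.height * w / drone.width))
    drone = drone.resize((w, h))

    # random placement; the alpha channel acts as the paste mask
    x = random.randint(0, out_size[0] - w)
    y = random.randint(0, out_size[1] - h)
    bg.paste(drone, (x, y), drone)
    return bg  # the (x, y, w, h) box doubles as a free bounding-box label

if __name__ == "__main__":
    synthesize("dji_phantom.png", "backgrounds/").save("synthetic_sample.jpg")
```

    Because the paste position and size are known at composition time, each synthetic image comes with a bounding-box label for free, which is what removes the manual-labelling cost mentioned above.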

    Quantifying the Simulation-Reality Gap for Deep Learning-Based Drone Detection

    The detection of drones or unmanned aerial vehicles is a crucial component in protecting safety-critical infrastructures and maintaining privacy for individuals and organizations. Because optical sensors are widely used for perimeter surveillance, they are a popular choice for data collection in the context of drone detection. However, efficiently processing the obtained sensor data poses a significant challenge. Even though deep learning-based object detection models have shown promising results, their effectiveness depends on large amounts of annotated training data, which are time-consuming and resource-intensive to acquire. Therefore, this work investigates the applicability of synthetically generated data obtained through physically realistic simulations based on three-dimensional environments for deep learning-based drone detection. Specifically, we introduce a novel three-dimensional simulation approach built on Unreal Engine and Microsoft AirSim for generating synthetic drone data. Furthermore, we quantify the respective simulation-reality gap and evaluate established techniques for mitigating it by systematically exploring different compositions of real and synthetic data. Additionally, we analyze the adaptation of the simulation setup as part of a feedback loop-based training strategy and highlight the benefits of a simulation-based training setup for image-based drone detection compared to a training strategy relying exclusively on real-world data.
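
    The composition sweep described above can be expressed as a small experiment harness. The sketch below is an assumption about how such a sweep might be organized (the file lists, set size, and fractions are placeholders, not the paper's setup): train at several synthetic shares while holding the training-set size fixed, so score differences reflect composition rather than quantity.

```python
# Illustrative real/synthetic composition sweep; datasets and ratios are
# assumed placeholders standing in for the paper's actual experiments.
import random

real_images = [f"real_{i}.jpg" for i in range(10_000)]        # placeholder paths
synthetic_images = [f"synth_{i}.jpg" for i in range(10_000)]  # placeholder paths

def make_mix(real, synthetic, synth_fraction, size):
    """Sample a fixed-size training set with the requested synthetic share."""
    n_synth = int(size * synth_fraction)
    return random.sample(synthetic, n_synth) + random.sample(real, size - n_synth)

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    train = make_mix(real_images, synthetic_images, frac, size=5_000)
    print(frac, len(train))
    # train a detector on `train`, evaluate on a held-out real test set;
    # the score drop from frac=0.0 to frac=1.0 estimates the simulation-reality gap
```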

    A Novel Approach to Detect Drones Using Deep Convolutional Neural Network Architecture

    Over the past decades, drones have become more attainable to the public due to their widespread availability at affordable prices. Nevertheless, this situation sparks serious concerns in both the cyber and physical security domains, as drones can be employed for malicious activities that threaten public safety. However, detecting drones instantly and efficiently is a very difficult task due to their tiny size and swift flight. This paper presents a novel drone detection method using deep convolutional learning and deep transfer learning. The proposed algorithm employs a new feature extraction network, which is added to a modified You Only Look Once version 2 (YOLOv2) network. The feature extraction model uses bypass connections to learn features from the training sets and solves the “vanishing gradient” problem caused by the increasing depth of the network. The structure of YOLOv2 is modified by replacing the rectified linear unit (ReLU) with a Leaky ReLU activation function and adding an extra convolutional layer with a stride of 2 to improve small-object detection accuracy. Using Leaky ReLU solves the “dying ReLU” problem. The additional convolutional layer with a stride of 2 reduces the spatial dimensions of the feature maps and helps the network focus on larger contextual information while preserving its ability to detect small objects. The model is trained on a custom dataset that contains various types of drones, airplanes, birds, and helicopters under various weather conditions. The proposed model demonstrates notable performance, achieving an accuracy of 77% on the test images with only 5 million learnable parameters, in contrast to the Darknet53 + YOLOv3 model, which achieves 54% accuracy on the same test set despite employing 62 million learnable parameters.
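
    The two architectural tweaks named in the abstract (bypass connections with Leaky ReLU, plus an extra stride-2 convolution) can be sketched in PyTorch as below; the channel widths and the 0.1 negative slope are assumptions for illustration, not the paper's exact values.

```python
# Sketch of a bypass (residual) block with Leaky ReLU and an extra
# stride-2 convolution; dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class BypassBlock(nn.Module):
    """3x3 conv pair with an identity shortcut to ease gradient flow."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.1),                      # avoids the "dying ReLU" issue
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(x + self.body(x))           # bypass connection

extra = nn.Sequential(
    BypassBlock(64),
    nn.Conv2d(64, 128, 3, stride=2, padding=1),     # extra stride-2 conv:
    nn.LeakyReLU(0.1),                              # halves spatial resolution
)

x = torch.randn(1, 64, 52, 52)
print(extra(x).shape)  # torch.Size([1, 128, 26, 26])
```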

    A New Approach to Classify Drones Using a Deep Convolutional Neural Network

    In recent years, the widespread adoption of Unmanned Aerial Vehicles (UAVs), commonly known as drones, among the public has led to significant security concerns, prompting intense research into drone classification methodologies. The swift and accurate classification of drones poses a considerable challenge due to their diminutive size and rapid movements. To address this challenge, this paper introduces (i) a novel drone classification approach utilizing deep convolutional and deep transfer learning techniques. The model incorporates bypass connections and Leaky ReLU activation functions to mitigate the ‘vanishing gradient problem’ and the ‘dying ReLU problem’, respectively, associated with deep networks, and is trained on a diverse dataset. This study employs (ii) a custom dataset comprising both audio and visual data of drones as well as analogous objects such as airplanes, birds, and helicopters, to enhance classification accuracy. The integration of audio–visual information facilitates more precise drone classification. Furthermore, (iii) a new Finite Impulse Response (FIR) low-pass filter is proposed to convert audio signals into spectrogram images, reducing susceptibility to noise and interference. The proposed model represents a transformative advance in convolutional neural network design, illustrating that efficacy and efficiency can be achieved with low complexity and few learnable parameters. The proposed model demonstrates notable performance, achieving 100% accuracy on the test images with only four million learnable parameters. In contrast, the ResNet50 and Inception-V3 models each exhibit 90% accuracy on the same test set, despite employing 23.50 million and 21.80 million learnable parameters, respectively.
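
    A minimal sketch of the audio pre-processing idea follows: apply an FIR low-pass filter before computing a spectrogram image. The cutoff frequency, tap count, and sample rate here are illustrative assumptions, not the paper's filter design, and the input clip is a synthetic stand-in.

```python
# FIR low-pass filtering followed by spectrogram conversion; all filter
# parameters are assumed values for illustration.
import numpy as np
from scipy.signal import firwin, lfilter, spectrogram

fs = 16_000                                   # assumed sample rate (Hz)
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 900 * t) + 0.5 * np.random.randn(fs)  # stand-in clip

taps = firwin(numtaps=101, cutoff=2_000, fs=fs)   # linear-phase FIR low-pass
filtered = lfilter(taps, 1.0, audio)              # suppress high-frequency noise

f, times, Sxx = spectrogram(filtered, fs=fs, nperseg=256)
image = 10 * np.log10(Sxx + 1e-12)                # dB-scaled spectrogram "image"
print(image.shape)                                # (freq bins, time frames)
```

    The dB-scaled spectrogram can then be fed to the CNN like any other single-channel image, which is what lets the audio branch share the convolutional design described above.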