565 research outputs found

    Countering a drone in a 3D space: Analyzing deep reinforcement learning methods

    Unmanned aerial vehicles (UAVs), also known as drones, have been used for a wide variety of purposes, and the commercial drone market is expected to grow to remarkable levels in the near future. However, some drone users may mistakenly or intentionally fly into flight paths at major airports, fly too close to commercial aircraft, or invade people's privacy. To prevent such unwanted events, counter-drone technology is needed to eliminate threats from drones so that they can hopefully be integrated into the skies safely. Various counter-drone methods are available in the industry; however, a counter-drone system supported by artificial intelligence (AI) can be a more efficient way to fight drones than human intervention. In this paper, a deep reinforcement learning (DRL) method is proposed to counter a drone in 3D space by using another drone. DRL has already been shown to be an effective way to counter a drone in 2D space; countering a drone in 3D space with another drone, however, is a very challenging task, given the training time required and the need to avoid obstacles at the same time. A Deep Q-Network (DQN) algorithm with a dueling network architecture and prioritized experience replay is presented to catch another drone in an environment provided by the AirSim simulator. The models were trained and tested in different scenarios to analyze the drone's learning progress. Experiences from previous training runs are also transferred before a new training run starts, by pre-processing the previous experiences and eliminating those considered bad. The results show that the best models are obtained with transfer learning and that the drone's learning progress increases dramatically. Additionally, an algorithm that combines imitation learning and reinforcement learning, called Deep Q-learning from Demonstrations (DQfD), is implemented to catch the target drone. In DQfD, expert demonstration data and the agent's self-generated data are sampled together, and the agent continues learning without overwriting the demonstration data. The main advantage of this algorithm is that it accelerates the learning process even when only a small amount of demonstration data is available.

    This work was funded partially by AGAUR under grant 2020PANDE00141, the Ministry of Science and Innovation of Spain under grant PID2020-116377RB-C21, and the SESAR Joint Undertaking (JU) project CORUS-XUAM under grant SESAR-VLD2 101017682. The JU receives support from the European Union's Horizon 2020 research and innovation programme and SESAR JU members other than the Union.

    Peer reviewed. Postprint (published version).
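    As a minimal sketch of the dueling architecture the abstract names, the following PyTorch module splits the Q-value estimate into a state-value stream and an advantage stream. The layer sizes, state dimension, and action count here are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Dueling Q-network head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).

    state_dim, n_actions, and hidden sizes are illustrative assumptions,
    not values from the paper.
    """
    def __init__(self, state_dim: int = 12, n_actions: int = 6, hidden: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value = nn.Linear(hidden, 1)               # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.features(state)
        v = self.value(h)
        a = self.advantage(h)
        # Subtract the mean advantage so the V/A decomposition is identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```

    Subtracting the mean advantage is the standard formulation of the dueling head; it fixes the otherwise unidentifiable split between the value and advantage streams.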

    Adaptive action supervision in reinforcement learning from real-world multi-agent demonstrations

    Modeling real-world biological multi-agent systems is a fundamental problem in various scientific and engineering fields. Reinforcement learning (RL) is a powerful framework for generating flexible and diverse behaviors in cyberspace; however, when modeling real-world biological multi-agent systems, there is a domain gap between behaviors in the source (i.e., real-world data) and the target (i.e., the cyberspace used for RL), and the source environment parameters are usually unknown. In this paper, we propose a method for adaptive action supervision in RL from real-world demonstrations in multi-agent scenarios. We adopt an approach that combines RL and supervised learning by selecting demonstration actions in RL based on the minimum dynamic-time-warping distance, thereby exploiting information about the unknown source dynamics. This approach can easily be applied to many existing neural network architectures and provides an RL model that balances reproducibility, as imitation, against the generalization ability needed to obtain rewards in cyberspace. In experiments using chase-and-escape and football tasks with different dynamics between the unknown source and target environments, we show that our approach achieves a balance between reproducibility and generalization ability compared with the baselines. In particular, we used the tracking data of professional football players as expert demonstrations in the football task and show successful performance despite a larger gap between source and target behaviors than in the chase-and-escape task.

    Comment: 14 pages, 5 figures
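    A minimal sketch of the distance criterion described above: given a recent agent trajectory and a set of demonstration trajectories, pick the demonstration with the smallest dynamic-time-warping distance and use its actions as supervision targets. The plain O(nm) DTW and the data layout are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Classic O(n*m) dynamic time warping between two trajectories of
    shape (T, d), using Euclidean point-wise cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def nearest_demo(agent_traj: np.ndarray, demos: list[np.ndarray]) -> int:
    """Index of the demonstration with minimum DTW distance to the agent's
    recent trajectory; its actions would then serve as the supervision
    targets mixed into the RL loss (assumed usage, for illustration)."""
    dists = [dtw_distance(agent_traj, d) for d in demos]
    return int(np.argmin(dists))
```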

    Learning an Industrial Dross Skimming Task Using LfD Framework


    Deep learning techniques for visual object tracking

    Visual object tracking plays a crucial role in various vision systems, including biometric analysis, medical imaging, smart traffic systems, and video surveillance. Despite notable advances over the past few decades, many tracking algorithms still struggle with factors such as illumination changes, deformation, and scale variations. This thesis is divided into three parts.

    The first part introduces the visual object tracking problem and discusses the traditional approaches used to study it. We then propose a novel method, Tracking by Iterative Multi-Refinements, which addresses target localization by redefining the search for the ideal bounding box. The method uses an iterative process to forecast a sequence of bounding box adjustments, allowing the tracker to handle multiple non-conflicting transformations simultaneously. As a result, it tracks faster and can handle a larger number of composite transformations.

    The second part explores the application of reinforcement learning (RL) to visual tracking, presenting a general RL framework applicable to problems that require a sequence of decisions. We discuss the main families of popular RL approaches, including value-based methods, policy gradient approaches, and actor-critic methods, and then examine how RL is applied to visual tracking, where an RL agent predicts the target's location or selects hyperparameters, correlation filters, or target appearance models. A comprehensive comparison of these approaches is provided, along with a taxonomy of state-of-the-art methods.

    The third part presents a novel method that addresses the need for online tuning of offline-trained tracking models. Offline-trained models, whether trained with supervised learning or with reinforcement learning, typically require additional tuning during online tracking to reach optimal performance, and the duration of this tuning depends on the number of layers that must be trained for the new target. This thesis proposes an approach that speeds up the training of convolutional neural networks (CNNs) while preserving their high performance.

    In summary, this thesis extensively explores visual object tracking and related domains, covering traditional approaches, the novel Tracking by Iterative Multi-Refinements method, the application of reinforcement learning, and a method for accelerating CNN training. By addressing the challenges faced by existing tracking algorithms, this research aims to advance the field of visual object tracking and to contribute to the development of more robust and efficient tracking systems.
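    A minimal sketch of the iterative-refinement idea described in the first part: starting from the previous frame's box, repeatedly apply a predicted set of non-conflicting adjustments (shifts and scalings) until the predictor signals a stop. The adjustment vocabulary, the predictor interface, and the stop condition are illustrative assumptions, not the thesis's actual action set.

```python
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x, y, w, h)

# Illustrative adjustment vocabulary; the thesis's actual action set may differ.
ADJUSTMENTS = {
    "left":   lambda b, s: (b[0] - s * b[2], b[1], b[2], b[3]),
    "right":  lambda b, s: (b[0] + s * b[2], b[1], b[2], b[3]),
    "up":     lambda b, s: (b[0], b[1] - s * b[3], b[2], b[3]),
    "down":   lambda b, s: (b[0], b[1] + s * b[3], b[2], b[3]),
    "grow":   lambda b, s: (b[0], b[1], b[2] * (1 + s), b[3] * (1 + s)),
    "shrink": lambda b, s: (b[0], b[1], b[2] * (1 - s), b[3] * (1 - s)),
}

def refine(box: Box,
           predict: Callable[[Box], List[str]],
           step: float = 0.05,
           max_iters: int = 10) -> Box:
    """Iteratively apply the predicted set of non-conflicting adjustments
    until the predictor returns an empty set or max_iters is reached."""
    for _ in range(max_iters):
        moves = predict(box)  # e.g. ["right", "grow"] applied in one step
        if not moves:
            break             # predictor says the box already fits the target
        for name in moves:
            box = ADJUSTMENTS[name](box, step)
    return box
```

    Applying several compatible adjustments per iteration, rather than one action per step, is what lets this style of tracker converge in fewer iterations than single-action refinement.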