32 research outputs found

    Autonomous Unmanned Aerial Vehicle Navigation using Reinforcement Learning: A Systematic Review

    There is an increasing demand for Unmanned Aerial Vehicles (UAVs), commonly known as drones, in applications such as package delivery, traffic monitoring, search and rescue operations, and military combat engagements. In all of these applications, the UAV must navigate the environment autonomously, without human interaction, perform specific tasks, and avoid obstacles. Autonomous UAV navigation is commonly accomplished using Reinforcement Learning (RL), where agents act as experts in a domain to navigate the environment while avoiding obstacles. Understanding the navigation environment and algorithmic limitations plays an essential role in choosing the appropriate RL algorithm to solve the navigation problem effectively. Consequently, this study first identifies the main UAV navigation tasks and discusses navigation frameworks and simulation software. Next, RL algorithms are classified and discussed based on the environment, algorithm characteristics, abilities, and applications in different UAV navigation problems, which will help practitioners and researchers select the appropriate RL algorithms for their UAV navigation use cases. Moreover, the identified gaps and opportunities will drive future UAV navigation research.

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly in swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control could be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., BristleBot) for artistic creation. In particular, we combine bio-inspired (i.e., flocking, foraging) techniques with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We proceed by presenting a novel flocking control method for UAV swarms using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV performs actions based on the local information it collects. In addition, to avoid collisions among UAVs and guarantee flocking and navigation, a reward function is designed that combines global flocking maintenance, a mutual reward, and a collision penalty. We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy using actor-critic networks and a global state space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walk to control the communication between a team of robots with swarming behavior for musical creation.
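    The abstract names the three reward terms (global flocking maintenance, mutual reward, collision penalty) but not their exact form. A minimal sketch of such a reward follows; the leader-centroid formulation, the weights, and the distance thresholds are all assumptions rather than the thesis's design:

```python
# Hypothetical sketch of a flocking reward combining the three terms the
# abstract names. All weights and thresholds are assumed, not the thesis's.
import numpy as np

def flocking_reward(positions, leader_pos, d_safe=0.5, d_flock=2.0,
                    w_flock=1.0, w_mutual=0.5, w_collision=10.0):
    """positions: (N, 3) array of follower UAV positions."""
    centroid = positions.mean(axis=0)
    # Global flocking maintenance: keep the swarm centroid near the leader.
    r_flock = -w_flock * np.linalg.norm(centroid - leader_pos)

    # Pairwise distances between all UAVs (upper triangle, no self-pairs).
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(positions), k=1)
    pair = dists[iu]

    # Mutual reward: fraction of neighbour pairs within flocking range.
    r_mutual = w_mutual * np.mean(pair < d_flock)
    # Collision penalty: penalize every pair closer than the safety distance.
    r_collision = -w_collision * np.sum(pair < d_safe)
    return r_flock + r_mutual + r_collision
```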

    Sub-optimal Policy Aided Multi-Agent Reinforcement Learning for Flocking Control

    Flocking control is a challenging problem, where multiple agents, such as drones or vehicles, need to reach a target position while maintaining the flock and avoiding collisions with obstacles and with other agents in the environment. Multi-agent reinforcement learning has achieved promising performance in flocking control. However, methods based on traditional reinforcement learning require a considerable number of interactions between agents and the environment. This paper proposes a sub-optimal policy aided multi-agent reinforcement learning algorithm (SPA-MARL) to boost sample efficiency. SPA-MARL directly leverages a prior policy, which can be manually designed or solved with a non-learning method and may itself be sub-optimal, to aid agents in learning. SPA-MARL measures the performance gap between the sub-optimal policy and the learned policy, and imitates the sub-optimal policy when the latter performs better. We leverage SPA-MARL to solve the flocking control problem. A traditional control method based on artificial potential fields is used to generate the sub-optimal policy. Experiments demonstrate that SPA-MARL can speed up the training process and outperform both the MARL baseline and the sub-optimal policy it uses. Comment: Accepted by IEEE International Conference on Systems, Man, and Cybernetics (SMC) 2022.
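    As a rough illustration of the gating idea described above, the sketch below compares the critic's value of the learned action against the sub-optimal (artificial potential field) action and adds an imitation term only where the latter scores higher. The network interfaces and the APF policy are placeholder assumptions, not the paper's implementation:

```python
# Hedged sketch of the SPA-MARL actor objective: imitate the sub-optimal
# policy only where the critic rates its action higher than the agent's own.
import torch
import torch.nn.functional as F

def spa_marl_actor_loss(actor, critic, apf_policy, obs):
    own_action = actor(obs)            # learned policy's action
    apf_action = apf_policy(obs)       # sub-optimal prior (e.g., APF) action
    q_own = critic(obs, own_action)
    q_apf = critic(obs, apf_action)

    # Standard deterministic policy-gradient term (critic assumed frozen
    # during the actor update, as in DDPG-style training).
    pg_loss = -q_own.mean()

    # Per-sample gate: 1 where the sub-optimal action looks better.
    better = (q_apf > q_own).float().detach().view(-1)
    imitation = F.mse_loss(own_action, apf_action.detach(),
                           reduction='none').mean(dim=-1)
    return pg_loss + (better * imitation).mean()
```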

    Deep Reinforcement Learning Attitude Control of Fixed-Wing UAVs Using Proximal Policy Optimization

    Contemporary autopilot systems for unmanned aerial vehicles (UAVs) are far more limited in their flight envelope than experienced human pilots, thereby restricting the conditions UAVs can operate in and the types of missions they can accomplish autonomously. This paper proposes a deep reinforcement learning (DRL) controller to handle the nonlinear attitude control problem, enabling extended flight envelopes for fixed-wing UAVs. A proof-of-concept controller using the proximal policy optimization (PPO) algorithm is developed, and is shown to be capable of stabilizing a fixed-wing UAV from a large set of initial conditions to reference roll, pitch, and airspeed values. The training process is outlined and key factors for its progression rate are considered, with the most important factor found to be limiting the number of variables in the observation vector and including values from several previous time steps for these variables. The trained reinforcement learning (RL) controller is compared to a proportional-integral-derivative (PID) controller, and is found to converge in more cases than the PID controller, with comparable performance. Furthermore, the RL controller is shown to generalize well to unseen disturbances in the form of wind and turbulence, even in severe disturbance conditions. Comment: 11 pages, 3 figures, 2019 International Conference on Unmanned Aircraft Systems (ICUAS).
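    A minimal sketch of the observation design highlighted above, keeping the vector small and stacking the last k time steps of each variable; the variable list and the value of k are assumptions, not the paper's exact choices:

```python
# Hypothetical observation builder: few variables, k stacked time steps.
from collections import deque
import numpy as np

k = 5  # number of past time steps to include (assumed)
history = deque(maxlen=k)

def build_observation(roll_err, pitch_err, airspeed_err, rates):
    """rates: (p, q, r) angular rates; errors are w.r.t. the references."""
    step = np.array([roll_err, pitch_err, airspeed_err, *rates])
    history.append(step)
    while len(history) < k:          # pad at episode start
        history.appendleft(step)
    return np.concatenate(history)   # shape: (k * 6,)
```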

    Actor-critic continuous state reinforcement learning for wind-turbine control robust optimization

    Variable-Speed Wind-Turbines (VSWTs), which extract electrical power from the wind's kinetic energy, are composed of subsystems that need to be controlled jointly, namely the blade pitch and the generator torque controllers. Previous state-of-the-art approaches decompose the joint control problem into independent control subproblems, each with its own control subgoal, carrying out the design and tuning of a parameterized controller for each subproblem separately. Such approaches neglect interactions among subsystems, which can introduce significant effects. This paper applies Actor-Critic Reinforcement Learning (ACRL) to the joint control problem as a whole, carrying out the simultaneous control-parameter optimization of both subsystems without neglecting their interactions, aiming for a globally optimal control of the whole system. The innovative control architecture uses an augmented input space so that the parameters can be fine-tuned for each working condition. Validation results from simulation experiments using the state-of-the-art OpenFAST simulator show a significant efficiency improvement relative to the best state-of-the-art controllers used as benchmarks, up to a 22% improvement in average power error performance after ACRL training. This work has been partially supported by FEDER funds through MINECO project TIN2017-85827-P, MCIN project PID2020-116346 GB-I00, and project KK-202000044 of the Elkartek 2020 funding program of the Basque Government.
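    One plausible reading of the "augmented input space" described above is an actor network that receives the turbine state together with the working condition (e.g., wind speed) and outputs the joint pitch/torque controller parameters. The sketch below only illustrates that idea; the layer sizes, input choices, and conditioning variable are assumptions:

```python
# Hypothetical actor whose input is augmented with the working condition,
# so the emitted controller parameters adapt to it.
import torch
import torch.nn as nn

class GainSchedulingActor(nn.Module):
    def __init__(self, state_dim=4, condition_dim=1, n_params=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + condition_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, n_params),   # joint pitch + torque gains
        )

    def forward(self, state, condition):
        # Augmented input: turbine state concatenated with the condition
        # (e.g., current wind speed), so outputs vary per working point.
        return self.net(torch.cat([state, condition], dim=-1))
```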

    Outdoor operations of multiple quadrotors in windy environment

    Coordinated multiple small unmanned aerial vehicles (sUAVs) offer several advantages over a single sUAV platform. These advantages include improved task efficiency, reduced task completion time, improved fault tolerance, and higher task flexibility. However, their deployment in an outdoor environment is challenging due to the presence of wind gusts. The coordinated motion of a multi-sUAV system in the presence of wind disturbances is a challenging problem when considering collision avoidance (safety), scalability, and communication connectivity. Performing wind-agnostic motion planning for sUAVs may produce a sizeable cross-track error if the wind on the planned route leads to actuator saturation. In a multi-sUAV system, each sUAV has to locally counter the wind disturbance while maintaining the safety of the system. Such continuous manipulation of the control effort for multiple sUAVs under uncertain environmental conditions is computationally taxing and can lead to reduced efficiency and safety concerns. Additionally, modern-day sUAV systems are susceptible to cyberattacks due to their use of commercial wireless communication infrastructure. This dissertation aims to address these multi-faceted challenges related to the operation of outdoor rotor-based multi-sUAV systems. A comprehensive review of four representative techniques to measure and estimate wind speed and direction using rotor-based sUAVs is discussed. After developing a clear understanding of the role wind gusts play in quadrotor motion, two decentralized motion planners for a multi-quadrotor system are implemented and experimentally evaluated in the presence of wind disturbances. The first planner is rooted in the reinforcement learning (RL) technique of state-action-reward-state-action (SARSA) and provides generalized path plans in the presence of wind disturbances (see the sketch after this abstract). While this planner provides feasible trajectories for the quadrotors, it does not provide guarantees of collision avoidance. The second planner implements a receding horizon (RH) mixed-integer nonlinear programming (MINLP) model that is integrated with control barrier functions (CBFs) to guarantee collision-free transit of the multiple quadrotors in the presence of wind disturbances. Finally, a novel communication protocol using Ethereum blockchain-based smart contracts is presented to address the challenge of secure wireless communication. The U.S. sUAV market is expected to be worth $92 billion by 2030. The Association for Unmanned Vehicle Systems International (AUVSI) noted in its seminal economic report that UAVs would be responsible for creating 100,000 jobs by 2025 in the U.S. The rapid proliferation of drone technology in various applications has led to an increasing need for professionals skilled in sUAV piloting, designing, fabricating, repairing, and programming. Engineering educators have recognized this demand for certified sUAV professionals. This dissertation aims to address this growing sUAV-market need by evaluating two active learning-based instructional approaches designed for undergraduate sUAV education. The two approaches leverage the interactive-constructive-active-passive (ICAP) framework of engagement and explore the use of Competition-Based Learning (CBL) and Project-Based Learning (PBL). The CBL approach is implemented through a drone building and piloting competition that featured 97 students from undergraduate and graduate programs at NJIT.
    The competition focused on 1) drone assembly, testing, and validation using commercial off-the-shelf (COTS) parts, 2) simulation of drone flight missions, and 3) manual and semi-autonomous drone piloting. The effective student learning experience from this competition served as the basis of a new undergraduate course on drone science fundamentals at NJIT. This undergraduate course focused on the three foundational pillars of drone careers: 1) drone programming using Python, 2) designing and fabricating drones using Computer-Aided Design (CAD) and rapid prototyping, and 3) the US Federal Aviation Administration (FAA) Part 107 Commercial small Unmanned Aerial Vehicles (sUAVs) pilot test. Multiple assessment methods are applied to examine the students’ gains in sUAV skills and knowledge and student attitudes towards an active learning-based approach for sUAV education. The use of active learning techniques to address these challenges led to meaningful student engagement and positive gains in the learning outcomes, as indicated by quantitative and qualitative assessments.
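    For reference, the core of the SARSA-based planner mentioned above is the textbook on-policy update rule. The state/action discretization used for the wind-disturbed quadrotor problem is not specified here, so only the generic update is sketched:

```python
# Generic tabular SARSA update; problem-specific discretization omitted.
def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """Q: dict or array mapping (state, action) -> value.
    On-policy: a_next is the action actually chosen in s_next."""
    td_target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```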

    Reinforcement Learning Agents acquire Flocking and Symbiotic Behaviour in Simulated Ecosystems

    In nature, group behaviours such as flocking, as well as cross-species symbiotic partnerships, are observed in vastly different forms and circumstances. We hypothesize that such strategies can arise in response to generic predator-prey pressures in a spatial environment with range-limited sensation and action. We evaluate whether these forms of coordination can emerge through independent multi-agent reinforcement learning in simple multi-species ecosystems. In contrast to prior work, we avoid hand-crafted shaping rewards, specific actions, or dynamics that would directly encourage coordination across agents. Instead, we test whether coordination, which has only indirect benefit, emerges as a consequence of adaptation without being explicitly encouraged. Our simulated ecosystems consist of a generic food chain involving three trophic levels: apex predator, mid-level predator, and prey. We conduct experiments on two different platforms, a 3D physics engine with tens of agents as well as a 2D grid world with up to thousands. The results clearly confirm our hypothesis and show substantial coordination both within and across species. To obtain these results, we leverage and adapt recent advances in deep reinforcement learning within an ecosystem training protocol featuring homogeneous groups of independent agents from different species (sets of policies), acting in many different random combinations in parallel habitats. The policies utilize neural network architectures that are invariant to agent individuality but not type (species) and that generalize across varying numbers of observed other agents. While the emergence of complexity in artificial ecosystems has long been studied in the artificial life community, the focus has been more on individual complexity and genetic algorithms or explicit modelling, and less on the group complexity and reinforcement learning emphasized in this article. Unlike what the name and intuition suggest, reinforcement learning here adapts over evolutionary history rather than a lifetime, addressing the sequential optimization of fitness that is usually approached by genetic algorithms in the artificial life community. We utilize a shift from procedures to objectives, allowing us to bring powerful new machinery to bear, and we see the emergence of complex behaviour from a sequence of simple optimization problems.
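    A standard way to build policies that are invariant to agent individuality but not type, and that generalize across varying numbers of observed agents, is to embed each observed neighbour with a per-species network and pool with a permutation-invariant operation. The sketch below is one such construction under assumed dimensions, not the paper's exact architecture:

```python
# Hypothetical neighbour encoder: identity-invariant, type-aware pooling.
import torch
import torch.nn as nn

class SpeciesInvariantEncoder(nn.Module):
    def __init__(self, obs_dim=4, embed_dim=32, n_species=3):
        super().__init__()
        # One embedding MLP per species (type matters, identity does not).
        self.embed = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, embed_dim), nn.ReLU())
            for _ in range(n_species))

    def forward(self, neighbours_by_species):
        """neighbours_by_species: list of (N_i, obs_dim) tensors, one per
        species; N_i may vary freely between time steps."""
        pooled = [f(x).mean(dim=0) if len(x)          # mean-pool per species
                  else torch.zeros(f[0].out_features)  # no neighbour observed
                  for f, x in zip(self.embed, neighbours_by_species)]
        return torch.cat(pooled)   # fixed-size code regardless of N_i
```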

    White shark optimizer with optimal deep learning based effective unmanned aerial vehicles communication and scene classification.

    Unmanned aerial vehicles (UAVs) have become a promising enabler for the next generation of wireless networks, given the tremendous growth in electronics and communications. Applications of UAV communication include coverage extension for transmission networks after disasters, connectivity for Internet of Things (IoT) devices, and dispatching distress messages from devices positioned within a coverage hole to the emergency centre. However, enhancing UAV clustering and scene classification with deep learning approaches raises several problems. This article presents a new White Shark Optimizer with Optimal Deep Learning based Effective Unmanned Aerial Vehicles Communication and Scene Classification (WSOODL-UAVCSC) technique. UAV clustering and scene categorization present many deep learning challenges in disaster management: scene-understanding complexity, data variability and abundance, visual feature extraction, nonlinear and high-dimensional data, adaptability and generalization, real-time decision making, UAV clustering optimization, and sparse and incomplete data. The need to handle complex, high-dimensional data, adapt to changing environments, and make quick, correct decisions in critical situations drives the use of deep learning in UAV clustering and scene categorization. The purpose of the WSOODL-UAVCSC technique is to cluster the UAVs for effective communication and scene classification. The WSO algorithm is utilized to optimize the UAV clustering process and enables the network to accomplish effective communication and interaction. By dynamically adjusting the clustering, the WSO algorithm improves the performance and robustness of the UAV system. For the scene classification process, the WSOODL-UAVCSC technique involves capsule network (CapsNet) feature extraction, marine predators algorithm (MPA) based hyperparameter tuning, and echo state network (ESN) classification. A wide-ranging simulation analysis was conducted to validate the enriched performance of the WSOODL-UAVCSC approach. Extensive result analysis pointed out the enhanced performance of the WSOODL-UAVCSC method over other existing techniques: it achieved an accuracy of 99.12%, a precision of 97.45%, a recall of 98.90%, and an F1-score of 98.10%.
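    Of the pipeline's components, the echo state network classifier is the most self-contained. A minimal ESN sketch of the kind named above follows; the reservoir size, spectral radius, and ridge-regression readout are assumptions, and the WSO clustering and MPA tuning stages are omitted:

```python
# Minimal echo state network: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res=200, spectral_radius=0.9):
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    # Rescale so the largest eigenvalue magnitude equals spectral_radius.
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def esn_state(W_in, W, features):
    """features: (T, n_in) input sequence (e.g., per-frame feature vectors);
    returns the final reservoir state."""
    x = np.zeros(W.shape[0])
    for u in features:
        x = np.tanh(W_in @ u + W @ x)
    return x

# Readout: ridge regression from collected states X (n_samples, n_res) to
# one-hot labels Y, e.g.
#   W_out = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ Y)
```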

    Artificial Intelligence Applications for Drones Navigation in GPS-denied or degraded Environments

    The abstract is in the attachment.