
    High fidelity progressive reinforcement learning for agile maneuvering UAVs

    In this work, we present a high-fidelity, model-based progressive reinforcement learning method for control system design for an agile maneuvering UAV. Our work relies on a simulation-based training and testing environment for software-in-the-loop (SIL), hardware-in-the-loop (HIL), and integrated flight testing within a photo-realistic virtual reality (VR) environment. Through progressive learning with the high-fidelity agent and environment models, the guidance and control policies build agile maneuvering on top of fundamental control laws. First, we provide insight into the development of high-fidelity mathematical models using frequency-domain system identification. These models are later used to design reinforcement learning based adaptive flight control laws that allow the vehicle to be controlled over a wide range of operating conditions, covering changes such as payload, voltage, and damage to actuators and electronic speed controllers (ESCs). We then design the outer-loop flight guidance and control laws. Our current work and progress are summarized here.
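
    As a rough illustration of the frequency-domain system identification step mentioned above, the following minimal Python sketch fits a first-order-plus-delay transfer function to synthetic frequency-response data; the model structure, data, and noise level are illustrative assumptions, not the authors' actual models or code.

```python
# Minimal sketch of frequency-domain system identification: fit a
# first-order-plus-delay transfer function K / (tau*s + 1) * exp(-s*T)
# to measured frequency-response data. Data and model order are assumed
# for illustration only.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical frequency-response measurement (e.g., from frequency sweeps).
w = np.logspace(-1, 2, 50)                                  # rad/s
K0, tau0, delay0 = 2.0, 0.15, 0.02                          # "true" values
G_meas = K0 / (1j * w * tau0 + 1) * np.exp(-1j * w * delay0)
G_meas += 0.02 * (np.random.randn(w.size) + 1j * np.random.randn(w.size))

def model(params, w):
    K, tau, delay = params
    return K / (1j * w * tau + 1) * np.exp(-1j * w * delay)

def residuals(params):
    err = model(params, w) - G_meas
    return np.concatenate([err.real, err.imag])             # real-valued residuals

fit = least_squares(residuals, x0=[1.0, 0.1, 0.0])
print("identified (K, tau, delay):", fit.x)
```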

    Reinforcement Learning Adaptive PID Controller for an Under-actuated Robot Arm

    Abstract: An adaptive PID controller is used to control a two-degree-of-freedom under-actuated manipulator. An actor-critic based reinforcement learning scheme is employed to tune the parameters of the adaptive PID controller. Reinforcement learning is an unsupervised scheme in which no reference signal exists toward which the algorithm is expected to converge; it is therefore appropriate for real-time applications. The controller structure, learning equations, and update rules are provided. Simulations are performed in SIMULINK, and the performance of the controller is compared with a NARMA-L2 controller. The results verify good performance of the controller in tracking and disturbance-rejection tests.
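
    The sketch below gives one possible reading of actor-critic tuning of PID gains, using linear function approximation and a toy first-order plant in place of the under-actuated arm; the plant, feature map, learning rates, and exploration noise are all assumptions for illustration, not the paper's scheme.

```python
# Sketch of actor-critic tuning of PID gains with linear function
# approximation on a toy first-order plant (stand-in for the manipulator).
# Feature map, plant, learning rates, and noise are assumptions.
import numpy as np

dt, steps, ref = 0.01, 2000, 1.0
base_gains = np.array([1.0, 0.1, 0.05])        # nominal (Kp, Ki, Kd)
theta_actor = np.zeros((3, 3))                 # maps features -> gain offsets
w_critic = np.zeros(3)                         # linear value function
alpha_a, alpha_c, gamma, sigma = 1e-4, 1e-3, 0.98, 0.05

y, e_int, e_prev = 0.0, 0.0, 0.0
for k in range(steps):
    e = ref - y
    phi = np.array([e, e_int, (e - e_prev) / dt])            # [error, integral, derivative]
    gains_mean = base_gains + theta_actor @ phi
    gains = gains_mean + sigma * np.random.randn(3)           # exploration on the gains
    u = gains @ phi                                           # PID law on the same features
    y += dt * (-y + u)                                        # toy first-order plant
    e_int += e * dt
    r = -e ** 2                                               # reward: negative squared error
    phi_next = np.array([ref - y, e_int, (ref - y - e) / dt])
    td = r + gamma * w_critic @ phi_next - w_critic @ phi     # temporal-difference error
    w_critic += alpha_c * td * phi                            # critic update
    theta_actor += alpha_a * td * np.outer((gains - gains_mean) / sigma**2, phi)  # actor update
    e_prev = e

print("final tracking error:", ref - y)
```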

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face several open challenges, including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly for swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation control can be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., the BristleBot) for artistic creation. In particular, we combine bio-inspired techniques (i.e., flocking, foraging) with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We then present a novel flocking control for a UAV swarm using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV acts on the local information it collects. In addition, to avoid collisions among UAVs and to guarantee flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty. We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication among a team of robots with swarming behavior for musical creation.
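
    The following sketch illustrates the kind of per-UAV reward shaping described above (a flocking-maintenance term, a mutual reward, and a collision penalty), computed from local information only; the spacing thresholds, weights, and positions are illustrative assumptions rather than the thesis' actual values.

```python
# Sketch of the reward shaping for a follower UAV: flocking maintenance,
# mutual reward, and collision penalty, computed from local information
# only. Spacing thresholds and weights are assumed values.
import numpy as np

D_REF, D_COLL = 2.0, 0.5                   # desired spacing / collision radius (assumed)
W_FLOCK, W_MUTUAL, W_COLL = 1.0, 0.5, 10.0

def follower_reward(p_i, p_leader, p_neighbors):
    """Per-UAV reward used with centralized training, decentralized execution."""
    # flocking maintenance: stay near the desired distance to the leader
    r_flock = -W_FLOCK * abs(np.linalg.norm(p_i - p_leader) - D_REF)
    # mutual reward: keep neighbors near the desired spacing as well
    d_n = np.array([np.linalg.norm(p_i - p_j) for p_j in p_neighbors])
    r_mutual = -W_MUTUAL * float(np.mean(np.abs(d_n - D_REF))) if d_n.size else 0.0
    # collision penalty
    r_coll = -W_COLL * float(np.any(d_n < D_COLL)) if d_n.size else 0.0
    return r_flock + r_mutual + r_coll

# toy usage with random follower positions around a leader
leader = np.array([0.0, 0.0, 5.0])
followers = [leader + np.random.randn(3) for _ in range(4)]
for i, p in enumerate(followers):
    others = [q for j, q in enumerate(followers) if j != i]
    print(f"UAV {i}: reward = {follower_reward(p, leader, others):.3f}")
```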

    Health Management and Adaptive Control of Distributed Spacecraft Systems

    As the development of challenging missions like on-orbit construction and collaborative inspection involving multi-spacecraft systems increases, so do the requirements for post-failure safety to maintain mission performance, especially when operating under uncertain conditions. In particular, space missions that involve Distributed Spacecraft Systems (e.g., inspection, repair, assembly, or deployment of space assets) are susceptible to failures and threats that are detrimental to overall mission performance. This research applies a distributed Health Management System that uses a bio-inspired mechanism based on the Artificial Immune System, coupled with a Support Vector Machine, to obtain an optimized health monitoring system capable of detecting nominal and off-nominal system conditions. A simulation environment is developed for a fleet of spacecraft performing a low-Earth-orbit inspection in close proximity to a target space asset, where the observer spacecraft follow stable relative orbits with respect to the target, allowing the dynamics to be expressed using the Clohessy-Wiltshire-Hill equations. Additionally, based on desired points of inspection, the observers have specific attitude requirements that are met using reaction wheels as the control moment devices. An adaptive controller based on Deep Reinforcement Learning with an Actor-Critic-Adverse architecture is implemented to achieve high levels of mission protection, especially under disturbances that might lead to performance degradation. Numerical simulations are performed to evaluate the capabilities of the health management architecture when the spacecraft network is subjected to failures. A comparison of different attitude controllers, such as Nonlinear Dynamic Inversion and Pole Placement, against the Deep Reinforcement Learning based controller is presented. The Dynamic Inversion controller showed better tracking performance but large control effort, while the Deep Reinforcement Learning controller showed satisfactory tracking performance with minimal control effort. The numerical simulations successfully demonstrated the potential of the bio-inspired Health Monitoring System architecture and of the controller to detect and identify failures and to overcome bounded disturbances, respectively.
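
    As a small illustration of the relative-motion model mentioned above, the sketch below propagates the unforced Clohessy-Wiltshire-Hill equations for one observer about the target; the orbit altitude and initial relative state are assumptions chosen so the relative orbit stays bounded, and the code is not the thesis simulation.

```python
# Sketch of the unforced Clohessy-Wiltshire-Hill relative dynamics for one
# observer about the target asset. Orbit altitude and initial relative
# state are assumed; the along-track velocity is chosen so the relative
# orbit stays bounded (no drift).
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.986004418e14                 # Earth gravitational parameter [m^3/s^2]
a = 6_778_000.0                     # ~400 km altitude circular orbit [m] (assumed)
n = np.sqrt(MU / a**3)              # mean motion [rad/s]

def cwh(t, s):
    """x radial, y along-track, z cross-track; no control or disturbance forces."""
    x, y, z, vx, vy, vz = s
    return [vx, vy, vz,
            3 * n**2 * x + 2 * n * vy,
            -2 * n * vx,
            -n**2 * z]

s0 = [100.0, 0.0, 50.0, 0.0, -2 * n * 100.0, 0.0]   # [m, m/s], bounded relative orbit
period = 2 * np.pi / n
sol = solve_ivp(cwh, (0.0, period), s0, max_step=10.0)
print("relative position after one orbit [m]:", sol.y[:3, -1])
```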

    Self-Tuning PID Control via a Hybrid Actor-Critic-Based Neural Structure for Quadcopter Control

    The Proportional-Integral-Derivative (PID) controller is used in a wide range of industrial and experimental processes. Several offline methods exist for tuning PID gains; however, due to the uncertainty of model parameters and external disturbances, real systems such as quadrotors need more robust and reliable PID controllers. In this research, a self-tuning PID controller using a Reinforcement-Learning-based Neural Network for attitude and altitude control of a Quadrotor has been investigated. An Incremental PID, which contains static and dynamic gains, has been considered, and only the variable gains have been tuned. To tune the dynamic gains, a model-free actor-critic-based hybrid neural structure was used that was able to properly tune the PID gains and also performed well as an identifier. In both the tuning and identification tasks, a Neural Network with two hidden layers and sigmoid activation functions has been trained using the Adaptive Moment Estimation (ADAM) optimizer and the Back-Propagation (BP) algorithm. This method is online, able to handle disturbances, and fast to train. In addition to robustness to mass uncertainty and wind-gust disturbance, the results showed that the proposed method performed better than a PID controller with constant gains.
    Comment: 7 pages, 18 figures, The 30th Annual International Conference of the Iranian Society of Mechanical Engineers
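
    A minimal sketch of an incremental (velocity-form) PID with static base gains plus dynamic offsets is given below; the dynamic-gain function is a placeholder standing in for the actor network's output, and the plant, base gains, and time step are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of an incremental (velocity-form) PID with static base gains plus
# dynamic offsets. The dynamic_gains() placeholder stands in for the actor
# network; the plant, gains, and time step are assumed values.
import numpy as np

def dynamic_gains(error_history):
    """Placeholder for the actor network output (dKp, dKi, dKd)."""
    return np.array([0.05, 0.01, 0.02])

base = np.array([1.2, 0.4, 0.1])         # static (Kp, Ki, Kd); Ki, Kd absorb the sample time
dt, ref = 0.01, 1.0
y, u = 0.0, 0.0
e1 = e2 = 0.0                            # e(k-1), e(k-2)
for k in range(1000):
    e = ref - y
    Kp, Ki, Kd = base + dynamic_gains([e, e1, e2])
    # incremental PID: only the control increment is computed each step
    du = Kp * (e - e1) + Ki * e + Kd * (e - 2 * e1 + e2)
    u += du
    y += dt * (-y + u)                   # toy first-order plant (stand-in for the quadrotor loop)
    e2, e1 = e1, e

print("final output:", y, "tracking error:", ref - y)
```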

    Automating Vehicles by Deep Reinforcement Learning using Task Separation with Hill Climbing

    Within the context of autonomous driving, a model-based reinforcement learning algorithm is proposed for the design of neural-network-parameterized controllers. Classical model-based control methods, which include sampling- and lattice-based algorithms and model predictive control, suffer from a trade-off between model complexity and the computational burden required for the online solution of expensive optimization or search problems at every short sampling time. To circumvent this trade-off, a two-step procedure is motivated: first, a controller is learned during offline training based on an arbitrarily complicated mathematical system model; then, the trained controller is evaluated online as a fast feedforward pass. The contribution of this paper is a simple gradient-free, model-based algorithm for deep reinforcement learning using task separation with hill climbing (TSHC). In particular, we advocate (i) simultaneous training on separate deterministic tasks with the purpose of encoding many motion primitives in a neural network, and (ii) the use of maximally sparse rewards in combination with virtual velocity constraints (VVCs) in setpoint proximity.
    Comment: 10 pages, 6 figures, 1 table
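
    The sketch below shows gradient-free hill climbing over controller parameters evaluated simultaneously on several separate deterministic tasks with a maximally sparse reward, in the spirit of TSHC; the point-mass tasks, linear controller, and noise scale are illustrative assumptions, and the virtual velocity constraints are omitted.

```python
# Sketch of gradient-free hill climbing over controller parameters trained
# simultaneously on separate deterministic tasks with a maximally sparse
# reward (goal reached or not). Tasks, controller, and noise scale are
# assumed; virtual velocity constraints are omitted.
import numpy as np

rng = np.random.default_rng(0)
goals = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 0.5])]

def rollout(theta, goal, steps=50, dt=0.1):
    """Roll out a linear state-feedback controller on a 2-D point mass."""
    W = theta.reshape(2, 4)
    pos, vel = np.zeros(2), np.zeros(2)
    for _ in range(steps):
        u = W @ np.concatenate([goal - pos, -vel])   # features: position error, -velocity
        vel += dt * np.clip(u, -1.0, 1.0)
        pos += dt * vel
        if np.linalg.norm(goal - pos) < 0.05:
            return 1.0                               # maximally sparse reward
    return 0.0

def total_return(theta):
    return sum(rollout(theta, g) for g in goals)      # all tasks evaluated together

theta = rng.normal(scale=0.1, size=8)
best = total_return(theta)
for _ in range(500):                                  # hill climbing: keep improving perturbations
    cand = theta + rng.normal(scale=0.05, size=theta.size)
    score = total_return(cand)
    if score >= best:
        theta, best = cand, score

print("tasks solved:", int(best), "of", len(goals))
```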