Evolving a Multiagent Controller for Micro Aerial Vehicles
Micro Aerial Vehicles (MAVs) are notoriously difficult to control: they are light, susceptible to minor fluctuations in the environment, and obey highly non-linear dynamics. Indeed, traditional control methods, particularly those relying on difficult-to-obtain models of the interaction between an MAV and its environment, have been unable to provide adequate control beyond simple maneuvers. In this paper, we address the problem of controlling an MAV (which has segmented control surfaces) by evolving a neuro-controller and fine-tuning it using multiagent coordination techniques. This approach is based on a control strategy that learns to map MAV states (position, velocity) to MAV actions (e.g., actuator position) to achieve good performance (e.g., flight time) by maximizing an objective function. The main difficulty with this approach is defining objective functions at the MAV level that allow good performance. In addition, to provide added robustness, we investigate a multiagent approach to control in which each control surface aims to optimize a local objective. Our results show that this approach not only provides good MAV control, but also provides robustness to (i) wind gusts by a factor of six; (ii) turbulence by a factor of four; and (iii) hardware failures by a factor of eight over a traditional control method.

This is the author's peer-reviewed final manuscript, as accepted by the publisher. The published article is copyrighted by IEEE (Institute of Electrical and Electronics Engineers) and can be found at: http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=5326. ©2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Keywords: Neuro-Evolution, Evolutionary Control, Multiagent Control, Micro Aerial Vehicle
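The core idea described above, evolving a controller that maps states to actions by maximizing an objective function, can be sketched with a minimal (1+1) evolutionary strategy over a linear policy. Everything here is illustrative: the state/target values, the toy objective, and the function names are hypothetical stand-ins, not the paper's neuro-controller or flight simulator.

```python
import random

def policy(weights, state):
    # Linear state-to-action map standing in for the evolved neuro-controller.
    return sum(w * s for w, s in zip(weights, state))

def objective(weights):
    # Toy stand-in for the flight-performance objective: reward actions
    # close to a target response for a few sample states (all values
    # here are illustrative, not from the paper's simulator).
    states = [(0.1, 0.0), (0.5, -0.2), (-0.3, 0.4)]
    targets = [0.1, 0.3, 0.1]
    return -sum((policy(weights, s) - t) ** 2 for s, t in zip(states, targets))

def evolve(generations=500, seed=0):
    # Simple (1+1) evolutionary strategy: mutate the best policy,
    # keep the child only if it improves the objective.
    rng = random.Random(seed)
    best = [rng.uniform(-1.0, 1.0) for _ in range(2)]
    best_fit = objective(best)
    for _ in range(generations):
        child = [w + rng.gauss(0.0, 0.1) for w in best]
        fit = objective(child)
        if fit > best_fit:
            best, best_fit = child, fit
    return best, best_fit

weights, fitness = evolve()
```

In the paper the policy is a neural network and the objective is measured in simulation; this sketch only shows the search loop's shape.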
A Neuro-evolutionary Approach to Control Surface Segmentation for Micro Aerial Vehicles
This paper addresses control surface segmentation in micro aerial vehicles (MAVs) by leveraging neuro-evolutionary techniques that allow the control of a higher number of control surfaces. Applying classical control methods to MAVs is a difficult process due to the complexity of the control laws with fast and highly non-linear dynamics. These methods are mostly based on models that are difficult to obtain for dynamic and stochastic environments. Moreover, these problems are exacerbated when the number of control surfaces increases and the model's accuracy in determining the impact of each control surface decreases. Instead, we focus on neuro-evolutionary techniques, which have been successfully applied in many domains with limited models and highly non-linear dynamics. Wind tunnel simulations with AVL show that MAV performance is improved in terms of both reduced deflection angles and reduced drag (up to 5%) over a simplified model in two sets of experiments with different objective functions. We also show robustness to actuator failure: desired roll moment values are still attained by the neuro-controller with failed actuators in the system.

Keywords: Evolutionary algorithms, Micro Aerial Vehicles, Neural Network
Learning based methods applied to the MAV control problem
This thesis addresses Micro Aerial Vehicle (MAV) control by leveraging learning-based techniques to improve the robustness of the control system. Applying classical control methods to MAVs is a difficult process due to the complexity of the control laws with fast and highly non-linear dynamics. These methods are mostly based on models that are difficult to obtain for dynamic and stochastic environments. Due to their size, MAVs are affected by wind gusts and perturbations that push the limits of model-based controllers, where the linear approximation no longer holds. Instead, we focus on a control strategy that learns to map MAV states (e.g., heading, altitude, velocity) to MAV actions (e.g., actuator positions) to achieve good performance (e.g., flight time, minimal altitude and heading error) by maximizing an objective function. The main difficulty with this approach is defining the objective function and tuning the learning parameters to achieve the desired results. These learning-based techniques have been used with great success in many domains with similar dynamics and are shown to improve MAV robustness with respect to wind gusts, perturbations, and actuator failure. Our results show significant improvements in response times to minor altitude and heading corrections over a traditional PID controller. In addition, we show that the MAV response to maintaining altitude in the presence of wind gusts improves by a factor of five. Similarly, we show that the MAV response to maintaining heading in the presence of turbulence improves by a factor of three. Finally, we show significant improvements in the case of control surface actuator failure when using a multiagent system. The multiagent control system performs up to 8 times better than the PID controller when tracking a target heading.
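The PID controller used as the baseline above is a standard feedback law; a minimal sketch follows. The gains, time step, and the 2-degree heading error are illustrative values, not the thesis's tuned configuration.

```python
class PID:
    """Minimal PID controller, the classical baseline compared against above.

    Gains and time step are illustrative, not the thesis's tuned values.
    """

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Accumulate the integral term and approximate the derivative,
        # then combine the three terms into one actuator command.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Example: correcting a 2-degree heading error with proportional-only gains.
pid = PID(kp=1.0, ki=0.0, kd=0.0, dt=0.02)
command = pid.update(2.0)
```

A fixed-gain controller like this is exactly what the learned multiagent approach is measured against: it responds to the instantaneous error but has no way to adapt when an actuator fails.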
Learning-based control and coordination of autonomous UAVs
Uninhabited aerial vehicles, also called UAVs, are currently controlled by a combination of a human pilot at a remote location and autopilot systems similar to those found on commercial aircraft. As UAVs transition from remote piloting to fully autonomous operation, control laws must be developed for the tasks to be performed. Flight control and navigation are low-level tasks that must be performed by the UAV in order to complete more useful missions. In the domain of persistent aerial surveillance, in which UAVs are responsible for locating and continually observing points of interest (POIs) in the environment, the mission can be accomplished much more efficiently by groups of cooperating UAVs.
To develop the controller for a UAV, a discrete-time, physics-based simulator was developed in which an initially random neural network controller could be evolved over successive generations to produce the desired output. Because of the inherent complexity of navigating and maintaining stable flight, a novel state space utilizing an approximation of the flight path length between the aircraft and its navigational waypoint is developed and implemented. By choosing the controller output as the net thrust of the aircraft from all control surfaces and impellers, a controller suitable for a wide range of UAV types is obtained. To develop a controller for each aircraft to cooperate in the persistent aerial surveillance domain, a behavior-based simulator was developed. Using this simulator, constraints on the flight dynamics are approximated to speed computation. Each UAV agent trains a neural network controller through successive episodes using sensory data about other aircraft and POIs.

Each controller was tested by simulating it in increasingly dynamic environments. The flight controller is shown to successfully maintain heading and altitude and to make turns to ultimately reach a waypoint. The surveillance coordination controller is shown to coordinate UAVs well for both static and mobile POIs, and to scale well from systems of 3 agents to systems of 30 agents. Scaling of the controller to more agents is particularly effective when using a difference reward calculation in training the controllers.
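The difference reward mentioned above credits each agent with its marginal contribution to the global reward: D_i = G(z) - G(z_{-i}), where G(z_{-i}) is the global reward with agent i removed. A minimal sketch in a toy surveillance setting follows; the agent names, POI identifiers, and the coverage-count global reward are hypothetical stand-ins, not the thesis's simulator.

```python
def global_reward(observations):
    # G: number of POIs observed by at least one UAV (toy stand-in for
    # the surveillance objective).
    covered = set()
    for obs in observations.values():
        covered |= obs
    return len(covered)

def difference_rewards(observations):
    # D_i = G(z) - G(z without agent i): each UAV is credited only with
    # the POIs that would go unobserved without it.
    g = global_reward(observations)
    return {
        agent: g - global_reward(
            {a: o for a, o in observations.items() if a != agent})
        for agent in observations
    }

# uav1 alone covers POI 0; POIs 1 and 2 are redundantly covered.
obs = {"uav1": {0, 1}, "uav2": {1, 2}, "uav3": {2}}
rewards = difference_rewards(obs)
```

Because redundant coverage earns no credit, agents trained on D are pushed toward observing POIs no one else covers, which is why the reward scales well to larger systems.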
Evolution of Control Programs for a Swarm of Autonomous Unmanned Aerial Vehicles
Unmanned aerial vehicles (UAVs) are rapidly becoming a critical military asset. In the future, advances in miniaturization are going to drive the development of insect-sized UAVs. New approaches to controlling these swarms are required. The goal of this research is to develop a controller to direct a swarm of UAVs in accomplishing a given mission. While previous efforts have largely been limited to a two-dimensional model, a three-dimensional model has been developed for this project. Models of UAV capabilities including sensors, actuators, and communications are presented. Genetic programming uses the principles of Darwinian evolution to generate computer programs to solve problems. A genetic programming approach is used to evolve control programs for UAV swarms. Evolved controllers are compared with a hand-crafted solution using quantitative and qualitative methods. Visualization and statistical methods are used to analyze solutions. Results indicate that genetic programming is capable of producing effective solutions to multi-objective control problems.
A Framework for Automatic Behavior Generation in Multi-Function Swarms
Multi-function swarms are swarms that solve multiple tasks at once. For example, a quadcopter swarm could be tasked with exploring an area of interest while simultaneously functioning as ad-hoc relays. With this type of multi-function capability comes the challenge of handling potentially conflicting requirements simultaneously. Using the Quality-Diversity algorithm MAP-Elites in combination with a suitable controller structure, a framework for automatic behavior generation in multi-function swarms is proposed. The framework is tested on a scenario with three simultaneous tasks: exploration, communication network creation, and geolocation of Radio Frequency (RF) emitters. A repertoire is evolved, consisting of a wide range of controllers, or behavior primitives, with different characteristics and trade-offs in the different tasks. This repertoire enables the swarm to transition online between behaviors featuring different trade-offs, depending on the situational requirements. Furthermore, the effect of noise on the behavior characteristics in MAP-Elites is investigated. A moderate number of re-evaluations is found to increase robustness while keeping the computational requirements relatively low. A few selected controllers are examined, and the dynamics of transitioning between these controllers are explored. Finally, the study investigates the importance of individual sensor or controller inputs. This is done through ablation, where individual inputs are disabled and their impact on the performance of the swarm controllers is assessed and analyzed.
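The MAP-Elites loop that produces such a repertoire keeps one elite per cell of a discretized behavior space, rather than a single best solution. A minimal sketch follows; the toy fitness, the one-dimensional behavior descriptor, and the genome size are illustrative assumptions, not the paper's swarm controllers or descriptors.

```python
import random

def evaluate(genome):
    # Toy evaluation: a fitness value plus a 1-D behavior descriptor in
    # [0, 1] (stand-ins for swarm performance and behavior characteristics).
    fitness = -sum((g - 0.5) ** 2 for g in genome)
    behavior = sum(genome) / len(genome)
    return fitness, min(max(behavior, 0.0), 1.0)

def map_elites(bins=10, iterations=2000, seed=0):
    rng = random.Random(seed)
    archive = {}  # behavior bin -> (fitness, genome): the repertoire
    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            # Select a random elite from the repertoire and mutate it.
            _, parent = rng.choice(list(archive.values()))
            genome = [g + rng.gauss(0.0, 0.1) for g in parent]
        else:
            # Occasionally inject a fresh random genome.
            genome = [rng.uniform(0.0, 1.0) for _ in range(3)]
        fitness, behavior = evaluate(genome)
        cell = min(int(behavior * bins), bins - 1)
        # Keep only the best (elite) controller seen in each behavior cell.
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, genome)
    return archive

repertoire = map_elites()
```

The returned archive is the repertoire the abstract describes: one controller per behavior niche, from which the swarm can pick whichever trade-off the situation requires. Re-evaluating noisy candidates before insertion, as the paper investigates, would slot in at the `evaluate` call.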