Contemporary autopilot systems for unmanned aerial vehicles (UAVs) have a far
more limited flight envelope than experienced human pilots, restricting the
conditions in which UAVs can operate and the types of missions they can
accomplish autonomously. This paper proposes a deep
reinforcement learning (DRL) controller to handle the nonlinear attitude
control problem, enabling extended flight envelopes for fixed-wing UAVs. A
proof-of-concept controller using the proximal policy optimization (PPO)
algorithm is developed, and is shown to be capable of stabilizing a fixed-wing
UAV from a large set of initial conditions to reference roll, pitch and
airspeed values. The training process is outlined, and the key factors
affecting its rate of progression are considered; the most important is found
to be limiting the number of variables in the observation vector while
including values from several previous time steps for these variables. The trained
reinforcement learning (RL) controller is compared to a
proportional-integral-derivative (PID) controller, and is found to converge in
more cases than the PID controller, with comparable performance. Furthermore,
the RL controller is shown to generalize well to unseen disturbances in the
form of wind and turbulence, even in severe disturbance conditions.
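
The observation design highlighted above, keeping the number of variables small while including values from several previous time steps, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; the choice of six per-step variables (roll, pitch and airspeed errors plus angular rates) and a five-step history are assumptions made for the example.

```python
import numpy as np
from collections import deque

HISTORY_LEN = 5  # assumed number of previous time steps in the observation


class AttitudeObservation:
    """Builds the RL observation by stacking a small set of error/state
    variables over the last few time steps."""

    def __init__(self, history_len=HISTORY_LEN):
        self.history = deque(maxlen=history_len)

    def reset(self):
        self.history.clear()

    def step(self, roll_err, pitch_err, airspeed_err, p, q, r):
        # Keep the per-step vector small: setpoint errors plus angular rates.
        current = np.array(
            [roll_err, pitch_err, airspeed_err, p, q, r], dtype=np.float32
        )
        self.history.append(current)
        # Pad with copies of the first sample until the buffer is full.
        while len(self.history) < self.history.maxlen:
            self.history.appendleft(current)
        return np.concatenate(self.history)  # shape: (history_len * 6,)
```

The concatenated vector would then serve as the input to the PPO policy network at each control step; the exact variables and history length used in the paper may differ.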