Continuous Control with a Combination of Supervised and Reinforcement Learning
Reinforcement learning methods have recently achieved impressive results on a wide range of control problems. However, especially with complex inputs, they still require an extensive amount of training data to converge to a meaningful solution. This limits their applicability to complex input spaces such as video signals and makes them impractical for many complex real-world problems, including much of video-based control. Supervised learning, by contrast, can learn from a relatively limited number of samples, but it relies on arbitrary hand-labelling of data rather than task-derived reward functions, and hence does not yield independent control policies. In this article we propose a novel, model-free approach that combines reinforcement and supervised learning for autonomous control and paves the way towards policy-based control in real-world environments. Using the SpeedDreams/TORCS video game, we demonstrate that our approach requires far fewer samples (hundreds of thousands versus millions or tens of millions) than state-of-the-art reinforcement learning techniques on similar data, while outperforming both purely supervised and purely reinforcement learning approaches in terms of quality. Additionally, we demonstrate the applicability of the method to MuJoCo control problems. The authors are grateful for the support from the UK Engineering and Physical Sciences Research Council (EPSRC) project DEVA EP/N035399/1.
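The core idea lends itself to a short sketch: train a single policy with a weighted sum of a supervised imitation loss and a policy-gradient RL loss. Everything below (the Gaussian policy, the weight `lam`, the REINFORCE-style term) is an illustrative assumption, not the paper's actual architecture.

```python
# Illustrative sketch (not the paper's exact method): blend a supervised
# imitation loss with a policy-gradient RL loss for continuous control.
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.mean = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                  nn.Linear(64, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs):
        return torch.distributions.Normal(self.mean(obs), self.log_std.exp())

def combined_loss(policy, obs, demo_actions, taken_actions, advantages, lam=0.5):
    # Supervised term: regress the policy mean toward labelled actions.
    sl = ((policy.dist(obs).mean - demo_actions) ** 2).mean()
    # RL term: REINFORCE-style, weighting log-probabilities by advantages.
    logp = policy.dist(obs).log_prob(taken_actions).sum(-1)
    rl = -(logp * advantages).mean()
    # lam trades off imitation against reward-driven learning.
    return lam * sl + (1 - lam) * rl
```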
Deep Reinforcement Learning from Self-Play in Imperfect-Information Games
Many real-world applications can be described as large-scale games of
imperfect information. To deal with these challenging domains, prior work has
focused on computing Nash equilibria in a handcrafted abstraction of the
domain. In this paper we introduce the first scalable end-to-end approach to
learning approximate Nash equilibria without prior domain knowledge. Our method
combines fictitious self-play with deep reinforcement learning. When applied to
Leduc poker, Neural Fictitious Self-Play (NFSP) approached a Nash equilibrium,
whereas common reinforcement learning methods diverged. In Limit Texas Hold'em,
a poker game of real-world scale, NFSP learnt a strategy that approached the
performance of state-of-the-art, superhuman algorithms based on significant
domain expertise.
Comment: updated version, incorporating conference feedback.
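A minimal structural sketch of NFSP may help: each agent keeps a best-response network trained by reinforcement learning and an average-policy network trained by supervised learning on its own best-response actions, mixing the two with an anticipatory parameter. The network sizes and the value of `eta` below are illustrative assumptions.

```python
# Skeleton of an NFSP agent (structure only; hyperparameters are assumptions).
import random
import torch
import torch.nn as nn

class NFSPAgent:
    def __init__(self, obs_dim, n_actions, eta=0.1):
        # Best-response network, trained by off-policy Q-learning.
        self.q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                   nn.Linear(64, n_actions))
        # Average-policy network, trained by supervised learning on the
        # agent's own best-response actions.
        self.avg_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, n_actions))
        self.eta = eta  # anticipatory parameter: prob. of best response

    def act(self, obs):
        obs = torch.as_tensor(obs, dtype=torch.float32)
        if random.random() < self.eta:
            # Play the greedy best response; the (obs, action) pair would be
            # stored in a supervised buffer to update the average policy.
            return self.q_net(obs).argmax().item()
        # Otherwise sample from the average policy.
        probs = torch.softmax(self.avg_net(obs), dim=-1)
        return torch.multinomial(probs, 1).item()
```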
Learning Unmanned Aerial Vehicle Control for Autonomous Target Following
While deep reinforcement learning (RL) methods have achieved unprecedented
successes in a range of challenging problems, their applicability has been
mainly limited to simulation or game domains due to the high sample complexity
of the trial-and-error learning process. However, real-world robotic
applications often need a data-efficient learning process with safety-critical
constraints. In this paper, we consider the challenging problem of learning
unmanned aerial vehicle (UAV) control for tracking a moving target. To acquire
a strategy that combines perception and control, we represent the policy by a
convolutional neural network. We develop a hierarchical approach that combines
a model-free policy gradient method with a conventional feedback
proportional-integral-derivative (PID) controller to enable stable learning
without catastrophic failure. The neural network is trained by a combination of
supervised learning from raw images and reinforcement learning from games of
self-play. We show that the proposed approach can learn a target following
policy in a simulator efficiently and the learned behavior can be successfully
transferred to the DJI quadrotor platform for real-world UAV control.
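The hierarchical idea, a conventional PID loop stabilising the vehicle while the learned policy is still weak, can be sketched as follows; the gains, the convex blending with weight `alpha`, and the interfaces are illustrative assumptions rather than the paper's exact scheme.

```python
# Sketch: blend a stabilising PID controller with a learned policy's action.
# Gains, dt, and the blending weight alpha are illustrative assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def control(self, err):
        # Standard proportional-integral-derivative terms.
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def hybrid_action(pid, policy_action, tracking_error, alpha=0.5):
    # Convex combination: the PID term prevents catastrophic failure while
    # the learned policy is still poor early in training.
    return alpha * pid.control(tracking_error) + (1 - alpha) * policy_action
```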
CrossNorm: Normalization for Off-Policy TD Reinforcement Learning
Off-policy temporal difference (TD) methods are a powerful class of
reinforcement learning (RL) algorithms. Intriguingly, deep off-policy TD
algorithms are not commonly used in combination with feature normalization
techniques, despite positive effects of normalization in other domains. We show
that naive application of existing normalization techniques is indeed not
effective, but that well-designed normalization improves optimization stability
and removes the necessity of target networks. In particular, we introduce a
normalization based on a mixture of on- and off-policy transitions, which we
call cross-normalization. It can be regarded as an extension of batch
normalization that re-centers data for two different distributions, as present
in off-policy learning. Applied to DDPG and TD3, cross-normalization improves
over the state of the art across a range of MuJoCo benchmark tasks.
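A rough sketch of the idea, assuming cross-normalization amounts to normalizing features with the moments of the mixture of two batches (e.g., on-policy and off-policy states); the mixing weight `beta` and the exact placement in the network are assumptions, not the paper's specification.

```python
# Sketch: normalize two batches with the moments of their mixture, so both
# distributions are re-centered consistently. beta is an assumed mixing weight.
import torch

def cross_normalize(x_a, x_b, beta=0.5, eps=1e-5):
    # Mixture mean over the two batches.
    mu = beta * x_a.mean(0) + (1 - beta) * x_b.mean(0)
    # Mixture variance via the second moment: E[x^2] - (E[x])^2.
    second = beta * (x_a ** 2).mean(0) + (1 - beta) * (x_b ** 2).mean(0)
    var = second - mu ** 2
    scale = torch.rsqrt(var.clamp_min(0) + eps)
    return (x_a - mu) * scale, (x_b - mu) * scale
```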
- …