Multi-task Deep Reinforcement Learning with PopArt
The reinforcement learning community has made great strides in designing
algorithms capable of exceeding human performance on specific tasks. These
algorithms are mostly trained one task at a time, with each new task requiring a
brand new agent instance to be trained. This means the learning algorithm is general,
but each solution is not; each agent can only solve the one task it was trained
on. In this work, we study the problem of learning to master not one but
multiple sequential-decision tasks at once. A general issue in multi-task
learning is that a balance must be found between the needs of multiple tasks
competing for the limited resources of a single learning system. Many learning
algorithms can get distracted by certain tasks in the set of tasks to solve.
Such tasks appear more salient to the learning process, for instance because of
the density or magnitude of the in-task rewards. This causes the algorithm to
focus on those salient tasks at the expense of generality. We propose to
automatically adapt the contribution of each task to the agent's updates, so
that all tasks have a similar impact on the learning dynamics. This resulted in
state-of-the-art performance on learning to play all games in a set of 57
diverse Atari games. Excitingly, our method learned a single trained policy -
with a single set of weights - that exceeds median human performance. To our
knowledge, this was the first time a single agent surpassed human-level
performance on this multi-task domain. The same approach also demonstrated
state-of-the-art performance on a set of 30 tasks in the 3D reinforcement
learning platform DeepMind Lab.
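The adaptive rescaling described above is PopArt: per-task running statistics of the returns are used to normalize each task's value targets, and the value head is rescaled so its unnormalized predictions are preserved whenever the statistics change. The following is a minimal numpy sketch of that mechanism, not the authors' implementation; the class name, the hyperparameters (beta, hidden_dim) and the per-task linear value head are illustrative assumptions.

import numpy as np

class PopArtNormalizer:
    """Sketch of per-task PopArt: track running return statistics for each
    task and rescale that task's value head so its unnormalized outputs are
    preserved when the statistics move."""

    def __init__(self, num_tasks, hidden_dim, beta=3e-4):
        self.mu = np.zeros(num_tasks)      # running mean of returns per task
        self.nu = np.ones(num_tasks)       # running second moment per task
        self.beta = beta                   # step size for the statistics
        # per-task linear value head: v_i(h) = w_i . h + b_i
        self.w = np.random.randn(num_tasks, hidden_dim) * 0.01
        self.b = np.zeros(num_tasks)

    def sigma(self):
        return np.sqrt(np.maximum(self.nu - self.mu ** 2, 1e-4))

    def update_stats(self, task, returns):
        """Update one task's statistics, then rescale that task's value head
        so the unnormalized prediction is unchanged (the 'preserve' step)."""
        old_mu, old_sigma = self.mu[task], self.sigma()[task]
        self.mu[task] += self.beta * (np.mean(returns) - self.mu[task])
        self.nu[task] += self.beta * (np.mean(returns ** 2) - self.nu[task])
        new_sigma = self.sigma()[task]
        self.w[task] *= old_sigma / new_sigma
        self.b[task] = (old_sigma * self.b[task] + old_mu - self.mu[task]) / new_sigma

    def normalize_target(self, task, returns):
        """Targets for the value loss; every task's gradients then arrive at a
        comparable scale, which is what equalizes their impact on learning."""
        return (returns - self.mu[task]) / self.sigma()[task]

Because each task's targets are brought to roughly unit scale, no single game with dense or large rewards can dominate the shared network's updates.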
Deep Policies for Width-Based Planning in Pixel Domains
Width-based planning has demonstrated great success in recent years due to
its ability to scale independently of the size of the state space. For example,
Bandres et al. (2018) introduced a rollout version of the Iterated Width
algorithm whose performance compares well with humans and learning methods in
the pixel setting of the Atari games suite. In this setting, planning is done
on-line using the "screen" states and selecting actions by looking ahead into
the future. However, this algorithm is purely exploratory and does not leverage
past reward information. Furthermore, it requires the state to be factored into
features that need to be pre-defined for the particular task, e.g., the B-PROST
pixel features. In this work, we extend width-based planning by incorporating
an explicit policy in the action selection mechanism. Our method, called
π-IW, interleaves width-based planning and policy learning using the
state-actions visited by the planner. The policy estimate takes the form of a
neural network and is in turn used to guide the planning step, thus reinforcing
promising paths. Surprisingly, we observe that the representation learned by
the neural network can be used as a feature space for the width-based planner
without degrading its performance, thus removing the requirement of pre-defined
features for the planner. We compare π-IW with previous width-based methods
and with AlphaZero, a method that also interleaves planning and learning, in
simple environments, and show that π-IW has superior performance. We also
show that the π-IW algorithm outperforms previous width-based methods in the
pixel setting of the Atari games suite.
Comment: In Proceedings of the 29th International Conference on Automated
Planning and Scheduling (ICAPS 2019). arXiv admin note: text overlap with
arXiv:1806.0589
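The interleaving of planning and learning can be illustrated with a small sketch: a width-1 novelty test over state features prunes the lookahead, the lookahead samples its actions from the current policy, and the visited state-action pairs are then used to update that policy. Everything below (the chain environment, the feature map, the tabular softmax policy and its update rule) is an illustrative assumption, not the paper's pixel-based setup or network architecture.

import numpy as np

N_STATES, N_ACTIONS = 10, 2
GOAL = N_STATES - 1

def step(state, action):
    """Toy deterministic chain: action 1 moves right, action 0 moves left."""
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    return nxt, float(nxt == GOAL), nxt == GOAL

def features(state):
    """Stand-in for features taken from the policy network's hidden layer."""
    return (state,)

def action_probs(theta, state):
    logits = theta[state]
    e = np.exp(logits - logits.max())
    return e / e.sum()

def novel(feats, seen):
    """IW(1) novelty test: a state is novel if any single feature is new."""
    is_new = any(f not in seen for f in feats)
    seen.update(feats)
    return is_new

def policy_guided_rollout(theta, state, seen, max_depth=50):
    """One width-based rollout guided by the current policy; returns the
    visited (state, action) pairs used as training targets for the policy."""
    visited = []
    for _ in range(max_depth):
        a = np.random.choice(N_ACTIONS, p=action_probs(theta, state))
        nxt, reward, done = step(state, a)
        if not novel(features(nxt), seen):
            break                      # prune states that are not novel
        visited.append((state, a))
        state = nxt
        if done:
            break
    return visited

# Interleave planning and learning: plan with the current policy, then nudge
# the policy toward the actions the planner actually explored.
theta = np.zeros((N_STATES, N_ACTIONS))
for episode in range(200):
    seen = set(features(0))
    for state, a in policy_guided_rollout(theta, 0, seen):
        theta[state, a] += 0.1 * (1.0 - action_probs(theta, state)[a])

The same loop also shows why the learned representation can replace hand-crafted features: the novelty test only needs some discretized feature vector per state, so features drawn from the policy network's hidden layer are a drop-in substitute for pre-defined ones such as B-PROST.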
Learning with Opponent-Learning Awareness
Multi-agent settings are quickly gathering importance in machine learning.
This includes a plethora of recent work on deep multi-agent reinforcement
learning, but also can be extended to hierarchical RL, generative adversarial
networks and decentralised optimisation. In all these settings the presence of
multiple learning agents renders the training problem non-stationary and often
leads to unstable training or undesired final results. We present Learning with
Opponent-Learning Awareness (LOLA), a method in which each agent shapes the
anticipated learning of the other agents in the environment. The LOLA learning
rule includes a term that accounts for the impact of one agent's policy on the
anticipated parameter update of the other agents. Results show that the
encounter of two LOLA agents leads to the emergence of tit-for-tat and
therefore cooperation in the iterated prisoners' dilemma, while independent
learning does not. In this domain, LOLA also receives higher payouts compared
to a naive learner, and is robust against exploitation by higher-order
gradient-based methods. Applied to repeated matching pennies, LOLA agents
converge to the Nash equilibrium. In a round-robin tournament we show that LOLA
agents successfully shape the learning of a range of multi-agent learning
algorithms from the literature, resulting in the highest average returns on the
IPD. We also show that the LOLA update rule can be efficiently calculated using
an extension of the policy gradient estimator, making the method suitable for
model-free RL. The method thus scales to large parameter and input spaces and
nonlinear function approximators. We apply LOLA to a grid world task with an
embedded social dilemma using recurrent policies and opponent modelling. By
explicitly considering the learning of the other agent, LOLA agents learn to
cooperate out of self-interest. The code is at github.com/alshedivat/lola
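The LOLA rule can be written out exactly for single-shot matching pennies, where each agent's mixed strategy is p = sigmoid(theta) and the values are V1 = (2p-1)(2q-1) = -V2. The sketch below uses these closed-form gradients rather than the policy-gradient estimator the paper derives for model-free RL; the step sizes alpha and eta are illustrative assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lola_step(theta1, theta2, alpha=0.5, eta=1.0):
    """One simultaneous LOLA update for both players of matching pennies."""
    p, q = sigmoid(theta1), sigmoid(theta2)
    dp, dq = p * (1 - p), q * (1 - q)      # d sigmoid / d theta

    # exact gradients of V1 = (2p-1)(2q-1) and V2 = -V1
    dV1_d1 = 2 * dp * (2 * q - 1)          # naive gradient for agent 1
    dV1_d2 = 2 * dq * (2 * p - 1)
    dV2_d2 = -2 * dq * (2 * p - 1)         # naive gradient for agent 2
    dV2_d1 = -2 * dp * (2 * q - 1)
    d2V2_d1d2 = -4 * dp * dq               # cross term d^2 V2 / dtheta1 dtheta2
    d2V1_d2d1 = 4 * dp * dq                # cross term d^2 V1 / dtheta2 dtheta1

    # LOLA: naive gradient plus a correction that differentiates through the
    # opponent's anticipated gradient step (scaled by eta)
    new_theta1 = theta1 + alpha * (dV1_d1 + eta * dV1_d2 * d2V2_d1d2)
    new_theta2 = theta2 + alpha * (dV2_d2 + eta * dV2_d1 * d2V1_d2d1)
    return new_theta1, new_theta2

# Two LOLA agents playing each other from an arbitrary start.
t1, t2 = 0.8, -0.3
for _ in range(200):
    t1, t2 = lola_step(t1, t2)
print(sigmoid(t1), sigmoid(t2))

In this toy setting the two strategies spiral in toward the mixed Nash equilibrium near 0.5/0.5, whereas setting eta to zero recovers naive gradient learners that keep cycling around it instead of converging, which is the behaviour the abstract contrasts LOLA against.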