Count-Based Exploration with the Successor Representation
In this paper we introduce a simple approach for exploration in reinforcement
learning (RL) that allows us to develop theoretically justified algorithms in
the tabular case but also extends to settings where function
approximation is required. Our approach is based on the successor
representation (SR), which was originally introduced as a representation
defining state generalization by the similarity of successor states. Here we
show that the norm of the SR, while it is being learned, can be used as a
reward bonus to incentivize exploration. To better understand this
transient behavior of the norm of the SR, we introduce the substochastic
successor representation (SSR) and show that it implicitly counts the number
of times each state (or feature) has been observed. We use this result to
introduce an algorithm that performs as well as some theoretically
sample-efficient approaches. Finally, we extend these ideas to a deep RL
algorithm and show that it achieves state-of-the-art performance in Atari 2600
games in a low sample-complexity regime.

Comment: This paper appears in the Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI 2020).
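As an illustration of the idea in this abstract, here is a minimal tabular sketch of using the norm of a successor representation (SR), learned by temporal-difference updates, as an exploration bonus. The initialization, step size, bonus scale, and the exact bonus form (beta over the l1 norm) are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

n_states = 10
gamma = 0.99   # discount used by the SR
alpha = 0.1    # SR step size (assumed)
beta = 0.05    # bonus scale (assumed)

# psi[s] estimates the discounted expected future occupancy of every
# state when starting from s. Initialized near zero, its norm grows as
# s and its successors are visited, so a small norm signals novelty.
psi = np.zeros((n_states, n_states))

def sr_step(s, s_next):
    """One TD update of the SR row for s, returning an exploration
    bonus that is large while psi[s] has not yet grown (under-visited)."""
    one_hot = np.eye(n_states)[s]
    psi[s] += alpha * (one_hot + gamma * psi[s_next] - psi[s])
    return beta / (np.linalg.norm(psi[s], 1) + 1e-8)
```

The count-based reading follows the abstract: under the substochastic variant, the norm of a row tracks how often the state has been observed, so the bonus decays roughly like an inverse visit count.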
Count-Based Exploration in Feature Space for Reinforcement Learning
We introduce a new count-based optimistic exploration algorithm for
Reinforcement Learning (RL) that is feasible in environments with
high-dimensional state-action spaces. The success of RL algorithms in these
domains depends crucially on generalisation from limited training experience.
Function approximation techniques enable RL agents to generalise in order to
estimate the value of unvisited states, but at present few methods enable
generalisation regarding uncertainty. This has prevented the combination of
scalable RL algorithms with efficient exploration strategies that drive the
agent to reduce its uncertainty. We present a new method for computing a
generalised state visit-count, which allows the agent to estimate the
uncertainty associated with any state. Our φ-pseudocount achieves
generalisation by exploiting the same feature representation of the state space
that is used for value function approximation. States whose features have been
observed less frequently are deemed more uncertain. The φ-Exploration-Bonus
algorithm rewards the agent for exploring in feature space rather than in the
untransformed state space. The method is simpler and less computationally
expensive than some previous proposals, and achieves near state-of-the-art
results on high-dimensional RL benchmarks.

Comment: Conference: Twenty-sixth International Joint Conference on Artificial Intelligence (IJCAI-17), 8 pages, 1 figure.
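To make the generalised visit-count concrete, below is a hedged sketch of a pseudocount over binary features. The factored (naive-Bayes) frequency model and the bonus form (beta over the square root of the count) are illustrative assumptions in the spirit of the abstract; the actual φ-pseudocount is built on the same features the agent uses for value function approximation.

```python
import numpy as np

class PhiPseudocount:
    """Generalised visit-count from binary feature activations (sketch)."""

    def __init__(self, n_features, beta=0.05):
        self.counts = np.zeros(n_features)  # times each feature was active
        self.t = 0                          # total observations so far
        self.beta = beta

    def bonus(self, features):
        """features: binary vector phi(s), ideally the same representation
        used for value function approximation."""
        self.t += 1
        self.counts += features
        # Empirical probability of each feature taking its observed value;
        # states built from rarely seen features get a small joint frequency.
        p = self.counts / self.t
        per_feature = np.where(features > 0, p, 1.0 - p)
        rho = np.prod(per_feature)   # factored joint frequency of the state
        n_hat = self.t * rho         # generalised visit-count
        return self.beta / np.sqrt(n_hat + 1e-2)
```

States whose active features have been seen often yield a large n_hat and a small bonus; novel feature combinations keep the bonus high, rewarding exploration in feature space rather than in the untransformed state space.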
Deep reinforcement learning from human preferences
For sophisticated reinforcement learning (RL) systems to interact usefully
with real-world environments, we need to communicate complex goals to these
systems. In this work, we explore goals defined in terms of (non-expert) human
preferences between pairs of trajectory segments. We show that this approach
can effectively solve complex RL tasks without access to the reward function,
including Atari games and simulated robot locomotion, while providing feedback
on less than one percent of our agent's interactions with the environment. This
reduces the cost of human oversight far enough that it can be practically
applied to state-of-the-art RL systems. To demonstrate the flexibility of our
approach, we show that we can successfully train complex novel behaviors with
about an hour of human time. These behaviors and environments are considerably
more complex than any that have been previously learned from human feedback
- …
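Since the abstract describes learning from preferences between pairs of trajectory segments, here is a minimal sketch of fitting a reward model to such comparisons with a Bradley-Terry model, a standard formulation for pairwise preference learning. The network architecture, observation dimension, segment length, and optimizer settings are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Reward model r_hat(obs) -> scalar; sizes are illustrative assumptions.
reward_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

def preference_loss(seg_a, seg_b, label):
    """seg_a, seg_b: (T, obs_dim) tensors holding two trajectory segments;
    label = 1.0 if the human preferred seg_a, 0.0 otherwise."""
    r_a = reward_net(seg_a).sum()  # total predicted reward of segment a
    r_b = reward_net(seg_b).sum()
    # Bradley-Terry: P(a preferred) = sigmoid(r_a - r_b), trained with
    # binary cross-entropy against the human label.
    return nn.functional.binary_cross_entropy_with_logits(
        (r_a - r_b).unsqueeze(0), torch.tensor([label]))

# One gradient step on a labelled comparison (shapes assumed):
seg_a, seg_b = torch.randn(25, 16), torch.randn(25, 16)
loss = preference_loss(seg_a, seg_b, label=1.0)
opt.zero_grad(); loss.backward(); opt.step()
```

The learned reward model then stands in for the missing environment reward when training the policy, which is how feedback on a small fraction of the agent's interactions can steer its behavior.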