Expected Eligibility Traces
The question of how to determine which states and actions are responsible for
a certain outcome is known as the credit assignment problem and remains a
central research question in reinforcement learning and artificial
intelligence. Eligibility traces enable efficient credit assignment to the
recent sequence of states and actions experienced by the agent, but not to
counterfactual sequences that could also have led to the current state. In this
work, we introduce expected eligibility traces. Expected traces allow, with a
single update, credit to be assigned to states and actions that could have
preceded the current state, even if they did not do so on this occasion. We discuss when
expected traces provide benefits over classic (instantaneous) traces in
temporal-difference learning, and show that sometimes substantial improvements
can be attained. We provide a way to smoothly interpolate between instantaneous
and expected traces by a mechanism similar to bootstrapping, which ensures that
the resulting algorithm is a strict generalisation of TD(λ). Finally,
we discuss possible extensions and connections to related ideas, such as
successor features.
Comment: AAAI, distinguished paper award
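To make the idea concrete, here is a minimal tabular sketch of how an expected trace could replace the instantaneous trace in a TD(λ)-style update. The table Z of learned expected traces, the step sizes, and the function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def expected_trace_update(V, Z, e, s, r, s_next, alpha, beta, gamma, lam):
    """One tabular update using an expected eligibility trace (sketch).

    V: value table (n_states,), e: instantaneous trace (n_states,),
    Z: table of expected traces, Z[s] estimates E[e | current state = s].
    """
    # Instantaneous (accumulating) trace over the realised trajectory.
    e = gamma * lam * e
    e[s] += 1.0

    # Learn the expected trace for the current state towards the
    # instantaneous trace that was actually observed.
    Z[s] += beta * (e - Z[s])

    # TD error, then credit all states that *could* have preceded s
    # via the expected trace, not only the states on this trajectory.
    delta = r + gamma * V[s_next] - V[s]
    V += alpha * delta * Z[s]
    return V, Z, e
```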
Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of neoHebbian Three-Factor Learning Rules
Most elementary behaviors such as moving the arm to grasp an object or
walking into the next room to explore a museum evolve on the time scale of
seconds; in contrast, neuronal action potentials occur on the time scale of a
few milliseconds. Learning rules of the brain must therefore bridge the gap
between these two different time scales.
Modern theories of synaptic plasticity have postulated that the co-activation
of pre- and postsynaptic neurons sets a flag at the synapse, called an
eligibility trace, that leads to a weight change only if an additional factor
is present while the flag is set. This third factor, signaling reward,
punishment, surprise, or novelty, could be implemented by the phasic activity
of neuromodulators or specific neuronal inputs signaling special events. While
the theoretical framework has been developed over the last decades,
experimental evidence in support of eligibility traces on the time scale of
seconds has been collected only during the last few years.
Here we review, in the context of three-factor rules of synaptic plasticity,
four key experiments that support the role of synaptic eligibility traces in
combination with a third factor as a biological implementation of neoHebbian
three-factor learning rules.
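A minimal sketch of the neoHebbian three-factor update described above: coincident pre- and postsynaptic activity sets a slowly decaying eligibility flag, and the weight only changes when a third factor (reward, punishment, surprise, or novelty) arrives while the flag is still set. The exponential decay form, time constants, and names are illustrative assumptions.

```python
import numpy as np

def three_factor_step(w, e, pre, post, third_factor,
                      lr=0.01, tau_e=2.0, dt=0.01):
    """One time step of a neoHebbian three-factor rule (illustrative).

    pre, post: pre-/postsynaptic activity vectors at this step,
    third_factor: scalar neuromodulatory signal (reward, surprise, ...).
    """
    # Hebbian coincidence sets/refreshes the synaptic eligibility trace,
    # which otherwise decays on a time scale of seconds (tau_e).
    e += dt * (-e / tau_e + np.outer(post, pre))

    # Weights change only while the third factor is present
    # and the eligibility trace is still non-zero.
    w += lr * third_factor * e
    return w, e
```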
Truncating Temporal Differences: On the Efficient Implementation of TD(lambda) for Reinforcement Learning
Temporal difference (TD) methods constitute a class of methods for learning
predictions in multi-step prediction problems, parameterized by a recency
factor lambda. Currently the most important application of these methods is to
temporal credit assignment in reinforcement learning. Well known reinforcement
learning algorithms, such as AHC or Q-learning, may be viewed as instances of
TD learning. This paper examines the issues of the efficient and general
implementation of TD(lambda) for arbitrary lambda, for use with reinforcement
learning algorithms optimizing the discounted sum of rewards. The traditional
approach, based on eligibility traces, is argued to suffer from both
inefficiency and lack of generality. The TTD (Truncated Temporal Differences)
procedure is proposed as an alternative that only approximates
TD(lambda), but requires very little computation per action and can be used
with arbitrary function representation methods. The idea from which it is
derived is fairly simple and not new, but probably unexplored so far.
Encouraging experimental results are presented, suggesting that using lambda
> 0 with the TTD procedure allows one to obtain a significant learning
speedup at essentially the same cost as usual TD(0) learning.Comment: See http://www.jair.org/ for any accompanying file
Autonomous Reinforcement of Behavioral Sequences in Neural Dynamics
We introduce a dynamic neural algorithm called Dynamic Neural (DN)
SARSA(\lambda) for learning a behavioral sequence from delayed reward.
DN-SARSA(\lambda) combines Dynamic Field Theory models of behavioral sequence
representation, classical reinforcement learning, and a computational
neuroscience model of working memory, called Item and Order working memory,
which serves as an eligibility trace. DN-SARSA(\lambda) is implemented on both
a simulated and real robot that must learn a specific rewarding sequence of
elementary behaviors from exploration. Results show DN-SARSA(\lambda) performs
at the level of the discrete SARSA(\lambda), validating the feasibility of
general reinforcement learning without compromising neural dynamics.
Comment: Sohrob Kazerounian and Matthew Luciw are joint first authors
Multi-step Reinforcement Learning: A Unifying Algorithm
Unifying seemingly disparate algorithmic ideas to produce better performing
algorithms has been a longstanding goal in reinforcement learning. As a primary
example, TD(λ) elegantly unifies one-step TD prediction with Monte
Carlo methods through the use of eligibility traces and the trace-decay
parameter λ. Currently, there are a multitude of algorithms that can be
used to perform TD control, including Sarsa, Q-learning, and Expected Sarsa.
These methods are often studied in the one-step case, but they can be extended
across multiple time steps to achieve better performance. Each of these
algorithms is seemingly distinct, and no one dominates the others for all
problems. In this paper, we study a new multi-step action-value algorithm
called Q(σ) which unifies and generalizes these existing algorithms,
while subsuming them as special cases. A new parameter, σ, is introduced
to allow the degree of sampling performed by the algorithm at each step during
its backup to be continuously varied, with Sarsa existing at one extreme (full
sampling), and Expected Sarsa existing at the other (pure expectation).
Q(σ) is generally applicable to both on- and off-policy learning, but in
this work we focus on experiments in the on-policy case. Our results show that
an intermediate value of σ, which results in a mixture of the existing
algorithms, performs better than either extreme. The mixture can also be varied
dynamically, which can result in even greater performance.
Comment: Appeared at the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18)
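A sketch of the one-step backup target the abstract describes, assuming a tabular action-value function and a known target policy: σ = 1 recovers the sampled Sarsa target (full sampling), σ = 0 the Expected Sarsa target (pure expectation), and intermediate values mix the two. The function and variable names are illustrative.

```python
import numpy as np

def q_sigma_target(Q, pi, r, s_next, a_next, sigma, gamma=0.99):
    """One-step Q(sigma) backup target (illustrative sketch).

    Q: action-value table, pi[s]: target-policy distribution over actions in s,
    a_next: the action actually sampled in s_next.
    """
    sample_backup = Q[s_next, a_next]                # Sarsa: full sampling
    expected_backup = np.dot(pi[s_next], Q[s_next])  # Expected Sarsa: expectation
    return r + gamma * (sigma * sample_backup + (1.0 - sigma) * expected_backup)
```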