Learning Classical Planning Strategies with Policy Gradient
A common paradigm in classical planning is heuristic forward search. Forward
search planners often rely on simple best-first search which remains fixed
throughout the search process. In this paper, we introduce a novel search
framework capable of alternating between several forward search approaches
while solving a particular planning problem. Selection of the approach is
performed using a trainable stochastic policy, mapping the state of the search
to a probability distribution over the approaches. This enables using policy
gradient to learn search strategies tailored to a specific distribution of
planning problems and a selected performance metric, e.g. the IPC score. We
instantiate the framework by constructing a policy space consisting of five
search approaches and a two-dimensional representation of the planner's state.
Then, we train the system on randomly generated problems from five IPC domains
using three different performance metrics. Our experimental results show that
the learner is able to discover domain-specific search strategies, improving
the planner's performance relative to the baselines of plain best-first search
and a uniform policy.

Comment: Accepted for ICAPS 201
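The approach-selection mechanism described in the abstract can be sketched as a linear-softmax stochastic policy trained with REINFORCE. Everything concrete below is a hypothetical toy: the five approaches, the two-dimensional state, and the reward signal stand in for the paper's actual search approaches, planner-state representation, and IPC-style metric.

```python
import numpy as np

rng = np.random.default_rng(0)

N_APPROACHES = 5   # hypothetical: five candidate forward-search approaches
STATE_DIM = 2      # hypothetical: two-dimensional planner-state features

# Linear-softmax policy: planner state -> distribution over approaches.
theta = np.zeros((N_APPROACHES, STATE_DIM))

def policy(state):
    logits = theta @ state
    logits -= logits.max()            # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def sample_approach(state):
    return rng.choice(N_APPROACHES, p=policy(state))

def reinforce_update(episode, ret, lr=0.1):
    """REINFORCE: raise log-probability of chosen approaches, scaled by return.

    episode: list of (state, approach) pairs from one planning run
    ret: scalar performance score for that run (toy stand-in for an IPC score)
    """
    global theta
    for state, a in episode:
        p = policy(state)
        # d log pi(a|s) / d theta_k = (1[k == a] - p_k) * s
        grad = -np.outer(p, state)
        grad[a] += state
        theta += lr * ret * grad

# Toy training loop: pretend approach 3 is the only one rewarded in state [1, 0].
for _ in range(200):
    s = np.array([1.0, 0.0])
    a = sample_approach(s)
    reinforce_update([(s, a)], ret=1.0 if a == 3 else 0.0)
```

After training, the policy concentrates probability mass on the rewarded approach, which is the mechanism the paper exploits to discover domain-specific search strategies.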
Truncating Temporal Differences: On the Efficient Implementation of TD(lambda) for Reinforcement Learning
Temporal difference (TD) methods constitute a class of methods for learning
predictions in multi-step prediction problems, parameterized by a recency
factor lambda. Currently the most important application of these methods is to
temporal credit assignment in reinforcement learning. Well known reinforcement
learning algorithms, such as AHC or Q-learning, may be viewed as instances of
TD learning. This paper examines the issues of the efficient and general
implementation of TD(lambda) for arbitrary lambda, for use with reinforcement
learning algorithms optimizing the discounted sum of rewards. The traditional
approach, based on eligibility traces, is argued to suffer from both
inefficiency and lack of generality. The TTD (Truncated Temporal Differences)
procedure is proposed as an alternative: it only approximates TD(lambda), but
it requires very little computation per action and can be used with arbitrary
function representation methods. The underlying idea is fairly simple and not
new, but it has apparently remained unexplored so far.
Encouraging experimental results are presented, suggesting that using lambda
> 0 with the TTD procedure yields a significant learning speedup at
essentially the same cost as standard TD(0) learning.

Comment: See http://www.jair.org/ for any accompanying file