Global Optimization for Value Function Approximation
Existing value function approximation methods have been successfully used in
many applications, but they often lack useful a priori error bounds. We propose
a new approximate bilinear programming formulation of value function
approximation, which employs global optimization. The formulation provides
strong a priori guarantees on both robust and expected policy loss by
minimizing specific norms of the Bellman residual. Solving a bilinear program
optimally is NP-hard, but this is unavoidable because the Bellman-residual
minimization itself is NP-hard. We describe and analyze both optimal and
approximate algorithms for solving bilinear programs. The analysis shows that
the approximate algorithm offers a convergent generalization of approximate policy
iteration. We also briefly analyze the behavior of bilinear programming
algorithms under incomplete samples. Finally, we demonstrate that the proposed
approach can consistently minimize the Bellman residual on simple benchmark
problems.
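As a schematic of the robust objective the abstract describes (assuming the common linear architecture $v = \Phi w$ over a feature matrix $\Phi$; the notation here is illustrative, not necessarily the paper's), the formulation minimizes the $\ell_\infty$ norm of the Bellman residual:

\[
\min_{w} \; \left\| \Phi w - T(\Phi w) \right\|_{\infty},
\]

where $T$ is the Bellman optimality operator. Writing $T$ out introduces a maximization over policies, so the exact problem couples policy variables with value variables in products; this coupling is what makes the program bilinear, and the optimal value of the program directly yields the a priori bound on policy loss.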
A Hierarchical Reinforcement Learning Method for Persistent Time-Sensitive Tasks
Reinforcement learning has been applied to many interesting problems such as
the famous TD-Gammon and inverted helicopter flight. However, little effort
has been put into developing methods to learn policies for complex persistent
tasks and tasks that are time-sensitive. In this paper, we take a step towards
solving this problem by using signal temporal logic (STL) as the task
specification and taking advantage of the temporal abstraction that the
options framework provides. We show via simulation that a relatively
easy-to-implement algorithm combining STL and options can learn a
satisfactory policy with a small number of training cases.
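As a rough illustration of how STL and options can be combined, a minimal sketch under assumed names (the environment interface, Option.run, and the robustness function are hypothetical, not the paper's implementation) is Q-learning at the level of options, with the STL robustness of the trace so far as the reward signal:

    import random

    def q_learning_over_options(env, options, robustness, episodes=500,
                                alpha=0.1, gamma=0.99, eps=0.1):
        """Q-learning over temporally extended options, STL-based reward."""
        Q = {}  # maps (state, option index) -> estimated value
        for _ in range(episodes):
            state, trace = env.reset(), []
            while not env.done():
                # epsilon-greedy selection among the options
                if random.random() < eps:
                    o = random.randrange(len(options))
                else:
                    o = max(range(len(options)),
                            key=lambda i: Q.get((state, i), 0.0))
                # run the option until it terminates; it returns the next
                # state and the trajectory segment it generated
                next_state, segment = options[o].run(env, state)
                trace.extend(segment)
                reward = robustness(trace)  # STL robustness of the trace
                best_next = max((Q.get((next_state, i), 0.0)
                                 for i in range(len(options))), default=0.0)
                q = Q.get((state, o), 0.0)
                Q[(state, o)] = q + alpha * (reward + gamma * best_next - q)
                state = next_state
        return Q

Using quantitative robustness rather than Boolean satisfaction gives a graded reward, so partial progress toward the time-sensitive specification is still reinforced.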
Distral: Robust Multitask Reinforcement Learning
Most deep reinforcement learning algorithms are data inefficient in complex
and rich environments, limiting their applicability to many scenarios. One
direction for improving data efficiency is multitask learning with shared
neural network parameters, where efficiency may be improved through transfer
across related tasks. In practice, however, this is not usually observed,
because gradients from different tasks can interfere negatively, making
learning unstable and sometimes even less data efficient. Another issue is the
different reward schemes between tasks, which can easily lead to one task
dominating the learning of a shared model. We propose a new approach for joint
training of multiple tasks, which we refer to as Distral (Distill & transfer
learning). Instead of sharing parameters between the different workers, we
propose to share a "distilled" policy that captures common behaviour across
tasks. Each worker is trained to solve its own task while constrained to stay
close to the shared policy, while the shared policy is trained by distillation
to be the centroid of all task policies. Both aspects of the learning process
are derived by optimizing a joint objective function. We show that our approach
supports efficient transfer on complex 3D environments, outperforming several
related methods. Moreover, the proposed learning process is more robust and
more stable---attributes that are critical in deep reinforcement learning.
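Schematically, the two coupled constraints can be folded into a single regularized objective of the following shape (the coefficient names $c_{\mathrm{KL}}$ and $c_{\mathrm{Ent}}$ are illustrative, and the exact per-timestep weighting in the paper may differ):

\[
\max_{\pi_0,\,\pi_1,\dots,\pi_n} \; \sum_{i=1}^{n} \mathbb{E}_{\pi_i}\!\left[\sum_{t} \gamma^t r_i(s_t, a_t)\right]
\;-\; c_{\mathrm{KL}} \sum_{i=1}^{n} \mathbb{E}\!\left[\mathrm{KL}\!\left(\pi_i(\cdot \mid s_t) \,\|\, \pi_0(\cdot \mid s_t)\right)\right]
\;+\; c_{\mathrm{Ent}} \sum_{i=1}^{n} \mathbb{E}\!\left[\mathcal{H}\!\left(\pi_i(\cdot \mid s_t)\right)\right].
\]

Optimized with respect to each task policy $\pi_i$, the KL term keeps the workers close to the shared policy; optimized with respect to $\pi_0$, the same term pulls the shared policy toward the (KL-)centroid of the task policies, which is exactly distillation.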
Learning Classical Planning Strategies with Policy Gradient
A common paradigm in classical planning is heuristic forward search. Forward
search planners often rely on simple best-first search which remains fixed
throughout the search process. In this paper, we introduce a novel search
framework capable of alternating between several forward search approaches
while solving a particular planning problem. Selection of the approach is
performed using a trainable stochastic policy, mapping the state of the search
to a probability distribution over the approaches. This enables using policy
gradient to learn search strategies tailored to a specific distribution of
planning problems and a selected performance metric, e.g. the IPC score. We
instantiate the framework by constructing a policy space consisting of five
search approaches and a two-dimensional representation of the planner's state.
Then, we train the system on randomly generated problems from five IPC domains
using three different performance metrics. Our experimental results show that
the learner is able to discover domain-specific search strategies, improving
the planner's performance relative to the baselines of plain best-first search
and a uniform policy.
Comment: Accepted for ICAPS 201
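As an illustration of the trainable selection policy (a minimal REINFORCE sketch under assumed names; the paper's actual features, search approaches, and update schedule are not reproduced here), a linear softmax policy over five search approaches conditioned on a two-dimensional planner state could look like this:

    import numpy as np

    def softmax(z):
        z = z - z.max()  # shift for numerical stability
        e = np.exp(z)
        return e / e.sum()

    class SearchPolicy:
        """Linear softmax policy: 2-D planner state -> 5 search approaches."""

        def __init__(self, n_features=2, n_approaches=5, lr=0.01):
            self.W = np.zeros((n_approaches, n_features))
            self.lr = lr

        def act(self, state):
            """Sample a search approach for the current planner state."""
            probs = softmax(self.W @ state)
            action = np.random.choice(len(probs), p=probs)
            return action, probs

        def reinforce_update(self, episode, score):
            """REINFORCE: ascend grad log pi(a|s) weighted by the episode's
            performance metric (e.g. its IPC score)."""
            for state, action, probs in episode:
                grad = -np.outer(probs, state)  # softmax part of d log pi/dW
                grad[action] += state           # plus the chosen row's term
                self.W += self.lr * score * grad

Each planning run yields one episode of (state, chosen approach) pairs plus a scalar score; repeated updates over a distribution of problems shift probability mass toward the approaches that pay off in each search regime.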