2,488 research outputs found
DAC: The Double Actor-Critic Architecture for Learning Options
We reformulate the option framework as two parallel augmented MDPs. Under
this novel formulation, all policy optimization algorithms can be used off the
shelf to learn intra-option policies, option termination conditions, and a
master policy over options. We apply an actor-critic algorithm on each
augmented MDP, yielding the Double Actor-Critic (DAC) architecture.
Furthermore, we show that, when state-value functions are used as critics, one
critic can be expressed in terms of the other, and hence only one critic is
necessary. We conduct an empirical study on challenging robot simulation tasks.
In a transfer learning setting, DAC outperforms both its hierarchy-free
counterpart and previous gradient-based option learning algorithms.
Comment: NeurIPS 201
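The two-augmented-MDP view described above can be sketched as a simple rollout loop: a master policy acts in a high-level MDP whose actions are options, while each option's intra-option policy acts in a low-level MDP until its termination condition fires. This is a minimal illustrative sketch, not the paper's implementation; the names `Option`, `run_episode`, and the toy environment are assumptions.

```python
import random

# Hedged sketch of the option framework as two coupled decision processes.
# High level: choose an option o given state s (the "master policy").
# Low level: the chosen option's intra-option policy emits primitive actions
# until its termination condition beta(s) fires. Names are illustrative.

class Option:
    def __init__(self, policy, termination):
        self.policy = policy            # intra-option policy: state -> action
        self.termination = termination  # beta: state -> termination probability

def run_episode(env_step, init_state, options, master_policy, horizon=10):
    """Roll out the two-level process: the master policy picks options;
    each option acts until its termination condition fires."""
    s, o = init_state, None
    trajectory = []
    for _ in range(horizon):
        if o is None or random.random() < options[o].termination(s):
            o = master_policy(s)        # high-level action: pick an option
        a = options[o].policy(s)        # low-level action: primitive action
        s_next, r = env_step(s, a)
        trajectory.append((s, o, a, r))
        s = s_next
    return trajectory
```

In the DAC view, an actor-critic learner would be attached to each of the two levels; the sketch above only shows the shared rollout structure they would both consume.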
When Waiting is not an Option: Learning Options with a Deliberation Cost
Recent work has shown that temporally extended actions (options) can be
learned fully end-to-end as opposed to being specified in advance. While the
problem of "how" to learn options is increasingly well understood, the question
of "what" good options should be has remained elusive. We formulate our answer
to what "good" options should be in the bounded rationality framework (Simon,
1957) through the notion of deliberation cost. We then derive practical
gradient-based learning algorithms to implement this objective. Our results in
the Arcade Learning Environment (ALE) show increased performance and
interpretability.
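The deliberation-cost idea can be illustrated with a small sketch: when deciding whether an option should terminate, a per-switch cost is added to the advantage of continuing, which discourages frequent termination and yields longer options. This is a hedged illustration under assumed names (`termination_target`, `eta`), not the paper's exact objective.

```python
# Hedged sketch: a deliberation cost eta penalizes switching options.
# The quantity below compares the value of continuing the current option
# against switching to the best option; adding eta makes termination
# less attractive, so learned options become temporally longer.

def termination_target(q_values, option, eta):
    """Advantage of continuing `option` versus switching, plus the
    deliberation cost.
    q_values: dict mapping option -> Q(s, option); eta: per-switch cost."""
    v = max(q_values.values())        # value of switching to the best option
    advantage = q_values[option] - v  # <= 0 unless `option` is already best
    return advantage + eta            # eta tilts the decision toward continuing
```

For example, with two options of value 1.0 and 2.0, continuing the worse option scores its negative advantage plus `eta`, while continuing the best option scores exactly `eta`, so terminations only pay off when the gap to a better option exceeds the deliberation cost.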