Model-free trajectory optimization for reinforcement learning
Many recent trajectory optimization algorithms alternate between a local approximation of the dynamics and a conservative policy update.
However, linearly approximating the dynamics in order to derive the new policy can bias the update and prevent convergence to the optimal policy.
In this article, we propose a new model-free algorithm that backpropagates a local, quadratic and time-dependent Q-Function, allowing the policy
update to be derived in closed form. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system
dynamics, and demonstrates improved performance in comparison to related trajectory optimization algorithms that linearize the dynamics.
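The closed-form nature of such a KL-constrained Gaussian update can be illustrated with a small sketch. For a Gaussian policy and a local quadratic model Q(a) ~ -0.5 a^T A a + a^T b (one time step shown), the exponentiated update pi_new proportional to pi_old * exp(Q/eta) remains Gaussian, and the KL constraint can be met exactly by tuning the single dual variable eta. This is only an illustrative reading of the abstract; the quadratic coefficients A and b, the bisection scheme and the update form itself are assumptions, not the authors' implementation.

```python
import numpy as np


def kl_gauss(mu_new, cov_new, mu_old, cov_old):
    """KL(N_new || N_old) between two multivariate Gaussians."""
    k = mu_old.shape[0]
    prec_old = np.linalg.inv(cov_old)
    diff = mu_old - mu_new
    return 0.5 * (np.trace(prec_old @ cov_new)
                  + diff @ prec_old @ diff
                  - k
                  + np.log(np.linalg.det(cov_old) / np.linalg.det(cov_new)))


def kl_constrained_update(mu_old, cov_old, A, b, epsilon, eta_max=1e6):
    """Closed-form Gaussian update for pi_new ∝ pi_old * exp(Q/eta) with a
    local quadratic model Q(a) = -0.5 a^T A a + a^T b (A assumed symmetric
    positive semi-definite); eta is found by bisection so that
    KL(pi_new || pi_old) <= epsilon holds with near equality."""
    prec_old = np.linalg.inv(cov_old)

    def solve(eta):
        prec_new = prec_old + A / eta
        cov_new = np.linalg.inv(prec_new)
        mu_new = cov_new @ (prec_old @ mu_old + b / eta)
        return mu_new, cov_new

    lo, hi = 1e-8, eta_max
    for _ in range(100):                      # bisection on the dual variable
        eta = 0.5 * (lo + hi)
        mu_new, cov_new = solve(eta)
        if kl_gauss(mu_new, cov_new, mu_old, cov_old) > epsilon:
            lo = eta                          # step too aggressive, raise eta
        else:
            hi = eta
    return solve(hi)
```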
Better Exploration with Optimistic Actor-Critic
Actor-critic methods, a type of model-free Reinforcement Learning, have been
successfully applied to challenging tasks in continuous control, often
achieving state-of-the-art performance. However, wide-scale adoption of these
methods in real-world domains is made difficult by their poor sample
efficiency. We address this problem both theoretically and empirically. On the
theoretical side, we identify two phenomena preventing efficient exploration in
existing state-of-the-art algorithms such as Soft Actor-Critic. First,
combining a greedy actor update with a pessimistic estimate of the critic leads
to the avoidance of actions that the agent does not know about, a phenomenon we
call pessimistic underexploration. Second, current algorithms are directionally
uninformed, sampling actions with equal probability in opposite directions from
the current mean. This is wasteful, since we typically need actions taken along
certain directions much more than others. To address both of these phenomena,
we introduce a new algorithm, Optimistic Actor-Critic (OAC), which approximates a
lower and upper confidence bound on the state-action value function. This
allows us to apply the principle of optimism in the face of uncertainty to
perform directed exploration using the upper bound while still using the lower
bound to avoid overestimation. We evaluate OAC in several challenging
continuous control tasks, achieving state-of-the-art sample efficiency.
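As a rough illustration of the upper-bound exploration idea, the sketch below estimates an optimistic upper confidence bound from two critics and shifts the exploration mean along its gradient, keeping the shifted Gaussian within a small KL ball around the target policy. The critic callables, the bound coefficient beta_ub and the KL radius delta are assumptions made for illustration, not the paper's code; a pessimistic lower bound (for instance the mean of the two critics minus a multiple of their spread, or their minimum) would be used for the critic targets to avoid overestimation.

```python
import torch


def oac_exploration_action(critic1, critic2, state, mean, std,
                           beta_ub=4.66, delta=0.1):
    """Sample an exploration action by shifting the policy mean along the
    gradient of an approximate upper confidence bound on Q (sketch only;
    critic1/critic2, beta_ub and delta are assumed, not the paper's code)."""
    a = mean.clone().requires_grad_(True)
    q1, q2 = critic1(state, a), critic2(state, a)
    q_mean = 0.5 * (q1 + q2)                  # mean of the two critic estimates
    q_spread = 0.5 * (q1 - q2).abs()          # crude epistemic uncertainty proxy
    q_ub = q_mean + beta_ub * q_spread        # optimistic upper bound
    grad = torch.autograd.grad(q_ub.sum(), a)[0]
    # Shift the exploration mean along the UB gradient; the step is scaled so
    # that the KL divergence between the shifted Gaussian and the target
    # Gaussian stays at roughly delta.
    var = std.pow(2)
    sigma_norm = torch.sqrt((grad.pow(2) * var).sum(-1, keepdim=True)) + 1e-8
    mean_explore = mean + (2.0 * delta) ** 0.5 * var * grad / sigma_norm
    return torch.normal(mean_explore, std).detach()
```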
Model-Free Trajectory-based Policy Optimization with Monotonic Improvement
Many of the recent trajectory optimization algorithms alternate between linear approximation
of the system dynamics around the mean trajectory and conservative policy update.
One way of constraining the policy change is by bounding the Kullback-Leibler (KL)
divergence between successive policies. These approaches have already demonstrated great experimental
success on challenging problems such as end-to-end control of physical systems.
However, these approaches lack any improvement guarantee as the linear approximation of
the system dynamics can introduce a bias in the policy update and prevent convergence
to the optimal policy. In this article, we propose a new model-free trajectory-based policy
optimization algorithm with guaranteed monotonic improvement. The algorithm backpropagates
a local, quadratic and time-dependent Q-Function learned from trajectory data
instead of a model of the system dynamics. Our policy update ensures exact KL-constraint
satisfaction without simplifying assumptions on the system dynamics. We experimentally
demonstrate on highly non-linear control tasks the improvement in performance of our algorithm
in comparison to approaches linearizing the system dynamics. To establish the
monotonic improvement of our algorithm, we additionally conduct a theoretical analysis of
our policy update scheme and derive a lower bound on the change in policy return between
successive iterations.
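As a rough sketch of what learning a local, quadratic Q-Function from trajectory data could look like, the snippet below fits, for a single time step, a model Q(s, a) ~ 0.5 z^T H z + g^T z + c with z = [s; a] by ordinary least squares on sampled states, actions and return estimates. The feature construction, the use of plain least squares and the idea of repeating the fit per time step are assumptions made for illustration, not the authors' estimator.

```python
import numpy as np


def fit_quadratic_q(states, actions, q_targets):
    """Least-squares fit of a local quadratic model for one time step:
    Q(s, a) ~= 0.5 * z^T H z + g^T z + c with z = [s; a].
    states: (N, ds), actions: (N, da), q_targets: (N,) return estimates."""
    z = np.hstack([states, actions])              # (N, ds + da)
    n, d = z.shape
    # Monomial features: quadratic terms (upper triangle), linear terms, bias.
    iu = np.triu_indices(d)
    quad = np.einsum('ni,nj->nij', z, z)[:, iu[0], iu[1]]
    phi = np.hstack([quad, z, np.ones((n, 1))])
    w, *_ = np.linalg.lstsq(phi, q_targets, rcond=None)
    # Unpack the symmetric curvature H, the gradient g and the offset c.
    n_quad = len(iu[0])
    H = np.zeros((d, d))
    H[iu] = w[:n_quad]
    H = H + H.T           # symmetrize; doubling the diagonal matches the 0.5 factor
    g = w[n_quad:n_quad + d]
    c = w[-1]
    return H, g, c
```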