Model-free trajectory optimization for reinforcement learning
Many recent Trajectory Optimization algorithms alternate between a local approximation of the dynamics and a conservative policy update. However, linearly approximating the dynamics in order to derive the new policy can bias the update and prevent convergence to the optimal policy. In this article, we propose a new model-free algorithm that backpropagates a local, quadratic, time-dependent Q-Function, allowing the policy update to be derived in closed form. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system dynamics, and demonstrates improved performance in comparison to related Trajectory Optimization algorithms that linearize the dynamics.
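The abstract does not spell out the functional forms; as a rough illustration only (all symbols below are our own notation, not the paper's), a local quadratic, time-dependent Q-function together with a time-varying linear-Gaussian policy and a KL-constrained update could be written as

Q_t(s,a) \approx \tfrac{1}{2}\,[s;a]^\top H_t\,[s;a] + g_t^\top [s;a] + c_t,
\qquad
\pi_t(a \mid s) = \mathcal{N}\big(a \mid K_t s + k_t,\ \Sigma_t\big),

\pi_t^{\text{new}} = \arg\max_{\pi_t}\ \mathbb{E}_{\pi_t}\!\big[Q_t(s,a)\big]
\quad \text{s.t.}\quad
\mathbb{E}_s\!\big[\mathrm{KL}\big(\pi_t(\cdot \mid s)\,\|\,\pi_t^{\text{old}}(\cdot \mid s)\big)\big] \le \epsilon.

For a Gaussian policy and a quadratic Q_t, this constrained problem admits a closed-form solution (e.g., via its Lagrangian dual), which is the kind of closed-form update the abstract refers to.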
Optimal control and inverse optimal control by distribution matching
Optimal control is a powerful approach to achieve optimal behavior. However, it typically requires a manual specification of a cost function which often contains several objectives, such as reaching goal positions at different time steps or energy efficiency. Manually trading off these objectives is often difficult and requires a high engineering effort. In this paper, we present a new approach to specify optimal behavior. We directly specify the desired behavior by a distribution over future states or features of the states. For example, the experimenter could choose to reach certain mean positions with given accuracy/variance at specified time steps. Our approach also unifies optimal control and inverse optimal control in one framework. Given a desired state distribution, we estimate a cost function such that the optimal controller matches the desired distribution. If the desired distribution is estimated from expert demonstrations, our approach performs inverse optimal control. We evaluate our approach on several optimal and inverse optimal control tasks on non-linear systems using incremental linearizations similar to differential dynamic programming approaches.
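Purely as an illustrative formalization (our notation; the paper's exact objective may differ), specifying desired Gaussian state distributions at selected time steps and matching them could look like

\min_{\pi}\ \sum_{t \in \mathcal{T}} \mathrm{KL}\Big(p_t^{\pi}(s)\ \Big\|\ \mathcal{N}\big(s \mid \mu_t^{*}, \Sigma_t^{*}\big)\Big),

where p_t^\pi denotes the state distribution induced by the controller \pi at time t and (\mu_t^{*}, \Sigma_t^{*}) encode the desired mean position and accuracy/variance. In the inverse setting, the target distributions are instead estimated from expert demonstrations, and a cost function is recovered such that its optimal controller matches them.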
Better Exploration with Optimistic Actor-Critic
Actor-critic methods, a type of model-free Reinforcement Learning, have been successfully applied to challenging tasks in continuous control, often achieving state-of-the-art performance. However, wide-scale adoption of these methods in real-world domains is made difficult by their poor sample efficiency. We address this problem both theoretically and empirically. On the theoretical side, we identify two phenomena preventing efficient exploration in existing state-of-the-art algorithms such as Soft Actor Critic. First, combining a greedy actor update with a pessimistic estimate of the critic leads to the avoidance of actions that the agent does not know about, a phenomenon we call pessimistic underexploration. Second, current algorithms are directionally uninformed, sampling actions with equal probability in opposite directions from the current mean. This is wasteful, since we typically need actions taken along certain directions much more than others. To address both of these phenomena, we introduce a new algorithm, Optimistic Actor Critic (OAC), which approximates a lower and upper confidence bound on the state-action value function. This allows us to apply the principle of optimism in the face of uncertainty to perform directed exploration using the upper bound while still using the lower bound to avoid overestimation. We evaluate OAC in several challenging continuous control tasks, achieving state-of-the-art sample efficiency.
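A minimal sketch of the bound construction described above, assuming the common two-critic setup (the function names, the beta coefficients, and the KL radius delta are our own assumptions; the full algorithm also trains the critics and actor, which is omitted here):

import numpy as np

def q_bounds(q1, q2, beta_lb=1.0, beta_ub=2.0):
    """Epistemic mean/spread from two critic estimates of Q(s, a)."""
    q_mean = 0.5 * (q1 + q2)
    q_std = 0.5 * np.abs(q1 - q2)
    q_lb = q_mean - beta_lb * q_std   # pessimistic bound: used for learning targets
    q_ub = q_mean + beta_ub * q_std   # optimistic bound: used only to direct exploration
    return q_lb, q_ub

def exploration_mean(mu_target, grad_q_ub, sigma, delta=0.1):
    """Shift the exploration mean along the upper-bound gradient, scaled so the
    Gaussian stays within an (approximate) KL ball of radius delta around the target policy."""
    scaled = (sigma**2) * grad_q_ub                     # covariance-scaled ascent direction
    norm = np.sqrt(grad_q_ub @ scaled) + 1e-8
    return mu_target + np.sqrt(2.0 * delta) * scaled / norm

# Example: per-action bounds from two critic outputs
q_lb, q_ub = q_bounds(np.array([1.0, 0.2]), np.array([1.4, -0.1]))

Because the exploration Gaussian is shifted in one direction (along the upper-bound gradient) rather than widened symmetrically, this kind of update addresses the "directionally uninformed" sampling the abstract criticizes.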
Model-Free Trajectory-based Policy Optimization with Monotonic Improvement
Many of the recent trajectory optimization algorithms alternate between linear approximation of the system dynamics around the mean trajectory and conservative policy update. One way of constraining the policy change is by bounding the Kullback-Leibler (KL) divergence between successive policies. These approaches already demonstrated great experimental success in challenging problems such as end-to-end control of physical systems. However, these approaches lack any improvement guarantee, as the linear approximation of the system dynamics can introduce a bias in the policy update and prevent convergence to the optimal policy. In this article, we propose a new model-free trajectory-based policy optimization algorithm with guaranteed monotonic improvement. The algorithm backpropagates a local, quadratic and time-dependent Q-Function learned from trajectory data instead of a model of the system dynamics. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system dynamics. We experimentally demonstrate on highly non-linear control tasks the improvement in performance of our algorithm in comparison to approaches linearizing the system dynamics. In order to show the monotonic improvement of our algorithm, we additionally conduct a theoretical analysis of our policy update scheme to derive a lower bound of the change in policy return between successive iterations.
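The core model-free ingredient is fitting a local quadratic Q-function directly from trajectory data rather than from a dynamics model. A self-contained, illustrative sketch of such a fit (the feature construction and regression are our own assumptions, not the paper's exact procedure):

import numpy as np

def fit_local_quadratic_q(states, actions, returns_to_go):
    """Least-squares fit of a quadratic Q_t(s, a) at one time step from rollout data.
    Features: constant, linear terms in [s; a], and the upper triangle of [s; a][s; a]^T."""
    x = np.concatenate([states, actions], axis=1)          # (N, ds + da)
    iu = np.triu_indices(x.shape[1])
    quad = np.array([np.outer(xi, xi)[iu] for xi in x])    # quadratic features
    feats = np.hstack([np.ones((len(x), 1)), x, quad])
    coeffs, *_ = np.linalg.lstsq(feats, returns_to_go, rcond=None)
    return coeffs                                           # defines the local quadratic Q

# Tiny synthetic check: 2-D state, 1-D action, noisy quadratic return-to-go.
rng = np.random.default_rng(0)
S, A = rng.normal(size=(200, 2)), rng.normal(size=(200, 1))
R = -np.sum(S**2, axis=1) - 0.5 * A[:, 0]**2 + 0.01 * rng.normal(size=200)
print(fit_local_quadratic_q(S, A, R)[:4])

In the method the abstract describes, such a fit would be repeated per time step, backpropagated through the horizon, and combined with the closed-form KL-constrained policy update sketched for the related abstract above.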
Learning Replanning Policies with Direct Policy Search
Direct policy search has been successful in learning challenging real-world robotic motor skills by learning open-loop movement primitives with high sample efficiency. These primitives can be generalized to different contexts with varying initial configurations and goals. However, current state-of-the-art contextual policy search algorithms cannot adapt to changing, noisy context measurements, even though these are common characteristics of real-world robotic tasks. Planning a trajectory ahead based on an inaccurate context that may change during the motion often results in poor accuracy, especially with highly dynamical tasks. To adapt to updated contexts, it is sensible to learn trajectory replanning strategies. We propose a framework to learn trajectory replanning policies via contextual policy search and demonstrate that they are safe for the robot, that they can be learned efficiently, and that they outperform non-replanning policies for problems with partially observable or perturbed contexts.
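As a toy illustration of the replanning setup (entirely hypothetical: a linear-Gaussian upper-level policy over movement-primitive parameters, conditioned on the context and re-queried whenever an updated context measurement arrives; not the paper's actual learner):

import numpy as np

class ContextualPolicy:
    """Upper-level policy pi(w | c): Gaussian over primitive parameters w, linear in context c."""
    def __init__(self, dim_w, dim_c, rng):
        self.A = 0.1 * rng.normal(size=(dim_w, dim_c))   # context-to-parameter mapping
        self.b = np.zeros(dim_w)
        self.std = 0.05
        self.rng = rng

    def sample(self, context):
        mean = self.A @ context + self.b
        return mean + self.std * self.rng.normal(size=mean.shape)

rng = np.random.default_rng(1)
policy = ContextualPolicy(dim_w=4, dim_c=2, rng=rng)
context = np.array([0.3, -0.1])     # initial (possibly noisy) context measurement
w = policy.sample(context)          # plan: primitive parameters for the full motion
context = np.array([0.35, -0.05])   # updated measurement arrives mid-execution
w = policy.sample(context)          # replan: re-query the policy with the new context
print(w)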