1,533 research outputs found
Deep Bayesian Quadrature Policy Optimization
We study the problem of obtaining accurate policy gradient estimates using a
finite number of samples. Monte-Carlo methods have been the default choice for
policy gradient estimation, despite suffering from high variance in the
gradient estimates. On the other hand, more sample-efficient alternatives such
as Bayesian quadrature methods have received little attention due to their high
computational complexity. In this work, we propose deep Bayesian quadrature
policy gradient (DBQPG), a computationally efficient high-dimensional
generalization of Bayesian quadrature, for policy gradient estimation. We show
that DBQPG can substitute Monte-Carlo estimation in policy gradient methods,
and demonstrate its effectiveness on a set of continuous control benchmarks. In
comparison to Monte-Carlo estimation, DBQPG provides (i) more accurate gradient
estimates with a significantly lower variance, (ii) a consistent improvement in
the sample complexity and average return for several deep policy gradient
algorithms, and, (iii) the uncertainty in gradient estimation that can be
incorporated to further improve the performance.Comment: Conference paper: AAAI-21. Code available at
https://github.com/Akella17/Deep-Bayesian-Quadrature-Policy-Optimizatio
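To make the contrast concrete, here is a minimal 1-D sketch (not the paper's DBQPG code; the toy integrand and helper names are hypothetical) of the two estimators the abstract compares: the Monte-Carlo sample average of an expectation E_{x~N(0,1)}[f(x)], and a Bayesian-quadrature estimate that fits a GP with an RBF kernel to the sampled integrand and integrates the GP in closed form, which also yields the posterior variance mentioned in point (iii).

```python
import numpy as np

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel on scalar inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def kernel_mean(x, ell=0.5):
    # Closed form for z_i = E_{u~N(0,1)}[k(u, x_i)] with the RBF kernel.
    return ell / np.sqrt(ell ** 2 + 1.0) * np.exp(-0.5 * x ** 2 / (ell ** 2 + 1.0))

def integrand(x):
    # Hypothetical stand-in integrand; in DBQPG this role is played by the
    # action-value-weighted score function, modeled with a GP.
    return np.sin(3.0 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
xs = rng.standard_normal(25)          # samples from the base measure N(0, 1)
ys = integrand(xs)

mc = ys.mean()                        # Monte-Carlo estimate: sample average

K = rbf(xs, xs) + 1e-6 * np.eye(len(xs))
z = kernel_mean(xs)
bq_mean = z @ np.linalg.solve(K, ys)  # BQ posterior mean of the integral
# BQ posterior variance: E[k(u, u')] - z^T K^{-1} z, where the double
# expectation has the closed form ell / sqrt(ell^2 + 2) for N(0, 1).
prior_double = 0.5 / np.sqrt(0.5 ** 2 + 2.0)
bq_var = prior_double - z @ np.linalg.solve(K, z)

print(f"MC estimate: {mc:.4f}")
print(f"BQ estimate: {bq_mean:.4f} (posterior std {np.sqrt(max(bq_var, 0)):.4f})")
```

The BQ estimate reweights the same samples by how informative the GP considers them, rather than weighting them uniformly, which is the source of the variance reduction the abstract reports.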
Fingerprint Policy Optimisation for Robust Reinforcement Learning
Policy gradient methods ignore the potential value of adjusting environment
variables: unobservable state features that are randomly determined by the
environment in a physical setting, but are controllable in a simulator. This
can lead to slow learning, or convergence to suboptimal policies, if the
environment variable has a large impact on the transition dynamics. In this
paper, we present fingerprint policy optimisation (FPO), which finds a policy
that is optimal in expectation across the distribution of environment
variables. The central idea is to use Bayesian optimisation (BO) to actively
select the distribution of the environment variable that maximises the
improvement generated by each iteration of the policy gradient method. To make
this BO practical, we contribute two easy-to-compute low-dimensional
fingerprints of the current policy. Our experiments show that FPO can
efficiently learn policies that are robust to significant rare events, which
are unlikely to be observable under random sampling, but are key to learning
good policies.

Comment: ICML 201
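As a concrete illustration of the outer loop described above, here is a toy sketch (hypothetical; not the FPO implementation, and it omits the paper's policy fingerprints, which would also enter the GP input) in which a GP with a GP-UCB acquisition rule selects a one-dimensional environment-variable setting to maximise the improvement produced by each policy-gradient iteration; `policy_improvement` is a stand-in for that inner update.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ell=0.3):
    # Squared-exponential kernel on scalar inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def policy_improvement(env_param):
    # Hypothetical stand-in: in FPO this would run one policy-gradient
    # iteration with environment variables drawn from a distribution set
    # by env_param, and return the resulting change in average return.
    return np.exp(-8.0 * (env_param - 0.7) ** 2) + 0.05 * rng.standard_normal()

candidates = np.linspace(0.0, 1.0, 200)       # possible environment settings
X, y = [0.5], [policy_improvement(0.5)]       # one initial observation

for _ in range(15):
    Xa, ya = np.array(X), np.array(y)
    K = rbf(Xa, Xa) + 1e-4 * np.eye(len(Xa))  # noisy GP over improvements
    ks = rbf(candidates, Xa)
    mu = ks @ np.linalg.solve(K, ya)          # GP posterior mean
    var = 1.0 - np.sum(ks * np.linalg.solve(K, ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))
    x_next = candidates[np.argmax(ucb)]       # most promising setting next
    X.append(float(x_next))
    y.append(policy_improvement(x_next))

best = X[int(np.argmax(y))]
print(f"environment setting with largest observed improvement: {best:.3f}")
```

The acquisition rule trades off exploiting settings that already produced large improvements against exploring uncertain ones, which is how the method can steer sampling toward significant rare events that uniform random sampling would miss.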