Generalized Off-Policy Actor-Critic
We propose a new objective, the counterfactual objective, unifying existing
objectives for off-policy policy gradient algorithms in the continuing
reinforcement learning (RL) setting. Compared to the commonly used excursion
objective, which can be misleading about the performance of the target policy
when deployed, our new objective better predicts such performance. We prove the
Generalized Off-Policy Policy Gradient Theorem to compute the policy gradient
of the counterfactual objective and use an emphatic approach to get an unbiased
sample from this policy gradient, yielding the Generalized Off-Policy
Actor-Critic (Geoff-PAC) algorithm. We demonstrate the merits of Geoff-PAC over
existing algorithms in Mujoco robot simulation tasks, the first empirical
success of emphatic algorithms in prevailing deep RL benchmarks.
Comment: NeurIPS 2019
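The abstract does not spell out the update rule, so the following is only a minimal sketch of the emphatic weighting idea that Geoff-PAC builds on (a standard ETD-style emphasis recursion, not the paper's generalized variant); the function and variable names are hypothetical.

```python
import numpy as np

def emphatic_weights(rhos, interests, gamma=0.99, lam=0.9):
    """Emphasis M_t along one off-policy trajectory (ETD-style sketch).

    rhos[t]      : importance ratio pi(a_t | s_t) / mu(a_t | s_t)
    interests[t] : interest i(s_t) assigned to state s_t (often all ones)
    """
    F = 0.0
    M = np.zeros(len(rhos))
    for t in range(len(rhos)):
        # followon trace: discounted, importance-weighted credit flowing into s_t
        F = gamma * (rhos[t - 1] if t > 0 else 0.0) * F + interests[t]
        # emphasis mixes the immediate interest with the followon trace
        M[t] = lam * interests[t] + (1.0 - lam) * F
    return M

# Each per-step actor update would then be scaled by rho_t * M_t, e.g.
# grad_t ∝ rhos[t] * M[t] * advantage[t] * grad_log_pi[t]
rhos = np.array([1.2, 0.8, 1.0, 0.9])
M = emphatic_weights(rhos, np.ones_like(rhos))
print(M)
```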
Fingerprint Policy Optimisation for Robust Reinforcement Learning
Policy gradient methods ignore the potential value of adjusting environment
variables: unobservable state features that are randomly determined by the
environment in a physical setting, but are controllable in a simulator. This
can lead to slow learning, or convergence to suboptimal policies, if the
environment variable has a large impact on the transition dynamics. In this
paper, we present fingerprint policy optimisation (FPO), which finds a policy
that is optimal in expectation across the distribution of environment
variables. The central idea is to use Bayesian optimisation (BO) to actively
select the distribution of the environment variable that maximises the
improvement generated by each iteration of the policy gradient method. To make
this BO practical, we contribute two easy-to-compute low-dimensional
fingerprints of the current policy. Our experiments show that FPO can
efficiently learn policies that are robust to significant rare events, which
are unlikely to be observable under random sampling, but are key to learning
good policies.
Comment: ICML 2019
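As with the previous abstract, the implementation details are not given here; the sketch below only shows the general shape of such a loop, with a Gaussian-process surrogate over (fingerprint, environment-parameter) pairs and toy stand-ins (`policy_fingerprint`, `evaluate_policy`, `policy_gradient_iteration`) for the RL components.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# --- toy stand-ins for the RL pieces (hypothetical, not from the paper) ---
def policy_fingerprint(theta):
    """Cheap low-dimensional summary of the current policy."""
    return np.array([np.tanh(theta).mean()])

def evaluate_policy(theta):
    """Expected return under the full environment-variable distribution (toy)."""
    return -float(np.sum((theta - 1.0) ** 2))

def policy_gradient_iteration(theta, env_parameter, lr=0.1):
    """One (toy) gradient-ascent step with the environment variable held fixed."""
    return theta + lr * (-2.0 * (theta - 1.0)) * (0.5 + env_parameter)

def select_env_parameter(gp, fingerprint, candidates, kappa=2.0):
    """UCB acquisition over candidate environment parameters, conditioned on the
    policy fingerprint: pick the value expected to yield the largest improvement
    from the next policy-gradient iteration."""
    X = np.array([np.concatenate([fingerprint, [c]]) for c in candidates])
    mean, std = gp.predict(X, return_std=True)
    return candidates[int(np.argmax(mean + kappa * std))]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
theta = np.zeros(3)                         # toy policy parameters
candidates = np.linspace(0.0, 1.0, 20)      # support of the environment variable
X_hist, y_hist = [], []

for _ in range(30):
    fp = policy_fingerprint(theta)
    if len(X_hist) >= 3:
        gp.fit(np.array(X_hist), np.array(y_hist))
        env_p = select_env_parameter(gp, fp, candidates)
    else:
        env_p = np.random.choice(candidates)   # warm-up: random environment values
    before = evaluate_policy(theta)
    theta = policy_gradient_iteration(theta, env_p)
    X_hist.append(np.concatenate([fp, [env_p]]))
    y_hist.append(evaluate_policy(theta) - before)  # BO target: per-iteration improvement

print("final return (toy):", evaluate_policy(theta))
```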
Correcting discount-factor mismatch in on-policy policy gradient methods
The policy gradient theorem gives a convenient form of the policy gradient in
terms of three factors: an action value, a gradient of the action likelihood,
and a state distribution involving discounting called the \emph{discounted
stationary distribution}. But commonly used on-policy methods based on the
policy gradient theorem ignore the discount factor in the state distribution,
which is technically incorrect and may even cause degenerate learning behavior
in some environments. An existing solution corrects this discrepancy by using
$\gamma^t$ as a factor in the gradient estimate. However, this solution is not
widely adopted and does not work well in tasks where the later states are
similar to earlier states. We introduce a novel distribution correction to
account for the discounted stationary distribution that can be plugged into
many existing gradient estimators. Our correction circumvents the performance
degradation associated with the $\gamma^t$ correction while also having lower variance.
Importantly, compared to the uncorrected estimators, our algorithm provides
improved state emphasis to evade suboptimal policies in certain environments
and consistently matches or exceeds the original performance on several OpenAI
gym and DeepMind suite benchmarks.
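The paper's own distribution correction is not described in the abstract, so the sketch below only illustrates the contrast it refers to: the common uncorrected REINFORCE-style estimator versus the existing fix that multiplies each per-step term by $\gamma^t$.

```python
import numpy as np

def returns_to_go(rewards, gamma):
    """Discounted return G_t from each time step onward."""
    G, out = 0.0, np.zeros(len(rewards))
    for t in reversed(range(len(rewards))):
        G = rewards[t] + gamma * G
        out[t] = G
    return out

def reinforce_terms(grad_log_pi, rewards, gamma, corrected):
    """Per-step contributions to a REINFORCE-style policy gradient estimate.

    corrected=False: the common estimator, sum_t G_t * grad log pi(a_t|s_t),
    which implicitly uses the undiscounted state distribution.
    corrected=True : the existing fix that multiplies each term by gamma**t,
    matching the discounted stationary distribution but shrinking the
    contribution of late states (the source of its practical issues).
    """
    G = returns_to_go(rewards, gamma)
    weights = gamma ** np.arange(len(rewards)) if corrected else 1.0
    return weights * G * grad_log_pi

# Toy trajectory (scalar policy parameter): note how the gamma**t weights
# de-emphasise later time steps in the corrected estimator.
grad_log_pi = np.array([0.5, -0.2, 0.3, 0.1])
rewards = np.array([0.0, 0.0, 1.0, 1.0])
print(reinforce_terms(grad_log_pi, rewards, 0.9, corrected=False).sum())
print(reinforce_terms(grad_log_pi, rewards, 0.9, corrected=True).sum())
```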