Offline Reinforcement Learning as Anti-Exploration
Offline Reinforcement Learning (RL) aims to learn an optimal control policy from
a fixed dataset, without interacting with the system. An agent in this setting
should avoid selecting actions whose consequences cannot be predicted from the
data. This is the converse of exploration in RL, which favors such actions. We
thus take inspiration from the literature on bonus-based exploration to design
a new offline RL agent. The core idea is to subtract a prediction-based
exploration bonus from the reward, instead of adding it for exploration. This
allows the policy to stay close to the support of the dataset. We connect this
approach to the more common technique of regularizing the learned policy towards the
data. Instantiated with a bonus based on the prediction error of a variational
autoencoder, we show that our agent is competitive with the state of the art on
a set of continuous-control locomotion and manipulation tasks.
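
The anti-exploration idea amounts to training on a penalized reward r~(s, a) = r(s, a) - alpha * b(s, a), where the bonus b(s, a) is the VAE's reconstruction error on the state-action pair. The following is a minimal PyTorch sketch of that reward modification; the network architecture, the alpha coefficient, and all function names are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Small VAE over (state, action) pairs; its reconstruction error acts
    as the anti-exploration bonus. Sizes and layers are illustrative."""
    def __init__(self, x_dim, z_dim=8, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def penalized_reward(vae, state, action, reward, alpha=1.0):
    """r~(s, a) = r(s, a) - alpha * b(s, a): subtract the VAE's per-sample
    reconstruction error, which grows off the dataset's support."""
    x = torch.cat([state, action], dim=-1)
    with torch.no_grad():
        recon, _, _ = vae(x)
    bonus = ((recon - x) ** 2).mean(dim=-1)
    return reward - alpha * bonus

# Illustrative usage: 4-d states, 2-d actions; the VAE would first be
# trained to reconstruct (s, a) pairs drawn from the offline dataset.
vae = VAE(x_dim=6)
s, a, r = torch.randn(32, 4), torch.randn(32, 2), torch.randn(32)
r_pen = penalized_reward(vae, s, a, r, alpha=0.5)  # shape (32,)
```

A standard off-policy algorithm would then be trained on r_pen in place of r, so actions far from the data incur a penalty and the learned policy stays near the dataset's support.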