Safe Policy Improvement with Baseline Bootstrapping
This paper considers Safe Policy Improvement (SPI) in Batch Reinforcement
Learning (Batch RL): from a fixed dataset and without direct access to the true
environment, train a policy that is guaranteed to perform at least as well as
the baseline policy used to collect the data. Our approach, called SPI with
Baseline Bootstrapping (SPIBB), is inspired by the knows-what-it-knows
paradigm: it bootstraps the trained policy with the baseline when the
uncertainty is high. Our first algorithm, Π_b-SPIBB, comes with SPI
theoretical guarantees. We also implement a variant, Π_≤b-SPIBB, that
is even more efficient in practice. We apply our algorithms to a motivational
stochastic gridworld domain and further demonstrate on randomly generated MDPs
the superiority of SPIBB with respect to existing algorithms, not only in
safety but also in mean performance. Finally, we implement a model-free version
of SPIBB with a deep RL implementation, called SPIBB-DQN, and show its benefits
on a navigation task. SPIBB-DQN is, to the best of our knowledge, the first RL
algorithm relying on a neural network representation that is able to train
efficiently and reliably from batch data, without any interaction with the
environment.

Comment: accepted as a long oral at ICML 2019