We consider undiscounted reinforcement learning (RL) in Markov decision processes (MDPs) under drifting non-stationarity, i.e., both the reward and state transition distributions are allowed to evolve over time, as long as their respective total variations, quantified by suitable metrics, do not exceed certain variation budgets.
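For concreteness, one standard way to express these constraints (a sketch in notation consistent with the journal version; $r_t$ and $p_t$ denote the mean reward function and transition kernel at round $t$ over horizon $T$) is
\[
\sum_{t=1}^{T-1} \max_{s,a} \bigl| r_{t+1}(s,a) - r_t(s,a) \bigr| \le B_r,
\qquad
\sum_{t=1}^{T-1} \max_{s,a} \bigl\| p_{t+1}(\cdot \mid s,a) - p_t(\cdot \mid s,a) \bigr\|_1 \le B_p,
\]
where $B_r$ and $B_p$ are the reward and transition variation budgets.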
We first develop the Sliding Window Upper-Confidence bound for Reinforcement Learning with Confidence Widening (SWUCRL2-CW) algorithm and establish its dynamic regret bound when the variation budgets are known.
In addition, we propose the Bandit-over-Reinforcement Learning (BORL) algorithm, which adaptively tunes SWUCRL2-CW to achieve the same dynamic regret bound in a parameter-free manner, i.e., without knowing the variation budgets.
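BORL's master algorithm is an adversarial bandit of the EXP3 family run over candidate window lengths, with SWUCRL2-CW restarted in blocks underneath. The following is a minimal Python sketch of that two-layer structure; run_swucrl2_cw is a hypothetical callback that runs SWUCRL2-CW with the chosen window for one block of H rounds and returns that block's total reward (assumed to lie in [0, H]), and the learning-rate choice is illustrative rather than the paper's.

import math
import random

def borl_sketch(run_swucrl2_cw, T, H, window_candidates, seed=0):
    # Basic EXP3 over candidate sliding-window lengths (illustrative;
    # the paper's master algorithm is of the EXP3 family).
    rng = random.Random(seed)
    K = len(window_candidates)
    num_blocks = math.ceil(T / H)
    gamma = min(1.0, math.sqrt(K * math.log(max(K, 2)) / num_blocks))
    weights = [1.0] * K
    for block in range(num_blocks):
        total = sum(weights)
        probs = [(1.0 - gamma) * w / total + gamma / K for w in weights]
        i = rng.choices(range(K), weights=probs)[0]
        # One block of SWUCRL2-CW with the selected window; its total
        # reward, rescaled to [0, 1], is the bandit feedback.
        x = run_swucrl2_cw(window_candidates[i], block) / H
        weights[i] *= math.exp(gamma * (x / probs[i]) / K)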
Notably, learning non-stationary MDPs via the conventional optimistic exploration technique presents a unique challenge absent in existing (non-stationary) bandit learning settings. We overcome the challenge with a novel confidence widening technique that incorporates additional optimism.
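To illustrate the widening idea, the sketch below enlarges a UCRL2-style L1 confidence radius around the empirical transition estimate (computed from visits inside the sliding window) by an extra term eta > 0 that injects the additional optimism; the constants and the delta parameter are illustrative assumptions, not the paper's exact choices.

import math

def widened_l1_radius(n_visits, num_states, t, eta, delta=0.05):
    # UCRL2-style L1 confidence radius for the empirical transition
    # distribution at one state-action pair, built from n_visits
    # observations inside the sliding window, plus the widening
    # term eta (assumed constants; for illustration only).
    n = max(1, n_visits)
    base = math.sqrt(2.0 * num_states * math.log(2.0 * t / delta) / n)
    return base + eta  # eta = 0 recovers the unwidened optimistic region

With eta = 0 this reduces to the conventional optimistic region whose inadequacy under drifting transitions motivates the widening.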
Comment: To appear in the proceedings of the 37th International Conference on Machine Learning. Shortened conference version of its journal version (available at arXiv:1906.02922).