2 research outputs found
Rarely-switching linear bandits: optimization of causal effects for the real world
In many real-world scenarios, excessively changing policies is difficult,
unethical, or expensive. After all, doctors' guidelines, tax codes, and price
lists can only be reprinted so often. We may thus want to change a policy only
when it is probable that the change is beneficial. In cases where a policy is a
threshold on contextual variables, we can estimate treatment effects for
populations lying at the threshold. This allows for a schedule of incremental
policy updates that let us optimize a policy while making few detrimental
changes. Using this idea, and the theory of linear contextual bandits, we
present a conservative policy updating procedure which updates a deterministic
policy only when justified. We extend the theory of linear bandits to this
rarely-switching case, proving that such procedures share the same regret, up
to constant scaling, as the common LinUCB algorithm. However, the algorithm
makes far fewer changes to its policy and, of those changes, fewer are
detrimental. We provide simulations and an analysis of an infant health
well-being causal inference dataset, showing the algorithm efficiently learns a
good policy with few changes. Our approach allows efficiently solving problems
where changes are to be avoided, with potential applications in medicine,
economics, and beyond.
Comment: 17 pages, 9 figures
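
For concreteness, below is a minimal sketch (Python) of the rarely-switching idea on top of a LinUCB-style learner: per-arm ridge-regression statistics are updated every round, but the deployed arm changes only when a candidate arm's lower confidence bound exceeds the incumbent's upper confidence bound, i.e. when the switch is probably beneficial. The class name, the alpha parameter, and the exact switching rule are illustrative assumptions, not the paper's procedure.

    import numpy as np

    class RarelySwitchingLinUCB:
        """Sketch of a rarely-switching linear contextual bandit (illustrative)."""

        def __init__(self, n_arms, dim, alpha=1.0, lam=1.0):
            self.alpha = alpha                                   # confidence-width scale (assumed)
            self.A = [lam * np.eye(dim) for _ in range(n_arms)]  # per-arm ridge Gram matrices
            self.b = [np.zeros(dim) for _ in range(n_arms)]      # per-arm reward-weighted sums
            self.deployed_arm = 0                                # current deterministic policy
            self.n_switches = 0

        def _bounds(self, arm, x):
            """Lower/upper confidence bounds on the arm's mean reward at context x."""
            A_inv = np.linalg.inv(self.A[arm])
            mean = (A_inv @ self.b[arm]) @ x
            width = self.alpha * np.sqrt(x @ A_inv @ x)
            return mean - width, mean + width

        def act(self, x):
            """Switch the deployed arm only when another arm is probably better."""
            lo_dep, hi_dep = self._bounds(self.deployed_arm, x)
            for arm in range(len(self.A)):
                if arm == self.deployed_arm:
                    continue
                lo, _ = self._bounds(arm, x)
                if lo > hi_dep:              # candidate's LCB beats incumbent's UCB
                    self.deployed_arm = arm
                    self.n_switches += 1
                    lo_dep, hi_dep = self._bounds(arm, x)
            return self.deployed_arm

        def update(self, arm, x, reward):
            """Standard ridge-regression statistics update after observing a reward."""
            self.A[arm] += np.outer(x, x)
            self.b[arm] += reward * x

Under such a rule the estimates keep improving every round, but n_switches grows only when a change is statistically justified, mirroring the few-detrimental-changes behaviour described in the abstract.
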
Safe Policy Improvement with Baseline Bootstrapping
This paper considers Safe Policy Improvement (SPI) in Batch Reinforcement
Learning (Batch RL): from a fixed dataset and without direct access to the true
environment, train a policy that is guaranteed to perform at least as well as
the baseline policy used to collect the data. Our approach, called SPI with
Baseline Bootstrapping (SPIBB), is inspired by the knows-what-it-knows
paradigm: it bootstraps the trained policy with the baseline when the
uncertainty is high. Our first algorithm, $\Pi_b$-SPIBB, comes with SPI
theoretical guarantees. We also implement a variant, $\Pi_{\leq b}$-SPIBB, that
is even more efficient in practice. We apply our algorithms to a motivational
stochastic gridworld domain and further demonstrate on randomly generated MDPs
the superiority of SPIBB with respect to existing algorithms, not only in
safety but also in mean performance. Finally, we implement a model-free version
of SPIBB and show its benefits on a navigation task with a deep RL implementation
called SPIBB-DQN, which is, to the best of our knowledge, the first RL
algorithm relying on a neural network representation able to train efficiently
and reliably from batch data, without any interaction with the environment.
Comment: accepted as a long oral at ICML 2019
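
To make the bootstrapping idea concrete, below is a minimal sketch (Python) of a SPIBB-style policy-improvement step for the tabular case, assuming batch state-action counts and a Q-estimate fitted to the data: on pairs observed fewer than n_min times, the new policy copies the baseline's probability, and the remaining mass goes greedily to the best well-observed action. The function name and the n_min threshold are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def spibb_improvement(pi_baseline, q_hat, counts, n_min=10):
        """One SPIBB-style improvement step (sketch).

        pi_baseline, q_hat, counts: arrays of shape (n_states, n_actions).
        """
        n_states, _ = pi_baseline.shape
        pi_new = np.zeros_like(pi_baseline)
        for s in range(n_states):
            uncertain = counts[s] < n_min                     # poorly observed actions
            pi_new[s, uncertain] = pi_baseline[s, uncertain]  # bootstrap with the baseline
            free_mass = 1.0 - pi_new[s].sum()                 # mass left to reallocate
            safe_actions = np.where(~uncertain)[0]
            if safe_actions.size > 0:
                best = safe_actions[np.argmax(q_hat[s, safe_actions])]
                pi_new[s, best] += free_mass                  # greedy on well-observed actions
            else:
                pi_new[s] = pi_baseline[s]                    # nothing well observed: keep baseline
        return pi_new

Because the output can only deviate from the baseline where the batch supports the estimate, each step keeps the policy close to the baseline exactly where uncertainty is high, which is the mechanism behind the safe-improvement guarantee.
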