MNL-Bandit in non-stationary environments

Abstract

In this paper, we study the MNL-Bandit problem in a non-stationary environment and present an algorithm with a worst-case expected regret of $\tilde{O}\left(\min\left\{\sqrt{NTL},\; N^{\frac{1}{3}}(\Delta_{\infty}^{K})^{\frac{1}{3}} T^{\frac{2}{3}} + \sqrt{NT}\right\}\right)$. Here $N$ is the number of arms, $L$ is the number of changes, and $\Delta_{\infty}^{K}$ is a variation measure of the unknown parameters. Furthermore, we show matching lower bounds on the expected regret (up to logarithmic factors), implying that our algorithm is optimal. Our approach builds upon the epoch-based algorithm for the stationary MNL-Bandit in Agrawal et al. (2016). However, non-stationarity poses several challenges, and we introduce new techniques and ideas to address these. In particular, we give a tight characterization of the bias introduced in the estimators due to non-stationarity, and we derive new concentration bounds.
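The epoch-based scheme of Agrawal et al. (2016) that the abstract refers to rests on a simple estimation idea: offer the same assortment repeatedly until a no-purchase occurs, and the number of times each item was purchased in that epoch is an unbiased estimate of its MNL preference weight. The sketch below illustrates this estimation step only (not the full bandit algorithm, and not the non-stationary machinery of this paper); the preference weights `v`, the assortment `S`, and the function names are hypothetical choices for illustration.

```python
import random

def mnl_choice(assortment, v):
    # Sample one customer choice from the MNL model: item i in the assortment
    # is chosen with probability v[i] / (1 + sum_j v[j]); the remaining
    # probability mass goes to the no-purchase option (returned as None).
    weights = [v[i] for i in assortment]
    total = 1.0 + sum(weights)
    r = random.random() * total
    acc = 0.0
    for i, w in zip(assortment, weights):
        acc += w
        if r < acc:
            return i
    return None  # no-purchase

def run_epoch(assortment, v):
    # One epoch: offer the same assortment until a no-purchase occurs.
    # In the stationary model, the purchase count of item i over the epoch
    # is an unbiased estimate of its preference weight v[i].
    counts = {i: 0 for i in assortment}
    while True:
        choice = mnl_choice(assortment, v)
        if choice is None:
            return counts
        counts[choice] += 1

if __name__ == "__main__":
    random.seed(0)
    v = {0: 0.5, 1: 1.0, 2: 0.25}  # hypothetical true preference weights
    S = [0, 1, 2]
    n_epochs = 20000
    totals = {i: 0 for i in S}
    for _ in range(n_epochs):
        for i, c in run_epoch(S, v).items():
            totals[i] += c
    estimates = {i: totals[i] / n_epochs for i in S}
    print(estimates)  # per-epoch averages concentrate around the true v[i]
```

Under non-stationarity the weights drift between and within epochs, so these per-epoch counts become biased; characterizing that bias tightly is one of the paper's contributions.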
