13 research outputs found
Fighting Bandits with a New Kind of Smoothness
We define a novel family of algorithms for the adversarial multi-armed bandit
problem, and provide a simple analysis technique based on convex smoothing. We
prove two main results. First, we show that regularization via the
\emph{Tsallis entropy}, which includes EXP3 as a special case, achieves the
$\Theta(\sqrt{TN})$ minimax regret. Second, we show that a wide class of
perturbation methods achieve a near-optimal regret as low as $O(\sqrt{TN \log N})$ if the perturbation distribution has a bounded hazard rate. For example,
the Gumbel, Weibull, Frechet, Pareto, and Gamma distributions all satisfy this
key property.
Comment: In Proceedings of NIPS, 2015
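To make the perturbation viewpoint concrete, here is a minimal sketch of Follow-the-Perturbed-Leader for adversarial bandits with Gumbel noise; the Gumbel-max trick makes the play distribution an explicit softmax, which is exactly how EXP3 arises as a special case. The fixed step size eta and the loss-matrix interface are illustrative assumptions, not the paper's tuned parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    def ftpl_gumbel_bandit(losses, eta=0.05):
        """Follow-the-Perturbed-Leader for adversarial bandits with Gumbel
        perturbations. Each round we play the arm maximizing the perturbed
        negated loss estimate; with Gumbel noise the play distribution is
        exactly softmax(-eta * L_hat), recovering EXP3 as a special case."""
        T, K = losses.shape
        L_hat = np.zeros(K)       # importance-weighted cumulative loss estimates
        total_loss = 0.0
        for t in range(T):
            noise = rng.gumbel(size=K)
            arm = int(np.argmax(noise - eta * L_hat))   # perturbed leader
            # Gumbel-max trick: P(arm = i) = softmax(-eta * L_hat)_i, so the
            # importance weight is available in closed form.
            z = -eta * L_hat
            p = np.exp(z - z.max())
            p /= p.sum()
            L_hat[arm] += losses[t, arm] / p[arm]       # unbiased loss estimate
            total_loss += losses[t, arm]
        return total_loss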
Tsallis-INF: An Optimal Algorithm for Stochastic and Adversarial Bandits
We derive an algorithm that achieves the optimal (within constants)
pseudo-regret in both adversarial and stochastic multi-armed bandits without
prior knowledge of the regime and time horizon. The algorithm is based on
online mirror descent (OMD) with Tsallis entropy regularization with power
$\alpha = 1/2$ and reduced-variance loss estimators. More generally, we define an
adversarial regime with a self-bounding constraint, which includes stochastic
regime, stochastically constrained adversarial regime (Wei and Luo), and
stochastic regime with adversarial corruptions (Lykouris et al.) as special
cases, and show that the algorithm achieves logarithmic regret guarantee in
this regime and all of its special cases simultaneously with the adversarial
regret guarantee. The algorithm also achieves adversarial and stochastic
optimality in the utility-based dueling bandit setting. We provide empirical
evaluation of the algorithm demonstrating that it significantly outperforms
UCB1 and EXP3 in stochastic environments. We also provide examples of
adversarial environments, where UCB1 and Thompson Sampling exhibit almost
linear regret, whereas our algorithm suffers only logarithmic regret. To the
best of our knowledge, this is the first example demonstrating vulnerability of
Thompson Sampling in adversarial environments. Last, but not least, we present
a general stochastic analysis and a general adversarial analysis of OMD
algorithms with Tsallis entropy regularization for $\alpha \in [0,1]$ and explain
the reason why $\alpha = 1/2$ works best.
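As a rough illustration of the OMD step with the $1/2$-Tsallis entropy, the sketch below computes the sampling distribution by Newton's method (the update solves $w_i = 4\,(\eta_t(\hat{L}_i - x))^{-2}$ for the normalizing $x$) and uses a simplified reduced-variance estimator with a fixed offset of $1/2$; the learning-rate constant and the estimator offset are illustrative simplifications of the paper's tuned choices.

    import numpy as np

    def tsallis_inf_weights(L_hat, eta, iters=50):
        """OMD step for the 1/2-Tsallis entropy: find x such that the weights
        w_i = 4 / (eta * (L_hat_i - x))^2 sum to one, via Newton's method."""
        x = L_hat.min() - 2.0 / eta          # start where all weights are defined
        for _ in range(iters):
            w = 4.0 / (eta * (L_hat - x)) ** 2
            # d/dx sum_i w_i(x) = eta * sum_i w_i^{3/2}
            x -= (w.sum() - 1.0) / (eta * (w ** 1.5).sum())
        w = 4.0 / (eta * (L_hat - x)) ** 2
        return w / w.sum()                   # renormalize away residual error

    def tsallis_inf(K, T, loss_fn, seed=0):
        """Sketch of the Tsallis-INF loop with a simplified reduced-variance
        importance-weighted estimator (offset 1/2 on the played arm)."""
        rng = np.random.default_rng(seed)
        L_hat = np.zeros(K)
        for t in range(1, T + 1):
            eta = 2.0 / np.sqrt(t)           # O(1/sqrt(t)) schedule; constant illustrative
            w = tsallis_inf_weights(L_hat, eta)
            arm = rng.choice(K, p=w)
            loss = loss_fn(t, arm)           # observed loss in [0, 1]
            L_hat += 0.5                     # estimate: 1/2 + 1{i=arm}(loss - 1/2)/w_arm
            L_hat[arm] += (loss - 0.5) / w[arm]
        return L_hat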
Beating Stochastic and Adversarial Semi-bandits Optimally and Simultaneously
We develop the first general semi-bandit algorithm that simultaneously
achieves $\mathcal{O}(\log T)$ regret for stochastic environments and $\mathcal{O}(\sqrt{T})$
regret for adversarial environments without knowledge
of the regime or the number of rounds $T$. The leading problem-dependent
constants of our bounds are not only optimal in some worst-case sense studied
previously, but also optimal for two concrete instances of semi-bandit
problems. Our algorithm and analysis extend the recent work of Zimmert &
Seldin (2019) for the special case of multi-armed bandits, but importantly
require a novel hybrid regularizer designed specifically for the semi-bandit setting.
Experimental results on synthetic data show that our algorithm indeed performs
well uniformly over different environments. We finally provide a preliminary
extension of our results to the full bandit feedback setting.
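For context, such algorithms instantiate the generic FTRL template over the convex hull of the combinatorial action set; a sketch of that step is below, with the paper's hybrid regularizer abstracted as $\Phi_t$ (its specific form is not reproduced here).

    % Generic FTRL step for semi-bandits: w_t ranges over the convex hull
    % of the action set A; the hybrid regularizer Phi_t is the paper's
    % key design choice and is left abstract in this sketch.
    w_t = \operatorname*{arg\,min}_{w \in \operatorname{conv}(\mathcal{A})}
          \bigg\langle \sum_{s<t} \hat{\ell}_s,\; w \bigg\rangle + \Phi_t(w)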
Banker Online Mirror Descent
We propose Banker-OMD, a novel framework generalizing the classical Online
Mirror Descent (OMD) technique in online learning algorithm design. Banker-OMD
allows algorithms to robustly handle delayed feedback, and offers a general
methodology for achieving $\tilde{O}(\sqrt{T} + \sqrt{D})$-style regret bounds
in various delayed-feedback online learning tasks, where $T$ is the time
horizon length and $D$ is the total feedback delay. We demonstrate the power of
Banker-OMD with applications to three important bandit scenarios with delayed
feedback, including delayed adversarial Multi-armed bandits (MAB), delayed
adversarial linear bandits, and a novel delayed best-of-both-worlds MAB
setting. Banker-OMD achieves nearly-optimal performance in all three
settings. In particular, it leads to the first delayed adversarial linear
bandit algorithm achieving $\tilde{O}(\mathrm{poly}(n)(\sqrt{T} + \sqrt{D}))$ regret.
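To fix notation, the sketch below shows a generic delayed-feedback bandit loop, not Banker-OMD's actual budgeting mechanism: each round's feedback arrives after a delay, the learner updates only on arrival, and $D$ is the sum of all delays appearing in the $\sqrt{T} + \sqrt{D}$-style bounds. The exponential-weights play rule and step size are placeholder assumptions.

    import numpy as np

    def delayed_bandit_run(T, K, delays, loss_fn, eta=0.05, seed=1):
        """Generic delayed-feedback bandit loop (not Banker-OMD itself):
        feedback for round t arrives at round t + delays[t], the learner
        updates only on arrival, and D = sum(delays) is the total delay."""
        rng = np.random.default_rng(seed)
        L_hat = np.zeros(K)                 # importance-weighted loss estimates
        pending = {}                        # arrival round -> [(arm, loss, prob)]
        for t in range(T):
            z = -eta * L_hat                # placeholder exponential-weights play
            p = np.exp(z - z.max())
            p /= p.sum()
            arm = rng.choice(K, p=p)
            fb = (arm, loss_fn(t, arm), p[arm])
            pending.setdefault(t + delays[t], []).append(fb)
            for a, loss, prob in pending.pop(t, []):    # feedback arriving now
                L_hat[a] += loss / prob     # use the prob recorded at play time
            # feedback scheduled past round T is simply dropped in this sketch
        return L_hat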
Improved Best-of-Both-Worlds Guarantees for Multi-Armed Bandits: FTRL with General Regularizers and Multiple Optimal Arms
We study the problem of designing adaptive multi-armed bandit algorithms that
perform optimally in both the stochastic setting and the adversarial setting
simultaneously (often known as a best-of-both-worlds guarantee). A line of
recent works shows that when configured and analyzed properly, the
Follow-the-Regularized-Leader (FTRL) algorithm, originally designed for the
adversarial setting, can in fact optimally adapt to the stochastic setting as
well. Such results, however, critically rely on an assumption that there exists
one unique optimal arm. Recently, Ito (2021) took the first step to remove such
an undesirable uniqueness assumption for one particular FTRL algorithm with the
$\frac{1}{2}$-Tsallis entropy regularizer. In this work, we significantly
improve and generalize this result, showing that uniqueness is unnecessary for
FTRL with a broad family of regularizers and a new learning rate schedule. For
some regularizers, our regret bounds also improve upon prior results even when
uniqueness holds. We further provide an application of our results to the
decoupled exploration and exploitation problem, demonstrating that our
techniques are broadly applicable.
Comment: Update the camera-ready version for NeurIPS 2023
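For concreteness, the FTRL update in question over the probability simplex, with the $\alpha$-Tsallis entropy as the canonical member of the regularizer family (the paper's new learning-rate schedule is not reproduced here):

    % FTRL over the simplex Delta_K with regularizer phi and rate eta_t;
    % alpha = 1/2 recovers the Tsallis-INF regularizer.
    p_t = \operatorname*{arg\,min}_{p \in \Delta_K}
          \bigg\langle p,\; \sum_{s<t} \hat{\ell}_s \bigg\rangle
          + \frac{\phi(p)}{\eta_t},
    \qquad
    \phi(p) = \frac{1 - \sum_{i=1}^{K} p_i^{\alpha}}{1 - \alpha}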
A Modern Introduction to Online Learning
In this monograph, I introduce the basic concepts of Online Learning through
a modern view of Online Convex Optimization. Here, online learning refers to
the framework of regret minimization under worst-case assumptions. I present
first-order and second-order algorithms for online learning with convex losses,
in Euclidean and non-Euclidean settings. All the algorithms are clearly
presented as instantiations of Online Mirror Descent or
Follow-The-Regularized-Leader and their variants. Particular attention is given
to the issue of tuning the parameters of the algorithms and learning in
unbounded domains, through adaptive and parameter-free online learning
algorithms. Non-convex losses are dealt with through convex surrogate losses
and through randomization. The bandit setting is also briefly discussed, touching
on the problem of adversarial and stochastic multi-armed bandits. These notes
do not require prior knowledge of convex analysis and all the required
mathematical tools are rigorously explained. Moreover, all the proofs have been
carefully chosen to be as simple and as short as possible.
Comment: Fixed more typos, added more history bits, added local norms bounds
for OMD and FTRL
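As a tiny illustration of the monograph's organizing principle that standard algorithms are instances of OMD, here is online subgradient descent, i.e., OMD with the Euclidean regularizer $\frac{1}{2}\|x\|_2^2$, projected onto the unit ball; the domain and fixed step size are illustrative choices.

    import numpy as np

    def online_gradient_descent(grads, dim, eta=0.1):
        """Online subgradient descent = OMD with the Euclidean regularizer
        (1/2)||x||^2, with Euclidean projection onto the unit ball."""
        x = np.zeros(dim)
        iterates = []
        for g in grads:                     # g: (sub)gradient of loss_t at x
            x = x - eta * np.asarray(g, dtype=float)
            norm = np.linalg.norm(x)
            if norm > 1.0:                  # projection step of mirror descent
                x = x / norm
            iterates.append(x.copy())
        return iterates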