A Neural Networks Committee for the Contextual Bandit Problem
This paper presents a new contextual bandit algorithm, NeuralBandit, which requires no stationarity assumption on contexts and rewards. Several neural networks are trained to model the value of rewards given the context. Two variants, based on a multi-expert approach, are proposed to choose the parameters of the multi-layer perceptrons online. The proposed algorithms are successfully tested on a large dataset with and without stationarity of rewards.
Comment: 21st International Conference on Neural Information Processing
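To make the committee idea concrete, here is a minimal sketch (not the authors' exact NeuralBandit algorithm): several small multi-layer perceptrons per arm are trained online to predict the reward given the context, and the arm with the highest averaged prediction is played. The epsilon-greedy exploration, the hidden sizes, and the toy environment are all assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMLP:
    """One-hidden-layer perceptron trained by per-sample SGD on squared loss."""
    def __init__(self, dim, hidden, lr=0.05):
        self.W1 = rng.normal(0, 0.1, (hidden, dim))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0, 0.1, hidden)
        self.b2 = 0.0
        self.lr = lr

    def predict(self, x):
        self.h = np.tanh(self.W1 @ x + self.b1)   # cache activations for update()
        return self.w2 @ self.h + self.b2

    def update(self, x, reward):
        err = self.predict(x) - reward            # d(loss)/d(output)
        grad_h = err * self.w2 * (1.0 - self.h ** 2)
        self.w2 -= self.lr * err * self.h
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(grad_h, x)
        self.b1 -= self.lr * grad_h

DIM, N_ARMS, HORIZON, EPS = 5, 3, 2000, 0.05
# Committee: per arm, three perceptrons with different (assumed) hidden sizes.
committee = [[TinyMLP(DIM, h) for h in (8, 16, 32)] for _ in range(N_ARMS)]
theta = rng.normal(size=(N_ARMS, DIM))            # hidden parameters of the toy environment

for t in range(HORIZON):
    x = rng.normal(size=DIM)                      # observe a context
    scores = [np.mean([net.predict(x) for net in nets]) for nets in committee]
    arm = rng.integers(N_ARMS) if rng.random() < EPS else int(np.argmax(scores))
    reward = np.tanh(theta[arm] @ x) + 0.1 * rng.normal()
    for net in committee[arm]:                    # train only the played arm's networks
        net.update(x, reward)
```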
One Arrow, Two Kills: A Unified Framework for Achieving Optimal Regret Guarantees in Sleeping Bandits
We address the problem of \emph{internal regret} in \emph{sleeping bandits} in the fully adversarial setup, and draw connections between different existing notions of sleeping regret in the multi-armed bandit (MAB) literature, analyzing their implications. Our first contribution is to propose a new notion of \emph{internal regret} for sleeping MAB. We then propose an algorithm that yields sublinear regret in this measure, even for a completely adversarial sequence of losses and availabilities. We further show that low sleeping internal regret always implies low external regret, as well as low policy regret for i.i.d. sequences of losses. The main contribution of this work lies precisely in unifying the different existing notions of regret in sleeping bandits and understanding how one implies another. Finally, we also extend our results to the setting of \emph{dueling bandits} (DB), a preference-feedback variant of MAB, and propose a reduction to MAB to design a low-regret algorithm for sleeping dueling bandits with stochastic preferences and adversarial availabilities. The efficacy of our algorithms is justified through empirical evaluations.
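As a minimal illustration of the sleeping-bandit protocol the paper studies (not its algorithm), the sketch below simulates loss and availability sequences and measures one simple form of sleeping external regret. The uniform placeholder policy, the randomized stand-in for the adversary, and the per-arm comparator with a best-awake-arm fallback are all simplifying assumptions of this demo.

```python
import numpy as np

rng = np.random.default_rng(1)
K, T = 4, 5000

losses = rng.random((T, K))                    # stand-in for an adversarial loss sequence
avail = rng.random((T, K)) < 0.7               # stand-in for adversarial availabilities
avail[avail.sum(axis=1) == 0, 0] = True        # keep at least one arm awake each round

learner_loss = 0.0
for t in range(T):
    awake = np.flatnonzero(avail[t])
    arm = rng.choice(awake)                    # placeholder policy: uniform over awake arms
    learner_loss += losses[t, arm]

# One simple comparator: each fixed arm, charged the best awake loss when it sleeps.
comparator = min(
    sum(losses[t, k] if avail[t, k] else losses[t, avail[t]].min() for t in range(T))
    for k in range(K)
)
print("sleeping external regret (illustrative):", learner_loss - comparator)
```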
Dueling Bandits with Adversarial Sleeping
We introduce the problem of sleeping dueling bandits with stochastic preferences and adversarial availabilities (DB-SPAA). In almost all dueling bandit applications, the decision space changes over time; e.g., retail store management, online shopping, restaurant recommendation, search engine optimization, etc. Surprisingly, this `sleeping aspect' of dueling bandits has never been studied in the literature. As in dueling bandits, the goal is to compete with the best arm by sequentially querying preference feedback on item pairs. The non-triviality, however, arises from the non-stationary item spaces, which allow arbitrary subsets of items to become unavailable in any round. The goal is to find an optimal `no-regret' policy that can identify the best available item at each round, as opposed to the standard `fixed best-arm regret
objective' of dueling bandits. We first derive an instance-specific lower bound of $\Omega\big( \sum_{i=1}^{K-1} \sum_{j=i+1}^{K} \frac{\log T}{\Delta(i,j)} \big)$ for DB-SPAA, where $K$ is the number of items and $\Delta(i,j)$ is the gap between items $i$ and $j$. This indicates that the sleeping problem with preference feedback is inherently more difficult than its counterpart for classical multi-armed bandits (MAB). We then propose two algorithms with near-optimal regret guarantees. Our results are corroborated empirically.
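The interaction protocol, in code form: a hedged sketch (not the paper's algorithms) in which a stochastic preference matrix governs pairwise feedback, an arbitrary subset of items wakes each round, and a naive empirical-win-rate heuristic picks the pair to duel. The matrix P and the 0.8 availability rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
K, T = 5, 3000

# Hypothetical stochastic preference matrix: P[i, j] = Pr(item i beats item j).
P = np.full((K, K), 0.5)
for i in range(K):
    for j in range(i + 1, K):
        P[i, j] = 0.5 + 0.08 * (j - i)              # lower index = better item
        P[j, i] = 1.0 - P[i, j]

wins = np.zeros((K, K))
plays = np.ones((K, K))                             # smoothing avoids division by zero

for t in range(T):
    awake = np.flatnonzero(rng.random(K) < 0.8)     # an adversary would choose this set
    if awake.size == 0:
        continue
    rates = (wins / plays)[np.ix_(awake, awake)].mean(axis=1)
    order = awake[np.argsort(-rates)]
    i, j = order[0], order[min(1, order.size - 1)]  # duel the top two awake items
    i_beats_j = rng.random() < P[i, j]
    wins[i, j] += i_beats_j
    wins[j, i] += 1 - i_beats_j
    plays[i, j] += 1
    plays[j, i] += 1

print("estimated win-rate matrix:\n", np.round(wins / plays, 2))
```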
Ballooning Multi-Armed Bandits
In this paper, we introduce Ballooning Multi-Armed Bandits (BL-MAB), a novel
extension of the classical stochastic MAB model. In the BL-MAB model, the set
of available arms grows (or balloons) over time. In contrast to the classical
MAB setting where the regret is computed with respect to the best arm overall,
the regret in a BL-MAB setting is computed with respect to the best available
arm at each time. We first observe that the existing stochastic MAB algorithms
result in linear regret for the BL-MAB model. We prove that, if the best arm is
equally likely to arrive at any time instant, a sub-linear regret cannot be
achieved. Next, we show that if the best arm is more likely to arrive in the
early rounds, one can achieve sub-linear regret. Our proposed algorithm
determines (1) the fraction of the time horizon for which the newly arriving
arms should be explored and (2) the sequence of arm pulls in the exploitation
phase from among the explored arms. Making reasonable assumptions on the
arrival distribution of the best arm in terms of the thinness of the
distribution's tail, we prove that the proposed algorithm achieves sub-linear
instance-independent regret. We further quantify the explicit dependence of regret on the arrival distribution parameters. We reinforce our theoretical findings
with extensive simulation results. We conclude by showing that our algorithm
would achieve sub-linear regret even if (a) the distributional parameters are
not exactly known, but are obtained using a reasonable learning mechanism or
(b) the best arm is not more likely to arrive early, but a large fraction of
arms is likely to arrive relatively early.
Comment: A full version of this paper has been accepted in Artificial Intelligence (AIJ), Elsevier. A preliminary version was published as an extended abstract in AAMAS 2020: Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, 2020.
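A rough sketch of the BL-MAB interaction (not the paper's algorithm): arms balloon in over time, a fixed fraction of the horizon is spent exploring everything that has arrived so far, and the remainder exploits the best empirical mean. The Bernoulli arms, the arrival rate, and the explore fraction below are assumptions; the paper instead derives the split from the tail of the arrival distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 10_000
EXPLORE_FRAC = 0.3                      # assumed split; the paper derives it from the tail

means, pulls, total = [], [], []        # one entry per arm that has arrived so far

for t in range(T):
    if rng.random() < 0.01:             # toy arrival process: a new arm balloons in
        means.append(rng.random())      # its (unknown) Bernoulli mean
        pulls.append(0)
        total.append(0.0)
    if not means:
        continue
    if t < EXPLORE_FRAC * T:
        arm = int(np.argmin(pulls))     # explore: round-robin over arms seen so far
    else:
        rates = np.divide(total, pulls, out=np.zeros(len(pulls)),
                          where=np.asarray(pulls) > 0)
        arm = int(np.argmax(rates))     # exploit the best empirical mean
    r = float(rng.random() < means[arm])
    pulls[arm] += 1
    total[arm] += r

print(f"{len(means)} arms arrived over the horizon")
```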
Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
Multi-armed bandit problems are the most basic examples of sequential
decision problems with an exploration-exploitation trade-off. This is the
balance between staying with the option that gave highest payoffs in the past
and exploring new options that might give higher payoffs in the future.
Although the study of bandit problems dates back to the 1930s,
exploration-exploitation trade-offs arise in several modern applications, such
as ad placement, website optimization, and packet routing. Mathematically, a
multi-armed bandit is defined by the payoff process associated with each
option. In this survey, we focus on two extreme cases in which the analysis of
regret is particularly simple and elegant: i.i.d. payoffs and adversarial
payoffs. Besides the basic setting of finitely many actions, we also analyze
some of the most important variants and extensions, such as the contextual
bandit model.
Comment: To appear in Foundations and Trends in Machine Learning
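For concreteness, here is the classical UCB1 strategy for i.i.d. payoffs, one of the simplest algorithms in the family this survey analyzes: it balances exploration and exploitation by adding an optimism bonus to each arm's empirical mean. The Bernoulli instance below is a toy example.

```python
import math
import numpy as np

rng = np.random.default_rng(4)
MU = [0.3, 0.5, 0.7]                    # unknown Bernoulli means of a toy instance
K, T = len(MU), 10_000
pulls = np.zeros(K)
total = np.zeros(K)

for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                     # initialize: play each arm once
    else:
        ucb = total / pulls + np.sqrt(2.0 * math.log(t) / pulls)
        arm = int(np.argmax(ucb))       # optimism in the face of uncertainty
    r = float(rng.random() < MU[arm])
    pulls[arm] += 1
    total[arm] += r

print(f"pseudo-regret proxy: {T * max(MU) - total.sum():.1f} over {T} rounds")
```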