Model Selection in Contextual Stochastic Bandit Problems
We study model selection in stochastic bandit problems. Our approach relies
on a master algorithm that selects its actions among candidate base algorithms.
While this problem is studied for specific classes of stochastic base
algorithms, our objective is to provide a method that can work with more
general classes of stochastic base algorithms. We propose a master algorithm
inspired by CORRAL (Agarwal et al., 2017) and introduce a novel and
generic smoothing transformation for stochastic bandit algorithms that permits
us to obtain $O(\sqrt{T})$ regret guarantees for a wide class of base
algorithms when working along with our master. We exhibit a lower bound showing
that even when one of the base algorithms has $O(\log T)$ regret, in general it
is impossible to get better than $\Omega(\sqrt{T})$ regret in model selection,
even asymptotically. We apply our algorithm to choose among different values of
$\epsilon$ for the $\epsilon$-greedy algorithm, and to choose between the
$k$-armed UCB and linear UCB algorithms. Our empirical studies further confirm
the effectiveness of our model-selection method.
Comment: 12 main pages, 2 figures, 14 appendix pages
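As a concrete illustration of the master/base interaction described above (a minimal sketch, not the paper's algorithm): the code below replaces CORRAL's log-barrier mirror descent with plain exponential weights over importance-weighted rewards, and the names `EpsGreedyBase` and `master` are hypothetical. The smoothing transformation the paper introduces is omitted.

```python
import numpy as np

class EpsGreedyBase:
    """Hypothetical epsilon-greedy base algorithm over n_arms arms."""
    def __init__(self, n_arms, eps):
        self.eps = eps
        self.counts = np.zeros(n_arms)
        self.means = np.zeros(n_arms)

    def select(self, rng):
        # Explore uniformly with probability eps, otherwise exploit.
        if rng.random() < self.eps:
            return int(rng.integers(len(self.means)))
        return int(np.argmax(self.means))

    def update(self, arm, reward):
        # Running mean of observed rewards per arm.
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

def master(bases, pull, T, lr=0.1, seed=0):
    """Exponential-weights master over base algorithms (CORRAL itself
    uses log-barrier mirror descent with adaptive learning rates)."""
    rng = np.random.default_rng(seed)
    w = np.ones(len(bases))
    for _ in range(T):
        p = w / w.sum()
        i = rng.choice(len(bases), p=p)   # sample a base algorithm
        arm = bases[i].select(rng)        # let it pick the action
        r = pull(arm, rng)                # stochastic reward in [0, 1]
        bases[i].update(arm, r)           # only the played base learns
        w[i] *= np.exp(lr * r / p[i])     # importance-weighted update
        w /= w.sum()                      # renormalize for stability
    return w

# Example use: model selection over epsilon values for epsilon-greedy.
arm_means = np.array([0.2, 0.4, 0.9])
bases = [EpsGreedyBase(3, eps) for eps in (0.01, 0.1, 0.3)]
probs = master(bases, lambda a, rng: float(rng.random() < arm_means[a]), T=5000)
```

The returned weights concentrate on the base whose importance-weighted rewards are highest; CORRAL's analysis additionally requires the bases to be stable under the master's random scheduling, which is what motivates the paper's smoothing transformation.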
A Neural Networks Committee for the Contextual Bandit Problem
This paper presents a new contextual bandit algorithm, NeuralBandit, which
does not require any stationarity assumption on contexts and rewards. Several
neural networks are trained to model the value of rewards given the
context. Two variants, based on a multi-expert approach, are proposed to choose
online the parameters of the multi-layer perceptrons. The proposed algorithms are
successfully tested on a large dataset with and without stationarity of
rewards.
Comment: 21st International Conference on Neural Information Processing
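A minimal sketch of the reward-modeling idea only, not the paper's exact architecture or its multi-expert variants: one small multi-layer perceptron per arm is trained online by SGD to predict the reward from the context, and the arm with the highest prediction is played. `TinyMLP`, `choose_arm`, and the small exploration rate are illustrative assumptions.

```python
import numpy as np

class TinyMLP:
    """One-hidden-layer perceptron trained online with SGD on squared loss."""
    def __init__(self, d, hidden, lr, rng):
        self.W1 = rng.normal(0.0, 0.1, (hidden, d))
        self.w2 = rng.normal(0.0, 0.1, hidden)
        self.lr = lr

    def predict(self, x):
        self.h = np.tanh(self.W1 @ x)   # cache hidden activations
        return float(self.w2 @ self.h)

    def update(self, x, y):
        err = self.predict(x) - y                     # d(loss)/d(output)
        grad_h = err * self.w2 * (1.0 - self.h ** 2)  # backprop through tanh
        self.w2 -= self.lr * err * self.h
        self.W1 -= self.lr * np.outer(grad_h, x)

def choose_arm(nets, x, rng, eps=0.05):
    """Play the arm whose network predicts the highest reward (eps-greedy)."""
    if rng.random() < eps:
        return int(rng.integers(len(nets)))
    return int(np.argmax([net.predict(x) for net in nets]))

# One round: observe context, pick an arm, learn from its reward.
rng = np.random.default_rng(0)
nets = [TinyMLP(d=10, hidden=16, lr=0.05, rng=rng) for _ in range(4)]
x = rng.normal(size=10)
arm = choose_arm(nets, x, rng)
nets[arm].update(x, 1.0)   # placeholder reward observed from the environment
```

Because each network only ever learns a function of the current context, nothing here assumes the context or reward distributions are stationary; the paper's multi-expert variants additionally adapt the networks' hyperparameters online.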
Context Attentive Bandits: Contextual Bandit with Restricted Context
We consider a novel formulation of the multi-armed bandit model, which we
call the contextual bandit with restricted context, where only a limited number
of features can be accessed by the learner at every iteration. This novel
formulation is motivated by different online problems arising in clinical
trials, recommender systems and attention modeling. Herein, we adapt the
standard multi-armed bandit algorithm known as Thompson Sampling to take
advantage of our restricted context setting, and propose two novel algorithms,
called the Thompson Sampling with Restricted Context (TSRC) and the Windows
Thompson Sampling with Restricted Context (WTSRC), for handling stationary and
nonstationary environments, respectively. Our empirical results demonstrate
advantages of the proposed approaches on several real-life datasets.
Comment: IJCAI 2017
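A rough sketch of the restricted-context idea, under assumptions not stated in the abstract (binary rewards and a fixed budget of k observable features): a Beta-Bernoulli Thompson Sampling layer picks which features to observe, then linear Gaussian Thompson Sampling picks the arm using only that subset. The function names and the credit-assignment rule are illustrative choices, not necessarily the paper's, and the windowed nonstationary variant (WTSRC) is omitted.

```python
import numpy as np

def tsrc_select(A, b, S, F, x_full, k, rng, v=0.25):
    """One round: sample feature relevance (Beta TS), observe only the
    top-k features, then run linear Gaussian TS per arm on that subset."""
    theta = rng.beta(S + 1.0, F + 1.0)   # sampled per-feature relevance
    feats = np.argsort(theta)[-k:]       # restricted context indices
    x = x_full[feats]                    # the only features we get to see
    scores = []
    for Aa, ba in zip(A, b):             # one (A, b) pair per arm
        Ai = np.linalg.inv(Aa[np.ix_(feats, feats)])
        mu = rng.multivariate_normal(Ai @ ba[feats], v * Ai)
        scores.append(float(mu @ x))
    return int(np.argmax(scores)), feats, x

def tsrc_update(A, b, S, F, arm, feats, x, reward):
    """Update the played arm's linear model and the feature scores
    (binary reward assumed for the Beta-Bernoulli layer)."""
    A[arm][np.ix_(feats, feats)] += np.outer(x, x)
    b[arm][feats] += reward * x
    S[feats] += reward        # successes credited to observed features
    F[feats] += 1 - reward    # failures likewise

# Initialization for d features and n_arms arms.
d, n_arms, k = 20, 5, 5
A = [np.eye(d) for _ in range(n_arms)]
b = [np.zeros(d) for _ in range(n_arms)]
S, F = np.zeros(d), np.zeros(d)
```

The design choice here is to treat feature selection itself as a bandit problem layered on top of the arm-selection bandit, so exploration happens both over which features to pay for and over which arm to play.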