From Bandits to Experts: On the Value of Side-Observations
We consider an adversarial online learning setting where a decision maker can
choose an action in every stage of the game. In addition to observing the
reward of the chosen action, the decision maker gets side observations on the
reward he would have obtained had he chosen some of the other actions. The
observation structure is encoded as a graph, where node i is linked to node j
if sampling i provides information on the reward of j. This setting naturally
interpolates between the well-known "experts" setting, where the decision maker
can view all rewards, and the multi-armed bandits setting, where the decision
maker can only view the reward of the chosen action. We develop practical
algorithms with provable regret guarantees, which depend on non-trivial
graph-theoretic properties of the information feedback structure. We also
provide partially-matching lower bounds.

Comment: Presented at the NIPS 2011 conference.
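The feedback graph described above can be made concrete with a small simulation. Below is a minimal sketch (not the paper's algorithm) of an Exp3-style learner with graph side-observations: playing arm i reveals the rewards of all arms in adj[i], and each revealed reward is importance-weighted by the probability it was observed. A complete graph recovers the experts setting; self-loops only recover plain bandits. All names and parameter values here are illustrative.

```python
import random
import math

def exp3_graph(adj, rewards, T, eta=0.1):
    """Exp3-style sketch with graph side-observations.

    adj[i]: set of arms whose reward is revealed when arm i is played
    (always including i itself). rewards: fixed mean reward per arm,
    used here as a deterministic toy environment.
    """
    K = len(adj)
    w = [1.0] * K
    total = 0.0
    for _ in range(T):
        s = sum(w)
        p = [wi / s for wi in w]
        i = random.choices(range(K), weights=p)[0]  # sample an arm
        total += rewards[i]
        for j in adj[i]:
            # probability that arm j's reward is observed this round
            q_j = sum(p[k] for k in range(K) if j in adj[k])
            est = rewards[j] / q_j        # importance-weighted estimate
            w[j] *= math.exp(eta * est)
    return total / T

random.seed(0)
# complete (experts) graph over 3 arms: every pull reveals all rewards
adj = [{0, 1, 2}, {0, 1, 2}, {0, 1, 2}]
avg = exp3_graph(adj, rewards=[0.2, 0.8, 0.5], T=2000)
```

With the complete graph every q_j equals 1, so the update reduces to the Hedge/experts rule and the average reward approaches that of the best arm (0.8); replacing adj with self-loops only would make the learner a standard bandit algorithm.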
An Optimal Online Method of Selecting Source Policies for Reinforcement Learning
Transfer learning significantly accelerates the reinforcement learning
process by exploiting relevant knowledge from previous experiences. The problem
of optimally selecting source policies during the learning process is of great
importance yet challenging. There has been little theoretical analysis of this
problem. In this paper, we develop an optimal online method to select source
policies for reinforcement learning. This method formulates online source
policy selection as a multi-armed bandit problem and augments Q-learning with
policy reuse. We provide theoretical guarantees of the optimal selection
process and convergence to the optimal policy. In addition, we conduct
experiments on a grid-based robot navigation domain to demonstrate its
efficiency and robustness compared to a state-of-the-art transfer
learning method.
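The core idea of framing online source-policy selection as a multi-armed bandit can be sketched as follows. This is an illustrative stand-in, not the paper's method: each candidate source policy is treated as a bandit arm, one "pull" runs a learning episode reusing that policy, and a UCB1 rule balances exploring policies against exploiting the best one so far. The function run_episode and the toy returns are assumptions.

```python
import math
import random

def ucb1_policy_selection(run_episode, n_sources, episodes):
    """UCB1 sketch over candidate source policies.

    run_episode(k) is assumed to run one episode while reusing source
    policy k and return the episode's return.
    """
    counts = [0] * n_sources
    sums = [0.0] * n_sources
    history = []
    for t in range(1, episodes + 1):
        if t <= n_sources:            # play each source policy once first
            k = t - 1
        else:
            k = max(range(n_sources),
                    key=lambda a: sums[a] / counts[a]
                    + math.sqrt(2 * math.log(t) / counts[a]))
        r = run_episode(k)
        counts[k] += 1
        sums[k] += r
        history.append(k)
    return history

random.seed(0)
# toy stand-in: source policy 2 yields the highest average return
means = [0.3, 0.5, 0.9]
picks = ucb1_policy_selection(lambda k: random.gauss(means[k], 0.1), 3, 300)
best_share = picks.count(2) / len(picks)
```

In the paper's setting the episode return would come from the Q-learning agent augmented with policy reuse; the bandit layer only decides which source policy to reuse in each episode.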
Unimodal Thompson Sampling for Graph-Structured Arms
We study, to the best of our knowledge, the first Bayesian algorithm for
unimodal Multi-Armed Bandit (MAB) problems with graph structure. In this
setting, each arm corresponds to a node of a graph and each edge provides a
relationship, unknown to the learner, between two nodes in terms of expected
reward. Furthermore, for any node of the graph there is a path leading to the
unique node providing the maximum expected reward, along which the expected
reward is monotonically increasing. Previous results on this setting describe
the behavior of frequentist MAB algorithms. In our paper, we design a Thompson
Sampling-based algorithm whose asymptotic pseudo-regret matches the lower bound
for the considered setting. We show that, as happens in a wide range of
scenarios, Bayesian MAB algorithms dramatically outperform frequentist ones. In
particular, we provide a thorough experimental evaluation of the performance of
our algorithm and state-of-the-art algorithms as the properties of the graph vary.
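The unimodal structure above can be exploited by restricting Thompson Sampling to the neighborhood of the current empirical leader, since monotonically increasing rewards along a path guarantee a local ascent reaches the global optimum. The sketch below is illustrative only (Bernoulli rewards, Beta(1,1) priors), not the paper's exact algorithm; the graph and means are toy assumptions.

```python
import random

def unimodal_ts(adj, true_means, T):
    """Thompson Sampling restricted to the empirical leader's neighborhood.

    adj[i]: set of neighbors of node i. Rewards are Bernoulli with
    Beta(1,1) priors; a[i]/b[i] track posterior successes/failures + 1.
    """
    K = len(adj)
    a = [1] * K
    b = [1] * K
    pulls = [0] * K
    for _ in range(T):
        # leader: node with the highest posterior mean reward
        leader = max(range(K), key=lambda i: a[i] / (a[i] + b[i]))
        cand = {leader} | adj[leader]
        # Thompson step among the leader and its neighbors only
        i = max(cand, key=lambda j: random.betavariate(a[j], b[j]))
        r = 1 if random.random() < true_means[i] else 0
        a[i] += r
        b[i] += 1 - r
        pulls[i] += 1
    return pulls

random.seed(1)
# line graph 0-1-2-3 with unimodal means peaking at node 2
adj = [{1}, {0, 2}, {1, 3}, {2}]
pulls = unimodal_ts(adj, [0.2, 0.5, 0.9, 0.6], T=2000)
```

Because the means are unimodal along the line graph, the leader climbs toward node 2 and the learner concentrates its pulls there without ever sampling globally over all arms.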