Top-k Combinatorial Bandits with Full-Bandit Feedback
Top-$k$ combinatorial bandits generalize multi-armed bandits: at each round, any subset of $k$ out of $n$ arms may be chosen, and the sum of their rewards is gained. We address full-bandit feedback, in which the agent observes only the sum of rewards, in contrast to semi-bandit feedback, in which the agent also observes the individual arms' rewards. We present the Combinatorial Successive Accepts and Rejects (CSAR) algorithm, which generalizes SAR (Bubeck et al., 2013) to top-$k$ combinatorial bandits. Our main contribution is an efficient sampling scheme that uses Hadamard matrices to accurately estimate the individual arms' expected rewards. We discuss two variants of the algorithm: the first minimizes the sample complexity and the second minimizes the regret. We also prove a lower bound on the sample complexity, which is tight for certain values of $k$. Finally, we run experiments showing that our algorithm outperforms other methods.
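The Hadamard-matrix idea can be illustrated with a small, simplified sketch (not the paper's exact CSAR scheme, which plays subsets of size exactly $k$; the reward vector below is made up): query the subsets of arms given by the $+1$ entries of a Sylvester Hadamard matrix, observe only the sum of rewards for each subset, and solve the resulting linear system to recover per-arm estimates.

```python
import numpy as np

def sylvester_hadamard(n):
    # Sylvester construction; n must be a power of 2.
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Hypothetical per-arm expected rewards (unknown to the agent).
mu = np.array([0.2, 0.9, 0.5, 0.4])
n = len(mu)

H = sylvester_hadamard(n)
B = (H + 1) // 2     # B[i, j] = 1 iff arm j is in the i-th queried subset

# Full-bandit feedback: for each subset we observe only the sum of rewards
# (noiseless here for illustration; real feedback would be noisy).
sums = B @ mu

# Recover the individual means by solving the linear system B @ mu_hat = sums.
mu_hat = np.linalg.solve(B, sums)
print(mu_hat)
```

With noisy observations one would average repeated plays of each subset (or use least squares); the point of the Hadamard structure is that the subset-indicator matrix is well-conditioned, so per-arm estimates can be extracted from sum-only feedback.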
Explicit Best Arm Identification in Linear Bandits Using No-Regret Learners
We study the problem of best arm identification in linearly parameterised multi-armed bandits. Given a set of feature vectors $\mathcal{X} \subset \mathbb{R}^d$, a confidence parameter $\delta \in (0,1)$, and an unknown parameter vector $\theta^* \in \mathbb{R}^d$, the goal is to identify $x^* := \arg\max_{x \in \mathcal{X}} \langle x, \theta^* \rangle$, with probability at least $1-\delta$, using noisy measurements of the form $\langle x, \theta^* \rangle + \eta$. For this fixed-confidence ($\delta$-PAC) setting, we propose an explicitly implementable and provably order-optimal sample-complexity algorithm to solve this problem. Previous approaches rely on access to minimax optimization oracles. The algorithm, which we call the \textit{Phased Elimination Linear Exploration Game} (PELEG), maintains a high-probability confidence ellipsoid containing $\theta^*$ in each round and uses it to eliminate suboptimal arms in phases. PELEG achieves fast shrinkage of this confidence ellipsoid along the most confusing (i.e., close to, but not optimal) directions by interpreting the problem as a two-player zero-sum game and sequentially converging to its saddle point using low-regret learners to compute the players' strategies in each round. We analyze the sample complexity of PELEG and show that it matches, up to order, an instance-dependent lower bound on sample complexity in the linear bandit setting. We also provide numerical results for the proposed algorithm that are consistent with its theoretical guarantees.
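The game-theoretic ingredient, two low-regret learners jointly converging to the saddle point of a zero-sum game, can be sketched in isolation. This is a generic multiplicative-weights illustration, not PELEG's actual exploration game, and the payoff matrix is made up: the classical guarantee is that the time-averaged strategies of two no-regret players have exploitability (duality gap) shrinking with the average regret.

```python
import numpy as np

# Hypothetical zero-sum payoff matrix (row player maximizes, column minimizes).
# Its mixed equilibrium is x = y = (1/3, 2/3) with game value 1/3.
A = np.array([[3.0, -1.0],
              [-1.0, 1.0]])

T, eta = 20000, 0.01
x = np.ones(2) / 2            # row player's mixed strategy (Hedge weights)
y = np.ones(2) / 2            # column player's mixed strategy
x_avg = np.zeros(2)
y_avg = np.zeros(2)

for _ in range(T):
    x_avg += x / T
    y_avg += y / T
    # Each player observes its payoff vector against the opponent's
    # current strategy and reweights exponentially (multiplicative weights).
    gx = A @ y                # payoffs to the row player's actions
    gy = x @ A                # losses of the column player's actions
    x = x * np.exp(eta * gx);  x /= x.sum()
    y = y * np.exp(-eta * gy); y /= y.sum()

# Duality gap of the averaged strategies: bounded by the sum of the two
# players' average regrets, so it vanishes as T grows.
gap = (A @ y_avg).max() - (x_avg @ A).min()
print(x_avg, y_avg, gap)
```

PELEG uses this mechanism with game payoffs built from the confidence ellipsoid, so that driving the gap down forces fast shrinkage along the most confusing directions; the sketch above only demonstrates the underlying no-regret-to-saddle-point convergence.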