4,959 research outputs found
Explore no more: Improved high-probability regret bounds for non-stochastic bandits
This work addresses the problem of regret minimization in non-stochastic
multi-armed bandit problems, focusing on performance guarantees that hold with
high probability. Such results are rather scarce in the literature since
proving them requires a great deal of technical effort and significant
modifications to the standard, more intuitive algorithms that come only with
guarantees that hold on expectation. One of these modifications is forcing the
learner to sample arms from the uniform distribution at least $\Omega(\sqrt{T})$
times over $T$ rounds, which can adversely affect
performance if many of the arms are suboptimal. While it is widely conjectured
that this property is essential for proving high-probability regret bounds, we
show in this paper that it is possible to achieve such strong results without
this undesirable exploration component. Our result relies on a simple and
intuitive loss-estimation strategy called Implicit eXploration (IX) that allows
a remarkably clean analysis. To demonstrate the flexibility of our technique,
we derive several improved high-probability bounds for various extensions of
the standard multi-armed bandit framework. Finally, we conduct a simple
experiment that illustrates the robustness of our implicit exploration
technique.

Comment: To appear at NIPS 2015
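The IX estimator admits a very compact implementation. Below is a minimal, hypothetical sketch of an Exp3-style learner using the IX loss estimate $\hat{\ell}_{t,i} = \frac{\ell_{t,i}}{p_{t,i}+\gamma}\,\mathbb{1}\{I_t=i\}$; the function name, parameters, and loss-matrix setup are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def exp3_ix(loss_matrix, eta, gamma, seed=0):
    """Sketch of an Exp3-IX-style learner (names and setup are assumed).

    loss_matrix: (T, K) array of losses in [0, 1]; only the pulled
    arm's loss is observed each round, as in bandit feedback.
    """
    rng = np.random.default_rng(seed)
    T, K = loss_matrix.shape
    cum_est = np.zeros(K)                # cumulative IX loss estimates
    arms = np.empty(T, dtype=int)
    for t in range(T):
        # Plain exponential weights -- no forced uniform exploration.
        w = np.exp(-eta * (cum_est - cum_est.min()))  # shift for stability
        p = w / w.sum()
        i = rng.choice(K, p=p)
        arms[t] = i
        # IX estimate: bias the importance weight by gamma to keep the
        # estimator's variance small (the "implicit" exploration).
        cum_est[i] += loss_matrix[t, i] / (p[i] + gamma)
    return arms
```

In the paper's tuning, $\gamma$ is chosen proportional to the learning rate (e.g. $\gamma = \eta/2$), which is what yields the high-probability guarantees without any explicit uniform-sampling component.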
Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization
In this paper, we consider the problem of sequentially optimizing a black-box
function $f$ based on noisy samples and bandit feedback. We assume that $f$ is
smooth in the sense of having a bounded norm in some reproducing kernel Hilbert
space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian
process bandit optimization. We provide algorithm-independent lower bounds on
the simple regret, measuring the suboptimality of a single point reported after
$T$ rounds, and on the cumulative regret, measuring the sum of regrets over the
$T$ chosen points. For the isotropic squared-exponential kernel in $d$
dimensions, we find that an average simple regret of $\epsilon$ requires
$T = \Omega\big(\frac{1}{\epsilon^2}(\log\frac{1}{\epsilon})^{d/2}\big)$, and
the average cumulative regret is at least
$\Omega\big(\sqrt{T(\log T)^{d/2}}\big)$, thus matching existing upper bounds
up to the replacement of $d/2$ by $2d + O(1)$ in both cases. For the
Mat\'ern-$\nu$ kernel, we give analogous bounds of the form
$\Omega\big(\frac{1}{\epsilon^{2+d/\nu}}\big)$ and
$\Omega\big(T^{\frac{\nu+d}{2\nu+d}}\big)$, and discuss the resulting
gaps to the existing upper bounds.

Comment: Appearing in COLT 2017. This version corrects a few minor mistakes in
Table I, which summarizes the new and existing regret bounds.
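The scaling of these lower bounds can be made concrete with a few lines of arithmetic. The sketch below evaluates the two sample-complexity expressions; the function names and the constant $c$ are assumptions for illustration, since the paper's bounds hold only up to unspecified constants.

```python
import math

def se_lower_bound(eps, d, c=1.0):
    # T = Omega( (1/eps^2) * (log 1/eps)^(d/2) ) for the squared-exponential kernel
    return c * eps**-2 * math.log(1.0 / eps) ** (d / 2)

def matern_lower_bound(eps, d, nu, c=1.0):
    # T = Omega( (1/eps)^(2 + d/nu) ) for the Matern-nu kernel
    return c * (1.0 / eps) ** (2 + d / nu)

for eps in (1e-1, 1e-2, 1e-3):
    print(f"eps={eps:.0e}  SE(d=2): {se_lower_bound(eps, 2):.2e}  "
          f"Matern(d=2, nu=3/2): {matern_lower_bound(eps, 2, 1.5):.2e}")
```

The comparison makes the qualitative message visible: the Matérn requirement grows polynomially faster in $1/\epsilon$, whereas the squared-exponential kernel pays only a logarithmic factor over the noisy-observation baseline of $1/\epsilon^2$.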
Learning Algorithms for Minimizing Queue Length Regret
We consider a system consisting of a single transmitter/receiver pair and
$N$ channels over which they may communicate. Packets randomly arrive to the
transmitter's queue and wait to be successfully sent to the receiver. The
transmitter may attempt a frame transmission on one channel at a time, where
each frame includes a packet if one is in the queue. For each channel, an
attempted transmission is successful with an unknown probability. The
transmitter's objective is to quickly identify the best channel to minimize the
number of packets in the queue over $T$ time slots. To analyze system
performance, we introduce queue length regret, which is the expected difference
between the total queue length of a learning policy and a controller that knows
the rates, a priori. One approach to designing a transmission policy would be
to apply algorithms from the literature that solve the closely-related
stochastic multi-armed bandit problem. These policies would focus on maximizing
the number of successful frame transmissions over time. However, we show that
these methods have $\Omega(\log T)$ queue length regret. On the other hand, we
show that there exists a set of queue-length based policies that can obtain
order optimal $O(1)$ queue length regret. We use our theoretical analysis to
devise heuristic methods that are shown to perform well in simulation.

Comment: 28 pages, 11 figures
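The gap between $\Omega(\log T)$ and $O(1)$ queue length regret is easy to probe empirically. Below is a minimal simulation sketch under assumed Bernoulli arrivals; the queue model follows the abstract, but the UCB1 transmitter and all names are illustrative stand-ins, not the paper's policies.

```python
import numpy as np

def simulate(policy, mu, p_arrival, T, seed=0):
    """Total queue length over T slots for a channel-selection policy.

    mu: unknown per-channel success probabilities; the policy sees only
    its own pull/success counts, as in the bandit feedback model.
    """
    rng = np.random.default_rng(seed)
    N = len(mu)
    pulls, succ = np.zeros(N), np.zeros(N)
    q, total = 0, 0
    for t in range(T):
        q += rng.random() < p_arrival        # Bernoulli packet arrival
        c = policy(t, q, pulls, succ)        # choose a channel
        pulls[c] += 1
        if rng.random() < mu[c]:             # transmission succeeds
            succ[c] += 1
            q = max(q - 1, 0)                # frame carried a packet, if any
        total += q
    return total

def ucb1(t, q, pulls, succ):
    """Classical bandit view: maximize successful transmissions."""
    if (pulls == 0).any():
        return int(np.argmax(pulls == 0))    # pull each channel once
    return int(np.argmax(succ / pulls + np.sqrt(2 * np.log(t + 1) / pulls)))

print(simulate(ucb1, mu=[0.3, 0.7, 0.9], p_arrival=0.5, T=10_000))
```

Running the same simulation with a genie policy that always picks the best channel, and subtracting its total from the learner's, gives an empirical estimate of the queue length regret described in the abstract.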