Adaptation to Easy Data in Prediction with Limited Advice
We derive an online learning algorithm with improved regret guarantees for
`easy' loss sequences. We consider two types of `easiness': (a) stochastic loss
sequences and (b) adversarial loss sequences with small effective range of the
losses. While a number of algorithms have been proposed for exploiting small
effective range in the full information setting, Gerchinovitz and Lattimore
[2016] have shown the impossibility of regret scaling with the effective range
of the losses in the bandit setting. We show that just one additional
observation per round is sufficient to circumvent the impossibility result. The
proposed Second Order Difference Adjustments (SODA) algorithm requires no prior
knowledge of the effective range of the losses, $\varepsilon$, and achieves an
$O(\varepsilon\sqrt{KT\ln K}) + \tilde{O}(\varepsilon K\sqrt[4]{T})$ expected
regret guarantee, where $T$ is the time horizon and $K$ is the number of
actions. The scaling with the effective loss range is achieved under
significantly weaker assumptions than those made by Cesa-Bianchi and Shamir
[2018] in an earlier attempt to circumvent the impossibility result. We also
provide a regret lower bound of $\Omega(\varepsilon\sqrt{TK})$, which almost
matches the upper bound. In addition, we show that in the stochastic setting
SODA achieves an $O\big(\sum_{a:\Delta_a>0} K^3\varepsilon^2/\Delta_a\big)$
pseudo-regret bound that holds simultaneously
with the adversarial regret guarantee. In other words, SODA is safe against an
unrestricted oblivious adversary and provides improved regret guarantees for at
least two different types of `easiness' simultaneously.
Comment: Fixed a mistake in the proof and statement of Theorem
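To make the extra-observation idea concrete, here is a minimal Python sketch. It is not SODA itself: it drops the second-order adjustments and the adaptive learning rate, and the helper `limited_advice_round` and the fixed rate `eta` are our own illustrative choices. It only shows how one additional uniformly drawn observation per round yields an importance-weighted estimate of a loss difference, whose scale is governed by the effective range of the losses rather than their absolute magnitude.

```python
import numpy as np

def limited_advice_round(losses, p, rng):
    """One round with a single extra observation (illustrative only)."""
    K = len(p)
    a = rng.choice(K, p=p)     # arm actually played and suffered
    b = rng.integers(K)        # the one additional observation per round
    # Importance-weighted estimate of the difference losses[b] - losses[a];
    # differences are invariant to a common shift of all losses, so the
    # estimator scales with the effective range, not the loss magnitude.
    diff_hat = np.zeros(K)
    diff_hat[b] = K * (losses[b] - losses[a])
    return a, diff_hat

rng = np.random.default_rng(0)
K, T, eta = 5, 10_000, 0.05             # fixed eta; SODA tunes this adaptively
p = np.full(K, 1.0 / K)
cum = np.zeros(K)                       # cumulative difference estimates
for t in range(T):
    losses = 0.5 + 0.1 * rng.random(K)  # losses with small effective range
    a, diff_hat = limited_advice_round(losses, p, rng)
    cum += diff_hat
    w = np.exp(-eta * (cum - cum.min()))
    p = w / w.sum()                     # exponential-weights update
```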
Beating Stochastic and Adversarial Semi-bandits Optimally and Simultaneously
We develop the first general semi-bandit algorithm that simultaneously
achieves $O(\log T)$ regret for stochastic environments and $O(\sqrt{T})$
regret for adversarial environments without knowledge
of the regime or the number of rounds $T$. The leading problem-dependent
constants of our bounds are not only optimal in some worst-case sense studied
previously, but also optimal for two concrete instances of semi-bandit
problems. Our algorithm and analysis extend the recent work of (Zimmert &
Seldin, 2019) for the special case of multi-armed bandits, but importantly
require a novel hybrid regularizer designed specifically for the semi-bandit setting.
Experimental results on synthetic data show that our algorithm indeed performs
well uniformly over different environments. We finally provide a preliminary
extension of our results to the full bandit feedback setting.
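For context, the multi-armed bandit algorithm of Zimmert & Seldin (2019) that this work builds on admits a compact sketch: a follow-the-regularized-leader step with the 1/2-Tsallis entropy, whose normalization constant is found numerically. The Python sketch below is ours, assuming the schedule $\eta_t = 1/\sqrt{t}$ and a simple bisection solver; the paper's semi-bandit algorithm replaces this regularizer with its novel hybrid regularizer and optimizes over a combinatorial action set instead of the simplex.

```python
import numpy as np

def tsallis_probs(L_hat, eta, iters=100):
    """FTRL with 1/2-Tsallis entropy: p_i = (2*eta*(L_hat[i] + lam))**-2,
    with lam > -min(L_hat) found by bisection so that p sums to one."""
    mass = lambda lam: np.sum((2.0 * eta * (L_hat + lam)) ** -2.0)
    lo = -L_hat.min() + 1e-12          # total mass is huge near this endpoint
    hi = lo + 1.0
    while mass(hi) > 1.0:              # widen the bracket until mass < 1
        hi += hi - lo
    for _ in range(iters):             # mass(lam) is decreasing in lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mass(mid) > 1.0 else (lo, mid)
    p = (2.0 * eta * (L_hat + hi)) ** -2.0
    return p / p.sum()

rng = np.random.default_rng(1)
K, T = 4, 5_000
mu = np.array([0.4, 0.5, 0.5, 0.6])    # stochastic Bernoulli loss means
L_hat = np.zeros(K)                     # cumulative loss estimates
for t in range(1, T + 1):
    p = tsallis_probs(L_hat, eta=1.0 / np.sqrt(t))
    a = rng.choice(K, p=p)
    loss = float(rng.random() < mu[a])
    L_hat[a] += loss / p[a]             # importance-weighted, unbiased
```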
Banker Online Mirror Descent
We propose Banker-OMD, a novel framework generalizing the classical Online
Mirror Descent (OMD) technique in online learning algorithm design. Banker-OMD
allows algorithms to robustly handle delayed feedback, and offers a general
methodology for achieving $\tilde{O}(\sqrt{T} + \sqrt{D})$-style regret bounds
in various delayed-feedback online learning tasks, where $T$ is the time
horizon length and $D$ is the total feedback delay. We demonstrate the power of
Banker-OMD with applications to three important bandit scenarios with delayed
feedback, including delayed adversarial Multi-armed bandits (MAB), delayed
adversarial linear bandits, and a novel delayed best-of-both-worlds MAB
setting. Banker-OMD achieves nearly-optimal performance in all three
settings. In particular, it leads to the first delayed adversarial linear
bandit algorithm achieving $\tilde{O}(\mathrm{poly}(n)(\sqrt{T} + \sqrt{D}))$
regret.
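To make the interplay of the horizon $T$ and the total delay $D$ concrete, here is a minimal delayed-feedback MAB loop in Python. It only illustrates the problem interface, using a plain Exp3-style update applied whenever feedback arrives; the fixed rate `eta` and the delay distribution are our assumptions, and the actual Banker-OMD mechanism for rebalancing OMD steps across outstanding feedback is not shown.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)
K, T = 5, 2_000
eta = 0.05                                # fixed rate, chosen for illustration
cum = np.zeros(K)                         # cumulative loss estimates
pending = defaultdict(list)               # arrival round -> feedback due then
D = 0                                     # running total feedback delay
for t in range(T):
    w = np.exp(-eta * (cum - cum.min()))
    p = w / w.sum()
    a = rng.choice(K, p=p)                # play an arm
    loss = rng.random()                   # adversarial loss of the played arm
    d = int(rng.integers(0, 10))          # this round's feedback delay d_t
    D += d
    pending[t + d].append((a, loss / p[a]))   # importance-weighted estimate
    for arm, est in pending.pop(t, []):   # apply feedback arriving this round
        cum[arm] += est
```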