We derive an online learning algorithm with improved regret guarantees for
`easy' loss sequences. We consider two types of `easiness': (a) stochastic loss
sequences and (b) adversarial loss sequences with a small effective range of the
losses. While a number of algorithms have been proposed for exploiting a small
effective range in the full information setting, Gerchinovitz and Lattimore
[2016] have shown the impossibility of regret scaling with the effective range
of the losses in the bandit setting. We show that just one additional
observation per round is sufficient to circumvent the impossibility result. The
proposed Second Order Difference Adjustments (SODA) algorithm requires no prior
knowledge of the effective range of the losses, $\varepsilon$, and achieves an
$O(\varepsilon\sqrt{KT\ln K}) + \tilde{O}(\varepsilon K\sqrt[4]{T})$
expected regret guarantee, where $T$ is the time horizon and $K$ is the number
of actions. The scaling with the effective loss range is achieved under
significantly weaker assumptions than those made by Cesa-Bianchi and Shamir
[2018] in an earlier attempt to circumvent the impossibility result. We also
provide a regret lower bound of $\Omega(\varepsilon\sqrt{TK})$, which almost
matches the upper bound. In addition, we show that in the stochastic setting
SODA achieves an $O\big(\sum_{a:\Delta_a>0} \frac{K^3\varepsilon^2}{\Delta_a}\big)$ pseudo-regret bound that holds simultaneously
with the adversarial regret guarantee. In other words, SODA is safe against an
unrestricted oblivious adversary and provides improved regret guarantees for at
least two different types of `easiness' simultaneously.
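To see why the upper and lower bounds nearly match, one can compare the two terms of the upper bound directly (a short derivation from the stated rates, not taken from the paper's proofs): the second term is of lower order in $T$, since
\[
\frac{\varepsilon K \sqrt[4]{T}}{\varepsilon \sqrt{KT \ln K}}
  = \sqrt{\frac{K}{\sqrt{T}\,\ln K}} \xrightarrow[T \to \infty]{} 0 ,
\]
so for large $T$ the expected regret is dominated by $O(\varepsilon\sqrt{KT\ln K})$, which exceeds the $\Omega(\varepsilon\sqrt{TK})$ lower bound by only a $\sqrt{\ln K}$ factor.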