We consider the adversarial linear contextual bandit problem, where the loss
vectors are selected fully adversarially and the per-round action set (i.e. the
context) is drawn from a fixed distribution. Existing methods for this problem
either require access to a simulator to generate free i.i.d. contexts, achieve
a sub-optimal regret no better than Õ(T^{5/6}), or are
computationally inefficient. We greatly improve these results by achieving a
regret of Õ(√T) without a simulator, while maintaining
computational efficiency when the action set in each round is small. In the
special case of sleeping bandits with adversarial loss and stochastic arm
availability, our result affirmatively answers the open question of Saha et al.
[2020] on whether there exists a polynomial-time algorithm with
poly(d)√T regret. Our approach naturally handles the case where the
loss is linear up to an additive misspecification error, and our regret shows
near-optimal dependence on the magnitude of the error.