Explicit Best Arm Identification in Linear Bandits Using No-Regret Learners

We study the problem of best arm identification in linearly parameterised multi-armed bandits. Given a set of feature vectors $\mathcal{X}\subset\mathbb{R}^d$, a confidence parameter $\delta$ and an unknown vector $\theta^*$, the goal is to identify $\arg\max_{x\in\mathcal{X}} x^T\theta^*$, with probability at least $1-\delta$, using noisy measurements of the form $x^T\theta^*$. For this fixed-confidence ($\delta$-PAC) setting, we propose an explicitly implementable algorithm whose sample complexity is provably order-optimal; previous approaches rely on access to minimax optimization oracles. The algorithm, which we call the Phased Elimination Linear Exploration Game (PELEG), maintains a high-probability confidence ellipsoid containing $\theta^*$ in each round and uses it to eliminate suboptimal arms in phases. PELEG achieves fast shrinkage of this confidence ellipsoid along the most confusing (i.e., close to, but not, optimal) directions by interpreting the problem as a two-player zero-sum game and sequentially converging to its saddle point, using low-regret learners to compute the players' strategies in each round. We analyze the sample complexity of PELEG and show that it matches, up to order, an instance-dependent lower bound on sample complexity in the linear bandit setting. We also provide numerical results for the proposed algorithm, consistent with its theoretical guarantees.
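As a concrete illustration of the mechanism described above, the following is a minimal Python sketch, not the authors' PELEG implementation: a phased-elimination loop whose inner step approximates the saddle point of the exploration game $\min_\lambda \max_{y = x - x'} \|y\|^2_{A(\lambda)^{-1}}$, with $A(\lambda) = \sum_x \lambda_x x x^T$, by pitting a multiplicative-weights (no-regret) allocation player against a best-responding "most confusing direction" player. The phase schedule, sample-size formula, and learning rate are simplified placeholder assumptions, and exponential weights stands in for whichever specific low-regret learners the paper employs.

```python
# Minimal sketch, NOT the authors' PELEG procedure: phased elimination
# around an inner two-player zero-sum game approximating the saddle point of
#     min_lambda  max_{y = x - x'}  ||y||^2_{A(lambda)^{-1}},
# where A(lambda) = sum_x lambda_x x x^T is the design matrix of the
# sampling allocation lambda. Learning rate, phase accuracies, and the
# sample-size formula are simplified placeholder assumptions.

import numpy as np


def game_allocation(arms, iters=200, eta=0.5, reg=1e-6):
    """Approximate the optimal exploration design via a no-regret game:
    a multiplicative-weights MIN player over arms vs. a best-responding
    MAX player choosing the hardest direction y = x - x'."""
    K, d = arms.shape
    w = np.ones(K) / K          # MIN player's allocation over arms
    avg = np.zeros(K)
    for _ in range(iters):
        A = (arms.T * w) @ arms + reg * np.eye(d)   # A(lambda), ridged
        Ainv = np.linalg.inv(A)
        # MAX player's best response: direction with largest ||y||^2_{A^-1}
        best, y_star = -np.inf, None
        for i in range(K):
            for j in range(K):
                if i != j:
                    y = arms[i] - arms[j]
                    val = y @ Ainv @ y
                    if val > best:
                        best, y_star = val, y
        # MIN player's multiplicative-weights step: the gain of arm x is
        # (x^T A^-1 y)^2, the negated gradient of ||y||^2_{A^-1} in lambda_x
        gains = (arms @ (Ainv @ y_star)) ** 2
        w *= np.exp(eta * gains / (gains.max() + 1e-12))
        w /= w.sum()
        avg += w
    return avg / iters          # average iterate approximates the equilibrium


def phased_elimination(arms, theta, delta=0.05, sigma=0.1, seed=None):
    """Identify argmax_x x^T theta by eliminating arms in phases."""
    rng = np.random.default_rng(seed)
    d = arms.shape[1]
    active = list(range(len(arms)))
    phase = 1
    while len(active) > 1:
        X = arms[active]
        lam = game_allocation(X)
        eps = 2.0 ** (-phase)   # assumed phase accuracy schedule
        n = int(np.ceil(8 * sigma**2 * len(X) * np.log(1 / delta) / eps**2))
        counts = np.ceil(lam * n).astype(int)
        # Pull arms per the design; form the regularized least-squares estimate.
        A, b = 1e-6 * np.eye(d), np.zeros(d)
        for x, c in zip(X, counts):
            rewards = x @ theta + sigma * rng.standard_normal(c)
            A += c * np.outer(x, x)
            b += x * rewards.sum()
        theta_hat = np.linalg.solve(A, b)
        # Eliminate every arm whose estimated gap exceeds the phase accuracy.
        means = X @ theta_hat
        active = [a for a, m in zip(active, means) if means.max() - m <= eps]
        phase += 1
    return active[0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    arms = rng.standard_normal((6, 3))
    theta = rng.standard_normal(3)
    print("identified:", phased_elimination(arms, theta, seed=1),
          "| true best:", int(np.argmax(arms @ theta)))
```

Returning the allocation player's averaged iterates, rather than its last weights, is the standard way to extract an approximate equilibrium strategy from a no-regret dynamic in a zero-sum game.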