In this paper, we study the problem of optimal data collection for policy
evaluation in linear bandits. In policy evaluation, we are given a target
policy and asked to estimate the expected reward it will obtain when executed
in a multi-armed bandit environment. Ours is the first work to study such
optimal data collection strategies for policy evaluation under
heteroscedastic reward noise in the linear bandit setting. We first formulate
an optimal design for weighted least squares estimation in this setting that
reduces the MSE of the estimate of the target policy's value.
Using this formulation, we then derive the optimal allocation of samples per
action during data collection. We next introduce SPEED (Structured Policy
Evaluation Experimental Design), a novel algorithm that tracks the optimal
design, and derive its regret with respect to that design. Finally, we
empirically validate that SPEED leads to policy evaluation with mean squared
error comparable to the oracle strategy and significantly lower than simply
running the target policy.
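To make the core quantities concrete, the following is a minimal illustrative sketch (not the paper's SPEED algorithm) of weighted least squares policy evaluation in a heteroscedastic linear bandit: a toy instance with assumed features, noise variances, and a target policy, a uniform allocation of samples per action, and the plug-in estimate of the target policy's value. All numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: 4 actions with known 2-d features,
# a true parameter, per-action (heteroscedastic) noise variances,
# and a target policy over the actions.
Phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]])
theta_star = np.array([0.3, 0.7])
sigma2 = np.array([0.1, 1.0, 0.5, 0.2])   # per-action noise variances
pi = np.array([0.4, 0.1, 0.3, 0.2])       # target policy

# Collect n samples per action. A uniform allocation is used purely
# for illustration; SPEED instead tracks an optimal-design allocation.
n = 200
X, y, w = [], [], []
for a in range(len(pi)):
    for _ in range(n):
        X.append(Phi[a])
        y.append(Phi[a] @ theta_star + rng.normal(0.0, np.sqrt(sigma2[a])))
        w.append(1.0 / sigma2[a])         # inverse-variance weights
X, y, w = np.array(X), np.array(y), np.array(w)

# Weighted least squares: theta_hat = (X^T W X)^{-1} X^T W y
A = X.T @ (w[:, None] * X)
b = X.T @ (w * y)
theta_hat = np.linalg.solve(A, b)

# Plug-in value estimate: v(pi) = sum_a pi(a) * phi(a)^T theta
v_hat = pi @ (Phi @ theta_hat)
v_true = pi @ (Phi @ theta_star)
print(v_hat, v_true)
```

The MSE of `v_hat` depends on how samples are allocated across actions; the optimal design studied in the paper chooses that allocation to reduce this error, rather than sampling uniformly or by simply running the target policy.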