In applying reinforcement learning (RL) to high-stakes domains, quantitative
and qualitative evaluation using observational data can help practitioners
understand the generalization performance of new policies. However, this type
of off-policy evaluation (OPE) is inherently limited since offline data may not
reflect the distribution shifts resulting from the application of new policies.
On the other hand, online evaluation by collecting rollouts according to the
new policy is often infeasible, as deploying new policies in these domains can
be unsafe. In this work, we propose a semi-offline evaluation framework as an
intermediate step between offline and online evaluation, where human users
provide annotations of unobserved counterfactual trajectories. While it is
tempting to simply augment existing data with such annotations, we show that this naive
approach can lead to biased results. Instead, we design a new family of OPE
estimators, based on importance sampling (IS) and a novel weighting scheme, that
incorporate counterfactual annotations without introducing additional bias. We
analyze the theoretical properties of our approach, showing its potential to
reduce both bias and variance compared to standard IS estimators. Our analyses
reveal important practical considerations for handling biased, noisy, or
missing annotations. In a series of proof-of-concept experiments involving
bandits and a healthcare-inspired simulator, we demonstrate that our approach
outperforms purely offline IS estimators and is robust to imperfect
annotations. Our framework, combined with principled human-centered design of
annotation solicitation, can enable the application of RL in high-stakes
domains.

Comment: 36 pages, 12 figures, 5 tables. NeurIPS 2023. Code available at
https://github.com/MLD3/CounterfactualAnnot-SemiOP
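
As a rough illustration of the setting (a minimal sketch only, not the paper's estimators), the following Python snippet contrasts standard importance sampling for a contextual bandit with a hypothetical variant that blends human counterfactual annotations into the estimate. The blending weight c, the annotation format, and the handling of missing annotations are assumptions made purely for illustration.

import numpy as np

def standard_is(actions, rewards, pi_b, pi_e):
    """Vanilla importance sampling estimate of the evaluation policy's value.

    actions: (n,) logged actions; rewards: (n,) observed rewards;
    pi_b, pi_e: (n, n_actions) action probabilities under the behavior
    and evaluation policies for each logged context.
    """
    idx = np.arange(len(actions))
    weights = pi_e[idx, actions] / pi_b[idx, actions]
    return float(np.mean(weights * rewards))

def naive_annotation_blend(actions, rewards, pi_b, pi_e, annot, c=0.5):
    """Hypothetical (naive) variant: mix the IS term with the evaluation
    policy's expectation over human-annotated counterfactual rewards.

    annot: (n, n_actions) annotated rewards, NaN where no annotation exists.
    This simple blend is NOT the paper's weighting scheme and is generally
    biased when annotations are missing or imperfect.
    """
    idx = np.arange(len(actions))
    is_term = pi_e[idx, actions] / pi_b[idx, actions] * rewards
    observed = ~np.isnan(annot)                       # which annotations exist
    probs = np.where(observed, pi_e, 0.0)             # restrict pi_e to annotated arms
    norm = probs.sum(axis=1, keepdims=True)
    probs = np.divide(probs, norm, out=np.zeros_like(probs), where=norm > 0)
    annot_term = np.nansum(probs * annot, axis=1)     # expected annotated reward
    return float(np.mean((1.0 - c) * is_term + c * annot_term))

# Toy usage: two-armed bandit with known reward means.
rng = np.random.default_rng(0)
n = 10000
pi_b = np.tile([0.7, 0.3], (n, 1))                    # behavior policy
pi_e = np.tile([0.2, 0.8], (n, 1))                    # evaluation policy
actions = (rng.random(n) > 0.7).astype(int)           # actions sampled from pi_b
true_means = np.array([0.4, 0.6])
rewards = rng.binomial(1, true_means[actions]).astype(float)
annot = np.tile(true_means, (n, 1))                   # "perfect" annotations
annot[rng.random((n, 2)) < 0.5] = np.nan              # half are missing at random
print("true value under pi_e:", 0.2 * 0.4 + 0.8 * 0.6)
print("standard IS:", standard_is(actions, rewards, pi_b, pi_e))
print("naive annotation blend:", naive_annotation_blend(actions, rewards, pi_b, pi_e, annot))

In this toy example the naive blend is biased once annotations are missing, which is exactly the pitfall the abstract warns about; the paper's proposed weighting scheme is designed to incorporate annotations without introducing such bias.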