Ranking interfaces are everywhere in online platforms. There is thus an
ever-growing interest in their Off-Policy Evaluation (OPE), aiming towards an
accurate performance evaluation of ranking policies using logged data. A
de facto approach for OPE is Inverse Propensity Scoring (IPS), which provides
an unbiased and consistent value estimate. However, it becomes extremely
inaccurate in the ranking setup due to its high variance under large action
spaces. To deal with this problem, previous studies assume either independent
or cascade user behavior, resulting in ranking-specific variants of IPS. While
these estimators are somewhat effective in reducing the variance, all existing
estimators apply a single universal assumption to every user, causing excessive
bias and variance. Therefore, this work explores a far more general formulation
where user behavior is diverse and can vary depending on the user context. We
show that the resulting estimator, which we call Adaptive IPS (AIPS), can be
unbiased under any complex user behavior. Moreover, AIPS achieves the minimum
variance among all unbiased estimators based on IPS. We further develop a
procedure to identify the appropriate user behavior model to minimize the mean
squared error (MSE) of AIPS in a data-driven fashion. Extensive experiments
demonstrate that the empirical accuracy improvement can be significant,
enabling effective OPE of ranking systems even under diverse user behavior.

Comment: KDD 2023 Research track
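To make the variance problem and the adaptive-weighting idea concrete, the following is a minimal NumPy sketch. It is not the paper's implementation: the per-slot propensities, the slot-0 choice for the "independent" weight, and the `is_independent` behavior label are all synthetic assumptions for illustration. The standard IPS weight multiplies slot-level ratios across the whole ranking (hence its variance grows with ranking length), while an AIPS-style estimator would pick the weight per impression according to an estimated user-behavior model.

```python
import numpy as np

rng = np.random.default_rng(0)


def ips(weights, rewards):
    """Generic IPS value estimate: mean of importance-weighted rewards."""
    return float(np.mean(weights * rewards))


# Synthetic log: n impressions of rankings with K slots.
n, K = 1000, 5

# Hypothetical slot-level propensities under the logging policy (pi0)
# and the evaluation policy (pie); real systems would log these.
pi0_slot = rng.uniform(0.2, 0.8, size=(n, K))
pie_slot = rng.uniform(0.2, 0.8, size=(n, K))
rewards = rng.binomial(1, 0.3, size=n).astype(float)

# Standard IPS treats the whole ranking as one action: the weight is the
# product of slot-level ratios, which is why its variance explodes for
# long rankings.
w_standard = np.prod(pie_slot / pi0_slot, axis=1)

# Under an *independent* user-behavior assumption, only the examined slot's
# propensity matters; slot 0 is used here purely as an illustration.
w_independent = pie_slot[:, 0] / pi0_slot[:, 0]

# AIPS-style idea (sketch): select the weight per impression according to an
# estimated behavior model c(x). `is_independent` is a hypothetical label
# standing in for that model's output.
is_independent = rng.binomial(1, 0.5, size=n).astype(bool)
w_adaptive = np.where(is_independent, w_independent, w_standard)

print("standard IPS estimate:", ips(w_standard, rewards))
print("adaptive IPS estimate:", ips(w_adaptive, rewards))
print("weight variance (standard vs independent):",
      w_standard.var(), w_independent.var())
```

In this sketch the product-form weights (`w_standard`) have far higher variance than the single-ratio weights (`w_independent`), which is the trade-off the abstract describes: simpler behavior assumptions cut variance but add bias when they are wrong, and AIPS avoids committing to one assumption for every user.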