Exponential Smoothing for Off-Policy Learning
Off-policy learning (OPL) aims at finding improved policies from logged
bandit data, often by minimizing the inverse propensity scoring (IPS) estimator
of the risk. In this work, we investigate a smooth regularization for IPS, for
which we derive a two-sided PAC-Bayes generalization bound. The bound is
tractable, scalable, and interpretable, and it provides learning certificates. In
particular, it is also valid for standard IPS without making the assumption
that the importance weights are bounded. We demonstrate the relevance of our
approach and its favorable performance through a set of learning tasks. Since
our bound holds for standard IPS, we are able to provide insight into when
regularizing IPS is useful. Namely, we identify cases where regularization
might not be needed. This goes against the belief that, in practice, clipped
IPS often enjoys more favorable performance than standard IPS in OPL.
Comment: ICML 2023 (Oral and Poster)
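To make the estimators concrete, below is a minimal NumPy sketch of the standard, clipped, and exponentially smoothed IPS risk estimates on toy logged data. The function name, the toy data, and the exact smoothing form are illustrative assumptions, not the paper's code: the α-smoothed weight divides by the logging propensity raised to a power α ∈ [0, 1], consistent with the abstract's smooth regularization, with α = 1 recovering standard IPS, and the clip argument implements the clipped baseline the abstract contrasts against.

```python
import numpy as np

def ips_risk(rewards, target_probs, logging_probs, alpha=1.0, clip=None):
    """Estimate the risk of a target policy from logged bandit data.

    alpha=1.0 gives standard IPS; alpha < 1 smooths the importance weights
    by raising the logging propensities to the power alpha (an illustrative
    formulation of the abstract's smooth regularization); clip, if set,
    caps the weights instead, as in clipped IPS.
    """
    weights = target_probs / logging_probs**alpha
    if clip is not None:
        weights = np.minimum(weights, clip)
    # Risk = expected cost; with rewards in [0, 1], take cost = 1 - reward.
    return np.mean(weights * (1.0 - rewards))

# Toy logged data: n rounds of (reward, target propensity, logging propensity).
rng = np.random.default_rng(0)
n = 10_000
logging_probs = rng.uniform(0.05, 1.0, size=n)
target_probs = rng.uniform(0.0, 1.0, size=n)
rewards = rng.binomial(1, 0.3, size=n).astype(float)

print(ips_risk(rewards, target_probs, logging_probs))             # standard IPS
print(ips_risk(rewards, target_probs, logging_probs, alpha=0.7))  # smoothed
print(ips_risk(rewards, target_probs, logging_probs, clip=10.0))  # clipped
```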
Offline Evaluation of Reward-Optimizing Recommender Systems: The Case of Simulation
Both in academic and industry-based research, online evaluation methods are
seen as the gold standard for interactive applications like recommendation
systems. Naturally, this is because they directly measure utility metrics that
rely on interventions, namely the recommendations shown to users.
Nevertheless, online evaluation methods are costly for a number
of reasons, and a clear need remains for reliable offline evaluation
procedures. In industry, offline metrics are often used as a first-line
evaluation to generate promising candidate models to evaluate online. In
academic work, limited access to online systems makes offline metrics the de
facto approach to validating novel methods. Two classes of offline metrics
exist: proxy-based methods and counterfactual methods. The first class is
often poorly correlated with the online metrics we care about, and the latter
class only provides theoretical guarantees under assumptions that cannot be
fulfilled in real-world environments. Here, we make the case that
simulation-based comparisons provide ways forward beyond offline metrics, and
argue that they are a preferable means of evaluation.
Comment: Accepted at the ACM RecSys 2021 Workshop on Simulation Methods for
Recommender Systems
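As a toy illustration of the simulation-based comparison the abstract argues for, the sketch below is entirely hypothetical: the click model, both policies, and all names are invented for illustration. A simulator with latent click-through rates stands in for the online environment, so the intervention-dependent utility metric (clicks on the recommendations actually shown) can be measured directly for each candidate policy, the way an online A/B test would.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_users = 50, 2_000

# Hypothetical simulator: each item has a latent click-through rate that the
# simulator uses to sample clicks, playing the role of the online environment.
true_ctr = rng.beta(2, 8, size=n_items)

def recommend_popular(user):
    # Proxy-optimizing baseline: always recommends one fixed "popular" item.
    return 0

def recommend_estimated(user):
    # Reward-optimizing policy driven by a noisy estimate of the CTRs.
    return int(np.argmax(true_ctr + rng.normal(0.0, 0.05, n_items)))

def simulate(policy, n_users):
    # Roll the policy against the simulator and measure the utility metric
    # (click rate) directly on the recommendations it intervenes with.
    clicks = sum(rng.random() < true_ctr[policy(u)] for u in range(n_users))
    return clicks / n_users

print("fixed popular item :", simulate(recommend_popular, n_users))
print("reward-optimizing  :", simulate(recommend_estimated, n_users))
```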