Cascade Model-based Propensity Estimation for Counterfactual Learning to Rank
Unbiased counterfactual learning to rank (CLTR) requires click propensities to
compensate, via inverse propensity scoring (IPS), for the difference between
user clicks and the true relevance of search results. Current propensity
estimation methods assume that user click behavior follows the position-based
model (PBM) and estimate click propensities under this assumption. In reality,
however, user clicks often follow the cascade model (CM), in which users scan
search results from top to bottom and each click depends on the previous one.
In this cascade scenario, PBM-based propensity estimates are inaccurate,
which, in turn,
hurts CLTR performance. In this paper, we propose a propensity estimation
method for the cascade scenario, called CM-IPS. We show that CM-IPS keeps CLTR
performance close to the full-information performance when user clicks follow
the CM, whereas PBM-based CLTR leaves a significant gap to the
full-information performance. The opposite holds if user clicks follow the PBM
instead of the CM. Finally, we suggest a way to select between CM- and
PBM-based propensity estimation methods based on historical user clicks.
Comment: 4 pages, 2 figures, 43rd International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR '20)
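The contrast between the two click models can be sketched numerically. In the toy example below (click probabilities are made up, and this is not the paper's CM-IPS estimator), a basic cascade model's examination propensity at a rank depends on the chance that no higher-ranked result was clicked, whereas a PBM propensity is a fixed per-rank constant:

```python
# Illustrative contrast between PBM and basic-cascade examination
# propensities. The numbers are invented for illustration only.

def pbm_propensities(position_bias):
    """PBM: examination depends only on the rank itself."""
    return list(position_bias)

def cascade_propensities(click_probs):
    """Basic cascade model: rank k is examined only if no higher-ranked
    result was clicked, so P(examine k) = prod_{i<k} (1 - c_i)."""
    props, p = [], 1.0
    for c in click_probs:
        props.append(p)
        p *= (1.0 - c)
    return props

position_bias = [1.0, 0.7, 0.5, 0.35]   # assumed PBM examination curve
click_probs = [0.4, 0.3, 0.2, 0.1]      # assumed per-rank click probabilities

print(pbm_propensities(position_bias))
print(cascade_propensities(click_probs))  # decays with earlier click chances
```

Under the cascade sketch, a click at rank 1 with probability 0.4 already drops the examination propensity of rank 2 to 0.6, regardless of any fixed position-bias curve.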
Human-Centered Design to Address Biases in Artificial Intelligence
The potential of artificial intelligence (AI) to reduce health care disparities and inequities is recognized, but AI can also exacerbate these issues if not implemented in an equitable manner. This perspective identifies potential biases in each stage of the AI life cycle, including data collection, annotation, machine learning model development, evaluation, deployment, operationalization, monitoring, and feedback integration. To mitigate these biases, we suggest involving a diverse group of stakeholders and using human-centered AI principles. Human-centered AI can help ensure that AI systems are designed and used in a way that benefits patients and society, which can reduce health disparities and inequities. If biases are recognized and addressed at each stage of the AI life cycle, AI can achieve its potential in health care.
Accelerated Convergence for Counterfactual Learning to Rank
Counterfactual Learning to Rank (LTR) algorithms learn a ranking model from
logged user interactions, often collected using a production system. Employing
such an offline learning approach has many benefits compared to an online one,
but it is challenging as user feedback often contains high levels of bias.
Unbiased LTR uses Inverse Propensity Scoring (IPS) to enable unbiased learning
from logged user interactions. One of the major difficulties in applying
Stochastic Gradient Descent (SGD) approaches to counterfactual learning
problems is the large variance introduced by the propensity weights. In this
paper we show that the convergence rate of SGD approaches with IPS-weighted
gradients suffers from the large variance introduced by the IPS weights:
convergence is slow, especially when there are large IPS weights. To overcome
this limitation, we propose a novel learning algorithm, called CounterSample,
that has provably better convergence than standard IPS-weighted gradient
descent methods. We prove that CounterSample converges faster and complement
our theoretical findings with empirical results by performing extensive
experimentation in a number of biased LTR scenarios -- across optimizers, batch
sizes, and different degrees of position bias.
Comment: SIGIR 2020 full conference paper
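The variance problem the abstract describes can be seen in a small simulation. The sketch below (synthetic numbers; this is not the CounterSample algorithm) draws IPS-weighted per-example "gradients" and shows that the variance of the draws grows sharply when some IPS weights are large, which is what slows plain IPS-weighted SGD:

```python
# Numerical sketch of why large IPS weights slow SGD: the variance of the
# weighted per-example gradient estimate grows with the spread of the
# weights. All numbers are synthetic.
import random
import statistics

random.seed(0)

def ips_grad_draws(weights, grads, n=20000):
    """Sample examples uniformly and return their IPS-weighted gradients."""
    m = len(weights)
    return [weights[i] * grads[i]
            for i in (random.randrange(m) for _ in range(n))]

grads = [1.0, 1.0, 1.0, 1.0]        # identical "gradients" for clarity
small_w = [1.0, 1.2, 0.9, 1.1]      # mild position bias
large_w = [1.0, 2.0, 10.0, 50.0]    # severe position bias at deep ranks

print(statistics.pvariance(ips_grad_draws(small_w, grads)))
print(statistics.pvariance(ips_grad_draws(large_w, grads)))  # far larger
```

Both estimators target the same expected gradient, but the second set of draws has variance orders of magnitude higher, so many more SGD steps are needed for the same progress.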
Counterfactual Evaluation of Slate Recommendations with Sequential Reward Interactions
Users of music streaming, video streaming, news recommendation, and
e-commerce services often engage with content in a sequential manner. Providing
and evaluating good sequences of recommendations is therefore a central problem
for these services. Prior reweighting-based counterfactual evaluation methods
either suffer from high variance or make strong independence assumptions about
rewards. We propose a new counterfactual estimator that allows for sequential
interactions in the rewards with lower variance in an asymptotically unbiased
manner. Our method uses graphical assumptions about the causal relationships of
the slate to reweight the rewards in the logging policy in a way that
approximates the expected sum of rewards under the target policy. Extensive
experiments in simulation and on a live recommender system show that our
approach outperforms existing methods in terms of bias and data efficiency for
the sequential track recommendations problem.
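The reweighting baseline the abstract contrasts against can be sketched in a few lines. The toy example below (synthetic policies and rewards; not the estimator proposed in the paper) evaluates a target slate policy from uniformly logged slates with full-slate IPS, the unbiased-but-high-variance approach, since the slate-level importance ratio grows combinatorially with slate size:

```python
# Toy full-slate IPS evaluation of a target policy from logged slates.
# Policies and rewards are synthetic.
import itertools
import random

random.seed(1)

items = ["a", "b", "c"]
slates = list(itertools.permutations(items, 2))  # all 6 two-item slates

def logging_prob(slate):
    """Logging policy: uniform over all slates."""
    return 1.0 / len(slates)

def target_prob(slate):
    """Target policy: half its mass on slates that lead with 'a'."""
    return 0.25 if slate[0] == "a" else 0.125

def full_slate_ips(logs):
    """Reweight each logged slate's reward by the slate-level ratio."""
    return sum(target_prob(s) / logging_prob(s) * r for s, r in logs) / len(logs)

# Reward 1.0 when the slate leads with "a", so the target value is 0.5.
logs = [(s, 1.0 if s[0] == "a" else 0.0) for s in random.choices(slates, k=20000)]
print(full_slate_ips(logs))  # close to 0.5
```

With only 3 items and slates of length 2 this works well, but the number of slates, and hence the spread of the importance ratios, explodes for realistic catalogs, which is the variance problem the paper's causal-graph-based reweighting addresses.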
Re-examining assumptions in fair and unbiased learning to rank
In this thesis, we re-examine the assumptions of existing methods for bias correction and fairness optimization in ranking. Consequently, we propose methods that are more general than the existing ones, in the sense that they rely on fewer assumptions or are applicable in more situations.

On the bias side, we first show that the click model assumption matters and propose cascade model-based inverse propensity scoring (IPS). Next, we prove that the unbiasedness of IPS relies on the assumption that clicks do not suffer from trust bias. When trust bias exists, we extend IPS and propose the affine correction (AC) method and prove that, in contrast to IPS, it gives unbiased estimates of relevance. Finally, we show that the unbiasedness proofs of IPS and AC are conditioned on an accurate estimation of the bias parameters, and propose a bias correction method that does not rely on relevance estimation.

On the fairness side, we re-examine the implicit assumption that a fair distribution of exposure leads to fair treatment by the users. We argue that fairness of exposure is necessary but not sufficient for fair treatment and propose a correction method for this type of bias. Finally, we note that the existing general post-processing framework for optimizing fairness of ranking metrics is based on the Plackett-Luce distribution, whose optimization leaves room for improvement for queries with a small number of repeating sessions. To close this gap, we propose a new permutation distribution based on permutation graphs.
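The difference between IPS and an affine correction under trust bias can be illustrated with a toy calculation. Assuming, for illustration, a click model in which the expected click rate at rank k is an affine function of relevance, E[click | k] = alpha_k * relevance + beta_k (parameter values below are made up), dividing by a propensity cannot remove the additive beta_k term, while an affine transform can:

```python
# Toy contrast between IPS-style and affine click correction under an
# assumed trust-bias click model E[click | k] = alpha_k * r + beta_k.
# All parameter values are invented for illustration.

def ips_correct(click, alpha_k):
    """IPS-style correction: divide by the multiplicative bias only."""
    return click / alpha_k

def affine_correct(click, alpha_k, beta_k):
    """Affine correction: subtract beta_k, then divide by alpha_k."""
    return (click - beta_k) / alpha_k

alpha_k, beta_k, relevance = 0.6, 0.1, 0.5   # assumed bias params and relevance
expected_click = alpha_k * relevance + beta_k

print(ips_correct(expected_click, alpha_k))            # overestimates relevance
print(affine_correct(expected_click, alpha_k, beta_k))  # recovers relevance
```

The additive beta_k term models clicks on non-relevant results that users trust because of their high rank; since IPS only rescales, that term survives division and biases the estimate upward.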