Matched Pair Calibration for Ranking Fairness
We propose a test of fairness in score-based ranking systems called matched
pair calibration. Our approach constructs a set of matched item pairs with
minimal confounding differences between subgroups before computing an
appropriate measure of ranking error over the set. The matching step ensures
that we compare subgroup outcomes between identically scored items so that
measured performance differences directly imply unfairness in subgroup-level
exposures. We show how our approach generalizes the fairness intuitions of
calibration from a binary classification setting to ranking and connect our
approach to other proposals for ranking fairness measures. Moreover, our
strategy shows how the logic of marginal outcome tests extends to cases where
the analyst has access to model scores. Lastly, we provide an example of
applying matched pair calibration to a real-world ranking data set to
demonstrate its efficacy in detecting ranking bias.
Comment: 19 pages, 8 figures
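The matching idea above can be sketched in a few lines. The function below is a hypothetical illustration, not the paper's exact estimator: it buckets items by rounded score so that compared items are (near-)identically scored, then averages the per-bucket difference in mean outcomes between two subgroups, weighted by the number of matched pairs each bucket supports. The function name, the bucketing-by-rounding step, and the `(score, group, outcome)` tuple format are all assumptions made for this sketch.

```python
from collections import defaultdict

def matched_pair_gap(items, precision=2):
    """Rough matched-pair calibration gap between subgroups "A" and "B".

    items: iterable of (score, group, outcome) tuples, where group is
    "A" or "B" and outcome is a 0/1 relevance label.

    Items are bucketed by rounded score so that only (near-)identically
    scored items are compared. Under calibration, such items should have
    equal outcome rates regardless of subgroup, so a nonzero gap signals
    potential unfairness in subgroup-level exposure.
    """
    buckets = defaultdict(lambda: {"A": [], "B": []})
    for score, group, outcome in items:
        buckets[round(score, precision)][group].append(outcome)

    total_pairs, weighted_gap = 0, 0.0
    for bucket in buckets.values():
        a, b = bucket["A"], bucket["B"]
        if not a or not b:
            continue  # no matched pair possible at this score level
        n_pairs = min(len(a), len(b))
        gap = sum(a) / len(a) - sum(b) / len(b)
        weighted_gap += n_pairs * gap
        total_pairs += n_pairs
    return weighted_gap / total_pairs if total_pairs else 0.0
```

For example, two items scored 0.9 with outcomes 1 (group A) and 0 (group B) contribute a gap of 1.0 for one pair; a second pair at score 0.5 with equal outcomes contributes 0, giving an overall gap of 0.5.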
Reinforcement Learning When All Actions Are Not Always Available
The Markov decision process (MDP) formulation used to model many real-world sequential decision-making problems does not efficiently capture the setting where the set of available decisions (actions) at each time step is stochastic. Recently, the stochastic action set Markov decision process (SAS-MDP) formulation has been proposed, which better captures the concept of a stochastic action set. In this paper we argue that existing RL algorithms for SAS-MDPs can suffer from potential divergence issues, present new policy gradient algorithms for SAS-MDPs that incorporate variance reduction techniques unique to this setting, and provide conditions for their convergence. We conclude with experiments that demonstrate the practicality of our approaches on tasks inspired by real-life use cases wherein the action set is stochastic.
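To make the stochastic-action-set setting concrete, here is a minimal sketch of how a policy is typically executed when only a random subset of actions is offered at each step: restrict the base policy to the available actions and renormalize its probability mass. This is an illustrative helper, not one of the paper's gradient algorithms; the function name and fallback behavior are assumptions.

```python
import random

def sample_action(policy_probs, available):
    """Sample an action from a base policy restricted to an available set.

    policy_probs: dict mapping action -> probability under the base policy.
    available: set of actions offered at this time step (the stochastic
    action set). The base policy is renormalized over `available` before
    sampling.
    """
    mass = sum(policy_probs[a] for a in available)
    if mass == 0.0:
        # Base policy puts no mass on any available action: fall back
        # to a uniform choice over what is available.
        return random.choice(sorted(available))
    r, acc = random.random() * mass, 0.0
    for a in sorted(available):
        acc += policy_probs[a]
        if r < acc:
            return a
    return max(available)  # guard against floating-point edge cases
```

When only one action is available, it is returned with certainty, which is the degenerate case the SAS-MDP formulation has to handle gracefully.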
Visually Communicating Bayesian Statistics to Laypersons
Effectively communicating Bayesian statistics to laypersons has been an open challenge for many years. Recent research in psychology has proposed that there is a direct correlation between comprehension and representation. Specifically, a series of studies suggests that pictorial representations with icon arrays may be better suited for communicating Bayesian statistics than Euler diagrams. Though these results are compelling, the experiments were conducted in controlled lab settings and with limited samples. In this paper, we extend the previous research by expanding the sample to a more diverse population through crowdsourcing. We conducted a user study that compares three different pictorial representations of Bayesian statistics: icon arrays, Euler diagrams, and discretized Euler diagrams. Our findings fail to replicate previous results and demonstrate no significant difference between the three representations. We discuss possible explanations for these findings and propose directions for future investigations.
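The counting logic that icon arrays visualize is Bayes' rule expressed as natural frequencies. The sketch below shows that arithmetic for a hypothetical diagnostic-test scenario (the function name and the specific rates in the example are illustrative, not taken from the study above): out of a fixed population, count the true and false positives and read the posterior off as a ratio of counts, which is exactly what an icon array displays pictorially.

```python
def natural_frequency_posterior(population, base_rate, sensitivity, false_positive_rate):
    """Bayes' rule via natural frequencies, as an icon array would show it.

    Converts rates into whole-person counts for a population of the given
    size, then returns (true_positives, false_positives, posterior), where
    posterior = P(condition | positive test) = TP / (TP + FP).
    """
    affected = round(population * base_rate)
    unaffected = population - affected
    true_positives = round(affected * sensitivity)
    false_positives = round(unaffected * false_positive_rate)
    posterior = true_positives / (true_positives + false_positives)
    return true_positives, false_positives, posterior
```

With a population of 1000, a 1% base rate, 90% sensitivity, and a 9% false-positive rate, 9 of the 98 positive tests are true positives, so the posterior is about 9.2%, far below the 90% sensitivity that laypersons often anchor on.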