The Sensitivity of Counterfactual Fairness to Unmeasured Confounding
Causal approaches to fairness have seen substantial recent interest, both from the machine learning community and from wider parties interested in ethical prediction algorithms. In no small part, this is because causal models allow one to simultaneously leverage data and expert knowledge to remove discriminatory effects from predictions. However, one of the primary assumptions in causal modeling is that the causal graph is known, and this introduces a new opportunity for bias caused by misspecifying the causal model. One common way for misspecification to occur is via unmeasured confounding: the true causal effect between variables is partially described by unobserved quantities. In this work we design tools to assess the sensitivity of fairness measures to such confounding for the popular class of non-linear additive noise models (ANMs). Specifically, we give a procedure for computing the maximum difference between two counterfactually fair predictors, where one has become biased due to confounding. For the case of bivariate confounding, our technique can be swiftly computed via a sequence of closed-form updates; for multivariate confounding, we give an algorithm that can be efficiently solved via automatic differentiation. We demonstrate our new sensitivity analysis tools in real-world fairness scenarios to assess the bias arising from confounding.
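To make the bivariate setting concrete, here is a minimal sketch of the sensitivity idea. It uses a linear-Gaussian toy model rather than the paper's non-linear ANMs and closed-form updates, and all names and parameter values are illustrative assumptions: an analyst who wrongly assumes no confounding (rho = 0) recovers a biased structural coefficient, so the predictor built from their abducted noise term drifts away from the predictor built under the true model.

```python
# Illustrative sketch (not the paper's procedure): in a linear-Gaussian
# model  X = alpha*A + U,  Y = beta*X + V  with unmeasured confounding
# Cov(U, V) = rho, the naive OLS fit of Y on X is biased by rho, so the
# "counterfactually fair" predictor built from the naively abducted noise
# V differs from the one built under the true model.
import numpy as np

rng = np.random.default_rng(0)
n, alpha, beta = 100_000, 0.8, 1.5

def mean_predictor_gap(rho):
    a = rng.binomial(1, 0.5, n).astype(float)          # protected attribute
    cov = [[1.0, rho], [rho, 1.0]]
    u, v = rng.multivariate_normal([0.0, 0.0], cov, n).T  # confounded noises
    x = alpha * a + u
    y = beta * x + v
    # Analyst's fit under the (wrong) no-confounding assumption:
    beta_hat = np.cov(x, y)[0, 1] / np.var(x, ddof=1)  # OLS slope, biased by rho
    fair_true = y - beta * x        # abducted noise V under the true model
    fair_naive = y - beta_hat * x   # abducted "noise" under the assumed model
    return np.mean(np.abs(fair_true - fair_naive))

for rho in [0.0, 0.2, 0.4, 0.6]:
    print(f"rho = {rho:.1f}  ->  E|f_true - f_naive| = {mean_predictor_gap(rho):.3f}")
```

Scanning rho on a grid, as here, is a crude stand-in for the paper's approach of directly computing the maximum difference between the two predictors.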
Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality
As virtually all aspects of our lives are increasingly impacted by algorithmic decision making systems, it is incumbent upon us as a society to ensure that such systems do not become instruments of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. We consider the problem of determining whether the decisions made by such systems are discriminatory, through the lens of causal models. We introduce two definitions of group fairness grounded in causality: fair on average causal effect (FACE) and fair on average causal effect on the treated (FACT). We use the Rubin-Neyman potential outcomes framework for the analysis of cause-effect relationships to robustly estimate FACE and FACT. We demonstrate the effectiveness of our proposed approach on synthetic data. Our analyses of two real-world data sets, the Adult income data set from the UCI repository (with gender as the protected attribute) and the NYC Stop and Frisk data set (with race as the protected attribute), show that the evidence of discrimination obtained by FACE and FACT, or lack thereof, is often in agreement with the findings from other studies. We further show that FACT, being somewhat more nuanced than FACE, can yield
findings of discrimination that differ from those obtained using FACE.

Comment: 7 pages, 2 figures, 2 tables. To appear in Proceedings of the International Conference on World Wide Web (WWW), 2019.
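In potential outcomes notation with protected attribute A and outcome Y, FACE compares E[Y(a)] - E[Y(a')] over the whole population, while FACT conditions on the group that actually received a, i.e. E[Y(a) - Y(a') | A = a]. The sketch below estimates both on synthetic data with inverse propensity weighting; this is one standard estimator, not necessarily the one used in the paper, and the data-generating process is an assumption made up for illustration.

```python
# Hedged sketch: IPW estimates of FACE (average causal effect) and FACT
# (average causal effect on the "treated" group) on synthetic data where
# the true effect of the protected attribute A on Y is 0.5 for everyone.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

z = rng.normal(size=n)                  # observed covariate
e = 1.0 / (1.0 + np.exp(-z))            # propensity P(A = 1 | Z); known here,
                                        # estimated (e.g. logistic regression) in practice
a = rng.binomial(1, e)                  # protected attribute, confounded by Z
y = 0.5 * a + z + rng.normal(size=n)    # outcome with a homogeneous effect of 0.5

# IPW estimate of FACE = E[Y(1)] - E[Y(0)]
face = np.mean(a * y / e) - np.mean((1 - a) * y / (1 - e))

# IPW estimate of FACT = E[Y(1) - Y(0) | A = 1]
p1 = a.mean()
fact = np.mean(a * y) / p1 - np.mean((1 - a) * y * e / (1 - e)) / p1

print(f"FACE ~ {face:.3f}, FACT ~ {fact:.3f} (ground truth for both: 0.5)")
```

With a homogeneous effect, as here, FACE and FACT coincide; they diverge when the effect of A varies across the population, which is why FACT can yield findings of discrimination that FACE averages away.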