Causal approaches to fairness have seen substantial recent interest, both from the machine
learning community and from wider parties interested in ethical prediction algorithms. In
no small part, this is because causal models allow one to simultaneously leverage data and expert knowledge to remove discriminatory effects from predictions. However, one of the primary assumptions in causal modeling is that the causal graph is known. This introduces a new opportunity for bias, caused by misspecifying the causal model.
One common way for misspecification to occur is via unmeasured confounding: the true
causal effect between variables is partially described by unobserved quantities. In this work
we design tools to assess the sensitivity of fairness measures to this confounding for the popular class of non-linear additive noise models (ANMs). Specifically, we give a procedure for computing the maximum difference
between two counterfactually fair predictors,
where one has become biased due to confounding. For the case of bivariate confounding, this maximum difference can be computed swiftly via a sequence of closed-form updates. For multivariate confounding, we formulate an optimization problem that can be solved efficiently via automatic differentiation. We demonstrate our new sensitivity analysis tools in real-world fairness scenarios to assess the bias arising from confounding.
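To make the multivariate case concrete: in an ANM each variable is generated as a function of its parents plus an independent noise term, and unmeasured confounding can be modeled as correlation among those noise terms. The sketch below shows the general shape of a gradient-based search over such correlation parameters using automatic differentiation; it is a minimal illustration under stated assumptions, not the paper's actual procedure, and both the objective `unfairness` and the parameterization via `rho` are hypothetical stand-ins.

```python
# Minimal sketch (not the paper's actual algorithm): use automatic
# differentiation to search for the confounding strength that maximizes
# a discrepancy between a counterfactually fair predictor and the one
# induced by a confounded model. `unfairness` and the correlation
# parameters `rho` are hypothetical stand-ins for illustration only.
import jax
import jax.numpy as jnp

def unfairness(rho, theta_fair):
    # Stand-in objective: a real implementation would re-derive the
    # counterfactual predictor under an ANM whose noise terms are
    # correlated with pairwise strengths `rho`, then measure how far
    # its predictions drift from the fair predictor `theta_fair`.
    theta_confounded = theta_fair + rho        # illustrative placeholder
    return jnp.sum((theta_confounded - theta_fair) ** 2)

@jax.jit
def ascent_step(rho, theta_fair, lr=1e-2):
    # Gradient ascent on the discrepancy; correlations are projected
    # back into (-1, 1) so the confounded model stays well defined.
    grad = jax.grad(unfairness)(rho, theta_fair)
    return jnp.clip(rho + lr * grad, -0.99, 0.99)

theta_fair = jnp.array([0.5, -0.3, 1.2])  # toy fitted ANM parameters
rho = 0.1 * jnp.ones(3)                   # small nonzero starting point
for _ in range(500):
    rho = ascent_step(rho, theta_fair)
print(unfairness(rho, theta_fair))        # worst-case discrepancy found
```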