As society transitions towards an AI-based decision-making infrastructure, an
ever-increasing number of decisions once under the control of humans are now
delegated to automated systems. Even though such developments make various
parts of society more efficient, a large body of evidence suggests that a great
deal of care needs to be taken to make such automated decision-making systems
fair and equitable, that is, to take into account sensitive attributes such as
gender, race, and religion. In this paper, we study a specific decision-making
task called outcome control in which an automated system aims to optimize an
outcome variable Y while being fair and equitable. Interest in this setting
ranges from interventions related to criminal justice and welfare, all
the way to clinical decision-making and public health. In this paper, we first
analyze through a causal lens the notion of benefit, which captures how much a
specific individual would benefit from a positive decision, counterfactually
speaking, when contrasted with an alternative, negative one. We introduce the
notion of benefit fairness, which can be seen as the minimal fairness
requirement in decision-making, and develop an algorithm for satisfying it.
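To make these notions concrete, one natural formalization (the notation below
is a plausible reading on our part, not necessarily the one adopted in the
paper) defines the benefit of an individual with covariates X = x as the
counterfactual contrast
\[
  \Delta(x) \;=\; \mathbb{E}\big[\, Y_{D=1} - Y_{D=0} \mid X = x \,\big],
\]
and benefit fairness as the requirement that the decision D depend on the
protected attribute A only through the benefit, i.e.,
\[
  P(D = 1 \mid \Delta = \delta, A = a_0) \;=\; P(D = 1 \mid \Delta = \delta, A = a_1)
  \quad \text{for all } \delta .
\]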
We then note that the benefit itself may be influenced by the protected
attribute, and propose causal tools which can be used to analyze this.
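As a sketch of the kind of analysis such tools support (the decomposition
below is our illustration, following standard causal fairness decompositions,
rather than a result quoted from the paper), the disparity in benefit across
groups can be split into direct, indirect, and spurious variations of A:
\[
  \mathbb{E}[\Delta \mid A = a_1] - \mathbb{E}[\Delta \mid A = a_0]
  \;=\; \underbrace{\mathrm{DE}}_{\text{direct}}
  + \underbrace{\mathrm{IE}}_{\text{indirect}}
  + \underbrace{\mathrm{SE}}_{\text{spurious}} .
\]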
Finally, if some of the variations of the protected attribute in the benefit
are considered discriminatory, the notion of benefit fairness may need to be
strengthened, which leads us to articulate a notion of causal benefit
fairness. Using this notion, we develop a new optimization procedure capable
of maximizing Y while ensuring causal fairness in the decision process.
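To illustrate the flavor of such a procedure, here is a minimal,
self-contained sketch (a toy construction of ours, not the paper's algorithm):
it allocates a fixed budget of positive decisions by estimated benefit and
randomizes within the marginal benefit stratum, so that in expectation the
decision depends on the protected attribute only through the benefit.

    import numpy as np

    rng = np.random.default_rng(0)

    def benefit_fair_policy(delta, budget, n_bins=10):
        """Select `budget` individuals for the positive decision D = 1.

        Individuals are stratified into quantile bins of estimated benefit
        `delta`; bins are filled from most to least beneficial, and the
        marginal bin is filled uniformly at random, so that selection within
        a benefit stratum does not depend on any other attribute.
        """
        edges = np.quantile(delta, np.linspace(0, 1, n_bins + 1)[1:-1])
        bins = np.digitize(delta, edges)
        d = np.zeros(len(delta), dtype=bool)
        remaining = budget
        # Visit benefit strata from highest to lowest average benefit.
        for b in sorted(set(bins), key=lambda b: -delta[bins == b].mean()):
            if remaining == 0:
                break
            idx = np.flatnonzero(bins == b)
            if len(idx) <= remaining:
                d[idx] = True  # the whole stratum fits within the budget
                remaining -= len(idx)
            else:
                # Marginal stratum: randomize, so every individual in it has
                # the same selection probability regardless of group.
                d[rng.choice(idx, size=remaining, replace=False)] = True
                remaining = 0
        return d

    # Toy example with a binary protected attribute A that shifts the benefit.
    n = 1000
    a = rng.integers(0, 2, size=n)
    delta = rng.normal(loc=0.2 * a, scale=1.0)  # benefit influenced by A
    d = benefit_fair_policy(delta, budget=200)

Under causal benefit fairness, the estimated benefit itself would additionally
be adjusted, before this allocation step, to remove whichever variations of A
are deemed discriminatory.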