We tackle the problem of mitigating bias in algorithmic decisions in a
setting where both the output of the algorithm and the sensitive variable are
continuous. Most prior work deals with discrete sensitive variables, meaning
that biases are measured for subgroups of individuals defined by a label;
this leaves out important cases of algorithmic bias where the sensitive
variable is continuous. Typical examples are unfair decisions made with
respect to age or financial status.
We therefore propose a bias
mitigation strategy for continuous sensitive variables, based on the notion of
endogeneity, which originates in econometrics. Beyond addressing this new
problem, our bias mitigation strategy is a weakly supervised learning method:
it requires only that a small portion of the data be measured in a fair
manner. It is model-agnostic, in the sense that it makes no assumptions
about the prediction model. It relies on a reasonably large set of
input observations and their corresponding predictions, while only a small
fraction of the true outputs needs to be known. This limits the need
for expert intervention.
Results obtained on synthetic data demonstrate the
effectiveness of our approach on examples designed to be as close as possible
to real-life econometric applications.