Learning Fair and Interpretable Representations via Linear Orthogonalization
To reduce human error and prejudice, many high-stakes decisions have been
turned over to machine algorithms. However, recent research suggests that this
does not remove discrimination and can even perpetuate harmful stereotypes. While
algorithms have been developed to improve fairness, they typically suffer from at
least one of three shortcomings: they are not interpretable, their prediction
quality deteriorates markedly compared to unconstrained counterparts, or they do not
transfer easily across models. To address these shortcomings, we propose a
geometric method that removes correlations between data and any number of
protected variables. Further, we can control the strength of debiasing through
an adjustable parameter to address the trade-off between prediction quality and
fairness. The resulting features are interpretable and can be used with many
popular models, such as linear regression, random forest, and multilayer
perceptrons. The resulting predictions are found to be more accurate and fairer
than those of several state-of-the-art fair AI algorithms across a variety of
benchmark datasets. Our work shows that debiasing data is a simple and
effective solution toward improving fairness.
Comment: 9 pages, 5 figures
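The abstract describes the core operation (removing linear correlations between features and protected variables, with an adjustable debiasing strength), but not an implementation. The sketch below is only an illustration of that idea under simplifying assumptions: it residualizes the features against the protected variables by least squares; the function name orthogonalize and the strength parameter are hypothetical, not the paper's code.

```python
import numpy as np

def orthogonalize(X, Z, strength=1.0):
    """Remove (a fraction of) the linear correlation between features X and
    protected variables Z via a least-squares projection.

    X        : (n, d) feature matrix
    Z        : (n, k) protected variables (any number of columns)
    strength : in [0, 1]; 1.0 removes the full linear projection onto Z,
               0.0 leaves the (centred) features unchanged.
    """
    Xc = X - X.mean(axis=0)          # centre so we target correlations, not means
    Zc = Z - Z.mean(axis=0)
    B, *_ = np.linalg.lstsq(Zc, Xc, rcond=None)   # coefficients of X regressed on Z
    return Xc - strength * (Zc @ B)  # subtract the component of X explained by Z

# Toy check: a feature built from a binary protected attribute is decorrelated.
rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=(500, 1)).astype(float)
X = np.hstack([2.0 * z + rng.normal(size=(500, 1)), rng.normal(size=(500, 1))])
print(np.corrcoef(orthogonalize(X, z)[:, 0], z[:, 0])[0, 1])  # approximately 0
```

Because the debiased features stay in the original feature space, they can be fed unchanged into downstream models such as linear regression, random forests, or multilayer perceptrons, consistent with the transferability claim above.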
Fair Normalizing Flows
Fair representation learning is an attractive approach that promises fairness
of downstream predictors by encoding sensitive data. Unfortunately, recent work
has shown that strong adversarial predictors can still exhibit unfairness by
recovering sensitive attributes from these representations. In this work, we
present Fair Normalizing Flows (FNF), a new approach offering more rigorous
fairness guarantees for learned representations. Specifically, we consider a
practical setting where we can estimate the probability density for sensitive
groups. The key idea is to model the encoder as a normalizing flow trained to
minimize the statistical distance between the latent representations of
different groups. The main advantage of FNF is that its exact likelihood
computation allows us to obtain guarantees on the maximum unfairness of any
potentially adversarial downstream predictor. We experimentally demonstrate the
effectiveness of FNF in enforcing various group fairness notions, as well as
other attractive properties such as interpretability and transfer learning, on
a variety of challenging real-world datasets.
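To make the key idea concrete, the following is a minimal sketch under strong simplifying assumptions: one-dimensional inputs, known Gaussian group densities, and hand-chosen affine flows rather than trained normalizing flows. The function names (latent_density, tv_distance) and the specific densities are illustrative and not taken from the FNF paper or its code.

```python
import numpy as np
from scipy.stats import norm

# Illustrative setting: the input density of each sensitive group is a known
# 1-D Gaussian (the abstract's practical setting is that such densities can
# be estimated).
p0 = norm(loc=-1.0, scale=1.0)   # group a = 0
p1 = norm(loc=+1.5, scale=0.7)   # group a = 1

def latent_density(z, p, shift, log_scale):
    """Exact latent density induced by the affine flow z = (x - shift) * exp(-log_scale).
    Change of variables: q(z) = p(x(z)) * |dx/dz| with x(z) = z * exp(log_scale) + shift."""
    x = z * np.exp(log_scale) + shift
    return p.pdf(x) * np.exp(log_scale)

def tv_distance(params0, params1, grid):
    """Total-variation (statistical) distance between the two groups' latent
    densities, estimated by a Riemann sum on a grid. A small distance limits
    how much any downstream predictor can depend on the sensitive attribute."""
    q0 = latent_density(grid, p0, *params0)
    q1 = latent_density(grid, p1, *params1)
    return 0.5 * np.sum(np.abs(q0 - q1)) * (grid[1] - grid[0])

grid = np.linspace(-8.0, 8.0, 4001)
identity = (0.0, 0.0)                    # z = x: no encoding at all
enc0 = (p0.mean(), np.log(p0.std()))     # flow that standardizes group 0
enc1 = (p1.mean(), np.log(p1.std()))     # flow that standardizes group 1

print("TV with identity encoders:     ", tv_distance(identity, identity, grid))
print("TV with standardizing encoders:", tv_distance(enc0, enc1, grid))
# The second value is ~0: both groups now share the same latent density,
# so no adversary can recover the group from the representation.
```

In the setup the abstract describes, the encoders are normalizing flows trained to minimize this statistical distance between group representations, and their exact likelihoods are what make the bound on the unfairness of any downstream predictor computable.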