Kernel Dependence Regularizers and Gaussian Processes with Applications to Algorithmic Fairness
The current adoption of machine learning in industrial, societal and economic
activities has raised concerns about the fairness, equity and ethics of
automated decisions. Predictive models are often developed using biased
datasets and thus retain or even exacerbate biases in their decisions and
recommendations. Removing the sensitive covariates, such as gender or race, is
insufficient to remedy this issue since the biases may be retained due to other
related covariates. We present a regularization approach to this problem that
trades off the predictive accuracy of the learned models (with respect to
biased labels) for fairness in terms of statistical parity, i.e. independence of
the decisions from the sensitive covariates. In particular, we consider a
general framework of regularized empirical risk minimization over reproducing
kernel Hilbert spaces and impose an additional regularizer of dependence
between predictors and sensitive covariates using kernel-based measures of
dependence, namely the Hilbert-Schmidt Independence Criterion (HSIC) and its
normalized version. This approach leads to a closed-form solution in the case
of squared loss, i.e. ridge regression. Moreover, we show that the dependence
regularizer has an interpretation as modifying the corresponding Gaussian
process (GP) prior. As a consequence, a GP model with a prior that encourages
fairness to sensitive variables can be derived, allowing principled
hyperparameter selection and study of the relative relevance of covariates
under fairness constraints. Experimental results on synthetic examples and on
real income- and crime-prediction problems illustrate the potential of the
approach to improve the fairness of automated decisions.
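The closed-form squared-loss case described above can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: it uses an RBF kernel on the inputs, a linear kernel on both the predictions and the sensitive covariates, and the unnormalized HSIC penalty; the function name `fair_kernel_ridge` and its hyperparameters are hypothetical.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gram matrix of the RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def fair_kernel_ridge(X, y, S, lam=1.0, mu=1.0, gamma=1.0):
    """Kernel ridge regression with an HSIC dependence penalty between the
    predictions f = K @ alpha and the sensitive covariates S.

    With a linear kernel on the predictions, the empirical HSIC penalty is
    quadratic in alpha, so the minimizer stays in closed form (assuming the
    Gram matrix K is invertible):

        alpha = (K + lam * I + mu * H @ Ks @ H @ K)^(-1) y

    where H = I - 11'/n is the centering matrix and Ks is a kernel on S.
    """
    n = len(y)
    K = rbf_kernel(X, gamma)
    Ks = S @ S.T                          # linear kernel on the sensitive covariates
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    alpha = np.linalg.solve(K + lam * np.eye(n) + mu * H @ Ks @ H @ K, y)
    return alpha, K
```

Setting `mu=0` recovers ordinary kernel ridge regression; increasing `mu` trades predictive accuracy for independence of the predictions from `S`, as measured by the empirical HSIC term.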