Causal Fairness-Guided Dataset Reweighting using Neural Networks
The importance of achieving fairness in machine learning models cannot be
overstated. Recent research has pointed out that fairness should be examined
from a causal perspective, and several fairness notions based on the on Pearl's
causal framework have been proposed. In this paper, we construct a reweighting
scheme of datasets to address causal fairness. Our approach aims at mitigating
bias by considering the causal relationships among variables and incorporating
them into the reweighting process. The proposed method adopts two neural
networks, whose structures are intentionally used to reflect the structures of
a causal graph and of an interventional graph. The two neural networks can
approximate the causal model of the data, and the causal model of
interventions. Furthermore, reweighting guided by a discriminator is applied to
achieve various fairness notions. Experiments on real-world datasets show that
our method can achieve causal fairness on the data while remaining close to the
original data for downstream tasks.
Comment: To be published in the proceedings of the 2023 IEEE International Conference on Big Data (IEEE BigData 2023).
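The discriminator-guided reweighting idea can be illustrated with a toy stand-in: instead of the paper's two causal/interventional neural networks, the sketch below uses a hand-rolled logistic discriminator on synthetic data, with a multiplicative weight-update rule that is an assumption for illustration, not the authors' algorithm. Samples the discriminator can confidently assign to their group are downweighted, so the weighted feature distributions of the two groups move together.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
a = rng.integers(0, 2, n)            # sensitive attribute (group label)
x = a + rng.normal(0.0, 1.0, n)      # feature whose distribution is shifted by group

def fit_logistic(x, a, w, lr=0.5, steps=300):
    """Weighted logistic regression of a on x via gradient descent (the discriminator)."""
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        g = w * (p - a)
        b0 -= lr * g.mean()
        b1 -= lr * (g * x).mean()
    return b0, b1

def weighted_gap(w):
    """Difference of weighted feature means between the two groups."""
    return (np.average(x[a == 1], weights=w[a == 1])
            - np.average(x[a == 0], weights=w[a == 0]))

w = np.ones(n)
before = weighted_gap(w)

eta = 1.0
for _ in range(10):
    b0, b1 = fit_logistic(x, a, w / w.mean())
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
    q = np.where(a == 1, p, 1.0 - p)         # confidence in the *true* group
    w = w * np.exp(-eta * q)                 # downweight easily separable samples
    for g in (0, 1):                         # keep each group's total weight fixed
        w[a == g] *= (a == g).sum() / w[a == g].sum()

after = weighted_gap(w)
```

After reweighting, `after` is much smaller in magnitude than `before`: the discriminator can no longer exploit the feature shift, which is the one-dimensional analogue of matching a distribution to its interventional counterpart.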
Causal inference for social discrimination reasoning
The discovery of discriminatory bias in human or automated decision making is a task of increasing importance and difficulty, exacerbated by the pervasive use of machine learning and data mining. Currently, discrimination discovery largely relies upon correlation analysis of decision records, disregarding the impact of confounding biases. We present a method for causal discrimination discovery based on propensity score analysis, a statistical tool for filtering out the effect of confounding variables. We introduce causal measures of discrimination which quantify the effect of group membership on the decisions, and highlight causal discrimination/favoritism patterns by learning regression trees over the novel measures. We validate our approach on two real-world datasets. Our proposed framework for causal discrimination has the potential to enhance the transparency of machine learning with tools for detecting discriminatory bias both in the training data and in the learning algorithms.
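The propensity-score adjustment described above can be sketched in a few lines of numpy (a toy illustration on synthetic data with a hand-rolled logistic propensity model; not the authors' code, and all variable names are assumptions). A confounder drives both group membership and the decision, so the naive group difference looks like discrimination even though group membership has no causal effect; inverse-propensity weighting filters the confounding out.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
c = rng.normal(0.0, 1.0, n)                                  # confounder
a = (rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * c))).astype(float)  # group depends on c
y = (c + rng.normal(0.0, 1.0, n) > 0).astype(float)          # decision: no direct effect of a

# Naive group difference: confounded by c, looks like discrimination.
naive = y[a == 1].mean() - y[a == 0].mean()

# Propensity model: logistic regression of a on c via gradient descent.
b0, b1 = 0.0, 0.0
for _ in range(2000):
    e = 1.0 / (1.0 + np.exp(-(b0 + b1 * c)))
    g = e - a
    b0 -= 0.1 * g.mean()
    b1 -= 0.1 * (g * c).mean()
e = np.clip(1.0 / (1.0 + np.exp(-(b0 + b1 * c))), 0.01, 0.99)

# Inverse-propensity-weighted (Hajek) estimate of the causal group effect.
ipw = (np.sum(a * y / e) / np.sum(a / e)
       - np.sum((1 - a) * y / (1 - e)) / np.sum((1 - a) / (1 - e)))
```

Here `naive` is substantially positive while `ipw` is close to zero, matching the data-generating process in which the decision depends only on the confounder.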