Achieving Fairness in Determining Medicaid Eligibility through Fairgroup Construction
Artificial intelligence techniques have started to aid human decisions in
complicated social problems across the world, serving as effective complements
to human judgment. In the United States, for instance, automated ML/DL
classification models complement human decisions in determining Medicaid
eligibility. However, given the limitations in ML/DL model design, these
algorithms may fail to leverage various factors relevant to decision making,
resulting in improper decisions that allocate resources to individuals who may
not be in the most need. In view of this issue, we propose in this paper the
method of \textit{fairgroup construction}, based on the legal doctrine of
\textit{disparate impact}, to improve the fairness of regressive classifiers.
Experiments on the American Community Survey dataset demonstrate that our
method can be easily adapted to a variety of regressive classification models
to boost their fairness in deciding Medicaid eligibility, while maintaining
high levels of classification accuracy.
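The \textit{disparate impact} doctrine the abstract invokes is commonly quantified via the four-fifths rule: the rate of favorable outcomes for a protected group should be at least 80% of the rate for the reference group. The sketch below illustrates that check only; it is not the paper's fairgroup construction, and the toy data and function name are assumptions.

```python
# Illustrative disparate-impact check (four-fifths rule), NOT the paper's
# fairgroup construction method. All names and data here are hypothetical.

def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Ratio of favorable-outcome rates: P(y=1 | protected) / P(y=1 | reference)."""
    def rate(g):
        ys = [y for y, grp in zip(outcomes, groups) if grp == g]
        return sum(ys) / len(ys)
    return rate(protected) / rate(reference)

# Toy decisions: 1 = deemed eligible; groups "A" (protected) and "B" (reference).
y = [1, 0, 1, 0, 1, 1, 1, 0]
g = ["A", "A", "A", "A", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(y, g, "A", "B")
print(round(ratio, 2))  # 0.5 / 0.75 = 0.67, below the 0.8 threshold
```

A classifier whose decisions fall below the 0.8 threshold would be flagged as having disparate impact, which is the kind of unfairness a post-processing method such as fairgroup construction aims to reduce.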
Fair Correlation Clustering
In this paper, we study correlation clustering under fairness constraints.
Fair variants of $k$-median and $k$-center clustering have been studied
recently, and approximation algorithms using a notion called fairlet
decomposition have been proposed. We obtain approximation algorithms for fair
correlation clustering under several important types of fairness constraints.
Our results hinge on obtaining a fairlet decomposition for correlation
clustering by introducing a novel combinatorial optimization problem. We define
a fairlet decomposition with cost similar to the $k$-median cost, and this
allows us to obtain approximation algorithms for a wide range of fairness
constraints.
We complement our theoretical results with an in-depth analysis of our
algorithms on real graphs where we show that fair solutions to correlation
clustering can be obtained with limited increase in cost compared to the
state-of-the-art (unfair) algorithms.
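To make the fairlet notion concrete: in the simplest setting (two colors, 1:1 balance constraint), a fairlet decomposition partitions the points into small balanced sets, so any clustering that keeps fairlets intact is automatically fair. The sketch below shows only this pairing idea with a cost-free greedy assignment; it is an assumption-laden illustration, not the paper's decomposition, which optimizes a $k$-median-like cost.

```python
# Hedged sketch of the fairlet idea for two colors with a 1:1 balance
# constraint: pair one "red" point with one "blue" point. The real
# decomposition in the paper minimizes a k-median-like cost over pairings;
# this greedy version ignores cost and only demonstrates the structure.

def fairlet_decomposition_1to1(red, blue):
    """Pair red and blue point ids into balanced fairlets (greedy, cost-free)."""
    assert len(red) == len(blue), "a 1:1 balance needs equal color counts"
    return [(r, b) for r, b in zip(sorted(red), sorted(blue))]

fairlets = fairlet_decomposition_1to1(red=[0, 2, 4], blue=[1, 3, 5])
print(fairlets)  # [(0, 1), (2, 3), (4, 5)]
```

Clustering the fairlets (rather than individual points) then yields clusters that inherit the 1:1 color balance by construction, at the price of whatever cost the decomposition itself incurs.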