Creating Fair Models of Atherosclerotic Cardiovascular Disease Risk
Guidelines for the management of atherosclerotic cardiovascular disease
(ASCVD) recommend the use of risk stratification models to identify patients
most likely to benefit from cholesterol-lowering and other therapies. These
models have differential performance across race and gender groups with
inconsistent behavior across studies, potentially resulting in an inequitable
distribution of beneficial therapy. In this work, we leverage adversarial
learning and a large observational cohort extracted from electronic health
records (EHRs) to develop a "fair" ASCVD risk prediction model with reduced
variability in error rates across groups. We empirically demonstrate that our
approach is capable of aligning the distribution of risk predictions
conditioned on the outcome across several groups simultaneously for models
built from high-dimensional EHR data. We also discuss the relevance of these
results in the context of the empirical trade-off between fairness and model
performance.
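The alignment objective described here, matching the distribution of risk
predictions conditioned on the outcome across groups, is an equalized-odds-style
criterion that is typically enforced with an adversary trained to recover group
membership from the prediction and the true label. Below is a minimal PyTorch
sketch of that generic training loop; it illustrates the technique, not the
authors' implementation, and the feature dimension (128), layer sizes, and the
trade-off weight lam are assumptions.

    import torch
    import torch.nn as nn

    # Risk predictor: EHR-derived features -> outcome logit.
    predictor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
    # Adversary: (risk logit, true outcome) -> protected-group logit.
    adversary = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    lam = 1.0  # fairness/accuracy trade-off weight (assumed hyperparameter)

    def train_step(X, y, g):
        """One alternating update. X: (N, 128) floats; y, g: (N,) 0/1 floats."""
        logits = predictor(X).squeeze(-1)
        # The adversary sees the prediction together with the true outcome,
        # so fooling it pushes P(prediction | outcome) to match across groups.
        adv_in = torch.stack([logits.detach(), y], dim=-1)
        # 1) Train the adversary to recover group membership.
        loss_a = bce(adversary(adv_in).squeeze(-1), g)
        opt_a.zero_grad(); loss_a.backward(); opt_a.step()
        # 2) Train the predictor to fit the outcome while fooling the adversary.
        adv_out = adversary(torch.stack([logits, y], dim=-1)).squeeze(-1)
        loss_p = bce(logits, y) - lam * bce(adv_out, g)
        opt_p.zero_grad(); loss_p.backward(); opt_p.step()
        return loss_p.item(), loss_a.item()

Increasing lam trades predictive accuracy for tighter alignment across groups,
which mirrors the fairness/performance trade-off discussed above.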
Protecting the Protected Group: Circumventing Harmful Fairness
Machine Learning (ML) algorithms shape our lives. Banks use them to decide
whether we are creditworthy; IT companies delegate recruitment decisions to
them; police apply ML to predict crime; and judges base verdicts on it.
However, real-world examples show that such automated decisions tend to
discriminate against protected groups. This potential discrimination has drawn
intense attention in both the media and the research community. Several formal
notions of fairness have been proposed, which take the form of constraints
that a "fair" algorithm must satisfy. We focus on scenarios where fairness is
imposed on a
self-interested party (e.g., a bank that maximizes its revenue). We find that
the disadvantaged protected group can be worse off after imposing a fairness
constraint. We introduce a family of \textit{Welfare-Equalizing} fairness
constraints that equalize per-capita welfare of protected groups, and include
\textit{Demographic Parity} and \textit{Equal Opportunity} as particular cases.
In this family, we characterize conditions under which the fairness constraint
helps the disadvantaged group. We also characterize the structure of the
optimal \textit{Welfare-Equalizing} classifier for the self-interested party,
and provide an algorithm to compute it. Overall, our
\textit{Welfare-Equalizing} fairness approach provides a unified framework for
discussing fairness in classification in the presence of a self-interested
party.
Comment: Published in AAAI 2021
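One concrete reading of this family: choosing welfare to be the acceptance
indicator for every individual equalizes acceptance rates (Demographic
Parity), while counting welfare only for qualified individuals equalizes
true-positive rates (Equal Opportunity). The following NumPy sketch checks
these two instantiations; the function name, signature, and tolerance are
illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def per_capita_welfare(accepted, y, group, notion):
        """Per-group average welfare under a classifier's decisions.

        accepted, y, group: 0/1 arrays (decisions, true labels, group ids).
        """
        welfare = {}
        for gv in np.unique(group):
            mask = group == gv
            if notion == "demographic_parity":
                # Everyone's welfare is 1 if accepted: equal per-capita
                # welfare across groups is Demographic Parity.
                welfare[gv] = accepted[mask].mean()
            elif notion == "equal_opportunity":
                # Only qualified (y == 1) members count: equal per-capita
                # welfare across groups is Equal Opportunity (equal TPRs).
                welfare[gv] = accepted[mask & (y == 1)].mean()
        return welfare

    # A classifier satisfies the constraint when group welfares match
    # (tolerance chosen here for illustration):
    # w = per_capita_welfare(accepted, y, group, "equal_opportunity")
    # is_fair = max(w.values()) - min(w.values()) <= 0.01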