Criteria for algorithmic fairness metric selection under different supervised classification scenarios
Master's thesis: Master in Intelligent Interactive Systems. Tutor: Carlos Castillo.

The research community, (supra-)national institutions, and regular users have noticed
that Artificial Intelligence and Machine Learning algorithms can amplify existing
inequity between groups. One way to limit this is to use group fairness metrics
to measure inequity and to optimise and select models. However, there are many different
group fairness metrics. Here I combined a clustering of metrics (as done by Friedler
et al. in their 2019 paper "A comparative study of fairness-enhancing interventions
in machine learning" and by Miron et al. in their 2020 paper "Addressing multiple
metrics of group fairness in data-driven decision making") and expert-driven recommendations
(from a case study by Rodolfa et al., published in 2020: "Case study:
Predictive Fairness to Reduce Misdemeanor Recidivism Through Social Service Interventions")
to select fairness metrics. Although this clustering was not consistent,
it enabled fairness metric selection and fostered general recommendations on the
matter: an algorithm designer should extensively study their algorithm’s application
context and explicitly justify their choices with respect to fairness. So long as there
is no definitive guide to metric selection, this approach should help sustain an ongoing and
context-specific discussion on algorithmic fairness, within and outside the research
community.
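To illustrate why metric selection matters, the following is a minimal sketch (not code from the thesis; function names and toy data are my own) of two widely used group fairness metrics, statistical parity difference and equal opportunity difference. On the same predictions they can disagree about whether a classifier treats two groups equally, which is exactly the kind of inconsistency the abstract alludes to.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between group 1 and group 0."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true-positive rates (recall) between group 1 and group 0."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    def tpr(g):
        return y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

# Toy data: two groups of four individuals each.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Both groups receive positive predictions at the same rate (parity holds),
# yet their true-positive rates differ (equal opportunity is violated).
print(statistical_parity_difference(y_pred, group))           # 0.0
print(equal_opportunity_difference(y_true, y_pred, group))    # 0.5
```

The two metrics give opposite verdicts on the same predictions, which is why an algorithm designer must justify the choice of metric from the application context rather than pick one arbitrarily.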