Predictive models that satisfy group fairness criteria in aggregate for
members of a protected class, but do not guarantee subgroup fairness, could
produce biased predictions for individuals at the intersection of two or more
protected classes. To address this risk, we propose Conditional Bias Scan
(CBS), a flexible auditing framework for detecting intersectional biases in
classification models. CBS identifies the subgroup for which there is the most
significant bias against the protected class, as compared to the equivalent
subgroup in the non-protected class, and can incorporate multiple commonly used
fairness definitions for both probabilistic and binarized predictions. We show
that this methodology can detect previously unidentified intersectional and
contextual biases in the COMPAS pre-trial risk assessment tool and has higher
bias detection power compared to similar methods that audit for subgroup
fairness.
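
To make the idea of a subgroup bias audit concrete, the following is a minimal sketch in Python, not the authors' CBS method: it brute-forces all subgroups defined by two hypothetical attributes on synthetic data and reports the subgroup with the largest gap in mean predicted risk between protected and non-protected individuals. CBS instead uses a conditional scan statistic and an efficient search over the subgroup space; the column names, score, and data below are illustrative assumptions only.

```python
# Illustrative sketch only: a brute-force subgroup scan over synthetic data.
# CBS itself uses a likelihood-ratio scan statistic conditioned on a chosen
# fairness definition; here we just compare mean predicted risk between the
# protected and non-protected class within each candidate subgroup.
import itertools
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000

# Synthetic audit data with hypothetical column names.
df = pd.DataFrame({
    "protected": rng.integers(0, 2, n),                      # 1 = protected class
    "age_group": rng.choice(["<25", "25-45", ">45"], n),
    "prior_offenses": rng.choice(["none", "some", "many"], n),
    "pred_risk": rng.uniform(0, 1, n),                        # model's probabilistic prediction
})

subgroup_attrs = ["age_group", "prior_offenses"]
best_score, best_subgroup = -np.inf, None

# Exhaustively enumerate subgroups defined by conjunctions of attribute values.
for r in range(1, len(subgroup_attrs) + 1):
    for attrs in itertools.combinations(subgroup_attrs, r):
        value_lists = [df[a].unique() for a in attrs]
        for values in itertools.product(*value_lists):
            mask = np.ones(n, dtype=bool)
            for a, v in zip(attrs, values):
                mask &= (df[a] == v).to_numpy()
            prot = df.loc[mask & (df["protected"] == 1).to_numpy(), "pred_risk"]
            nonprot = df.loc[mask & (df["protected"] == 0).to_numpy(), "pred_risk"]
            if len(prot) < 30 or len(nonprot) < 30:
                continue  # skip subgroups too small to compare meaningfully
            # Naive bias score: excess mean predicted risk for the protected class.
            score = prot.mean() - nonprot.mean()
            if score > best_score:
                best_score, best_subgroup = score, dict(zip(attrs, values))

print("Most-biased subgroup (by this naive score):", best_subgroup, round(best_score, 3))
```

On random synthetic data the reported gap is just noise; the sketch is meant only to show the shape of the search problem (subgroups at the intersection of multiple attributes) that CBS addresses with a principled score statistic and significance testing.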