FairCal: Fairness Calibration for Face Verification
Despite being widely used, face recognition models suffer from bias: the
probability of a false positive (incorrect face match) strongly depends on
sensitive attributes such as the ethnicity of the face. As a result, these
models can disproportionately and negatively impact minority groups,
particularly when used by law enforcement. The majority of bias reduction
methods have several drawbacks: they require end-to-end retraining, may be infeasible due to privacy issues, and often reduce accuracy. An alternative is to use post-processing methods that build fairer decision classifiers on top of the features of pre-trained models. However, these methods still have
drawbacks: they reduce accuracy (AGENDA, FTC), or require retuning for
different false positive rates (FSN). In this work, we introduce the Fairness
Calibration (FairCal) method, a post-training approach that: (i) increases
model accuracy (improving the state-of-the-art), (ii) produces
fairly-calibrated probabilities, (iii) significantly reduces the gap in the
false positive rates, (iv) does not require knowledge of the sensitive
attribute, and (v) does not require retraining, training an additional model,
or retuning. We apply it to the task of Face Verification, and obtain
state-of-the-art results with all the above advantages.
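
To make the post-processing idea concrete, the sketch below shows one way a FairCal-style calibration could sit on top of a frozen face-embedding model: embeddings are grouped into unsupervised pseudo-clusters and a separate calibration map is fit per cluster, so no sensitive-attribute labels are required. The K-means clustering, the isotonic-regression calibrator, the score-averaging rule, and all function names here are illustrative assumptions, not the paper's exact procedure.

    # Hedged sketch of a FairCal-style post-hoc calibration pipeline.
    # Assumptions (not taken from the abstract): K-means pseudo-groups on
    # pre-trained embeddings and per-cluster isotonic regression as the
    # calibration map; the paper's exact calibrator may differ.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.isotonic import IsotonicRegression

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def fit_faircal(embeddings, pairs, labels, n_clusters=10, seed=0):
        # embeddings: (N, d) array from a frozen face-recognition model.
        # pairs:      list of (i, j) index pairs from a calibration set.
        # labels:     1 for genuine (same-identity) pairs, 0 for impostors.
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(embeddings)
        scores = np.array([cosine(embeddings[i], embeddings[j]) for i, j in pairs])
        labels = np.asarray(labels, dtype=float)

        calibrators = {}
        for k in range(n_clusters):
            # A pair contributes to cluster k if either image falls in k.
            mask = np.array([(km.labels_[i] == k) or (km.labels_[j] == k)
                             for i, j in pairs])
            if mask.sum() < 2:
                continue
            calibrators[k] = IsotonicRegression(out_of_bounds="clip").fit(
                scores[mask], labels[mask])
        return km, calibrators

    def calibrated_probability(km, calibrators, emb_a, emb_b):
        # Map a raw similarity to a calibrated match probability. The two
        # images may land in different clusters; their per-cluster calibrated
        # scores are averaged here (an illustrative choice).
        s = cosine(emb_a, emb_b)
        ks = km.predict(np.vstack([emb_a, emb_b]))
        probs = [calibrators[k].predict([s])[0] for k in ks if k in calibrators]
        return float(np.mean(probs)) if probs else s

In this sketch, calibrated_probability returns a probability that can be thresholded at a single global operating point, without retraining the embedding model, knowing sensitive attributes, or retuning per group.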