Variation of Gender Biases in Visual Recognition Models Before and After Finetuning
We introduce a framework to measure how biases change before and after
fine-tuning a large-scale visual recognition model for a downstream task.
Deep learning models trained on increasing amounts of data are known to
encode societal biases, and many computer vision systems today rely on
models pretrained on large-scale datasets. While bias mitigation techniques
have been developed for tuning models for downstream tasks, the effects of
biases already encoded in a pretrained model remain unclear. Our framework
incorporates sets of canonical images representing individual concepts and
pairs of concepts to highlight changes in biases for an array of
off-the-shelf pretrained models across model sizes, dataset sizes, and
training objectives.
Through our analyses, we find that (1) supervised models trained on datasets
such as ImageNet-21k are more likely than self-supervised models to retain
their pretraining biases regardless of the target dataset. We also find that
(2) models finetuned on larger-scale datasets are more likely to introduce
new biased associations. Our results also suggest that (3) biases can
transfer to finetuned models, and that the finetuning objective and dataset
can impact the extent of transferred biases.

Comment: 10 pages, 3 figures