Assessment of ethnic and gender bias in automated first impression analysis

Abstract

This thesis investigates possible gender and ethnic biases in state-of-the-art deep learning methods for first impression analysis. Businesses that analyse candidates with such software want to find the best person for the job without anyone being judged by their gender or ethnicity. To examine this, a first impression dataset annotated with the Big Five personality traits, together with additional information about each person's gender and ethnic background, was used. Biases were investigated with models trained on both balanced and imbalanced data, where balanced refers to the number of frames used from people classified as Asian, African-American, or Caucasian in the dataset. The results with the balanced and imbalanced datasets were similar. With all models, the accuracy for Asians was considerably higher than for the other groups, which may stem from the dataset not containing enough variance in the Asian data, so that during evaluation all Asians were treated similarly.
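
The frame-level balancing described above can be illustrated with a minimal sketch. The snippet below is hypothetical, not the thesis code: it assumes frame-level metadata with an `ethnicity` column and downsamples every group to the size of the smallest one, which is only one of several possible balancing strategies (oversampling or loss weighting are alternatives).

```python
import pandas as pd

# Hypothetical frame-level metadata; in practice this would be loaded from
# the dataset's annotation files (one row per extracted video frame).
frames = pd.DataFrame({
    "frame_id": range(9),
    "ethnicity": ["Asian", "Asian",
                  "African-American", "African-American", "African-American",
                  "Caucasian", "Caucasian", "Caucasian", "Caucasian"],
})

def balance_by_frames(df: pd.DataFrame, group_col: str = "ethnicity",
                      seed: int = 0) -> pd.DataFrame:
    """Downsample every group to the frame count of the smallest group."""
    smallest = df[group_col].value_counts().min()
    return (df.groupby(group_col, group_keys=False)
              .apply(lambda g: g.sample(n=smallest, random_state=seed)))

balanced = balance_by_frames(frames)
print(balanced["ethnicity"].value_counts())  # each group now contributes 2 frames
```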