Disparate biases associated with datasets and trained classifiers in hateful
and abusive content identification tasks have raised many concerns recently.
Although dataset bias in abusive language detection has been addressed more
frequently, biases arising from trained classifiers have received comparatively
little attention. Here, we first introduce a transfer learning approach
for hate speech detection based on an existing pre-trained language model
called BERT and evaluate the proposed model on two publicly available datasets
annotated for racism, sexism, hate or offensive content on Twitter. Next, we
introduce a bias alleviation mechanism into the hate speech detection task to
mitigate the effect of training-set bias during the fine-tuning of our
pre-trained BERT-based model. Toward that end, we use an existing
regularization method to reweight input samples, thereby decreasing the effect
of training-set n-grams that are highly correlated with class labels, and then
fine-tune our pre-trained BERT-based model with the re-weighted samples. To
evaluate our bias alleviation mechanism, we employ a cross-domain approach in
which we use the trained classifiers on the aforementioned datasets to predict
the labels of two new Twitter datasets, the AAE-aligned and White-aligned
groups, which contain tweets written in African-American English (AAE) and
Standard American English (SAE), respectively. The results show the existence of
systematic racial bias in trained classifiers as they tend to assign tweets
written in AAE from the AAE-aligned group to negative classes such as racism,
sexism, hate, and offensive more often than tweets written in SAE from the
White-aligned group. However, the racial bias in our classifiers is reduced
significantly after our bias alleviation mechanism is incorporated. This work
could constitute a first step towards debiasing hate speech and abusive
language detection systems.
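
As a rough illustration of the reweighting-and-fine-tuning step summarized above, the Python sketch below down-weights training samples whose n-grams are strongly correlated with a class label and then uses those weights in a sample-weighted cross-entropy loss while fine-tuning a BERT classifier. The correlation heuristic, binary-label simplification, function names, and hyperparameters are illustrative assumptions for exposition only; they are not the paper's exact regularization method or training configuration.

import numpy as np
import torch
import torch.nn.functional as F
from sklearn.feature_extraction.text import CountVectorizer
from transformers import BertTokenizerFast, BertForSequenceClassification

def ngram_correlation_weights(texts, labels, max_ngram=2):
    # Assumed heuristic: give lower weight to samples containing n-grams whose
    # conditional label rate deviates strongly from the class prior
    # (binary-label simplification of the paper's multi-class setting).
    vec = CountVectorizer(ngram_range=(1, max_ngram), binary=True, min_df=1)
    X = vec.fit_transform(texts).toarray()               # (n_samples, n_ngrams), dense for clarity
    y = np.asarray(labels, dtype=float)
    pos_rate = (X.T @ y + 1.0) / (X.sum(axis=0) + 2.0)   # smoothed P(y = 1 | n-gram present)
    corr = np.abs(pos_rate - y.mean())                    # deviation from the class prior
    max_corr = (X * corr).max(axis=1)                     # strongest correlated n-gram per sample
    return 1.0 / (1.0 + max_corr)                         # higher correlation -> lower weight

def weighted_finetune_step(model, tokenizer, texts, labels, weights, optimizer, device="cpu"):
    # One fine-tuning step with a per-sample weighted cross-entropy loss.
    model.train()
    enc = tokenizer(list(texts), padding=True, truncation=True,
                    max_length=64, return_tensors="pt").to(device)
    labels_t = torch.tensor(labels, device=device)
    weights_t = torch.tensor(weights, dtype=torch.float, device=device)
    per_sample = F.cross_entropy(model(**enc).logits, labels_t, reduction="none")
    loss = (weights_t * per_sample).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch with toy data (real training would iterate over mini-batches).
texts = ["you people are awful", "have a nice day"]
labels = [1, 0]
weights = ngram_correlation_weights(texts, labels)
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
weighted_finetune_step(model, tokenizer, texts, labels, weights, optimizer)

In practice the weights would be computed once over the full training set and then applied batch by batch throughout fine-tuning, rather than per toy example as shown here.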