Hate Speech Detection and Racial Bias Mitigation in Social Media based on BERT model
Disparate biases associated with datasets and trained classifiers in hateful
and abusive content identification tasks have raised many concerns recently.
Although the problem of biased datasets in abusive language detection has been
addressed frequently, biases arising from trained classifiers have received far
less attention. Here, we first introduce a transfer learning approach
for hate speech detection based on an existing pre-trained language model
called BERT and evaluate the proposed model on two publicly available datasets
annotated for racism, sexism, hate or offensive content on Twitter. Next, we
introduce a bias alleviation mechanism for the hate speech detection task to
mitigate the effect of bias in the training set during fine-tuning of our
pre-trained BERT-based model. Toward that end, we use an existing
regularization method to reweight input samples, thereby decreasing the effect
of training-set n-grams that are highly correlated with class labels, and then
fine-tune our pre-trained BERT-based model on the re-weighted samples. To
evaluate our bias alleviation mechanism, we employ a cross-domain approach in
which we use the classifiers trained on the aforementioned datasets to predict
the labels of two new Twitter datasets, AAE-aligned and White-aligned,
containing tweets written in African-American English (AAE) and Standard
American English (SAE), respectively. The results show systematic racial bias
in the trained classifiers: they assign tweets written in AAE from the
AAE-aligned group to negative classes such as racism, sexism, hate, and
offensive more often than tweets written in SAE from the White-aligned group.
However, the racial bias in our classifiers decreases significantly once our
bias alleviation mechanism is incorporated. This work
could constitute the first step towards debiasing hate speech and abusive
language detection systems.
Comment: This paper has been accepted by the PLOS ONE journal in August 2020.
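
The reweighting idea admits a short sketch. The Python fragment below is a
minimal, hypothetical illustration, assuming a simple covariance-style
correlation score between n-grams and a binary label and a weighted
cross-entropy loss during BERT fine-tuning; the authors' actual regularization
method, checkpoint, and hyperparameters may differ.

```python
# Illustrative sketch only: down-weight samples whose n-grams correlate
# strongly with the class label, then fine-tune BERT with a weighted loss.
# The correlation score, checkpoint, and hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.feature_extraction.text import CountVectorizer
from transformers import BertForSequenceClassification, BertTokenizer

def correlation_weights(texts, labels):
    """Weight each sample inversely to the label-correlation of its n-grams."""
    X = CountVectorizer(ngram_range=(1, 2), binary=True).fit_transform(texts)
    y = np.asarray(labels)
    p_pos = (y == 1).mean()                         # P(y = 1)
    p_feat = X.mean(axis=0).A1                      # P(n-gram present)
    p_joint = X[y == 1].mean(axis=0).A1 * p_pos     # P(n-gram, y = 1)
    corr = np.abs(p_joint - p_feat * p_pos)         # covariance-style score
    # Average the scores of the n-grams each sample contains.
    sample_score = (X @ corr) / np.maximum(X.sum(axis=1).A1, 1)
    return 1.0 / (1.0 + sample_score)               # high correlation -> low weight

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def weighted_step(texts, labels, weights):
    """One fine-tuning step with per-sample weights applied to the loss."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    per_sample = F.cross_entropy(model(**enc).logits, torch.tensor(labels),
                                 reduction="none")
    loss = (torch.tensor(weights, dtype=torch.float) * per_sample).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```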
Towards Socially Responsible AI: Cognitive Bias-Aware Multi-Objective Learning
Human society has a long history of suffering from cognitive biases that lead
to social prejudice and mass injustice. The prevalence of cognitive biases in
large volumes of historical data poses the threat that AI systems trained on
such data will manifest them as unethical and seemingly inhuman predictions.
To alleviate this problem, we propose a
bias-aware multi-objective learning framework that, given a set of identity
attributes (e.g., gender, ethnicity) and a subset of sensitive categories among
the possible prediction outputs, learns to reduce the frequency of predicting
certain combinations of them, e.g., stereotypes such as "most blacks use
abusive language" or "fear is a virtue of women". Our
experiments, conducted on an emotion prediction task with balanced class
priors, show that a set of baseline bias-agnostic models exhibits cognitive
biases with respect to gender, e.g., women are predicted to be afraid whereas
men are more often predicted to be angry. In contrast, our proposed bias-aware
multi-objective learning methodology is shown to reduce such biases in the
predicted emotions.
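
As a rough illustration of what such a bias-aware objective could look like,
the PyTorch sketch below adds a penalty on the probability mass a model assigns
to sensitive classes for samples that carry a given identity attribute. The
penalty form, the class index, and the trade-off weight are illustrative
assumptions rather than the paper's exact formulation.

```python
# Hedged sketch of a bias-aware multi-objective loss; the penalty form,
# sensitive class index, and trade-off weight are illustrative assumptions.
import torch
import torch.nn.functional as F

FEAR_IDX = 2                      # hypothetical index of the "fear" emotion class
SENSITIVE_CLASSES = [FEAR_IDX]    # classes whose link to an identity is discouraged

def bias_aware_loss(logits, labels, identity_mask, lam=0.1):
    """Classification loss plus a penalty on sensitive (identity, class) pairs."""
    task_loss = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=-1)
    if identity_mask.any():
        # Probability mass placed on sensitive classes for, e.g., female-marked samples.
        penalty = probs[identity_mask][:, SENSITIVE_CLASSES].sum(dim=-1).mean()
    else:
        penalty = logits.new_zeros(())
    return task_loss + lam * penalty
```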
Detecting East Asian Prejudice on Social Media
The outbreak of COVID-19 has transformed societies across the world as
governments tackle the health, economic and social costs of the pandemic. It
has also raised concerns about the spread of hateful language and prejudice
online, especially hostility directed against East Asia. In this paper we
report on the creation of a classifier that detects and categorizes social
media posts from Twitter into four classes: Hostility against East Asia,
Criticism of East Asia, Meta-discussions of East Asian prejudice and a neutral
class. The classifier achieves an F1 score of 0.83 across all four classes. We
provide our final model (coded in Python), as well as a new 20,000-tweet
training dataset used to build the classifier, two analyses of hashtags
associated with East Asian prejudice, and the annotation codebook. The
classifier can be implemented by other researchers, assisting with both online
content moderation processes and further research into the dynamics,
prevalence, and impact of East Asian prejudice online during this global
pandemic.
Comment: 12 pages.
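
For readers who want a concrete starting point, the Python sketch below
outlines a four-class tweet classifier evaluated with macro-averaged F1, the
metric reported above; the checkpoint and label names are placeholders and do
not correspond to the authors' released model.

```python
# Minimal sketch of a four-class classifier evaluated with macro F1; the
# checkpoint and label names are placeholders, not the authors' released model.
import torch
from sklearn.metrics import f1_score
from transformers import AutoModelForSequenceClassification, AutoTokenizer

LABELS = ["hostility", "criticism", "meta-discussion", "neutral"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

@torch.no_grad()
def predict(tweets):
    """Return the predicted class index for each tweet."""
    enc = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
    return model(**enc).logits.argmax(dim=-1).tolist()

def macro_f1(tweets, gold):
    """Macro-averaged F1 across all four classes, matching the reported metric."""
    return f1_score(gold, predict(tweets), average="macro")
```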