Explainable and High-Performance Hate and Offensive Speech Detection
The spread of information through social media platforms can create
environments hostile to vulnerable communities and silence certain groups in
society. To mitigate such harms, several models have been developed to detect
hate and offensive speech. However, since errors in detecting hate and
offensive speech could incorrectly exclude individuals from social media
platforms, which can reduce trust, there is a need for explainable and
interpretable models. Thus, we build an explainable, interpretable, and
high-performance model based on the XGBoost algorithm, trained on Twitter
data.
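As a minimal sketch of such a pipeline (the abstract does not specify the
feature extraction; TF-IDF features, the toy tweets, and the three-way label
encoding below are illustrative assumptions, not the paper's actual
configuration):

```python
# Minimal sketch: tweets -> sparse TF-IDF features -> XGBoost classifier.
# The TF-IDF features, toy data, and label encoding are assumptions for
# illustration; the paper does not specify its exact preprocessing here.
from sklearn.feature_extraction.text import TfidfVectorizer
from xgboost import XGBClassifier

tweets = ["toy hateful tweet", "toy offensive tweet", "toy harmless tweet"]
labels = [0, 1, 2]  # hypothetical encoding: 0 = hate, 1 = offensive, 2 = neither

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(tweets)        # sparse document-term matrix

model = XGBClassifier(eval_metric="mlogloss")  # gradient-boosted trees
model.fit(X, labels)                        # a real run would hold out a test split

print(model.predict(vectorizer.transform(["another toy tweet"])))
```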
On the unbalanced Twitter data, XGBoost outperformed the LSTM, AutoGluon, and
ULMFiT models on hate speech detection, with an F1 score of 0.75 compared to
0.38, 0.37, and 0.38, respectively.
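Per-class F1 scores like these can be computed with scikit-learn (continuing
the hypothetical toy pipeline above; in practice the predictions would come
from a held-out test set):

```python
# Per-class F1, as used for the comparisons above (sketch on the toy data;
# real evaluation would use held-out predictions).
from sklearn.metrics import f1_score

y_pred = model.predict(X)
print(f1_score(labels, y_pred, average=None))  # one F1 value per class
```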
When we down-sampled the data to three separate classes of approximately 5,000
tweets each, XGBoost again performed better than LSTM, AutoGluon, and ULMFiT,
with F1 scores for hate speech detection of 0.79 vs. 0.69, 0.77, and 0.66,
respectively.
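The abstract does not spell out the down-sampling procedure; one
straightforward version, assuming a labeled DataFrame with hypothetical
"text" and "label" columns, is:

```python
# One plausible down-sampling step: cap each class at ~5,000 tweets.
# The file name and column names are assumptions for illustration.
import pandas as pd

df = pd.read_csv("tweets.csv")  # hypothetical labeled-tweet file

balanced = (
    df.groupby("label", group_keys=False)
      .apply(lambda g: g.sample(n=min(len(g), 5000), random_state=42))
      .reset_index(drop=True)
)
print(balanced["label"].value_counts())
```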
For offensive speech detection on the down-sampled data, XGBoost achieved an
F1 score of 0.83, outperforming AutoGluon (0.82) and ULMFiT (0.79) but
trailing LSTM (0.88). We apply Shapley Additive Explanations (SHAP) to our
XGBoost models' outputs to make them explainable and interpretable, in
contrast to the black-box LSTM, AutoGluon, and ULMFiT models.
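A minimal sketch of applying SHAP to a trained XGBoost model follows
(continuing the hypothetical pipeline above; TreeExplainer computes Shapley
values efficiently for tree ensembles):

```python
# Sketch: explain the XGBoost model's outputs with SHAP. Variable names
# continue the hypothetical toy pipeline above.
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.toarray())  # per-feature contributions

# Global summary: which tokens drive predictions for each class.
shap.summary_plot(shap_values, X.toarray(),
                  feature_names=vectorizer.get_feature_names_out())
```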