Automated Identification of Sexual Orientation and Gender Identity Discriminatory Texts from Issue Comments
In an industry dominated by straight men, developers representing other gender identities and sexual orientations often encounter hateful or discriminatory messages. Such communications pose barriers to participation for women and LGBTQ+ persons. Due to sheer volume, manual inspection of all communications for discriminatory content is infeasible for a large-scale Free/Libre Open-Source Software (FLOSS) community. To address this challenge, this study aims to develop an automated mechanism to identify Sexual orientation and Gender identity Discriminatory (SGID) texts in software developers' communications. Toward this goal, we trained and evaluated SGID4SE (Sexual orientation and Gender identity Discriminatory text identification for Software Engineering texts) as a supervised learning-based SGID detection tool. SGID4SE incorporates six preprocessing steps and ten state-of-the-art algorithms, and implements six different strategies to improve performance on the minority class. We empirically evaluated each strategy and identified an optimum configuration for each algorithm. In our ten-fold cross-validation-based evaluations, a BERT-based model achieves the best performance, with 85.9% precision, 80.0% recall, and 82.9% F1-score for the SGID class. This model achieves 95.7% accuracy and an 80.4% Matthews Correlation Coefficient. Our dataset and tool establish a foundation for further research in this direction.
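One widely used minority-class strategy of the kind the abstract alludes to is class weighting. The sketch below is illustrative only: it uses a toy, imbalanced corpus (not the SGID4SE dataset) and a single classifier, but reports the same metrics the study cites (precision, recall, F1, and the Matthews Correlation Coefficient), assuming scikit-learn is available.

```python
# A minimal sketch of one minority-class strategy (class weighting),
# assuming scikit-learn; the toy corpus below is invented for
# illustration and is NOT the SGID4SE dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (f1_score, matthews_corrcoef,
                             precision_score, recall_score)
from sklearn.model_selection import cross_val_predict

# Imbalanced toy data: label 1 marks the minority (discriminatory) class.
texts = (["thanks for the helpful patch and review"] * 8
         + ["please rebase and update the changelog"] * 8
         + ["example of a hateful discriminatory message"] * 4)
labels = [0] * 16 + [1] * 4

# For simplicity the vectorizer is fit on the whole corpus; a real
# pipeline would fit it inside each cross-validation fold.
X = TfidfVectorizer().fit_transform(texts)

# class_weight="balanced" reweights the loss inversely to class
# frequency, so errors on the rare class cost more.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
pred = cross_val_predict(clf, X, labels, cv=4)

print(f"precision={precision_score(labels, pred):.2f} "
      f"recall={recall_score(labels, pred):.2f} "
      f"F1={f1_score(labels, pred):.2f} "
      f"MCC={matthews_corrcoef(labels, pred):.2f}")
```

Other strategies in the same family include oversampling the minority class or tuning the decision threshold; the study evaluates six such strategies per algorithm.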
Misogyny Detection in Social Media on the Twitter Platform
The thesis is devoted to the problem of misogyny detection in social media. In this work we analyse the difference between offensive language in general and misogynistic language in social media, and review the best existing approaches to detecting offensive and misogynistic language, which are based on classical machine learning and neural networks. We also review recent shared tasks aimed at detecting misogyny in social media, several of which we have participated in. We propose an approach to the detection and classification of misogyny in texts based on an ensemble of classical machine learning models: Logistic Regression, Naive Bayes, and Support Vector Machines. At the preprocessing stage, we also use linguistic features and novel approaches that improve classification quality. We tested the model on real datasets, including both English and multilingual corpora. The results we achieved with our model are highly competitive in this area and demonstrate the capability for future improvement.
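The ensemble described above can be sketched as a majority vote over the three named classifiers. This is a minimal illustration assuming scikit-learn and TF-IDF features; the toy corpus and labels are invented, and the thesis's linguistic features and real datasets are not reproduced here.

```python
# A minimal sketch of an LR + Naive Bayes + SVM voting ensemble,
# assuming scikit-learn; the four-sentence corpus is a made-up
# placeholder, not the thesis's data.
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["you are brilliant", "great work today",
               "women belong in the kitchen",
               "she is too emotional to lead"]
train_labels = [0, 0, 1, 1]  # 1 = misogynistic (toy labels)

model = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("nb", MultinomialNB()),
                    ("svm", LinearSVC())],
        voting="hard",  # each model votes; the majority label wins
    ),
)
model.fit(train_texts, train_labels)
print(model.predict(train_texts))
```

Hard voting is used here because LinearSVC does not expose class probabilities; soft voting (averaging predicted probabilities) is an alternative when all base models support it.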