Automated Identification of Sexual Orientation and Gender Identity Discriminatory Texts from Issue Comments
In an industry dominated by straight men, many developers representing other
gender identities and sexual orientations often encounter hateful or
discriminatory messages. Such communications pose barriers to participation for
women and LGBTQ+ persons. Due to sheer volume, manual inspection of all
communications for discriminatory communication is infeasible for a large-scale
Free/Libre Open-Source Software (FLOSS) community. To address this challenge,
this study aims to develop an automated mechanism to identify Sexual
orientation and Gender identity Discriminatory (SGID) texts from software
developers' communications. To this end, we trained and evaluated SGID4SE
(Sexual orientation and Gender Identity Discriminatory text identification for
Software Engineering texts) as a supervised learning-based SGID detection tool.
SGID4SE incorporates six preprocessing steps and ten state-of-the-art
algorithms and implements six different strategies to improve performance on
the minority class. We empirically evaluated each strategy and
identified an optimum configuration for each algorithm. In our ten-fold
cross-validation-based evaluations, a BERT-based model achieves the best
performance with 85.9% precision, 80.0% recall, and 82.9% F1-Score for the SGID
class. This model achieves 95.7% accuracy and 80.4% Matthews Correlation
Coefficient. Our dataset and tool establish a foundation for further research
in this direction.
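The abstract reports per-class precision, recall, and F1 for the SGID class alongside overall accuracy and the Matthews Correlation Coefficient. As a minimal pure-Python sketch (the function name and structure are illustrative, not SGID4SE's actual API), these metrics can be computed from binary predictions as follows:

```python
import math

def sgid_metrics(y_true, y_pred):
    """Compute positive-class (label 1) precision/recall/F1, plus overall
    accuracy and the Matthews Correlation Coefficient, from binary labels.
    Illustrative sketch only, not the paper's actual evaluation code."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / len(y_true)
    # MCC uses all four confusion-matrix cells, which makes it robust to
    # class imbalance -- relevant here, since SGID texts are the minority.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"precision": precision, "recall": recall, "f1": f1,
            "accuracy": accuracy, "mcc": mcc}
```

Reporting MCC alongside accuracy is a sensible choice for this task: with a heavily imbalanced dataset, a classifier that rarely predicts the SGID class can still score high accuracy, while MCC penalizes it.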
Towards Automated Classification of Code Review Feedback to Support Analytics
Background: As improving code review (CR) effectiveness is a priority for
many software development organizations, projects have deployed CR analytics
platforms to identify potential improvement areas. The number of issues
identified, which is a crucial metric to measure CR effectiveness, can be
misleading if all issues are placed in the same bin. Therefore, a finer-grained
classification of issues identified during CRs can provide actionable insights
to improve CR effectiveness. Although a recent work by Fregnan et al. proposed
automated models to classify CR-induced changes, we have noticed two potential
improvement areas -- i) classifying comments that do not induce changes and ii)
using deep neural networks (DNN) in conjunction with code context to improve
performance. Aims: This study aims to develop an automated CR comment
classifier that leverages DNN models to achieve a more reliable performance
than Fregnan et al. Method: Using a manually labeled dataset of 1,828 CR
comments, we trained and evaluated supervised learning-based DNN models
leveraging code context, comment text, and a set of code metrics to classify CR
comments into one of the five high-level categories proposed by Turzo and Bosu.
Results: Based on our 10-fold cross-validation-based evaluations of multiple
combinations of tokenization approaches, we found that a model using CodeBERT
achieved the best accuracy of 59.3%, which is 18.7% higher than that of Fregnan
et al.'s approach. Conclusion: Besides facilitating
improved CR analytics, our proposed model can be useful for developers in
prioritizing code review feedback and selecting reviewers.
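Both studies evaluate their models with 10-fold cross-validation, and for a five-category labeling scheme like Turzo and Bosu's, stratified folds keep each category evenly represented. A minimal pure-Python sketch of that splitting step (function names are illustrative assumptions, not the papers' actual tooling):

```python
import random
from collections import defaultdict

def stratified_folds(labels, k=10, seed=42):
    """Partition example indices into k folds, spreading each category's
    examples evenly across folds. Illustrative sketch, not the papers'
    actual evaluation code."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        rng.shuffle(indices)
        # Round-robin assignment keeps per-class counts balanced per fold.
        for j, idx in enumerate(indices):
            folds[j % k].append(idx)
    return folds

def cross_validation_splits(labels, k=10, seed=42):
    """Yield (train_indices, test_indices) pairs: each fold serves once as
    the held-out test set while the remaining k-1 folds form the train set."""
    folds = stratified_folds(labels, k, seed)
    for i in range(k):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i
                 for idx in fold]
        yield train, test
```

Stratification matters here because category frequencies in CR comments are skewed; plain random folds could leave a rare category entirely absent from some test folds, distorting per-fold accuracy.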