Graph-based Features for Automatic Online Abuse Detection
While online communities have become increasingly important over the years,
the moderation of user-generated content is still performed mostly manually.
Automating this task is an important step in reducing the financial cost
associated with moderation, but the majority of automated approaches strictly
based on message content are highly vulnerable to intentional obfuscation. In
this paper, we discuss methods for extracting conversational networks based on
raw multi-participant chat logs, and we study the contribution of graph
features to a classification system that aims to determine if a given message
is abusive. The conversational graph-based system yields unexpectedly high
performance, with results comparable to those previously obtained with a
content-based approach.
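The pipeline the abstract describes — building a conversational network from raw multi-participant chat logs and deriving per-author graph features for an abuse classifier — can be sketched in plain Python. This is a minimal illustration, not the paper's exact construction: the sliding-window heuristic for linking speakers, the `window` size, and the specific degree features are all assumptions made here for clarity.

```python
from collections import defaultdict

def conversation_graph(messages, window=3):
    """Build a directed author graph from a chat log: link each message's
    author to the authors of the preceding `window` messages. This is a
    common proxy for reply structure when logs lack explicit threading
    (an illustrative assumption, not the paper's method)."""
    edges = defaultdict(set)
    for i, (author, _text) in enumerate(messages):
        edges.setdefault(author, set())  # ensure isolated authors appear
        for prev_author, _prev_text in messages[max(0, i - window):i]:
            if prev_author != author:
                edges[author].add(prev_author)
    return edges

def graph_features(edges, author):
    """Simple degree features for one author, of the kind that could
    feed a message-level abuse classifier."""
    out_deg = len(edges.get(author, set()))
    in_deg = sum(author in targets for targets in edges.values())
    return {"out_degree": out_deg, "in_degree": in_deg}

# Toy chat log: (author, message) pairs.
log = [("ann", "hi"), ("bob", "hey"), ("cat", "yo"),
       ("bob", "..."), ("ann", "ok")]
g = conversation_graph(log)
feats = graph_features(g, "bob")  # bob spoke near both ann and cat
```

In a full system, richer structural features (e.g. centrality measures over the same graph) would be concatenated with or substituted for content features before classification.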
Dataset annotation in abusive language detection
The last decade has seen the rise of research in the area of hate speech and abusive language detection. A great deal of work has been conducted, with new datasets introduced and new models put forward. However, contrastive studies of the annotation of different datasets have also revealed that some problematic issues remain. Ambiguous and inconsistent definitions across studies make it difficult to evaluate model reproducibility and generalizability, and require additional steps for dataset standardization. To overcome these challenges, the field needs a common understanding of concepts and problems so that standard datasets and compatible approaches can be developed, avoiding inefficient and redundant research. This article attempts to identify persistent challenges and develop guidelines to help future annotation tasks. Some of the challenges and guidelines identified and discussed in the article relate to concept subjectivity, the focus on overt hate speech, dataset integrity, and the lack of ethical considerations.