Duluth at SemEval-2019 Task 6: Lexical Approaches to Identify and Categorize Offensive Tweets
This paper describes the Duluth systems that participated in SemEval-2019
Task 6, Identifying and Categorizing Offensive Language in Social Media
(OffensEval). For the most part these systems took traditional Machine Learning
approaches that built classifiers from lexical features found in manually
labeled training data. However, our most successful system for classifying a
tweet as offensive (or not) was a rule-based black-list approach, and we also
experimented with combining the training data from two different but related
SemEval tasks. Our best systems in each of the three OffensEval tasks placed in
the middle of the comparative evaluation, ranking 57th of 103 in task A, 39th
of 75 in task B, and 44th of 65 in task C.

Comment: 7 pages. Appears in the Proceedings of the 13th International
Workshop on Semantic Evaluation (SemEval-2019), June 2019, pp. 593-599,
Minneapolis, MN (a NAACL-2019 workshop, aka OffensEval-2019).
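The rule-based black-list approach described in the abstract can be sketched roughly as follows. This is an illustrative sketch only: the term list, the `classify` helper, and the `OFF`/`NOT` labels are placeholders, not the actual Duluth resources.

```python
# Hypothetical sketch of a rule-based black-list classifier:
# a tweet is labeled offensive if it contains any term from a
# manually curated list. The terms below are placeholders.
BLACK_LIST = {"idiot", "stupid"}

def classify(tweet: str) -> str:
    """Return "OFF" if any black-listed term occurs in the tweet, else "NOT"."""
    tokens = tweet.lower().split()
    return "OFF" if any(t in BLACK_LIST for t in tokens) else "NOT"
```

Such a rule-based system needs no training, which is one reason it can serve as a strong, simple baseline for binary offensive-language detection.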
Duluth at SemEval-2020 Task 12: Offensive Tweet Identification in English with Logistic Regression
This paper describes the Duluth systems that participated in SemEval-2020
Task 12, Multilingual Offensive Language Identification in Social Media
(OffensEval-2020). We participated in the three English language tasks. Our
systems provide a simple Machine Learning baseline using logistic regression.
We trained our models on the distantly supervised training data made available
by the task organizers and used no other resources. As might be expected, we did
not rank highly in the comparative evaluation: 79th of 85 in Task A, 34th of 43
in Task B, and 24th of 39 in Task C. We carried out a qualitative analysis of
our results and found that the class labels in the gold standard data are
somewhat noisy. We hypothesize that the extremely high accuracy (> 90%) of the
top ranked systems may reflect methods that learn the training data very well
but may not generalize to the task of identifying offensive language in
English. This analysis includes examples of tweets that, despite being
mildly redacted, are still offensive.

Comment: 10 pages. To appear in the Proceedings of the 14th International
Workshop on Semantic Evaluation (SemEval-2020), December 12-13, 2020,
Barcelona (a COLING-2020 workshop, aka OffensEval-2020).
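A logistic-regression baseline of the kind the abstract describes can be sketched as below, assuming scikit-learn. The tiny toy corpus and the `OFF`/`NOT` labels are illustrative placeholders, not the distantly supervised OffensEval training data.

```python
# Minimal sketch of a logistic-regression baseline for offensive-tweet
# identification: TF-IDF lexical features fed into a logistic-regression
# classifier. The four example tweets are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["you are awful", "what a lovely day",
          "awful terrible person", "lovely kind friend"]
labels = ["OFF", "NOT", "OFF", "NOT"]

# Pipeline: vectorize tweets into TF-IDF features, then fit the classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

pred = model.predict(["awful person"])[0]
```

Because both the features and the model are simple and fully determined by the training data, a baseline like this makes it easy to see how far lexical evidence alone carries the task.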