99 research outputs found

    The role of sarcasm in hate speech. A multilingual perspective


    Automatic Identification of Misogyny in English and Italian Tweets at EVALITA 2018 with a Multilingual Hate Lexicon

    In this paper we describe our submission to the shared task on Automatic Misogyny Identification in English and Italian Tweets (AMI) organized at EVALITA 2018. Our approach is based on SVM classifiers enhanced with stylistic and lexical features. Additionally, we analyze the use of the novel HurtLex multilingual linguistic resource, developed by extending, from a computational and multilingual perspective, the Italian lexicon of hate words compiled by the linguist Tullio De Mauro, in order to investigate its impact on this task.
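    The feature design the abstract describes, lexicon lookups combined with stylistic cues feeding an SVM, can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the lexicon entries are invented placeholders (the real HurtLex resource maps words to fine-grained categories), and the feature set is a guess at the kind of stylistic signals such systems use.

    ```python
    # Hypothetical stand-in for a hate lexicon; the actual HurtLex
    # resource is far richer and multilingual.
    HATE_LEXICON = {"hag", "witch", "bimbo"}

    def tweet_features(text: str) -> list[float]:
        """Map a tweet to a small lexical + stylistic feature vector."""
        tokens = text.lower().split()
        n = max(len(tokens), 1)
        # Lexical feature: fraction of tokens that hit the lexicon.
        lexicon_hits = sum(1 for t in tokens if t.strip("#@!?.,") in HATE_LEXICON)
        return [
            lexicon_hits / n,
            # Stylistic: share of uppercase characters ("shouting").
            sum(c.isupper() for c in text) / max(len(text), 1),
            # Stylistic: exclamation marks and user mentions.
            float(text.count("!")),
            float(sum(1 for t in tokens if t.startswith("@"))),
        ]

    vec = tweet_features("You are such a WITCH!!")
    ```

    Vectors like `vec` would then be concatenated with standard bag-of-words features and passed to an SVM classifier.
    
    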

    Misogyny Detection in Social Media on the Twitter Platform

    This thesis addresses the problem of misogyny detection in social media. We analyse the difference between offensive language in general and misogynistic language in social media, and review the best existing approaches to detecting offensive and misogynistic language, based on classical machine learning and on neural networks. We also review recent shared tasks aimed at detecting misogyny in social media, several of which we participated in. We propose an approach to the detection and classification of misogyny in texts, based on an ensemble of classical machine learning models: Logistic Regression, Naive Bayes, and Support Vector Machines. At the preprocessing stage we also used linguistic features and novel approaches that allow us to improve the quality of classification. We tested the model on real datasets, both English and multilingual corpora. The results we achieved are highly competitive in this area and demonstrate the potential for future improvement.
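    An ensemble of the three model families the thesis names can be sketched with scikit-learn's soft-voting combiner. This is a hedged illustration under assumed tooling, not the thesis's actual pipeline: the toy bag-of-words counts and labels are invented, and the real system adds linguistic features at preprocessing time.

    ```python
    import numpy as np
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import SVC

    # Toy bag-of-words counts: rows are tweets, columns are vocabulary terms.
    X = np.array([[3, 0, 1], [2, 1, 0], [4, 0, 0], [3, 1, 1], [5, 0, 2],
                  [0, 3, 2], [1, 2, 3], [0, 4, 1], [1, 3, 2], [0, 2, 4]])
    y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # 1 = misogynistic (toy labels)

    ensemble = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("nb", MultinomialNB()),
            ("svm", SVC(probability=True)),  # probabilities enable soft voting
        ],
        voting="soft",  # average class probabilities across the three models
    )
    ensemble.fit(X, y)
    pred = ensemble.predict(X)
    ```

    Soft voting averages each model's class probabilities, so a confident model can outvote two uncertain ones, which is one plausible reading of how such an ensemble improves over its parts.
    
    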

    Classifying Misogynistic Tweets Using a Blended Model: the AMI Shared Task in IBEREVAL 2018

    This article describes a possible solution for the Automatic Misogyny Identification (AMI) Shared Task at IBEREVAL 2018. The proposed technique combines several simpler classifiers into one more complex blended model, which classifies the data taking into account the class-membership probabilities computed by the simpler models. We used Logistic Regression, Naive Bayes, and SVM classifiers. The experimental results show that the blended model works better than the simpler models for all three types of classification, both binomial (Misogyny Identification, Target Classification) and multinomial (Misogynistic Behaviour).
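    The blending scheme the abstract outlines, base classifiers whose predicted class probabilities feed a second-level model, might look roughly like the sketch below. Assumptions to note: scikit-learn is used for all models, the toy data is invented, and for brevity the meta-features are computed on the training set itself, whereas a proper blend would derive them from a held-out fold to avoid leakage.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import SVC

    # Toy bag-of-words counts and labels, invented for illustration.
    X = np.array([[3, 0, 1], [2, 1, 0], [4, 0, 0], [3, 1, 1], [5, 0, 2],
                  [0, 3, 2], [1, 2, 3], [0, 4, 1], [1, 3, 2], [0, 2, 4]])
    y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

    base_models = [LogisticRegression(max_iter=1000), MultinomialNB(),
                   SVC(probability=True)]
    for model in base_models:
        model.fit(X, y)

    # Each base model's P(class = 1) becomes one column of the
    # meta-feature matrix for the second-level classifier.
    meta_X = np.column_stack([m.predict_proba(X)[:, 1] for m in base_models])
    meta = LogisticRegression(max_iter=1000).fit(meta_X, y)
    pred = meta.predict(meta_X)
    ```

    Unlike plain probability averaging, the meta-classifier learns how much to trust each base model, which matches the article's claim that the blend beats every individual classifier.
    
    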

    Online Hate Speech against Women: Automatic Identification of Misogyny and Sexism on Twitter

    [EN] Patriarchal behavior, like other social habits, has been transferred online, appearing as misogynistic and sexist comments, posts, or tweets. This online hate speech against women has serious consequences in real life, and recently various legal cases have arisen against social platforms that scarcely block the spread of hate messages towards individuals. In this difficult context, this paper presents an approach that is able to detect the two sides of patriarchal behavior, misogyny and sexism, analyzing three collections of English tweets and obtaining promising results.
    The work of Simona Frenda and Paolo Rosso was partially funded by the Spanish MINECO under the research project SomEMBED (TIN2015-71147-C2-1-P). We also thank the support of CONACYT-Mexico (project FC-2410).
    Frenda, S.; Ghanem, B.; Montes-y-Gómez, M.; Rosso, P. (2019). Online Hate Speech against Women: Automatic Identification of Misogyny and Sexism on Twitter. Journal of Intelligent & Fuzzy Systems, 36(5), 4743–4752. https://doi.org/10.3233/JIFS-179023

    Overview of the Evalita 2018 task on Automatic Misogyny Identification (AMI)

    Automatic Misogyny Identification (AMI) is a new shared task proposed for the first time at the Evalita 2018 evaluation campaign. The AMI challenge, based on both Italian and English tweets, comprises two subtasks: Subtask A on misogyny identification and Subtask B on misogynistic behaviour categorization and target classification. For the Italian language, we received a total of 13 runs for Subtask A and 11 runs for Subtask B; for the English language, we received 26 submissions for Subtask A and 23 runs for Subtask B. The participating systems are distinguished according to language, counting 6 teams for Italian and 10 teams for English. We present here an overview of the AMI shared task, the datasets, the evaluation methodology, the results obtained by the participants, and a discussion of the methodologies adopted by the teams. Finally, we draw some conclusions and discuss future work.

    Misogyny Detection and Classification in English Tweets: The Experience of the ITT Team

    The problem of online misogyny and offensive content targeting women has become increasingly widespread, and the automatic detection of such messages is an urgent priority. In this paper, we present an approach based on an ensemble of Logistic Regression, Support Vector Machines, and Naïve Bayes models for the detection of misogyny in texts extracted from the Twitter platform. Our method was presented as part of our participation in the Automatic Misogyny Identification (AMI) Shared Task in the EVALITA 2018 evaluation campaign.

    Exploration of Misogyny in Spanish and English tweets

    • …