3 research outputs found

    Introduction to the Special Section on Computational Modeling and Understanding of Emotions in Conflictual Social Interactions

    The editorial work of C. Clavel for this special issue was partially supported by a grant overseen by the French National Research Agency (ANR17-MAOI) and by the European project H2020 ANIMATAS (MSCA-ITN-ETN 7659552). The editorial work of V. Patti was partially funded by Progetto di Ateneo/CSP 2016 (Immigrants, Hate and Prejudice in Social Media, S1618_L2_BOSC_01). P. Rosso was partially funded by Spanish MICINN under the research project MISMIS-FAKEnHATE on MISinformation and MIScommunication in social media: FAKE news and HATE speech (PGC2018-096212-B-C31).

    Damiano, R.; Patti, V.; Clavel, C.; Rosso, P. (2020). Introduction to the Special Section on Computational Modeling and Understanding of Emotions in Conflictual Social Interactions. ACM Transactions on Internet Technology, 20(2), 1-5. https://doi.org/10.1145/3392334

    “No Offense but Have You Considered Dieting?”: A Content Analysis of Weight Bias in Mukbanger Comments on TikTok

    This thesis examines the occurrence of weight bias in comments received by mukbang creators on the short-form video platform TikTok. Weight bias refers to negative attitudes and beliefs based on a person's size or weight; these include, for example, the objectification of larger individuals and an emphasis on personal responsibility for weight outcomes. Weight bias can have serious concrete consequences, such as reduced access to health care, interpersonal relationships, and professional opportunities, as well as a higher incidence of harassment. Social media was chosen as the research avenue because of its rising importance in modern communication, and TikTok in particular because of its rapidly growing popularity and young user base. Food was used as a framing device to collect comparable data, so all of the chosen content creators produce mukbang videos. Mukbang is an online phenomenon originating in South Korea in which content creators record themselves eating substantial amounts of energy-dense food in front of a camera. The thesis focuses on the implications of weight bias by comparing content creators who present themselves online as normative-bodied with content creators who present themselves as larger than normative. Because weight bias is known to be a gendered phenomenon, the creators were selected by size and binary gender (male and female). The data set thus comprised 600 comments in total, 150 collected from the comment section of each creator: one larger-than-normative female creator, @shirinjka; one larger-than-normative male creator, @realnikocadoavocado; one normative female creator, @keilapacheco; and one normative male creator, @stevensushi. The data were first analyzed with quantitative content analysis, using a hate speech categorization model that classifies comments as supportive, critical/hostile, or neutral, and then examined with thematic analysis to identify the themes that arise in weight bias discourse online. The results show that weight bias is present in online communication. Both male creators and the larger-than-normative female creator received a majority of critical/hostile comments, while the normative female creator received a majority of supportive comments. Within the critical/hostile category, both larger creators received more comments aimed at their personal characteristics than their normative counterparts did. Furthermore, comments on the larger-than-normative creators' videos displayed themes with weight bias implications, such as notions of self-responsibility for weight outcomes and humor that contributes to objectification. The normative male creator also received a considerable number of critical/hostile comments, which highlights the large amount of hate men receive online. He was also the only creator in the data set to receive inappropriate sexual remarks, which raises interesting questions about gendered and heteronormative representations online. Finally, the stark differences between the two female creators' comments show how weight bias can be more intense and more polarized for women online.
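    To make the quantitative content analysis step concrete, the following is a minimal Python sketch of how hand-coded comments could be tallied per creator into the supportive, critical/hostile, and neutral categories described above. The creator handles are taken from the abstract; the code, the label strings, and the placeholder data are illustrative assumptions, not the thesis's actual materials or analysis pipeline.

        from collections import Counter

        CATEGORIES = ("supportive", "critical/hostile", "neutral")

        # Placeholder hand-coded labels, one per comment; the actual study
        # coded 150 comments per creator.
        coded_comments = {
            "@shirinjka": ["critical/hostile", "supportive", "neutral"],
            "@realnikocadoavocado": ["critical/hostile", "critical/hostile", "neutral"],
            "@keilapacheco": ["supportive", "supportive", "neutral"],
            "@stevensushi": ["critical/hostile", "neutral", "supportive"],
        }

        for creator, labels in coded_comments.items():
            counts = Counter(labels)                      # tally each category
            total = sum(counts.values())
            shares = {cat: counts[cat] / total for cat in CATEGORIES}
            majority = max(CATEGORIES, key=lambda cat: counts[cat])
            print(f"{creator}: {shares} -> majority category: {majority}")

    Run on the full coded data set, a tally of this kind yields the per-creator category proportions and majority labels that the abstract reports, after which the critical/hostile comments can be read closely for thematic analysis.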