
    Online Hate Speech against Women: Automatic Identification of Misogyny and Sexism on Twitter

    Full text link
    [EN] Patriarchal behavior, like other social habits, has been transferred online, appearing as misogynistic and sexist comments, posts or tweets. This online hate speech against women has serious consequences in real life, and recently, various legal cases have arisen against social platforms that scarcely block the spread of hate messages towards individuals. In this difficult context, this paper presents an approach that is able to detect the two sides of patriarchal behavior, misogyny and sexism, by analyzing three collections of English tweets, and obtains promising results. The work of Simona Frenda and Paolo Rosso was partially funded by the Spanish MINECO under the research project SomEMBED (TIN2015-71147-C2-1-P). We also thank the support of CONACYT-Mexico (project FC-2410).
    Frenda, S.; Ghanem, B.; Montes-y-Gómez, M.; Rosso, P. (2019). Online Hate Speech against Women: Automatic Identification of Misogyny and Sexism on Twitter. Journal of Intelligent & Fuzzy Systems, 36(5), 4743–4752. https://doi.org/10.3233/JIFS-179023

    A computational approach to analyzing and detecting trans-exclusionary radical feminists (TERFs) on Twitter

    Get PDF
    Within the realm of abusive content detection for social media, little research has been conducted on the transphobic hate group known as trans-exclusionary radical feminists (TERFs). The community engages in harmful behaviors such as targeted harassment of transgender people on Twitter, and perpetuates transphobic rhetoric such as denial of trans existence under the guise of feminism. This thesis analyzes the network of the TERF community on Twitter by discovering several sub-communities and modeling the topics of their tweets. We also introduce TERFSPOT, a classifier for predicting whether a Twitter user is a TERF, based on a combination of network and textual features. The contributions of this work are twofold: we conduct the first large-scale computational analysis of the TERF hate group on Twitter, and demonstrate a classifier with 90% accuracy for identifying TERFs.
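    The abstract above describes combining network and textual features into one classifier input. A minimal, purely illustrative sketch of that idea, assuming two toy signals (the account list, term list, and feature names here are invented for illustration, not taken from the thesis):

```python
from collections import Counter

# Hypothetical topic terms; stand-ins, not the thesis's actual features.
TOPIC_TERMS = {"terf", "gendercritical", "peaktrans"}

def user_features(followed_accounts, known_community, tweets):
    """Return a tiny feature vector: [network overlap share, term rate].

    followed_accounts: accounts the user follows.
    known_community:   a set of accounts already labelled as community members.
    tweets:            the user's tweet texts.
    """
    # Network feature: share of followed accounts inside the known community.
    overlap = len(set(followed_accounts) & known_community)
    network_score = overlap / max(len(followed_accounts), 1)
    # Textual feature: rate of topic terms among the user's tokens.
    tokens = [w.lower().strip("#") for t in tweets for w in t.split()]
    counts = Counter(tokens)
    term_rate = sum(counts[w] for w in TOPIC_TERMS) / max(len(tokens), 1)
    return [network_score, term_rate]
```

    A real system would feed such vectors to a trained classifier; this sketch only shows how heterogeneous signals can share one feature space.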

    Gender inequality on Twitter during the UK election of 2019

    Get PDF
    Social media platforms such as Twitter play an essential role in politics and social movements nowadays. The aim of this paper is to compare and contrast the language used on Twitter to refer to the candidates of the last UK general election of December 2019 in order to raise awareness of gender inequality in politics. The methodology followed is based on three aspects: (a) a quantitative analysis using Sketch Engine to extract the main collocates from the corpus; (b) a sentiment analysis of the compiled tweets by means of two lexicon classifications: BING (Hu & Liu, 2004) and NRC (Mohammad & Turney, 2013), the latter of which classifies words into eight basic emotions and two sentiments (positive and negative); and (c) a qualitative analysis employing a Critical Discourse Analysis approach (Fairclough, 2013) to examine verbal abuse towards women from a linguistic perspective.
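    The lexicon-based sentiment step described in (b) can be sketched as simple dictionary lookups. The tiny word lists below are illustrative stand-ins for the BING and NRC lexicons, not the published resources:

```python
from collections import Counter

# Placeholder entries in the style of the BING polarity lexicon.
BING = {"good": "positive", "great": "positive",
        "abuse": "negative", "awful": "negative"}
# Placeholder entries in the style of the NRC word-emotion lexicon.
NRC = {"abuse": {"anger", "fear", "negative"},
       "great": {"joy", "positive"}}

def score_tweet(tweet):
    """Count polarity labels and NRC-style emotion labels in one tweet."""
    words = [w.lower() for w in tweet.split()]
    polarity = Counter(BING[w] for w in words if w in BING)
    emotions = Counter(e for w in words if w in NRC for e in NRC[w])
    return polarity, emotions
```

    Aggregating these counts over a compiled corpus gives the kind of per-candidate sentiment profile the paper compares.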

    Virtual manhood acts within social networks: The enactment of toxic masculinity on Reddit

    Get PDF
    Toxic masculinity (TM) has emerged as a label for the western hegemonic masculine ideal, which is generally defined by the pressure for boys and men to be aggressive and dominant, restrict emotional expression, and marginalize women and others that do not adhere to these values (Connell & Messerschmidt, 2005; Kupers, 2005). This phenomenon increases the risk for male identified individuals to engage in general acts of bigotry, especially gender-based violence (APA, 2018; Baugher & Gazmararian, 2015; FBI, 2007; Feder, Levant, & Dean, 2010). A contemporary touchpoint associated with proliferating TM ideologies is participation in online anonymous “toxic technoculture” social network communities (Massanari, 2017; Salter, 2018). A timely investigation was conducted by Moloney and Love (2018) that explored the enactments of masculinity in virtual online spaces. They subsequently introduced the concept of Virtual Manhood Acts (VMA) which provides a framework to understand how masculinity is observed online. VMA were characterized as behaviors enacted to maintain a heterosexist environment and to oppress women and others in virtual social spaces. Prior to this study there was limited empirical understanding of how problematic and toxic enactments of masculinity, evident in society, are also enacted in virtual spaces. This study addresses the call to investigate enactments of VMA on other online social platforms. This qualitative investigation of the enactment of VMA was conducted on Reddit, the most popular social network website and seventh most trafficked website in North America (Hardwick, 2020, May 9). Data was captured before and after two publicized mass femicide events from two Manosphere connected Reddit community forums (r/IncelsWithoutHate & r/MensRights). The identified forums have been implicated as featuring misogynistic and bigoted ideological posts. 
Data was analyzed utilizing the a priori concept of VMA and a modified constructivist grounded theory approach (Charmaz, 2014). This hybrid deductive and inductive approach allowed identifiable, novel, and divergent themes of manhood enactments to emerge. The results and implications of this study are discussed with select psychological frameworks and other fields of study in mind.

    Playing with the News on Reddit: The Politics Game on r/The_Donald

    Full text link
    Research into online forms of far-right, alt-right, populist and supremacist politics has raised questions about the extent to which social media enables or constitutes extremist affects and ideologies. Building on this research, and through a case study of how a pro-Trump community on Reddit made sense of news events and sought to contest their representation, this paper explores the relationship between games and politics, arguing that digital platforms encourage people to apprehend, interpret and contest political ideas and information as if engaged in a kind of video game. We show how the group sought to manipulate platform affordances, waging a kind of Info War rooted in an understanding of politics as a pure space of conflict. We show how social media orients people to politics, phenomenologically, through the logics, structures and narratives of online games, and argue that this affects not only online behaviours but also more general apprehensions of politics.

    The Body Politics of Data

    Get PDF
    The PhD project The Body Politics of Data is an artistic, practice-based study exploring how feminist methodologies can create new ways to conceptualise digital embodiment within the field of art and technology. As a field of practice highly influenced by scientific and technical methodologies, the discursive and artistic tools for examining data as a social concern are limited. The research draws on performance art from the 1960s, cyberfeminist practice, Object Oriented Feminism and intersectional perspectives on data to conceive of new models of practice that can account for the body political operations of extractive big data technologies and Artificial Intelligence. The research is created through a body of individual and collective experimental artistic projects featured in the solo exhibition The Body Politics of Data at London Gallery West (2020). It includes work on maternity data and predictive products in relation to reproductive health in the UK, created in collaboration with Loes Bogers (2016-2017), workshops on “bodily bureaucracies” with Autonomous Tech Fetish (2013-2016) and Accumulative Care, a feminist model of care for labouring in the age of extractive digital technologies. This research offers an embodied feminist methodology for artistic practice to become investigative of how processes of digitalisation have adverse individual and collective effects, in order to identify and resist the forms of personal and collective risk emerging with data-driven technologies.

    Automatic Misogyny Detection in Social Media: a Survey

    Get PDF
    This article presents a survey of automated misogyny identification techniques in social media, especially on Twitter. This problem is urgent because of the high speed at which messages on social platforms grow and the widespread use of offensive language (including misogynistic language) in them. In this article we survey approaches proposed in the literature to solve the problem of misogynistic message recognition. These include classical machine learning models such as Support Vector Machines, Naive Bayes and Logistic Regression, ensembles of different classical machine learning models, and deep neural networks such as Long Short-Term Memory networks and Convolutional Neural Networks. We consider the results of experiments with these models on tweets in different languages: English, Spanish and Italian. The survey describes features that help to identify misogynistic tweets and challenges that arise when building misogyny classifiers. It covers not only models that identify misogynistic language, but also systems that recognize the target of an offense (an individual or a group of persons).
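    One of the classical approaches the survey covers, Naive Bayes over bag-of-words features, can be sketched in a few lines of pure Python. This is a toy illustration with Laplace smoothing, not any surveyed system; real work would use far larger corpora and a library such as scikit-learn:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes over whitespace-tokenized bags of words."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)   # per-label word counts
        self.label_counts = Counter(labels)       # class priors (unnormalized)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        def log_prob(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            lp = math.log(self.label_counts[label])  # log prior
            for w in text.lower().split():
                # Laplace (add-one) smoothing over the shared vocabulary.
                lp += math.log((counts[w] + 1) / (total + len(self.vocab)))
            return lp
        return max(self.label_counts, key=log_prob)
```

    The same bag-of-words features feed the other classical models the survey lists (SVMs, Logistic Regression, ensembles); only the decision rule changes.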

    Exploring Misogyny across the Manosphere in Reddit

    Get PDF
    The ‘manosphere’ has been a recent subject of feminist scholarship on the web. Serious accusations have been levied against it for its role in encouraging misogyny and violent threats towards women online, as well as for potentially radicalising lonely or disenfranchised men. Feminist scholars evidence this through a shift in the language and interests of some men’s rights activists on the manosphere, away from traditional subjects of family law or mental health and towards more sexually explicit, violent, racist and homophobic language. In this paper, we study this phenomenon by investigating the flow of extreme language across seven online communities on Reddit with openly misogynistic members (e.g., Men Going Their Own Way, Involuntary Celibates), and investigate if and how misogynistic ideas spread within and across these communities. Grounded in feminist critiques of language, we created nine lexicons capturing specific misogynistic rhetoric (Physical Violence, Sexual Violence, Hostility, Patriarchy, Stoicism, Racism, Homophobia, Belittling, and Flipped Narrative) and used these lexicons to explore how language evolves within and across misogynistic groups. This analysis was conducted on 6 million posts from 300K conversations created between 2011 and December 2018. Our results show increasing amounts of misogynistic content and users, as well as violent attitudes, corroborating existing theories of feminist studies that misogyny, hostility and violence are steadily increasing in the manosphere.
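    The core measurement in the abstract above, tracking what share of posts match each rhetoric lexicon over time, can be sketched as set intersections per year. The two tiny lexicons below are illustrative placeholders, not the paper's nine published lexicons:

```python
from collections import defaultdict

# Placeholder lexicons; the paper's actual nine lexicons are much larger.
LEXICONS = {
    "Hostility": {"hate", "despise"},
    "Belittling": {"stupid", "worthless"},
}

def lexicon_prevalence(posts):
    """posts: list of (year, text) pairs.

    Returns {lexicon_name: {year: share of that year's posts matching}}.
    """
    by_year = defaultdict(list)
    for year, text in posts:
        by_year[year].append(set(text.lower().split()))
    result = {}
    for name, terms in LEXICONS.items():
        result[name] = {
            year: sum(1 for toks in docs if toks & terms) / len(docs)
            for year, docs in by_year.items()
        }
    return result
```

    Comparing these yearly shares across communities is what lets such a study speak of misogynistic language "flowing" and increasing over time.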