Misogynoir: Public Online Response Towards Self-Reported Misogynoir
“Misogynoir” refers to the specific forms of misogyny that Black women experience, which couple racism and sexism together. To better understand the online manifestations of this type of hate, and to propose methods that can automatically identify it, in this paper we conduct a study of four cases of Black women in tech reporting experiences of misogynoir on the Twitter platform. We follow the reactions to these cases (both supportive and non-supportive responses) and categorise them within a model of misogynoir that highlights experiences of Tone Policing, White Centring, Racial Gaslighting and Defensiveness. Because misogynoir is an intersectional form of abusive or hateful speech, we also investigate the possibilities and challenges of detecting online instances of it in an automated way. We then conduct a closer qualitative analysis of messages of support and non-support to examine some of these categories in more detail. The purpose of this investigation is to understand responses to misogynoir online, including doubling down on misogynoir, engaging in performative allyship, and showing solidarity with Black women in tech.
Misogynoir: Challenges in Detecting Intersectional Hate
"Misogynoir" is a term that refers to the anti-Black forms of misogyny that Black women experience. To explore how current automated hate speech detection approaches perform in detecting this type of hate, we evaluated the performance of two state-of-the-art detection tools, HateSonar and Google's Perspective API, on a balanced dataset of 300 tweets, half of which are examples of misogynoir and half of which are examples of supporting Black women and an imbalanced dataset of 3138 tweets of which 162 tweets are examples of misogynoir and 2976 tweets are examples of allyship tweets. We aim to determine if these tools flag these messages under any of their classifications of hateful speech (e.g. "hate speech'', "offensive language", "toxicity'' etc.).
Close analysis of the classifications and errors shows that current hate speech detection tools are ineffective in detecting misogynoir. They lack sensitivity to context, which is an essential component for misogynoir detection. We found that tweets likely to be classified as hate speech explicitly reference racism or sexism or use profane or aggressive words. Subtle tweets without references to these topics are more challenging to classify. We find that the lack of sensitivity to context may make such tools not only ineffective but potentially harmful to Black women
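The evaluation described above relies on off-the-shelf classifiers. Below is a minimal sketch of how such tools can be queried on a single tweet; it assumes a valid Perspective API key and the open-source hatesonar package, and illustrates the general approach rather than the authors' exact pipeline.

```python
# Minimal sketch of querying two off-the-shelf hate speech classifiers on a tweet.
# Assumptions (not from the paper): a valid Perspective API key and the `hatesonar`
# package; the example tweet text is a placeholder, not from the study's dataset.
from googleapiclient import discovery
from hatesonar import Sonar

PERSPECTIVE_API_KEY = "YOUR_API_KEY"  # placeholder

def perspective_toxicity(text: str) -> float:
    """Return Perspective's TOXICITY score (0-1) for a piece of text."""
    client = discovery.build(
        "commentanalyzer",
        "v1alpha1",
        developerKey=PERSPECTIVE_API_KEY,
        discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
        static_discovery=False,
    )
    request = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = client.comments().analyze(body=request).execute()
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def hatesonar_label(text: str) -> str:
    """Return HateSonar's top class: hate_speech, offensive_language or neither."""
    sonar = Sonar()
    return sonar.ping(text=text)["top_class"]

if __name__ == "__main__":
    tweet = "example tweet text"  # placeholder
    print("Perspective toxicity:", perspective_toxicity(tweet))
    print("HateSonar label:", hatesonar_label(tweet))
```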
Towards a Positioning Model for Evaluating the Use and Design of Anti-Disinformation Tools
With the increasing amounts of mis- and disinformation circulating online, the demand for tools to combat and contain the phenomenon has also increased. The multifaceted nature of the phenomenon requires a set of tools that can respond effectively and deal with the different ways in which disinformation can present itself. In this paper, after consulting independent fact-checkers to create a list, we map the landscape of tools available to combat different typologies of mis- and disinformation on the basis of three levels of analysis: the employment of policy-regulated strategies, the use of co-creation, and the preference for manual or automated processes of detection. We then create a model in which we position the different tools across three axes of analysis, and show how the tools distribute across different market positions.
A Conceptual Model for Approaching the Design of Anti-disinformation Tools
With the increasing amounts of mis- and disinformation circulating online, the demand for tools to combat and contain the phenomenon has also increased. The multifaceted nature of the phenomenon requires a set of tools that can respond effectively and deal with the different ways in which disinformation can present itself, such as text, images, and videos, the agents responsible for spreading it, and the various platforms on which incorrect information is prevalent. In this paper, after consulting independent fact-checkers to create a list, we map the landscape of the best-known tools available to combat different typologies of mis- and disinformation on the basis of three levels of analysis: the employment of policy-regulated strategies, the use of co-creation, and the preference for manual or automated processes of detection. We then create a model in which we position the different tools across three axes of analysis, and show how the tools distribute across different market positions. The most crowded positions are characterized by tools that employ automated processes of detection, varying degrees of policy implementation, and low levels of co-creation, but there is an opening for newly developed tools that score high across all three axes. Co-creative efforts to address mis- and disinformation could indeed be an effective way to cater to users' needs and to respond to the amount and variety of mis- and disinformation spreading online.
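As a rough illustration of the kind of positioning model described, each tool can be represented as a point along the three axes named in the abstract. The sketch below is hypothetical: the 0-1 scoring scale, the thresholds, and the example tool names are assumptions for illustration, not taken from the paper's survey of fact-checkers.

```python
# Illustrative sketch of positioning anti-disinformation tools along three axes.
# Scores (0-1), thresholds, and example entries are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ToolPosition:
    name: str
    policy_regulation: float  # reliance on policy-regulated strategies (0-1)
    co_creation: float        # degree of co-creation with users/stakeholders (0-1)
    automation: float         # 0 = fully manual detection, 1 = fully automated

def market_position(tool: ToolPosition) -> str:
    """Coarsely bucket a tool by thresholding each axis at 0.5."""
    return (
        f"policy:{'high' if tool.policy_regulation >= 0.5 else 'low'} / "
        f"co-creation:{'high' if tool.co_creation >= 0.5 else 'low'} / "
        f"detection:{'automated' if tool.automation >= 0.5 else 'manual'}"
    )

tools = [
    ToolPosition("ExampleFactCheckBot", 0.4, 0.1, 0.9),   # hypothetical
    ToolPosition("ExampleCrowdVerifier", 0.6, 0.8, 0.3),  # hypothetical
]
for t in tools:
    print(t.name, "->", market_position(t))
```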
Understanding the Role of Human Values in the Spread of Misinformation
Social media platforms are often implicated in the spread of misinformation for encouraging the behaviour of rapid sharing without adequate mechanisms for verifying information. To counter this phenomenon, much related research in computer science has focused on developing tools to detect misinformation, rank fact-check-worthy claims, and understand their spread patterns, while psychosocial approaches have focused on understanding information literacy, ideology and partisanship. In this paper, we demonstrate through a survey of nearly 100 people that human values can have a significant influence on the way people perceive and share information. We argue that integrating a values-oriented perspective into computational approaches for handling misinformation could encourage misinformation prevention, and assist in predicting and ranking misinformation.
Social Media Use, Trust and Technology Acceptance: Investigating the Effectiveness of a Co-Created Browser Plugin in Mitigating the Spread of Misinformation on Social Media
Social media have become online spaces where misinformation abounds and spreads virally in the absence of professional gatekeeping. This information landscape requires everyday citizens, who rely on these technologies to access information, to cede control of information. This work sought to examine whether that control can be regained by humans with the support of a co-created browser plugin, which integrated credibility labels and nudges, and was informed by artificial intelligence models and rule engines. Given the literature on the complexity of information evaluation on social media, we investigated the role of technological, situational and individual characteristics in "liking" or "sharing" misinformation. We adopted a mixed-methods research design with 80 participants from four European sites, who viewed a curated timeline of credible and non-credible posts on Twitter, either with (n=40) or without (n=40) the presence of the plugin. The role of the technological intervention was important: the absence of the plugin strongly correlated with misinformation endorsement (via "liking"). Trust in the technology and technology acceptance were correlated and emerged as important situational characteristics, with participants with higher trust profiles being less likely to share misinformation. Findings on individual characteristics indicated that only social media use was a significant predictor of trusting the plugin. This work extends ongoing research on deterring the spread of misinformation by situating the findings in an authentic social media environment using a co-created technological intervention. It holds implications for how to support a misinformation-resilient citizenry with the use of artificial intelligence-driven tools.
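The reported association between the plugin's absence and misinformation endorsement is, in essence, a test of independence between two categorical variables (plugin condition and "liking" behaviour). The sketch below shows how such a test could be run; the 2x2 counts are placeholders chosen for illustration only and are not the study's data.

```python
# Sketch of testing the association between plugin condition (present/absent)
# and endorsement of non-credible posts via "liking". The counts below are
# placeholders for illustration, NOT the study's results.
from scipy.stats import chi2_contingency

#            liked misinfo   did not like
table = [
    [5, 35],   # plugin present (hypothetical counts)
    [20, 20],  # plugin absent  (hypothetical counts)
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
# A small p-value would indicate that endorsement of misinformation is not
# independent of whether the plugin was present.
```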
Combating misinformation online: re-imagining social media for policy-making
Social media have created communication channels between citizens and policymakers but are also susceptible to rampant misinformation. This new context demands new social media policies that can aid policymakers in making evidence-based decisions for combating misinformation online. This paper reports on data collected from policymakers in Austria, Greece, and Sweden, using focus groups and in-depth interviews. Analyses provide insights into challenges and identify four important themes for supporting policy-making to combat misinformation: a) creating a trusted network of experts and collaborators, b) facilitating the validation of online information, c) providing access to visualisations of data at different levels of granularity, and d) increasing the transparency and explainability of flagged misinformative content. These recommendations have implications for rethinking how revised social media policies can contribute to evidence-based decision-making.