
    Narratives of solidarity, outrage and hatred towards LGTBQI+ people in the digital society

    This paper was presented at the 4th International Conference ILIS – International Lab for Innovative Social Research, June 8-9, 2023 [https://www.labilis.org/2023/04/22/4th-international-conference-ilis/]. In this communication, we explore several viral, widely shared case studies in which LGBTIQ+ people receive support or rejection on social media. We aim to identify the main discourses, networks, and communities for and against LGBTIQ+ people, and to look for patterns in communication when support or hate is produced. The paper is part of the I+D+i Project titled "Conspiracy Theories and Hate Speech Online: Comparison of Patterns in Narratives and social networks about COVID-19, immigrants, refugees, and LGBTI people [NON-CONSPIRA-HATE!]", PID2021-123983OB-I00, funded by MCIN/AEI/10.13039/501100011033/ and by "ERDF A way of making Europe" (https://eseis.es/investigacion/discursos-de-odio/discursos-odio-tc). It has also been made possible by MCIN/AEI/10.13039/501100011033 and by "ESF Investing in your future" in Spain, which fund a Predoctoral Grant for University Teacher Training (FPU20/02848). We also thank the Ministry of Universities, which finances a Margarita Salas Grant to train young PhDs, funded by the European Union-NextGenerationEU. We are also grateful for the support of our research group, "Estudios Sociales E Intervención Social" (GrupoESEIS), and the research center "Pensamiento Contemporáneo e Innovación para el Desarrollo Social" (COIDESO), both of the University of Huelva.

    Like trainer, like bot? Inheritance of bias in algorithmic content moderation

    The internet has become a central medium through which 'networked publics' express their opinions and engage in debate. Offensive comments and personal attacks can inhibit participation in these spaces. Automated content moderation aims to overcome this problem using machine learning classifiers trained on large corpora of texts manually annotated for offence. While such systems could help encourage more civil debate, they must navigate inherently normatively contestable boundaries, and are subject to the idiosyncratic norms of the human raters who provide the training data. An important objective for platforms implementing such measures might be to ensure that they are not unduly biased towards or against particular norms of offence. This paper provides some exploratory methods by which the normative biases of algorithmic content moderation systems can be measured, by way of a case study using an existing dataset of comments labelled for offence. We train classifiers on comments labelled by different demographic subsets (men and women) to understand how differences in conceptions of offence between these groups might affect the performance of the resulting models on various test sets. We conclude by discussing some of the ethical choices facing the implementers of algorithmic moderation systems, given various desired levels of diversity of viewpoints amongst discussion participants. Comment: 12 pages, 3 figures, 9th International Conference on Social Informatics (SocInfo 2017), Oxford, UK, 13-15 September 2017 (forthcoming in Springer Lecture Notes in Computer Science).
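    The subset-training comparison described above can be sketched in a few lines. The following is a minimal, purely illustrative toy: the comments, labels, rater groups, and the tiny Naive Bayes classifier are hypothetical stand-ins, not the paper's actual corpus or models. It shows the general shape of the method: train one classifier per annotator group, then measure how often the resulting models disagree on a shared test set.

    ```python
    import math
    from collections import Counter

    def train_nb(examples):
        """Train a tiny bag-of-words Naive Bayes on (text, label) pairs.
        Toy convention: label 1 = offensive, label 0 = not offensive."""
        word_counts = {0: Counter(), 1: Counter()}
        label_counts = Counter()
        for text, label in examples:
            label_counts[label] += 1
            word_counts[label].update(text.lower().split())
        return word_counts, label_counts

    def predict(model, text):
        """Return the most likely label under add-one smoothing."""
        word_counts, label_counts = model
        total = sum(label_counts.values())
        vocab = set(word_counts[0]) | set(word_counts[1])
        best_label, best_lp = 0, -math.inf
        for label in (0, 1):
            lp = math.log(label_counts[label] / total)
            denom = sum(word_counts[label].values()) + len(vocab)
            for w in text.lower().split():
                lp += math.log((word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best_label, best_lp = label, lp
        return best_label

    # Hypothetical annotations from two rater groups; group B additionally
    # marks dismissive language ("dumb") as offensive, so the two trained
    # models inherit different norms of offence.
    group_a = [("you are an idiot", 1), ("what an idiot take", 1),
               ("great point thanks", 0), ("i agree completely", 0)]
    group_b = group_a + [("that is so dumb", 1), ("so dumb and pointless", 1)]

    model_a, model_b = train_nb(group_a), train_nb(group_b)

    # Disagreement on a shared test set is a crude proxy for the normative
    # bias each model inherited from its annotator group.
    test_set = ["you are an idiot", "that sounds dumb", "great point"]
    disagreement = sum(predict(model_a, t) != predict(model_b, t)
                       for t in test_set) / len(test_set)
    ```

    In practice the study works with a large annotated corpus and standard classifiers; the point of the sketch is only that "which raters labelled the training data" is itself a model parameter whose effect can be measured.
    
    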

    LGBTIQ+, conspiracy theories and narratives of support and hate online: from streets to screens

    This paper was presented at the International Conference "Genders, Sexualities and Diversities" (Universidad de Huelva, April 19, 2023). In this communication, we explore, through different case studies, how LGBTIQ+ people are sometimes supported in the context of various events, demonstrations, or mobilizations in the streets (as in the case of International LGBT+ Pride Day or International Women's Day). At other times, LGBTIQ+ people are the recipients of online hate speech or of conspiracy theories in other scenarios, for example in films, series, or video games. Online narratives about LGBTIQ+ people can be found across different social networks, ranging from the more positive pole (support, solidarity, or even outrage when LGBTIQ+ people are rejected or are victims of hate crime) to the more negative one (dissemination of hate, fake news, or conspiracy theories). These narratives take place in different contexts: from streets to screens. The paper is part of the I+D+i Project titled "Conspiracy Theories and Hate Speech Online: Comparison of Patterns in Narratives and social networks about COVID-19, immigrants, refugees, and LGBTI people [NON-CONSPIRA-HATE!]", PID2021-123983OB-I00, funded by MCIN/AEI/10.13039/501100011033/ and by "ERDF A way of making Europe" (https://eseis.es/investigacion/discursos-de-odio/discursos-odio-tc). It has also been made possible by MCIN/AEI/10.13039/501100011033 and by "ESF Investing in your future" in Spain, which fund a Predoctoral Grant for University Teacher Training (FPU20/02848). We are also grateful for the support of our research group, "Estudios Sociales E Intervención Social" (GrupoESEIS), and the research center "Pensamiento Contemporáneo e Innovación para el Desarrollo Social" (COIDESO), both of the University of Huelva.

    Online civic intervention: A new form of political participation under conditions of a disruptive online discourse

    In the everyday practice of online communication, we observe users deliberately reporting abusive content or opposing hate speech through counterspeech, while at the same time, online platforms are increasingly relying on and supporting this kind of user action to fight disruptive online behavior. We refer to this type of user engagement as online civic intervention (OCI) and regard it as a new form of user-based political participation in the digital sphere that contributes to an accessible and reasoned public discourse. Because OCI has received little scholarly attention thus far, this article conceptualizes low- and high-threshold types of OCI as different kinds of user responses to common disruptive online behavior such as hate speech or hostility toward the media. Against the background of participation research, we propose a theoretically grounded individual-level model that serves to explain OCI.

    Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture

    Scholars and practitioners across domains are increasingly concerned with algorithmic transparency and opacity, interrogating the values and assumptions embedded in automated, black-boxed systems, particularly in user-generated content platforms. I report from an ethnography of infrastructure in Wikipedia to discuss an often understudied aspect of this topic: the local, contextual, learned expertise involved in participating in a highly automated sociotechnical environment. Today, the organizational culture of Wikipedia is deeply intertwined with various data-driven algorithmic systems, which Wikipedians rely on to help manage and govern the "anyone can edit" encyclopedia at a massive scale. These bots, scripts, tools, plugins, and dashboards make Wikipedia more efficient for those who know how to work with them, but like all organizational culture, newcomers must learn them if they want to fully participate. I illustrate how cultural and organizational expertise is enacted around algorithmic agents by discussing two autoethnographic vignettes, which relate my personal experience as a veteran in Wikipedia. I present thick descriptions of how governance and gatekeeping practices are articulated through and in alignment with these automated infrastructures. Over the past 15 years, Wikipedian veterans and administrators have made specific decisions to support administrative and editorial workflows with automation in particular ways and not others. I use these cases of Wikipedia's bot-supported bureaucracy to discuss several issues in the fields of critical algorithms studies, critical data studies, and fairness, accountability, and transparency in machine learning, most principally arguing that scholarship and practice must go beyond trying to "open up the black box" of such systems and also examine sociocultural processes like newcomer socialization. Comment: 14 pages, typo fixed in v

    Policing virtual spaces: public and private online challenges in a legal perspective

    The chapter concerns the public and private policing of online platforms and the current challenges in terms of legislation, policing practices, and the Dark Web.