
    ‘Welcome to #GabFam’: Far-right virtual community on Gab.

    With large social media platforms coming under increasing pressure to deplatform far-right users, the Alternative Technology movement (Alt-Tech) emerged as a new digital support infrastructure for the far right. We conduct a qualitative analysis of the prominent Alt-Tech platform Gab, a social networking service primarily modelled on Twitter, to assess the far-right virtual community on the platform. We find Gab’s technological affordances – including its lack of content moderation, culture of anonymity, microblogging architecture and funding model – have fostered an ideologically eclectic far-right community united by fears of persecution at the hands of ‘Big Tech’. We argue that this points to the emergence of a novel techno-social victimology as an axis of far-right virtual community, wherein shared experiences or fears of being deplatformed facilitate a coalescing of assorted far-right tendencies online.

    Withdrawal to the shadows: dark social media as opportunity structures for extremism

    Dark social media has been described as a home base for extremists and a breeding ground for dark participation. Beyond the description of single cases, however, it often remains unclear what exactly is meant by dark social media and which opportunity structures for extremism emerge on these applications. The current paper contributes to filling this gap. We present a theoretical framework conceptualizing dark social media as opportunity structures shaped by (a) regulation on the macro level; (b) different genres and types of (dark) social media as influence factors on the meso level; and (c) individual attitudes, salient norms, and technological affordances on the micro level. The results of a platform analysis and a scoping review identified meaningful differences between dark social media of different types. In particular, social counter-media and fringe communities positioned themselves as "safe havens" for dark participation, indicating a high tolerance for such content. This makes them fertile ground for those spreading extremist worldviews, consuming such content, or engaging in dark participation. Context-bound alternative social media were comparable to mainstream social media but oriented towards different legal spaces and were more intertwined with the governments of China and Russia. Private-first channels such as instant messengers were rooted in private communication, yet Telegram in particular also offered far-reaching public communication formats and ample opportunities for the convergence of mass, group, and interpersonal communication. Overall, we show that a closer examination of different types and genres of social media provides a more nuanced understanding of shifting opportunity structures for extremism in the digital realm.

    Analyzing Norm Violations in Live-Stream Chat

    Toxic language, such as hate speech, can deter users from participating in online communities and enjoying popular platforms. Previous approaches to detecting toxic language and norm violations have been primarily concerned with conversations from online forums and social media, such as Reddit and Twitter. These approaches are less effective when applied to conversations on live-streaming platforms, such as Twitch and YouTube Live, as each comment is only visible for a limited time and lacks a thread structure that establishes its relationship with other comments. In this work, we share the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms. We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch. We articulate several facets of live-stream data that differ from other forums, and demonstrate that existing models perform poorly in this setting. By conducting a user study, we identify the informational context humans use in live-stream moderation, and train models leveraging context to identify norm violations. Our results show that appropriate contextual information can boost moderation performance by 35%.
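
    The abstract describes training models that use the surrounding chat as context. As a rough illustration of that idea only (not the authors' models, data, or annotation scheme), the hypothetical sketch below pairs each comment with a small window of preceding messages before feeding it to a simple TF-IDF classifier; the chat log, labels, and classifier are invented stand-ins for the annotated Twitch data.

    ```python
    # Hypothetical sketch of context-augmented classification, not the paper's model.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def with_context(chat, index, window=3):
        """Prepend the `window` messages preceding chat[index] to the target comment."""
        context = " ".join(chat[max(0, index - window):index])
        return f"{context} [SEP] {chat[index]}"

    # Toy data: 1 = norm violation, 0 = acceptable (for illustration only).
    chat = ["gg everyone", "nice clutch", "go back to your country",
            "lol", "that play was trash"]
    labels = [0, 0, 1, 0, 0]

    examples = [with_context(chat, i) for i in range(len(chat))]
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(examples, labels)

    # Classify a comment together with the messages that preceded it.
    print(model.predict([with_context(chat, 2)]))
    ```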

    Does Platform Migration Compromise Content Moderation? Evidence from r/The_Donald and r/Incels

    When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated website. Previous work suggests that, within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of user base and activity on their new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform the design and deployment of such measures.
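
    As a rough illustration of the kind of pre/post comparison the study performs at much larger scale, the hypothetical sketch below computes posting rate and the number of active users before and after a ban date; the data frame, column names, and ban date are invented for the example and do not come from the paper's datasets.

    ```python
    # Hypothetical data and column names; the real study analyzes large crawls of
    # r/The_Donald and r/Incels and their standalone successor sites.
    import pandas as pd

    posts = pd.DataFrame({
        "author": ["a", "b", "a", "c", "a", "b"],
        "date": pd.to_datetime(["2019-11-01", "2019-11-02", "2019-11-20",
                                "2019-12-05", "2019-12-06", "2019-12-20"]),
    })
    ban_date = pd.Timestamp("2019-12-01")  # invented moderation date

    for label, period in [("before ban", posts[posts.date < ban_date]),
                          ("after ban", posts[posts.date >= ban_date])]:
        span_days = max((period.date.max() - period.date.min()).days, 1)
        print(f"{label}: {len(period) / span_days:.2f} posts/day, "
              f"{period.author.nunique()} active users")
    ```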

    #Scamdemic, #Plandemic, or #Scaredemic: What Parler Social Media Platform Tells Us about COVID-19 Vaccine

    This study aims to understand public discussions regarding the COVID-19 vaccine on Parler, a newer social media platform that has recently gained in popularity. Through analyzing a random sample (n = 400) of Parler posts using the hashtags #COVID19Vaccine and #NoCovidVaccine, we use the concept of echo chambers to understand users’ discussions through a text analytics approach. Thematic analysis reveals five key themes: reasons to refuse the COVID-19 vaccine (40%), side effects of the COVID-19 vaccine (28%), population control through the COVID-19 vaccine (23%), children getting vaccinated without parental consent (5%), and comparison of other health issues with COVID-19 (2%). Textual analysis shows that the most frequently used words in the corpus were: nocovidvaccine (348); vaccine (264); covid (184); covid19 (157); and vaccines (128). These findings suggest that users adopted different terms and hashtags to express their beliefs regarding the COVID-19 vaccine. Further, findings revealed that users used certain hashtags, such as “echo”, to encourage like-minded people to reinforce their existing beliefs about COVID-19 vaccine efficacy and acceptance. These findings have implications for public health communication in attempts to correct false narratives on social media platforms. Widely sharing the scientific findings of COVID-19 vaccine-related studies can help individuals accurately understand the efficacy of COVID-19 vaccines.
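
    For illustration only, the sketch below reproduces the style of term-frequency count the study reports (e.g. how often "vaccine" or "nocovidvaccine" appears), using a handful of invented posts rather than the actual Parler sample.

    ```python
    # Toy posts, invented for illustration; not drawn from the Parler sample.
    import re
    from collections import Counter

    posts = [
        "#NoCovidVaccine they cannot force us #echo",
        "The covid vaccine is population control #NoCovidVaccine",
        "Side effects of the covid19 vaccine are being hidden",
    ]

    tokens = []
    for post in posts:
        # Lowercase and strip '#' and punctuation so hashtags count as plain terms.
        tokens.extend(re.findall(r"[a-z0-9]+", post.lower()))

    print(Counter(tokens).most_common(5))
    ```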

    Analyzing Social Media for Measuring Public Attitudes towards Controversies and their Driving Factors - A Case Study of Migration

    Among other ways of expressing opinions online, such as blogs and forums, social media (such as Twitter) has become one of the most widely used channels through which people express their views. With increasing interest in the topic of migration in Europe, it is important to process and analyze these opinions. To this end, this study aims at measuring public attitudes toward migration, in terms of sentiment and hate speech, from a large number of tweets crawled on the divisive topic of migration. This study introduces a knowledge base (KB) of anonymized migration-related annotated tweets, termed MigrationsKB (MGKB). Tweets from 2013 to July 2021 in the European countries that host immigrants are collected, pre-processed, and filtered using advanced topic modeling techniques. BERT-based entity linking and sentiment analysis, complemented by attention-based hate speech detection, are performed to annotate the curated tweets. Moreover, external databases are used to identify the potential social and economic factors causing negative public attitudes toward migration. The analysis aligns with the hypothesis that countries with more migrants have fewer negative and hateful tweets. To further promote research in the interdisciplinary fields of social sciences and computer science, the outcomes are integrated into MGKB, which significantly extends the existing ontology to cover public attitudes toward migration and economic indicators. This study further discusses the use cases and exploitation of MGKB. Finally, MGKB is made publicly available, fully supporting the FAIR principles.
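
    As a hedged sketch of just the sentiment-annotation step (the paper's pipeline additionally performs topic-model filtering, BERT-based entity linking, and attention-based hate speech detection, none of which is reproduced here), tweet text could be labelled with a generic transformer sentiment model; the example tweets are invented and the default pipeline model is a stand-in for the authors' models.

    ```python
    # Generic sentiment annotation via the Hugging Face pipeline API; the default
    # English model is a stand-in, not the model used to build MigrationsKB.
    from transformers import pipeline

    sentiment = pipeline("sentiment-analysis")  # downloads a default English model

    tweets = [  # invented examples, not from the MGKB corpus
        "Refugees enrich our communities and our economy.",
        "Migration policy is a disaster and nobody is listening to us.",
    ]

    for tweet, result in zip(tweets, sentiment(tweets)):
        print(f"{result['label']:>8}  {result['score']:.2f}  {tweet}")
    ```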
