    Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels

    When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated websites. Previous work suggests that within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of their user base and activity on the new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation measures and can inform their design and deployment.
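
    The before/after activity comparison this study describes (posts, active users, newcomers around a ban date) can be illustrated with a short sketch. This is a minimal illustration, not the authors' pipeline: the column names, the monthly bucketing, and the newcomer definition are assumptions.

    ```python
    # Sketch: monthly posting activity around a ban/migration date.
    # Assumes a DataFrame of posts with 'author' and 'created_utc'
    # (datetime64) columns; these names are illustrative, not the paper's.
    import pandas as pd

    def activity_metrics(posts: pd.DataFrame, ban_date: str) -> pd.DataFrame:
        posts = posts.copy()
        posts["month"] = posts["created_utc"].dt.to_period("M")
        # A newcomer is an author whose first-ever post falls in that month.
        first_seen = posts.groupby("author")["month"].min()
        monthly = posts.groupby("month").agg(
            n_posts=("author", "size"),
            active_users=("author", "nunique"),
        )
        monthly["newcomers"] = first_seen.value_counts().reindex(
            monthly.index, fill_value=0)
        monthly["post_ban"] = monthly.index.to_timestamp() >= pd.Timestamp(ban_date)
        return monthly  # compare the pre- and post-ban rows
    ```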

    Assessing the influence and reach of digital activity amongst far-right actors: A comparative evaluation of mainstream and ‘free speech’ social media platforms

    Mainstream social media platforms, including Twitter, Facebook, and YouTube, became more rigorous at the end of 2020 in implementing content moderation measures in an effort to combat false information related to the Covid-19 virus and election security in the United States of America. Some users viewed these measures as hostile towards various ideologies, prompting them to adopt alternative platforms for viewing and disseminating content (Abril, 2021; Daly and Fischer, 2021). In 2020, the US Department of Homeland Security identified white supremacist extremists (WSE) as “the most persistent and lethal threat in the homeland” (DHS, 2020, p. 17). WSE disseminate their messages to a broader public by stoking grievances about race, immigration, multiculturalism, and police-related policy issues, while also coordinating with networks of similar groups to carry their messages further (DHS, 2020). Current research lacks an understanding of the role these alternative platforms play in shaping, disseminating, and amplifying extremist messages. This study utilized socio-computational methods to compare and contrast user behavior on the mainstream platforms, YouTube and Twitter, with the alternative social media platform Parler, during the two months before and after the January 6th, 2021 U.S. Capitol attack. Toxicity assessment, topic stream analysis, social network analysis, and social cyber forensic analysis helped identify key far-right actors and illuminated cyber coordination and mobilization within these social networks. The findings reveal some surprising insights, including that both toxicity and posting activity were far greater on the mainstream platforms and that Parler displayed an extremely high rate of cross-media and cross-platform posting.
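
    The cross-platform toxicity comparison can be sketched as follows. The abstract does not name the toxicity scorer used, so the open-source Detoxify model serves as a stand-in here; the 'platform' and 'text' column names are likewise assumptions.

    ```python
    # Sketch: mean toxicity per platform (e.g. Twitter vs. YouTube vs.
    # Parler). Detoxify is a stand-in for whatever scorer the study used.
    import pandas as pd
    from detoxify import Detoxify  # pip install detoxify

    def mean_toxicity_by_platform(posts: pd.DataFrame) -> pd.Series:
        model = Detoxify("original")
        posts = posts.copy()
        posts["toxicity"] = model.predict(posts["text"].tolist())["toxicity"]
        return posts.groupby("platform")["toxicity"].mean()
    ```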

    Radicalization and Recruitment Online: An Analysis of Alt-Right Online Extremist Groups in the United States

    Radicalization studies generally focus on jihadist terrorism, but this study synthesizes general radicalization and social network theories to identify indoctrination tactics used in alt-right online social forums and the role of perceived oppression during indoctrination. Data were gathered using digital ethnography, and content and social network analysis were used to analyze thematic categories and participants’ social ties. Alt-right online indoctrination generally subscribes to stepwise radicalization theories contingent on the Internet’s fluid infrastructure, and perceived oppression catalyzes a cycle of victimization, violence, and enlightenment. Findings imply the need for education that critiques systems rather than individuals and provides effective media literacy.
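
    The social network analysis of participants' ties could be illustrated with a minimal reply-graph sketch; the edge definition (who replied to whom) and the centrality measure are assumptions, not the study's coding scheme.

    ```python
    # Sketch: build a directed reply graph from forum interactions and
    # rank participants by centrality. Edge definition is an assumption.
    import networkx as nx

    def build_reply_graph(replies):
        """replies: iterable of (author, replied_to_author) pairs."""
        g = nx.DiGraph()
        for src, dst in replies:
            if g.has_edge(src, dst):
                g[src][dst]["weight"] += 1
            else:
                g.add_edge(src, dst, weight=1)
        return g

    g = build_reply_graph([("a", "b"), ("c", "b"), ("b", "a")])
    centrality = nx.degree_centrality(g)  # rough proxy for forum prominence
    ```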

    Us against the World: Detection of Radical Language in Online Platforms

    In this paper, we investigate whether radical comments in an online social network can be detected automatically. We used comments from 6 subreddits, 3 of which are considered radical and 3 non-radical. Using various structural features of the texts in the comments, we obtained an F1-score of 91% with an SVM with a linear kernel and a precision of almost 98% with a Random Forest.
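
    A minimal reconstruction of this setup in scikit-learn, with TF-IDF features standing in for the paper's structural text features (the exact feature set is not specified in the abstract):

    ```python
    # Sketch: linear-kernel SVM and Random Forest over comment text,
    # scored with F1 and precision as in the abstract. TF-IDF is a
    # placeholder for the paper's structural features.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import f1_score, precision_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def evaluate(comments, labels):  # labels: 1 = radical, 0 = non-radical
        X_tr, X_te, y_tr, y_te = train_test_split(
            comments, labels, test_size=0.2, stratify=labels, random_state=0)
        svm = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
        rf = make_pipeline(TfidfVectorizer(),
                           RandomForestClassifier(n_estimators=300))
        svm.fit(X_tr, y_tr)
        rf.fit(X_tr, y_tr)
        return {"svm_f1": f1_score(y_te, svm.predict(X_te)),
                "rf_precision": precision_score(y_te, rf.predict(X_te))}
    ```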

    Understanding Online Migration Decisions Following the Banning of Radical Communities

    The proliferation of radical online communities and their violent offshoots has sparked great societal concern. However, the current practice of banning such communities from mainstream platforms has unintended consequences: (i) the further radicalization of their members on the fringe platforms to which they migrate; and (ii) the spillover of harmful content from fringe platforms back onto mainstream ones. Here, in a large observational study on two banned subreddits, r/The_Donald and r/fatpeoplehate, we examine how factors associated with the RECRO radicalization framework relate to users' migration decisions. Specifically, we quantify how these factors affect users' decisions to post on fringe platforms and, for those who do, whether they continue posting on the mainstream platform. Our results show that individual-level factors, those relating to the behavior of users, are associated with the decision to post on the fringe platform, whereas social-level factors, users' connections with the radical community, only affect the propensity to be coactive on both platforms. Overall, our findings pave the way for evidence-based moderation policies, as the decisions to migrate and remain coactive amplify the unintended consequences of community bans.
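
    The quantification described here (how individual- and social-level factors relate to a binary migration decision) is the kind of question a logistic regression answers. The sketch below uses hypothetical feature names, since the abstract does not detail the RECRO operationalization.

    ```python
    # Sketch: logistic regression of the decision to post on the fringe
    # platform. Feature names are illustrative stand-ins for the paper's
    # RECRO-derived individual- and social-level factors.
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_migration_model(users: pd.DataFrame):
        # users: one row per user, binary 'posted_on_fringe' outcome.
        model = smf.logit(
            "posted_on_fringe ~ activity_level + toxicity"   # individual-level
            " + n_community_ties + karma_from_community",    # social-level
            data=users,
        )
        return model.fit()  # .summary() shows which factor group carries signal
    ```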

    Spillover of Antisocial Behavior from Fringe Platforms: The Unintended Consequences of Community Banning

    Online platforms face pressure to keep their communities civil and respectful. Thus, the banning of problematic online communities from mainstream platforms like Reddit and Facebook is often met with enthusiastic public reactions. However, this policy can lead users to migrate to alternative fringe platforms with lower moderation standards, where antisocial behaviors like trolling and harassment are widely accepted. As users of these communities often remain coactive across mainstream and fringe platforms, antisocial behaviors may spill over onto the mainstream platform. We study this possible spillover by analyzing around 70,000 users from three banned communities that migrated to fringe platforms: r/The_Donald, r/GenderCritical, and r/Incels. Using a difference-in-differences design, we contrast coactive users with matched counterparts to estimate the causal effect of fringe platform participation on users' antisocial behavior on Reddit. Our results show that participating in the fringe communities increases users' toxicity on Reddit (as measured by Perspective API) and involvement with subreddits similar to the banned community, which often also breach platform norms. The effect intensifies with time and exposure to the fringe platform. In short, we find evidence for a spillover of antisocial behavior from fringe platforms onto Reddit via co-participation.
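
    A difference-in-differences estimate of this kind can be sketched with a two-way interaction term; the variable names below are assumptions, and the matching step that produces the control group is omitted.

    ```python
    # Sketch: DiD contrast of treated users (joined a fringe platform)
    # vs. matched controls, on Reddit toxicity. Variable names are
    # assumptions; the matching procedure itself is not shown.
    import pandas as pd
    import statsmodels.formula.api as smf

    def did_estimate(panel: pd.DataFrame) -> float:
        # panel: one row per (user, period), with binary 0/1 'treated'
        # and 'post' flags and a 'toxicity' outcome measured on Reddit.
        model = smf.ols("toxicity ~ treated * post", data=panel)
        res = model.fit(cov_type="cluster",
                        cov_kwds={"groups": panel["user"]})
        return res.params["treated:post"]  # the spillover (DiD) estimate
    ```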

    Roots of Trumpism: Homophily and Social Feedback in Donald Trump Support on Reddit

    We study the emergence of support for Donald Trump in Reddit's political discussion. With almost 800k subscribers, "r/The_Donald" is one of the largest communities on Reddit, and one of the main hubs for Trump supporters. It was created in 2015, shortly after Donald Trump began his presidential campaign. By using only data from 2012, we predict the likelihood of being a supporter of Donald Trump in 2016, the year of the last US presidential elections. To characterize the behavior of Trump supporters, we draw from three different sociological hypotheses: homophily, social influence, and social feedback. We operationalize each hypothesis as a set of features for each user, and train classifiers to predict their participation in r/The_Donald. We find that homophily-based and social feedback-based features are the most predictive signals. Conversely, we do not observe a strong impact of social influence mechanisms. We also perform an introspection of the best-performing model to build a "persona" of the typical supporter of Donald Trump on Reddit. We find evidence that the most prominent traits include a predominance of masculine interests, a conservative and libertarian political leaning, and links with politically incorrect and conspiratorial content.
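
    The hypothesis comparison (operationalizing homophily, social influence, and social feedback as feature groups and checking which best predicts participation) can be sketched as below; all feature names are hypothetical, not the paper's actual operationalization.

    ```python
    # Sketch: score each sociological hypothesis as a feature group
    # predicting later r/The_Donald participation. Feature names are
    # hypothetical placeholders.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    FEATURE_GROUPS = {
        "homophily": ["overlap_with_supporter_subreddits"],
        "social_influence": ["n_neighbors_who_joined"],
        "social_feedback": ["mean_comment_karma", "karma_trend"],
    }

    def score_hypotheses(users: pd.DataFrame, target="joined_r_the_donald"):
        scores = {}
        for name, cols in FEATURE_GROUPS.items():
            clf = LogisticRegression(max_iter=1000)
            scores[name] = cross_val_score(
                clf, users[cols], users[target], cv=5, scoring="roc_auc").mean()
        return scores  # higher AUC = more predictive hypothesis
    ```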