
    Bans vs. Warning Labels: Examining Support for Community-wide Moderation Interventions

    Social media platforms like Facebook and Reddit host thousands of independently governed online communities. These platforms sanction communities that frequently violate platform policies; however, user perceptions of such sanctions remain unclear. In a pre-registered survey conducted in the US, I explore user perceptions of content moderation for communities that frequently feature hate speech, violent content, and sexually explicit content. Two community-wide moderation interventions are tested: (1) community bans, where all community posts and access to them are removed, and (2) community warning labels, where an interstitial warning label precedes access. I examine how third-person effects and support for free speech mediate user approval of these interventions. My findings show that the presumed effect on others (PME3) is a significant predictor of support for both interventions, while free speech beliefs significantly influence participants' preference for warning labels. I discuss the implications of these results for platform governance and free speech scholarship.
    Comment: arXiv admin note: text overlap with arXiv:2301.0220
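
    The abstract's key quantitative claim (PME3 predicting support for both interventions, free-speech beliefs predicting support mainly for warning labels) could be probed with ordinary regression. The sketch below is a hedged illustration of that kind of analysis on synthetic data, not the paper's pre-registered model; every variable name, value, and coefficient is an assumption.

```python
# Illustrative sketch only: OLS regressions predicting support for each
# intervention from presumed effects on others (PME3) and free-speech beliefs.
# All variable names, data, and effect sizes are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
pme3 = rng.normal(size=n)          # presumed effect of harmful content on others
free_speech = rng.normal(size=n)   # support for free speech

df = pd.DataFrame({
    "pme3": pme3,
    "free_speech": free_speech,
    # Simulated outcomes: bans driven mainly by PME3, labels also by free-speech beliefs.
    "support_ban": 0.5 * pme3 + rng.normal(size=n),
    "support_label": 0.4 * pme3 + 0.3 * free_speech + rng.normal(size=n),
})

for outcome in ("support_ban", "support_label"):
    fit = smf.ols(f"{outcome} ~ pme3 + free_speech", data=df).fit()
    print(outcome, "\n", fit.params, "\n")
```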

    Personal Moderation Configurations on Facebook: Exploring the Role of FoMO, Social Media Addiction, Norms, and Platform Trust

    Personal moderation tools on social media platforms let users control their news feeds by configuring acceptable toxicity thresholds for their feed content or muting inappropriate accounts. This research examines how Facebook users' configuration of these tools is shaped by four critical psychosocial factors: fear of missing out (FoMO), social media addiction, subjective norms, and trust in moderation systems. Findings from a nationally representative sample of 1,061 participants show that FoMO and social media addiction make Facebook users more vulnerable to content-based harms by reducing their likelihood of adopting personal moderation tools to hide inappropriate posts. In contrast, descriptive and injunctive norms positively influence the use of these tools. Further, trust in Facebook's moderation systems also significantly affects users' engagement with personal moderation. This analysis highlights qualitatively different pathways through which FoMO and social media addiction make affected users disproportionately unsafe and offers design and policy solutions to address this challenge.
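
    As a minimal sketch of what a personal moderation configuration of the kind described above might look like in code, the snippet below applies a per-user toxicity threshold and a muted-accounts list to a feed. The field names, threshold value, and filter_feed helper are assumptions for illustration, not Facebook's actual tools.

```python
# Hypothetical sketch of a personal moderation configuration: a per-user
# toxicity threshold plus a muted-accounts list applied to a news feed.
from dataclasses import dataclass, field

@dataclass
class ModerationConfig:
    toxicity_threshold: float = 0.7        # hide posts scored above this value
    muted_accounts: set = field(default_factory=set)

def filter_feed(posts, config):
    """Return only the posts that the user's configuration allows."""
    return [
        post for post in posts
        if post["author"] not in config.muted_accounts
        and post["toxicity_score"] <= config.toxicity_threshold
    ]

# Example usage with made-up posts and toxicity scores.
feed = [
    {"author": "alice",   "text": "Nice photo!",        "toxicity_score": 0.05},
    {"author": "muted99", "text": "Hello again",        "toxicity_score": 0.10},
    {"author": "bob",     "text": "Hostile comment...", "toxicity_score": 0.92},
]
config = ModerationConfig(toxicity_threshold=0.7, muted_accounts={"muted99"})
print(filter_feed(feed, config))   # keeps only alice's post
```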

    Understanding the Governance Challenges of Public Libraries Subscribing to Digital Content Distributors

    As popular demand for digital information increases, public libraries are increasingly turning to commercial digital content distribution services to save curation time and costs. These services let libraries subscribe to pre-configured digital content packages that become instantly available wholesale to their patrons. However, these packages often contain content that does not align with the library's curation policy. We conducted interviews with 15 public librarians in the US to examine their experiences with subscribing to digital distribution services. We found that the subscribing libraries face many digital governance challenges, including the sub-par quality of received content, a lack of control in the curation process, and a limited understanding of how distribution services operate. We draw from prior HCI and social media moderation literature to contextualize and examine these challenges. Building upon our findings, we suggest how digital distributors, libraries, and lawmakers could improve digital distribution services in library settings. We offer recommendations for co-constructing a robust digital content curation policy and discuss how librarians' cooperation and well-deployed content moderation mechanisms could help enforce that policy. Our work informs the utility of future content moderation research that bridges the fields of CSCW and library science.

    Addressing harm in online gaming communities -- the opportunities and challenges for a restorative justice approach

    Most platforms implement some form of content moderation to address interpersonal harms such as harassment. Content moderation relies on offender-centered, punitive justice approaches such as bans and content removals. We consider an alternative justice framework, restorative justice, which aids victims to heal, supports offenders to repair the harm, and engages community members to address the harm collectively. To understand the utility of restorative justice in addressing online harm, we interviewed 23 users from Overwatch gaming communities, including moderators, victims, and offenders. We examine how they currently handle harm cases through the lens of restorative justice and identify their attitudes toward implementing restorative justice processes. Our analysis reveals that while online communities have needs for and existing structures to support restorative justice, there are structural, cultural, and resource-related obstacles to implementing this new approach within the existing punitive framework. We discuss the opportunities and challenges for applying restorative justice in online spaces.

    Bystanders of Online Moderation: Examining the Effects of Witnessing Post-Removal Explanations

    Prior research on transparency in content moderation has demonstrated the benefits of offering post-removal explanations to sanctioned users. In this paper, we examine whether the influence of such explanations extends beyond those who are moderated to the bystanders who witness such explanations. We conduct a quasi-experimental study on two popular Reddit communities (r/askreddit and r/science) by collecting their data spanning 13 months, a total of 85.5M posts made by 5.9M users. Our causal-inference analyses show that bystanders significantly increase their posting activity and interactivity levels compared to their matched control set of users. Our findings suggest that explanations clarify and reinforce the social norms of online spaces, enhance community engagement, and benefit many more members than previously understood. We discuss the theoretical implications and design recommendations of this research, focusing on how investing more effort in post-removal explanations can help build thriving online communities.
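
    The causal-inference design described above compares bystanders who witnessed removal explanations against matched control users, before and after the witnessing event. A toy difference-in-differences calculation on invented posting counts is sketched below; the column names and figures are assumptions, and the paper's actual matching pipeline is far more involved.

```python
# Toy difference-in-differences sketch: posting activity of bystanders who
# witnessed a removal explanation vs. matched controls, before and after the
# event. All data and column names are invented for illustration.
import pandas as pd

data = pd.DataFrame({
    "user":   ["b1", "b1", "b2", "b2", "c1", "c1", "c2", "c2"],
    "group":  ["bystander"] * 4 + ["control"] * 4,
    "period": ["pre", "post"] * 4,
    "posts":  [10, 16, 8, 13, 9, 10, 11, 11],
})

means = data.groupby(["group", "period"])["posts"].mean().unstack("period")
did = (means.loc["bystander", "post"] - means.loc["bystander", "pre"]) - \
      (means.loc["control", "post"] - means.loc["control", "pre"])

print(means)
print("Difference-in-differences estimate:", did)
```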

    Cleaning Up the Streets: Understanding Motivations, Mental Models, and Concerns of Users Flagging Social Media Posts

    Social media platforms offer flagging, a technical feature that empowers users to report inappropriate posts or bad actors, to reduce online harms. While flags are often presented as flimsy icons, their simple interface disguises complex underlying interactions among users, algorithms, and moderators. Through semi-structured interviews with 22 active social media users who had recently flagged, we examine their understanding of flagging procedures, explore the factors that motivate and demotivate them from engaging in flagging, and surface their emotional, cognitive, and privacy concerns. Our findings show that a belief in generalized reciprocity motivates flag submissions, but deficiencies in procedural transparency create gaps in users' mental models of how platforms process flags. We highlight how flags raise questions about the distribution of labor and responsibility between platforms and users for addressing online harm. We recommend innovations in the flagging design space that assist user comprehension and facilitate granular status checks while aligning with their privacy and security expectations.
    Comment: Under review at ACM CSC

    Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels

    When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated websites. Previous work suggests that within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of their user base and activity on the new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment.
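
    The activity measures mentioned here (posts, active users, and newcomers over time) can be computed from a simple post table. The sketch below shows one way to derive monthly counts with pandas; the schema and the toy data are assumptions, not the paper's dataset.

```python
# Hypothetical sketch: monthly post counts, active users, and newcomers
# derived from a table of posts with author and timestamp columns.
import pandas as pd

posts = pd.DataFrame({
    "author": ["u1", "u2", "u1", "u3", "u2", "u4"],
    "timestamp": pd.to_datetime([
        "2020-01-03", "2020-01-15", "2020-02-02",
        "2020-02-10", "2020-02-20", "2020-03-05",
    ]),
})
posts["month"] = posts["timestamp"].dt.to_period("M")

n_posts = posts.groupby("month").size().rename("n_posts")
active_users = posts.groupby("month")["author"].nunique().rename("active_users")
# An author's first month of activity marks them as a newcomer in that month.
newcomers = (
    posts.groupby("author")["month"].min()
    .value_counts().sort_index().rename("newcomers")
)

print(pd.concat([n_posts, active_users, newcomers], axis=1).fillna(0))
```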

    "Taking Care of a Fruit Tree": Nurturing as a Layer of Concern in Online Community Moderation

    Care in communities has a powerful influence on potentially disruptive social encounters. Practising care in moderation means exposing a group's core values, which, in turn, has the potential to strengthen identity and relationships in communities. Dissent is as inevitable in online communities as it is in their offline counterparts. However, dissent can be productive by sparking discussions that drive the evolution of community norms and boundaries, and there is value in understanding the role of moderation in this process. Our work draws on an exploratory analysis of moderation practices in the MetaFilter community, focusing on cases of intervention and response. We identify and analyse MetaFilter moderation through the metaphor of "taking care of a fruit tree", quoted from an interview with MetaFilter moderators. We address the relevance of care as it is evidenced in these MetaFilter exchanges, and discuss what it might mean to approach an analysis of online moderation practices with a focus on nurturing care. We consider how HCI researchers might make use of care-as-nurture as a frame to identify multi-faceted and nuanced concepts characterising dissent and to develop tools for the sustainable support of online communities and their moderators.