3,919 research outputs found

    Addressing harm in online gaming communities -- the opportunities and challenges for a restorative justice approach

    Most platforms implement some form of content moderation to address interpersonal harms such as harassment. Content moderation relies on offender-centered, punitive justice approaches such as bans and content removals. We consider an alternative justice framework, restorative justice, which helps victims heal, supports offenders in repairing the harm, and engages community members in addressing the harm collectively. To understand the utility of restorative justice in addressing online harm, we interviewed 23 users from Overwatch gaming communities, including moderators, victims, and offenders. We examine how they currently handle harm cases through the lens of restorative justice and identify their attitudes toward implementing restorative justice processes. Our analysis reveals that while online communities have needs for, and existing structures that could support, restorative justice, there are structural, cultural, and resource-related obstacles to implementing this new approach within the existing punitive framework. We discuss the opportunities and challenges for applying restorative justice in online spaces.

    Negotiating conflict and negativity in an online community for recovering heart patients

    When an online community has been set up to support members living with heart disease, it has a responsibility to provide a safe environment in terms of emotional security and accurate health information. Unfortunately, in online communities as in communities generally, relationships developed among members can sometimes go awry. Situations can arise where private exchanges between members exacerbate public discord and conflict erupts: occasionally with both sides having legitimate reason to feel aggrieved. At this point, a usually self-regulating community can polarise and request the moderator's intervention. What happens when the moderator is perceived to be doing nothing about the situation and members of the community take matters into their own hands? This paper discusses the implications and challenges of conflict in a therapeutic community. It acknowledges that sometimes the situation can be too complex for simple resolution and that in such circumstances, one or both of the conflicted parties may have to withdraw from the site for a period of time.

    Building Credibility, Trust, and Safety on Video-Sharing Platforms

    Video-sharing platforms (VSPs) such as YouTube, TikTok, and Twitch attract millions of users and have become influential information sources, especially among the young generation. Video creators and live streamers make videos to engage viewers and form online communities. VSP celebrities obtain monetary benefits through monetization programs and affiliate marketing. However, there is a growing concern that user-generated videos are becoming a vehicle for spreading misinformation and controversial content. Some creators make inappropriate content for attention and financial benefit, while others face harassment and attacks. This workshop seeks to bring together a group of HCI scholars to brainstorm technical and design solutions to improve the credibility, trust, and safety of VSPs. We aim to discuss and identify research directions for technology design, policy-making, and platform services for video-sharing platforms.

    Understanding the voluntary moderation practices in live streaming communities

    Harmful content, such as hate speech, online abuse, harassment, and cyberbullying, proliferates across online communities. Live streaming, a novel form of online community, lets thousands of users (viewers) be entertained by and engage with a broadcaster (streamer) in real time in the chatroom. While the streamer has the camera on and the screen shared, tens of thousands of viewers are watching and messaging in real time, raising concerns about harassment and cyberbullying. To regulate harmful content such as toxic messages in the chatroom, streamers rely on a combination of automated tools and volunteer human moderators (mods) to block users or remove content, a practice termed content moderation. Live streaming as a mixed medium has unique attributes, such as synchronicity and authenticity, that make real-time content moderation challenging. Given the high interactivity and ephemerality of live text-based communication in the chatroom, mods have to make decisions under time constraints and with little instruction, suffering cognitive overload and an emotional toll. While much previous work has focused on moderation in asynchronous online communities and social media platforms, very little is known about human moderation in synchronous online communities with live interaction among users. It is necessary to understand mods' moderation practices in live streaming communities, given their role in supporting community growth. This dissertation centers on volunteer mods in live streaming communities to explore their moderation practices and their relationships with streamers and viewers. Through quantitative and qualitative methods, it focuses on three aspects: the strategies and tools used by moderators, the mental models and decision-making processes applied toward violators, and the conflict management present in the moderation team. The dissertation uses various socio-technical theories to explain mods' individual and collaborative practices and suggests several design interventions to facilitate the moderation process in live streaming communities.
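
    The abstract above describes a hybrid arrangement: automated tools act on clear-cut violations in real time, while volunteer mods handle the gray areas. A minimal sketch of that division of labor follows; it is not taken from the dissertation, and the word lists, dataclasses, and return values are illustrative assumptions:

        # Hybrid live-chat moderation sketch: a rule-based filter removes
        # clear violations instantly; borderline messages are queued for
        # volunteer human moderators (mods). All terms are placeholders.
        from dataclasses import dataclass, field
        from queue import Queue

        BLOCKLIST = {"slur_a", "slur_b"}      # hypothetical auto-remove terms
        SUSPICIOUS = {"trash", "idiot"}       # milder terms needing human judgment

        @dataclass
        class ChatMessage:
            user: str
            text: str

        @dataclass
        class Moderator:
            review_queue: Queue = field(default_factory=Queue)

        def auto_moderate(msg: ChatMessage, mod: Moderator) -> str:
            words = set(msg.text.lower().split())
            if words & BLOCKLIST:
                return "removed"              # automated tool acts alone
            if words & SUSPICIOUS:
                mod.review_queue.put(msg)     # defer the gray area to a mod
                return "queued"
            return "allowed"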

    "Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models

    The misuse of large language models (LLMs) has garnered significant attention from the general public and LLM vendors. In response, efforts have been made to align LLMs with human values and intended use. However, a particular type of adversarial prompt, known as a jailbreak prompt, has emerged and continuously evolved to bypass the safeguards and elicit harmful content from LLMs. In this paper, we conduct the first measurement study of jailbreak prompts in the wild, with 6,387 prompts collected from four platforms over six months. Leveraging natural language processing technologies and graph-based community detection methods, we discover unique characteristics of jailbreak prompts and their major attack strategies, such as prompt injection and privilege escalation. We also observe that jailbreak prompts increasingly shift from public platforms to private ones, posing new challenges for LLM vendors in proactive detection. To assess the potential harm caused by jailbreak prompts, we create a question set comprising 46,800 samples across 13 forbidden scenarios. Our experiments show that current LLMs and safeguards cannot adequately defend against jailbreak prompts in all scenarios. In particular, we identify two highly effective jailbreak prompts that achieve attack success rates of 0.99 on ChatGPT (GPT-3.5) and GPT-4, and they have persisted online for over 100 days. Our work sheds light on the severe and evolving threat landscape of jailbreak prompts. We hope our study can help the research community and LLM vendors promote safer and better-regulated LLMs.
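
    The paper's headline metric is the attack success rate of a jailbreak prompt over a set of forbidden questions. A minimal sketch of such an evaluation harness follows; query_model() is a placeholder for the target LLM's API, and the refusal-marker heuristic is a crude assumption, not the authors' classifier:

        # Attack-success-rate (ASR) harness sketch: pair a jailbreak prompt
        # with forbidden questions, query the model, and count non-refusals.
        REFUSAL_MARKERS = ("i'm sorry", "i cannot", "i can't", "as an ai")

        def query_model(prompt: str) -> str:
            raise NotImplementedError("wire up the target LLM here")

        def is_jailbroken(response: str) -> bool:
            # Heuristic: any response without a refusal phrase counts as a success.
            text = response.lower()
            return not any(marker in text for marker in REFUSAL_MARKERS)

        def attack_success_rate(jailbreak_prompt: str, questions: list[str]) -> float:
            successes = sum(
                is_jailbroken(query_model(f"{jailbreak_prompt}\n{q}"))
                for q in questions
            )
            return successes / len(questions)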

    Gen Z Digital Leadership through Social Media

    Born within specific timeframes, typically spans of five to ten years, individuals form distinct generational cohorts that share common experiences and traits shaped by the technology that evolves during their formative years. Generation Z, encompassing those born between 1996 and 2006, navigated childhood, adolescence, and young adulthood in the company of smartphones, tablets, social media, online gaming, and various digital interaction platforms. This exploration of Generation Z digital leadership through social media, using qualitative methods combining netnography and a literature review, illuminates the dynamic interplay between generational disparities and the evolving digital landscape. The findings indicate that when leadership goals and purposes are explicit, Gen Z leaders' disclosures may be perceived as fair by followers, enhancing interaction quality and shaping follower perceptions positively. Social media has the potential to bridge gaps, serving as a powerful tool for fostering cohesion and connectivity for Generation Z within a broader range of social contexts. This study found that initiative and high impact are among the main characteristics of Gen Z digital leaders, who prefer online over offline discussions. Their development and expression through social media consist of personal growth, learning new things, and developing skills and voice. Gen Z digital leaders use social media in various contexts, such as fostering collaboration, building networks, and inspiring action. They also face challenges on social media, including passivity and communication failure, alongside opportunities such as amplifying underrepresented voices, influencing others, and inspiring action. Supporting and integrating Gen Z digital leadership skills in practice means knowing the cohort closely, fostering digital literacy in the senior generations that work alongside them, understanding personal strengths, and creating leadership opportunities within safe online environments.

    Cleaning Up the Streets: Understanding Motivations, Mental Models, and Concerns of Users Flagging Social Media Posts

    Social media platforms offer flagging, a technical feature that empowers users to report inappropriate posts or bad actors, to reduce online harms. While flags are often presented as flimsy icons, their simple interface disguises complex underlying interactions among users, algorithms, and moderators. Through semi-structured interviews with 22 active social media users who had recently flagged, we examine their understanding of flagging procedures, explore the factors that motivate and demotivate them from engaging in flagging, and surface their emotional, cognitive, and privacy concerns. Our findings show that a belief in generalized reciprocity motivates flag submissions, but deficiencies in procedural transparency create gaps in users' mental models of how platforms process flags. We highlight how flags raise questions about the distribution of labor and responsibility between platforms and users for addressing online harm. We recommend innovations in the flagging design space that assist user comprehension and facilitate granular status checks while aligning with users' privacy and security expectations.
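
    One of the paper's recommendations is granular status checks for submitted flags. Below is a minimal sketch of a flag lifecycle that could expose such checks; the states, fields, and wording are assumptions, not any platform's actual schema:

        # Flag record with an explicit lifecycle, so the reporting user can
        # query progress without seeing moderator identities (a privacy
        # expectation the paper surfaces). States and fields are assumed.
        from dataclasses import dataclass
        from enum import Enum, auto

        class FlagStatus(Enum):
            SUBMITTED = auto()            # received from the user
            AUTO_TRIAGED = auto()         # scored and routed by an algorithm
            UNDER_REVIEW = auto()         # in a human moderator's queue
            RESOLVED_ACTIONED = auto()    # content removed or user sanctioned
            RESOLVED_NO_ACTION = auto()   # reviewed; no violation found

        @dataclass
        class Flag:
            flag_id: str
            post_id: str
            reason: str
            status: FlagStatus = FlagStatus.SUBMITTED

            def status_message(self) -> str:
                return f"Report {self.flag_id} on post {self.post_id}: {self.status.name}"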

    Trans Time: Safety, Privacy, and Content Warnings on a Transgender-Specific Social Media Site

    Trans people often use social media to connect with others, find and share resources, and post transition-related content. However, because most social media platforms are not built with trans people in mind, and because online networks include people who may not accept one's trans identity, sharing trans content can be difficult. We studied Trans Time, a social media site developed particularly for trans people to document transition and build community. We interviewed early Trans Time users (n = 6) and conducted focus groups with potential users (n = 21) to understand how a trans-specific site uniquely supports its users. We found that Trans Time has the potential to be a safe space, encourages privacy, and effectively enables its users to selectively view content using content warnings. Together, safety, privacy, and content warnings create an online space where trans people can simultaneously build community, find support, and express both the mundanity and excitement of trans life. Yet in each of these areas, we also learned ways that the site can improve. We provide implications for how social media sites may better support trans users, as well as insular communities of people from other marginalized groups.
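
    To illustrate the selective-viewing mechanism the study highlights, here is a minimal sketch of content-warning filtering; the Post and Viewer shapes and the collapse behavior are illustrative assumptions, not Trans Time's implementation:

        # Content-warning filtering sketch: posts carry warning tags, and a
        # viewer's preferences decide whether the body is shown or collapsed
        # behind its warning label until the viewer opts in.
        from dataclasses import dataclass, field

        @dataclass
        class Post:
            body: str
            warnings: set[str] = field(default_factory=set)

        @dataclass
        class Viewer:
            collapsed_tags: set[str] = field(default_factory=set)

        def render(post: Post, viewer: Viewer) -> str:
            hits = post.warnings & viewer.collapsed_tags
            if hits:
                return f"[content warning: {', '.join(sorted(hits))}] (tap to view)"
            return post.body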

    Nip it in the Bud: Moderation Strategies in Open Source Software Projects and the Role of Bots

    Much of our modern digital infrastructure relies critically upon open source software (OSS). The communities responsible for building this cyberinfrastructure require maintenance and moderation, which is often supported by volunteer effort. Moderation, a non-technical form of labor, is a necessary but often overlooked task that maintainers undertake to sustain the community around an OSS project. This study examines the various structures and norms that support community moderation, describes the strategies moderators use to mitigate conflicts, and assesses how bots can play a role in assisting these processes. We interviewed 14 practitioners to uncover existing moderation practices and ways that automation can provide assistance. Our main contributions include a characterization of moderated content in OSS projects and of moderation techniques, as well as perceptions of and recommendations for improving the automation of moderation tasks. We hope that these findings will inform the implementation of more effective moderation practices in open source communities.
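
    As one concrete example of the bot assistance this line of work examines, a moderation bot might lock an overheated issue thread. The sketch below uses GitHub's documented "lock an issue" REST endpoint; the heat heuristic, threshold, and token handling are illustrative assumptions:

        # Moderation-bot sketch: detect a heated thread and lock it via
        # GitHub's REST API (PUT /repos/{owner}/{repo}/issues/{n}/lock).
        # The keyword heuristic and threshold are placeholders.
        import os
        import requests

        HEATED_WORDS = {"stupid", "garbage", "useless"}   # hypothetical list

        def looks_heated(comments: list[str], threshold: int = 3) -> bool:
            hits = sum(any(w in c.lower() for w in HEATED_WORDS) for c in comments)
            return hits >= threshold

        def lock_issue(owner: str, repo: str, number: int) -> None:
            resp = requests.put(
                f"https://api.github.com/repos/{owner}/{repo}/issues/{number}/lock",
                headers={
                    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
                    "Accept": "application/vnd.github+json",
                },
                json={"lock_reason": "too heated"},  # one of GitHub's allowed reasons
                timeout=10,
            )
            resp.raise_for_status()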

    Online Communities

    An online community is a group of people with shared identities or interests who use social technologies to connect and interact with each other. Since the early days of the Internet, online communities have been particularly important means for trans people to connect with similar others, explore identity, share resources, document transition, and work toward activism and advocacy. Some of these communities are for trans people broadly, while others focus on particular trans identities (e.g., trans women, nonbinary people, trans men) or particular identity facets or experiences that intersect with trans identities (e.g., race, disability status, age). Early Internet trans online communities involved high levels of anonymity, which enabled people to safely explore trans identities online. However, when many trans communities moved to social media sites, a new set of challenges emerged related to connections to one's physical world persona, disclosure difficulties, convergence of multiple audiences, and difficulties of moderation and maintaining community boundaries. Future trans online communities would benefit from design processes that include trans people and communities, as well as technology designs that center trans experiences.