9 research outputs found

    A Cross-Game Look at Transgender Representation in Video Games

    Get PDF
    Despite a history of tracking, analyzing, and discussing transgender representation in media, research often leaves video games out. In this project, I analyzed 63 games, released between 1988 and 2019, that the LGBTQ Game Archive documents as having transgender characters. A content survey revealed four overarching trends in how video games represent transgender characters: dysphoria/physical transition, mentally ill killers, trans shock/reveal, and ambiguity. I also demonstrated that transgender representation in video games manifests in ways similar to film and television. Three of the four trends have been studied repeatedly in media studies, but the fourth and largest, gender ambiguity, remains understudied. Research on transgender representation in video games mostly focuses on explicit representation. The findings show, however, that even where explicit representation is lacking, transness is widely present in media in the form of gender ambiguity.

    Review: Intersectional Tech: Black Users in Digital Gaming

    Get PDF
    Review: Intersectional Tech: Black Users in Digital Gaming, by Kishonna L. Gray. 2020. Louisiana State University Press. xiii + 195 pp.

    "Should We Unban, Chat?": Communal Content Moderation on Twitch

    No full text
    Employing digital ethnographic and interview methods alongside thematic analysis, I examined communal content moderation practices in Twitch “unban appeal streams.” Streamers use humor and unban appeal streams as opportunities to make entertaining content, but some also use these streams to educate and reform offenders. A feedback loop exists between Twitch and its most popular streamers, allowing Twitch and its users to hold each other accountable. However, platforms like Twitch have centralized actors that hold more power than their users, making conversation between the two difficult. In light of desires for a more communal, user-driven platform, I offer suggestions for improving protections for marginalized users who experience online harassment and abuse, as well as for implementing justice theories in content moderation systems.

    How Transgender People and Communities Were Involved in Trans Technology Design Processes

    Full text link
    Trans technology – technology created to help address challenges that trans people face – is an important area for innovation that can help improve marginalized people's lives. We conducted 104 interviews with 115 creators of trans technology to understand how they involved trans people and communities in design processes. We describe projects that used human-centered design processes, as well as design processes that involved trans people in smaller ways, including gathering feedback from users, conducting user testing, or the creators being trans themselves. We show how involving trans people and communities in design is vital for trans technologies to realize their potential for addressing trans needs. Yet we highlight a frequent gap between trans technology design and deployment, and discuss ways to bridge this gap. We argue for the importance of involving community in trans technology design to ensure that trans technology achieves its promise of helping address trans needs and challenges.
    Funding: National Science Foundation; Riecker Undergraduate Research Fund; Center for the Education of Women+; UMSI REMS Program. Peer Reviewed.
    http://deepblue.lib.umich.edu/bitstream/2027.42/176366/1/HaimsonHowTransgenderPeople.pdf

    (In)visible moderation: A digital ethnography of marginalized users and content moderation on Twitch and Reddit

    Full text link
    Research suggests that marginalized social media users face disproportionate content moderation and removal. However, when content is removed or accounts suspended, the processes governing content moderation are largely invisible, making content moderation bias difficult to assess. To study this bias, we conducted a digital ethnography of marginalized users on Reddit’s /r/FTM subreddit and Twitch’s “Just Chatting” and “Pools, Hot Tubs, and Beaches” categories, observing content moderation visibility in real time. We found that on Reddit, a text-based platform, platform tools make content moderation practices invisible to users, but moderators make their practices visible through communication with users. Yet on Twitch, a live chat and streaming platform, content moderation practices are visible in channel live chats, “unban appeal” streams, and “back from my ban” streams. Our ethnography shows how content moderation visibility differs in important ways between social media platforms, at times harming users who must view offensive content and at other times allowing for increased platform accountability.
    Funding: National Science Foundation. Peer Reviewed.
    http://deepblue.lib.umich.edu/bitstream/2027.42/174108/1/ThachInvisibleModeration.pdf

    Content Moderation Folk Theories and Perceptions of Platform Spirit among Marginalized Social Media Users

    Full text link
    Social media users create folk theories to help explain how elements of social media operate. Marginalized social media users face disproportionate content moderation and removal on social media platforms. We conducted a qualitative interview study (n = 24) to understand how marginalized social media users may create folk theories in response to content moderation and their perceptions of platforms’ spirit, and how these theories may relate to their marginalized identities. We found that marginalized social media users develop folk theories informed by their perceptions of platforms’ spirit to explain instances where their content was moderated in ways that violate their perceptions of how content moderation should work in practice. These folk theories typically address content being removed despite not violating community guidelines, along with bias against marginalized users embedded in guidelines. We provide implications for platforms, such as using marginalized users’ folk theories as tools to identify elements of platform moderation systems that function incorrectly and disproportionately impact marginalized users.
    Funding: National Science Foundation. Peer Reviewed.
    http://deepblue.lib.umich.edu/bitstream/2027.42/193110/1/3632741.pdf

    "What are you doing, TikTok?": How Marginalized Social Media Users Perceive, Theorize, and "Prove" Shadowbanning

    Full text link
    Shadowbanning is a unique content moderation strategy receiving recent media attention for the ways it impacts marginalized social media users and communities. Social media companies often deny this content moderation practice despite user experiences online. In this paper, we use qualitative surveys and interviews to understand how marginalized social media users make sense of shadowbanning, develop folk theories about shadowbanning, and attempt to prove its occurrence. We find that marginalized social media users collaboratively develop and test algorithmic folk theories to make sense of their unclear experiences with shadowbanning. Participants reported direct consequences of shadowbanning, including frustration, decreased engagement, the inability to post specific content, and potential financial implications. They reported holding negative perceptions of platforms where they experienced shadowbanning, sometimes attributing their shadowbans to platforms’ deliberate suppression of marginalized users’ content. Some marginalized social media users acted on their theories by adapting their social media behavior to avoid potential shadowbans. We contribute collaborative algorithm investigation: a new concept describing social media users’ strategies of collaboratively developing and testing algorithmic folk theories. Finally, we present design and policy recommendations for addressing shadowbanning and its potential harms.
    Funding: National Science Foundation award #1942125. Peer Reviewed.
    http://deepblue.lib.umich.edu/bitstream/2027.42/192621/1/Shadowbanning_CSCW23_MinorRevisions.pdf

    The Online Identity Help Center: Designing and Developing a Content Moderation Policy Resource for Marginalized Social Media Users

    Full text link
    Marginalized social media users struggle to navigate the inequitable content moderation they experience online. We developed the Online Identity Help Center (OIHC) to confront this challenge by providing information on social media users' rights, summarizing platforms' policies, and providing instructions to appeal moderation decisions. We discuss our findings from interviews (n = 24) and surveys (n = 75) which informed the OIHC's design, along with interviews about and usability tests of the site (n = 12). We found that the OIHC's resources made it easier for participants to understand platforms' policies and access appeal resources. Participants expressed increased willingness to read platforms' policies after reading the OIHC's summarized versions, but also expressed mistrust of platforms after doing so. We discuss the study's implications, such as the benefits of providing summarized policies to encourage digital literacy, and how doing so may enable users to express informed skepticism of platforms' policies.
    Funding: National Science Foundation. Peer Reviewed.
    http://deepblue.lib.umich.edu/bitstream/2027.42/193107/1/3637406.pdf