18 research outputs found

    “Don’t Take on the Responsibility of Somebody Else’s Fu**ed Up Behavior”: Responding to Online Abuse in the Context of Barriers to Support

    Responsibilization, in a true circular fashion, is not only born of but also benefits institutional (e.g., social media companies and law enforcement) and cultural power structures (e.g., misogyny and patriarchy). When targets of online abuse take responsibility for the abuse launched against them, that assumption of responsibility requires energy, and that energy is taken away from efforts to hold institutions and perpetrators accountable. Responsibilization tries to tranquilize change in the service of power. The tricky thing about interrupting this process is that it requires more than just offering better support. It also requires exposing, challenging, and dismantling harmful ideologies, belief systems, and values that underpin the responsibility-taking that equality-seeking groups have long undergone as a way to deal with multiple forms of oppression and discrimination. Eliminating the problem may not be possible. The immediate focus instead should be on reducing harm in the here and now by offering stronger, more varied, and more effective support from all stakeholders, especially social media platforms.

    Politics and porn: how news media characterizes problems presented by deepfakes, Critical Studies in Media Communication

    “Deepfake” is a form of machine learning that creates fake videos by superimposing the face of one person onto the body of another in a new video. The technology has been used to create non-consensual fake pornography and sexual imagery, but there is concern that it will soon be used for politically nefarious ends. This study seeks to understand how the news media have characterized the problem(s) presented by deepfakes. We used discourse analysis to examine news articles about deepfakes, finding that news media discuss the problems of deepfakes in four ways: as (too) easily produced and distributed; as creating false beliefs; as undermining the political process; and as non-consensual sexual content. We provide an overview of how news media position each problem, followed by a discussion of the varying degrees of emphasis given to each problem and the implications this has for the public’s perception and construction of deepfakes.

    "Blockbotting Dissent": Publics, Counterpublics, and Algorithmic Public Sphere(s)

    In 2014, at the height of Gamergate hostilities, a blockbot was developed and circulated within the gaming community that allowed subscribers to automatically block upwards of 8,000 Twitter accounts. “Ggautoblocker,” as it was called, was designed to insulate subscribers’ Twitter feeds from hurtful, sexist, and in some cases deeply disturbing comments. In doing so, it cast a wide net and became a source of considerable criticism from many in the industry and games community. During this time, the International Game Developers Association (IGDA) 2015 Video Game Developer Satisfaction Survey was circulating, resulting in a host of comments on the blockbot from workers in the industry. In this paper we analyze these responses, which constitute some of the first empirical data on a public response to the use of autoblocking technology, to consider the broader implications of the algorithmic structuring of the online public sphere. First, we emphasize the important role that ggautoblocker, and similar autoblocking tools, play in creating space for marginalized voices online. Then, we turn to our findings and argue that the overwhelmingly negative response to ggautoblocker reflects underlying anxieties about fragmenting control over the structure of the online public sphere and online public life. In our discussion, we reflect upon what the negative responses suggest about normative expectations of participation in the online public sphere, and how this contrasts with the realities of algorithmically structured online spaces.
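
    To give a concrete sense of the mechanism at issue, the sketch below shows one way a shared-blocklist tool in the spirit of ggautoblocker might operate: a subscriber consumes a community-maintained list of account IDs and automatically blocks each account not already blocked. This is an illustrative assumption only; the block_account function, the sample IDs, and the overall structure are hypothetical stand-ins, not the tool analyzed in the paper.

```python
# Minimal, hypothetical sketch of a shared-blocklist ("blockbot") subscription.
# block_account() and the sample IDs are placeholders, not the real ggautoblocker.
from typing import Iterable, Set


def block_account(account_id: str) -> None:
    """Placeholder for a call to the platform's block endpoint."""
    print(f"blocking account {account_id}")


def apply_blocklist(shared_list: Iterable[str], already_blocked: Set[str]) -> Set[str]:
    """Block every account on the shared list that the subscriber has not blocked yet."""
    for account_id in shared_list:
        if account_id not in already_blocked:
            block_account(account_id)
            already_blocked.add(account_id)
    return already_blocked


if __name__ == "__main__":
    # In practice the list would be fetched from a subscription source;
    # this tiny in-memory sample stands in for the roughly 8,000 shared IDs.
    sample_ids = ["1001", "1002", "1003"]
    apply_blocklist(sample_ids, already_blocked={"1002"})
```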

    More barriers than solutions: Women’s experiences of support with online abuse

    Women have long fought for recognition of, and protection from, the violence and abuse directed against them. This fight has only grown more complex with the introduction of digital technology and online abuse. In this thesis, I adopt a hermeneutic phenomenological approach to understand how 15 women experience barriers to support when they are targeted by online abuse and examine their responses to those barriers. Drawing from a range of theoretical frameworks and literature, this thesis contributes to the current understanding of online violence against women and girls. Overall, I found that participants experienced and responded to barriers to support in three distinct ways: first, they experienced barriers across a range of environments, but most commonly at the institutional level, with social media and gaming companies presenting the most problems. Second, they experienced barriers rooted in digital dualism, whereby online abuse was treated as less harmful than offline abuse. And third, participants experienced barriers to support as something they must respond to themselves, leaving them with few options other than to take responsibility for their own safety and well-being. In chapters one and two I provide an overview of relevant literature and my methodological decisions. In chapter three I use Bronfenbrenner’s (1979) ecological model to create a schema of the barriers women face when they seek support for online abuse. In chapter four I focus on one particular barrier discussed in chapter three, digital dualism (Jurgenson, 2011). Digital dualism is the discursive habit of conceptually splitting online and offline life into separate and opposing domains. I employ embodiment theory and digital ontology, among other frameworks, to discuss digital dualism’s consequences as a barrier to support. In chapter five I look at participants’ responses to abuse in the context of these barriers. I combine the weak support ecosystem with gendered social oppression strategies, such as victim-blaming and rape myth acceptance, to explore indicators of responsibilization among participants’ responses to barriers to support. While online abuse mirrors inequality and abuse that predate the Internet, this research provides a much-needed foundation for articulating how targets of online abuse experience barriers to support and, ultimately, to online equality.

    Nothing new here: Emphasizing the social and cultural context of deepfakes

    In the last year and a half, deepfakes have garnered a lot of attention as the newest form of digital manipulation. While not problematic in and of itself, deepfake technology exists in a social environment rife with cybermisogyny, toxic technocultures, and attitudes that devalue, objectify, and use women’s bodies against them. The basic technology, which in fact embodies none of these characteristics, is deployed within this harmful environment to produce problematic outcomes, such as the creation of fake and non-consensual pornography. The sophisticated technology and the metaphysical nature of deepfakes as both real and not real (the body of one person, the face of another) make them impervious to many technical, legal, and regulatory solutions. For these same reasons, defining the harm deepfakes cause to those targeted is similarly difficult, and very often targets of deepfakes are not afforded the protection they require. We argue that it is important to put an emphasis on the social and cultural attitudes that underscore the nefarious use of deepfakes, and thus to adopt a material-based, as opposed to technological, approach to understanding the harm presented by deepfakes.

    Citizenship at work. Diversity among videogame developers, 2004-2015

    This report relies on the data collected in five surveys administered by the International Game Developers Association: the 2004 and 2009 Quality of Life Surveys, the 2005 Demographics Survey, and the 2014 and 2015 Developer Satisfaction Surveys (DSS), as well as on interviews conducted in Canada in 2013-14. Here we summarize the demographic trends in the videogame industry, the incidence of demographic differences, and the perceptions of diversity in the game industry over an 11-year span. Throughout the report we consider diversity to refer to demographic diversity based on factors such as gender, ethnicity, age, ability, and sexual orientation.

    Not Far Enough: How Workplace Harassment Policies Fail to Protect Scholars from Online Abuse

    Over the last decade, online spaces and digital tools have become a central part of scholarly work and research mobilization (Carrigan, 2016). However, the integration of, and reliance on, these technologies in scholars’ work lives has heightened their online visibility, which has opened the door to new experiences of online abuse. Previous research shows that online abuse has negative impacts on scholars’ work, and that they are left to deal with the consequences of online abuse primarily on their own, with little support from their institution (Authors, 2018a; 2018b). Given the importance of online spaces/tools in scholars’ lives and the detrimental impacts of harassment, colleges and universities must recognize the risks associated with online visibility and have policies in place that address those risks. In this paper we analyze 41 workplace harassment and discrimination policies from Canadian universities and colleges to understand what these institutions propose to do about online abuse. We use Bacchi’s (2012) ‘What’s the problem represented to be?’ (WPR) approach. This approach encourages examination of the assumptions and conceptual logics within the framing of a problem in order to understand implicit problem representations. Early analysis identified two problems common across the 41 policies that limit their ability to offer protection and/or support in many cases of online abuse: the first concerns who the policies apply to, and the second where they apply.

    I get by with a little help from my friends: The ecological model and support for women scholars experiencing online harassment

    This article contributes to understanding the phenomenon of online abuse and harassment toward women scholars. We draw on data collected from 14 interviews with women scholars from the United States, Canada, and the United Kingdom, and report on the types of support they sought during and after their experience with online abuse and harassment. We found that women scholars rely on three levels of support: the first level includes personal and social support (such as encouragement from friends and family and outsourcing comment reading to others); the second includes organizational (such as university or institutional policy), technological (such as reporting tools on Twitter or Facebook), and sectoral (such as law enforcement) support; and the third includes larger cultural and social attitudes and discourses (such as attitudes around gendered harassment and perceptions of the online/offline divide). While participants relied on social and personal support most frequently, they commonly reported relying on multiple supports across all three levels. We use an ecological model as our framework to demonstrate how different types of support are interconnected, and recommend that support for targets of online abuse integrate aspects of all three levels.

    AI and Technology-Facilitated Violence and Abuse

    Artificial intelligence (AI) is being used, and in some cases specifically designed, to cause harms against members of equality-seeking communities. These harms, which we term “equality harms,” have individual and collective effects, and emanate from both “direct” and “structural” violence. Discussions about the role of AI in technology-facilitated violence and abuse (TFVA) sometimes do not include equality harms specifically. When they do, they frequently focus on individual equality harms caused by “direct” violence (e.g. the use of deepfakes to create non-consensual pornography to harass or degrade individual women). Often little attention is paid to the collective equality harms that flow from structural violence, including those that arise from corporate actions motivated by the drive to profit from data flows (e.g. algorithmic profiling). Addressing TFVA in a comprehensive way means considering equality harms arising from both individual and corporate behaviours. This will require going beyond criminal law reforms to punish “bad” individual actors, since responses focused on individual wrongdoers fail to address the social impact of the structural violence that flows from some commercial uses of AI. Although the harms occasioned by these (ab)uses of AI are, in many cases, the very sort of harms that law has been used to address, existing Canadian law is not currently well placed to meaningfully address equality harms.