Safe from “harm”: The Governance of Violence by Platforms

Abstract

A number of issues have emerged related to how platforms moderate and mitigate “harm.” Although platforms have recently developed more explicit policies regarding what constitutes “hate speech” and “harmful content,” they often rely on subjective judgments of harm that pertain specifically to spectacular, physical violence—but harm takes on many shapes and complex forms. The politics of defining “harm” and “violence” within these platforms are complex and dynamic, and represent entrenched histories of how control over these definitions extends to people's perceptions of them. Via a critical discourse analysis of policy documents from three major platforms (Facebook, Twitter, and YouTube), we argue that platforms' narrow definitions of harm and violence are not just insufficient but result in these platforms engaging in a form of symbolic violence. Moreover, the platforms position harm as a floating signifier, imposing conceptions of not just what violence is and how it manifests, but whom it impacts. Rather than changing the mechanisms of their design that enable harm, the platforms reconfigure intentionality and causality to try to stop users from being “harmful,” which, ironically, perpetuates harm. We provide a number of suggestions, namely a restorative justice-focused approach, for addressing platform harm.