Moderating online child sexual abuse material (CSAM): Does self-regulation work, or is greater state regulation needed?

Abstract

Social media platforms serve as crucial public forums in the Internet age, connecting users around the world through a decentralised cyberspace. These platforms host high volumes of content, making critical the role of content moderators (CMs) employed to safeguard users against harmful content such as child sexual abuse material and gore. Yet despite how essential CMs are to the social media landscape, their work as "first responders" is complicated by legal and systemic debates over whether policing cyberspace should be left to the self-regulation of tech companies or whether greater state regulation is required. This scoping review identifies and evaluates the major debates in this area: the issue of territorial jurisdiction and how it obstructs traditional policing online; concerns over free speech and privacy if CMs are given greater powers; debates over whether tech companies should be held legally liable for user-generated content; and the mental and professional impacts on the CMs now operating as the new frontline against the harmful, often traumatic, material shared on social media. In outlining these debates, our objective is to highlight the areas requiring further attention in order to best support CMs and to enhance responses to harmful online content.