15 research outputs found
Design by Bot: Power and Resistance in the Development of Automated Software Agents
In this paper, I discuss how Application Programming Interfaces (APIs) have enabled new modes of software development that complicate traditional distinctions between developers and users. Coders can build bots, scripts, scrapers, extensions, aggregators, and other tools that change how software applications, platforms, and protocols operate – all without requiring privileged access to software codebases. In Wikipedia, user-authored bots and tools perform a staggering amount of the work required to keep the collaborative encyclopedia project operating in the manner that it does. Bots remove vandalism and spam, alert administrators to conflicts, harmonize linguistic standards, and enforce discursive norms. In reddit, bots have recently emerged to provide new functionalities to the news aggregation and discussion site. I report on an ethnographic study of bot development and bot developers in Wikipedia and reddit, demonstrating the various ways in which the rise of automated software agents has enabled new forms of both power and resistance.
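The claim that bots operate through public APIs, with no privileged access to a platform's codebase, can be illustrated with a minimal sketch. The MediaWiki Action API endpoint and parameters below are real and publicly documented, but the function is an invented example, not the code of any bot discussed in the paper:

```python
# Sketch: a bot needs only the public HTTP API, not the platform's source.
# Here we build a MediaWiki Action API query URL that lists recent edits
# flagged as bot edits on English Wikipedia.
from urllib.parse import urlencode

API_ENDPOINT = "https://en.wikipedia.org/w/api.php"

def build_recent_changes_query(limit=10):
    """Return a query URL for recent bot-flagged edits (read-only)."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcshow": "bot",     # restrict to edits flagged as bot edits
        "rclimit": limit,
        "format": "json",
    }
    return API_ENDPOINT + "?" + urlencode(params)
```

Fetching this URL with any HTTP client returns JSON that a user-authored script can act on, which is the sense in which API access shifts power toward coder-users.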
ArXiv Archive
This is a full archive of metadata about papers on arxiv.org, including Jupyter notebooks for unpacking and analyzing it. Current version contains papers from 1993 to 2018-10-01. Variables include:
- abstract
- ACM classification (if entered)
- arXiv ID
- authors (comma-separated)
- arXiv categories (comma-separated)
- author comments (if entered)
- date created
- DOI (if entered)
- number of authors
- number of arXiv categories
- primary arXiv category
- title
- date updated
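Several of the archive's variables are derived from the comma-separated fields, e.g. the number of authors and the primary category. A small sketch of that derivation, using an invented placeholder record rather than real arXiv metadata:

```python
# Sketch: deriving summary variables from a record shaped like the
# archive's fields. The field names follow the variable list above;
# the example record itself is made up for illustration.
def summarize(record):
    authors = [a.strip() for a in record["authors"].split(",")]
    categories = [c.strip() for c in record["categories"].split(",")]
    return {
        "arxiv_id": record["arxiv_id"],
        "num_authors": len(authors),
        "num_categories": len(categories),
        "primary_category": categories[0],  # first listed category
    }

example = {
    "arxiv_id": "0000.00000",  # placeholder, not a real paper ID
    "authors": "A. Author, B. Author",
    "categories": "cs.CY, cs.SI",
}
```

The bundled Jupyter notebooks presumably perform this kind of unpacking at scale over the full metadata dump.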
Bot-based collective blocklists in Twitter: The counterpublic moderation of harassment in a networked public space
This article introduces and discusses bot-based collective blocklists (or blockbots) in Twitter, which have been developed by volunteers to combat harassment in the social networking site. Blockbots support the curation of a shared blocklist of accounts, where subscribers to a blockbot will not receive any notifications or messages from those on the blocklist. Blockbots support counterpublic communities, helping people moderate their own experiences of a site. This article provides an introduction and overview of blockbots and the issues that they raise about networked publics and platform governance, extending an intersecting literature on online harassment, platform governance, and the politics of algorithms. Such projects involve a far more reflective, intentional, transparent, collaborative, and decentralized way of using algorithmic systems to respond to issues like harassment. I argue that blockbots are not just technical solutions but social ones as well, a notable exception to common technologically determinist solutions that often push responsibility for issues like harassment to the individual user. Beyond the case of Twitter, blockbots call our attention to collective, bottom-up modes of computationally assisted moderation that can be deployed by counterpublic groups who want to participate in networked publics where hegemonic and exclusionary practices are increasingly prevalent.
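The core mechanism described above – one collectively curated blocklist, applied to every subscriber's experience of the site – can be modeled schematically. The class and method names here are invented for illustration; real blockbots apply blocks through Twitter's API on each subscriber's account rather than filtering messages in memory:

```python
# Schematic model of a blockbot: a shared, community-curated blocklist
# that filters what each subscriber sees. Non-subscribers are unaffected.
class Blockbot:
    def __init__(self):
        self.blocklist = set()    # curated collectively by volunteers
        self.subscribers = set()

    def add_to_blocklist(self, account):
        self.blocklist.add(account)

    def subscribe(self, user):
        self.subscribers.add(user)

    def visible_messages(self, user, messages):
        """Return the messages a user sees; subscribers skip blocked senders."""
        if user not in self.subscribers:
            return list(messages)
        return [m for m in messages if m["sender"] not in self.blocklist]
```

The design point the article emphasizes is visible even in this toy: curation happens once, collectively, while its effect is distributed across every subscriber.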
The Lives of Bots
I describe the complex social and technical environment in which bots exist in Wikipedia, emphasizing not only how bots produce order and enforce rules, but also how humans produce bots and negotiate rules around their operation.
Open algorithmic systems: lessons on opening the black box from Wikipedia
This paper reports from a multi-year ethnographic study of automated software agents in Wikipedia, where bots play key roles in moderation and gatekeeping. Automated software agents are playing increasingly important roles in how networked publics are governed and gatekept, with internet researchers increasingly focusing on the politics of algorithms. Wikipedia’s bots stand in stark contrast to the algorithmic systems that have been delegated moderation or managerial work in other platforms. In most platforms, algorithmic systems are developed in-house, where there are few measures for public accountability or auditing, much less the ability for publics to shape the design or operation of such systems. However, Wikipedia’s model presents a compelling alternative, where members of the editing community heavily participate in the design and development of such algorithmic systems.
Operationalizing Conflict & Coordination Between Automated Software Agents (datasets)
Datasets related to the Operationalizing Conflict & Coordination Between Automated Software Agents paper.

For the paper, see: https://commons.wikimedia.org/wiki/File:Operationalizing-conflict-bots-wikipedia-cscw-preprint.pdf

For the code, see: https://github.com/halfak/are-the-bots-really-fighting