
    Even good bots fight: the case of Wikipedia

    In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents interact with each other is rather poor. Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality, and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. In this article, we analyze the interactions between bots that edit articles on Wikipedia. We track the extent to which bots undid each other’s edits over the period 2001–2010, model how pairs of bots interact over time, and identify different types of interaction trajectories. We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other’s edits, and these sterile “fights” may sometimes continue for years. Unlike humans on Wikipedia, bots’ interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots in different cultural environments may behave differently. Our research suggests that even relatively “dumb” bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well-functioning autonomous vehicles.
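    The central quantity in this analysis, how often one bot undoes another bot's edit, can be approximated from a chronological edit log. The Python sketch below is only a minimal illustration of that idea, not the authors' actual pipeline: the edit records, the content-hash representation, and the rule that a revert restores the version that existed just before the previous edit are all assumptions made for the example.

    ```python
    from collections import Counter, defaultdict

    # Hypothetical chronological edit log: (article, bot, content_hash).
    edits = [
        ("Article_A", "BotX", "h1"),
        ("Article_A", "BotY", "h2"),
        ("Article_A", "BotX", "h1"),   # BotX undoes BotY's change
        ("Article_A", "BotY", "h2"),   # BotY undoes BotX in return
    ]

    history = defaultdict(list)   # article -> [(bot, content_hash), ...] in order
    reverts = Counter()           # (reverter, reverted) -> number of reverts

    for article, bot, content_hash in edits:
        past = history[article]
        # Treat an edit as a revert when it restores the version that existed
        # just before the previous edit, i.e. it undoes that previous edit.
        if len(past) >= 2 and past[-2][1] == content_hash:
            reverted = past[-1][0]
            if reverted != bot:   # ignore self-reverts
                reverts[(bot, reverted)] += 1
        past.append((bot, content_hash))

    # Reciprocity: number of bot pairs that have reverted each other at least once.
    mutual_pairs = sum(1 for (a, b) in reverts if (b, a) in reverts) // 2
    print(dict(reverts), "mutual pairs:", mutual_pairs)
    ```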

    microPhantom: Playing microRTS under uncertainty and chaos

    This competition paper presents microPhantom, a bot that plays microRTS and participated in the 2020 microRTS AI competition. microPhantom is based on our previous bot POAdaptive, which won the partially observable track of the 2018 and 2019 microRTS AI competitions. In this paper, we focus on decision-making under uncertainty, tackling the Unit Production Problem with a method based on a combination of Constraint Programming and decision theory. We show that using our method to decide which units to train significantly improves the win rate against the second-best microRTS bot from the partially observable track. We also show that our method is resilient in chaotic environments, with only a very small loss of efficiency. To allow replicability and to facilitate further research, the source code of microPhantom is available, as well as the Constraint Programming toolkit it uses.
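    As a rough illustration of decision-making under uncertainty for unit production, the sketch below picks the unit mix that maximizes expected counter value against a belief over hidden enemy compositions. The unit types, costs, counter values, and belief probabilities are invented for the example, and the brute-force search merely stands in for the Constraint Programming formulation the paper describes; it is not microPhantom's actual model.

    ```python
    from itertools import product

    UNIT_COST = {"worker": 1, "light": 2, "ranged": 2, "heavy": 3}

    # counter[mine][theirs]: rough payoff of one of my units against one of theirs
    # (illustrative numbers only).
    COUNTER = {
        "light":  {"worker": 2, "light": 1, "ranged": 2, "heavy": 0},
        "ranged": {"worker": 1, "light": 1, "ranged": 1, "heavy": 2},
        "heavy":  {"worker": 1, "light": 2, "ranged": 0, "heavy": 1},
    }

    # Belief over what the unobserved enemy is producing: (probability, composition).
    belief = [
        (0.5, {"light": 3, "worker": 2}),
        (0.3, {"ranged": 2, "worker": 3}),
        (0.2, {"heavy": 1, "worker": 2}),
    ]

    def expected_utility(plan):
        """Expected counter value of a production plan against the belief."""
        total = 0.0
        for prob, enemy in belief:
            value = sum(n * COUNTER[u][e] * m
                        for u, n in plan.items()
                        for e, m in enemy.items())
            total += prob * value
        return total

    def best_plan(resources, max_per_type=4):
        """Brute-force the plan with highest expected utility within the budget."""
        types = ["light", "ranged", "heavy"]
        best, best_value = None, float("-inf")
        for counts in product(range(max_per_type + 1), repeat=len(types)):
            plan = dict(zip(types, counts))
            cost = sum(UNIT_COST[u] * n for u, n in plan.items())
            if cost <= resources:
                value = expected_utility(plan)
                if value > best_value:
                    best, best_value = plan, value
        return best, best_value

    print(best_plan(resources=8))
    ```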

    Automated state of play: rethinking anthropocentric rules of the game

    Automation of play has become an ever more noticeable phenomenon in the domain of video games, expressed by self-playing game worlds, self-acting characters, and non-human agents traversing multiplayer spaces. This article proposes to look at AI-driven non-human play and, consequently, to rethink digital games in light of their cybernetic nature, departing from the anthropocentric perspectives dominating the field of Game Studies. A decentralised post-humanist reading, the author argues, not only allows us to rethink digital games and play, but is a necessary condition for critically reflecting on AI, which, due to the fictional character of video games, often plays by very different rules than so-called “true” AI.

    Reverse Engineering Socialbot Infiltration Strategies in Twitter

    Data extracted from social networks like Twitter are increasingly being used to build applications and services that mine and summarize public reactions to events, such as traffic monitoring platforms, identification of epidemic outbreaks, and public perception about people and brands. However, such services are vulnerable to attacks from socialbots - automated accounts that mimic real users - seeking to tamper with statistics by posting automatically generated messages and interacting with legitimate users. If created at large scale, socialbots could potentially be used to bias or even invalidate many existing services by infiltrating social networks and acquiring the trust of other users over time. This study aims at understanding the infiltration strategies of socialbots on the Twitter microblogging platform. To this end, we create 120 socialbot accounts with different characteristics and strategies (e.g., the gender specified in the profile, how active they are, the method used to generate their tweets, and the group of users they interact with), and investigate the extent to which these bots are able to infiltrate the Twitter social network. Our results show that even socialbots employing simple automated mechanisms are able to successfully infiltrate the network. Additionally, using a 2^k factorial design, we quantify the infiltration effectiveness of different bot strategies. Our analysis unveils findings that are key for the design of detection and countermeasure approaches.
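    The 2^k factorial analysis mentioned above estimates how much each binary bot characteristic changes an infiltration metric. The sketch below shows the standard main-effect computation on placeholder data; the factor names echo the characteristics listed in the abstract, but the response values are synthetic and the metric (followers gained) is assumed, not taken from the study.

    ```python
    import itertools
    import random

    factors = ["female_profile", "high_activity", "reposted_tweets"]  # k = 3
    random.seed(0)

    # One hypothetical infiltration score per factor combination
    # (e.g. followers gained); purely synthetic numbers for illustration.
    runs = []
    for levels in itertools.product([-1, +1], repeat=len(factors)):
        score = 10 + 3 * levels[1] + 1.5 * levels[2] + random.gauss(0, 1)
        runs.append((dict(zip(factors, levels)), score))

    def main_effect(factor):
        """Average response at the high level minus average at the low level."""
        high = [s for cfg, s in runs if cfg[factor] == +1]
        low = [s for cfg, s in runs if cfg[factor] == -1]
        return sum(high) / len(high) - sum(low) / len(low)

    for f in factors:
        print(f, round(main_effect(f), 2))
    ```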

    Automation of play: theorizing self-playing games and post-human ludic agents

    This article offers a critical reflection on the automation of play and its significance for theoretical inquiries into digital games and play. Automation has become an ever more noticeable phenomenon in the domain of video games, expressed by self-playing game worlds, self-acting characters, and non-human agents traversing multiplayer spaces. On the following pages, the author explores various instances of automated non-human play and proposes a post-human theoretical lens, which may help to create a new framework for the understanding of video games, renegotiate the current theories of interaction prevalent in game studies, and rethink the relationship between human players and digital games.

    Do Social Bots Dream of Electric Sheep? A Categorisation of Social Media Bot Accounts

    So-called 'social bots' have garnered a lot of attention lately. Previous research showed that they attempted to influence political events such as the Brexit referendum and the US presidential elections. It remains, however, somewhat unclear what exactly can be understood by the term 'social bot'. This paper addresses the need to better understand the intentions of bots on social media and to develop a shared understanding of how 'social' bots differ from other types of bots. We thus describe a systematic review of publications that researched bot accounts on social media. Based on the results of this literature review, we propose a scheme for categorising bot accounts on social media sites. Our scheme groups bot accounts along two dimensions - imitation of human behaviour and intent.
    Comment: Accepted for publication in the Proceedings of the Australasian Conference on Information Systems, 201
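    The proposed scheme can be pictured as a two-dimensional label attached to each account. The sketch below encodes those two dimensions as simple enumerations; the level names are placeholders chosen for the example, since the paper's exact categories are not reproduced in this abstract.

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class Imitation(Enum):
        LOW = "little or no imitation of human behaviour"
        HIGH = "strong imitation of human behaviour"

    class Intent(Enum):
        BENIGN = "benign"
        NEUTRAL = "neutral"
        MALICIOUS = "malicious"

    @dataclass
    class BotAccount:
        handle: str
        imitation: Imitation
        intent: Intent

    # Example: a spam bot that poses as a human user.
    spambot = BotAccount("@example_bot", Imitation.HIGH, Intent.MALICIOUS)
    print(spambot)
    ```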