Adversarial dynamics in centralized versus decentralized intelligent systems

Abstract

This article is part of the topic "Building the Socio-Cognitive Architecture of COHUMAIN: Collective Human-Machine Intelligence," Cleotilde Gonzalez, Henny Admoni, Scott Brown and Anita Williams Woolley (Topic Editors).

Artificial intelligence (AI) is often used to predict human behavior, potentially limiting individuals' and collectives' freedom to act. AI's most controversial and contested applications range from targeted advertisements to crime prevention, including the suppression of civil disorder. Scholars and civil society watchdogs are discussing the oppressive dangers of AI being used by centralized institutions, such as governments or private corporations. Some suggest that AI gives governments asymmetrical power over their citizens. Civil protests, on the other hand, often rely on distributed networks of activists without centralized leadership or planning. This creates an adversarial tension between centralized and decentralized intelligence, raising the question of how distributed human networks can collectively adapt to and outperform a hostile centralized AI that tries to anticipate and control their activities. This paper leverages multi-agent reinforcement learning to simulate dynamics within a human-machine hybrid society. We ask how decentralized intelligent agents can collectively adapt when competing with a centralized predictive algorithm, where prediction serves to suppress coordination. In particular, we investigate an adversarial game between a collective of individual learners and a central predictive algorithm, each trained through deep Q-learning. We compare different predictive architectures and showcase conditions under which the adversarial nature of this dynamic pushes each intelligence to increase its behavioral complexity in order to outperform its counterpart. We further show that a shared predictive algorithm drives decentralized agents to align their behavior. This work sheds light on the totalitarian danger posed by AI and provides evidence that decentrally organized humans can overcome its risks by developing increasingly complex coordination strategies.

Manuel Cebrian was partially supported by the Ministry of Universities of the Government of Spain, under the program "Convocatoria de Ayudas para la recualificación del sistema universitario español para 2021-2023, de la Universidad Carlos III de Madrid, de 1 de Julio de 2021." Open access funding enabled and organized by Projekt DEAL.
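As a rough illustration of the setup the abstract describes, the sketch below pits a handful of decentralized deep Q-learners, rewarded for coordinating on a common action, against a centralized deep Q-learning predictor rewarded for anticipating (and thereby suppressing) that coordination. The toy environment, reward scheme, network sizes, and hyperparameters are illustrative assumptions, not the paper's actual architectures or training procedure.

```python
# Hypothetical toy instantiation of the adversarial game: N decentralized
# deep Q-learners versus one centralized deep Q-learning "predictor".
import torch
import torch.nn as nn

N_AGENTS, N_ACTIONS, EPISODES, EPS = 8, 4, 2000, 0.1  # assumed values

def make_qnet(in_dim, out_dim):
    # Small MLP mapping an observation to one Q-value per action.
    return nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, out_dim))

# Each decentralized agent observes the previous joint action profile.
agents = [make_qnet(N_AGENTS, N_ACTIONS) for _ in range(N_AGENTS)]
# The centralized predictor observes the same profile and "acts" by
# predicting which action the collective will coordinate on next.
predictor = make_qnet(N_AGENTS, N_ACTIONS)

opts = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in agents]
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def eps_greedy(qnet, obs, n_actions):
    # Standard epsilon-greedy action selection over the network's Q-values.
    if torch.rand(1).item() < EPS:
        return torch.randint(n_actions, (1,)).item()
    with torch.no_grad():
        return int(qnet(obs).argmax())

prev = torch.zeros(N_AGENTS)  # previous joint action profile (observation)

for _ in range(EPISODES):
    # 1. Decentralized agents choose actions; the predictor guesses their choice.
    actions = [eps_greedy(a, prev, N_ACTIONS) for a in agents]
    guess = eps_greedy(predictor, prev, N_ACTIONS)

    # 2. Rewards: agents gain from coordinating, but coordination that the
    #    predictor anticipates is suppressed; the predictor is rewarded
    #    for correct anticipation.
    majority = max(set(actions), key=actions.count)
    coordinated = actions.count(majority) / N_AGENTS
    suppressed = guess == majority
    agent_reward = 0.0 if suppressed else coordinated
    pred_reward = 1.0 if suppressed else 0.0

    # 3. One-step (bandit-style) Q-value updates for both sides.
    for agent, opt, act in zip(agents, opts, actions):
        loss = loss_fn(agent(prev)[act], torch.tensor(agent_reward))
        opt.zero_grad()
        loss.backward()
        opt.step()

    loss = loss_fn(predictor(prev)[guess], torch.tensor(pred_reward))
    opt_pred.zero_grad()
    loss.backward()
    opt_pred.step()

    prev = torch.tensor(actions, dtype=torch.float32)
```

In this toy game, the agents' payoff depends on shifting their joint action faster than the predictor's Q-values can track it, a crude stand-in for the escalation of behavioral complexity the abstract refers to.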

Full text

e-Archivo (Universidad Carlos III de Madrid institutional repository)

Licence: open access