5,709 research outputs found

    Effects of Diversity on Multi-agent Systems: Minority Games

    We consider a version of large-population games whose agents compete for resources using strategies with adaptable preferences. The games can be used to model economic markets, ecosystems, or distributed control. Diversity of initial strategy preferences is introduced by randomly assigning biases to the strategies of different agents. We find that diversity among the agents reduces their maladaptive behavior, and we obtain scaling relations with diversity for the variance and other quantities such as the convergence time, the fraction of fickle agents, and the variance of wealth, illustrating their dynamical origin. When diversity increases, the scaling dynamics is modified by kinetic sampling and waiting effects. Analyses yield excellent agreement with simulations. Comment: 41 pages, 16 figures; minor improvements in content, added references; to be published in Physical Review
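    The model this abstract describes can be sketched in a few lines. Below is a minimal Minority Game simulation, assuming the standard conventions (an odd number of agents, binary choices, random lookup-table strategies over the last `memory` outcomes, and virtual scoring); the `bias_scale` parameter is an illustrative stand-in for the random initial preferences ("diversity") the paper studies, and all function and parameter names are this sketch's own, not the paper's.

```python
import random

def minority_game(n_agents=101, memory=3, n_strategies=2,
                  bias_scale=0.0, steps=2000, seed=0):
    """Minimal Minority Game sketch: an odd number of agents repeatedly
    choose side 0 or 1, and the minority side wins. Each agent holds
    n_strategies random lookup tables over the last `memory` outcomes
    and plays its currently best-scoring one. bias_scale > 0 adds random
    initial biases to the strategy scores, a stand-in for the paper's
    notion of diversity. Returns the variance of attendance, the usual
    measure of maladaptive crowding."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    # Each strategy is a random map: history index -> side {0, 1}.
    strategies = [[[rng.randrange(2) for _ in range(n_hist)]
                   for _ in range(n_strategies)] for _ in range(n_agents)]
    # Diversity: random initial biases on the strategies' virtual scores.
    scores = [[bias_scale * rng.gauss(0, 1) for _ in range(n_strategies)]
              for _ in range(n_agents)]
    history = rng.randrange(n_hist)
    attendance = []
    for _ in range(steps):
        choices = []
        for a in range(n_agents):
            best = max(range(n_strategies), key=lambda s: scores[a][s])
            choices.append(strategies[a][best][history])
        n_one = sum(choices)
        minority = 1 if n_one < n_agents - n_one else 0
        attendance.append(n_one)
        # Reward every strategy that would have picked the minority side.
        for a in range(n_agents):
            for s in range(n_strategies):
                if strategies[a][s][history] == minority:
                    scores[a][s] += 1
        # Slide the winning side into the history window.
        history = ((history << 1) | minority) % n_hist
    mean = sum(attendance) / len(attendance)
    return sum((x - mean) ** 2 for x in attendance) / len(attendance)
```

    Comparing the returned variance at `bias_scale=0` against a diverse population (`bias_scale > 0`, averaged over seeds) illustrates the kind of variance reduction the abstract reports, though exact values depend on the parameters.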

    Strategies for minority game and resource allocation.

    She, Yingni. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 74-78). Abstracts in English and Chinese.

    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Scope --- p.2
    Chapter 1.2 --- Motivation --- p.5
    Chapter 1.3 --- Structure of the Thesis --- p.6
    Chapter 2 --- Literature Review --- p.7
    Chapter 2.1 --- Intelligent Agents and Multiagent Systems --- p.8
    Chapter 2.1.1 --- Intelligent Agents --- p.8
    Chapter 2.1.2 --- Multiagent Systems --- p.10
    Chapter 2.2 --- Minority Game --- p.13
    Chapter 2.2.1 --- Minority Game --- p.13
    Chapter 2.2.2 --- Characteristics of Minority Game --- p.14
    Chapter 2.2.3 --- Strategies for Agents in Minority Game --- p.18
    Chapter 2.3 --- Resource Allocation --- p.22
    Chapter 2.3.1 --- Strategies for Agents in Multiagent Resource Allocation --- p.23
    Chapter 3 --- Individual Agent's Wealth in Minority Game --- p.26
    Chapter 3.1 --- The Model --- p.26
    Chapter 3.2 --- Motivation --- p.27
    Chapter 3.3 --- Inefficiency Information --- p.28
    Chapter 3.4 --- An Intelligent Strategy --- p.31
    Chapter 3.5 --- Experiment Analysis --- p.32
    Chapter 3.6 --- Discussions and Analysis --- p.35
    Chapter 3.6.1 --- Equivalence to the Experience method --- p.36
    Chapter 3.6.2 --- Impact of M' and S' --- p.38
    Chapter 3.6.3 --- Impact of M and S --- p.41
    Chapter 3.6.4 --- Impact of Larger Number of Privileged Agents --- p.48
    Chapter 3.6.5 --- Comparisons with Related Work --- p.48
    Chapter 4 --- An Adaptive Strategy for Resource Allocation --- p.53
    Chapter 4.1 --- Problem Specification --- p.53
    Chapter 4.2 --- An Adaptive Strategy --- p.55
    Chapter 4.3 --- Remarks of the Adaptive Strategy --- p.57
    Chapter 4.4 --- Experiment Analysis --- p.58
    Chapter 4.4.1 --- Simulations --- p.58
    Chapter 4.4.2 --- Comparisons with Related Work --- p.62
    Chapter 5 --- Conclusions and Future Work --- p.69
    Chapter 5.1 --- Conclusions --- p.69
    Chapter 5.2 --- Future Work --- p.71
    A List of Publications --- p.73
    Bibliography --- p.7

    Collusion in Peer-to-Peer Systems

    Peer-to-peer systems have reached widespread use, ranging from academic and industrial applications to home entertainment. The key advantage of this paradigm lies in its scalability and flexibility, consequences of the participants sharing their resources for the common welfare. Security in such systems is a desirable goal. For example, when mission-critical operations or bank transactions are involved, their effectiveness strongly depends on the perception that users have of the system's dependability and trustworthiness. A major threat to the security of these systems is the phenomenon of collusion. Peers can be selfish colluders, when they try to fool the system to gain unfair advantages over other peers, or malicious, when their purpose is to subvert the system or disturb other users. The problem, however, has so far received only marginal attention from the research community. While several solutions exist to counter attacks in peer-to-peer systems, very few of them are meant to directly counter colluders and their attacks. Reputation, micro-payments, and concepts of game theory are currently the main means used to obtain fairness in the usage of resources. Our goal is to provide an overview of the topic by examining the key issues involved. We measure the relevance of the problem in the current literature and the effectiveness of existing approaches against it, to suggest fruitful directions for the further development of the field.

    Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence

    An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart's or Campbell's law. This paper presents additional failure modes for interactions within multi-agent systems that are closely related. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how extant literature on multi-agent AI fails to address these failure modes, and identifies work which may be useful for their mitigation. Comment: 12 pages; this version re-submitted to Big Data and Cognitive Computing, Special Issue "Artificial Superintelligence: Coordination & Strategy"