
    Maximising Influence in Non-blocking Cascades of Interacting Concepts

    In large populations of autonomous individuals, the propagation of ideas, strategies or infections is determined by the composite effect of interactions between individuals. The propagation of concepts in a population is a form of influence spread and can be modelled as a cascade from a set of initial individuals through the population. Understanding influence spread and information cascades has many applications, from informing epidemic control and viral marketing strategies to understanding the emergence of conventions in multi-agent systems. Existing work on influence spread has mainly considered single concepts, or small numbers of blocking (exclusive) concepts. In this paper we focus on non-blocking cascades, and propose a new model for characterising concept interaction in an independent cascade. Furthermore, we propose two heuristics, Concept Aware Single Discount and Expected Infected, for identifying the individuals that will maximise the spread of a particular concept, and show that in the non-blocking multi-concept setting our heuristics outperform existing methods.
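
    The sketch below is not the paper's multi-concept model: it simulates a plain single-concept independent cascade on a randomly generated graph and picks seeds with a simple single-discount heuristic, to illustrate the kind of cascade and seed-selection procedure the abstract refers to. The graph size, activation probability and seed count are arbitrary choices.

```python
import random

def random_graph(n, p, rng):
    """Undirected Erdos-Renyi graph as an adjacency dict."""
    adj = {i: set() for i in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def independent_cascade(adj, seeds, p, rng):
    """One run of a single-concept independent cascade: each newly activated
    node gets one chance to activate each inactive neighbour with probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def single_discount_seeds(adj, k):
    """Pick k seeds by degree, discounting edges to already chosen seeds."""
    degree = {u: len(vs) for u, vs in adj.items()}
    seeds = []
    for _ in range(k):
        u = max((n for n in degree if n not in seeds), key=degree.get)
        seeds.append(u)
        for v in adj[u]:
            if v not in seeds:
                degree[v] -= 1
    return seeds

rng = random.Random(1)
adj = random_graph(500, 0.02, rng)
seeds = single_discount_seeds(adj, 5)
runs = [len(independent_cascade(adj, seeds, 0.05, rng)) for _ in range(100)]
print("estimated spread of 5 seeds:", sum(runs) / len(runs))
```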

    How to Solve an Allocation Problem?

    Game theory proposes several allocation solutions: we know (a) fairness properties, (b) how to develop (c) methods building on these properties, and (d) how to calculate (e) allocations. We also know how to influence the perceived fairness and realization of allocation solutions. However, we cannot properly explain why theoretically fair allocation methods are rarely used. To obtain more insight into these issues we solved an allocation problem in a purchasing cooperative case study by confronting theory with perceptions. We find large theoretical and perception differences and inconsistencies between and within the five steps from a to e. We note that theoretically fair methods tend to be more complex than theoretically unfair methods. In addition, the allocations of some simple methods are perceived as fairer than the allocations of complex methods in our case study. To improve theoretical solutions the focus should be on a and c. To influence perceptions the focus should be on b, c, and d. Finally, all five steps are modeled into comparable fairness measures and a general model. Using this model implies that both theory and perceptions are considered in solving allocation problems.
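
    As a hedged illustration only (the case study's methods and figures are not reproduced here), the sketch below contrasts a simple proportional split with the more computationally involved Shapley value for a made-up purchasing-cooperative cost game. The 20% joint-purchasing discount and all cost figures are assumptions, chosen only to show why the theoretically fairer method is harder to compute and to explain.

```python
from itertools import permutations
from math import factorial

def proportional_allocation(stand_alone, joint_cost):
    """Simple method: split the joint cost in proportion to stand-alone costs."""
    total = sum(stand_alone.values())
    return {i: joint_cost * c / total for i, c in stand_alone.items()}

def shapley_allocation(players, cost):
    """Theoretically fairer but costlier method: a player's share is its average
    marginal cost over every order in which the cooperative could have formed."""
    shares = {i: 0.0 for i in players}
    n_orders = factorial(len(players))
    for order in permutations(players):
        coalition = frozenset()
        for i in order:
            shares[i] += (cost(coalition | {i}) - cost(coalition)) / n_orders
            coalition = coalition | {i}
    return shares

# Made-up stand-alone purchasing costs for three cooperative members
stand_alone = {"A": 100.0, "B": 60.0, "C": 40.0}

def cost(coalition):
    """Assumed cost game: joint purchasing earns a 20% discount."""
    members = sum(stand_alone[i] for i in coalition)
    return 0.8 * members if len(coalition) > 1 else members

joint = cost(frozenset(stand_alone))
print("proportional:", proportional_allocation(stand_alone, joint))
print("Shapley:     ", shapley_allocation(list(stand_alone), cost))
```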

    Talking Helps: Evolving Communicating Agents for the Predator-Prey Pursuit Problem

    We analyze a general model of multi-agent communication in which all agents communicate simultaneously to a message board. A genetic algorithm is used to evolve multi-agent languages for the predator agents in a version of the predator-prey pursuit problem. We show that the resulting behavior of the communicating multi-agent system is equivalent to that of a Mealy finite state machine whose states are determined by the agents' usage of the evolved language. Simulations show that the evolution of a communication language improves the performance of the predators. Increasing the language size (and thus increasing the number of possible states in the Mealy machine) improves the performance even further. Furthermore, the evolved communicating predators perform significantly better than all previous work on similar prey. We introduce a method for incrementally increasing the language size which results in an effective coarse-to-fine search that significantly reduces the evolution time required to find a solution. We present some observations on the effects of language size, experimental setup, and prey difficulty on the evolved Mealy machines. In particular, we observe that the start state is often revisited, and incrementally increasing the language size results in smaller Mealy machines. Finally, a simple rule is derived that provides a pessimistic estimate on the minimum language size that should be used for any multi-agent problem.
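
    The evolved languages themselves are not given in the abstract, so the following sketch only illustrates the equivalence the paper describes under an assumed encoding: the shared message board (one symbol per predator, language size L) plays the role of the state of a Mealy machine, the sensed prey direction is the input, and the team's joint move is the output. All tables here are randomly initialised placeholders, not an evolved controller.

```python
import random
from itertools import product

# Assumed encoding, not the paper's: the joint message-board contents act as
# the Mealy-machine state; increasing LANGUAGE_SIZE increases the possible states.
LANGUAGE_SIZE = 4
NUM_PREDATORS = 4
PREY_DIRECTIONS = ["N", "S", "E", "W"]
ACTIONS = ["N", "S", "E", "W", "stay"]
STATES = list(product(range(LANGUAGE_SIZE), repeat=NUM_PREDATORS))

class MealyTeam:
    """Behaviour of the communicating team viewed as a single Mealy machine."""
    def __init__(self, transition, output):
        self.transition = transition      # (state, input) -> next state
        self.output = output              # (state, input) -> joint action
        self.state = STATES[0]

    def step(self, prey_direction):
        action = self.output[(self.state, prey_direction)]
        self.state = self.transition[(self.state, prey_direction)]
        return action

# Randomly initialised tables stand in for an evolved language and controller.
rng = random.Random(0)
transition = {(s, d): rng.choice(STATES) for s in STATES for d in PREY_DIRECTIONS}
output = {(s, d): tuple(rng.choice(ACTIONS) for _ in range(NUM_PREDATORS))
          for s in STATES for d in PREY_DIRECTIONS}
team = MealyTeam(transition, output)
print(team.step("N"))                     # joint move of the four predators
```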

    The Development of Social Simulation as Reflected in the First Ten Years of JASSS: a Citation and Co-Citation Analysis

    Social simulation is often described as a multidisciplinary and fast-moving field. This can make it difficult to obtain an overview of the field both for contributing researchers and for outsiders who are interested in social simulation. The Journal of Artificial Societies and Social Simulation (JASSS) completing its tenth year provides a good opportunity to take stock of what happened over this time period. First, we use citation analysis to identify the most influential publications and to verify characteristics of social simulation such as its multidisciplinary nature. Then, we perform a co-citation analysis to visualize the intellectual structure of social simulation and its development. Overall, the analysis shows social simulation both in its early stage and during its first steps towards becoming a more differentiated discipline.
    Keywords: Citation Analysis, Co-Citation Analysis, Lines of Research, Multidisciplinary, Science Studies, Social Simulation
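
    For readers unfamiliar with the method, the sketch below shows the core counting step of a co-citation analysis on made-up reference lists: two works are co-cited whenever they appear together in the same citing article's bibliography, and the resulting pair counts are what gets mapped or clustered. The JASSS citation data itself is not reproduced here.

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(reference_lists):
    """Count how often each pair of cited works appears together in the same
    citing article's reference list - the raw material of a co-citation map."""
    pairs = Counter()
    for refs in reference_lists:
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs

# Made-up reference lists of three citing articles
articles = [
    ["Axelrod 1997", "Epstein & Axtell 1996", "Gilbert & Troitzsch 1999"],
    ["Axelrod 1997", "Epstein & Axtell 1996"],
    ["Epstein & Axtell 1996", "Schelling 1971"],
]
for pair, count in cocitation_counts(articles).most_common(3):
    print(count, pair)
```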

    Destabilising conventions using temporary interventions

    Conventions are an important concept in multi-agent systems as they allow increased coordination amongst agents and hence a more efficient system. Encouraging and directing convention emergence has been the focus of much research, particularly through the use of fixed-strategy agents. In this paper we apply temporary interventions using fixed-strategy agents to destabilise an established convention by (i) replacing it with another convention of our choosing, and (ii) allowing it to destabilise in such a way that no other convention explicitly replaces it. We show that these interventions are effective and investigate the minimum level of intervention needed.
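
    The paper's game and parameter settings are not reproduced here; the sketch below is an assumed toy version in which agents best-respond to a memory of observed actions, convention "A" is initially established, and a temporary window of fixed-strategy "B" agents is used to probe how large the intervention must be before the convention destabilises and flips.

```python
import random

def run(num_fixed, N=100, MEMORY=10, ROUNDS=2000, window=range(500, 1000), seed=0):
    """Toy run: convention 'A' is established, then num_fixed agents play 'B'
    during a temporary window; returns the share playing 'B' at the end."""
    rng = random.Random(seed)
    memories = [["A"] * MEMORY for _ in range(N)]          # everyone remembers "A"
    fixed = set(rng.sample(range(N), num_fixed))           # the intervention agents

    def choose(agent, t):
        if agent in fixed and t in window:
            return "B"                                      # temporary fixed strategy
        mem = memories[agent]
        return "A" if mem.count("A") >= mem.count("B") else "B"

    for t in range(ROUNDS):
        for i in range(N):
            j = rng.randrange(N)                            # observe a random partner
            if j != i:
                memories[i] = memories[i][1:] + [choose(j, t)]
    return sum(choose(i, ROUNDS) == "B" for i in range(N)) / N

for k in (10, 30, 50, 70):
    print(f"{k} temporary fixed-strategy agents -> final share on B: {run(k):.2f}")
```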

    Learning Existing Social Conventions via Observationally Augmented Self-Play

    In order for artificial agents to coordinate effectively with people, they must act consistently with existing conventions (e.g. how to navigate in traffic, which language to speak, or how to coordinate with teammates). A group's conventions can be viewed as a choice of equilibrium in a coordination game. We consider the problem of an agent learning a policy for a coordination game in a simulated environment and then using this policy when it enters an existing group. When there are multiple possible conventions we show that learning a policy via multi-agent reinforcement learning (MARL) is likely to find policies which achieve high payoffs at training time but fail to coordinate with the real group into which the agent enters. We assume access to a small number of samples of behavior from the true convention and show that we can augment the MARL objective to help it find policies consistent with the real group's convention. In three environments from the literature - traffic, communication, and team coordination - we observe that augmenting MARL with a small amount of imitation learning greatly increases the probability that the strategy found by MARL fits well with the existing social convention. We show that this works even in an environment where standard training methods very rarely find the true convention of the agent's partners.
    Comment: Published in AAAI-AIES2019 - Best Paper
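
    The exact objective and environments are not given in the abstract; the sketch below shows one plausible form of the augmentation in a policy-gradient setting: a REINFORCE-style loss on self-play rollouts plus a behavioural-cloning term on a small batch of behaviour observed from the real group. The function name, the weighting, and all tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def augmented_loss(policy_logits, actions, returns,
                   obs_logits, obs_actions, imitation_weight=1.0):
    """Hypothetical combined objective: a policy-gradient loss on self-play
    rollouts plus a cross-entropy imitation loss on observed convention data."""
    log_probs = F.log_softmax(policy_logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    rl_loss = -(chosen * returns).mean()
    imitation_loss = F.cross_entropy(obs_logits, obs_actions)
    return rl_loss + imitation_weight * imitation_loss

# Toy shapes: 32 self-play steps, 8 observed steps, 4 discrete actions
policy_logits = torch.randn(32, 4, requires_grad=True)
actions = torch.randint(0, 4, (32,))
returns = torch.randn(32)
obs_logits = torch.randn(8, 4, requires_grad=True)
obs_actions = torch.randint(0, 4, (8,))

loss = augmented_loss(policy_logits, actions, returns, obs_logits, obs_actions)
loss.backward()
print("combined loss:", float(loss))
```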

    Obligation Norm Identification in Agent Societies

    Most works on norms have investigated how norms are regulated using institutional mechanisms. Very few works have focused on how an agent may infer the norms of a society without the norm being explicitly given to the agent. This paper describes a mechanism for identifying one type of norm, an obligation norm. The Obligation Norm Inference (ONI) algorithm described in this paper makes use of an association rule mining approach to identify obligation norms. Using agent-based simulation of a virtual restaurant we demonstrate how an agent can identify the tipping norm. The experiments that we have conducted demonstrate that an agent in the system is able to add, remove and modify norms dynamically. An agent can also flexibly modify the parameters of the system based on whether it is successful in identifying a norm.
    Keywords: Norms, Social Norms, Obligations, Norm Identification, Agent-Based Simulation, Simulation of Norms, Artificial Societies, Normative Multi-Agent Systems (NorMAS)
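
    The ONI algorithm's event representation is not reproduced here; the sketch below only captures the flavour of sanction-based, rule-confidence norm inference on made-up restaurant episodes: an action becomes a candidate obligation when the episodes that omit it tend to end in a sanction. The episode encoding and confidence threshold are assumptions.

```python
def infer_obligations(episodes, min_confidence=0.8):
    """Toy, sanction-based rule mining: an action is a candidate obligation if
    episodes that omit it tend to end in a sanction while others do not."""
    candidate_actions = {a for ep in episodes for a in ep if a != "sanction"}
    obligations = []
    for action in candidate_actions:
        missing = [ep for ep in episodes if action not in ep]
        if not missing:
            continue                      # never omitted, nothing to learn from
        sanctioned = sum("sanction" in ep for ep in missing)
        confidence = sanctioned / len(missing)
        if confidence >= min_confidence:
            obligations.append((action, confidence))
    return obligations

# Made-up restaurant episodes: a sanction follows whenever no tip is left
episodes = [
    ["enter", "order", "eat", "pay", "tip", "leave"],
    ["enter", "order", "eat", "pay", "tip", "leave"],
    ["enter", "order", "eat", "pay", "leave", "sanction"],
    ["enter", "order", "eat", "pay", "leave", "sanction"],
    ["enter", "order", "eat", "pay", "tip", "leave"],
]
for action, confidence in infer_obligations(episodes):
    print(f"candidate obligation: '{action}' (confidence {confidence:.2f})")
```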

    Establishing norms with metanorms in distributed computational systems

    Norms provide a valuable mechanism for establishing coherent cooperative behaviour in decentralised systems in which there is no central authority. One of the most influential formulations of norm emergence was proposed by Axelrod (Am Political Sci Rev 80(4):1095–1111, 1986). This paper provides an empirical analysis of aspects of Axelrod's approach, by exploring some of the key assumptions made in previous evaluations of the model. We explore the dynamics of norm emergence and the occurrence of norm collapse when applying the model over extended durations. It is this phenomenon of norm collapse that can motivate the emergence of a central authority to enforce laws and so preserve the norms, rather than relying on individuals to punish defection. Our findings identify characteristics that significantly influence norm establishment using Axelrod's formulation, but are likely to be of importance for norm establishment more generally. Moreover, Axelrod's model suffers from significant limitations in assuming that private strategies of individuals are available to others, and that agents are omniscient in being aware of all norm violations and punishments. Because this is an unreasonable expectation, the approach does not lend itself to modelling real-world systems such as online networks or electronic markets. In response, the paper proposes alternatives to Axelrod's model, by replacing the evolutionary approach, enabling agents to learn, and by restricting the metapunishment of agents to cases where the original defection is observed, in order to be able to apply the model to real-world domains. This work can also help explain the formation of a "social contract" to legitimate enforcement by a central authority.
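
    As a rough sketch of the mechanics being analysed, the following simulates one generation of an Axelrod-style norms game with metapunishment: bold agents defect when unlikely to be seen, vengeful observers punish defectors, and observers who spare a defector can themselves be metapunished. The payoff values follow commonly cited settings for Axelrod's game but should be treated as assumptions, and the paper's learning-based replacement for the evolutionary step is not reproduced here.

```python
import random
rng = random.Random(1)

# Assumed payoffs: T temptation, H hurt to each other agent,
# P cost of being punished, E cost of punishing
T, H, P, E = 3, -1, -9, -2
N, OPPORTUNITIES = 20, 4

agents = [{"b": rng.random(), "v": rng.random(), "score": 0.0} for _ in range(N)]

for i in agents:
    for _ in range(OPPORTUNITIES):
        seen = rng.random()                       # chance each other agent observes
        if i["b"] <= seen:
            continue                              # too risky: no defection
        i["score"] += T
        for j in agents:
            if j is i:
                continue
            j["score"] += H
            if rng.random() >= seen:
                continue                          # j did not observe the defection
            if rng.random() < j["v"]:
                i["score"] += P                   # norm: j punishes the defector
                j["score"] += E
            else:
                # metanorm: agents who see j spare the defector may punish j
                for k in agents:
                    if k is i or k is j:
                        continue
                    if rng.random() < seen and rng.random() < k["v"]:
                        j["score"] += P
                        k["score"] += E

best = max(agents, key=lambda a: a["score"])
print("top scorer: boldness %.2f, vengefulness %.2f" % (best["b"], best["v"]))
```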