217 research outputs found

    A Spatial Agent-Based Model of N-Person Prisoner's Dilemma Cooperation in a Socio-Geographic Community

    The purpose of this paper is to present a spatial agent-based model of the N-person prisoner's dilemma that is designed to simulate the collective communication and cooperation within a socio-geographic community. Based on a tight coupling of REPAST and a vector Geographic Information System, the model simulates the emergence of cooperation from the mobility behaviors and interaction strategies of citizen agents. To approximate human behavior, the agents are set as stochastic learning automata with Pavlovian personalities and attitudes. A review of the theory of the standard prisoner's dilemma, the iterated prisoner's dilemma, and the N-person prisoner's dilemma is given, as well as an overview of the generic architecture of the agent-based model. The capabilities of the spatial N-person prisoner's dilemma component are demonstrated with several scenario simulation runs for varied initial cooperation percentages and mobility dynamics. Experimental results revealed that agent mobility and context preservation bring qualitatively different effects to the evolution of cooperative behavior in the analyzed spatial environment. Keywords: Agent-Based Modeling, Cooperation, Prisoner's Dilemma, Spatial Interaction Model, Spatially Structured Social Dilemma, Geographic Information Systems
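    The abstract does not give the exact update rule of its Pavlovian stochastic learning automata, so the sketch below only illustrates the general idea: each agent carries a cooperation probability that is reinforced when the last payoff met an aspiration level and weakened otherwise. The class names, the linear reward-penalty update, and the public-goods style payoff are illustrative assumptions, not the paper's implementation.

```python
import random

class PavlovianAgent:
    """Stochastic learning automaton: the probability of cooperating is nudged up
    when the last payoff met the agent's aspiration level and nudged down otherwise
    (a linear reward-penalty scheme)."""

    def __init__(self, p_cooperate=0.5, aspiration=1.0, learning_rate=0.1):
        self.p = p_cooperate          # current probability of cooperating
        self.aspiration = aspiration  # payoff level the agent is "satisfied" with
        self.lr = learning_rate

    def choose(self):
        self.last_action = 'C' if random.random() < self.p else 'D'
        return self.last_action

    def update(self, payoff):
        satisfied = payoff >= self.aspiration
        # Reinforce the action just taken if it satisfied the agent, weaken it otherwise.
        if (self.last_action == 'C') == satisfied:
            self.p += self.lr * (1.0 - self.p)
        else:
            self.p -= self.lr * self.p


def n_person_pd_payoff(action, n_cooperators, group_size, b=5.0, c=3.0):
    """Public-goods style N-person PD payoff: cooperators pay cost c,
    everyone shares the benefit b per cooperator."""
    share = b * n_cooperators / group_size
    return share - c if action == 'C' else share
```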

    Analyzing Social Network Structures in the Iterated Prisoner's Dilemma with Choice and Refusal

    The Iterated Prisoner's Dilemma with Choice and Refusal (IPD/CR) is an extension of the Iterated Prisoner's Dilemma with evolution that allows players to choose and to refuse their game partners. From individual behaviors, behavioral population structures emerge. In this report, we examine one particular IPD/CR environment and document the social network methods used to identify population behaviors found within this complex adaptive system. In contrast to the standard homogeneous population of nice cooperators, we have also found metastable populations of mixed strategies within this environment. In particular, the social networks of interesting populations and their evolution are examined. Comment: 37 pages, uuencoded gzip'd PostScript (1.1 MB when gunzip'd); also available via WWW at http://www.cs.wisc.edu/~smucker/ipd-cr/ipd-cr.htm
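    For readers unfamiliar with the choice-and-refusal mechanism, the sketch below shows one plausible way to layer it on top of an iterated Prisoner's Dilemma: each player keeps a running expected-payoff estimate for every potential partner, offers games to the most promising partners, and refuses those whose estimate falls below a tolerance threshold. The parameter names and the smoothing update are assumptions for illustration, not code from the IPD/CR study.

```python
class ChoosyPlayer:
    """Choice-and-refusal layer for an iterated game (illustrative sketch)."""

    def __init__(self, name, tolerance=0.0, memory_weight=0.7, initial_estimate=3.0):
        self.name = name
        self.tolerance = tolerance       # refuse partners expected to pay less than this
        self.w = memory_weight           # weight on past experience vs. newest payoff
        self.initial = initial_estimate  # optimistic prior encourages trying new partners
        self.estimates = {}              # partner name -> expected payoff

    def expected(self, partner):
        return self.estimates.get(partner, self.initial)

    def accepts(self, partner):
        # Refusal rule: only play partners whose expected payoff clears the tolerance.
        return self.expected(partner) >= self.tolerance

    def preferred_partners(self, candidates, k=1):
        # Choice rule: offer games to the k most promising acceptable partners.
        acceptable = [c for c in candidates if self.accepts(c)]
        return sorted(acceptable, key=self.expected, reverse=True)[:k]

    def record_payoff(self, partner, payoff):
        # Exponentially smoothed update of the partner's expected payoff.
        self.estimates[partner] = self.w * self.expected(partner) + (1 - self.w) * payoff
```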

    Evolutionary Game Theory

    Article

    Simulation Models of the Evolution of Cooperation as Proofs of Logical Possibilities. How Useful Are They?

    This paper critically discusses what simulation models of the evolution of cooperation can possibly prove, examining Axelrod’s “Evolution of Cooperation” (1984) and the modeling tradition it has inspired. Hardly any of the many simulation models in this tradition have been empirically applicable. Axelrod’s role model suggested a research design that seemingly allowed general conclusions to be drawn from simulation models even if the mechanisms that drive the simulation could not be identified empirically. But this research design was fundamentally flawed. At best such simulations can claim to prove logical possibilities, i.e. they prove that certain phenomena are possible as a consequence of the modeling assumptions built into the simulation, but not that they are possible or can be expected to occur in reality. I suggest several requirements under which proofs of logical possibility can nevertheless be considered useful. Sadly, most Axelrod-style simulations do not meet these requirements. It would be better not to use this kind of simulation at all.

    Agent-Based Models of Industrial Clusters and Districts

    Agent-based models, an instance of the wider class of connectionist models, allow bottom-up simulations of organizations constituted by a large number of interacting parts. Thus, geographical clusters of competing or collaborating firms constitute an obvious field of application. This contribution explains what agent-based models are, reviews applications in the field of industrial clusters, and focuses on a simulator of intra- and inter-firm communications. Keywords: Agent-based models, industrial clusters, industrial districts

    Learning and innovative elements of strategy adoption rules expand cooperative network topologies

    Cooperation plays a key role in the evolution of complex systems. However, the level of cooperation varies extensively with the topology of agent networks in the widely used models of repeated games. Here we show that cooperation remains rather stable when applying the reinforcement-learning strategy adoption rule Q-learning on a variety of random, regular, small-world, scale-free and modular network models in repeated, multi-agent Prisoner's Dilemma and Hawk-Dove games. Furthermore, we found that in the above model systems other long-term learning strategy adoption rules also promote cooperation, while introducing a low level of noise (as a model of innovation) to the strategy adoption rules makes the level of cooperation less dependent on the actual network topology. Our results demonstrate that long-term learning and random elements in the strategy adoption rules, when acting together, extend the range of network topologies enabling the development of cooperation at a wider range of costs and temptations. These results suggest that a balanced duo of learning and innovation may help to preserve cooperation during the re-organization of real-world networks, and may play a prominent role in the evolution of self-organizing, complex systems. Comment: 14 pages, 3 figures, plus Supplementary Material with 25 pages, 3 tables, 12 figures and 116 references
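    As a rough illustration of Q-learning used as a strategy adoption rule on a network, the sketch below gives each agent a stateless Q-learner over the two actions and uses epsilon-greedy exploration as a stand-in for the paper's innovation noise. Payoff values, parameter names, and the synchronous round structure are illustrative assumptions.

```python
import random

ACTIONS = ('C', 'D')
# Standard PD payoffs to the row player: (own action, opponent action) -> payoff.
PD_PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

class QAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.05):
        self.q = {a: 0.0 for a in ACTIONS}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self):
        if random.random() < self.epsilon:            # exploration noise ("innovation")
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[a])  # exploit learned values

    def learn(self, action, reward):
        # Standard Q-learning update with a single (trivial) state.
        target = reward + self.gamma * max(self.q.values())
        self.q[action] += self.alpha * (target - self.q[action])

def play_round(agents, edges):
    """One synchronous round: every linked pair plays the PD once and both learn."""
    for i, j in edges:
        ai, aj = agents[i].act(), agents[j].act()
        agents[i].learn(ai, PD_PAYOFF[(ai, aj)])
        agents[j].learn(aj, PD_PAYOFF[(aj, ai)])
```

    The edge list can come from any of the topologies mentioned above (random, regular, small-world, scale-free or modular); only play_round touches the network structure.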

    Forgiver Triumphs in Alternating Prisoner's Dilemma

    Cooperative behavior, where one individual incurs a cost to help another, is a widespread phenomenon. Here we study direct reciprocity in the context of the alternating Prisoner's Dilemma. We consider all strategies that can be implemented by one- and two-state automata. We calculate the payoff matrix of all pairwise encounters in the presence of noise. We explore deterministic selection dynamics with and without mutation. Using different error rates and payoff values, we observe convergence to a small number of distinct equilibria. Two of them are uncooperative strict Nash equilibria representing always-defect (ALLD) and Grim. The third equilibrium is mixed and represents a cooperative alliance of several strategies, dominated by a strategy which we call Forgiver. Forgiver cooperates whenever the opponent has cooperated; it defects once when the opponent has defected, but subsequently attempts to re-establish cooperation even if the opponent has defected again. Forgiver is not an evolutionarily stable strategy, but the alliance it rules is asymptotically stable. For a wide range of parameter values the most commonly observed outcome is convergence to the mixed equilibrium dominated by Forgiver. Our results show that although forgiving might incur a short-term loss, it can lead to a long-term gain. Forgiveness facilitates stable cooperation in the presence of exploitation and noise.
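    The verbal description of Forgiver translates naturally into a two-state automaton; the sketch below is one such reading, with state labels of my own choosing rather than notation from the paper.

```python
class Forgiver:
    """Two-state automaton: answer cooperation with cooperation, punish a defection
    exactly once, then offer cooperation again even if the opponent defected twice."""

    def __init__(self):
        self.state = 'content'  # 'content' or 'just_punished'

    def move(self, opponent_last):
        """Return 'C' or 'D' given the opponent's last move ('C', 'D', or None at the start)."""
        if self.state == 'just_punished':
            # Having defected once in retaliation, try to re-establish cooperation
            # regardless of what the opponent just did.
            self.state = 'content'
            return 'C'
        if opponent_last == 'D':
            self.state = 'just_punished'
            return 'D'  # single retaliatory defection
        return 'C'      # cooperate after cooperation (or on the first move)
```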

    ACE Models of Endogenous Interactions

    Various approaches used in Agent-based Computational Economics (ACE) to model endogenously determined interactions between agents are discussed. This concerns models in which agents not only (learn how to) play some (market or other) game, but also (learn to) decide with whom to do that (or not). Keywords: Endogenous interaction, Agent-based Computational Economics (ACE)

    A simple model of cognitive processing in repeated games

    In repeated interactions between individuals, we do not expect that exactly the same situation will occur from one time to another. Contrary to what is common in models of repeated games in the literature, most real situations may differ a lot and they are seldom completely symmetric. The purpose of this paper is to discuss a simple model of cognitive processing in the context of a repeated interaction with varying payoffs. The interaction between players is modelled by a repeated game with random observable payoffs. Cooperation is not simply associated with a certain action but needs to be understood as a property of behaviour in the repeated game. The players are thus faced with a more complex situation than the Prisoner's Dilemma that has been widely used for investigating the conditions for cooperation in evolving populations. Still, there are robust cooperating strategies that usually evolve in a population of players. In the cooperative mode, these strategies select an action that maximizes the sum of the payoffs of the two players in each round, regardless of their own payoff. Two such players maximise the expected total long-term payoff. If the opponent deviates from this scheme, the strategy invokes a punishment action, which aims at lowering the opponent's score for the rest of the (possibly infinitely) repeated game. The introduction of mistakes to the game actually pushes evolution towards more cooperative strategies, even though the game becomes more difficult. Comment: Accepted for publication in the conference proceedings of ECCS'0
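    A minimal sketch of the cooperative rule described above, assuming each round presents a randomly drawn, commonly observed payoff bimatrix: in the cooperative mode the player picks its part of the joint action that maximises the round's total payoff, and after a deviation it switches to a simple minimax-style punishment. The deviation test and the punishment rule are illustrative simplifications, not the strategies evolved in the paper.

```python
import random

def random_payoff_matrix(n_actions=2, low=0.0, high=5.0):
    """payoffs[(a1, a2)] = (payoff to player 0, payoff to player 1), drawn at random."""
    return {(a1, a2): (random.uniform(low, high), random.uniform(low, high))
            for a1 in range(n_actions) for a2 in range(n_actions)}

def cooperative_action(payoffs, player):
    """Pick this player's part of the joint action that maximises the SUM of both
    players' payoffs this round, ignoring the player's own share."""
    best_joint = max(payoffs, key=lambda joint: sum(payoffs[joint]))
    return best_joint[player]

def punishment_action(payoffs, player):
    """Minimise the opponent's best attainable payoff (a simple minimax punishment)."""
    opponent = 1 - player
    my_actions = sorted({joint[player] for joint in payoffs})
    def opponent_best(a):
        return max(payoffs[joint][opponent] for joint in payoffs if joint[player] == a)
    return min(my_actions, key=opponent_best)

def choose(payoffs, player, opponent_deviated):
    """Cooperate while the opponent follows the scheme, punish once they deviate."""
    return punishment_action(payoffs, player) if opponent_deviated else cooperative_action(payoffs, player)
```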