
    Agent-Based Computational Economics

    Agent-based computational economics (ACE) is the computational study of economies modeled as evolving systems of autonomous interacting agents. Starting from initial conditions specified by the modeler, the computational economy evolves over time as its constituent agents repeatedly interact with each other and learn from these interactions. ACE is therefore a bottom-up, culture-dish approach to the study of economic systems. This study discusses the key characteristics and goals of the ACE methodology. Eight currently active research areas are highlighted for concrete illustration. Potential advantages and disadvantages of the ACE methodology are considered, along with open questions and possible directions for future research.
    Keywords: Agent-based computational economics; Autonomous agents; Interaction networks; Learning; Evolution; Mechanism design; Computational economics; Object-oriented programming.

    Cooperation in Networked Populations of Selfish Adaptive Agents: Sensitivity to Learning Speed

    This paper investigates the evolution of cooperation in iterated Prisoner's Dilemma (IPD) games with individually learning agents, subject to the structure of the interaction network. In particular, we study how Tit-for-Tat or All-Defection comes to dominate the population on Watts-Strogatz networks, under varying learning speeds and average network path lengths. We find that the emergence of a cooperative regime (where almost the entire population plays Tit-for-Tat) depends on how quickly information spreads across the network. More precisely, cooperation hinges on the relation between individual adaptation speed and average path length in the interaction topology. Our results agree well with previous work both on discrete choice dynamics on networks and in the evolution-of-cooperation literature.
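The central pairing in the setup above, Tit-for-Tat against All-Defection, can be sketched in a few lines; the payoff values (T=5, R=3, P=1, S=0) and round count are conventional IPD assumptions, not parameters taken from the paper:

```python
# Standard IPD payoff matrix (an assumed convention, not the paper's values):
# (my_move, opponent_move) -> my payoff; 'C' = cooperate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def all_defect(opponent_history):
    """Defect unconditionally."""
    return 'D'

def play_ipd(strategy_a, strategy_b, rounds=10):
    """Return cumulative payoffs for both players over `rounds` iterations."""
    hist_a, hist_b = [], []          # moves each player has seen from the other
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)  # hist_a holds b's past moves
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b
```

Against All-Defection, Tit-for-Tat loses only the opening round and then mirrors defection, which is why a cooperative regime can only take hold when information about cooperative partners spreads quickly enough through the network.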

    Groupwise information sharing promotes ingroup favoritism in indirect reciprocity

    Indirect reciprocity is a mechanism for cooperation in social dilemma situations, in which an individual is motivated to help another to acquire a good reputation and receive help from others afterwards. Ingroup favoritism is another aspect of human cooperation, whereby individuals help members in their own group more often than those in other groups. Ingroup favoritism is a puzzle for the theory of cooperation because it is not easily evolutionarily stable. In the context of indirect reciprocity, ingroup favoritism has been shown to be a consequence of employing a double standard when assigning reputations to ingroup and outgroup members; e.g., helping an ingroup member is regarded as good, whereas the same action toward an outgroup member is regarded as bad. We analyze a model of indirect reciprocity in which information sharing is conducted groupwise. In our model, individuals play social dilemma games within and across groups, and the information about their reputations is shared within each group. We show that evolutionarily stable ingroup favoritism emerges even if all the players use the same reputation assignment rule regardless of group (i.e., a single standard). Two reputation assignment rules called simple standing and stern judging yield ingroup favoritism. Stern judging induces much stronger ingroup favoritism than does simple standing. Simple standing and stern judging are evolutionarily stable against each other when groups employing different assignment rules compete and the number of groups is sufficiently large. In addition, we analytically show as a limiting case that homogeneous populations of reciprocators that use reputations are unstable when individuals independently infer the reputations of others, which is consistent with previously reported numerical results.
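The two reputation assignment rules named above differ chiefly in how they judge helping a badly-reputed recipient. A minimal sketch of second-order versions of these rules, using an assumed binary reputation encoding (illustrative only, not the paper's formalism):

```python
# An observer assigns the donor a new reputation from the donor's action
# ('C' = help, 'D' = refuse) and the recipient's current reputation
# (True = good, False = bad). Binary encoding is an assumption for clarity.

def simple_standing(action, recipient_good):
    """Helping is always good; refusing is bad only toward a good recipient."""
    if action == 'C':
        return True
    return not recipient_good   # justified defection keeps a good standing

def stern_judging(action, recipient_good):
    """Good iff the action matches the recipient: help the good, refuse the bad."""
    return (action == 'C') == recipient_good
```

Under stern judging, even helping a bad recipient earns a bad reputation, so outgroup interactions are punished more harshly when reputations are only shared within one's own group, which is consistent with it inducing the stronger ingroup favoritism.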

    Iterated Prisoner's Dilemma for Species

    The Iterated Prisoner's Dilemma (IPD) is widely used to study the evolution of cooperation between self-interested agents. Existing work asks how genes that code for cooperation arise and spread through a single-species population of IPD-playing agents. In this paper, we focus on competition between different species of agents. Making this distinction allows us to separate and examine macroevolutionary phenomena. We illustrate with species-level simulation experiments in which agents use well-known strategies, and with species of agents that use team strategies.

    Dynamics of mixed Pseudomonas putida populations under neutral and selective growth conditions


    Special Agents Can Promote Cooperation in the Population

    Cooperation is ubiquitous in real life, yet each individual would like to maximize her own profit. How does cooperation arise in a group of self-interested agents without centralized control? Moreover, in a hostile scenario, cooperation is unlikely to emerge at all. Is there any mechanism to promote cooperation when the population is given and the rules of play cannot be changed? In this paper, numerical experiments show that complete population interaction is unfavorable to cooperation in the finite but end-unknown Repeated Prisoner's Dilemma (RPD). We then propose a mechanism called soft control to promote cooperation. Under soft control, a number of special agents are introduced to intervene in the evolution of cooperation. They comply with the rules of play in the original group, so they are always treated as normal agents; at the same time, these special agents have their own strategies and share knowledge. The capability of the mechanism is studied under different settings. We find that soft control can promote cooperation and is robust to noise. Simulation results also demonstrate the applicability of the mechanism in other scenarios, and an analytical proof illustrates the effectiveness of soft control and validates the simulation results. As a way of intervening in collective behaviors, soft control provides a possible direction for the study of reciprocal behaviors.
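The soft-control idea can be sketched with a toy imitation dynamic. The strategy set, payoffs, population sizes, and update rule below are assumptions for illustration, not the paper's model: normal agents imitate the highest scorer after round-robin repeated play, while the injected special agents keep a fixed reciprocating strategy and never update it.

```python
# Assumed standard PD payoffs: (my_move, opponent_move) -> my payoff.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def move(strategy, opp_last):
    """One move for a named strategy given the opponent's last move (or None)."""
    if strategy == 'TFT':
        return opp_last if opp_last else 'C'
    return 'C' if strategy == 'ALLC' else 'D'

def match(s1, s2, rounds=10):
    """Play an iterated PD between two strategies; return both total payoffs."""
    last1 = last2 = None
    p1 = p2 = 0
    for _ in range(rounds):
        m1, m2 = move(s1, last2), move(s2, last1)
        p1 += PAYOFF[(m1, m2)]
        p2 += PAYOFF[(m2, m1)]
        last1, last2 = m1, m2
    return p1, p2

def step(strategies, special, rounds=10):
    """One imitation step: round-robin play, then normal agents copy the
    strategy of the highest-scoring agent; special agents never change."""
    n = len(strategies)
    scores = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            pi, pj = match(strategies[i], strategies[j], rounds)
            scores[i] += pi
            scores[j] += pj
    best = strategies[max(range(n), key=scores.__getitem__)]
    return [s if i in special else best for i, s in enumerate(strategies)]

# Seven defectors alone stay defectors; injecting three special TFT agents
# makes TFT the top scorer, so every normal agent switches to 'TFT'.
population = ['ALLD'] * 7 + ['TFT'] * 3
special_ids = {7, 8, 9}
next_population = step(population, special_ids)
```

With too few special agents the defectors still score highest and nothing changes, which echoes the abstract's point that the mechanism's capability depends on the setting.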

    Modelling religious signalling

    Get PDF
    The origins of human social cooperation confound simple evolutionary explanation. But from Darwin and Durkheim onwards, theorists (anthropologists and sociologists especially) have posited a potential link with another curious and distinctively human social trait that cries out for explanation: religion. This dissertation explores one contemporary theory of the co-evolution of religion and human social cooperation: the signalling theory of religion, or religious signalling theory (RST). According to the signalling theory, participation in social religion (and its associated rituals and sanctions) acts as an honest signal of one's commitment to a religiously demarcated community and its way of doing things. This signal would allow prosocial individuals to positively assort with one another for mutual advantage, to the exclusion of more exploitative individuals. In effect, the theory offers a way that religion and cooperation might explain one another, while staying within an individualist adaptive paradigm. My approach is not to assess the empirical adequacy of the religious signalling explanation or contrast it with other explanations, but rather to deal with the theory in its own terms, isolating and fleshing out its core commitments, explanatory potential, and limitations. The key to this is acknowledging the internal complexities of signalling theory, with respect to the available models of honest signalling and the extent of their fit (or otherwise) with religion as a target system. The method is to take seriously the findings of formal modelling in animal signalling and other disciplines, and to apply these (and methods from the philosophy of biology more generally) to progressively build up a comprehensive picture of the theory and its inherent strengths and weaknesses.
The first two chapters outline the dual explanatory problems that cooperation and religion present for evolutionary human science and survey contemporary approaches to explaining them. Chapter three articulates an evolutionary conception of the signalling theory, and chapters four to six make the case for a series of requirements, limitations, and principles of application. Chapters seven and eight argue for the value of formal modelling in further fleshing out the theory's commitments and potential, and describe some simple simulation results which make progress in this regard. Though the inquiry often problematizes the signalling theory, it also shows that the theory should not be dismissed outright, and that it makes predictions which are apt for empirical testing.