
    Partner Selection for the Emergence of Cooperation in Multi-Agent Systems Using Reinforcement Learning

    Social dilemmas have been widely studied to explain how humans are able to cooperate in society. Considerable effort has been invested in designing artificial agents for social dilemmas that incorporate explicit agent motivations that are chosen to favor coordinated or cooperative responses. The prevalence of this general approach points towards the importance of achieving an understanding of both an agent's internal design and external environment dynamics that facilitate cooperative behavior. In this paper, we investigate how partner selection can promote cooperative behavior between agents who are trained to maximize a purely selfish objective function. Our experiments reveal that agents trained with this dynamic learn a strategy that retaliates against defectors while promoting cooperation with other agents, resulting in a prosocial society.
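The partner-selection dynamic described above can be sketched minimally: agents keep an empirical record of peers' past cooperation and preferentially pair with the most cooperative ones, so defectors are progressively excluded. This is an illustrative simplification; in the paper the agents' policies are learned by reinforcement learning, whereas here the `Agent` class, its fixed `coop_prob` policy, and the Laplace-smoothed reputation estimate are all hypothetical choices for the sketch.

```python
import random

class Agent:
    """Illustrative agent with a fixed cooperation probability
    (a stand-in for a learned policy)."""
    def __init__(self, name, coop_prob):
        self.name = name
        self.coop_prob = coop_prob
        self.coop_count = 0
        self.play_count = 0

    def act(self):
        # True = cooperate, False = defect; bool counts as 0/1 below.
        cooperated = random.random() < self.coop_prob
        self.coop_count += cooperated
        self.play_count += 1
        return cooperated

    def reputation(self):
        # Laplace-smoothed empirical cooperation rate in (0, 1).
        return (self.coop_count + 1) / (self.play_count + 2)

def select_partner(chooser, pool):
    # Pick the peer with the highest observed cooperation rate:
    # persistent defectors end up unselected.
    peers = [a for a in pool if a is not chooser]
    return max(peers, key=lambda a: a.reputation())
```

Run over repeated rounds, this selection pressure is what makes retaliation against defectors (here, simply never choosing them) compatible with a purely selfish objective.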

    Punishment and Gossip: Sustaining Cooperation in a Public Goods Game

    In an environment in which free-riders are better off than cooperators, social control is required to foster and maintain cooperation. There are two main paths through which social control can be applied: punishment and reputation. Our experiments explore the efficacy of punishment and reputation on cooperation rates, both in isolation and in combination. Using a Public Goods Game, we are interested in assessing how cooperation rates change when agents can play one of two different reactive strategies, i.e., they can pay a cost in order to reduce the payoff of free-riders, or they can know others' reputation and then either play Defect with free-riders, or refuse to interact with them. Cooperation is maintained at a high level through punishment, but reputation-based partner selection also proves effective in maintaining cooperation. However, when agents are informed about free-riders' reputation and play Defect, cooperation decreases. Finally, a combination of punishment and reputation-based partner selection leads to higher cooperation rates.
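The incentive structure behind this abstract can be made concrete with a one-round Public Goods Game payoff computation with optional costly punishment. The multiplier, endowment, punishment cost, and fine below are illustrative assumptions, not the paper's actual parameters.

```python
def pgg_round(contributions, multiplier=1.6, endowment=10,
              punishments=None, punish_cost=1, punish_fine=3):
    """Return each agent's payoff for one Public Goods Game round.

    contributions[i]   : amount agent i puts into the public pot
    punishments[i][j]  : how many times agent i punishes agent j (optional)
    """
    n = len(contributions)
    pot = sum(contributions) * multiplier
    share = pot / n  # the multiplied pot is split equally
    payoffs = [endowment - c + share for c in contributions]
    if punishments:
        for i in range(n):
            for j in range(n):
                k = punishments[i][j]
                payoffs[i] -= k * punish_cost  # punisher pays a cost
                payoffs[j] -= k * punish_fine  # target loses more
    return payoffs
```

With `pgg_round([10, 10, 0])`, the free-rider out-earns each full contributor by exactly the endowment, which is the dilemma; adding punishment directed at the free-rider erodes that advantage, but at a cost to the punishers, which is the second-order cooperation problem the reputation channel is meant to sidestep.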

    Gossip for social control in natural and artificial societies

    In this work we propose a theory of gossip as a means for social control. Exercising social control roughly means to isolate and to punish cheaters. However, punishment is costly and it inevitably implies the problem of second-order cooperation. Building on a cognitive model of gossip, we report data from ethnographic studies and agent-based simulations to support our claim that gossip reduces the costs of social control without lowering its efficacy.

    Informational Warfare

    Recent empirical and theoretical work suggests that reputation was an important mediator of access to resources in ancestral human environments. Reputations were built and maintained by the collection, analysis, and dissemination of information about the actions and capabilities of group members, that is, by gossiping. Strategic gossiping would have been an excellent strategy for manipulating reputations and thereby competing effectively for resources and for cooperative relationships with group members who could best provide such resources. Coalitions (cliques) may have increased members' abilities to manipulate reputations by gossiping. Because, over evolutionary time, women may have experienced more within-group competition than men, and because female reputations may have been more vulnerable than male reputations to gossip, gossiping may have been a more important strategy for women than men. Consequently, women may have evolved specializations for gossiping alone and in coalitions. We develop and partially test this theory.

    Reputation

    In this chapter, the role of reputation as a distributed instrument for social order is addressed. A short review of the state of the art will show the role of reputation in promoting (a) social control in cooperative contexts - like social groups and subgroups - and (b) partner selection in competitive contexts, like (e-) markets and industrial districts. In the initial section, current mechanisms of reputation - be they applied to electronic markets or MAS - will be shown to have poor theoretical backgrounds, missing almost completely the cognitive and social properties of the phenomenon under study. In the rest of the chapter a social cognitive model of reputation developed in the last decade by some of the authors will be presented. Its simulation-based applications to the theoretical study of norm-abiding behaviour, partner selection and to the refinement and improvement of current reputation mechanisms will be discussed. Final remarks and ideas for future research will conclude the chapter.

    Reputation for complex societies

    Reputation, the germ of gossip, is addressed in this chapter as a distributed instrument for social order. In the literature, reputation is shown to promote (a) social control in cooperative contexts - like social groups and subgroups - and (b) partner selection in competitive ones, like (e-) markets and industrial districts. Current technology that affects, employs and extends reputation, applied to electronic markets or multi-agent systems, is discussed in light of its theoretical background. In order to compare reputation systems with their original analogue, a social cognitive model of reputation is presented. The application of the model to the theoretical study of norm-abiding behaviour and partner selection is discussed, as well as the refinement and improvement of current reputation technology. The chapter concludes with remarks and ideas for future research.

    Determining service trustworthiness in intercloud computing environments

    Deployment of applications and scientific workflows that require resources from multiple distributed platforms is fuelling the federation of autonomous clouds to create cyberinfrastructure environments. As the scope of federated cloud computing enlarges to ubiquitous and pervasive computing, there will be a need to assess and maintain the trustworthiness of the cloud computing entities. In this paper, we present a fully distributed framework that enables interested parties to determine the trustworthiness of federated cloud computing entities.
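One common ingredient of such frameworks is aggregating peer ratings of a service, weighted by how credible each rater is. The sketch below shows that idea only; the function name, the weighting scheme, and the neutral prior are assumptions for illustration, not the paper's actual framework, which is fully distributed and more elaborate.

```python
def trust_score(ratings, credibility):
    """Credibility-weighted mean of peer ratings.

    ratings[peer]     : peer's rating of the service, in [0, 1]
    credibility[peer] : weight given to that peer's reports, in [0, 1]
    """
    total_weight = sum(credibility.get(p, 0.0) for p in ratings)
    if total_weight == 0:
        return 0.5  # no credible evidence: fall back to a neutral prior
    return sum(r * credibility.get(p, 0.0)
               for p, r in ratings.items()) / total_weight
```

In a federated setting each cloud would compute such scores locally from reports it gathers from peers, rather than relying on a central trust authority.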

    Cooperation and Social Dilemmas with Reinforcement Learning

    Cooperation between humans has been foundational for the development of civilisation, and yet there are many questions about how it emerges from social interactions. As artificial agents begin to play a more significant role in our lives and are introduced into our societies, it is apparent that understanding the mechanisms of cooperation is also important for the design of next-generation multi-agent AI systems. Indeed, this is particularly important in the case of supporting cooperation between self-interested AI agents. In this thesis, we focus on the analysis of the application of mechanisms that are at the basis of human cooperation to the training of reinforcement learning agents. Human behaviour is a product of cultural norms, emotions and intuition amongst other things: we argue it is possible to use similar mechanisms to deal with the complexities of multi-agent cooperation. We outline the problem of cooperation in mixed-motive games, also known as social dilemmas, and we focus on the mechanisms of reputation dynamics and partner selection, two mechanisms that have been strongly linked to indirect reciprocity in Evolutionary Game Theory. A key point that we want to emphasise is the fact that we assume no prior knowledge or explicit definition of strategies, which instead are fully learnt by the agents during the games. In our experimental evaluation, we demonstrate the benefits of applying these mechanisms to the training process of the agents, and we compare our findings with results presented in a variety of other disciplines, including Economics and Evolutionary Biology.