
    Indirect reciprocity with trinary reputations

    Indirect reciprocity is a reputation-based mechanism for cooperation in social dilemma situations when individuals do not repeatedly meet. The conditions under which cooperation based on indirect reciprocity occurs have been examined in great detail. For mathematical tractability, most previous theoretical analyses assumed that an individual possesses a binary reputation value, i.e., good or bad, which depends on their past actions and other factors. However, in real situations, the reputations of individuals may take more than two values. Another puzzling discrepancy between theory and experiments concerns the status of so-called image scoring, in which cooperation and defection are judged to be good and bad, respectively, independent of other factors. Such an assessment rule is found in behavioral experiments, whereas it is known to be unstable in theory. In the present study, we fill both gaps by analyzing a trinary reputation model. By an exhaustive search, we identify all the cooperative and stable equilibria composed of a homogeneous population or of a heterogeneous population containing two types of players. Some results derived for the trinary reputation model are direct extensions of those for the binary model. However, we find that the trinary model allows cooperation under image scoring under some mild conditions.
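
    As a rough illustration of the two ingredients named above (a three-valued reputation and an image-scoring assessment that ignores everything except the donor's action), here is a minimal sketch; the three levels, the clipping rule, and the function names are assumptions made for illustration, not the paper's actual model.

```python
# Illustrative sketch (not the paper's exact model): a trinary image-scoring
# assessment in which a donor's reputation moves up after cooperation and
# down after defection, clipped to three levels {0, 1, 2}.

COOPERATE, DEFECT = "C", "D"

def assess_image_scoring(donor_reputation: int, action: str) -> int:
    """Return the donor's new reputation on a three-point scale.

    Under image scoring the assessment ignores the recipient's reputation:
    cooperation is judged good (reputation +1), defection bad (reputation -1).
    """
    step = 1 if action == COOPERATE else -1
    return max(0, min(2, donor_reputation + step))

# Example: a neutral (1) player who defects drops to bad (0); cooperating
# twice afterwards climbs back to good (2).
rep = 1
for act in (DEFECT, COOPERATE, COOPERATE):
    rep = assess_image_scoring(rep, act)
print(rep)  # -> 2
```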

    Evolution of cooperation under indirect reciprocity and arbitrary exploration rates

    Cooperation has been recognized as an evolutionary puzzle since Darwin and remains one of the biggest challenges of the 21st century. Indirect Reciprocity (IR), a key mechanism that humans employ to cooperate with each other, establishes that individual behaviour depends on reputations, which in turn evolve depending on social norms that classify behaviours as good or bad. While it is well known that different social norms give rise to distinct cooperation levels, it remains unclear how the performance of each norm is influenced by the random exploration of new behaviours, often a key component of social dynamics where a plethora of stimuli may compel individuals to deviate from pre-defined behaviours. Here we study, for the first time, the impact of varying exploration rates (the likelihood of spontaneously adopting another strategy, akin to a mutation probability in evolutionary dynamics) on the emergence of cooperation under IR. We show that high exploration rates may either improve or harm cooperation, depending on the underlying social norm at work. For some of the most popular social norms studied to date, we find that cooperation under Simple-standing and Image-score is enhanced by high exploration rates, whereas the opposite occurs for Stern-judging and Shunning. The authors thank Vitor V. Vasconcelos for fruitful discussions. This research was supported by Fundacao para a Ciencia e Tecnologia (FCT) through grants SFRH/BD/94736/2013, PTDC/EEI-SII/5081/2014, PTDC/MAT/STA/3358/2014 and by multiannual funding of CBMA and INESC-ID (under the projects UID/BIA/04050/2013 and UID/CEC/50021/2013) provided by FCT.
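
    The role of the exploration rate can be made concrete with a minimal sketch of a single strategy-update step; the Fermi (pairwise comparison) imitation rule and all names below are assumed for illustration and are not taken from the paper.

```python
# Minimal sketch (assumed setup, not the authors' implementation) of how an
# exploration rate mu enters a strategy-update step: with probability mu the
# focal player adopts a random strategy; otherwise it imitates a role model
# with a probability given by the Fermi (pairwise comparison) rule.

import math
import random

def update_strategy(focal_strategy, focal_payoff, model_strategy, model_payoff,
                    strategies, mu=0.01, selection_strength=1.0):
    if random.random() < mu:  # exploration, akin to a mutation event
        return random.choice(strategies)
    # imitation: copy the role model with Fermi probability
    p_imitate = 1.0 / (1.0 + math.exp(-selection_strength * (model_payoff - focal_payoff)))
    return model_strategy if random.random() < p_imitate else focal_strategy

# Example call with hypothetical strategy labels:
strategies = ["ALLC", "ALLD", "Discriminator"]
new = update_strategy("ALLD", 0.2, "Discriminator", 0.8, strategies, mu=0.1)
```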

    Local stability of cooperation in a continuous model of indirect reciprocity

    Reputation is a powerful mechanism to enforce cooperation among unrelated individuals through indirect reciprocity, but it suffers from disagreement originating from private assessment, noise, and incomplete information. In this work, we investigate the stability of cooperation in the donation game by regarding each player's reputation and behaviour as continuous variables. Through a perturbative calculation, we derive a condition that a social norm should satisfy to penalize its close variants, provided that everyone initially cooperates with a good reputation, and this result is supported by numerical simulations. A crucial factor of the condition is whether a well-reputed player's donation to an ill-reputed co-player is appreciated by other members of the society, and the condition can be reduced to a threshold for the benefit-cost ratio of cooperation which depends on the reputational sensitivity to a donor's behaviour as well as on the behavioural sensitivity to a recipient's reputation. Our continuum formulation suggests how indirect reciprocity can work beyond the dichotomy between good and bad, even in the presence of inhomogeneity, noise, and incomplete information.
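
    A purely schematic sketch of what continuous reputation and behaviour could look like in one donation-game round; the linear update, the two sensitivity parameters, and the function names are assumptions for illustration and are not the paper's formulation.

```python
# Schematic sketch, not the paper's exact model: behaviour and reputation are
# continuous variables in [0, 1]. The donor's cooperation level depends on the
# recipient's reputation (behavioural sensitivity), and observers then shift the
# donor's reputation toward the donation just displayed (reputational sensitivity).

def donation_level(recipient_reputation: float, behavioural_sensitivity: float) -> float:
    """Fraction of the maximal donation given to a recipient of this reputation."""
    return min(1.0, max(0.0, behavioural_sensitivity * recipient_reputation))

def updated_reputation(old_reputation: float, donation: float,
                       reputational_sensitivity: float) -> float:
    """Move the donor's reputation toward the donation level it just displayed."""
    return (1 - reputational_sensitivity) * old_reputation + reputational_sensitivity * donation

# Example: a well-reputed donor (0.9) meets an ill-reputed recipient (0.2).
d = donation_level(0.2, behavioural_sensitivity=0.8)             # 0.16
print(updated_reputation(0.9, d, reputational_sensitivity=0.5))  # 0.53
```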

    Indirect reciprocity in three types of social dilemmas.

    Indirect reciprocity is a key mechanism for the evolution of human cooperation. Previous studies explored indirect reciprocity in the so-called donation game, a special class of Prisoner's Dilemma (PD) with unilateral decision making. A more general class of social dilemmas includes the Snowdrift (SG), Stag Hunt (SH), and PD games, where two players perform actions simultaneously. In these simultaneous-move games, moral assessments need to be more complex; for example, how should we evaluate defection against an ill-reputed, but now cooperative, player? We examined indirect reciprocity in the three social dilemmas and identified twelve successful social norms for moral assessments. These successful norms rely on different principles in different dilemmas for suppressing cheaters. To suppress defectors, any defection against good players is prohibited in SG and PD, whereas defection against good players may be allowed in SH. To suppress unconditional cooperators, who help anyone and thereby indirectly contribute to jeopardizing indirect reciprocity, we found two mechanisms: making no distinction between actions toward bad players (feasible in SG and PD) or punishing cooperation with bad players (effective in any social dilemma). Moreover, we discovered that social norms that unfairly favor reciprocators enhance the robustness of cooperation in SH, whereby reciprocators never lose their good reputation.
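
    For orientation, one conventional parameterisation of the three dilemmas can help fix ideas; the particular numbers below are illustrative, and only the payoff orderings matter.

```python
# One conventional parameterisation of the three symmetric two-player dilemmas,
# written as (my payoff) for each pair of simultaneous actions. The orderings,
# not these particular numbers, define the games: PD has T > R > P > S,
# Snowdrift has T > R > S > P, and Stag Hunt has R > T > P > S.

PAYOFFS = {
    # (my action, co-player's action): my payoff
    "PD": {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1},
    "SG": {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 5, ("D", "D"): 0},
    "SH": {("C", "C"): 5, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1},
}

def payoff(game: str, my_action: str, co_action: str) -> int:
    return PAYOFFS[game][(my_action, co_action)]

print(payoff("SH", "C", "C"))  # mutual cooperation pays best in the Stag Hunt
```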

    Reputation based on punishment rather than generosity allows for evolution of cooperation in sizable groups

    Cooperation among unrelated individuals can arise if decisions to help others can be based on reputation. While this works for dyadic interactions, using reputation in social dilemmas involving many individuals (e.g. public goods games) becomes increasingly difficult as groups become larger and errors more frequent. Reputation is therefore believed to have played a minor role in the evolution of cooperation in collective action dilemmas such as those faced by early humans. Here, we show in computer simulations that a reputation system based on punitive actions can overcome these problems and, compared to a reputation system based on generous actions, (i) is more likely to lead to the evolution of cooperation in sizable groups, (ii) more effectively sustains cooperation within larger groups, and (iii) is more robust to errors in reputation assessment. Punishment and punishment reputation could therefore have played crucial roles in the evolution of cooperation within larger groups of humans.
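
    A toy sketch of how a punishment-based reputation could be recorded in a public goods setting; the payoff values, the punishment fine and cost, and the function names are assumptions for illustration, not the simulation used by the authors.

```python
# Toy sketch (assumed details, not the authors' simulation): one public goods
# round followed by a punishment stage, with reputation recorded from punitive
# acts rather than from contributions.

def public_goods_round(contributions, multiplier=3.0):
    """Return each player's payoff from the common pool."""
    pool = multiplier * sum(contributions)
    share = pool / len(contributions)
    return [share - c for c in contributions]

def punishment_stage(contributions, punishers, fine=3.0, cost=1.0):
    """Punishers pay a cost to fine every free-rider; reputation tracks who punished."""
    payoffs = [0.0] * len(contributions)
    punishment_reputation = [False] * len(contributions)
    for i in punishers:
        for j, c in enumerate(contributions):
            if c == 0:                    # free-rider detected
                payoffs[j] -= fine
                payoffs[i] -= cost
        punishment_reputation[i] = True   # observers remember the punitive act
    return payoffs, punishment_reputation
```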

    Evolutionary stability and resistance to cheating in an indirect reciprocity model based on reputation

    Indirect reciprocity is one of the main mechanisms to explain the emergence and sustainment of altruism in societies. The standard approach to indirect reciprocity relies on reputation models: games in which players base their decisions on their opponent's reputation gained in past interactions with other players (moral assessment). The combination of actions and moral assessments leads to a large diversity of strategies; thus determining the stability of any of them against invasions by all the others is a difficult task. We use a variant of a previously introduced reputation-based model that lets us systematically analyze all these invasions and determine which ones are successful. Accordingly, we are able to identify the third-order strategies (those which, apart from the action, judge by considering both the reputation of the donor and that of the recipient) that are evolutionarily stable. Our results reveal that if a strategy resists the invasion of any other one sharing its same moral assessment, it can resist the invasion of any other strategy. However, if actions are not always witnessed, cheaters (i.e., individuals with a probability of defecting regardless of the opponent's reputation) have a chance to defeat the stable strategies for some choices of the probabilities of cheating and of being witnessed. Remarkably, by analyzing this issue with adaptive dynamics we find that whether an honest population resists the invasion of cheaters is determined by a Hamilton-like rule, with the probability that the cheater is discovered playing the role of the relatedness parameter. This work has been supported by the Ministerio de Ciencia e Innovación (Spain) through grants MOSAICO and PRODIEVO, by the European Research Area Complexity-Net through grant RESINEE, and by the Comunidad de Madrid (Spain) through grant MODELICO-CM. L.A.M.-V. was supported by a postdoctoral fellowship from Alianza 4 Universidades.
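
    The Hamilton-like rule is described above only by analogy; a schematic reading, with the benefit b and cost c left abstract since their exact meaning here is model-specific, replaces the relatedness r in Hamilton's rule with the probability q that a cheating act is witnessed:

```latex
% Schematic reading only: Hamilton's rule favours altruism when r b > c, with
% relatedness r, benefit b and cost c. The abstract's condition puts the
% probability q that the cheater is discovered in the place of r, so an honest
% population resists invasion roughly when cheating is likely enough to be seen.
\[
  \underbrace{r\,b > c}_{\text{Hamilton's rule}}
  \quad\longrightarrow\quad
  \underbrace{q\,b > c}_{\text{cheater-invasion condition (schematic)}}
\]
```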

    Reputation-based Strategies for the Evolution of Cooperative Behaviour

    Cooperation between strangers can be difficult to explain. Several mechanisms have been shown to sustain cooperation, among which one of the most general is Indirect Reciprocity. This describes how reputation-based social norms can distinguish between appropriate and inappropriate behaviours and sustain cooperation through the promise of future reciprocity from other members of a population. We present three experiments that investigate how a social norm's ability to sustain cooperation is affected when information flow is restricted to neighbours, when anyone can punish and anyone can be punished, and when people are capable of fine-tuning their behaviour in response to their environment. Using simulations and a series of agent-based models, we find that, in the two-person prisoner's dilemma, restricting the flow of information and ensuring that people learn from their neighbours benefits the maintenance of good behaviour. In such scenarios, the best chances for cooperation occur when actions are judged harshly, ensuring that a good reputation, once lost, is difficult to regain. For social norms to sustain cooperation in collective action problems, similar harshness is required through the ongoing threat of punishment. These situations can be highly cooperative if withdrawal from the social dilemma is possible and such behaviour is not judged to be morally worse than defection. However, if people are not able to punish badly behaving peers, then free-riding runs rampant unless the population considers defection to be worse than withdrawing from the social dilemma. We show that an improvement on this state of affairs can be obtained when agents are able to fine-tune their behaviour when confronted with various reputational environments. Regardless of how actions are morally viewed, cooperation has a good chance if people can be sufficiently deliberate.
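
    To make "restricting information flow to neighbours" concrete, here is a minimal sketch of local, imitate-the-better-neighbour learning on a ring; the topology and update rule are assumptions for illustration, not the thesis' actual models.

```python
# Minimal sketch (assumptions: a ring topology and imitate-the-better-neighbour
# learning) of what restricting information flow to neighbours means
# operationally: strategies are only copied from adjacent agents.

import random

def local_learning_step(strategies, payoffs):
    """Each agent copies the strategy of its better-performing ring neighbour."""
    n = len(strategies)
    new = list(strategies)
    for i in range(n):
        left, right = (i - 1) % n, (i + 1) % n
        neighbour = left if payoffs[left] >= payoffs[right] else right
        if payoffs[neighbour] > payoffs[i]:
            new[i] = strategies[neighbour]
    return new

strategies = [random.choice(["C", "D"]) for _ in range(10)]
payoffs = [random.random() for _ in range(10)]
print(local_learning_step(strategies, payoffs))
```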

    On the stability of cooperation under indirect reciprocity with first-order information

    Indirect reciprocity describes a class of reputation-based mechanisms which may explain the prevalence of cooperation in large groups where partners meet only once. The first model for which this has been demonstrated was the image scoring mechanism. But analytical work on the simplest possible case, the binary scoring model, has shown that even small errors in implementation destabilize any cooperative regime. It has thus been claimed that for indirect reciprocity to stabilize cooperation, assessments of reputation must be based on higher-order information. Is indirect reciprocity relying on first-order information doomed to fail? We use a simple analytical model of image scoring to show that this need not be the case. Indeed, in the general image scoring model the introduction of implementation errors has just the opposite effect to that in the binary scoring model: it may stabilize instead of destabilize cooperation.
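
    A hedged sketch of the ingredients discussed above; the score range, threshold, and error handling below are illustrative assumptions, not the paper's analytical model.

```python
# Illustrative sketch: a general image-scoring discriminator cooperates when
# the recipient's score is at least some threshold k, and an implementation
# error turns an intended cooperation into a defection with probability epsilon.

import random

def intended_action(recipient_score: int, k: int) -> str:
    return "C" if recipient_score >= k else "D"

def realised_action(intended: str, epsilon: float) -> str:
    """With probability epsilon, an intended cooperation fails to be carried out."""
    if intended == "C" and random.random() < epsilon:
        return "D"
    return intended

def update_score(score: int, action: str, score_range=(-5, 5)) -> int:
    lo, hi = score_range
    return max(lo, min(hi, score + (1 if action == "C" else -1)))

# Example: a threshold of k = 0 with a 5% implementation error.
score = 0
act = realised_action(intended_action(score, k=0), epsilon=0.05)
score = update_score(score, act)
```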