    Establishing norms with metanorms in distributed computational systems

    Norms provide a valuable mechanism for establishing coherent cooperative behaviour in decentralised systems in which there is no central authority. One of the most influential formulations of norm emergence was proposed by Axelrod (Am Political Sci Rev 80(4):1095–1111, 1986). This paper provides an empirical analysis of aspects of Axelrod’s approach, by exploring some of the key assumptions made in previous evaluations of the model. We explore the dynamics of norm emergence and the occurrence of norm collapse when applying the model over extended durations. It is this phenomenon of norm collapse that can motivate the emergence of a central authority to enforce laws and so preserve the norms, rather than relying on individuals to punish defection. Our findings identify characteristics that significantly influence norm establishment using Axelrod’s formulation, but are likely to be of importance for norm establishment more generally. Moreover, Axelrod’s model suffers from significant limitations in assuming that private strategies of individuals are available to others, and that agents are omniscient in being aware of all norm violations and punishments. Because this is an unreasonable expectation, the approach does not lend itself to modelling real-world systems such as online networks or electronic markets. In response, the paper proposes alternatives to Axelrod’s model, by replacing the evolutionary approach, enabling agents to learn, and by restricting the metapunishment of agents to cases where the original defection is observed, in order to be able to apply the model to real-world domains. This work can also help explain the formation of a “social contract” to legitimate enforcement by a central authority.
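    The norms-and-metanorms dynamic the abstract describes can be sketched as a small simulation. This is an illustrative toy, not the paper's model: the payoff constants, the single "seen" probability, and the real-valued boldness/vengefulness traits are all assumptions made for this sketch, loosely in the spirit of Axelrod's 1986 norms game.

    ```python
    import random

    # Illustrative payoff constants (assumptions for this sketch, not taken
    # from the paper above): defecting tempts, punishment deters, and both
    # punishing and metapunishing carry an enforcement cost.
    T = 3    # temptation payoff for defecting
    H = -1   # hurt inflicted on every other agent by a defection
    P = -9   # cost of being punished
    E = -2   # enforcement cost paid by a punisher
    MP = -9  # cost of being metapunished for failing to punish
    ME = -2  # cost paid by a metapunisher

    class Agent:
        def __init__(self):
            self.boldness = random.random()      # propensity to defect
            self.vengefulness = random.random()  # propensity to punish
            self.score = 0

    def play_round(agents, seen_prob=0.5, metanorms=True):
        """One round of a toy norms game, optionally with metanorms."""
        for i, a in enumerate(agents):
            if a.boldness <= seen_prob:   # defect only when the risk of being seen is low
                continue
            a.score += T
            for j, b in enumerate(agents):
                if j == i:
                    continue
                b.score += H
                saw = random.random() < seen_prob
                if saw and random.random() < b.vengefulness:
                    a.score += P          # b punishes the defector
                    b.score += E
                elif saw and metanorms:
                    # b saw the defection but let it pass: third parties who
                    # observe this omission may metapunish b.
                    for k, c in enumerate(agents):
                        if k in (i, j):
                            continue
                        if random.random() < seen_prob and random.random() < c.vengefulness:
                            b.score += MP
                            c.score += ME

    random.seed(1)
    agents = [Agent() for _ in range(20)]
    for _ in range(4):
        play_round(agents)
    norm_held = sum(a.boldness <= 0.5 for a in agents)
    print(f"{norm_held} of {len(agents)} agents hold back from defecting")
    ```

    Note the restriction the paper argues for appears here only in weakened form: metapunishment fires whenever the omission is observed, whereas full omniscience (every agent aware of every violation and punishment) would drop the `seen_prob` checks entirely.
    
    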

    Thirty years of artificial intelligence and law: the third decade

    Taking Account of the Actions of Others in Value-based Reasoning

    Practical reasoning, reasoning about what actions should be chosen, is highly dependent both on the individual values of the agent concerned and on what others choose to do. Hitherto, computational models of value-based argumentation for practical reasoning have required assumptions to be made about the beliefs and preferences of other agents. Here we present a new method for taking the actions of others into account that does not require these assumptions: the only beliefs and preferences considered are those of the agent engaged in the reasoning. Our new formalism draws on utility-based approaches and expresses the reasoning in the form of arguments and objections, to enable full integration with value-based practical reasoning. We illustrate our approach by showing how value-based reasoning is modelled in two scenarios used in experimental economics, the Ultimatum Game and the Prisoner's Dilemma, and we present an evaluation of our approach in terms of these experiments. The evaluation demonstrates that our model is able to reproduce computationally the results of ethnographic experiments, serving as an encouraging validation exercise.
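    The key idea of choosing an action using only one's own values, without assuming the other agent's beliefs or preferences, can be illustrated with a Prisoner's Dilemma sketch. This is not the paper's argumentation formalism; it is a hypothetical maximin stand-in, where the `wealth` and `fairness` weights and the payoff numbers are assumptions introduced for illustration.

    ```python
    # Standard Prisoner's Dilemma payoffs as (my payoff, other's payoff);
    # "C" = cooperate, "D" = defect.
    PAYOFFS = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def choose(my_values):
        """Pick the action whose worst-case value-weighted outcome is best.

        my_values weights 'wealth' (own payoff) against 'fairness' (joint
        payoff). Taking the minimum over the other agent's possible actions
        means no beliefs about their strategy are required.
        """
        def worth(me, other):
            mine, theirs = PAYOFFS[(me, other)]
            return (my_values["wealth"] * mine
                    + my_values["fairness"] * (mine + theirs))
        return max("CD", key=lambda me: min(worth(me, other) for other in "CD"))

    print(choose({"wealth": 1.0, "fairness": 0.0}))  # self-interested agent: D
    print(choose({"wealth": 0.0, "fairness": 1.0}))  # agent valuing the joint outcome: C
    ```

    The sketch captures the shape of the abstract's claim: changing only the reasoning agent's own value ordering flips the chosen action, with no model of the opponent involved.
    
    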