
    Handling Norms in Multi-Agent System by Means of Formal Argumentation

    Formal argumentation is used to enrich and analyse normative multi-agent systems in various ways. In this chapter, we discuss three examples from the literature of handling norms by means of formal argumentation. First, we discuss how existing ways to resolve conflicts among norms using priorities can be represented in formal argumentation, by showing that the so-called Greedy and Reduction approaches can be represented using the weakest link and last link principles respectively. Based on such representation results, formal argumentation can be used to explain the detachment of obligations and permissions from hierarchical normative systems in a new way. Second, we discuss how formal argumentation can be used as a general theory for developing new approaches for normative reasoning, using a dynamic ASPIC-based legal argumentation theory. We show how existing logics of normative systems can be used to analyse such new argumentation systems. Third, we show how argumentation can be used to reason about other challenges in the area of normative multi-agent systems as well, by discussing a model for arguing about legal interpretation. In particular, we show how fuzzy logic combined with formal argumentation can be used to reason about the adoption of graded categories and thus address the problem of open texture in normative interpretation. Our aim in discussing these three examples is to inspire new applications of formal argumentation to the challenges of normative reasoning in multi-agent systems.
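The contrast between the two principles mentioned in the abstract can be illustrated with a small sketch. Here an argument is modelled simply as a sequence of (norm, priority) pairs; the function names and the example norms are hypothetical, not taken from the chapter.

```python
# Illustrative sketch only: arguments are sequences of prioritised norms,
# and each principle assigns a strength used to compare conflicting arguments.

def weakest_link(argument):
    """Strength = priority of the weakest norm used (matches the Greedy approach)."""
    return min(priority for _, priority in argument)

def last_link(argument):
    """Strength = priority of the last norm applied (matches the Reduction approach)."""
    return argument[-1][1]

# Higher number = higher priority. Hypothetical norms for illustration.
arg_a = [("keep_promises", 1), ("return_book", 3)]
arg_b = [("help_friend", 2), ("stay_home", 2)]

# The two principles can rank the same pair of arguments differently:
print(weakest_link(arg_a), weakest_link(arg_b))  # 1 2  -> arg_b wins
print(last_link(arg_a), last_link(arg_b))        # 3 2  -> arg_a wins
```

The example shows why the choice of principle matters: under the weakest link principle an argument is only as strong as its weakest norm, while under the last link principle only the priority of the final, conflict-deciding norm counts.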

    Historical overview of formal argumentation



    Implementing Norm-Governed Multi-Agent Systems

    The actions and interactions of independently acting agents in a multi-agent system must be managed if the agents are to function effectively in their shared environment. Norms, which define the obligatory, prohibited and permitted actions for an agent to perform, have been suggested as a possible method for regulating the actions of agents. Norms are local rules designed to govern the actions of individual agents whilst also allowing the agents to achieve a coherent global behaviour. However, there appear to be very few instances of norm-governed multi-agent systems beyond theoretical examples. We describe an implementation strategy for allowing autonomous agents to take a set of norms into account when determining their actions. These norms are implemented using directives, which are local rules specifying actions for an agent to perform depending on its current state. Agents using directives are implemented in a simulation test bed, called Sinatra. Using Sinatra, we investigate the ability of directives to manage agent actions, beginning with directives that manage agent interactions. We find that when agents rely on only local rules they will encounter situations where the local rules are unable to achieve the desired global behaviour. We show how a centralised control mechanism can be used to manage agent interactions that are not successfully handled by directives. Controllers, with a global view of the interaction, instruct the individual agents how to act. We also investigate the use of an existing planning tool to implement the resolution mechanism of a controller, and the ability of directives to coordinate the actions of agents in order to achieve a global objective more effectively. Finally, we present a case study of how directives can be used to determine the actions of autonomous mobile robots.
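The abstract's notion of a directive, a local rule mapping an agent's current state to an action, can be sketched minimally as follows. The state keys, action names and selection strategy here are hypothetical illustrations; the abstract does not describe Sinatra's actual implementation.

```python
# Minimal sketch of a directive-governed agent. A directive pairs a condition
# on the agent's current state with an action to perform when it holds.

directives = [
    (lambda s: s["obstacle_ahead"], "turn_left"),   # avoid collisions
    (lambda s: s["at_goal"], "stop"),               # goal reached
    (lambda s: True, "move_forward"),               # default behaviour
]

def choose_action(state, directives):
    """Apply the first directive whose condition matches the current state."""
    for condition, action in directives:
        if condition(state):
            return action
    return "idle"  # no directive applies

print(choose_action({"obstacle_ahead": True, "at_goal": False}, directives))
# turn_left
print(choose_action({"obstacle_ahead": False, "at_goal": False}, directives))
# move_forward
```

Note that each directive consults only the agent's own state, which is exactly the limitation the thesis identifies: purely local rules cannot always produce the desired global behaviour, motivating the centralised controller.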

    Meta-constructs and their roles in common sense reasoning


    Towards Forward Responsibility in BDI Agents


    Proceedings of the IJCAI-09 Workshop on Nonmonotonic Reasoning, Action and Change

    Copyright in each article is held by the authors. Please contact the authors directly for permission to reprint or use this material in any form for any purpose. The biennial workshop on Nonmonotonic Reasoning, Action and Change (NRAC) has an active and loyal community. Since its inception in 1995, the workshop has been held seven times in conjunction with IJCAI, and has experienced growing success. We hope to build on this success again this eighth year with an interesting and fruitful day of discussion. The areas of reasoning about action, non-monotonic reasoning and belief revision are among the most active research areas in Knowledge Representation, with rich inter-connections and practical applications including robotics, agent systems, commonsense reasoning and the semantic web. This workshop provides a unique opportunity for researchers from all three fields to be brought together at a single forum with the prime objectives of communicating important recent advances in each field and exchanging ideas. As these fundamental areas mature, it is vital that researchers maintain a dialogue through which they can cooperatively explore common links. The goal of this workshop is to work against the natural tendency of such rapidly advancing fields to drift apart into isolated islands of specialization. This year, we have accepted ten papers authored by a diverse international community. Each paper has been subject to careful peer review on the basis of innovation, significance and relevance to NRAC. The high quality selection of work could not have been achieved without the invaluable help of the international Program Committee. A highlight of the workshop will be our invited speaker, Professor Hector Geffner from ICREA and UPF in Barcelona, Spain, discussing representation and inference in modern planning.
Hector Geffner is a world leader in planning, reasoning, and knowledge representation; in addition to his many important publications, he is a Fellow of the AAAI, an associate editor of the Journal of Artificial Intelligence Research, and the winner of an ACM Distinguished Dissertation Award in 1990.


    Argumentation-based methods for multi-perspective cooperative planning

    Through cooperation, agents can transcend their individual capabilities and achieve goals that would be unattainable otherwise. Existing multiagent planning work considers each agent’s action capabilities, but does not account for distributed knowledge and the incompatible views agents may have of the planning domain. These divergent views can be a result of faulty sensors, local and incomplete knowledge, or outdated information, or arise simply because each agent has conducted different inferences and their beliefs are not aligned. This thesis is concerned with Multi-Perspective Cooperative Planning (MPCP), the problem of synthesising a plan for multiple agents which share a goal but hold different views about the state of the environment and the specification of the actions they can perform to affect it. Reaching agreement on a mutually acceptable plan is important, since cautious autonomous agents will not subscribe to plans that they individually believe to be inappropriate or even potentially hazardous. We specify the MPCP problem by adapting standard set-theoretic planning notation. Based on argumentation theory, we define a new notion of plan acceptability, and introduce a novel formalism, combining defeasible logic programming with the situation calculus, that enables the succinct axiomatisation of contradictory planning theories and allows deductive argumentation-based inference. Our work bridges research in argumentation, reasoning about action and classical planning. We present practical methods for reasoning and planning with MPCP problems that exploit the inherent structure of planning domains and efficient planning heuristics. Finally, in order to allow distribution of tasks, we introduce a family of argumentation-based dialogue protocols that enable the agents to reach agreement on plans in a decentralised manner.
Based on the concrete foundation of deductive argumentation, we analytically investigate important properties of our methods, establishing the correctness of the proposed planning mechanisms. We also empirically evaluate the efficiency of our algorithms in benchmark planning domains. Our results show that our methods can synthesise acceptable plans within reasonable time in large-scale domains, while maintaining a level of expressiveness comparable to that of modern automated planning.
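The core idea of argumentation-based acceptability can be illustrated with a minimal sketch. This uses an abstract argumentation framework with grounded semantics, a standard construction, rather than the thesis's own defeasible-logic-programming formalism, and the argument names are hypothetical.

```python
# Sketch: compute the grounded extension of an abstract argumentation
# framework by iterating its characteristic function from the empty set.
# An argument is accepted once every argument attacking it is itself
# attacked by an already-accepted argument.

def grounded_extension(arguments, attacks):
    """Least fixpoint of F(S) = {a : every attacker of a is attacked by S}."""
    accepted = set()
    while True:
        defeated = {b for a in accepted for (x, b) in attacks if x == a}
        new = {a for a in arguments
               if all(x in defeated for (x, y) in attacks if y == a)}
        if new == accepted:
            return accepted
        accepted = new

# Hypothetical planning scenario: a plan step is challenged by a claimed
# sensor fault, which is in turn rebutted by a calibration record.
args = {"plan_p", "sensor_fault", "calibration_log"}
atts = {("sensor_fault", "plan_p"), ("calibration_log", "sensor_fault")}

print(sorted(grounded_extension(args, atts)))
# ['calibration_log', 'plan_p']
```

Here the plan step is acceptable because its only attacker is defeated, which mirrors the thesis's motivation: a cautious agent subscribes to a plan only when every objection it is aware of can be answered.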