
    Dialoguing DeLP-based agents

    Get PDF
    A multi-agent system is made up of multiple interacting autonomous agents. It can be viewed as a society in which each agent performs its activity, cooperating to achieve common goals or competing for them. Agents establish dialogues via some kind of agent-communication language, under some communication protocol. We think argumentation is suitable for modelling several kinds of dialogue in multi-agent systems. In this paper we define dialogues and persuasion dialogues between two agents that use Defeasible Logic Programs as a knowledge base, together with an algorithm defining how such a dialogue may be engaged. We also give an indication of how an agent could use its opponent's information for its own benefit.
    Track: Agentes. Red de Universidades con Carreras en Informática (RedUNCI).
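
    The entry above describes a dialogue driven by Defeasible Logic Programs but gives no detail of the algorithm. The following is a deliberately simplified, hypothetical sketch of how such a two-agent persuasion dialogue could be driven; the rule encoding, the Agent class and the winner convention are illustrative assumptions, not the paper's DeLP-based protocol.

        # Hypothetical sketch: agents hold defeasible rules (conclusion, premise) and
        # alternate argument/counterargument moves on a claim; the last agent able to
        # move wins the persuasion dialogue. Not the paper's DeLP machinery.

        def neg(literal):
            """Toggle the complement of a propositional literal written as a string."""
            return literal[1:] if literal.startswith("~") else "~" + literal

        class Agent:
            def __init__(self, name, facts, rules):
                self.name = name
                self.facts = set(facts)    # literals the agent currently believes
                self.rules = set(rules)    # defeasible rules: (conclusion, premise)

            def argument_for(self, claim, used):
                """Return an unused rule supporting `claim` whose premise the agent believes."""
                for rule in self.rules - used:
                    conclusion, premise = rule
                    if conclusion == claim and premise in self.facts:
                        return rule
                return None

        def persuasion_dialogue(proponent, opponent, claim):
            """Alternate moves on the claim and its complement; the last mover wins."""
            used, mover, target = set(), proponent, claim
            winner = opponent.name          # if the proponent cannot even open, it loses
            while True:
                move = mover.argument_for(target, used)
                if move is None:
                    return winner
                used.add(move)
                winner = mover.name
                mover = opponent if mover is proponent else proponent
                target = neg(target)

        a = Agent("A", {"bird"}, {("flies", "bird")})
        b = Agent("B", {"penguin"}, {("~flies", "penguin")})
        print(persuasion_dialogue(a, b, "flies"))   # -> "B": B's counterargument goes unanswered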

    Defeasible Argumentation for Cooperative Multi-Agent Planning

    Full text link
    Thesis by compendium of publications. [EN] Multi-Agent Systems (MAS), Argumentation and Automated Planning are three lines of investigation within the field of Artificial Intelligence (AI) that have been extensively studied over the last years. A MAS is a system composed of multiple intelligent agents that interact with each other, and it is used to solve problems whose solution requires the presence of several functional and autonomous entities. Multi-agent systems can be used to solve problems that are difficult or impossible for an individual agent to solve. Argumentation, in turn, refers to the iterative construction and exchange of arguments between a group of agents, with the aim of arguing for or against a particular proposal. In Automated Planning, given an initial state of the world, a goal to achieve, and a set of possible actions, the aim is to build programs that can automatically compute a plan to reach the goal state from the initial state. The main objective of this thesis is to propose a model that combines and integrates these three research lines. More specifically, we consider a MAS as a team of agents with planning and argumentation capabilities. Given a planning problem with a set of objectives, the (cooperative) agents jointly construct a plan to satisfy the objectives of the problem while they reason defeasibly about the environmental conditions, so as to provide a stronger guarantee that the plan will succeed at execution time. The goal is thus to use the planning knowledge to build a plan while the agents' beliefs about the impact of unexpected environmental conditions are used to select the plan that is least likely to fail at execution time. The system is intended to return collaborative plans that are more robust and better adapted to the circumstances of the execution environment. In this thesis, we design, build and evaluate a model of argumentation based on defeasible reasoning for a cooperative multi-agent planning system. The designed system is domain-independent, thus demonstrating the ability to solve problems in different application contexts. Specifically, the system has been tested in context-sensitive domains such as Ambient Intelligence, as well as on problems used in the International Planning Competitions.
    Pajares Ferrando, S. (2016). Defeasible Argumentation for Cooperative Multi-Agent Planning [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/60159

    Defeasible-argumentation-based multi-agent planning

    Full text link
    [EN] This paper presents a planning system that uses defeasible argumentation to reason about context information during the construction of a plan. The system is designed to operate in cooperative multi-agent environments where agents are endowed with planning and argumentation capabilities. Planning allows agents to contribute actions to the construction of the plan, and argumentation is the mechanism agents use to defend or attack planning choices according to their beliefs. We present the formalization of the model and provide a novel specification of the qualification problem. The multi-agent planning system, which is designed to be domain-independent, is evaluated on two planning tasks from the problem suites of the International Planning Competition. We compare our system with a non-argumentative planning framework and with a different approach to combining planning and argumentation. The results show that our system obtains less costly and more robust solution plans.
    This work has been partly supported by the Spanish MINECO under project TIN2014-55637-C2-2-R and the Valencian project PROMETEO II/2013/019.
    Pajares Ferrando, S.; Onaindia De La Rivaherrera, E. (2017). Defeasible-argumentation-based multi-agent planning. Information Sciences. 411:1-22. https://doi.org/10.1016/j.ins.2017.05.014
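
    As a rough illustration of the plan-selection idea described above (agents attack or defend planning choices and the system prefers less costly, more robust plans), here is a hypothetical sketch; the attack/rebuttal bookkeeping and the data model are assumptions made for this example, not the paper's formalization.

        # Illustrative sketch: each agent may attack an action it believes will fail and
        # may rebut attacks on actions it can defend; plans with fewer unanswered attacks
        # are preferred, breaking ties on plan cost.

        from dataclasses import dataclass

        @dataclass
        class Plan:
            name: str
            actions: list     # ordered action names
            cost: float

        def unrebutted_attacks(plan, agents):
            """Count planned actions that some agent attacks and no agent rebuts."""
            count = 0
            for action in plan.actions:
                attacked = any(action in ag["attacks"] for ag in agents)
                rebutted = any(action in ag["rebuttals"] for ag in agents)
                if attacked and not rebutted:
                    count += 1
            return count

        def select_plan(candidates, agents):
            """Prefer robustness (fewer unanswered attacks) first, then lower cost."""
            return min(candidates, key=lambda p: (unrebutted_attacks(p, agents), p.cost))

        agents = [
            {"attacks": {"drive_highway"}, "rebuttals": set()},   # believes the highway is blocked
            {"attacks": set(), "rebuttals": {"load_truck"}},
        ]
        plans = [Plan("p1", ["load_truck", "drive_highway"], 10.0),
                 Plan("p2", ["load_truck", "drive_local"], 12.0)]
        print(select_plan(plans, agents).name)   # -> "p2": p1 is cheaper but has an unanswered attack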

    Coordinación basada en argumentación en sistemas multi-agente

    Get PDF
    This article briefly describes part of the research and development work being carried out within the "Agents and Multi-agent Systems" line of the LIDIC, jointly with researchers from the LIDIA. The aim of this work is to present the main topics currently being addressed in the area of cognitive agents, in order to enable an exchange of experiences with other researchers participating in the Workshop who work on related research lines. One of the main goals of this line is the study and development of coordination models for agents that are part of a multi-agent system; at present, one of the partial goals of the working group is to analyse the use of argumentation techniques in high-level coordination models. This study will be approached from a combined theoretical and practical standpoint, covering theoretical models of multi-agent systems and their application to complex real-world problems. In particular, the emphasis will be placed on problems involving the coordination of multiple robots.
    Track: Agentes y Sistemas Inteligentes. Red de Universidades con Carreras en Informática (RedUNCI).

    Logic-based Technologies for Multi-agent Systems: A Systematic Literature Review

    Get PDF
    Precisely when the success of sub-symbolic artificial intelligence (AI) techniques is leading many non-computer scientists and non-technical media to identify them with the whole of AI, symbolic approaches are getting more and more attention as those that could make AI amenable to human understanding. Given the recurring cycles in the history of AI, we expect that a revamp of technologies often tagged as “classical AI” – in particular, logic-based ones – will take place in the next few years. On the other hand, agents and multi-agent systems (MAS) have been at the core of the design of intelligent systems since their very beginning, and their long-term connection with logic-based technologies, which characterised their early days, might open new ways to engineer explainable intelligent systems. This is why understanding the current status of logic-based technologies for MAS is nowadays of paramount importance. Accordingly, this paper aims at providing a comprehensive view of those technologies by making them the subject of a systematic literature review (SLR). The resulting technologies are discussed and evaluated from two different perspectives: the MAS perspective and the logic-based one.

    A dialogical model for collaborative decision making based on compromises

    Get PDF
    In this paper, we deal with group decision making and propose a model of dialogue among agents that have different knowledge and preferences but are willing to compromise in order to collaboratively reach a common decision. Agents participating in the dialogue use internal reasoning to resolve conflicts that emerge in their knowledge during communication and to reach the decision that requires the fewest compromises. Our approach has significant potential, as it may allow targeted knowledge exchange, partial disclosure of information, and efficient or informed decision making, depending on the topic of the agents' discussion.
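
    The core selection criterion, a decision that requires the least compromise overall, can be illustrated with a small sketch; counting a compromise as the number of options an agent strictly prefers to the chosen one is an assumption made for this example rather than the paper's definition.

        # Illustrative sketch: each agent supplies a ranking (best first) over the same
        # options; the group picks the option whose summed rank positions are smallest,
        # i.e. the one requiring the fewest compromises overall.

        def total_compromise(preferences, option):
            """Sum, over agents, of how many options each agent prefers to `option`."""
            return sum(ranking.index(option) for ranking in preferences)

        def least_compromise_decision(preferences):
            options = preferences[0]
            return min(options, key=lambda opt: total_compromise(preferences, opt))

        prefs = [["museum", "cinema", "park"],    # agent 1's ranking, best option first
                 ["cinema", "park", "museum"],
                 ["cinema", "museum", "park"]]
        print(least_compromise_decision(prefs))   # -> "cinema" (total compromise 1)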

    Studying the Impact of Negotiation Environments on Negotiation Teams' Performance

    Get PDF
    [EN] In this article, we study the impact of the negotiation environment on the performance of several intra-team strategies (team dynamics) for agent-based negotiation teams that negotiate with an opponent. An agent-based negotiation team is a group of agents that joins together as a party because they share common interests in the negotiation at hand. We show experimentally how negotiation environment conditions such as the deadline of both parties, the concession speed of the opponent, similarity among team members, and team size affect performance metrics such as the minimum utility of team members, the average utility of team members, and the number of negotiation rounds. Our goal is to identify which intra-team strategies work better under different environmental conditions, in order to provide useful knowledge for team members to select appropriate intra-team strategies according to those conditions.
    This work is supported by TIN2011-27652-C03-01, TIN2009-13839-C03-01, CSD2007-00022 of the Spanish Government, and FPU grant AP2008-00600 awarded to Victor Sanchez-Anguix. We would also like to thank the anonymous reviewers and the attendees of AAMAS 2011 who helped us improve our previous work, making this present work possible.
    Sanchez-Anguix, V.; Julian Inglada, VJ.; Botti, V.; García-Fornes, A. (2013). Studying the impact of negotiation environments on negotiation teams' performance. Information Sciences. 219:17-40. https://doi.org/10.1016/j.ins.2012.07.017
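
    The performance metrics mentioned above are straightforward to compute once a negotiation finishes; the sketch below, with made-up utility values, simply aggregates them and is not taken from the article.

        # Illustrative helper: given the utility each team member assigns to the final
        # agreement and the number of rounds used, report the metrics studied above.

        def team_metrics(member_utilities, rounds):
            return {
                "min_utility": min(member_utilities),
                "avg_utility": round(sum(member_utilities) / len(member_utilities), 3),
                "rounds": rounds,
            }

        print(team_metrics([0.62, 0.71, 0.55, 0.68], rounds=14))
        # -> {'min_utility': 0.55, 'avg_utility': 0.64, 'rounds': 14}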

    Reaching unanimous agreements within agent-based negotiation teams with linear and monotonic utility functions

    Full text link
    [EN] In this article, an agent-based negotiation model for negotiation teams that negotiate a deal with an opponent is presented. Agent-based negotiation teams are groups of agents that join together as a single negotiation party because they share an interest related to the negotiation process. The model relies on a trusted mediator that coordinates and helps team members in the decisions they have to take during the negotiation process: which offer is sent to the opponent, and whether the offers received from the opponent are accepted. The main strength of the proposed negotiation model is that it guarantees unanimity within team decisions, since decisions report a utility to each team member that is greater than or equal to that member's aspiration level at each negotiation round. This work analyzes how unanimous decisions are taken within the team and the robustness of the model against different types of manipulation. An empirical evaluation is also performed to study the impact of the different parameters of the model.
    This work is supported by TIN2008-04446, PROMETEO/2008/051, TIN2009-13839-C03-01, CSD2007-00022 of the Spanish government, and FPU grant AP2008-00600 awarded to Victor Sanchez-Anguix. This paper was recommended by Associate Editor X. Wang.
    Sanchez-Anguix, V.; Julian Inglada, VJ.; Botti, V.; García-Fornes, A. (2012). Reaching unanimous agreements within agent-based negotiation teams with linear and monotonic utility functions. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics. 42(3):778-792. https://doi.org/10.1109/TSMCB.2011.2177658
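
    The unanimity rule described above (an offer is acceptable only if every member's utility meets that member's current aspiration level) can be sketched as follows; the linear additive utilities, the linear concession of aspiration levels and all numeric values are assumptions made for illustration, not the published model.

        # Illustrative sketch of the mediator's unanimity check: accept an opponent
        # offer only if every team member's utility for it reaches that member's
        # aspiration level at the current round.

        def utility(weights, offer):
            """Linear additive utility over issues valued in [0, 1]."""
            return sum(w * offer[issue] for issue, w in weights.items())

        def aspiration(initial, reservation, round_, deadline):
            """Aspiration level concedes linearly from `initial` towards `reservation`."""
            return initial - (initial - reservation) * min(round_ / deadline, 1.0)

        def team_accepts(members, offer, round_, deadline):
            """Unanimity: every member must reach its aspiration level this round."""
            return all(
                utility(m["weights"], offer)
                >= aspiration(m["initial"], m["reservation"], round_, deadline)
                for m in members
            )

        members = [
            {"weights": {"price": 0.7, "delivery": 0.3}, "initial": 0.9, "reservation": 0.5},
            {"weights": {"price": 0.4, "delivery": 0.6}, "initial": 0.8, "reservation": 0.4},
        ]
        offer = {"price": 0.7, "delivery": 0.8}
        print(team_accepts(members, offer, round_=6, deadline=10))   # -> True: both aspirations are met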

    Argumentation-based methods for multi-perspective cooperative planning

    Get PDF
    Through cooperation, agents can transcend their individual capabilities and achieve goals that would otherwise be unattainable. Existing multiagent planning work considers each agent’s action capabilities, but does not account for distributed knowledge and the incompatible views agents may have of the planning domain. These divergent views can result from faulty sensors, local and incomplete knowledge, or outdated information, or arise simply because each agent has drawn different inferences and their beliefs are not aligned. This thesis is concerned with Multi-Perspective Cooperative Planning (MPCP), the problem of synthesising a plan for multiple agents which share a goal but hold different views about the state of the environment and the specification of the actions they can perform to affect it. Reaching agreement on a mutually acceptable plan is important, since cautious autonomous agents will not subscribe to plans that they individually believe to be inappropriate or even potentially hazardous. We specify the MPCP problem by adapting standard set-theoretic planning notation. Based on argumentation theory, we define a new notion of plan acceptability and introduce a novel formalism that combines defeasible logic programming and situation calculus, enabling the succinct axiomatisation of contradictory planning theories and allowing deductive argumentation-based inference. Our work bridges research in argumentation, reasoning about action and classical planning. We present practical methods for reasoning and planning with MPCP problems that exploit the inherent structure of planning domains and efficient planning heuristics. Finally, in order to allow distribution of tasks, we introduce a family of argumentation-based dialogue protocols that enable the agents to reach agreement on plans in a decentralised manner. Based on the concrete foundation of deductive argumentation, we analytically investigate important properties of our methods, illustrating the correctness of the proposed planning mechanisms. We also empirically evaluate the efficiency of our algorithms in benchmark planning domains. Our results illustrate that our methods can synthesise acceptable plans within reasonable time in large-scale domains, while maintaining a level of expressiveness comparable to that of modern automated planning.
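
    As a rough illustration of the mutual-acceptability requirement described above (each agent will only subscribe to a plan it believes is executable and goal-achieving under its own view), here is a hypothetical set-theoretic sketch; the state and action encoding and the example values are assumptions for illustration, not the thesis's argumentation-based formalism.

        # Illustrative sketch: each agent simulates the plan under its own view of the
        # initial state and the actions' set-theoretic preconditions/effects; the team
        # only subscribes to plans every agent believes will work.

        def simulate(plan, state, actions):
            """Apply actions set-theoretically; return the final state, or None if a precondition fails."""
            state = set(state)
            for name in plan:
                pre, add, dele = actions[name]
                if not pre <= state:
                    return None
                state = (state - dele) | add
            return state

        def mutually_acceptable(plan, goal, agent_views):
            """Every agent, under its own view, must find the plan executable and goal-achieving."""
            for view in agent_views:
                final = simulate(plan, view["init"], view["actions"])
                if final is None or not goal <= final:
                    return False
            return True

        goal = {"at_B"}
        views = [
            {"init": {"at_A", "road_open"},
             "actions": {"drive": ({"at_A", "road_open"}, {"at_B"}, {"at_A"})}},
            {"init": {"at_A"},                                  # this agent believes the road is closed
             "actions": {"drive": ({"at_A", "road_open"}, {"at_B"}, {"at_A"})}},
        ]
        print(mutually_acceptable(["drive"], goal, views))      # -> False: one agent rejects the plan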