
    Defeasible Argumentation for Cooperative Multi-Agent Planning

    Full text link
    Thesis by compendium of publications. [EN] Multi-Agent Systems (MAS), Argumentation and Automated Planning are three lines of investigation within the field of Artificial Intelligence (AI) that have been extensively studied in recent years. A MAS is a system composed of multiple intelligent agents that interact with each other; it is used to solve problems whose solution requires the presence of several functional and autonomous entities, and it can tackle problems that are difficult or impossible for an individual agent to solve. Argumentation, in turn, refers to the iterative construction and exchange of arguments among a group of agents with the aim of reasoning for or against a particular proposal. In Automated Planning, given an initial state of the world, a goal to achieve and a set of possible actions, the aim is to build programs that automatically compute a plan that reaches the goal state from the initial state. The main objective of this thesis is to propose a model that combines and integrates these three research lines. More specifically, we consider a MAS as a team of agents with planning and argumentation capabilities. Given a planning problem with a set of objectives, the (cooperative) agents jointly construct a plan to satisfy the objectives of the problem while they reason defeasibly about the environmental conditions so as to provide a stronger guarantee that the plan will succeed at execution time. The planning knowledge is thus used to build a plan, while the agents' beliefs about the impact of unexpected environmental conditions are used to select the plan that is least likely to fail at execution time. The system is therefore intended to return collaborative plans that are more robust and better adapted to the circumstances of the execution environment. In this thesis we design, build and evaluate an argumentation model based on defeasible reasoning for a cooperative multi-agent planning system. The designed system is domain-independent, demonstrating the ability to solve problems in different application contexts; specifically, it has been tested in context-sensitive domains such as Ambient Intelligence as well as on problems used in the International Planning Competitions. Pajares Ferrando, S. (2016). Defeasible Argumentation for Cooperative Multi-Agent Planning [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/60159
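
    As a concrete illustration of the plan-selection idea described above, the following minimal sketch pools the team's defeasible beliefs about context conditions and prefers the candidate plan with the fewest undefeated objections, breaking ties by cost. It is not the thesis' formal argumentation model: the Belief and Plan classes, the numeric strength used as a crude defeat criterion, and all domain literals are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    claim: str        # e.g. "cleaning_robot_active"
    attacks: str      # the assumption (or claim) this belief argues against
    strength: int     # crude preference standing in for a defeat criterion

@dataclass
class Plan:
    name: str
    cost: float
    assumptions: frozenset   # context conditions the plan relies on

def undefeated_attacks(plan, beliefs):
    """Count attacks on the plan's assumptions that no stronger belief rebuts."""
    count = 0
    for attack in (b for b in beliefs if b.attacks in plan.assumptions):
        defended = any(d.attacks == attack.claim and d.strength > attack.strength
                       for d in beliefs)
        if not defended:
            count += 1
    return count

def select_plan(plans, beliefs):
    """Prefer the plan least likely to fail at execution time; break ties by cost."""
    return min(plans, key=lambda p: (undefeated_attacks(p, beliefs), p.cost))

if __name__ == "__main__":
    plans = [
        Plan("via_corridor", cost=3, assumptions=frozenset({"corridor_clear"})),
        Plan("via_garden", cost=5, assumptions=frozenset({"door_unlocked"})),
    ]
    beliefs = [  # pooled defeasible beliefs of the team
        Belief("cleaning_robot_active", attacks="corridor_clear", strength=2),
        Belief("janitor_finished_early", attacks="cleaning_robot_active", strength=3),
    ]
    print(select_plan(plans, beliefs).name)   # -> via_corridor
```

    The tie-break mirrors the point made in the abstract: planning knowledge fixes which plans are solutions, while beliefs about the environment decide which solution is least likely to fail.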

    Defeasible-argumentation-based multi-agent planning

    Full text link
    [EN] This paper presents a planning system that uses defeasible argumentation to reason about context information during the construction of a plan. The system is designed to operate in cooperative multi-agent environments where agents are endowed with planning and argumentation capabilities. Planning allows agents to contribute actions to the construction of the plan, and argumentation is the mechanism agents use to defend or attack planning choices according to their beliefs. We present the formalization of the model and provide a novel specification of the qualification problem. The multi-agent planning system, which is designed to be domain-independent, is evaluated on two planning tasks from the problem suites of the International Planning Competition. We compare our system with a non-argumentative planning framework and with a different approach to planning and argumentation. The results show that our system obtains less costly and more robust solution plans. This work has been partly supported by the Spanish MINECO under project TIN2014-55637-C2-2-R and the Valencian project PROMETEO II/2013/019. Pajares Ferrando, S.; Onaindia De La Rivaherrera, E. (2017). Defeasible-argumentation-based multi-agent planning. Information Sciences. 411:1-22. https://doi.org/10.1016/j.ins.2017.05.014
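
    As a rough, generic illustration of the qualification-problem idea mentioned above (the paper's own specification is different and richer), the sketch below treats an action as disqualified when one of its disqualifying literals tentatively follows from the shared context. The simple "no applicable counter-rule" check stands in for DeLP's warrant procedure, and every rule and literal is invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DefeasibleRule:
    head: str           # concluded literal, e.g. "~road_usable"
    body: frozenset     # literals the rule presumes about the shared context

def complement(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def warranted(literal, rules, context):
    """A literal tentatively holds if an applicable rule concludes it and no
    applicable rule concludes its complement (a crude stand-in for DeLP warrant)."""
    pro = any(r.head == literal and r.body <= context for r in rules)
    con = any(r.head == complement(literal) and r.body <= context for r in rules)
    return pro and not con

def action_disqualified(disqualifiers, rules, context):
    """The action cannot be presumed executable if some disqualifying literal holds."""
    return any(warranted(d, rules, context) for d in disqualifiers)

if __name__ == "__main__":
    context = {"heavy_rain", "night"}
    rules = [
        DefeasibleRule("~road_usable", frozenset({"heavy_rain"})),
        DefeasibleRule("road_usable", frozenset({"heavy_rain", "road_drained"})),
    ]
    # drive_truck is disqualified whenever "~road_usable" is warranted
    print(action_disqualified({"~road_usable"}, rules, context))   # -> True
```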

    Logic-based Technologies for Multi-agent Systems: A Systematic Literature Review

    Get PDF
    Precisely when the success of sub-symbolic artificial intelligence (AI) techniques leads many non-computer scientists and non-technical media to identify them with AI as a whole, symbolic approaches are getting more and more attention as those that could make AI amenable to human understanding. Given the recurring cycles in AI history, we expect that a revamp of technologies often tagged as “classical AI” (logic-based ones in particular) will take place in the next few years. On the other hand, agents and multi-agent systems (MAS) have been at the core of the design of intelligent systems since their very beginning, and their long-term connection with logic-based technologies, which characterised their early days, might open new ways to engineer explainable intelligent systems. This is why understanding the current status of logic-based technologies for MAS is nowadays of paramount importance. Accordingly, this paper aims at providing a comprehensive view of those technologies by making them the subject of a systematic literature review (SLR). The resulting technologies are discussed and evaluated from two different perspectives: the MAS perspective and the logic-based one.

    Context-Aware Multi-Agent Planning in intelligent environments

    Full text link
    A system is context-aware if it can extract, interpret and use context information and adapt its functionality to the current context of use. Multi-agent planning generalizes the problem of planning to domains where several agents plan and act together and share resources, activities and goals. This contribution presents a practical extension of a formal theoretical model for Context-Aware Multi-Agent Planning based upon an argumentation-based defeasible logic. Our framework, named CAMAP, is implemented on a platform for open multi-agent systems and has been experimentally tested, among others, in applications of ambient intelligence in the field of health care. CAMAP is based on a multi-agent partial-order planning paradigm in which agents have diverse abilities and use argumentation-based defeasible contextual reasoning to support their own beliefs and refute the beliefs of the others according to their context knowledge during the plan search process. CAMAP proves to be an adequate approach to tackle ambient-intelligence problems, as it brings together in a single framework the ability to plan while allowing agents to put forward arguments that support or question the accuracy, unambiguity and reliability of the context-aware information. This work is mainly supported by the Spanish Ministry of Science and Education under the FPU Grant Reference AP2009-1896 awarded to Sergio Pajares Ferrando, and projects TIN2011-27652-C03-01 and Consolider Ingenio 2010 CSD2007-00022. Pajares Ferrando, S.; Onaindia De La Rivaherrera, E. (2013). Context-Aware Multi-Agent Planning in intelligent environments. Information Sciences. 227:22-42. https://doi.org/10.1016/j.ins.2012.11.021
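
    Since CAMAP builds on a multi-agent partial-order planning paradigm, a bare-bones sketch of the underlying plan structure may help fix the vocabulary: steps, ordering constraints, causal links, and the detection of steps that threaten a link. This is generic POP bookkeeping under assumed names, not CAMAP's implementation, and the argumentation layer over context information is omitted.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Step:
    name: str
    pre: frozenset = frozenset()
    add: frozenset = frozenset()
    delete: frozenset = frozenset()

@dataclass
class PartialOrderPlan:
    steps: list
    orderings: set = field(default_factory=set)   # pairs of step names (before, after)
    links: set = field(default_factory=set)       # (producer, condition, consumer)

    def threats(self):
        """Steps that delete a linked condition without being ordered outside the link."""
        found = []
        for producer, cond, consumer in self.links:
            for s in self.steps:
                if s in (producer, consumer) or cond not in s.delete:
                    continue
                if (s.name, producer.name) in self.orderings:    # demoted before the link
                    continue
                if (consumer.name, s.name) in self.orderings:    # promoted after the link
                    continue
                found.append((s.name, cond))
        return found

if __name__ == "__main__":
    cook = Step("cook", add=frozenset({"dinner_ready"}))
    clean = Step("clean_kitchen", delete=frozenset({"dinner_ready"}))
    serve = Step("serve", pre=frozenset({"dinner_ready"}))
    plan = PartialOrderPlan(steps=[cook, clean, serve],
                            links={(cook, "dinner_ready", serve)})
    print(plan.threats())   # -> [('clean_kitchen', 'dinner_ready')]
```

    In a CAMAP-style framework it is at refinement choices like these (supporting an open condition, resolving a threat) that agents would put forward arguments for or against the contextual conditions the plan relies on.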

    t-DeLP: An argumentation-based Temporal Defeasible Logic Programming framework

    Get PDF
    The aim of this paper is to propose an argumentation-based defeasible logic, called t-DeLP, that focuses on forward temporal reasoning for causal inference. We extend the language of the DeLP logical framework by associating temporal parameters with literals. A temporal logic program is a set of basic temporal facts and (strict or defeasible) durative rules. Facts and rules combine into durative arguments representing temporal processes. As usual, a dialectical procedure determines which arguments are undefeated, and hence which literals are warranted, i.e. defeasibly follow from the program. t-DeLP, though, differs slightly from DeLP in order to accommodate temporal aspects such as the persistence of facts. The output of a t-DeLP program is a set of warranted literals, which is first shown to be non-contradictory and closed under sub-arguments. This basic framework is then modified to deal with programs whose strict rules encode mutex constraints. The resulting framework is shown to satisfy stronger logical properties such as indirect consistency and closure. © 2013 Springer Science+Business Media Dordrecht. This work has been partially supported by the Spanish MICINN projects CONSOLIDER-INGENIO 2010 Agreement Technologies CSD2007-00022 and ARINF TIN2009-14704-C03-03, with FEDER funds of the EU, and by the Generalitat de Catalunya grant 2009-SGR-1434. Peer reviewed.
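
    A toy rendering of the temporal flavour described above: literals carry a time stamp, durative rules derive their head some delay after their body holds, and facts are presumed to persist to the next time point. The sketch deliberately omits t-DeLP's dialectical machinery (strict vs. defeasible rules, defeat, warrant, mutex constraints), and all identifiers are made up.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TLiteral:
    atom: str
    time: int

@dataclass(frozen=True)
class DurativeRule:
    head: str
    body: tuple     # atoms that must hold at the same time point
    delay: int      # time elapsed between the body and the derived head

def forward_closure(facts, rules, horizon):
    """Naively apply durative rules and a persistence presumption up to `horizon`."""
    derived = set(facts)
    for t in range(horizon + 1):
        while True:   # saturate the current time point
            now = {l.atom for l in derived if l.time == t}
            new = {TLiteral(r.head, t + r.delay)
                   for r in rules if set(r.body) <= now} - derived
            if not new:
                break
            derived |= new
        # persistence: an atom true at t is presumed to still hold at t + 1
        derived |= {TLiteral(a, t + 1) for a in now}
    return derived

if __name__ == "__main__":
    facts = {TLiteral("fever", 0)}
    rules = [DurativeRule("take_antipyretic", ("fever",), delay=0),
             DurativeRule("temperature_normal", ("take_antipyretic",), delay=2)]
    closure = forward_closure(facts, rules, horizon=3)
    print(TLiteral("temperature_normal", 2) in closure)   # -> True
```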

    Reasoning about Cyber Threat Actors

    Get PDF
    Reasoning about the activities of cyber threat actors is critical to defending against cyber attacks. However, this task is difficult for a variety of reasons. In simple terms, it is difficult to determine who the attacker is, what the attacker's goals are, and how the attacks will be carried out. These three questions essentially entail understanding the attacker's use of deception, the capabilities available, and the intent of launching the attack. These three issues are highly inter-related. If an adversary can hide their intent, they can better deceive a defender. If an adversary's capabilities are not well understood, then determining their goals becomes difficult, as the defender is uncertain whether the adversary has the necessary tools to accomplish them. However, the understanding of these aspects is also mutually supportive: if we have a clear picture of capabilities, intent can be better deciphered, and if we understand intent and capabilities, a defender may be able to see through deception schemes. In this dissertation, I present three pieces of work to tackle these questions and obtain a better understanding of cyber threats. First, we introduce a new reasoning framework to address deception. We evaluate the framework by building a dataset from a DEFCON capture-the-flag exercise to identify the person or group responsible for a cyber attack. We demonstrate that the framework not only handles cases of deception but also provides transparent decision making in identifying the threat actor. The second task uses a cognitive learning model to determine the intent, i.e. the goals of the threat actor on the target system. The third task looks at understanding the capabilities of threat actors to target systems by identifying at-risk systems from hacker discussions on darkweb websites. To achieve this task we gather discussions from more than 300 darkweb websites relating to malicious hacking. Doctoral Dissertation, Computer Engineering, 201

    Computer Science and Technology Series : XV Argentine Congress of Computer Science. Selected papers

    Get PDF
    CACIC'09 was the fifteenth Congress in the CACIC series. It was organized by the School of Engineering of the National University of Jujuy. The Congress included 9 workshops with 130 accepted papers, 1 main conference, 4 invited tutorials, different meetings related to Computer Science education (professors, PhD students, curricula) and an international school with 5 courses. CACIC 2009 was organized following the traditional Congress format, with 9 workshops covering a diversity of dimensions of Computer Science research. Each topic was supervised by a committee of three chairs from different universities. The call for papers attracted a total of 267 submissions. An average of 2.7 review reports were collected for each paper, for a grand total of 720 review reports that involved about 300 different reviewers. A total of 130 full papers were accepted, and 20 of them were selected for this book. Red de Universidades con Carreras en Informática (RedUNCI).

    Argumentation-based methods for multi-perspective cooperative planning

    Get PDF
    Through cooperation, agents can transcend their individual capabilities and achieve goals that would otherwise be unattainable. Existing multiagent planning work considers each agent's action capabilities, but does not account for distributed knowledge and the incompatible views agents may have of the planning domain. These divergent views can result from faulty sensors, local and incomplete knowledge, or outdated information, or simply from each agent having drawn different inferences so that their beliefs are not aligned. This thesis is concerned with Multi-Perspective Cooperative Planning (MPCP), the problem of synthesising a plan for multiple agents which share a goal but hold different views about the state of the environment and the specification of the actions they can perform to affect it. Reaching agreement on a mutually acceptable plan is important, since cautious autonomous agents will not subscribe to plans that they individually believe to be inappropriate or even potentially hazardous. We specify the MPCP problem by adapting standard set-theoretic planning notation. Based on argumentation theory we define a new notion of plan acceptability, and introduce a novel formalism combining defeasible logic programming and situation calculus, which enables the succinct axiomatisation of contradictory planning theories and allows deductive argumentation-based inference. Our work bridges research in argumentation, reasoning about action and classical planning. We present practical methods for reasoning and planning with MPCP problems that exploit the inherent structure of planning domains and efficient planning heuristics. Finally, in order to allow distribution of tasks, we introduce a family of argumentation-based dialogue protocols that enable the agents to reach agreement on plans in a decentralised manner. Based on the concrete foundation of deductive argumentation we analytically investigate important properties of our methods, illustrating the correctness of the proposed planning mechanisms. We also empirically evaluate the efficiency of our algorithms in benchmark planning domains. Our results illustrate that our methods can synthesise acceptable plans within reasonable time in large-scale domains, while maintaining a level of expressiveness comparable to that of modern automated planning.
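
    To give the set-theoretic framing a concrete shape, the sketch below validates a candidate plan against each agent's own initial state and action model and only adopts plans that reach the goal in every view. It is a deliberately simplified stand-in for the thesis' argumentation-based notion of plan acceptability, and all domain names are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    pre: frozenset
    add: frozenset
    delete: frozenset

def executes(plan, init, goal, actions):
    """Classical set-theoretic plan validation against one agent's view."""
    state = set(init)
    for name in plan:
        a = actions.get(name)
        if a is None or not a.pre <= state:
            return False
        state = (state - a.delete) | a.add
    return goal <= state

def mutually_acceptable(plan, goal, views):
    """A plan all agents accept, i.e. one that is valid in every agent's view."""
    return all(executes(plan, init, goal, actions) for init, actions in views)

if __name__ == "__main__":
    goal = {"package_at_depot"}
    move = Action("move", frozenset({"truck_at_dock"}), frozenset({"truck_at_depot"}),
                  frozenset({"truck_at_dock"}))
    unload = Action("unload", frozenset({"truck_at_depot", "package_loaded"}),
                    frozenset({"package_at_depot"}), frozenset({"package_loaded"}))
    view_a = ({"truck_at_dock", "package_loaded"}, {"move": move, "unload": unload})
    view_b = ({"truck_at_dock"}, {"move": move, "unload": unload})  # b doubts the load
    print(mutually_acceptable(["move", "unload"], goal, [view_a, view_b]))  # -> False
```

    Here view_b rejects the plan because it does not believe the package is loaded, which is the kind of divergence the thesis addresses through argumentation-based dialogue rather than by simply requiring unanimity up front.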