8 research outputs found

    An architecture for rational agents interacting with complex environments

    Get PDF
    In this paper we sketch an agent architecture suitable for use as a tool for exploring agent perception and multiagent interaction. At present there is no strict correspondence between the theoretical work on rational agents and their implementation; in this respect, it is our intention to reach a good trade-off between expressiveness and implementability. Track: Artificial Intelligence. Red de Universidades con Carreras en Informática (RedUNCI).

    Paracomplete logic Kl: natural deduction, its automation, complexity and applications

    Get PDF
    In the development of many modern software solutions where the underlying systems are complex, dynamic and heterogeneous, the significance of specification-based verification is well accepted. However, parts of the specification are often not known, yet reasoning based on such incomplete specifications is very desirable. Here, paracomplete logics seem to be an appropriate formal setup: in contrast to Tarski's theory of truth with its principle of bivalence, in these logics a statement and its negation may both be untrue. An immediate consequence is that the law of excluded middle becomes invalid. In this paper we show how to apply an automatic proof-search procedure for the paracomplete logic Kl to reason about incomplete information systems. We provide an original account of the complexity of natural deduction systems, bringing us closer to the efficiency of the presented proof-search algorithm. Moreover, we turn the management of assumptions into an advantage, showing the applicability of the proposed technique to assume-guarantee reasoning.
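For intuition, a simplified stand-in (strong Kleene three-valued semantics, not the Kl system itself) exhibits the same paracomplete behaviour the abstract describes: with a truth-value gap U, a statement and its negation can both fail to be true, so p ∨ ¬p is no longer a tautology.

```python
# Strong Kleene three-valued semantics: truth values True, False,
# and U (an "undefined" truth-value gap).
T, F, U = "T", "F", "U"

def neg(a):
    # Negation swaps T and F and leaves the gap U unchanged.
    return {T: F, F: T, U: U}[a]

def disj(a, b):
    # Disjunction: true if either side is true, false only if both
    # sides are false, otherwise undefined.
    if T in (a, b):
        return T
    if a == F and b == F:
        return F
    return U

# The law of excluded middle p OR not-p fails to be a tautology:
for p in (T, F, U):
    print(p, disj(p, neg(p)))  # for p = U the disjunction is U, not T
```

The gap value is exactly the case the abstract mentions: when a specification leaves a statement's truth open, neither it nor its negation is true.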

    Defeasible-argumentation-based multi-agent planning

    Full text link
    This paper presents a planning system that uses defeasible argumentation to reason about context information during the construction of a plan. The system is designed to operate in cooperative multi-agent environments where agents are endowed with planning and argumentation capabilities. Planning allows agents to contribute actions to the construction of the plan, and argumentation is the mechanism that agents use to defend or attack the planning choices according to their beliefs. We present the formalization of the model and provide a novel specification of the qualification problem. The multi-agent planning system, which is designed to be domain-independent, is evaluated with two planning tasks from the problem suites of the International Planning Competition. We compare our system with a non-argumentative planning framework and with a different approach to planning and argumentation. The results show that our system obtains less costly and more robust solution plans.
    This work has been partly supported by the Spanish MINECO under project TIN2014-55637-C2-2-R and the Valencian project PROMETEO II/2013/019.
    Pajares Ferrando, S.; Onaindia De La Rivaherrera, E. (2017). Defeasible-argumentation-based multi-agent planning. Information Sciences, 411:1-22. https://doi.org/10.1016/j.ins.2017.05.014
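As a loose illustration of the defend-or-attack mechanism (not the authors' actual formalism), the sketch below evaluates defeasible arguments about a planning choice under grounded-style semantics: an argument is accepted once all of its defeaters are rejected, and rejected once some accepted argument defeats it. All argument names and the preference scheme are hypothetical.

```python
# Minimal sketch of defeasible argumentation over a planning choice.
# Each argument has a conclusion, a list of conclusions it attacks,
# and a preference level; b defeats a when b attacks a's conclusion
# and is at least as preferred as a.
from dataclasses import dataclass, field

@dataclass
class Argument:
    name: str
    conclusion: str
    attacks: list = field(default_factory=list)  # conclusions argued against
    preference: int = 0

def grounded_extension(arguments):
    # Grounded-style evaluation: accept arguments whose defeaters are
    # all rejected; reject arguments with an accepted defeater.
    defeaters = {
        a.name: [b.name for b in arguments
                 if a.conclusion in b.attacks and b.preference >= a.preference]
        for a in arguments
    }
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a.name in accepted or a.name in rejected:
                continue
            if all(d in rejected for d in defeaters[a.name]):
                accepted.add(a.name)
                changed = True
            elif any(d in accepted for d in defeaters[a.name]):
                rejected.add(a.name)
                changed = True
    return accepted

# Agent 1 proposes route A; agent 2 believes route A is icy; agent 3's
# fresher forecast defeats the icy-road argument, reinstating the plan.
args = [
    Argument("plan_A", "use_route_A", [], 1),
    Argument("icy", "route_A_unsafe", ["use_route_A"], 2),
    Argument("forecast", "roads_clear", ["route_A_unsafe"], 3),
]
print(sorted(grounded_extension(args)))  # ['forecast', 'plan_A']
```

The reinstatement step (the plan survives because its attacker is itself defeated) is what lets beliefs about the execution context filter out fragile plans.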

    Defeasible Argumentation for Cooperative Multi-Agent Planning

    Full text link
    Thesis by compendium. Multi-Agent Systems (MAS), Argumentation and Automated Planning are three lines of investigation within the field of Artificial Intelligence (AI) that have been extensively studied over the last years. A MAS is a system composed of multiple intelligent agents that interact with each other; it is used to solve problems whose solution requires the presence of various functional and autonomous entities, problems that are difficult or impossible for an individual agent to solve. Argumentation refers to the iterative construction and exchange of arguments among a group of agents, with the aim of arguing for or against a particular proposal. In Automated Planning, given an initial state of the world, a goal to achieve, and a set of possible actions, the aim is to build programs that can automatically compute a plan to reach the goal state from the initial state. The main objective of this thesis is to propose a model that combines and integrates these three research lines. More specifically, we consider a MAS as a team of agents with planning and argumentation capabilities. In that sense, given a planning problem with a set of objectives, the (cooperative) agents jointly construct a plan to satisfy the objectives of the problem while they defeasibly reason about the environmental conditions so as to provide a stronger guarantee of success of the plan at execution time. The planning knowledge is thus used to build a plan, while the agents' beliefs about the impact of unexpected environmental conditions are used to select the plan that is least likely to fail at execution time. The system is therefore intended to return collaborative plans that are more robust and better adapted to the circumstances of the execution environment.
    In this thesis, we design, build and evaluate a model of argumentation based on defeasible reasoning for a cooperative multi-agent planning system. The designed system is domain-independent, demonstrating the ability to solve problems in different application contexts. Specifically, the system has been tested on context-sensitive domains such as Ambient Intelligence, as well as on problems used in the International Planning Competitions.
    Pajares Ferrando, S. (2016). Defeasible Argumentation for Cooperative Multi-Agent Planning [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/60159

    Integrating ontologies and argumentation for decision-making in breast cancer

    Get PDF
    This thesis describes some of the problems in providing care for patients with breast cancer. These are then used to motivate the development of an extension to an existing theory of argumentation, which I call the Ontology-based Argumentation Formalism (OAF). The work is assessed in both theoretical and empirical ways. From a clinical perspective, there is a problem with the provision of care. Numerous reports have noted the failure to provide uniformly high quality care, as well as the number of deaths caused by medical care. The medical profession has responded in various ways, but one of these has been the development of Decision Support Systems (DSS). The evidence for the effectiveness of such systems is mixed, and the technical basis of such systems remains open to debate. However, one basis that has been used is argumentation. An important aspect of clinical practice is the use of the evidence from clinical trials, but these trials are based on the results in defined groups of patients. Thus when we use the results of clinical trials to reason about treatments, there are two forms of information we are interested in - the evidence from trials and the relationships between groups of patients and treatments. The relational information can be captured in an ontology about the groups of patients and treatments, and the information from the trials captured as a set of defeasible rules. OAF is an extension of an existing argumentation system, and provides the basis for an argumentation-based Knowledge Representation system which could serve as the basis for future DSS. In OAF, the ontology provides a repository of facts, both asserted and inferred on the basis of formulae in the ontology, as well as defining the language of the defeasible rules. The defeasible rules are used in a process of defeasible reasoning, where monotonic consistent chains of reasoning are used to draw plausible conclusions. 
    This defeasible reasoning is used to generate arguments and counter-arguments. Conflict between arguments is defined in terms of inconsistent formulae in the ontology, and by building on existing proposals for ontology languages we are able to make use of existing technologies for ontological reasoning. There are three substantial areas of novel work: I develop an extension to an existing argumentation formalism, and prove some simple properties of the formalism. I also provide a novel formalisation of the practical syllogism and related hypothetical reasoning, and compare my approach to two other proposals in the literature. I conclude with a substantial case study based on a breast cancer guideline; in order to do so I describe a methodology for comparing formal and informal arguments, and use its results to discuss the strengths and weaknesses of OAF. To develop the case study, I provide a prototype implementation. The prototype uses a novel incremental algorithm to construct arguments, and I give soundness, completeness and time-complexity results. The final chapter of the thesis discusses some general lessons from the development of OAF and gives ideas for future work.
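A toy rendering of this division of labour (hypothetical predicates and rule syntax, not OAF's actual language): the ontology contributes asserted facts and disjointness axioms, defeasible rules derive plausible conclusions, and conflict between arguments is read off ontological inconsistency.

```python
# Ontology part (hypothetical names): asserted facts about individuals,
# plus pairs of predicates the ontology declares mutually inconsistent.
facts = {("patient1", "ER_positive"), ("patient1", "premenopausal")}
disjoint = {frozenset({"give_tamoxifen", "withhold_tamoxifen"})}

# Defeasible rules: antecedent predicates -> plausible conclusion.
rules = [
    ({"ER_positive"}, "give_tamoxifen"),
    ({"premenopausal"}, "withhold_tamoxifen"),
]

def build_arguments(individual):
    # Fire each rule whose antecedents all hold for the individual;
    # each (antecedent, conclusion) pair is a one-step argument.
    held = {p for (i, p) in facts if i == individual}
    return [(ante, concl) for (ante, concl) in rules if ante <= held]

def conflicts(args):
    # Two arguments conflict when the ontology declares their
    # conclusions mutually inconsistent (each pair reported once).
    return [(a, b) for a in args for b in args
            if frozenset({a[1], b[1]}) in disjoint and a[1] < b[1]]

args = build_arguments("patient1")
print(conflicts(args))  # the two treatment arguments attack each other
```

Resolving such conflicts (e.g. by rule priorities or evidence quality) is exactly where the defeasible machinery of the thesis takes over; this sketch only shows how inconsistency in the ontology induces the attack relation.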