15 research outputs found

    Belief Revision in Multi-Agent Systems

    The ability to respond sensibly to changing and conflicting beliefs is an integral part of intelligent agency. To this end, we outline the design and implementation of a Distributed Assumption-based Truth Maintenance System (DATMS) appropriate for controlling cooperative problem solving in a dynamic, real-world multi-agent community. Our DATMS works on the principle of local coherence, which means that different agents can have different perspectives on the same fact provided that these stances are appropriately justified. The belief revision algorithm is presented, the meta-level code needed to ensure that all system-wide queries can be uniquely answered is described, and the DATMS' implementation in a general-purpose multi-agent shell is discussed.
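    The local-coherence principle described above can be sketched in a few lines: each agent keeps its own justification records, so two agents may hold opposite stances on the same fact as long as each stance is backed by that agent's own justifications. All names below are hypothetical, not taken from the paper's implementation.

```python
# Minimal sketch of "local coherence": each agent maintains its own
# justifications, so agents may disagree on a fact provided each stance
# is locally justified within that agent's context.

class Agent:
    def __init__(self, name):
        self.name = name
        self.justifications = {}  # fact -> set of supporting assumptions

    def believe(self, fact, assumptions):
        """Adopt `fact`, supported by a set of assumption labels."""
        self.justifications[fact] = set(assumptions)

    def is_justified(self, fact):
        return bool(self.justifications.get(fact))

    def retract_assumption(self, assumption):
        """Belief revision step: drop every belief whose support used it."""
        self.justifications = {
            f: s for f, s in self.justifications.items() if assumption not in s
        }

a, b = Agent("A"), Agent("B")
a.believe("valve_open", {"sensor_1_ok"})
b.believe("not valve_open", {"sensor_2_ok"})

# Locally coherent: each stance is justified within its own context.
print(a.is_justified("valve_open"), b.is_justified("not valve_open"))  # True True

# New evidence invalidates sensor_1; agent A revises, agent B is untouched.
a.retract_assumption("sensor_1_ok")
print(a.is_justified("valve_open"))  # False
```

    The real DATMS additionally propagates justifications between agents and answers system-wide queries at the meta-level; the sketch only shows why per-agent justification stores make disagreement well-defined.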

    A comparative analysis of different models of belief revision using information from multiple sources

    In this work we analyze the problem of knowledge representation in a collaborative multi-agent system in which agents can obtain new information from others through communication. In particular, we examine several approaches to belief revision in multi-agent systems: we describe the different research lines in this topic, focusing on belief revision using information from multiple sources, and carry out a comparative analysis of the models that use such information.
    Workshop de Agentes y Sistemas Inteligentes (WASI), Red de Universidades con Carreras en Informática (RedUNCI).

    Improving Assumption-based Distributed Belief Revision

    Belief revision is a critical issue in real-world DAI applications. A multi-agent system not only has to cope with the intrinsic incompleteness and constant change of the available knowledge (as in the case of its stand-alone counterparts), but also has to deal with possible conflicts between the agents' perspectives. Each semi-autonomous agent, designed as a combination of a problem solver and an assumption-based truth maintenance system (ATMS), was enriched with improved capabilities: a distributed context management facility allowing the user to dynamically focus on the most pertinent contexts, and a distributed belief revision algorithm with two levels of consistency. This work's contributions include: (i) a concise representation of the shared external facts; (ii) a simple and innovative methodology to achieve distributed context management; and (iii) a reduced inter-agent data exchange format. The different levels of consistency adopted were based on the relevance of the data under consideration: higher-relevance data (detected inconsistencies) was granted global consistency, while less relevant data (system facts) was assigned local consistency. These abilities are fully supported by the standard ATMS functionalities.
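    The two-level consistency policy can be illustrated with a small routing sketch: detected inconsistencies (high relevance) are broadcast to every agent for global consistency, while ordinary system facts stay local. All class and field names here are hypothetical, not from the paper.

```python
# Sketch of relevance-based consistency levels: inconsistencies are shared
# globally; ordinary facts remain locally consistent only.

class Fact:
    def __init__(self, content, is_inconsistency=False):
        self.content = content
        self.is_inconsistency = is_inconsistency

class Agent:
    def __init__(self, name):
        self.name = name
        self.local_store = set()

    def receive(self, fact):
        self.local_store.add(fact.content)

def assert_fact(origin, fact, community):
    """Route a fact according to its relevance level."""
    origin.receive(fact)
    if fact.is_inconsistency:            # high relevance -> global consistency
        for agent in community:
            if agent is not origin:
                agent.receive(fact)
    # low-relevance facts are not propagated (local consistency only)

agents = [Agent("A"), Agent("B"), Agent("C")]
assert_fact(agents[0], Fact("temp=20C"), agents)                      # stays local
assert_fact(agents[1], Fact("nogood{temp}", is_inconsistency=True), agents)

print(sorted(agents[2].local_store))  # ['nogood{temp}'] -- only the global item
```

    The point of the split is economy: only the data whose inconsistency could corrupt cooperative problem solving pays the cost of system-wide propagation.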

    Ontological characterization of agent teams for AGM

    Belief revision models such as the one arising from the joint work of Alchourrón, Gärdenfors and Makinson (known as the AGM paradigm of belief revision) provide an adequate representation of the process by which a belief state is transformed in the presence of new information. In these models, the only subject of belief change is the individual agent.
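    The AGM pattern the abstract refers to is usually stated through the Levi identity, K * φ = (K − ¬φ) + φ: to revise by φ, first contract by its negation, then expand. The toy sketch below uses plain literal strings and a deliberately crude contraction, so it illustrates the identity rather than a full AGM-compliant operator.

```python
# Toy illustration of AGM revision via the Levi identity:
#   revise(K, phi) = expand(contract(K, neg(phi)), phi)
# Beliefs are literal strings; "~" marks negation.

def neg(lit):
    """Negate a literal: 'p' <-> '~p'."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def contract(beliefs, lit):
    """Crude contraction: just drop the literal (no minimal-change machinery)."""
    return {b for b in beliefs if b != lit}

def expand(beliefs, lit):
    """Expansion: add the literal unconditionally."""
    return beliefs | {lit}

def revise(beliefs, lit):
    """Levi identity: contract by the negation, then expand."""
    return expand(contract(beliefs, neg(lit)), lit)

K = {"raining", "~windy"}
K2 = revise(K, "windy")   # new information contradicts ~windy
print(sorted(K2))  # ['raining', 'windy']
```

    A real AGM contraction must choose among maximal consistent subsets (via an entrenchment ordering or selection function); the sketch sidesteps that choice, which is exactly where the interesting theory lives.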

    Argumentation and data-oriented belief revision: On the two-sided nature of epistemic change

    This paper aims to bring together two separate threads in the formal study of epistemic change: belief revision and argumentation theories. Belief revision describes the way in which an agent is supposed to change its own mind, while argumentation deals with persuasive strategies employed to change the minds of other agents. Belief change and argumentation are two sides (cognitive and social) of the same epistemic coin. Argumentation theories are therefore incomplete if they cannot be grounded in belief revision models, and vice versa. Nonetheless, so far the formal treatment of belief revision has widely neglected any systematic comparison with argumentation theories. Such lack of integration poses severe limitations on our understanding of epistemic change, and more comprehensive models should instead be devised. After a short critical review of the literature (cf. 1), we outline an alternative model of belief revision whose main claim is the distinction between data and beliefs (cf. 2), and we discuss in detail its expressivity with respect to argumentation (cf. 3); finally, we summarize our conclusions and future work on the interface between belief revision and argumentation (cf. 4).

    Belief Change in Reasoning Agents: Axiomatizations, Semantics and Computations

    The capability of changing beliefs upon new information in a rational and efficient way is crucial for an intelligent agent. Belief change has therefore been one of the central research fields in Artificial Intelligence (AI) for over two decades. In the AI literature, two different kinds of belief change operations have been intensively investigated: belief update, which deals with situations where the new information describes changes of the world; and belief revision, which assumes the world is static. As another important research area in AI, reasoning about actions mainly studies the problem of representing and reasoning about the effects of actions. These two research fields are closely related and apply a common underlying principle, namely that an agent should change its beliefs (knowledge) as little as possible whenever an adjustment is necessary. This opens the possibility of reusing the ideas and results of one field in the other, and vice versa. This thesis aims to develop a general framework and devise computational models that are applicable in reasoning about actions. Firstly, I propose a new framework for iterated belief revision by introducing a new postulate to the existing AGM/DP postulates, which provides general criteria for the design of iterated revision operators. Secondly, based on the new framework, a concrete iterated revision operator is devised. The semantic model of the operator gives clear intuitions and helps to show that it satisfies the desirable postulates. I also show that the computational model of the operator is almost optimal in time and space complexity. In order to deal with the belief change problem in multi-agent systems, I introduce the concept of mutual belief revision, which is concerned with information exchange among agents. A concrete mutual revision operator is devised by generalizing the iterated revision operator.
    Likewise, a semantic model is used to show the intuition behind, and many nice properties of, the mutual revision operator, and the complexity of its computational model is formally analyzed. Finally, I present a belief update operator which takes into account two important problems of reasoning about action, i.e., disjunctive updates and domain constraints. Again, the update operator is presented with both a semantic model and a computational model.
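    The revision/update distinction drawn above can be made concrete with a small model-based sketch following the standard minimal-change semantics (Dalal-style distance for revision, Katsuno–Mendelzon-style pointwise minimization for update); the scenario and names are chosen here for illustration.

```python
# Revision vs. update over propositional models, encoded as 0/1 tuples.

def dist(w1, w2):
    """Hamming distance between two models (Dalal's measure)."""
    return sum(x != y for x, y in zip(w1, w2))

def revise(belief, new):
    """Revision (static world): keep the new-information models that are
    globally closest to the current belief set."""
    d = min(dist(w, v) for w in belief for v in new)
    return sorted(v for v in new if any(dist(w, v) == d for w in belief))

def update(belief, new):
    """Update (changed world): move each belief model independently to its
    nearest new-information models, then take the union."""
    out = set()
    for w in belief:
        d = min(dist(w, v) for v in new)
        out |= {v for v in new if dist(w, v) == d}
    return sorted(out)

# Models are (book_on_table, magazine_on_table).
belief = [(0, 1), (1, 0)]   # exactly one of the two is on the table
new = [(1, 0), (1, 1)]      # new information: the book is on the table

print(revise(belief, new))  # [(1, 0)]          -- static world: magazine inferred absent
print(update(belief, new))  # [(1, 0), (1, 1)]  -- changed world: magazine now unknown
```

    The divergence on the same inputs is the point: revision treats the new information as better knowledge of an unchanged world, while update allows each possible world to have evolved on its own.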

    Design for manufacturability: a feature-based, agent-driven approach

