6 research outputs found

    Towards Closed World Reasoning in Dynamic Open Worlds (Extended Version)

    The need for integration of ontologies with nonmonotonic rules has been gaining importance in a number of areas, such as the Semantic Web. A number of researchers addressed this problem by proposing a unified semantics for hybrid knowledge bases composed of both an ontology (expressed in a fragment of first-order logic) and nonmonotonic rules. These semantics have matured over the years, but only provide solutions for the static case, when knowledge does not need to evolve. In this paper we take a first step towards addressing the dynamics of hybrid knowledge bases. We focus on knowledge updates and, considering the state of the art of belief update, ontology update and rule update, we show that current solutions are only partial and difficult to combine. Then we extend the existing work on ABox updates with rules, provide a semantics for such evolving hybrid knowledge bases and study its basic properties. To the best of our knowledge, this is the first time that an update operator has been proposed for hybrid knowledge bases. Comment: 40 pages; an extended version of the article published in Theory and Practice of Logic Programming, 10 (4-6): 547-564, July 2010. Copyright 2010 Cambridge University Press.
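
    As a toy illustration (ours, not taken from the paper), a hybrid knowledge base pairs a description-logic ontology $\mathcal{O}$ with a nonmonotonic rule base $\mathcal{P}$ over shared predicates, and an ABox update can overturn a default conclusion:

        $\mathcal{O} = \{\mathit{Penguin} \sqsubseteq \mathit{Bird}\}$, \quad $\mathcal{P} = \{\mathit{flies}(x) \leftarrow \mathit{Bird}(x),\ \mathbf{not}\ \mathit{Penguin}(x)\}$, \quad $\mathcal{A} = \{\mathit{Bird}(\mathit{tweety})\}$

    With the ABox as given, the rule's default negation (closed-world reasoning) yields $\mathit{flies}(\mathit{tweety})$; updating the ABox with $\mathit{Penguin}(\mathit{tweety})$ retracts that conclusion, while $\mathit{Bird}(\mathit{tweety})$ remains entailed by the ontology.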

    Argumentation update in YALLA (Yet Another Logic Language for Argumentation)

    This article proposes a complete framework for handling the dynamics of an abstract argumentation system. The framework can encompass several belief bases in the form of several argumentation systems; more precisely, it makes it possible to express and study how an agent who has her own argumentation system can act on a target argumentation system (which may represent the state of knowledge at a given stage of a debate). The two argumentation systems are defined inside a reference argumentation system, called the universe, which constitutes a kind of “common language”. The paper establishes three main results. First, we show that change in argumentation in such a framework can be seen as a particular case of belief update. Second, we introduce a new logical language called YALLA in which the structure of an argumentation system can be encoded, so that all the basic notions of argumentation theory (defense, conflict-freeness, extensions) can be expressed by formulae of YALLA. Third, building on previous work on dynamics in argumentation, we provide a set of new properties that are specific to argumentation update.
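
    For reference, the basic notions that such an encoding must capture are the standard Dung-style definitions (our notation, not YALLA syntax). Given a framework $(A, R)$ with attack relation $R \subseteq A \times A$:

        - a set $S \subseteq A$ is conflict-free iff there are no $a, b \in S$ with $(a, b) \in R$;
        - $S$ defends $a$ iff for every $b$ with $(b, a) \in R$ there is some $c \in S$ with $(c, b) \in R$;
        - $S$ is admissible iff it is conflict-free and defends each of its members; the usual extensions (grounded, preferred, stable) are particular admissible sets.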

    Belief change operations under confidentiality requirements in multiagent systems

    Multiagent systems are populated with autonomous computing entities called agents which pro-actively pursue their goals. The design of such systems is an active field within artificial intelligence research, one objective being flexible and adaptive agents in dynamic and inaccessible environments. An agent's decision-making, and finally its success in achieving its goals, crucially depends on the agent's information about its environment and on the sharing of information with other agents in the multiagent system. For this and other reasons, an agent's information is a valuable asset, and thus the agent is often interested in the confidentiality of parts of this information. From research in computer security it is well known that confidentiality is achieved not only by the agent's control of access to its data, but also by its control of the flow of information when processing the data during interaction with other agents. This thesis investigates how to specify and enforce the confidentiality interests of an agent D while it reacts to iterated query, revision and update requests from another agent A for the purpose of information sharing. First, we enable the agent D to specify, in a dedicated confidentiality policy, that parts of its previous or current belief about its environment should be hidden from the requesting agent A. To formalize the requirement of hiding belief, we in particular postulate agent A's capabilities for reasoning about D's belief and about D's processing of information to form its belief. Then, we relate the requirements imposed by a confidentiality policy to related notions in research on information flow control and inference control in computer security. Second, we enable the agent D to enforce its confidentiality aims, as expressed by its policy, by refusing requests from A whenever complying would potentially violate that policy. A crucial part of the enforcement is D's simulation of A's postulated reasoning about D's belief and about the change of this belief. In this thesis, we consider two particular operators of belief change: an update operator for a simple logic-oriented database model, and a revision operator for D's assertions about its environment, which yield the agent's belief after its nonmonotonic reasoning. To prove the effectiveness of D's means of enforcement, we study necessary properties of D's simulation of A and then, based on these properties, show that D's enforcement is effective with respect to the formal requirements of its policy.
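
    A minimal sketch of the refusal idea, under strong simplifying assumptions (hypothetical names, propositional atoms only, and none of the thesis's revision/update machinery): D refuses a query whenever the queried fact is protected by its policy, independently of the fact's truth value, so that the refusal itself tells A nothing.

        # Toy sketch (hypothetical; not the thesis's logic-oriented database model):
        # refusal-based enforcement of a confidentiality policy over propositional atoms.
        class Defender:
            def __init__(self, belief, secrets):
                self.belief = set(belief)      # D's current belief: atoms taken to be true
                self.secrets = set(secrets)    # confidentiality policy: atoms to hide from A
                self.simulated_view = set()    # D's simulation of what A has learned so far

            def query(self, atom):
                # Refuse whenever the atom is protected, regardless of its truth value,
                # so that the refusal itself does not reveal whether the secret holds.
                if atom in self.secrets:
                    return "refuse"
                answer = atom in self.belief
                if answer:
                    self.simulated_view.add(atom)  # basis for checking indirect disclosures
                return "yes" if answer else "no"

        d = Defender(belief={"manager", "salary_high"}, secrets={"salary_high"})
        print(d.query("manager"))      # -> yes
        print(d.query("salary_high"))  # -> refuse

    The abstract above indicates that D additionally simulates A's reasoning about belief revision and update and checks for indirect inferences; this sketch omits both.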

    Etude du changement en argumentation : de la théorie à la pratique

    Argumentation, in the field of artificial intelligence, is a formalism for reasoning with incomplete and/or contradictory information as well as for modelling an exchange of arguments between several agents. An argumentation system usually consists of a set of arguments interacting with each other, from which it is possible to extract one or several consistent points of view. In this thesis we work within abstract argumentation, in which arguments are handled as abstract entities whose meaning is unknown and in which the interactions represent conflicts. This allows us to focus on the dynamics of abstract argumentation systems, that is, the changes that can affect these systems, particularly in the context of a dialogue. We start by justifying the interest of such a formal framework, then we study the how and the why of change in abstract argumentation. The how is tackled by establishing a list of the changes an argumentation system can undergo and by studying the conditions under which they may occur. The why is addressed by introducing the notion of a goal motivating a change and by choosing the best change to make in order to satisfy a goal, taking into account constraints on the agent to be convinced. Finally, we make our study concrete by proposing a software tool that implements the concepts introduced, and we study its performance.
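
    As a toy illustration of the kind of change studied (a hypothetical example, not the tool developed in the thesis), adding a single attack can alter which arguments are accepted, here under the grounded semantics:

        # Toy illustration (not the software tool described in the thesis): the grounded
        # extension of an abstract argumentation framework, before and after adding an attack.
        def grounded_extension(arguments, attacks):
            """Least fixed point of the characteristic function F(S) = {a | S defends a}."""
            attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
            extension = set()
            while True:
                defended = {a for a in arguments
                            if all(any((d, b) in attacks for d in extension)
                                   for b in attackers[a])}
                if defended == extension:
                    return extension
                extension = defended

        args = {"a", "b", "c"}
        attacks = {("a", "b"), ("b", "c")}
        print(grounded_extension(args, attacks))                 # {'a', 'c'} (order may vary)
        print(grounded_extension(args, attacks | {("c", "a")}))  # set(): the cycle leaves nothing defended

    The same style of computation makes it easy to compare acceptance before and after each elementary change to the set of arguments or attacks.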

    On updates with integrity constraints

    In his paper “Making Counterfactual Assumptions”, Frank Veltman has proposed a new semantics for counterfactual conditionals. It is based on a particular update operation, and we show that it provides a new and interesting way of updating logical databases under integrity constraints, which, in particular, generalizes Winslett's PMA.
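
    A standard worked example of the kind of update involved (ours, not taken from the paper): under Winslett's PMA, each model of the base is mapped to the models of the update formula that differ from it on an inclusion-minimal set of atoms.

        Updating $\psi = p \wedge q$ with $\mu = \neg p \vee \neg q$: the only model of $\psi$, namely $\{p, q\}$, differs from the three models of $\mu$ on $\{q\}$, $\{p\}$ and $\{p, q\}$ respectively; the first two changes are inclusion-minimal, so $\psi \diamond_{\mathrm{PMA}} \mu \equiv (p \wedge \neg q) \vee (\neg p \wedge q)$.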

    On updates with integrity constraints (Belief Change in Rational Agents: Perspectives from Artificial Intelligence, Philosophy, and Economics , Dagstuhl (Germany), 07/08/05-12/08/05)

    In his paper “Making Counterfactual Assumptions”, Frank Veltman has proposed a new semantics for counterfactual conditionals. It is based on a particular update operation, and we show that it provides a new and interesting way of updating logical databases under integrity constraints, which, in particular, generalizes Winslett's PMA.