    Efficient Multi-agent Epistemic Planning: Teaching Planners About Nested Belief

    Many AI applications involve the interaction of multiple autonomous agents, requiring those agents to reason about their own beliefs as well as those of other agents. However, planning that involves nested beliefs is known to be computationally challenging. In this work, we address the task of synthesizing plans that necessitate reasoning about the beliefs of other agents. We plan from the perspective of a single agent, allowing for goals and actions that involve nested beliefs, non-homogeneous agents, co-present observations, and the ability for one agent to reason as if it were another. We formally characterize our notion of planning with nested belief, and subsequently demonstrate how to automatically convert such problems into problems that can be solved efficiently by classical planning technology. Our approach represents an important step towards applying the well-established field of automated planning to the challenging task of planning with the nested beliefs of multiple agents.
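
    As a rough illustration of the compilation idea only (a sketch, not the paper's actual encoding), nested-belief literals can be flattened into ordinary propositional fluents that an off-the-shelf classical planner can manipulate; the agent names, the fluent syntax, and the `tell` action below are all hypothetical.

```python
from dataclasses import dataclass

def b(agent: str, fact: str) -> str:
    """Flatten a belief literal B_agent(fact) into a plain string fluent."""
    return f"B[{agent}]({fact})"

@dataclass
class Action:
    name: str
    pre: frozenset                      # fluents that must hold
    add: frozenset                      # fluents made true
    dele: frozenset = frozenset()       # fluents made false

# Hypothetical action: agent a announces p to agent b.
tell = Action(
    name="tell(a,b,p)",
    pre=frozenset({"p", b("a", "p")}),
    add=frozenset({b("b", "p"), b("a", b("b", "p"))}),
)

def apply_action(state: frozenset, act: Action) -> frozenset:
    assert act.pre <= state, "action not applicable"
    return (state - act.dele) | act.add

init = frozenset({"p", b("a", "p")})
goal = {b("a", b("b", "p"))}            # a believes that b believes p
assert goal <= apply_action(init, tell)
```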

    Propositional update operators based on formula/literal dependence

    We present and study a general family of belief update operators in a propositional setting. Its operators are based on formula/literal dependence, which is more fine-grained than the notion of formula/variable dependence proposed in the literature: formula/variable dependence is a particular case of formula/literal dependence. Our update operators are defined according to the "forget-then-conjoin" scheme: updating a belief base by an input formula consists in first forgetting in the base every literal on which the input formula has a negative influence, and then conjoining the resulting base with the input formula. The operators of our family differ in the underlying notion of formula/literal dependence, which may be defined syntactically or semantically, and which may or may not exploit further information such as known persistent literals and pre-set dependencies. We argue that this allows us to handle the frame problem and the ramification problem in a more appropriate way. We evaluate the update operators of our family w.r.t. two important dimensions: the logical dimension, by checking the status of the Katsuno-Mendelzon postulates for update, and the computational dimension, by identifying the complexity of a number of decision problems (including model checking, consistency and inference), both in the general case and in some restricted cases, as well as by studying compactability issues. It follows that several operators of our family are interesting alternatives to previous belief update operators.
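
    A minimal model-based sketch of the "forget-then-conjoin" scheme. For readability it forgets at the level of variables rather than literals (the paper's operators refine exactly this step), represents formulas extensionally as sets of models over a fixed vocabulary, and uses only illustrative names.

```python
from itertools import product

VOCAB = ("p", "q")

def models(pred):
    """All assignments over VOCAB (as tuples of booleans) satisfying pred."""
    return {w for w in product((False, True), repeat=len(VOCAB))
            if pred(dict(zip(VOCAB, w)))}

def forget(ms, var):
    """Variable forgetting: make the result independent of `var`."""
    i = VOCAB.index(var)
    return {w[:i] + (v,) + w[i + 1:] for w in ms for v in (False, True)}

def update(base, inp, input_vars):
    """Forget-then-conjoin: forget what the input is about, then conjoin it."""
    for v in input_vars:
        base = forget(base, v)
    return base & inp

# Base: p and q both hold.  Input: p has become false.
base = models(lambda m: m["p"] and m["q"])
inp = models(lambda m: not m["p"])
# q survives the update; only the part of the base about p is given up.
assert update(base, inp, ["p"]) == models(lambda m: not m["p"] and m["q"])
```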

    Standpoint Logic: A Logic for Handling Semantic Variability, with Applications to Forestry Information

    It is widely accepted that most natural language expressions do not have precise, universally agreed definitions that fix their meanings. Except in the case of certain technical terminology, humans use terms in a variety of ways that are adapted to different contexts and perspectives. Hence, even when conversation participants share the same vocabulary and agree on fundamental taxonomic relationships (such as subsumption and mutual exclusivity), their views on the specific meaning of terms may differ significantly. Moreover, even individuals themselves may not hold permanent points of view, but rather adopt different semantics depending on the particular features of the situation and what they wish to communicate. In this thesis, we analyse logical and representational aspects of the semantic variability of natural language terms. In particular, we aim to provide a formal language adequate for reasoning in settings where different agents may adopt particular standpoints or perspectives, thereby narrowing the semantic variability of the vague language predicates in different ways. For that purpose, we present standpoint logic, a framework for interpreting languages in the presence of semantic variability. We build on supervaluationist accounts of vagueness, which explain linguistic indeterminacy in terms of a collection of possible interpretations of the terms of the language (precisifications). This is extended by adding the notion of standpoint, which intuitively corresponds to a particular point of view on how to interpret vague terminology, and may be taken by a person or institution in a relevant context. A standpoint is modelled by a set of precisifications compatible with that point of view and need not be fully precise. In this way, standpoint logic allows one to articulate fine-grained and structured stipulations of the varieties of interpretation that can be given to a vague concept or a set of related concepts, and also provides means to express relationships between different systems of interpretation. After the specification of precisifications and standpoints and the consideration of the relevant notions of truth and validity, a multi-modal logic language for describing standpoints is presented. The language includes a modal operator □_s for each standpoint s, such that □_s φ means that a proposition φ is unequivocally true according to the standpoint s, i.e. φ is true at all precisifications compatible with s. We provide the logic with a Kripke semantics and examine the characteristics of its intended models. Furthermore, we prove the soundness, completeness and decidability of standpoint logic with an underlying propositional language, and show that the satisfiability problem is NP-complete. We subsequently illustrate how this language can be used to represent logical properties and connections between alternative partial models of a domain and different accounts of the semantics of terms. As proof of concept, we explore the application of our formal framework to the domain of forestry, and in particular we focus on the semantic variability of 'forest'.
    In this scenario, the problems arising from the assignment of different meanings to terms have been repeatedly reported in the literature, and they are especially relevant in the context of the unprecedented scale of publicly available geographic data, where information and databases, even when ostensibly linked to ontologies, may present substantial semantic variation, which obstructs interoperability and confounds knowledge exchange.
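
    In the notation above (reconstructed here from the abstract's own description rather than quoted from the thesis), the truth clause for the standpoint modality can be written as:

```latex
% M is a model, \pi a precisification, and \sigma(s) the set of
% precisifications compatible with standpoint s.
\mathcal{M}, \pi \models \Box_s \varphi
    \iff
\mathcal{M}, \pi' \models \varphi \ \text{for all}\ \pi' \in \sigma(s)
```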

    Understanding Inconsistency -- A Contribution to the Field of Non-monotonic Reasoning

    Conflicting information in an agent's knowledge base may lead to a semantic defect, that is, a situation where it is impossible to draw any plausible conclusion. Finding the reasons for an observed inconsistency and restoring consistency in a certain minimal way are frequently occurring issues in the research area of knowledge representation and reasoning. In a seminal paper, Raymond Reiter proved a duality between the maximal consistent subsets of a propositional knowledge base and the minimal hitting sets of its minimal inconsistent subsets: the famous hitting set duality. We extend Reiter's result to arbitrary non-monotonic logics. To this end, we develop a refined notion of inconsistency, called strong inconsistency. We show that minimal strongly inconsistent subsets play a role similar to that of minimal inconsistent subsets in propositional logic. In particular, the duality between hitting sets of minimal inconsistent subsets and maximal consistent subsets generalizes to arbitrary logics if the stronger notion of inconsistency is used. We cover various notions of repairs and characterize them using analogous hitting set dualities. Our analysis also includes an investigation of structural properties of knowledge bases with respect to our notions. Minimal inconsistent subsets of knowledge bases in monotonic logics play an important role when investigating the reasons for conflicts and trying to handle them, but also for inconsistency measurement. Our notion of strong inconsistency thus allows us to extend existing results to non-monotonic logics. While measuring inconsistency in propositional logic has been investigated for some time now, taking non-monotonicity into account poses new challenges. In order to tackle them, we focus on the structure of minimal strongly inconsistent subsets of a knowledge base. We propose measures based on this notion and investigate their behavior in a non-monotonic setting by revisiting existing rationality postulates and analyzing the compliance of the proposed measures with these postulates. We provide a series of first results in the context of inconsistency in abstract argumentation theory regarding the two most important reasoning modes, namely credulous and skeptical acceptance. Our analysis includes the following problems regarding minimal repairs: existence, verification, computation of one solution, and characterization of all solutions. The latter is tackled with our previously obtained duality results. Finally, we investigate the complexity of various related reasoning problems and compare our results to existing ones for monotonic logics.
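
    The classical (monotonic) instance of the duality is easy to check computationally. The brute-force sketch below uses a toy consistency oracle over literals; the function names are illustrative rather than taken from the thesis, and the non-monotonic generalization would substitute strong inconsistency for plain inconsistency.

```python
from itertools import combinations

def subsets(kb):
    for r in range(len(kb) + 1):
        yield from map(frozenset, combinations(kb, r))

def minimal_inconsistent(kb, consistent):
    """Subset-minimal inconsistent subsets (enumerated smallest first)."""
    found = []
    for s in subsets(kb):
        if not consistent(s) and not any(m <= s for m in found):
            found.append(s)
    return found

def maximal_consistent(kb, consistent):
    cons = [s for s in subsets(kb) if consistent(s)]
    return [s for s in cons if not any(s < t for t in cons)]

def is_minimal_hitting_set(h, family):
    hits = lambda g: all(g & s for s in family)
    return hits(h) and all(not hits(h - {x}) for x in h)

# Toy logic: formulas are literals; a set is consistent iff no pair a, ~a.
consistent = lambda s: not any("~" + x in s for x in s if not x.startswith("~"))
kb = frozenset({"p", "~p", "q"})

mis = minimal_inconsistent(kb, consistent)           # [{'p', '~p'}]
for mcs in maximal_consistent(kb, consistent):       # {'p','q'} and {'~p','q'}
    assert is_minimal_hitting_set(kb - mcs, mis)     # Reiter's duality
```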

    Intuitionism and logical revision.

    The topic of this thesis is logical revision: should we revise the canons of classical reasoning in favour of a weaker logic, such as intuitionistic logic? In the first part of the thesis, I consider two metaphysical arguments against the classical Law of Excluded Middle, arguments whose main premise is the metaphysical claim that truth is knowable. I argue that the first argument, the Basic Revisionary Argument, validates a parallel argument for a conclusion that is unwelcome to classicists and intuitionists alike: that the dual of the Law of Excluded Middle, the Law of Non-Contradiction, is either unknown, or both known and not known to be true. As for the second argument, the Paradox of Knowability, I offer new reasons for thinking that adopting intuitionistic logic does not go to the heart of the matter. In the second part of the thesis, I motivate an inferentialist framework for assessing competing logics, one on which the meaning of the logical vocabulary is determined by the rules for its correct use. I defend the inferentialist account of understanding from the contention that it is inadequate in principle, and I offer reasons for thinking that the inferentialist approach to logic can help model theorists and proof theorists alike justify their logical choices. I then scrutinize the main meaning-theoretic principles on which the inferentialist approach to logic rests: the requirements of harmony and separability. I show that these principles are motivated by the assumption that inference rules are complete, and that the kind of completeness that is necessary for imposing separability is strictly stronger than the completeness needed for requiring harmony. This allows me to reconcile the inferentialist assumption that inference rules are complete with the inherent incompleteness of higher-order logics, an apparent tension that has sometimes been thought to undermine the entire inferentialist project. I finally turn to the question whether the inferentialist framework is inhospitable in principle to classical logical principles. I compare three different regimentations of classical logic: two old, the multiple-conclusion and the bilateralist ones, and one new. Each of them satisfies the requirements of harmony and separability, but each of them also invokes structural principles that are not accepted by the intuitionist logician. I offer reasons for dismissing multiple-conclusion and bilateralist formalizations of logic, and I argue that we can nevertheless be in harmony with classical logic, if we are prepared to adopt classical rules for disjunction, and if we are willing to treat absurdity as a logical punctuation sign.
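
    A standard textbook illustration of harmony (not drawn from the thesis itself): the natural-deduction rules for conjunction are harmonious because the elimination rules extract exactly what the introduction rule requires, so any introduction immediately followed by an elimination can be reduced away.

```latex
\frac{A \qquad B}{A \wedge B}\;{\wedge}\mathrm{I}
\qquad
\frac{A \wedge B}{A}\;{\wedge}\mathrm{E}_1
\qquad
\frac{A \wedge B}{B}\;{\wedge}\mathrm{E}_2
```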

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
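
    A schematic rendering of the shared idea, sketched under strong assumptions: the entailment oracle and the `weaken` function are placeholders, and real gentle repairs work on justifications with explicit termination guarantees.

```python
from itertools import combinations

def a_justification(kb, bad, entails):
    """Some subset-minimal subset of kb entailing `bad` (brute force)."""
    for r in range(1, len(kb) + 1):
        for s in combinations(kb, r):
            if entails(set(s), bad):
                return set(s)
    return None

def gentle_repair(kb, bad, entails, weaken, max_rounds=100):
    """Swap a culprit for a weaker version instead of deleting it outright."""
    kb = set(kb)
    for _ in range(max_rounds):
        j = a_justification(kb, bad, entails)
        if j is None:
            return kb                    # the unwanted consequence is gone
        culprit = next(iter(j))
        kb = (kb - {culprit}) | {weaken(culprit)}
    raise RuntimeError("weakening did not converge")
```

    A pseudo-contraction would supply a `weaken` that maps a sentence to a strictly weaker one (for instance, a disjunction containing it) rather than discarding it, which is what makes the repair "gentle".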

    Topic Sensitive Belief Revision

    When asked to change one's beliefs in the face of new information, or to revise a book given errata, we commonly strive to keep our changes relevant; that is, we try to restrict the beliefs (or chapters) we change to those that bear some content relation to the new information. One kind of relevance, topicality, is interesting for two reasons. First, topicality tends to be strongly encapsulating, e.g., we shouldn't make any off-topic changes. Second, topicality tends to be weaker than strict relevance. Consider a panel of three papers on the topic of Kant's life and works. It would be entirely possible for each of the papers to have no bearing on the truth of any sentence in any of the other papers, and yet for all of the papers to be on topic. In this dissertation, I explore theories of logical topicality and their effect on formal theories of belief revision. Formal theories of belief revision (in the Alchourrón, Gärdenfors, and Makinson (AGM) tradition) model the object of change (my beliefs, a book) as a collection of formulae in a supra-classical logic and provide a set of postulates which express constraints on the sorts of change that are, in principle, formally rational. In 1999, Rohit Parikh proposed that signature disjointness captures a reasonable notion of topicality, but that taking topicality into account requires changes to the standard AGM postulates (and thus, to the notion of rational change). He, and subsequent theorists, abandoned this notion of topicality in order to deal with the revision of inconsistent objects of change. In this thesis, I show 1) that a disjoint signature account of topicality does not require changes to the AGM rationality postulates and 2) that a disjoint signature account of topicality can apply to inconsistent objects of change. Additionally, I argue that signature disjointness has a strong claim to being at least a sufficient condition of logical topicality.
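
    On the disjoint-signature reading, topics can be computed as connected components of formulas linked by shared vocabulary, and a revision then only needs to touch the component that shares vocabulary with the input. A small illustrative sketch follows; the formulas, their signatures, and the function name are all hypothetical.

```python
def topic_components(sig):
    """Group formulas whose signatures overlap, transitively.

    sig maps each formula (a string) to its set of atoms; each returned
    component is the set of formulas belonging to one topic."""
    comps = []
    for formula, atoms in sig.items():
        merged = {formula} | set(atoms)
        rest = []
        for c in comps:
            if c & merged:
                merged |= c             # shared vocabulary: same topic
            else:
                rest.append(c)
        comps = rest + [merged]
    # keep only the formulas, dropping the bookkeeping atoms
    return [{f for f in c if f in sig} for c in comps]

sig = {"p -> q": {"p", "q"}, "q": {"q"}, "r": {"r"}}
print(topic_components(sig))            # [{'p -> q', 'q'}, {'r'}]
```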

    Foundations of Software Science and Computation Structures

    This open access book constitutes the proceedings of the 24th International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2021, which was held from March 27 to April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 28 regular papers presented in this volume were carefully reviewed and selected from 88 submissions. They deal with research on theories and methods to support the analysis, integration, synthesis, transformation, and verification of programs and software systems.

    Towards a cognitive linguistic approach to language comprehension

    This thesis develops a cognitive linguistic approach to language comprehension. The cognitive approach differs from traditional linguistic approaches in that linguistic description is seen as an integral part of the description of cognition, and that the object of description is the nature of conceptual structures, the processes which relate these conceptual structures, and the effect of context upon these processes. As a cognitive description within cognitive science, a computational approach is adopted: language comprehension is described in terms of two modules, a linguistic processing module and a discourse processing module. Within these modules, conceptual structures and processes are given a uniform characterization: structures are characterized as partial objects which are extended by processes into (potentially) less partial objects. In the linguistic processing module, linguistic expressions are characterized as signs which combine as head and modifier. The conceptual structu..
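
    The "partial objects extended into (potentially) less partial objects" idea is reminiscent of feature-structure unification; the toy sketch below (hypothetical names, flat structures only) shows a process that extends a structure only when the new information is consistent with it.

```python
def extend(partial, new_info):
    """Extend a partial structure with new information, if consistent.

    Both arguments are flat feature -> value dicts; returns None on a clash."""
    merged = dict(partial)
    for feature, value in new_info.items():
        if feature in merged and merged[feature] != value:
            return None                  # inconsistent extension
        merged[feature] = value
    return merged

sign = {"cat": "N", "head": "dog"}
print(extend(sign, {"number": "sg"}))   # {'cat': 'N', 'head': 'dog', 'number': 'sg'}
print(extend(sign, {"cat": "V"}))       # None: clashes with cat = N
```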

    Fuzzy Mathematics

    This book provides a timely overview of topics in fuzzy mathematics. It lays the foundation for further research and applications in a broad range of areas. It contains breakthrough analyses of how results from the many variations and extensions of fuzzy set theory can be obtained from known results of traditional fuzzy set theory. The book contains not only theoretical results, but also a wide range of applications in areas such as decision analysis, optimal allocation in possibilistic and mixed models, pattern classification, credibility measures, algorithms for modeling uncertain data, and numerical methods for solving fuzzy linear systems. The book offers an excellent reference for advanced undergraduate and graduate students in applied and theoretical fuzzy mathematics. Researchers and referees in fuzzy set theory will find it of great value.
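
    For instance, the standard fuzzy connectives that the book's variations and extensions build on fit in a few lines (a generic sketch using the usual min/max operators; the membership function for "tall" is invented for illustration).

```python
def f_union(mu_a, mu_b):
    """Membership function of A ∪ B under the standard max t-conorm."""
    return lambda x: max(mu_a(x), mu_b(x))

def f_intersection(mu_a, mu_b):
    """Membership function of A ∩ B under the standard min t-norm."""
    return lambda x: min(mu_a(x), mu_b(x))

def f_complement(mu_a):
    """Standard fuzzy complement: one minus the membership degree."""
    return lambda x: 1.0 - mu_a(x)

# Illustrative membership function: 160 cm -> 0.0, 200 cm -> 1.0, linear between.
tall = lambda cm: min(1.0, max(0.0, (cm - 160) / 40))
short = f_complement(tall)

print(f_union(tall, short)(180))         # 0.5: maximally borderline height
print(f_intersection(tall, short)(180))  # 0.5: 'tall and short' is not empty
```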