
    On Properties of Update Sequences Based on Causal Rejection

    We consider an approach to updating nonmonotonic knowledge bases represented as extended logic programs under answer set semantics. New information is incorporated into the current knowledge base subject to a causal rejection principle, which enforces that, in case of conflicts, more recent rules are preferred and older rules are overridden. Such a rejection principle is also exploited in other approaches to updating logic programs, e.g., in dynamic logic programming by Alferes et al. We give a thorough analysis of properties of our approach in order to gain a better understanding of the causal rejection principle. We review postulates for update and revision operators from the area of theory change and nonmonotonic reasoning, and we consider some new properties as well. We then consider refinements of our semantics which incorporate a notion of minimality of change. We also investigate the relationship to other approaches, showing that our approach is semantically equivalent to inheritance programs by Buccafurri et al. and that it coincides with certain classes of dynamic logic programs, for which we provide characterizations in terms of graph conditions. Therefore, most of our results about properties of the causal rejection principle apply to these approaches as well. Finally, we deal with the computational complexity of our approach, and outline how the update semantics and its refinements can be implemented on top of existing logic programming engines. Comment: 59 pages, 2 figures, 3 tables, to be published in "Theory and Practice of Logic Programming"
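
    To illustrate the causal rejection principle, here is a minimal clingo-style sketch of a rejection-atom encoding on a two-program update sequence (the atom names and the simplified translation are ours, not the paper's exact construction):

        % P1 (older program)
        tv_on :- not rej_tv_on.      % the fact "tv_on." made rejectable
        sleep :- not tv_on.
        % P2 (newer program)
        power_failure.
        neg_tv_on :- power_failure.  % "-tv_on" encoded as a fresh atom
        % causal rejection: an older rule is rejected when a newer rule
        % derives the complementary literal
        rej_tv_on :- neg_tv_on.

    The single answer set contains power_failure, neg_tv_on, rej_tv_on, and sleep: the newer information overrides tv_on, and the older rule for sleep then fires.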

    Impredicative Encodings of (Higher) Inductive Types

    Postulating an impredicative universe in dependent type theory allows System F style encodings of finitary inductive types, but these fail to satisfy the relevant η-equalities and consequently do not admit dependent eliminators. To recover η and dependent elimination, we present a method to construct refinements of these impredicative encodings, using ideas from homotopy type theory. We then extend our method to construct impredicative encodings of some higher inductive types, such as 1-truncation and the unit circle S¹.
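
    For orientation, the System F style encoding in question is, for the natural numbers (a standard example, written here in LaTeX; the paper's refinement then cuts this type down to its well-behaved inhabitants so that η and dependent elimination hold):

        \mathbb{N} \;:\equiv\; \prod_{X : \mathcal{U}} (X \to X) \to (X \to X),
        \qquad 0 :\equiv \lambda X\, f\, x.\, x,
        \qquad \mathrm{succ}\, n :\equiv \lambda X\, f\, x.\, f\,(n\, X\, f\, x)

    where U is the postulated impredicative universe.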

    Modularity in answer set programs

    Answer set programming (ASP) is an approach to rule-based constraint programming allowing flexible knowledge representation in a variety of application areas. The declarative nature of ASP is reflected in problem solving. First, a programmer writes down a logic program the answer sets of which correspond to the solutions of the problem. The answer sets of the program are then computed using a special purpose search engine, an ASP solver. The development of efficient ASP solvers has enabled the use of answer set programming in various application domains such as planning, product configuration, computer aided verification, and bioinformatics. The topic of this thesis is modularity in answer set programming. While modern programming languages typically provide means to exploit modularity in a number of ways to govern the complexity of programs and their development process, relatively little attention has been paid to modularity in ASP. When designing a module architecture for ASP, it is essential to establish full compositionality of the semantics with respect to the module system. A balance is sought between introducing restrictions that guarantee the compositionality of the semantics and enforce a good programming style in ASP, and avoiding restrictions on the module hierarchy for the sake of flexibility of knowledge representation. To justify a replacement of a module with another, that is, to be able to guarantee that changes made on the level of modules do not alter the semantics of the program when seen as an entity, a notion of equivalence for modules is provided. In close connection with the development of the compositional module architecture, a transformation from verification of equivalence to search for answer sets is developed. The translation-based approach makes it unnecessary to develop a dedicated tool for the equivalence verification task by allowing the direct use of existing ASP solvers. Translations and transformations between different problems, program classes, and formalisms are another central theme in the thesis. To guarantee efficiency and soundness of the translation-based approach, certain syntactical and semantical properties of transformations are desirable, in terms of translation time, solution correspondence between the original and the transformed problem, and locality/globality of a particular transformation. In certain cases a more refined notion of minimality than that inherent in ASP can make program encodings more intuitive. Lifschitz's parallel and prioritized circumscription offer a solution in which certain atoms are allowed to vary or to have fixed values while others are falsified as far as possible according to priority classes. In this thesis a linear and faithful transformation embedding parallel and prioritized circumscription into ASP is provided. This enhances the knowledge representation capabilities of answer set programming by allowing the use of existing ASP solvers for computing parallel and prioritized circumscription.
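
    As a concrete illustration of the module idea (our own example in clingo-like syntax, not taken from the thesis), a graph-colouring program splits into a generating module and a testing module that communicate only through the declared predicate colour/2:

        % facts (could be a third module supplying the instance)
        node(1..3). edge(1,2). edge(2,3). col(r). col(g).
        % module M1: generate a colouring (output: colour/2)
        { colour(X,C) : col(C) } = 1 :- node(X).
        % module M2: test the colouring (input: colour/2, edge/2)
        :- edge(X,Y), colour(X,C), colour(Y,C).

    Compositionality of the answer set semantics for such a system requires, roughly, that no positive recursion crosses module boundaries; under that restriction the answer sets of the composed program are exactly the compatible combinations of the modules' answer sets.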

    Relating Justification Logic Modality and Type Theory in Curry–Howard Fashion

    This dissertation is a work in the intersection of Justification Logic and the Curry–Howard Isomorphism. Justification logic is an umbrella of modal logics of knowledge with explicit evidence. Justification logics have been used to tackle traditional problems in proof theory (in relation to Gödel's provability) and philosophy (Gettier examples, Russell's barn paradox). The Curry–Howard Isomorphism, or proofs-as-programs, is an understanding of logic that places logical studies in conjunction with type theory and, in current developments, category theory. The point is that understanding a system as a logic, a typed calculus and a language of a class of categories constitutes a useful discovery that can have many applications. The applications we will be mainly concerned with are type systems for useful programming language constructs. This work is structured in three parts: The first part is a bird's-eye view into my research topics: intuitionistic logic, justified modality and type theory. The relevant systems are introduced syntactically together with the main metatheoretic proof techniques which will be useful in the rest of the thesis. The second part features my main contributions. I will propose a modal type system that extends simple type theory (or, isomorphically, intuitionistic propositional logic) with elements of justification logic and will argue about its computational significance. More specifically, I will show that the obtained calculus characterizes certain computational phenomena related to linking (e.g. module mechanisms, foreign function interfaces) that abound in semantics of modern programming languages. I will present full metatheoretic results obtained for this logic/calculus utilizing techniques from the first part and will provide proofs in the Appendix. The Appendix also contains information about an implementation of our calculus in the metaprogramming framework Makam. Finally, I conclude this work with a small 'outro', where I informally show that the ideas underlying my contributions can be extended in interesting ways.
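
    For readers unfamiliar with justification logic: its characteristic move is to replace the modal □A with evidence terms t:A ("t is evidence for A"). The two core axioms of Artemov's LP, for instance, are (in LaTeX; the dissertation's calculus extends simple types with a modality of this kind, but its exact rules differ from this textbook fragment):

        s\colon(A \to B) \to (t\colon A \to (s \cdot t)\colon B) \quad\text{(application)}
        \qquad
        t\colon A \to\; !t\colon(t\colon A) \quad\text{(proof checker)}

    Under the proofs-as-programs reading pursued here, such evidence terms become typed program fragments, which is what makes them a natural fit for linking phenomena.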

    Revision Programming: A Knowledge Representation Formalism

    The topic of the dissertation is revision programming. It is a knowledge representation formalism for describing constraints on databases, knowledge bases, and belief sets, and providing a computational mechanism to enforce them. Constraints are represented by sets of revision rules. Revision rules can be quite complex and are usually in the form of conditions (for instance, if these elements are present and those elements are absent, then this element must be absent). In addition to being a logical constraint, a revision rule specifies a preferred way to satisfy the constraint. The justified revisions semantics assigns to any database a set (possibly empty) of revisions. Each revision satisfies the constraints, and all deletions and additions of elements in a transition from the initial database to the revision are derived from revision rules. Revision programming and logic programming are closely related. We established an elegant embedding of revision programs into logic programs which does not increase the size of a program. The initial database is used in the transformation of a revision program into the corresponding logic program, but it is not represented in the logic program. The connection naturally led to extensions of the revision programming formalism which correspond to existing extensions of logic programming. More specifically, disjunctive and nested versions of revision programming were introduced. Also, we studied annotated revision programs, which allow annotations like confidence factors, multiple experts, etc. Annotations were assumed to be elements of a complete infinitely distributive lattice. We proposed a justified revisions semantics for annotated revision programs which agrees with intuitions. Next, we introduced a definition of the well-founded semantics for revision programming. It assigns to a revision problem a single intended model which is computable in polynomial time. Finally, we extended the syntax of revision problems by allowing variables, and implemented translators of revision programs into logic programs and a grounder for revision programs. The implementation allows us to compute justified revisions using existing implementations of the stable model semantics for logic programs.
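
    A hand-made clingo-style sketch of the flavour of such an embedding (a simple inertia-based encoding of one example; the dissertation's size-preserving translation differs in its details): the revision rule in(a) <- in(b), out(c) over the initial database {b} yields the justified revision {a, b}:

        init(b).                      % initial database
        in(a) :- in(b), not in(c).    % revision rule: in(a) <- in(b), out(c)
        in(X) :- init(X), not del(X). % inertia: initial atoms persist
                                      % (no rule deletes anything here)

    The unique stable model contains in(a) and in(b), matching the justified revision, and the initial database enters only through the init/1 facts used by the translation.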

    The Proscriptive Principle and Logics of Analytic Implication

    The analogy between inference and mereological containment goes at least back to Aristotle, whose discussion in the Prior Analytics motivates the validity of the syllogism by way of talk of parts and wholes. On this picture, the application of syllogistic is merely the analysis of concepts, a term that presupposes—through the root ἀνά + λύω—a mereological background. In the 1930s, such considerations led William T. Parry to attempt to codify this notion of logical containment in his system of analytic implication AI. Parry's original system AI was later expanded to the system PAI. The hallmark of Parry's systems—and of what may be thought of as containment logics or Parry systems in general—is a strong relevance property called the 'Proscriptive Principle' (PP), described by Parry as the thesis that: No formula with analytic implication as main relation holds universally if it has a free variable occurring in the consequent but not the antecedent. This type of proscription is on its face justified, as the presence of a novel parameter in the consequent corresponds to the introduction of new subject matter. The plausibility of the thesis that the content of a statement is related to its subject matter thus appears also to support the validity of the formal principle. Primarily due to the perception that Parry's formal systems were intended to accurately model Kant's notion of an analytic judgment, Parry's deductive systems—and the suitability of the Proscriptive Principle in general—were met with severe criticism. While Anderson and Belnap argued that Parry's criterion failed to account for a number of prima facie analytic judgments, others—such as Sylvan and Brady—argued that the utility of the criterion was impeded by its reliance on a 'syntactical' device. But these arguments are restricted to Parry's work qua exegesis of Kant and fail to take into account the breadth of applications in which the Proscriptive Principle emerges. It is the goal of the present work to explore themes related to deductive systems satisfying one form of the Proscriptive Principle or other, with a special emphasis placed on the rehabilitation of their study to some degree. The structure of the dissertation is as follows: In Chapter 2, we identify and develop the relationship between Parry-type deductive systems and the field of 'logics of nonsense.' Of particular importance is Dmitri Bochvar's 'internal' nonsense logic Σ0, and we observe that two ⊢-Parry subsystems of Σ0 (Harry Deutsch's Sfde and Frederick Johnson's RC) can be considered to be the products of particular 'strategies' of eliminating problematic inferences from Bochvar's system. The material of Chapter 3 considers Kit Fine's program of state space semantics in the context of Parry logics. Recently, Fine—who had already provided the first intuitive semantics for Parry's PAI—has offered a formal model of truthmaking (and falsemaking) that provides one of the first natural semantics for Richard B. Angell's logic of analytic containment AC, itself a ⊢-Parry system. After discussing the relationship between state space semantics and nonsense, we observe that Fabrice Correia's weaker framework—introduced as a semantics for a containment logic weaker than AC—tacitly endorses an implausible feature of allowing hypernonsensical statements. By modelling Correia's containment logic within the stronger setting of Fine's semantics, we are able to retain Correia's intuitions about factual equivalence without such a commitment. As a further application, we observe that Fine's setting can resolve some ambiguities in Greg Restall's own truthmaker semantics. In Chapter 4, we consider interpretations of disjunction that accord with the characteristic failure of Addition, in which the evaluation of a disjunction A ∨ B requires not only the truth of one disjunct, but also that both disjuncts satisfy some further property. In the setting of computation, such an analysis requires the existence of some procedure tasked with ensuring the satisfaction of this property by both disjuncts. This observation leads to a computational analysis of the relationship between Parry logics and logics of nonsense in which the semantic category of 'nonsense' is associated with catastrophic faults in computer programs. In this spirit, we examine semantics for several ⊢-Parry logics in terms of the successful execution of certain types of programs and the consequences of extending this analysis to dynamic logic and constructive logic. Chapter 5 considers these faults in the particular case in which Nuel Belnap's 'artificial reasoner' is unable to retrieve the value assigned to a variable. This leads not only to a natural interpretation of Graham Priest's semantics for the ⊢-Parry system S⋆fde but also to a novel, many-valued semantics for Angell's AC, the completeness of which is proven by establishing a correspondence with Correia's semantics for AC. These many-valued semantics have the additional benefit of allowing us to apply the material in Chapter 2 to the case of AC to define intensional extensions of AC in the spirit of Parry's PAI. One particular instance of the type of disjunction central to Chapter 4 is Melvin Fitting's cut-down disjunction. Chapter 6 examines cut-down operations in more detail and provides bilattice and trilattice semantics for the ⊢-Parry systems Sfde and AC in the style of Ofer Arieli and Arnon Avron's logical bilattices. The elegant connection between these systems and logical multilattices supports the fundamentality and naturalness of these logics and, additionally, allows us to extend the epistemic interpretation of bilattices in the tradition of artificial intelligence to these systems. Finally, the correspondence between the present many-valued semantics for AC and those of Correia is revisited in Chapter 7. The technique that plays an essential role in Chapter 4 is used to characterize a wide class of first-degree calculi intermediate between AC and classical logic in Correia's setting. This correspondence allows the correction of an incorrect characterization of classical logic given by Correia and leads to the question of how to characterize hybrid systems extending Angell's AC∗. Lastly, we consider whether this correspondence aids in providing an interpretation of Correia's first semantics for AC.
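
    In modern notation, the Proscriptive Principle quoted above is the variable-inclusion requirement (in LaTeX)

        \vdash A \to B \;\Longrightarrow\; \mathrm{var}(B) \subseteq \mathrm{var}(A)

    that is, a provable analytic implication may introduce no propositional variable in the consequent that is absent from the antecedent.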

    Normalization of IZF with Replacement

    Full text link
    IZF is a well-investigated impredicative constructive version of Zermelo–Fraenkel set theory. Using set terms, we axiomatize IZF with Replacement, which we call IZF_R, along with its intensional counterpart IZF_R^-. We define a typed lambda calculus λZ corresponding to proofs in IZF_R^- according to the Curry–Howard isomorphism principle. Using realizability for IZF_R^-, we show weak normalization of λZ. We use normalization to prove the disjunction, numerical existence and term existence properties. An inner extensional model is used to show these properties, along with the set existence property, for full, extensional IZF_R.
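
    The route from normalization to the disjunction property is the standard one: a closed, weakly normalizing proof term of a disjunction reduces to an introduction form, from which a proof of one disjunct can be read off. Schematically (our notation, in LaTeX):

        \vdash t : A \lor B \;\Rightarrow\;
        t \twoheadrightarrow \mathsf{inl}\; t' \text{ with } \vdash t' : A,
        \;\text{ or }\;
        t \twoheadrightarrow \mathsf{inr}\; t'' \text{ with } \vdash t'' : B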