
    A Tour on Ecumenical Systems

    Ecumenism can be understood as a pursuit of unity, where diverse thoughts, ideas, or points of view coexist harmoniously. In logic, ecumenical systems refer, in a broad sense, to proof systems for combining logics. One captivating area of research over the past few decades has been the exploration of seamlessly merging classical and intuitionistic connectives, allowing them to coexist peacefully. In this paper, we will embark on a journey through ecumenical systems, drawing inspiration from Prawitz' seminal work [35]. We will begin by elucidating Prawitz' concept of “ecumenism” and present a pure sequent calculus version of his system. Building upon this foundation, we will expand our discussion to incorporate alethic modalities, leveraging Simpson's meta-logical characterization. This will enable us to propose several proof systems for ecumenical modal logics. We will conclude our tour with a discussion of a term-calculus proposal for the implicational propositional fragment of the ecumenical logic, the quest for automation using a framework based on rewriting logic, and an ecumenical view of proof-theoretic semantics.
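    As a rough illustration of how classical and intuitionistic connectives can coexist in one calculus, the right rules for the two implications might be contrasted as below. This is only a sketch in the general shape of ecumenical sequent rules from the literature, not Prawitz' exact formulation:

        \[
        \frac{\Gamma, A \vdash B}{\Gamma \vdash A \to_i B}\;{\to_i}R
        \qquad\qquad
        \frac{\Gamma, A, \neg B \vdash \bot}{\Gamma \vdash A \to_c B}\;{\to_c}R
        \]

    The classical rule internalizes a reductio step, so A \to_c B behaves like \neg(A \wedge \neg B), while the intuitionistic rule keeps its constructive reading; both connectives may occur in the same sequent.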

    Expressing Ecumenical Systems in the λΠ-Calculus Modulo Theory

    Systems in which classical and intuitionistic logics coexist are called ecumenical. Such a system allows for interoperability and hybridization between classical and constructive propositions and proofs. We study Ecumenical STT, a theory expressed in the logical framework of the λΠ-calculus modulo theory. We prove soundness and conservativity of four subtheories of Ecumenical STT with respect to constructive and classical predicate logic and simple type theory. We also prove the weak normalization of well-typed terms and thus the consistency of Ecumenical STT.
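    The distinguishing feature of the λΠ-calculus modulo theory is its conversion rule, which identifies types not only up to β-reduction but also up to a theory-supplied rewrite system R. In sketch form (with s ranging over sorts):

        \[
        \frac{\Gamma \vdash t : A \qquad \Gamma \vdash B : s \qquad A \equiv_{\beta R} B}{\Gamma \vdash t : B}\;(\mathrm{conv})
        \]

    Encoding a theory such as Ecumenical STT then amounts to choosing a signature together with rewrite rules R under which the expected conversions hold.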

    Analysing Parallel Complexity of Term Rewriting

    We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures and provide a corresponding notion of runtime complexity parametric in the size of the start term. We propose automatic techniques to derive both upper and lower bounds on parallel complexity of rewriting that enable a direct reuse of existing techniques for sequential complexity. The applicability and the precision of the method are demonstrated by the relatively light effort in extending the program analysis tool AProVE and by experiments on numerous benchmarks from the literature. (Extended authors' accepted manuscript of a paper in the Proceedings of the 32nd International Symposium on Logic-based Program Synthesis and Transformation, LOPSTR 2022.)
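    To make the rewriting model concrete, here is a minimal Haskell sketch of the parallel-innermost step and the derivation height it induces. The Rule type is a simplification (a real TRS represents rules as left/right term pairs with matching), so this illustrates the relation itself, not the paper's analysis:

        import Data.Maybe (fromMaybe, isNothing)

        -- First-order ground terms over string-labelled function symbols.
        data Term = Fun String [Term] deriving (Eq, Show)

        -- A rewrite system, abstracted as a partial root-step function.
        type Rule = Term -> Maybe Term

        -- A term is in normal form if no rule applies to any subterm.
        isNF :: Rule -> Term -> Bool
        isNF rule t@(Fun _ ts) = isNothing (rule t) && all (isNF rule) ts

        -- One parallel-innermost step: rewrite all innermost redexes at once.
        parStep :: Rule -> Term -> Term
        parStep rule t@(Fun f ts)
          | all (isNF rule) ts = fromMaybe t (rule t)        -- innermost redex (or already NF)
          | otherwise          = Fun f (map (parStep rule) ts)

        -- Parallel complexity of a start term: parallel steps to normal form.
        parHeight :: Rule -> Term -> Int
        parHeight rule t
          | isNF rule t = 0
          | otherwise   = 1 + parHeight rule (parStep rule t)

    Instantiating Rule with, say, Peano doubling (d(0) → 0, d(s(x)) → s(s(d(x)))) lets one observe how redexes in sibling subterms are contracted within the same parallel step.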

    Agent programming in the cognitive era

    It is claimed that, in the nascent ‘Cognitive Era’, intelligent systems will be trained using machine learning techniques rather than programmed by software developers. A contrary point of view argues that machine learning has limitations and, taken in isolation, cannot form the basis of autonomous systems capable of intelligent behaviour in complex environments. In this paper, we explore the contributions that agent-oriented programming can make to the development of future intelligent systems. We briefly review the state of the art in agent programming, focussing particularly on BDI-based agent programming languages, and discuss previous work on integrating AI techniques (including machine learning) in agent-oriented programming. We argue that the unique strengths of BDI agent languages provide an ideal framework for integrating the wide range of AI capabilities necessary for progress towards the next generation of intelligent systems. We identify a range of possible approaches to integrating AI into a BDI agent architecture. Some of these approaches, e.g. ‘AI as a service’, exploit immediate synergies between rapidly maturing AI techniques and agent programming, while others, e.g. ‘AI embedded into agents’, raise more fundamental research questions, and we sketch a programme of research directed towards identifying the most appropriate ways of integrating AI capabilities into agent programs.
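    As a schematic picture of the ‘AI as a service’ option, the sketch below wires a stub ML classifier into the belief-revision step of a single BDI deliberation pass. All names are illustrative, not drawn from the paper or from any agent platform:

        type Belief = String
        type Goal   = String
        type Plan   = [String]             -- a plan as a list of primitive actions

        -- Stub standing in for an external ML perception service.
        classify :: String -> Belief
        classify percept = "seen:" ++ percept

        -- One deliberation pass: revise beliefs from percepts, keep goals
        -- not yet believed achieved, and select a plan for the first one.
        deliberate :: [String] -> [Belief] -> [Goal] -> (Goal -> Plan) -> Maybe Plan
        deliberate percepts beliefs goals planLib =
          let beliefs' = beliefs ++ map classify percepts
              active   = [ g | g <- goals, g `notElem` beliefs' ]
          in case active of
               (g:_) -> Just (planLib g)
               []    -> Nothing

    The point of the ‘service’ framing is that classify could be swapped for a trained model without touching the deliberation logic.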

    On Complexity Bounds and Confluence of Parallel Term Rewriting

    We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures and provide a corresponding notion of runtime complexity parametric in the size of the start term. We propose automatic techniques to derive both upper and lower bounds on parallel complexity of rewriting that enable a direct reuse of existing techniques for sequential complexity. Our approach to finding lower bounds requires confluence of the parallel-innermost rewrite relation, thus we also provide effective sufficient criteria for proving confluence. The applicability and the precision of the method are demonstrated by the relatively light effort in extending the program analysis tool AProVE and by experiments on numerous benchmarks from the literature. (Under submission to Fundamenta Informaticae.)
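    The paper's confluence criteria are not reproduced here, but the flavour of such syntactic checks can be illustrated. The sketch below implements first-order unification and uses it to test that no two (renamed-apart) left-hand sides unify, i.e. that there are no root overlaps; a genuine criterion would also consider overlaps at proper subterms (critical pairs):

        import Data.Maybe (isNothing)

        data T = V String | F String [T] deriving (Eq, Show)
        type Subst = [(String, T)]

        apply :: Subst -> T -> T
        apply s (V x)    = maybe (V x) id (lookup x s)
        apply s (F f ts) = F f (map (apply s) ts)

        occurs :: String -> T -> Bool
        occurs x (V y)    = x == y
        occurs x (F _ ts) = any (occurs x) ts

        -- Most general unifier, if any (standard textbook algorithm).
        unify :: T -> T -> Maybe Subst
        unify (V x) t
          | t == V x   = Just []
          | occurs x t = Nothing
          | otherwise  = Just [(x, t)]
        unify t v@(V _) = unify v t
        unify (F f as) (F g bs)
          | f == g && length as == length bs = go as bs []
          | otherwise                        = Nothing
          where
            go [] [] s = Just s
            go (a:xs) (b:ys) s = do
              s' <- unify (apply s a) (apply s b)
              go xs ys ([(x, apply s' t) | (x, t) <- s] ++ s')
            go _ _ _ = Nothing

        -- Rename variables apart so rules sharing names do not falsely overlap.
        rename :: String -> T -> T
        rename p (V x)    = V (p ++ x)
        rename p (F f ts) = F f (map (rename p) ts)

        -- Crude sufficient check: no two distinct left-hand sides unify at the root.
        rootOverlapFree :: [T] -> Bool
        rootOverlapFree lhss =
          and [ isNothing (unify (rename "l" a) (rename "r" b))
              | (i, a) <- zip [0 ..] lhss
              , (j, b) <- zip [0 ..] lhss
              , i < (j :: Int) ]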

    Programming with narrowing: A tutorial

    Narrowing is a computation implemented by some declarative programming languages. Research in the last decade has produced significant results on the theory and foundation of narrowing, but little has been published on the use of narrowing in programming. This paper introduces narrowing from a programmer’s viewpoint; shows, by means of examples, when, why and how to use narrowing in a program; and discusses the impact of narrowing on software development activities such as design and maintenance. The examples are coded in the programming language Curry, which provides narrowing as a first-class feature.
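    Narrowing is a built-in evaluation mechanism in Curry, not in Haskell, so the Haskell sketch below only approximates it by enumerating instantiations of the unknowns; in Curry one would write the constraint x + y =:= 3 where x, y free and let narrowing find the bindings:

        data Nat = Z | S Nat deriving (Eq, Show)

        add :: Nat -> Nat -> Nat
        add Z     m = m
        add (S n) m = S (add n m)

        nats :: [Nat]
        nats = iterate S Z

        le :: Nat -> Nat -> Bool
        le Z     _     = True
        le (S a) (S b) = le a b
        le _     _     = False

        -- All pairs (x, y) with x + y == target: the answers a narrowing
        -- evaluator would produce by instantiating x and y step by step.
        splits :: Nat -> [(Nat, Nat)]
        splits target =
          [ (x, y) | x <- takeWhile (`le` target) nats
                   , y <- takeWhile (`le` target) nats
                   , add x y == target ]

    For splits (S (S Z)) this yields the three decompositions of 2, mirroring the multiple answers a functional-logic program returns for a single query.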

    A Modular Associative Commutative (AC) Congruence Closure Algorithm


    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades, with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
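    As a toy illustration of weakening-instead-of-deleting, the sketch below checks propositional entailment by brute force and ‘gently’ repairs a base by replacing each culprit sentence s with s ∨ ¬α until α is no longer entailed. Real pseudo-contraction and gentle-repair operators choose weakenings far more carefully; this only shows the shape of the idea (and achieves nothing when α is a tautology):

        import Data.List (nub)

        data Prop = Var String | Not Prop | And Prop Prop | Or Prop Prop
          deriving (Eq, Show)

        vars :: Prop -> [String]
        vars (Var x)   = [x]
        vars (Not p)   = vars p
        vars (And p q) = vars p ++ vars q
        vars (Or p q)  = vars p ++ vars q

        eval :: [(String, Bool)] -> Prop -> Bool
        eval v (Var x)   = maybe False id (lookup x v)
        eval v (Not p)   = not (eval v p)
        eval v (And p q) = eval v p && eval v q
        eval v (Or p q)  = eval v p || eval v q

        -- Brute-force entailment over all valuations of the atoms involved.
        entails :: [Prop] -> Prop -> Bool
        entails base goal = all ok valuations
          where
            atoms      = nub (concatMap vars (goal : base))
            valuations = mapM (\x -> [(x, True), (x, False)]) atoms
            ok v       = not (all (eval v) base) || eval v goal

        -- Weaken sentences left to right until alpha is no longer entailed.
        gentleRepair :: [Prop] -> Prop -> [Prop]
        gentleRepair base alpha = go [] base
          where
            go done [] = reverse done
            go done (s : rest)
              | entails (reverse done ++ s : rest) alpha =
                  go (Or s (Not alpha) : done) rest    -- weaken s, keep checking
              | otherwise = reverse done ++ s : rest   -- alpha already blocked

    For the base {p, ¬p ∨ q} and α = q, the repair weakens p to p ∨ ¬q, after which q is no longer entailed, while ¬p ∨ q survives untouched rather than being deleted.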

    Fifth Biennial Report: June 1999 – August 2001
