    Implicit complexity for coinductive data: a characterization of corecurrence

    We propose a framework for reasoning about programs that manipulate coinductive data as well as inductive data. Our approach is based on using equational programs, which support a seamless combination of computation and reasoning, and on using productivity (fairness), rather than bisimulation, as the fundamental assertion; the latter is expressible in terms of the former. As an application of this framework, we give an implicit characterization of corecurrence: a function is definable using corecurrence iff its productivity is provable using coinduction for formulas in which data-predicates do not occur negatively. This is an analog, albeit in weaker form, of a characterization of recurrence (i.e. primitive recursion) in [Leivant, Unipolar induction, TCS 318, 2004].
    Comment: In Proceedings DICE 2011, arXiv:1201.034
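
    To make productivity concrete, here is a minimal Haskell sketch of corecurrence on streams; the names (Stream, from, smap, takeS) are our own illustrations, not the paper's equational-program notation.

        -- Coinductive data: an infinite stream of values.
        data Stream a = Cons a (Stream a)

        -- Defined by corecurrence: each step emits one constructor before
        -- recursing, so the definition is productive (any finite prefix is
        -- computable in finite time).
        from :: Integer -> Stream Integer
        from n = Cons n (from (n + 1))

        -- One element consumed, one produced per step; productive by the
        -- same guardedness argument.
        smap :: (a -> b) -> Stream a -> Stream b
        smap f (Cons x xs) = Cons (f x) (smap f xs)

        -- Observe a finite prefix of an infinite stream.
        takeS :: Int -> Stream a -> [a]
        takeS 0 _           = []
        takeS k (Cons x xs) = x : takeS (k - 1) xs

    For example, takeS 3 (smap (* 2) (from 0)) evaluates to [0,2,4] even though both arguments denote infinite objects.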

    Global semantic typing for inductive and coinductive computing

    Inductive and coinductive types are commonly construed as ontological (Church-style) types, denoting canonical data-sets such as natural numbers, lists, and streams. For various purposes, notably the study of programs in the context of global semantics, it is preferable to think of types as semantic properties (Curry-style). Intrinsic theories were introduced in the late 1990s to provide a purely logical framework for reasoning about programs and their semantic types. We extend them here to data given by any combination of inductive and coinductive definitions. This approach is of interest because it fits tightly with the syntactic, semantic, and proof-theoretic fundamentals of formal logic, with potential applications in implicit computational complexity as well as in the extraction of programs from proofs. We prove a Canonicity Theorem, showing that the global definition of program typing, via the usual (Tarskian) semantics of first-order logic, agrees with the operational semantics of programs in the intended model. Finally, we show that every intrinsic theory is interpretable in a conservative extension of first-order arithmetic. This means that quantification over infinite data objects does not lead, on its own, to proof-theoretic strength beyond that of Peano Arithmetic. Intrinsic theories are perfectly amenable to formulas-as-types Curry-Howard morphisms, and were used to characterize major computational complexity classes. Their extensions, described here, have similar potential, which has already been applied.
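
    As a rough illustration of the Curry-style reading (our own sketch in the spirit of intrinsic theories, not the paper's formal system), typing becomes a provable implication between data predicates rather than an ontological assignment: for an equational program d with d(0) = 0 and d(s x) = s(s(d x)), the statement that d maps naturals to naturals is the first-order formula

        \forall x \, \bigl( N(x) \rightarrow N(d(x)) \bigr)

    where N is the inductively defined data predicate generated by N(0) and \forall x \, (N(x) \rightarrow N(s(x))), with induction as the corresponding least-fixpoint rule; a predicate for streams is defined dually, with a coinduction rule in place of induction.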

    An Effective Fixpoint Semantics for Linear Logic Programs

    In this paper we investigate the theoretical foundations of a new bottom-up semantics for linear logic programs, more precisely for the fragment of LinLog consisting of the language LO enriched with the constant 1. We use constraints to symbolically and finitely represent possibly infinite collections of provable goals, and we define a fixpoint semantics based on a new operator, in the style of T_P, working over constraints. An application of the fixpoint operator can be computed algorithmically. As for sufficient conditions for termination, we show that the fixpoint computation is guaranteed to converge for propositional LO. To our knowledge, this is the first attempt to define an effective fixpoint semantics for linear logic programs. As an application of our framework, we also present a formal investigation of the relation between LO and Disjunctive Logic Programming (DLP). Using an approach based on abstract interpretation, we show that the DLP fixpoint semantics can be viewed as an abstraction of our semantics for LO. We prove that the resulting abstraction is correct and complete for an interesting class of LO programs encoding Petri nets.
    Comment: 39 pages, 5 figures. To appear in Theory and Practice of Logic Programming.
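
    For intuition, here is a minimal Haskell sketch of T_P-style bottom-up evaluation for an ordinary propositional program; the paper's constraint-based symbolic representation for LO is elided, and the names (Clause, tp, lfp) are our own.

        import qualified Data.Set as Set

        type Atom = String

        -- A definite clause: the head holds if all body atoms hold.
        data Clause = Clause { clHead :: Atom, clBody :: [Atom] }

        -- One application of the immediate-consequence operator T_P.
        tp :: [Clause] -> Set.Set Atom -> Set.Set Atom
        tp prog i = Set.fromList
          [ clHead c | c <- prog, all (`Set.member` i) (clBody c) ]

        -- Kleene iteration from the empty interpretation; convergence is
        -- guaranteed here because the propositional program is finite,
        -- mirroring the paper's termination condition for propositional LO.
        lfp :: [Clause] -> Set.Set Atom
        lfp prog = go Set.empty
          where go i = let j = Set.union i (tp prog i)
                       in if j == i then i else go j

    For instance, lfp [Clause "p" [], Clause "q" ["p"]] converges in two iterations to the set {p, q}.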

    The Structure of Differential Invariants and Differential Cut Elimination

    The biggest challenge in hybrid systems verification is the handling of differential equations. Because computable closed-form solutions exist only for very simple differential equations, proof certificates have been proposed for more scalable verification. Search procedures for these proof certificates are still rather ad hoc, though, because the problem structure is poorly understood. We investigate differential invariants, which define an induction principle for differential equations and which can be checked for invariance along a differential equation purely by means of their differential structure, without having to solve the equation. We study the structural properties of differential invariants. To analyze trade-offs in proof search complexity, we identify more than a dozen relations between several classes of differential invariants and compare their deductive power. As our main results, we analyze the deductive power of differential cuts and of differential invariants with auxiliary differential variables. We refute the differential cut elimination hypothesis and show that, unlike standard cuts, differential cuts are fundamental proof principles that strictly increase deductive power. We also prove that the deductive power increases further when auxiliary differential variables are added to the dynamics.
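
    A standard worked instance (our own illustration, not an example from the paper) of checking a differential invariant without solving the ODE: for the rotational dynamics x' = -y, y' = x, the candidate invariant x^2 + y^2 = c is verified by differentiating along the vector field,

        \frac{d}{dt}\left(x^2 + y^2\right) = 2x\,x' + 2y\,y' = 2x(-y) + 2y\,x = 0,

    so x^2 + y^2 stays constant along every solution, and the equation remains true whenever it holds initially. The point is that only the differential structure of the dynamics is used; the trigonometric solutions never appear.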

    Computational reverse mathematics and foundational analysis

    Reverse mathematics studies which subsystems of second-order arithmetic are equivalent to key theorems of ordinary, non-set-theoretic mathematics. The main philosophical application of reverse mathematics proposed thus far is foundational analysis, which explores the limits of different foundations for mathematics in a formally precise manner. This paper gives a detailed account of the motivations and methodology of foundational analysis, which have heretofore been left largely implicit in the practice. It then shows how this account can be fruitfully applied in the evaluation of major foundational approaches by a careful examination of two case studies: a partial realization of Hilbert's program due to Simpson [1988], and predicativism in the extended form due to Feferman and Schütte. Shore [2010, 2013] proposes that equivalences in reverse mathematics be proved in the same way as inequivalences, namely by considering only $\omega$-models of the systems in question. Shore refers to this approach as computational reverse mathematics. This paper shows that despite some attractive features, computational reverse mathematics is inappropriate for foundational analysis, for two major reasons. Firstly, the computable entailment relation employed in computational reverse mathematics does not preserve justification for the foundational programs above. Secondly, computable entailment is a $\Pi^1_1$-complete relation, and hence employing it commits one to theoretical resources which outstrip those available within any foundational approach that is proof-theoretically weaker than $\Pi^1_1\text{-}\mathsf{CA}_0$.
    Comment: Submitted. 41 pages.

    Model Checking Linear Logic Specifications

    The overall goal of this paper is to investigate the theoretical foundations of algorithmic verification techniques for first-order linear logic specifications. The fragment of linear logic we consider is based on the linear logic programming language LO enriched with universally quantified goal formulas. Although LO was originally introduced as a theoretical foundation for extensions of logic programming languages, it can also be viewed as a very general language for specifying a wide range of infinite-state concurrent systems. Our approach is based on the relation between backward reachability and provability highlighted in our previous work on propositional LO programs. Following this line of research, we define here a general framework for the bottom-up evaluation of first-order linear logic specifications. The evaluation procedure is based on an effective fixpoint operator working on a symbolic representation of infinite collections of first-order linear logic formulas. The theory of well quasi-orderings can be used to provide sufficient conditions for the termination of the evaluation of non-trivial fragments of first-order linear logic.
    Comment: 53 pages, 12 figures. Under consideration for publication in Theory and Practice of Logic Programming.
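
    The backward-reachability view can itself be sketched as a fixpoint; the following Haskell fragment (our own finite-state illustration; the paper's symbolic, well-quasi-ordering-based representation of infinite sets of LO formulas is elided) saturates a target set under the predecessor operator.

        import qualified Data.Set as Set

        type State = Int

        -- Predecessors of a set of states under a transition relation.
        pre :: [(State, State)] -> Set.Set State -> Set.Set State
        pre trans targets =
          Set.fromList [ s | (s, t) <- trans, t `Set.member` targets ]

        -- Backward reachability: all states from which the target set can
        -- be reached, computed as the least fixpoint of adding predecessors.
        backwardReach :: [(State, State)] -> Set.Set State -> Set.Set State
        backwardReach trans = go
          where go acc = let acc' = Set.union acc (pre trans acc)
                         in if acc' == acc then acc else go acc'

    For example, backwardReach [(1,2),(2,3)] (Set.singleton 3) yields the set {1,2,3}.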

    Relating Nominal and Higher-order Abstract Syntax Specifications

    Nominal abstract syntax and higher-order abstract syntax provide means for describing binding structure at a higher level than traditional techniques. These approaches have spawned two different communities, which have developed along similar lines but with subtle differences that make them difficult to relate. The nominal abstract syntax community has devices like names, freshness, name-abstractions with variable capture, and the new-quantifier, whereas the higher-order abstract syntax community has devices like lambda-binders, lambda-conversion, raising, and the nabla-quantifier. This paper aims to unify these communities and provide a concrete correspondence between their different devices. In particular, we develop a semantics-preserving translation from alpha-Prolog, a nominal abstract syntax based logic programming language, to G-, a higher-order abstract syntax based logic programming language. We also discuss higher-order judgments, a common and powerful tool for specifications with higher-order abstract syntax, and we show how these can be incorporated into G-. This establishes G- as a language with the power of higher-order abstract syntax, the fine-grained variable control of nominal specifications, and the desirable properties of higher-order judgments.
    Comment: To appear in PPDP 201
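
    To fix intuitions, here is a minimal Haskell sketch (our own illustration; the paper works in alpha-Prolog and G-, not Haskell) contrasting first-order named syntax, where binding must be managed explicitly, with a higher-order encoding, where the meta language handles it.

        -- Nominal-style first-order syntax: binders carry concrete names,
        -- so alpha-equivalence and capture-avoiding substitution must be
        -- programmed explicitly.
        type Name = String
        data TermN = VarN Name | AppN TermN TermN | LamN Name TermN

        -- Higher-order abstract syntax: the body of a binder is a function
        -- of the meta language, so alpha-conversion and substitution are
        -- inherited from the host.
        data TermH = AppH TermH TermH | LamH (TermH -> TermH)

        -- Substituting an argument into an abstracted body is just
        -- function application.
        betaH :: TermH -> TermH
        betaH (AppH (LamH body) arg) = body arg
        betaH t                      = t

        -- The identity combinator in both styles.
        idN :: TermN
        idN = LamN "x" (VarN "x")

        idH :: TermH
        idH = LamH (\x -> x)

    The trade-off the paper addresses is visible even here: the named representation exposes fine-grained control over variables, while the higher-order one gets alpha-conversion and substitution for free.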