
    Cut Elimination for a Logic with Induction and Co-induction

    Proof search has been used to specify a wide range of computation systems. In order to build a framework for reasoning about such specifications, we make use of a sequent calculus involving induction and co-induction. These proof principles are based on a proof-theoretic (rather than set-theoretic) notion of definition. Definitions are akin to logic programs, where the left and right rules for defined atoms allow one to view theories as "closed" or as defining fixed points. The use of definitions and free equality makes it possible to reason intensionally about syntax. We add, in a consistent way, rules for pre- and post-fixed points, thus allowing the user to reason inductively and co-inductively about properties of computational systems making full use of higher-order abstract syntax. Consistency is guaranteed via cut elimination, where we give the first, to our knowledge, cut-elimination procedure in the presence of general inductive and co-inductive definitions. Comment: 42 pages, submitted to the Journal of Applied Logic.
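
    As a schematic illustration (the notation here is generic, not necessarily the paper's): an inductively defined atom is read as a least fixed point \mu B of its definition body B, and the pre-/post-fixed-point rules take roughly the following form, where the invariant S is a pre-fixed point for induction and a post-fixed point for co-induction:

        \[
        \frac{B\,S\,\vec{x} \vdash S\,\vec{x} \qquad \Gamma,\ S\,\vec{t} \vdash C}
             {\Gamma,\ \mu B\,\vec{t} \vdash C}\ (\text{induction})
        \qquad
        \frac{S\,\vec{x} \vdash B\,S\,\vec{x} \qquad \Gamma \vdash S\,\vec{t}}
             {\Gamma \vdash \nu B\,\vec{t}}\ (\text{co-induction})
        \]

    Cut elimination is what justifies adding such rules to the rest of the calculus without losing consistency.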

    Confluence via strong normalisation in an algebraic λ-calculus with rewriting

    The linear-algebraic lambda-calculus and the algebraic lambda-calculus are untyped lambda-calculi extended with arbitrary linear combinations of terms. The former presents the axioms of linear algebra in the form of a rewrite system, while the latter uses equalities. When given by rewrites, algebraic lambda-calculi are not confluent unless further restrictions are added. We provide a type system for the linear-algebraic lambda-calculus enforcing strong normalisation, which gives back confluence. The type system allows an abstract interpretation in System F. Comment: In Proceedings LSFA 2011, arXiv:1203.542
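
    To make the rewriting flavour concrete, here is a small Haskell sketch (the datatype and the choice of rules are illustrative, not the paper's exact calculus): terms carry formal linear combinations, and vector-space axioms are oriented as left-to-right rewrites.

        -- Illustrative sketch, not the exact linear-algebraic lambda-calculus:
        -- untyped lambda-terms extended with formal linear combinations,
        -- with a few vector-space axioms oriented as left-to-right rewrites.
        data Term
          = Var String
          | Lam String Term
          | App Term Term
          | Zero                  -- the empty linear combination
          | Scale Double Term     -- scalar multiple  a.t
          | Add Term Term         -- sum              t + u
          deriving (Show, Eq)

        -- One parallel rewrite pass; the real calculus has more rules
        -- (e.g. factoring a.t + b.t -> (a+b).t) and side conditions.
        step :: Term -> Term
        step (App (Add t u) v)     = Add (App t v) (App u v) -- (t+u) v -> t v + u v
        step (App (Scale a t) v)   = Scale a (App t v)       -- (a.t) v -> a.(t v)
        step (App Zero _)          = Zero                    -- 0 v     -> 0
        step (Scale a (Add t u))   = Add (Scale a t) (Scale a u)
        step (Scale a (Scale b t)) = Scale (a * b) t
        step (Add t u)             = Add (step t) (step u)
        step (App t u)             = App (step t) (step u)
        step (Lam x t)             = Lam x (step t)
        step t                     = t

    In the untyped setting, rules like these interact badly with non-normalising terms (e.g. a term b with b -> t + b lets b - b rewrite both to 0 and to t + (b - b)); this is the kind of confluence failure the type system eliminates by enforcing strong normalisation.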

    Linear Logic and Strong Normalization

    Strong normalization for linear logic requires elaborate rewriting techniques. In this paper we give a new presentation of MELL proof nets, without any commutative cut-elimination rule. We show how this feature induces a compact and simple proof of strong normalization, via reducibility candidates. It is the first proof of strong normalization for MELL which does not rely on any form of confluence, and so it smoothly scales up to full linear logic. Moreover, it is an axiomatic proof: more generally, it holds for every set of rewriting rules satisfying three very natural requirements with respect to substitution: commutation with promotion, full composition, and Kesner's IE property. The insight indeed comes from the theory of explicit substitutions, and from looking at the exponentials as a substitution device.

    Nominal Abstraction

    Recursive relational specifications are commonly used to describe the computational structure of formal systems. Recent research in proof theory has identified two features that facilitate direct, logic-based reasoning about such descriptions: the interpretation of atomic judgments through recursive definitions and an encoding of binding constructs via generic judgments. However, logics encompassing these two features do not currently allow for the definition of relations that embody dynamic aspects related to binding, a capability needed in many reasoning tasks. We propose a new relation between terms called nominal abstraction as a means for overcoming this deficiency. We incorporate nominal abstraction into a rich logic also including definitions, generic quantification, induction, and co-induction that we then prove to be consistent. We present examples to show that this logic can provide elegant treatments of binding contexts that appear in many proofs, such as those establishing properties of typing calculi and of arbitrarily cascading substitutions that play a role in reducibility arguments. Comment: To appear in the Journal of Information and Computation.
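
    Roughly (this is a paraphrase; the paper's definition carries precise freshness side conditions), nominal abstraction relates a term with n outer abstractions to the result of instantiating those abstractions with distinct fresh nominal constants:

        \[
        (\lambda x_1 \ldots \lambda x_n.\, s) \succeq t
        \quad\text{iff}\quad
        s[c_1/x_1, \ldots, c_n/x_n] = t
        \]

    for some distinct nominal constants c_1, ..., c_n not occurring in \lambda x_1 \ldots \lambda x_n.\, s. Used in a definition, this makes it possible to say, for example, that a typing context contains a binding x:T for some fresh name x.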

    Survey on counting special types of polynomials

    Most integers are composite and most univariate polynomials over a finite field are reducible. The Prime Number Theorem and a classical result of Gauß count the remaining ones, approximately and exactly. For polynomials in two or more variables, the situation changes dramatically: most multivariate polynomials are irreducible. This survey presents counting results for some special classes of multivariate polynomials over a finite field, namely the reducible ones, the s-powerful ones (divisible by the s-th power of a nonconstant polynomial), the relatively irreducible ones (irreducible but reducible over an extension field), the decomposable ones, and also for reducible space curves. These come as exact formulas and as approximations with relative errors that essentially decrease exponentially in the input size. Furthermore, a univariate polynomial f is decomposable if f = g o h for some nonlinear polynomials g and h. It is intuitively clear that the decomposable polynomials form a small minority among all polynomials. The tame case, where the characteristic p of Fq does not divide n = deg f, is fairly well understood, and we obtain closely matching upper and lower bounds on the number of decomposable polynomials. In the wild case, where p does divide n, the bounds are less satisfactory, in particular when p is the smallest prime divisor of n and divides n exactly twice. The crux of the matter is to count the number of collisions, where essentially different (g, h) yield the same f. We present a classification of all collisions at degree n = p^2, which yields an exact count of those decomposable polynomials. Comment: to appear in Jaime Gutierrez, Josef Schicho & Martin Weimann (editors), Computer Algebra and Polynomials, Lecture Notes in Computer Science.
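
    For reference, the exact univariate count alluded to (a standard fact going back to Gauß; the notation is mine): the number of monic irreducible polynomials of degree n over Fq is

        \[
        I_q(n) = \frac{1}{n} \sum_{d \mid n} \mu(d)\, q^{n/d},
        \]

    where \mu is the Möbius function. Since the d = 1 term dominates, I_q(n) is approximately q^n / n, so a random monic polynomial of degree n is irreducible with probability about 1/n.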

    Higher-Order Termination: from Kruskal to Computability

    Termination is a major question in both logic and computer science. In logic, termination is at the heart of proof theory, where it is usually called strong normalization (of cut elimination). In computer science, termination has always been an important issue for showing programs correct. In the early days of logic, strong normalization was usually shown by assigning ordinals to expressions in such a way that eliminating a cut would yield an expression with a smaller ordinal. In the early days of verification, computer scientists used similar ideas, interpreting the arguments of a program call by a natural number, such as their size. Showing that the size of the arguments decreases at each recursive call gives a termination proof of the program, which is however rather weak, since it can only yield quite small ordinals. In the sixties, Tait invented a new method for showing cut elimination of natural deduction, based on a predicate over the set of terms such that membership of an expression in the predicate implied the strong normalization property for that expression. Because the predicate is defined by induction on types, or even as a fixpoint, this method could yield much larger ordinals. Later generalized by Girard under the name of reducibility or computability candidates, the method proved very effective in proving the strong normalization property of typed lambda-calculi..
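
    To make the idea concrete, here is the predicate in its simplest, simply-typed form (a textbook rendering, not this paper's higher-order setting): computability is defined by induction on types,

        \[
        \mathrm{Red}_{\iota} = \mathrm{SN},
        \qquad
        \mathrm{Red}_{A \to B} = \{\, t \mid \forall u \in \mathrm{Red}_A .\ t\,u \in \mathrm{Red}_B \,\},
        \]

    and one then shows both that Red_T is contained in SN and that every well-typed term of type T belongs to Red_T, from which strong normalization follows. Girard's candidates generalize this scheme to impredicative systems such as System F.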