    Extensions of nominal terms

    This thesis studies two major extensions of nominal terms. In particular, we study an extension with λ-abstraction over nominal unknowns and atoms, and an extension with an arguably better theory of freshness and α-equivalence. Nominal terms possess two levels of variable: atoms a represent variable symbols, and unknowns X are `real' variables. As a syntax, they are designed to facilitate metaprogramming; unknowns are used to program on syntax with variable symbols. Originally, the role of nominal terms was interpreted narrowly. That is, they were seen solely as a syntax for representing partially-specified abstract syntax with binding. The main motivation of this thesis is to extend nominal terms so that they can be used for metaprogramming on proofs, programs, etc., and not just for metaprogramming on abstract syntax with binding. We therefore extend nominal terms in two significant ways: adding λ-abstraction over nominal unknowns and atoms, facilitating functional programming, and improving the theory of α-equivalence that nominal terms possess. Neither of the two extensions considered is trivial. The capturing substitution action of nominal unknowns implies that our notions of scope, intuited from working with syntax possessing a non-capturing substitution, such as the λ-calculus, are no longer applicable. As a result, notions of λ-abstraction and α-equivalence must be carefully reconsidered. In particular, the first research contribution of this thesis is the two-level λ-calculus, intuitively an intertwined pair of λ-calculi. As the name suggests, the two-level λ-calculus has two levels of variable, modelled by nominal atoms and unknowns, respectively. Both levels of variable can be λ-abstracted, and requisite notions of β-reduction are provided. The result is an expressive context-calculus. The traditional problems of handling α-equivalence and the failure of commutation between instantiation and β-reduction in context-calculi are handled through the use of two distinct levels of variable, swappings, and freshness side-conditions on unknowns, i.e. `nominal technology'. The second research contribution of this thesis is permissive nominal terms, an alternative form of nominal term. They retain the `nominal' first-order flavour of nominal terms (in fact, their grammars are almost identical) but forego the use of explicit freshness contexts. Instead, permissive nominal terms label unknowns with a permission sort, where permission sorts are infinite and coinfinite sets of atoms. This infinite-coinfinite nature means that permissive nominal terms recover two properties, which we call the `always-fresh' and `always-rename' properties, that nominal terms lack. We argue that these two properties bring the theory of α-equivalence on permissive nominal terms closer to `informal practice'. The reader may consider λ-abstraction and α-equivalence so familiar as to be `solved problems'. The work embodied in this thesis stands testament to the fact that this is not the case. Considering λ-abstraction and α-equivalence in the context of two levels of variable poses some new and interesting problems and throws light on some deep questions related to scope and binding.
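
    To make the two ideas concrete, here is a minimal Haskell sketch, with hypothetical names of our own choosing, of nominal term syntax (atoms, unknowns under suspended swappings, atom-abstraction) and of permission sorts represented finitely as a fixed infinite-coinfinite base set adjusted by finitely many exceptions. The `always-fresh' property then corresponds to the fact that an atom outside any permission sort can always be computed. This is an illustration only, not the thesis's formal definitions.

        -- Illustrative sketch; not the thesis's formal definitions.
        import qualified Data.Set as Set

        type Atom = Int

        -- A permutation represented as a list of swappings (a b).
        type Perm = [(Atom, Atom)]

        -- Nominal terms: atoms a, unknowns pi.X under a suspended permutation,
        -- atom-abstraction [a]t, and term-formers applied to arguments.
        data Term
          = TAtom Atom
          | TUnknown Perm String
          | TAbs Atom Term
          | TApp String [Term]
          deriving Show

        swap :: (Atom, Atom) -> Atom -> Atom
        swap (a, b) c
          | c == a    = b
          | c == b    = a
          | otherwise = c

        applyPerm :: Perm -> Atom -> Atom
        applyPerm perm a = foldr swap a perm

        -- A permission sort is an infinite, coinfinite set of atoms.  One finite
        -- representation: a default base set (say, the even atoms) adjusted by
        -- finitely many atoms added to or excluded from it.
        data PermissionSort = PS { addedAtoms    :: Set.Set Atom
                                 , excludedAtoms :: Set.Set Atom }

        member :: Atom -> PermissionSort -> Bool
        member a (PS add excl)
          | a `Set.member` excl = False
          | a `Set.member` add  = True
          | otherwise           = even a

        -- `Always fresh': the complement of a permission sort is infinite,
        -- so an atom outside it always exists and can be found by search.
        freshFor :: PermissionSort -> Atom
        freshFor ps = head [a | a <- [1,3 ..], not (member a ps)]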

    Towards a canonical classical natural deduction system

    This paper studies a new classical natural deduction system, presented as a typed calculus named lambda-mu-let. It is designed to be isomorphic to Curien and Herbelin's lambda-mu-mu~-calculus, both at the level of proofs and of reduction, and the isomorphism is based on the correct correspondence between cut (resp. left-introduction) in sequent calculus and substitution (resp. elimination) in natural deduction. It is a combination of Parigot's lambda-mu-calculus with the idea of "coercion calculus" due to Cervesato and Pfenning, accommodating let-expressions in a surprising way: they expand Parigot's syntactic class of named terms. This calculus and the mentioned isomorphism Theta offer three missing components of the proof theory of classical logic: a canonical natural deduction system; a robust process of "read-back" of calculi in the sequent calculus format into natural deduction syntax; and a formalization of the usual semantics of the lambda-mu-mu~-calculus, which explains co-terms and cuts as, respectively, contexts and hole-filling instructions. lambda-mu-let is not yet another classical calculus, but rather a canonical reflection in natural deduction of the impeccable treatment of classical logic by sequent calculus; it provides the "read-back" map and the formalized semantics, based on the precise notions of context and "hole-expression" provided by lambda-mu-let. We use "read-back" to achieve a precise connection with Parigot's lambda-mu, and to derive lambda-calculi for call-by-value combining control and let-expressions in a logically founded way. Finally, the semantics, when fully developed, can be inverted at each syntactic category. This development gives us license to see sequent calculus as the semantics of natural deduction; and it uncovers a new syntactic concept in lambda-mu-mu~ ("co-context"), with which one can give a new definition of eta-reduction.
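
    For orientation, the grammar of Curien and Herbelin's calculus, as it is usually presented in the literature (the standard presentation, not a definition taken from this paper), makes the context reading visible: a command (cut) pairs a term t with a co-term e, and the semantics described above reads e as a context and the cut as the instruction to fill its hole with t.

        \[
        \begin{array}{rcll}
          c & ::= & \langle t \mathrel{\|} e \rangle                  & \text{commands (cuts)} \\
          t & ::= & x \mid \lambda x.\, t \mid \mu\alpha.\, c         & \text{terms} \\
          e & ::= & \alpha \mid t \cdot e \mid \tilde{\mu}x.\, c      & \text{co-terms (contexts)}
        \end{array}
        \]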

    Conservative extensions of the λ-calculus for the computational interpretation of sequent calculus

    This thesis offers a study of the Curry-Howard correspondence for a certain fragment (the canonical fragment) of sequent calculus, based on an investigation of the relationship between cut elimination in that fragment and normalisation. The output of this study may be summarised in a new assignment θ, to proofs in the canonical fragment, of terms from certain conservative extensions of the λ-calculus. This assignment, in a sense, is an optimal improvement over the traditional assignment φ, in that it is an isomorphism both in the sense of a sound bijection of proofs and of an isomorphism of normalisation procedures. First, a systematic definition of calculi of cut-elimination for the canonical fragment is carried out. We study various right protocols, i.e. cut-elimination procedures which give priority to right permutation. We pay particular attention to the issue of which parts of the procedure are to be implicit, that is, performed by meta-operators in the style of natural deduction. Next, a comprehensive study of the relationship between normalisation and these calculi of cut-elimination is carried out, producing several new insights of independent interest, particularly concerning a generalisation of Prawitz's mapping of normal natural deduction proofs into sequent calculus. This study suggests the definition of conservative extensions of natural deduction (and of the λ-calculus) based on the idea of a built-in distinction between applicative term and application, and also between head and tail application. These extensions offer perfect counterparts to the calculi in the canonical fragment, as established by the mentioned mapping θ. Conceptual rearrangements in proof theory deriving from these extensions of natural deduction are discussed. Finally, we argue that, computationally, both the canonical fragment and natural deduction (in the extended sense introduced here) correspond to extensions of the λ-calculus with applicative terms, and that what distinguishes them is the way applicative terms are structured. In the canonical fragment, the head application of an applicative term is "focused". This, in turn, explains the following observation: some reduction rules of calculi in the canonical fragment may be interpreted as transition rules for abstract call-by-name machines.
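
    As a rough illustration of the head/tail distinction (our own sketch, not the thesis's calculi): ordinary λ-terms can be restructured into a "spine" form in which an applicative term is a head together with the list of its arguments, so the head application is built into the syntax and separated from the tail applications.

        -- Illustrative sketch only; not the thesis's conservative extensions.
        -- Ordinary application trees.
        data Plain = PVar String | PLam String Plain | PApp Plain Plain

        -- Spine form: an applicative term is a head applied to a non-empty
        -- tail of arguments, so head and tail applications are distinguished
        -- structurally rather than by inspection.
        data Tm
          = Var String
          | Lam String Tm
          | AppTm Head [Tm]
          deriving Show

        data Head
          = HVar String   -- head is a variable
          | HTm Tm        -- head is a general term (a beta-redex when a Lam)
          deriving Show

        -- Flatten left-nested applications, separating head from tail.
        toSpine :: Plain -> Tm
        toSpine (PVar x)   = Var x
        toSpine (PLam x t) = Lam x (toSpine t)
        toSpine (PApp t u) = case toSpine t of
          AppTm h args -> AppTm h (args ++ [toSpine u])
          Var x        -> AppTm (HVar x) [toSpine u]
          other        -> AppTm (HTm other) [toSpine u]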

    Axiomatic Architecture of Scientific Theories

    The received concepts of axiomatic theory and axiomatic method, which stem from David Hilbert, need a systematic revision in view of more recent mathematical and scientific axiomatic practices, which do not fully follow in Hilbert's footsteps and re-establish some older historical patterns of axiomatic thinking in unexpected new forms. In this work I motivate, formulate and justify such a revised concept of axiomatic theory, which for a variety of reasons I call constructive, and then argue that it can better serve as a formal representational tool in mathematics and science than the received concept.

    The use of proof plans in tactic synthesis

    We undertake a programme of tactic synthesis. We first formalize the notion of a tactic as a rewrite rule, then give a correctness criterion for this by means of a reflection mechanism in the constructive type theory OYSTER. We further formalize the notion of a tactic specification, given as a synthesis goal and a decidability goal. We use a proof planner, CLAM, to guide the search for inductive proofs of these, and are able to successfully synthesize several tactics in this fashion. This involves two extensions to existing methods: context-sensitive rewriting and higher-order wave rules. Further, we show that from a proof of the decidability goal one may compile to a Prolog program a pseudo-tactic which may be run to efficiently simulate the input/output behaviour of the synthetic tactic.
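
    The following Haskell fragment sketches the "tactic as rewrite rule" reading over a toy expression type (the names and the toy type are ours; the thesis works inside OYSTER's type theory). The Maybe result mirrors the decidability goal: a tactic must decide whether it applies.

        -- Illustrative sketch; toy term type, not OYSTER.
        data Expr = Zero | Suc Expr | Plus Expr Expr deriving (Eq, Show)

        type Tactic = Expr -> Maybe Expr

        -- A tactic given by the rewrite rule  0 + n  ~>  n.
        plusZeroL :: Tactic
        plusZeroL (Plus Zero n) = Just n
        plusZeroL _             = Nothing

        -- Try the first tactic; fall back to the second if it does not apply.
        orElse :: Tactic -> Tactic -> Tactic
        orElse t1 t2 e = maybe (t2 e) Just (t1 e)

        -- Apply a tactic once, anywhere in the expression (outermost first).
        anywhere :: Tactic -> Tactic
        anywhere t e = case t e of
          Just e' -> Just e'
          Nothing -> case e of
            Plus a b -> case anywhere t a of
              Just a' -> Just (Plus a' b)
              Nothing -> Plus a <$> anywhere t b
            Suc a    -> Suc <$> anywhere t a
            Zero     -> Nothing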

    Beyond Logic. Proceedings of the Conference held in Cerisy-la-Salle, 22-27 May 2017

    The project "Beyond Logic" is devoted to what hypothetical reasoning is all about when we go beyond the realm of "pure" logic into the world where logic is applied. As such extralogical areas we have chosen philosophy of science as an application within philosophy, informatics as an application within the formal sciences, and law as an application within the field of social interaction. The aim of the conference was to allow philosophers, logicians and computer scientists to present their work in connection with these three areas. The conference took place 22-27 May, 2017 in Cerisy-la-Salle at the Centre Culturel International de Cerisy. The proceedings collect abstracts, slides and papers of the presentations given, as well as a contribution from a speaker who was unable to attend.

    Advances in Proof-Theoretic Semantics

    Logic; Mathematical Logic and Foundations; Mathematical Logic and Formal Language

    Towards a logical foundation of randomized computation

    This dissertation investigates the relations between logic and theoretical computer science in the probabilistic setting. It is motivated by two main considerations. On the one hand, since their appearance in the 1960s-1970s, probabilistic models have become increasingly pervasive in several fast-growing areas of computer science. On the other, the study and development of (deterministic) computational models has considerably benefitted from the mutual interchanges between logic and computer science; nevertheless, probabilistic computation was only marginally touched by such fruitful interactions. The goal of this thesis is precisely to start bridging this gap, by developing logical systems corresponding to specific aspects of randomized computation and, thereby, generalizing standard achievements to the probabilistic realm. To do so, our key ingredient is the introduction of new, measure-sensitive quantifiers associated with quantitative interpretations. The dissertation is tripartite. In the first part, we focus on the relation between logic and counting complexity classes. We show that, by means of our classical counting propositional logic, it is possible to generalize to counting classes the standard results by Cook and by Meyer and Stockmeyer linking propositional logic and the polynomial hierarchy. Indeed, we show that the validity problem for counting-quantified formulae captures the corresponding level in Wagner's hierarchy. In the second part, we consider programming language theory. Type systems for randomized λ-calculi, also guaranteeing various forms of termination properties, were introduced in recent decades, but these are not "logically oriented" and no Curry-Howard correspondence is known for them. Following intuitions coming from counting logics, we define the first probabilistic version of the correspondence. Finally, we consider the relationship between arithmetic and computation. We present a quantitative extension of the language of arithmetic able to formalize basic results from probability theory. This language is also our starting point for defining randomized bounded theories and, thus, for generalizing canonical results by Buss.
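
    To give a flavour of the measure-sensitive idea in the Boolean case (an illustrative Haskell sketch with our own names, not the dissertation's syntax): over Boolean assignments with the uniform measure, a counting quantifier asks whether the fraction of values of a variable making a formula true meets a threshold q.

        -- Illustrative sketch of Boolean counting quantifiers.
        import Data.Ratio (Ratio, (%))

        data Phi
          = PVar String
          | PNot Phi
          | PAnd Phi Phi
          | POr  Phi Phi
          | CQ (Ratio Integer) String Phi  -- CQ q x phi: phi holds for at
                                           -- least a q-fraction of the
                                           -- values of x

        eval :: [(String, Bool)] -> Phi -> Bool
        eval env (PVar x)   = maybe False id (lookup x env)
        eval env (PNot p)   = not (eval env p)
        eval env (PAnd p q) = eval env p && eval env q
        eval env (POr  p q) = eval env p || eval env q
        eval env (CQ q x p) =
          let hits = length [b | b <- [False, True], eval ((x, b) : env) p]
          in  fromIntegral hits % 2 >= q

        -- Example: "phi holds for at least half the values of x", with
        -- phi = x or y.  Under y = False exactly one of the two values of
        -- x works, so the threshold 1/2 is met.
        example :: Bool
        example = eval [("y", False)] (CQ (1 % 2) "x" (POr (PVar "x") (PVar "y")))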