'Galculator': functional prototype of a Galois-connection based proof assistant
Galculator is the name of the prototype of a proof assistant of a special brand: it is solely based on the algebra of Galois connections. When combined with the pointfree transform and tactics such as the indirect equality principle, Galois connections offer a very powerful, generic device for tackling the complexity of proofs in program verification. The paper describes the architecture of the current Galculator prototype, which is implemented in Haskell in order to steer types as much as possible. The prospect of integrating the Galculator with other proof assistants such as Coq is also discussed.
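The algebraic device the abstract refers to can be sketched in a few lines of Haskell. This is a minimal illustration with made-up names (GaloisConnection, connected, indirectlyEqual), not the Galculator's actual interface: a Galois connection is a pair of adjoint functions, and the indirect equality principle reduces an equality to a family of inequalities.

```haskell
-- A minimal model of a Galois connection between two preorders
-- (here both are Int under <=). Illustrative names only; this is
-- not the Galculator's real API.
data GaloisConnection = GC
  { lowerAdjoint :: Int -> Int
  , upperAdjoint :: Int -> Int
  }

-- The defining equivalence:  f x <= y  <=>  x <= g y.
connected :: GaloisConnection -> Int -> Int -> Bool
connected (GC f g) x y = (f x <= y) == (x <= g y)

-- Classic instance on the naturals: multiplication by a positive
-- constant c is the lower adjoint of whole division by c.
timesDiv :: Int -> GaloisConnection
timesDiv c = GC (c *) (`div` c)

-- Indirect equality: a == b iff (x <= a) == (x <= b) for all x.
-- Checked here on a finite sample only, so it is a test, not a proof.
indirectlyEqual :: [Int] -> Int -> Int -> Bool
indirectlyEqual sample a b = all (\x -> (x <= a) == (x <= b)) sample
```

The point of the connection is that one adjoint determines the other, which is what makes Galois connections a generic proof device rather than a case-by-case one.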
Relational Parametricity and Control
We study the equational theory of Parigot's second-order
λμ-calculus in connection with a call-by-name continuation-passing
style (CPS) translation into a fragment of the second-order λ-calculus.
It is observed that the relational parametricity on the target calculus induces
a natural notion of equivalence on the λμ-terms. On the other hand,
the unconstrained relational parametricity on the λμ-calculus turns
out to be inconsistent with this CPS semantics. Following these facts, we
propose to formulate the relational parametricity on the λμ-calculus
in a constrained way, which might be called "focal parametricity". Comment: 22 pages, for Logical Methods in Computer Science
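The shape of the CPS target can be illustrated in Haskell. These definitions (Cont, ret, bind, callCC) are standard continuation-passing combinators, not the paper's second-order calculus: under a call-by-name CPS translation, the control operator that λμ adds over the pure λ-calculus becomes an ordinary λ-term inhabiting Peirce's law.

```haskell
-- A computation of type a in CPS expects a continuation (a -> r).
type Cont r a = (a -> r) -> r

ret :: a -> Cont r a
ret x k = k x

bind :: Cont r a -> (a -> Cont r b) -> Cont r b
bind m f k = m (\x -> f x k)

-- call/cc: capture the current continuation k; invoking the captured
-- continuation discards the local one. Its type is Peirce's law in CPS.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f k = f (\x _ -> k x) k

-- Early exit via the captured continuation: the (\_ -> ret 0)
-- branch is never reached.
example :: Cont r Int
example = callCC (\exit -> bind (exit 42) (\_ -> ret 0))
```

Running `example id` yields 42: the captured continuation jumps out past the rest of the computation, which is the kind of control behavior the parametricity constraints of the paper have to respect.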
Relation partition algebra — mathematical aspects of uses and part-of relations
Managing complexity in software engineering involves modularisation: grouping design objects into modules, subsystems, etc. This gives rise to new design objects with new "use relations". The lower-level design objects relate to these in a "part-of" relation. But how do "use relations" at different levels of the "part-of hierarchy" relate? We formalise our knowledge on uses and part-of relations, looking for mathematical laws about relations and partitions. A central role is played by an operator /. For a "uses" relation r on a set of objects X and a partitioning into modules viewed as an equivalence θ, we form a relation r/θ on the set X/θ. We adopt an axiomatic point of view and investigate a variety of models, corresponding to different abstraction mechanisms and different ways of relating high- and low-level uses relations.
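For finite relations, the quotient operator / has a direct executable reading. The sketch below (illustrative encoding: relations as pair lists, partitions as lists of blocks) lifts a uses-relation on objects to a uses-relation on modules: block B1 uses block B2 exactly when some member of B1 uses some member of B2.

```haskell
import Data.List (nub)

type Obj       = Int
type Rel       = [(Obj, Obj)]   -- a finite "uses" relation r on X
type Partition = [[Obj]]        -- the equivalence theta, as blocks

-- The block of theta containing x (assumes theta covers x).
blockOf :: Partition -> Obj -> [Obj]
blockOf theta x = head [ b | b <- theta, x `elem` b ]

-- r / theta : the induced relation on X / theta. A pair of blocks is
-- related iff some pair of their members is related by r.
quotient :: Rel -> Partition -> [([Obj], [Obj])]
quotient r theta =
  nub [ (blockOf theta x, blockOf theta y) | (x, y) <- r ]
```

For example, with modules [[1,2],[3,4]] and uses-pairs [(1,3),(2,1)], the quotient relates module [1,2] to module [3,4] (an inter-module use) and to itself (an intra-module use) — exactly the high-level uses-relation the abstract asks about.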
Mathematical Logic: Proof Theory, Constructive Mathematics
[no abstract available]
Reasoning with Polarity in Categorial Type Logic
The research presented in this thesis follows the parsing as deduction approach to linguistics. We use the tools of Categorial Type Logic (CTL) to study the interface of natural language syntax and semantics. Our aim is to investigate the mathematical structure of CTL and explore the possibilities it offers for analyzing natural language structures and their interpretation.

The thesis is divided into three parts. Each of them has an introductory chapter. In Chapter 1, we introduce the background assumptions of the categorial approach in linguistics, and we sketch the developments that have led to the introduction of CTL. We discuss the motivation for using logical methods in linguistic analysis. In Chapter 3, we propose our view on the use of unary modalities as 'logical features'. In Chapter 5, we set up a general notion of grammatical composition taking into account the form and the meaning dimensions of linguistic expressions. We develop a logical theory of licensing and antilicensing relations that cross-cuts the form and meaning dimensions.

Throughout the thesis we focus attention on polarity. This term refers both to the polarity of the logical operators of CTL and to the polarity items one finds in natural language, which, furthermore, are closely connected to natural reasoning. Therefore, the title of this thesis, Reasoning with Polarity in Categorial Type Logic, is intended to express three meanings.

Firstly, we reason with the polarity of the logical operators of CTL and study their derivability patterns. In Chapter 2, we explore the algebraic principles that govern the behavior of the type-forming operations of the Lambek calculus. We extend the categorial vocabulary with downward entailing unary operations, obtaining the full toolkit that we use in the rest of the thesis. We employ unary operators to encode and compute monotonicity information (Chapter 4), to account for the different ways of scope taking of generalized quantifiers (Chapter 6), and to model licensing and antilicensing relations (Chapter 7).

Secondly, in Chapter 4, we model natural reasoning inferences drawn from structures suitable for negative polarity item occurrences. In particular, we describe a system of inference based on CTL. By decorating functional types with unary operators we encode the semantic distinction between upward and downward monotone functions. Moreover, we study the advantages of this encoding by exploring the contribution of monotone functions to the study of natural reasoning and to the analysis of the syntactic distribution of negative polarity items.

Thirdly, in Chapter 7, we study the distribution of polarity-sensitive expressions. We show how our theory of licensing and antilicensing relations successfully differentiates between negative polarity items, which are 'attracted' by their triggers, and positive polarity items, which are 'repelled' by them. We investigate these compatibility and incompatibility relations from a cross-linguistic perspective, and show how we reduce distributional differences between polarity-sensitive items in Dutch, Greek and Italian to differences in the lexical type assignments of these languages.
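The upward/downward monotonicity distinction at the heart of this natural-reasoning programme can be made concrete in a toy Haskell model. Everything here (the finite universe, the predicates someDogs and noDogs) is our own illustration, not the thesis's calculus: an upward monotone function preserves truth when its argument grows, a downward monotone one when its argument shrinks — which is why negative polarity items live in downward entailing contexts.

```haskell
type Set  = [Int]
type Pred = Set -> Bool

universe :: Set
universe = [1 .. 4]

powerset :: [a] -> [[a]]
powerset = foldr (\x acc -> acc ++ map (x :) acc) [[]]

subset :: Set -> Set -> Bool
subset a b = all (`elem` b) a

-- f is upward (downward) monotone iff enlarging (shrinking) its
-- argument preserves truth. Checked exhaustively over the finite
-- universe, so this is a test, not a proof.
upward, downward :: Pred -> Bool
upward f   = and [ not (f p) || f q | p <- powerset universe
                                    , q <- powerset universe
                                    , subset p q ]
downward f = and [ not (f p) || f q | p <- powerset universe
                                    , q <- powerset universe
                                    , subset q p ]

dogs :: Set
dogs = [1, 2]

-- "some dogs _" licenses upward inferences (run => move);
-- "no dogs _" licenses downward ones (move => run, reversed).
someDogs, noDogs :: Pred
someDogs p = any (`elem` p) dogs
noDogs   p = not (someDogs p)
```

In the thesis's terms, the decoration on a functional type records which of these two inference patterns the function supports.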
MacNeille Completion and Buchholz' Omega Rule for Parameter-Free Second Order Logics
Buchholz' Omega-rule is a way to give a syntactic, possibly ordinal-free proof of cut elimination for various subsystems of second order arithmetic. Our goal is to understand it from an algebraic point of view. Among many proofs of cut elimination for higher order logics, Maehara and Okada's algebraic proofs are of particular interest, since the essence of their arguments can be algebraically described as the (Dedekind-)MacNeille completion together with Girard's reducibility candidates. Interestingly, it turns out that the Omega-rule, formulated as a rule of logical inference, finds its algebraic foundation in the MacNeille completion.
In this paper, we consider a family of sequent calculi LIP = ⋃_{n ≥ -1} LIP_n for the parameter-free fragments of second order intuitionistic logic, which corresponds to the family ID_{<ω} = ⋃_{n < ω} ID_n of arithmetical theories of inductive definitions up to ω. In this setting, we observe a formal connection between the Omega-rule and the MacNeille completion, which leads to a way of interpreting second order quantifiers in a first order way in Heyting-valued semantics, called the Omega-interpretation. Based on this, we give a (partly) algebraic proof of cut elimination for LIP_n, in which quantification over reducibility candidates, which are genuinely second order, is replaced by the Omega-interpretation, which is essentially first order. As a consequence, our proof is locally formalizable in ID-theories.
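For a finite poset, the MacNeille completion is directly computable, which makes the construction behind the paper tangible. The toy below (divisibility on the divisors of 6; our choice, not the paper's formula algebras) computes the "cuts": the subsets fixed by taking lower bounds of upper bounds.

```haskell
import Data.List (nub, sort)

-- Divisibility order on {1,2,3,6}: x <= y iff x divides y.
carrier :: [Int]
carrier = [1, 2, 3, 6]

leq :: Int -> Int -> Bool
leq x y = y `mod` x == 0

-- Upper and lower bounds of a subset, within the carrier.
ub, lb :: [Int] -> [Int]
ub s = [ y | y <- carrier, all (\x -> leq x y) s ]
lb s = [ y | y <- carrier, all (\x -> leq y x) s ]

powerset :: [a] -> [[a]]
powerset = foldr (\x acc -> acc ++ map (x :) acc) [[]]

-- The MacNeille completion: all subsets closed under lb . ub.
-- These cuts form a complete lattice into which the poset embeds,
-- preserving all existing meets and joins.
macNeille :: [[Int]]
macNeille = nub [ sort (lb (ub s)) | s <- powerset carrier ]
```

Since the divisors of 6 already form a complete lattice, the completion recovers exactly the four principal down-sets [1], [1,2], [1,3] and [1,2,3,6], adding nothing new; on an incomplete poset the same recipe manufactures the missing suprema, which is the role it plays (in infinite form) in the cut-elimination argument.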
Gradual Liquid Type Inference
Liquid typing provides a decidable refinement inference mechanism that is
convenient but subject to two major issues: (1) inference is global and
requires top-level annotations, making it unsuitable for inference of modular
code components and prohibiting its applicability to library code, and (2)
inference failure results in obscure error messages. These difficulties
seriously hamper the migration of existing code to use refinements. This paper
shows that gradual liquid type inference---a novel combination of liquid
inference and gradual refinement types---addresses both issues. Gradual
refinement types, which support imprecise predicates that are optimistically
interpreted, can be used in argument positions to constrain liquid inference so
that the global inference process effectively infers modular specifications
usable for library components. Dually, when gradual refinements appear as the
result of inference, they signal an inconsistency in the use of static
refinements. Because liquid refinements are drawn from a finite set of
predicates, in gradual liquid type inference we can enumerate the safe
concretizations of each imprecise refinement, i.e. the static refinements that
justify why a program is gradually well-typed. This enumeration is useful for
static liquid type error explanation, since the safe concretizations exhibit
all the potential inconsistencies that lead to static type errors. We develop
the theory of gradual liquid type inference and explore its pragmatics in the
setting of Liquid Haskell. Comment: To appear at OOPSLA 2018
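The enumeration of safe concretizations described above can be sketched concretely. Everything in this snippet is hypothetical — the qualifier strings, the use-site obligation, and especially the finite-sample check standing in for the SMT-based entailment that Liquid Haskell actually uses — but it shows the shape of the idea: because the predicate pool is finite, the static refinements that justify a gradual program can simply be enumerated.

```haskell
-- A finite pool of atomic refinement predicates over an Int value v.
-- Hypothetical qualifiers; Liquid Haskell's real syntax differs.
predicates :: [(String, Int -> Bool)]
predicates =
  [ ("0 <= v",   (0 <=))
  , ("v /= 0",   (/= 0))
  , ("v <= 100", (<= 100))
  ]

powerset :: [a] -> [[a]]
powerset = foldr (\x acc -> acc ++ map (x :) acc) [[]]

-- A candidate concretization is a conjunction of chosen predicates.
holds :: [(String, Int -> Bool)] -> Int -> Bool
holds ps v = all (\(_, p) -> p v) ps

-- Safe concretizations of an imprecise refinement "?": the
-- conjunctions that entail the use-site obligation (here v /= 0,
-- say a divisor position). Entailment is approximated by an
-- exhaustive check over a finite sample instead of an SMT query.
sample :: [Int]
sample = [-3 .. 103]

safeConcretizations :: [[String]]
safeConcretizations =
  [ map fst ps
  | ps <- powerset predicates
  , all (\v -> not (holds ps v) || v /= 0) sample
  ]
```

Here exactly the conjunctions containing "v /= 0" survive — precisely the static refinements that explain why the gradually-typed program is safe, which is the information the paper exploits for error explanation.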