Programming and proving with classical types
The propositions-as-types correspondence is ordinarily presented
as linking the metatheory of typed λ-calculi and the proof theory
of intuitionistic logic. Griffin observed that this correspondence could
be extended to classical logic through the use of control operators. This
observation set off a flurry of further research, leading to the development
of Parigot’s λμ-calculus. In this work, we use the λμ-calculus as the
foundation for a system of proof terms for classical first-order logic. In
particular, we define an extended call-by-value λμ-calculus with a type
system in correspondence with full classical logic. We extend the language
with polymorphic types, add a host of data types in ‘direct style’, and
prove several metatheoretical properties. All of our proofs and definitions
are mechanised in Isabelle/HOL, and we automatically obtain an interpreter
for a system of proof terms cum programming language—called
μML—using Isabelle’s code generation mechanism. Atop our proof terms,
we build a prototype LCF-style interactive theorem prover—called μTP—
for classical first-order logic, capable of synthesising μML programs from
completed tactic-driven proofs. We present example closed μML programs
with classical tautologies for types, including some inexpressible as closed
programs in the original λμ-calculus, and some example tactic-driven
μTP proofs of classical tautologies
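Griffin's observation that control operators realise classical axioms can be sketched concretely. The following is a minimal illustration in Python, not the λμ-calculus itself: `callcc` is modelled with exceptions (the names `callcc`, `peirce`, and `_Escape` are ours), and the resulting `peirce` term inhabits Peirce's law `((A -> B) -> A) -> A`, a classical tautology with no closed proof in the intuitionistic fragment.

```python
# Sketch of Griffin's observation: control operators give computational
# content to classical tautologies. We model call/cc with exceptions.

class _Escape(Exception):
    def __init__(self, tag, value):
        self.tag, self.value = tag, value

def callcc(f):
    """Call f with the current continuation, modelled as an escape function."""
    tag = object()  # unique tag so nested uses don't capture each other
    def k(v):
        raise _Escape(tag, v)
    try:
        return f(k)
    except _Escape as e:
        if e.tag is tag:
            return e.value
        raise

def peirce(g):
    """Proof term for Peirce's law ((A -> B) -> A) -> A, via callcc."""
    return callcc(lambda k: g(k))

# If g ignores the continuation, peirce just returns g's result:
assert peirce(lambda k: 42) == 42
# If g invokes the continuation, that value escapes immediately:
assert peirce(lambda k: k(7)) == 7
```

The escape behaviour of `k` is what classical reasoning adds: the proof of `A` may be produced either directly or by "aborting" through the hypothesis `A -> B`.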
A Bi-Directional Refinement Algorithm for the Calculus of (Co)Inductive Constructions
The paper describes the refinement algorithm for the Calculus of
(Co)Inductive Constructions (CIC) implemented in the interactive theorem prover
Matita. The refinement algorithm is in charge of giving a meaning to the terms,
types and proof terms directly written by the user or generated by using
tactics, decision procedures or general automation. The terms are written in an
"external syntax" meant to be user-friendly, which allows the omission of
information, untyped binders, and a certain liberal use of user-defined
subtyping. The refiner modifies the terms to obtain related well-typed terms
in the internal syntax understood by the kernel of the ITP. In particular, it
acts as a type inference algorithm when all the binders are untyped. The
proposed algorithm is bi-directional: given a term in external syntax and a
type expected for the term, it propagates as much typing information as
possible towards the leaves of the term. Traditional mono-directional
algorithms, by contrast, proceed bottom-up, inferring the type of a
sub-term and comparing (unifying) it with the type expected by its context only
at the end. We propose some novel bi-directional rules for CIC that are
particularly effective. Among the benefits of bi-directionality we have better
error message reporting and better inference of dependent types. Moreover,
thanks to bi-directionality, the coercion system for sub-typing is more
effective and type inference generates simpler unification problems that are
more likely to be solved by the inherently incomplete higher-order unification
algorithms implemented. Finally, we introduce in the external syntax the notion
of a vector of placeholders, which makes it possible to omit an arbitrary
number of arguments at once. Vectors of placeholders allow a trivial
implementation of implicit arguments and greatly simplify the implementation of
primitive and simple tactics
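The check/infer split at the heart of bidirectional typing can be sketched in a few lines. This is a toy checker for the simply typed λ-calculus, not Matita's refiner (the term encoding and names are ours); it shows how an expected type is propagated down to an untyped binder, while applications drive inference bottom-up.

```python
# Minimal bidirectional typing sketch: `infer` synthesizes a type bottom-up,
# while `check` pushes an expected type towards the leaves, so lambda
# binders may stay untyped.
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a) | ('ann', t, ty)
# Types: 'int' | ('arrow', a, b)

def infer(ctx, t):
    tag = t[0]
    if tag == 'var':
        return ctx[t[1]]
    if tag == 'ann':                 # annotation switches to checking mode
        check(ctx, t[1], t[2])
        return t[2]
    if tag == 'app':
        fty = infer(ctx, t[1])
        assert fty[0] == 'arrow', "applying a non-function"
        check(ctx, t[2], fty[1])     # the argument is checked, not inferred
        return fty[2]
    raise TypeError(f"cannot infer type of {t}; add an annotation")

def check(ctx, t, ty):
    if t[0] == 'lam':                # untyped binder: the expected type supplies it
        assert ty[0] == 'arrow', "lambda must have an arrow type"
        check({**ctx, t[1]: ty[1]}, t[2], ty[2])
    else:                            # fall back to inference and compare
        assert infer(ctx, t) == ty, "type mismatch"

# \x. x checked against int -> int: the binder's type is propagated down.
check({}, ('lam', 'x', ('var', 'x')), ('arrow', 'int', 'int'))
ident = ('ann', ('lam', 'x', ('var', 'x')), ('arrow', 'int', 'int'))
assert infer({'n': 'int'}, ('app', ident, ('var', 'n'))) == 'int'
```

In a mono-directional algorithm the lambda case would fail without an annotation; here the expected arrow type reaches the binder first, which is exactly the propagation the abstract describes.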
A formalization of multi-tape Turing machines
We discuss the formalization, in the Matita Theorem Prover, of basic results on multi-tape Turing machines, up to the existence of a (certified) Universal Machine, and propose it as a natural benchmark for comparing different interactive provers and assessing the state of the art in the mechanization of formal reasoning. The work is meant to be a preliminary step towards the creation of a formal repository in Complexity Theory, and is a small piece in our long-term Reverse Complexity program, aiming at a comfortable, machine-independent axiomatization of the field
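To make the formalized objects concrete, here is a small multi-tape machine simulator; the encoding and names are ours, not the Matita development's. Each step reads one symbol per tape, and the transition function yields a new state plus, per tape, a symbol to write and a head move.

```python
# Multi-tape Turing machine simulator (illustrative sketch).
BLANK = '_'

def read_off(tape):
    """Render a sparse tape (dict position -> symbol) as a trimmed string."""
    if not tape:
        return ''
    lo, hi = min(tape), max(tape)
    return ''.join(tape.get(i, BLANK) for i in range(lo, hi + 1)).strip(BLANK)

def run(delta, state, tapes, accept, fuel=10_000):
    """delta: (state, tuple_of_read_symbols) -> (state, [(write, move)])."""
    tapes = [dict(enumerate(t)) for t in tapes]   # sparse tapes
    heads = [0] * len(tapes)
    for _ in range(fuel):
        if state in accept:
            return state, [read_off(t) for t in tapes]
        reads = tuple(t.get(h, BLANK) for t, h in zip(tapes, heads))
        state, actions = delta[(state, reads)]
        for i, (write, move) in enumerate(actions):
            tapes[i][heads[i]] = write
            heads[i] += {'L': -1, 'R': 1, 'S': 0}[move]
    raise RuntimeError("out of fuel")

# Two-tape copier: scan tape 0 left to right, echoing each symbol on tape 1.
delta = {
    ('q', ('1', BLANK)): ('q', [('1', 'R'), ('1', 'R')]),
    ('q', ('0', BLANK)): ('q', [('0', 'R'), ('0', 'R')]),
    ('q', (BLANK, BLANK)): ('halt', [(BLANK, 'S'), (BLANK, 'S')]),
}
state, out = run(delta, 'q', ['101', ''], accept={'halt'})
assert out[1] == '101'
```

A certified development must in addition prove such simulations correct and compose machines formally, which is where the mechanization effort lies.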
Foundational Extensible Corecursion
This paper presents a formalized framework for defining corecursive functions
safely in a total setting, based on corecursion up-to and relational
parametricity. The end product is a general corecursor that allows corecursive
(and even recursive) calls under well-behaved operations, including
constructors. Corecursive functions that are well behaved can be registered as
such, thereby increasing the corecursor's expressiveness. The metatheory is
formalized in the Isabelle proof assistant and forms the core of a prototype
tool. The corecursor is derived from first principles, without requiring new
axioms or extensions of the logic
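The idea of corecursive calls "under well-behaved operations" can be seen in miniature with lazy streams. The sketch below (thunks stand in for a lazy codatatype; the names are ours, and this is Python rather than Isabelle) defines the Fibonacci stream with a self-reference guarded by `zip_add`, a well-behaved operation in the sense that it consumes one element to produce one, which is what keeps the definition productive.

```python
# Corecursion "up-to" in miniature: a lazy stream type plus a
# well-behaved operation under which corecursive calls stay productive.

class Stream:
    def __init__(self, head, tail_thunk):
        self.head = head
        self._tail = tail_thunk        # evaluated on demand

    @property
    def tail(self):
        return self._tail()

    def take(self, n):
        out, s = [], self
        for _ in range(n):
            out.append(s.head)
            s = s.tail
        return out

def zip_add(s, t):
    """Pointwise sum: well behaved, as it emits one element per element read."""
    return Stream(s.head + t.head, lambda: zip_add(s.tail, t.tail))

# fib = 0 : 1 : zip_add(fib, tail fib) -- productive despite the self-reference.
def fib():
    return Stream(0, lambda: Stream(1, lambda: zip_add(fib(), fib().tail)))

assert fib().take(8) == [0, 1, 1, 2, 3, 5, 8, 13]
```

A foundational corecursor has to justify, inside the logic, exactly why guards like `zip_add` preserve productivity; registering further such operations is what extends its expressiveness.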
A Type Checker for a Logical Framework with Union and Intersection Types
We present the syntax, semantics, and typing rules of Bull, a prototype
theorem prover based on the Delta-Framework, i.e. a fully-typed lambda-calculus
decorated with union and intersection types, as described in previous papers by
the authors. Bull also implements a subtyping algorithm for the Type Theory Xi
of Barbanera-Dezani-de'Liguoro. Bull has a command-line interface where the
user can declare axioms and terms, perform computations, and use some basic
terminal-style features such as error pretty-printing, subexpression
highlighting, and file loading. Moreover, it can typecheck a proof or normalize
it. These terms can be incomplete, so the typechecking algorithm uses
unification to try to construct the missing subterms. Bull uses the syntax of
Berardi's Pure Type Systems to improve the compactness and the modularity of
the kernel. Abstract and concrete syntax are mostly aligned and similar to the
concrete syntax of Coq. Bull uses a higher-order unification algorithm for
terms, while typechecking and partial type inference are done by a
bidirectional refinement algorithm, similar to the one found in Matita and
Beluga. The refinement can be split into two parts: the essence refinement and
the typing refinement. Binders are implemented using commonly-used de Bruijn
indices. We have defined a concrete language syntax that will allow the user to
write Delta-terms. We have defined the reduction rules and an evaluator. We
have implemented from scratch a refiner which does partial typechecking and
type reconstruction. We have experimented with Bull on classical examples from
the intersection and union type literature, such as the ones formalized by
Pfenning with his Refinement Types in LF. We hope that this line of research
will prove useful for experimenting, in a proof-theoretical setting, with forms
of polymorphism alternative to Girard's parametric one
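Since the abstract mentions that binders are implemented with de Bruijn indices, a minimal sketch of that representation may help (the encoding is ours, not Bull's): variables are numbers counting enclosing lambdas, and substitution needs an index shift to avoid capture.

```python
# De Bruijn representation of lambda terms (illustrative sketch).
# Terms: ('var', n) | ('lam', body) | ('app', f, a)

def shift(t, d, cutoff=0):
    """Add d to every free variable (index >= cutoff) in t."""
    tag = t[0]
    if tag == 'var':
        return ('var', t[1] + d) if t[1] >= cutoff else t
    if tag == 'lam':
        return ('lam', shift(t[1], d, cutoff + 1))
    return ('app', shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Replace variable j in t by s, adjusting indices under binders."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == j else t
    if tag == 'lam':
        return ('lam', subst(t[1], j + 1, shift(s, 1)))
    return ('app', subst(t[1], j, s), subst(t[2], j, s))

def beta(app):
    """One beta step: (\\. body) arg -> body[0 := arg], un-shifting afterwards."""
    _, (_, body), arg = app
    return shift(subst(body, 0, shift(arg, 1)), -1)

# (\x. \y. x) z  -->  \y. z, where z is a variable free in the context.
K = ('lam', ('lam', ('var', 1)))
z = ('var', 0)
assert beta(('app', K, z)) == ('lam', ('var', 1))
```

The payoff of the representation is that α-equivalent terms are syntactically identical, which simplifies a kernel considerably; the cost is the shifting discipline shown above.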
Foundational extensible corecursion: a proof assistant perspective
This paper presents a formalized framework for defining corecursive functions safely in a total setting, based on corecursion up-to and relational parametricity. The end product is a general corecursor that allows corecursive (and even recursive) calls under “friendly” operations, including constructors. Friendly corecursive functions can be registered as such, thereby increasing the corecursor’s expressiveness. The metatheory is formalized in the Isabelle proof assistant and forms the core of a prototype tool. The corecursor is derived from first principles, without requiring new axioms or extensions of the logic
Foundational (co)datatypes and (co)recursion for higher-order logic
We describe a line of work that started in 2011 towards enriching Isabelle/HOL's language with coinductive datatypes, which allow infinite values, and with a more expressive notion of inductive datatype than previously supported by any system based on higher-order logic. These (co)datatypes are complemented by definitional principles for (co)recursive functions and reasoning principles for (co)induction. In contrast with other systems offering codatatypes, no additional axioms or logic extensions are necessary with our approach
A Type Checker for a Logical Framework with Union and Intersection Types
We present the syntax, semantics, typing, subtyping, unification, refinement, and REPL of Bull, a prototype theorem prover based on the ∆-Framework, i.e. a fully-typed Logical Framework à la Edinburgh LF decorated with union and intersection types, as described in previous papers by the authors. Bull also implements a subtyping algorithm for the Type Theory Ξ of Barbanera-Dezani-de'Liguoro. Bull has a command-line interface where the user can declare axioms and terms, perform computations, and use some basic terminal-style features such as error pretty-printing, subexpression highlighting, and file loading. Moreover, it can typecheck a proof or normalize it. These terms can be incomplete, so the typechecking algorithm uses unification to try to construct the missing subterms. Bull uses the syntax of Berardi's Pure Type Systems to improve the compactness and the modularity of the kernel. Abstract and concrete syntax are mostly aligned and similar to the concrete syntax of Coq. Bull uses a higher-order unification algorithm for terms, while typechecking and partial type inference are done by a bidirectional refinement algorithm, similar to the one found in Matita and Beluga. The refinement can be split into two parts: the essence refinement and the typing refinement. Binders are implemented using commonly-used de Bruijn indices. We have defined a concrete language syntax that will allow the user to write ∆-terms. We have defined the reduction rules and an evaluator. We have implemented from scratch a refiner which does partial typechecking and type reconstruction. We have experimented with Bull on classical examples from the intersection and union type literature, such as the ones formalized by Pfenning with his Refinement Types in LF and by Pierce. We hope that this line of research will prove useful for experimenting, in a proof-theoretical setting, with forms of polymorphism alternative to Girard's parametric one