A Bi-Directional Refinement Algorithm for the Calculus of (Co)Inductive Constructions
The paper describes the refinement algorithm for the Calculus of
(Co)Inductive Constructions (CIC) implemented in the interactive theorem prover
Matita. The refinement algorithm is in charge of giving meaning to the terms, types, and proof terms written directly by the user or generated by tactics, decision procedures, or general automation. The terms are written in a user-friendly "external syntax" that allows the omission of information, untyped binders, and a liberal use of user-defined sub-typing. The refiner modifies the terms to obtain related well-typed terms
in the internal syntax understood by the kernel of the ITP. In particular, it
acts as a type inference algorithm when all the binders are untyped. The
proposed algorithm is bi-directional: given a term in external syntax and a
type expected for the term, it propagates as much typing information as
possible towards the leaves of the term. Traditional mono-directional
algorithms, instead, proceed in a bottom-up way by inferring the type of a
sub-term and comparing (unifying) it with the type expected by its context only
at the end. We propose some novel bi-directional rules for CIC that are
particularly effective. Among the benefits of bi-directionality are better error reporting and better inference of dependent types. Moreover, thanks to bi-directionality, the coercion system for sub-typing is more effective, and type inference generates simpler unification problems that are more likely to be solved by the inherently incomplete higher-order unification algorithms implemented. Finally, we introduce in the external syntax the notion of a vector of placeholders, which makes it possible to omit an arbitrary number of arguments at once. Vectors of placeholders allow a trivial implementation of implicit arguments and greatly simplify the implementation of primitive and simple tactics.
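The general idea of bi-directionality can be sketched with a toy type checker. The code below is not Matita's refiner for CIC; it is a minimal illustration, for the simply typed lambda calculus, of the two modes the abstract contrasts: a `check` mode that propagates an expected type toward the leaves (so untyped binders need no annotation) and an `infer` mode that synthesizes types bottom-up. All names here are illustrative.

```python
# Minimal bidirectional type checker for the simply typed lambda calculus.
# `check` pushes the expected type toward the leaves of the term, while
# `infer` works bottom-up, as traditional mono-directional algorithms do.
from dataclasses import dataclass

@dataclass(frozen=True)
class Arrow:              # function type dom -> cod
    dom: object
    cod: object

INT = "Int"               # a single base type

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:                # untyped binder: its type comes from the context
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

@dataclass(frozen=True)
class Lit:
    value: int

def check(ctx, term, ty):
    """Checking mode: propagate the expected type downward."""
    if isinstance(term, Lam):
        if not isinstance(ty, Arrow):
            return False
        # The binder receives the domain of the expected arrow type.
        return check({**ctx, term.param: ty.dom}, term.body, ty.cod)
    return infer(ctx, term) == ty

def infer(ctx, term):
    """Inference mode: synthesize the type bottom-up."""
    if isinstance(term, Var):
        return ctx.get(term.name)
    if isinstance(term, Lit):
        return INT
    if isinstance(term, App):
        fn_ty = infer(ctx, term.fn)
        if isinstance(fn_ty, Arrow) and check(ctx, term.arg, fn_ty.dom):
            return fn_ty.cod
        return None
    return None  # an unannotated lambda cannot be inferred bottom-up

# The identity function checks against Int -> Int with no annotation:
print(check({}, Lam("x", Var("x")), Arrow(INT, INT)))  # True
```

Note how the unannotated lambda is accepted in checking mode but rejected in inference mode: this asymmetry is exactly what makes the top-down propagation of typing information valuable.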
Higher-Order Termination: from Kruskal to Computability
Termination is a major question in both logic and computer science. In logic,
termination is at the heart of proof theory where it is usually called strong
normalization (of cut elimination). In computer science, termination has always
been an important issue for showing programs correct. In the early days of
logic, strong normalization was usually shown by assigning ordinals to
expressions in such a way that eliminating a cut would yield an expression with
a smaller ordinal. In the early days of verification, computer scientists used
similar ideas, interpreting the arguments of a program call by a natural
number, such as their size. Showing that the size of the arguments decreases at each recursive call gives a termination proof of the program, which is, however, rather weak, since it can only yield quite small ordinals. In the sixties, Tait
invented a new method for showing cut elimination of natural deduction, based
on a predicate over the set of terms such that membership of an expression in the predicate implies the strong normalization property for that expression. Because the predicate is defined by induction on types, or even as a fixpoint, this method can yield much larger ordinals. Later generalized by Girard under the name of reducibility or computability candidates, it has proved very effective in proving the strong normalization property of typed lambda-calculi.
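The size-measure idea mentioned above can be made concrete with a toy example, not taken from the paper: each recursive call is made on an argument whose natural-number measure is strictly smaller, so the recursion must terminate.

```python
# Termination via a decreasing size measure: every recursive call of
# `total_length` is on `tail`, and len(tail) == len(xs) - 1, so the
# natural-number measure len(xs) strictly decreases and cannot do so
# forever. This is the "interpret arguments by their size" argument.
def total_length(xs):
    if not xs:
        return 0
    _head, *tail = xs     # measure decreases: len(tail) < len(xs)
    return 1 + total_length(tail)

print(total_length([10, 20, 30]))  # 3
```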
Reasoning about modular datatypes with Mendler induction
In functional programming, datatypes à la carte provide a convenient modular representation of recursive datatypes, based on their initial algebra semantics. Unfortunately, it is highly challenging to implement this technique in proof assistants that are based on type theory, like Coq. The reason is that
it involves type definitions, such as those of type-level fixpoint operators,
that are not strictly positive. The known work-around of impredicative
encodings is problematic, insofar as it impedes conventional inductive
reasoning. Weak induction principles can be used instead, but they considerably
complicate proofs.
This paper proposes a novel and simpler technique to reason inductively about
impredicative encodings, based on Mendler-style induction. This technique
involves dispensing with dependent induction, ensuring that datatypes can be lifted to predicates, and relying on relational formulations. A case study on proving subject reduction for structural operational semantics illustrates that the approach enables modular proofs, and that these proofs are essentially similar to conventional ones.

Comment: In Proceedings FICS 2015, arXiv:1509.0282
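The distinguishing feature of the Mendler style is that the algebra never inspects recursive results directly: it receives the recursive call as an abstract function, which is what lets the technique avoid dispensing with strict positivity checks. A hypothetical sketch, with illustrative names not taken from the paper:

```python
# Mendler-style fold: the algebra `alg` is handed an abstract `recurse`
# function instead of pre-computed recursive results, so it cannot
# depend on the concrete shape of recursive occurrences.
def mendler_fold(alg, node):
    return alg(lambda child: mendler_fold(alg, child), child if False else node)

# Toy stand-in for a modular datatype: expression trees as tagged
# tuples ("lit", n) and ("add", left, right).
def eval_alg(recurse, node):
    tag = node[0]
    if tag == "lit":
        return node[1]
    if tag == "add":
        return recurse(node[1]) + recurse(node[2])
    raise ValueError(f"unknown tag: {tag!r}")

expr = ("add", ("lit", 1), ("add", ("lit", 2), ("lit", 3)))
print(mendler_fold(eval_alg, expr))  # 6
```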
Consistency and Completeness of Rewriting in the Calculus of Constructions
Adding rewriting to a proof assistant based on the Curry-Howard isomorphism,
such as Coq, may greatly improve the usability of the tool. Unfortunately, adding an arbitrary set of rewrite rules may render the underlying formal system undecidable and inconsistent. While ways to ensure termination and confluence, and hence decidability of type-checking, have already been studied to some extent, logical consistency has received little attention so far. In this paper we
show that consistency is a consequence of canonicity, which in turn follows
from the assumption that all functions defined by rewrite rules are complete.
We provide a sound and terminating, but necessarily incomplete algorithm to
verify this property. The algorithm accepts all definitions that follow
the dependent pattern-matching schemes presented by Coquand and studied by McBride in his PhD thesis. It also accepts many definitions by rewriting, containing rules which depart from standard pattern matching.

Comment: 20 pages
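The completeness property the algorithm verifies can be illustrated with a deliberately simplified coverage check, far weaker than the one in the paper: a first-order definition is complete when its left-hand-side patterns cover every constructor of the argument's datatype. The function and names below are hypothetical, and this sketch ignores dependent and nested patterns entirely.

```python
# Toy coverage check for first-order rewrite-rule definitions: a
# definition is "complete" if the top-level constructors of its rules'
# left-hand sides cover every constructor of the matched datatype.
def is_complete(constructors, rule_patterns):
    """constructors: set of constructor names of the datatype.
    rule_patterns: top-level constructor (or "_" wildcard) of each rule."""
    if "_" in rule_patterns:          # a catch-all rule covers everything
        return True
    return constructors <= set(rule_patterns)

# Addition defined by rules matching on zero and successor: complete.
print(is_complete({"zero", "succ"}, ["zero", "succ"]))  # True
# A definition with a rule only for zero leaves succ undefined.
print(is_complete({"zero", "succ"}, ["zero"]))          # False
```

An incomplete definition like the second one is exactly what breaks canonicity: a closed term such as the function applied to a successor would be stuck, neither a value nor reducible.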