9 research outputs found
Mechanizing Refinement Types (extended)
Practical checkers based on refinement types use the combination of implicit
semantic sub-typing and parametric polymorphism to simplify the specification
and automate the verification of sophisticated properties of programs. However,
a formal meta-theoretic accounting of the soundness of refinement type systems
using this combination has proved elusive. We present \lambda_RF, a core
refinement calculus that combines semantic sub-typing and parametric
polymorphism. We develop a meta-theory for this calculus and prove soundness of
the type system. Finally, we give a full mechanization of our meta-theory using
the refinement-type-based LiquidHaskell as a proof checker, showing how
refinements can be used for mechanization. Comment: 32 pages, under review
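The combination the abstract describes can be sketched as an executable analogy (not the \lambda_RF calculus itself): a refinement type is a base type paired with a predicate, and "semantic sub-typing" is approximated by checking predicate implication over sample values. All names here (`Refined`, `is_subtype_on`, `Nat`) are invented for illustration:

```python
# Illustrative runtime analogy of refinement types: a base type plus a
# predicate on values. Real checkers discharge the implication with SMT;
# here we approximate it by testing on a finite sample.
from typing import Callable

class Refined:
    """A refinement type {v : base | pred v}."""
    def __init__(self, base: type, pred: Callable[[object], bool]):
        self.base = base
        self.pred = pred

    def check(self, v) -> bool:
        # "Semantic" membership: v inhabits the type iff it has the base
        # type and satisfies the refinement predicate.
        return isinstance(v, self.base) and self.pred(v)

def is_subtype_on(sub: "Refined", sup: "Refined", samples) -> bool:
    """Approximate semantic sub-typing: every sampled value of `sub`
    must also inhabit `sup`."""
    return all(sup.check(v) for v in samples if sub.check(v))

Int = Refined(int, lambda v: True)
Nat = Refined(int, lambda v: v >= 0)   # {v:Int | v >= 0}
```

On this reading, `Nat` is a subtype of `Int` because the predicate implication holds, not because of any syntactic relation between the two declarations.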
Verifiable certificates for predicate subtyping
Adding predicate subtyping to higher-order logic yields a very expressive language in which type-checking is undecidable, making the definition of a system of verifiable certificates challenging. This work presents a solution to this issue with a minimal formalization of predicate subtyping, named PVS-Core, together with a system of verifiable certificates for PVS-Core, named PVS-Cert. PVS-Cert is based on the introduction of proof terms and explicit coercions. Its design is similar to that of PTSs with dependent pairs, with the exception of the definition of conversion, which is based on a specific notion of reduction →β*, corresponding to β-reduction combined with the erasure of coercions. The use of this reduction instead of the more standard reduction →βσ makes it possible to establish a simple correspondence between PVS-Core and PVS-Cert. On the other hand, a type-checking algorithm is designed for PVS-Cert, built on proofs of type preservation of →βσ and strong normalization of both →βσ and →β*. Using these results, PVS-Cert judgements serve as verifiable certificates for predicate subtyping. In addition, the reduction →βσ is used to define a cut elimination procedure adapted to predicate subtyping. Its use in studying the properties of predicate subtyping is illustrated with a proof of consistency.
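The value-plus-evidence idea behind PVS-Cert can be sketched informally: a term of a predicate subtype {x:A | P x} pairs the underlying value with a record of the discharged proof obligation, and coercion erasure recovers the plain term. This is a loose analogy with invented names (`Coerced`, `coerce`, `erase`), not the PVS-Cert calculus:

```python
# Sketch: explicit coercions into a predicate subtype, and their erasure.
from dataclasses import dataclass

@dataclass
class Coerced:
    """A term of a predicate subtype: the value plus a stand-in for the
    proof term certifying the predicate."""
    value: object
    evidence: str

def coerce(v, pred, label: str) -> Coerced:
    """Explicit coercion into the subtype; the proof obligation is the
    predicate check, and failure means no certificate exists."""
    if not pred(v):
        raise TypeError(f"{v!r} does not satisfy {label}")
    return Coerced(v, label)

def erase(t):
    """Coercion erasure (the direction →β* captures): drop the evidence
    to recover the underlying term."""
    return t.value if isinstance(t, Coerced) else t
```

The point of the analogy is that erasure is purely structural: it never re-checks the predicate, mirroring how →β* relates certificate terms back to PVS-Core terms.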
Revisiting Occurrence Typing
We revisit occurrence typing, a technique to refine the type of variables occurring in type-cases and, thus, capture some programming patterns used in untyped languages. Although occurrence typing was tied from its inception to set-theoretic types (union types, in particular), it never fully exploited the capabilities of these types. Here we show how, by using set-theoretic types, it is possible to develop a general typing framework that encompasses and generalizes several aspects of current occurrence typing proposals and that can be applied to tackle other problems, such as the inference of intersection types for functions and the optimization of the compilation of gradually typed languages.
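Occurrence typing is easy to see in miniature: after a type-case, the occurrence of a variable in each branch is narrowed to one member of the union. Static checkers for Python (e.g. mypy) perform exactly this kind of narrowing on `isinstance` tests:

```python
# Occurrence typing in miniature: x has the union type int | str, and
# the isinstance test narrows its type in each branch.
from typing import Union

def size(x: Union[int, str]) -> int:
    if isinstance(x, int):
        return x          # here x is narrowed to int
    else:
        return len(x)     # here x is narrowed to str (the union minus int)
```

The set-theoretic view in the abstract generalizes this: the else-branch type is the union type minus the tested type, which requires negation and union types to express precisely.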
Refinement Reflection: Complete Verification with SMT
We introduce Refinement Reflection, a new framework for building SMT-based
deductive verifiers. The key idea is to reflect the code implementing a
user-defined function into the function's (output) refinement type. As a
consequence, at uses of the function, the function definition is instantiated
in the SMT logic in a precise fashion that permits decidable verification.
Reflection allows the user to write equational proofs of programs just by
writing other programs using pattern-matching and recursion to perform
case-splitting and induction. Thus, via the propositions-as-types principle, we
show that reflection permits the specification of arbitrary functional
correctness properties. Finally, we introduce a proof-search algorithm called
Proof by Logical Evaluation that uses techniques from model checking and
abstract interpretation to completely automate equational reasoning. We have
implemented reflection in Liquid Haskell and used it to verify that the widely
used instances of the Monoid, Applicative, Functor, and Monad typeclasses
actually satisfy key algebraic laws required to make their clients safe, and have
used reflection to build the first library that actually verifies assumptions
about associativity and ordering that are crucial for safe deterministic
parallelism. Comment: 29 pages plus appendices, to appear in POPL 2018. arXiv
admin note: text overlap with arXiv:1610.0464
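The "equational proofs as programs" idea can be sketched outside Liquid Haskell: an associativity proof for list concatenation becomes a recursive function whose recursive call plays the role of the inductive hypothesis. In Liquid Haskell the equalities would be discharged statically by SMT; in this hedged Python sketch (the name `append_assoc` is invented) each step is simply executed:

```python
# Sketch of a reflected equational proof: associativity of list
# concatenation, "proved" by induction on the first list. The recursion
# performs the case-split and induction the abstract describes.
def append_assoc(xs: list, ys: list, zs: list) -> bool:
    """True iff (xs ++ ys) ++ zs == xs ++ (ys ++ zs), checked by
    structural induction on xs."""
    if not xs:
        # Base case: both sides reduce to ys ++ zs.
        return ([] + ys) + zs == [] + (ys + zs)
    head, tail = xs[0], xs[1:]
    # Inductive step: check the equation at this case, then appeal to
    # the inductive hypothesis via the recursive call.
    step = (([head] + tail) + ys) + zs == ([head] + tail) + (ys + zs)
    return step and append_assoc(tail, ys, zs)
```

By the propositions-as-types reading sketched in the abstract, the verified analogue of this function is a proof that concatenation is associative for all inputs, not just the tested ones.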