Type Targeted Testing
We present a new technique called type targeted testing, which translates
precise refinement types into comprehensive test-suites. The key insight behind
our approach is that through the lens of SMT solvers, refinement types can also
be viewed as a high-level, declarative, test generation technique, wherein
types are converted to SMT queries whose models can be decoded into concrete
program inputs. Our approach enables the systematic and exhaustive testing of
implementations from high-level declarative specifications, and furthermore,
provides a gradual path from testing to full verification. We have implemented
our approach as a Haskell testing tool called TARGET, and present an evaluation
that shows how TARGET can be used to test a wide variety of properties and how
it compares against state-of-the-art testing approaches.
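The core idea, refinement types as a declarative test-generation language, can be illustrated without an SMT solver. The sketch below is a toy analogue (not TARGET itself): it exhaustively enumerates inputs up to a size bound, keeps those satisfying an input refinement, and checks an output refinement on each result, just as TARGET decodes SMT models into concrete inputs. The refinements shown (y > 0, Euclidean division) are illustrative, not from the paper.

```python
from itertools import product

def div_mod(x, y):
    """Function under test: built-in quotient/remainder."""
    return divmod(x, y)

# Input refinement, roughly {y:Int | y > 0} in refinement-type notation.
def precondition(x, y):
    return y > 0

# Output refinement: x == q*y + r with 0 <= r < y.
def postcondition(x, y, q, r):
    return x == q * y + r and 0 <= r < y

def exhaustive_test(fn, depth=5):
    """Enumerate all inputs up to a size bound, keep those satisfying the
    input refinement, and check the output refinement on each result."""
    domain = range(-depth, depth + 1)
    failures = []
    for x, y in product(domain, repeat=2):
        if not precondition(x, y):
            continue  # analogue of an unsatisfiable SMT query: no test case
        q, r = fn(x, y)
        if not postcondition(x, y, q, r):
            failures.append((x, y))
    return failures

print(exhaustive_test(div_mod))  # → []
```

In TARGET the enumeration is replaced by iterated SMT queries, which scales to structured data (lists, trees) where naive enumeration would be impractical.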
REST: Integrating Term Rewriting with Program Verification (Extended Version)
We introduce REST, a novel term rewriting technique for theorem proving that uses online termination checking and can be integrated with existing program verifiers. REST enables flexible but terminating term rewriting for theorem proving by: (1) exploiting newly-introduced term orderings that are more permissive than standard rewrite simplification orderings; (2) dynamically and iteratively selecting orderings based on the path of rewrites taken so far; and (3) integrating external oracles that allow steps that cannot be justified with rewrite rules. Our REST approach is designed around an easily implementable core algorithm, parameterizable by choices of term orderings and their implementations; in this way our approach can be easily integrated into existing tools. We implemented REST as a Haskell library and incorporated it into Liquid Haskell's evaluation strategy, extending Liquid Haskell with rewriting rules. We evaluated our REST implementation by comparing it against both existing rewriting techniques and E-matching, and by showing that it can be used to supplant manual lemma application in many existing Liquid Haskell proofs.
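The interplay of rewriting and online termination checking can be sketched in a few lines. Below is a toy analogue, not REST's algorithm: terms are nested tuples, rules are partial functions, and a rewrite step at the root is accepted only if it strictly decreases a size measure. REST's actual orderings are far more permissive and are selected dynamically; the single fixed measure here is a simplifying assumption.

```python
def size(t):
    """Number of nodes in a term represented as a nested tuple."""
    return 1 + sum(size(c) for c in t[1:]) if isinstance(t, tuple) else 1

# A rewrite rule as a partial function: plus(x, zero) -> x.
def plus_zero(t):
    if isinstance(t, tuple) and t[0] == 'plus' and len(t) == 3 \
            and t[2] == ('zero',):
        return t[1]
    return None

def rewrite(term, rules):
    """Apply rules at the root until none fires; each step must strictly
    decrease the size measure -- the online termination check."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            out = rule(term)
            if out is not None and size(out) < size(term):
                term, changed = out, True
                break
    return term

# plus(plus(s(zero), zero), zero) rewrites to s(zero) in two accepted steps.
t = ('plus', ('plus', ('s', ('zero',)), ('zero',)), ('zero',))
print(rewrite(t, [plus_zero]))  # → ('s', ('zero',))
```

Because every accepted step decreases a well-founded measure, the loop terminates regardless of the rule set, which is the property REST guarantees with its (much richer) family of term orderings.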
Predicate Abstraction for Linked Data Structures
We present Alias Refinement Types (ART), a new approach to the verification
of correctness properties of linked data structures. While there are many
techniques for checking that a heap-manipulating program adheres to its
specification, they often require that the programmer annotate the behavior of
each procedure, for example, in the form of loop invariants and pre- and
post-conditions. While predicate abstraction would be an attractive abstract
domain for performing invariant inference, existing techniques are not able to reason
about the heap with enough precision to verify functional properties of data
structure manipulating programs. In this paper, we propose a technique that
lifts predicate abstraction to the heap by factoring the analysis of data
structures into two orthogonal components: (1) Alias Types, which reason about
the physical shape of heap structures, and (2) Refinement Types, which use
simple predicates from an SMT decidable theory to capture the logical or
semantic properties of the structures. We prove ART sound by translating types
into separation logic assertions, thus translating typing derivations in ART
into separation logic proofs. We evaluate ART by implementing a tool that
performs type inference for an imperative language, and empirically show, using
a suite of data-structure benchmarks, that ART requires only 21% of the
annotations needed by other state-of-the-art verification techniques.
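The factoring into two orthogonal components can be made concrete with a small runtime analogue (ART itself is a static type system; this sketch only mirrors its separation of concerns). The shape check below plays the role of the alias-type component, and the sortedness predicate plays the role of the refinement component; the linked-list class and property names are illustrative.

```python
class Node:
    """A heap cell of a singly linked list."""
    def __init__(self, val, nxt=None):
        self.val, self.nxt = val, nxt

def is_acyclic(head):
    """Shape component (alias-type analogue): the heap reachable from
    `head` forms an acyclic singly linked list."""
    seen = set()
    while head is not None:
        if id(head) in seen:
            return False
        seen.add(id(head))
        head = head.nxt
    return True

def is_sorted(head):
    """Refinement component: a simple predicate from a decidable theory
    (linear arithmetic over adjacent values)."""
    while head is not None and head.nxt is not None:
        if head.val > head.nxt.val:
            return False
        head = head.nxt
    return True

lst = Node(1, Node(2, Node(3)))
print(is_acyclic(lst), is_sorted(lst))  # → True True
```

Keeping the two checks independent is the point: the shape analysis never mentions values, and the refinement predicate never mentions pointers, which is what lets ART discharge the latter with an off-the-shelf SMT solver.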
Proving Type Class Laws for Haskell
Type classes in Haskell are used to implement ad-hoc polymorphism, i.e. a way
to ensure both to the programmer and the compiler that a set of functions are
defined for a specific data type. All instances of such type classes are
expected to behave in a certain way and satisfy laws associated with the
respective class. These are however typically just stated in comments and as
such, there is no real way to enforce that they hold. In this paper we describe
a system which allows the user to write down type class laws which are then
automatically instantiated and sent to an inductive theorem prover when
declaring a new instance of a type class.
Comment: Presented at the Symposium for Trends in Functional Programming, 201
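The paper discharges instantiated laws with an inductive theorem prover; as a lightweight executable analogue, the sketch below checks the monoid laws for one concrete instance (lists under concatenation) by exhaustive testing over a finite sample. The harness and its name are illustrative, not the paper's system.

```python
def check_monoid_laws(mempty, mappend, samples):
    """Check the three monoid laws on every combination of samples.
    Raises AssertionError on a violation; returns True otherwise."""
    for x in samples:
        assert mappend(mempty, x) == x   # left identity
        assert mappend(x, mempty) == x   # right identity
    for x in samples:
        for y in samples:
            for z in samples:
                # associativity
                assert mappend(mappend(x, y), z) == mappend(x, mappend(y, z))
    return True

samples = [[], [1], [1, 2], [3]]
print(check_monoid_laws([], lambda a, b: a + b, samples))  # → True
```

Testing over samples can only refute a law; the value of sending the instantiated laws to an inductive prover, as the paper does, is that it can establish them for all inputs.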
Verifying Higher-Order Functions with Tree Automata
This paper describes a fully automatic technique for verifying safety properties of higher-order functional programs. Tree automata are used to represent sets of reachable states and functional programs are modeled using term rewriting systems. From a tree automaton representing the initial state, a completion algorithm iteratively computes an automaton which over-approximates the output set of the program to verify. We identify a subclass of higher-order functional programs for which the completion is guaranteed to terminate. Precision and termination are obtained conjointly by a careful choice of equations between terms. The verification objective can be used to generate sets of equations automatically. Our experiments show that tree automata are sufficiently expressive to prove intricate safety properties and sufficiently simple for the verification result to be certified in Coq.
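The central data structure, a bottom-up tree automaton representing a set of terms, is small enough to sketch directly. Below, transitions map a symbol and its children's states to a state; running the automaton over a term yields its state. The automaton shown (recognising lists of even length) is an illustrative example, not one of the paper's benchmarks, and completion itself is not modeled here.

```python
def run(term, transitions):
    """Evaluate a term bottom-up; return its state, or None if stuck."""
    sym, children = term[0], term[1:]
    states = tuple(run(c, transitions) for c in children)
    return transitions.get((sym, states))

# Transitions: nil -> q_even, a -> q_a,
# cons(q_a, q_even) -> q_odd, cons(q_a, q_odd) -> q_even.
trans = {
    ('nil', ()): 'q_even',
    ('a', ()): 'q_a',
    ('cons', ('q_a', 'q_even')): 'q_odd',
    ('cons', ('q_a', 'q_odd')): 'q_even',
}

t = ('cons', ('a',), ('cons', ('a',), ('nil',)))  # the two-element list
print(run(t, trans))  # → q_even
```

Completion, in the paper's setting, repeatedly adds transitions so that every term reachable by rewriting from an accepted term is also accepted, yielding the over-approximation used for the safety proof.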
JayHorn : A framework for verifying Java programs
Building a competitive program verifier is becoming cheaper. On the front-end side, openly available compiler infrastructure and optimization frameworks take care of hairy problems such as alias analysis, and break down the subtleties of modern languages into a handful of simple instructions that need to be handled. On the back-end side, theorem provers are starting to provide full-fledged model checking algorithms, such as PDR, that take care of looping control-flow. In this spirit, we developed JayHorn, a verification framework for Java with the goal of having as few moving parts as possible. Most steps of the translation from Java into logic are implemented as bytecode transformations, with the implication that their soundness can be tested easily. From the transformed bytecode, we generate a set of constrained Horn clauses that are verified using state-of-the-art Horn solvers. We report on our implementation experience and evaluate JayHorn on benchmarks.
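To see what a constrained-Horn-clause encoding of a program looks like, consider a loop summing 0..N-1 with the safety property that the sum stays non-negative. The sketch below writes the clauses as comments and, since the state space is finite for a fixed N, checks them by computing the least model directly; real Horn solvers (as used by JayHorn) instead infer a symbolic invariant. The program and property are illustrative, not JayHorn benchmarks.

```python
# CHC encoding of:  i = 0; s = 0; while i < N: s += i; i += 1
#   Inv(0, 0)                              (entry clause)
#   Inv(i, s) & i < N  => Inv(i+1, s+i)    (loop-body clause)
#   Inv(i, s) & i >= N => s >= 0           (safety clause)
N = 5

def solve():
    """Compute the least model of the entry and body clauses by fixpoint
    iteration, then check the safety clause on every reachable state."""
    inv = {(0, 0)}
    changed = True
    while changed:
        changed = False
        for (i, s) in list(inv):
            if i < N and (i + 1, s + i) not in inv:
                inv.add((i + 1, s + i))
                changed = True
    return all(s >= 0 for (i, s) in inv if i >= N)

print(solve())  # → True
```

A Horn solver generalizes this: rather than enumerating states, it searches for an interpretation of Inv (e.g. `0 <= i and 0 <= s`) that makes all three clauses valid for every N.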