21 research outputs found
A Deductive Verification Infrastructure for Probabilistic Programs
This paper presents a quantitative program verification infrastructure for discrete probabilistic programs. Our infrastructure can be viewed as the probabilistic analogue of Boogie: its central components are an intermediate verification language (IVL) together with a real-valued logic. Our IVL provides a programming-language style for expressing verification conditions whose validity implies the correctness of a program under investigation. As our focus is on verifying quantitative properties such as bounds on expected outcomes, expected run-times, or termination probabilities, off-the-shelf IVLs based on Boolean first-order logic do not suffice. Instead, a paradigm shift from the standard Boolean to a real-valued domain is required.
Our IVL features quantitative generalizations of standard verification constructs such as assume- and assert-statements. Verification conditions are generated by a weakest-precondition-style semantics, based on our real-valued logic. We show that our verification infrastructure supports natural encodings of numerous verification techniques from the literature. With our SMT-based implementation, we automatically verify a variety of benchmarks. To the best of our knowledge, this establishes the first deductive verification infrastructure for expectation-based reasoning about probabilistic programs.
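The expectation-based reasoning described above can be illustrated with a toy calculation. The sketch below is purely illustrative and is not the paper's IVL: it implements a weakest-preexpectation transformer for a four-construct probabilistic language, where an expectation is a function from program states to non-negative reals, and the probabilistic choice rule weights the two branch preexpectations by p and 1 - p.

```python
# Toy weakest-preexpectation transformer for a tiny probabilistic
# language: skip, assignment, sequencing, and binary probabilistic
# choice {C1} [p] {C2}. Expectations are functions from states
# (dicts) to non-negative reals. Illustrative sketch only; not the
# infrastructure from the paper.

def wp(cmd, post):
    kind = cmd[0]
    if kind == "skip":
        return post
    if kind == "assign":                  # ("assign", var, expr_fn)
        _, var, expr = cmd
        return lambda s: post({**s, var: expr(s)})
    if kind == "seq":                     # ("seq", c1, c2)
        _, c1, c2 = cmd
        return wp(c1, wp(c2, post))
    if kind == "choice":                  # ("choice", p, c1, c2)
        _, p, c1, c2 = cmd
        f1, f2 = wp(c1, post), wp(c2, post)
        return lambda s: p * f1(s) + (1 - p) * f2(s)
    raise ValueError(kind)

# Program: flip a fair coin twice, counting heads in x.
flip = ("choice", 0.5,
        ("assign", "x", lambda s: s["x"] + 1),
        ("skip",))
prog = ("seq", flip, flip)

# Expected value of x after the program, started from x = 0.
pre = wp(prog, lambda s: s["x"])
print(pre({"x": 0}))   # 1.0
```

Working backwards, wp of one flip with respect to x is x + 0.5, and wp of the second flip with respect to that is x + 1, so the expected number of heads from x = 0 is 1.0, as linearity of expectation predicts.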
Predicting SMT solver performance for software verification
The approach Why3 takes to interfacing with a wide variety of interactive
and automatic theorem provers works well: it is designed to overcome
limitations on what can be proved by a system which relies on a single
tightly-integrated solver. In common with other systems, however, the degree
to which proof obligations (or “goals”) are proved depends as much on
the SMT solver as the properties of the goal itself. In this work, we present a
method to use syntactic analysis to characterise goals and predict the most
appropriate solver via machine-learning techniques.
Combining solvers in this way - a portfolio-solving approach - maximises
the number of goals which can be proved. The driver-based architecture of
Why3 presents a unique opportunity to use a portfolio of SMT solvers for
software verification. The intelligent scheduling of solvers minimises the
time it takes to prove these goals by avoiding solvers which return Timeout
and Unknown responses. We assess the suitability of a number of machine-learning
algorithms for this scheduling task.
The performance of our tool Where4 is evaluated on a dataset of proof
obligations. We compare Where4 to a range of SMT solvers and theoretical
scheduling strategies. We find that Where4 can out-perform individual
solvers by proving a greater number of goals in a shorter average time.
Furthermore, Where4 can integrate into a Why3 user’s normal workflow -
simplifying and automating the non-expert use of SMT solvers for software
verification.
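The portfolio idea described above can be sketched concretely: extract crude syntactic features from a goal, predict a per-solver cost, and try solvers in ascending predicted-cost order. In the sketch below, the feature set, solver names, and linear weight vectors are invented placeholders, not Where4's trained model.

```python
# Sketch of portfolio scheduling: rank solvers by predicted cost for
# a goal, then a caller would try them in that order until one
# returns a definitive answer. Weights here are made-up placeholders.

SOLVERS = ["Alt-Ergo", "CVC4", "Z3"]

# Placeholder per-solver weight vectors for a linear cost model.
WEIGHTS = {
    "Alt-Ergo": {"size": 0.002, "quantifiers": 0.5, "arith": 0.10},
    "CVC4":     {"size": 0.001, "quantifiers": 1.0, "arith": 0.05},
    "Z3":       {"size": 0.001, "quantifiers": 0.8, "arith": 0.02},
}

def goal_features(goal_text):
    """Crude syntactic features of a proof obligation."""
    return {
        "size": len(goal_text),
        "quantifiers": goal_text.count("forall") + goal_text.count("exists"),
        "arith": sum(goal_text.count(op) for op in "+-*"),
    }

def predicted_cost(solver, feats):
    """Dot product of the solver's weight vector with the features."""
    return sum(w * feats[k] for k, w in WEIGHTS[solver].items())

def schedule(goal_text):
    """Solvers ordered by ascending predicted cost for this goal."""
    feats = goal_features(goal_text)
    return sorted(SOLVERS, key=lambda s: predicted_cost(s, feats))

order = schedule("forall x. x + 0 = x")
print(order)
```

A real scheduler would learn the cost model from timing data, as the work above does, and would also skip solvers whose predicted outcome is Timeout or Unknown.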
Automated Algebraic Reasoning for Collections and Local Variables with Lenses
Lenses are a useful algebraic structure for giving a unifying semantics to program variables in a variety of store models. They support efficient automated proof in the Isabelle/UTP verification framework. In this paper, we expand our lens library with (1) dynamic lenses, that support mutable indexed collections, such as arrays, and (2) symmetric lenses, that allow partitioning of a state space into disjoint local and global regions to support variable scopes. From this basis, we provide an enriched program model in Isabelle/UTP for collection variables and variable blocks. For the latter, we adopt an approach first used by Back and von Wright, and derive weakest precondition and Hoare calculi. We demonstrate several examples, including verification of insertion sort.
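The lens structure underlying the abstract can be sketched in a few lines: a lens is a get/put pair obeying the well-known lens laws, and the "dynamic" (indexed) variant focuses on position i of a collection. The sketch below is illustrative only; the actual development is formalised in Isabelle/HOL, not Python.

```python
# A lens is a get/put pair satisfying the usual lens laws:
#   get(put(s, v)) == v   (PutGet)
#   put(s, get(s)) == s   (GetPut)
# list_lens(i) focuses on index i of a tuple-encoded array,
# illustrating the indexed-collection ("dynamic lens") idea.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Lens:
    get: Callable[[Any], Any]
    put: Callable[[Any, Any], Any]

def list_lens(i):
    """Lens focusing on index i of an immutable tuple store."""
    def get(s):
        return s[i]
    def put(s, v):
        return s[:i] + (v,) + s[i + 1:]
    return Lens(get, put)

store = (10, 20, 30)
lx = list_lens(1)
assert lx.get(lx.put(store, 99)) == 99        # PutGet
assert lx.put(store, lx.get(store)) == store  # GetPut
print(lx.put(store, 99))   # (10, 99, 30)
```

Symmetric lenses extend this picture by pairing two such views so that a state space splits into disjoint local and global regions, which is what enables the variable-block encoding mentioned above.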
Scaling Up Automated Verification: A Case Study and a Formalization IDE for Building High Integrity Software
Component-based software verification is a difficult challenge because developers must specify components formally and annotate implementations with suitable assertions that are amenable to automation. This research investigates the intrinsic complexity in this challenge using a component-based case study. Simultaneously, this work also seeks to minimize the extrinsic complexities of this challenge through the development and usage of a formalization integrated development environment (F-IDE) built for specifying, developing, and using verified reusable software components.
The first contribution is an F-IDE built to support formal specification and automated verification of object-based software for the integrated specification and programming language RESOLVE. The F-IDE is novel, as it integrates a verifying compiler with a user-friendly interface that provides a number of amenities including responsive editing for model-based mathematical contracts and code, assistance for design by contract, verification, responsive error handling, and generation of property-preserving Java code that can be run within the F-IDE.
The second contribution is a case study built using the F-IDE that involves an interplay of multiple artifacts encompassing mathematical units, component interfaces, and realizations. The object-based interfaces involved are specified in terms of new mathematical models and non-trivial theories designed to encapsulate data structures and algorithms. The components are designed to be amenable to modular verification and analysis.
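The design-by-contract annotations this work revolves around can be illustrated outside RESOLVE with a small runtime-checked contract. Note the difference in guarantee: RESOLVE's verifying compiler discharges such contracts statically, whereas the hypothetical decorator below merely checks them when the function runs.

```python
# Runtime design-by-contract sketch. RESOLVE proves requires/ensures
# clauses statically; this decorator only checks them at run time,
# to illustrate the shape of the annotations involved.

import functools

def contract(requires, ensures):
    """Attach a runtime-checked pre- and postcondition to a function."""
    def wrap(f):
        @functools.wraps(f)
        def checked(*args):
            assert requires(*args), f"precondition of {f.__name__} failed"
            result = f(*args)
            assert ensures(result, *args), f"postcondition of {f.__name__} failed"
            return result
        return checked
    return wrap

# Floor division, characterised abstractly by its postcondition:
# the quotient q satisfies 0 <= n - q*d < d.
@contract(requires=lambda n, d: n >= 0 and d > 0,
          ensures=lambda q, n, d: 0 <= n - q * d < d)
def floor_div(n, d):
    return n // d

print(floor_div(7, 2))   # 3
```

The postcondition specifies *what* the result must satisfy without fixing *how* it is computed, which is exactly what makes a realization swappable behind a verified interface.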
Symbooglix: A Symbolic Execution Engine for Boogie Programs
We present the design and implementation of Symbooglix, a symbolic execution engine for the Boogie intermediate verification language. Symbooglix aims to find bugs in Boogie programs efficiently, providing bug-finding capabilities for any program analysis framework that uses Boogie as a target language. We discuss the technical challenges associated with handling Boogie, and describe how we optimised Symbooglix using a small training set of benchmarks. This empirically driven optimisation approach avoids over-fitting Symbooglix to our benchmarks, enabling a fair comparison with other tools. We present an evaluation across 3749 Boogie programs generated from the SV-COMP suite of C programs using the SMACK frontend, and 579 Boogie programs originating from several OpenCL and CUDA GPU benchmark suites, translated by the GPUVerify front-end. Our results show that Symbooglix significantly outperforms Boogaloo, an existing symbolic execution tool for Boogie, and is competitive with GPUVerify on benchmarks for which GPUVerify is highly optimised. While generally less effective than the Corral and Duality tools on the SV-COMP suite, Symbooglix is complementary to them in terms of bug-finding ability.
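The core mechanism of such an engine can be sketched in miniature: execution forks at every branch, each path accumulates a path condition, and a failing assertion on a feasible path is a bug. The toy below is not Symbooglix (which targets Boogie and queries an SMT solver); feasibility is checked by brute-force enumeration over a small input domain instead.

```python
# Toy symbolic executor for a tiny branching language. Forks at each
# "if", records path conditions as predicates, and reports an assert
# failure on any path where the negated assertion is reachable.
# Feasibility is brute-forced over a small domain instead of SMT.

from itertools import product

def negate(cond):
    return lambda s: not cond(s)

def execute(program, path_cond=()):
    """Explore all paths, returning (path_condition, outcome) pairs."""
    if not program:
        return [(path_cond, "ok")]
    stmt, rest = program[0], program[1:]
    if stmt[0] == "if":              # ("if", cond, then_block, else_block)
        _, cond, then_b, else_b = stmt
        return (execute(list(then_b) + list(rest), path_cond + (cond,)) +
                execute(list(else_b) + list(rest), path_cond + (negate(cond),)))
    if stmt[0] == "assert":          # ("assert", cond)
        _, cond = stmt
        return ([(path_cond + (negate(cond),), "bug")] +
                execute(rest, path_cond + (cond,)))
    raise ValueError(stmt[0])

def satisfiable(path_cond, domain=range(-3, 4)):
    """Brute-force feasibility check over inputs x, y in a small domain."""
    return any(all(c({"x": x, "y": y}) for c in path_cond)
               for x, y in product(domain, repeat=2))

# if x > 0 { assert x + y > 0 }   -- fails e.g. for x = 1, y = -1
prog = [("if", lambda s: s["x"] > 0,
         [("assert", lambda s: s["x"] + s["y"] > 0)],
         [])]

bugs = [pc for pc, out in execute(prog) if out == "bug" and satisfiable(pc)]
print(len(bugs))   # 1 feasible failing path
```

Real engines replace the enumeration with SMT queries over the path condition, which is what makes symbolic execution scale beyond toy domains.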
Comparative Verification of the Digital Library of Mathematical Functions and Computer Algebra Systems
Digital mathematical libraries assemble the knowledge of years of mathematical research. Numerous disciplines (e.g., physics, engineering, pure and applied mathematics) rely heavily on compendia of gathered findings. Likewise, modern research applications rely more and more on computational solutions, which are often calculated and verified by computer algebra systems. Hence, the correctness, accuracy, and reliability of both digital mathematical libraries and computer algebra systems are crucial attributes for modern research. In this paper, we present a novel approach to verify a digital mathematical library and two computer algebra systems with one another by converting mathematical expressions from one system to the other. We use our previously developed conversion tool (referred to as ) to translate formulae from the NIST Digital Library of Mathematical Functions to the computer algebra systems Maple and Mathematica. The contributions of our presented work are as follows: (1) we present the most comprehensive verification of computer algebra systems and digital mathematical libraries with one another; (2) we significantly enhance the performance of the underlying translator in terms of coverage and accuracy; and (3) we provide open access to translations for Maple and Mathematica of the formulae in the NIST Digital Library of Mathematical Functions.
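The numeric side of such a cross-verification can be sketched simply: evaluate both sides of a translated identity at sample points and flag any point where they disagree beyond a tolerance. The example below uses the Gamma recurrence from the DLMF as the identity; it is an illustration of the checking principle, not the paper's Maple/Mathematica pipeline.

```python
# Sketch of numeric cross-verification: evaluate the left- and
# right-hand sides of an identity at sample points and collect the
# points where they disagree beyond a relative tolerance.

import math

def numerically_verify(lhs, rhs, points, tol=1e-9):
    """Return the points where |lhs - rhs| exceeds a relative tol."""
    failures = []
    for x in points:
        a, b = lhs(x), rhs(x)
        if abs(a - b) > tol * max(1.0, abs(a), abs(b)):
            failures.append(x)
    return failures

# Gamma recurrence: Gamma(x + 1) = x * Gamma(x), on positive reals.
lhs = lambda x: math.gamma(x + 1)
rhs = lambda x: x * math.gamma(x)
points = [0.5, 1.0, 2.5, 7.0]

print(numerically_verify(lhs, rhs, points))   # [] -- no failing points
```

A disagreement at some point does not by itself prove the library entry wrong; as in the paper, each mismatch still has to be triaged (branch cuts, domain constraints, or a genuine translation or library error).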