
### A Weakest Pre-Expectation Semantics for Mixed-Sign Expectations

We present a weakest-precondition-style calculus for reasoning about the
expected values (pre-expectations) of *mixed-sign unbounded* random
variables after execution of a probabilistic program. The semantics of a
while-loop is well-defined as the limit of iteratively applying a functional to
a zero-element just as in the traditional weakest pre-expectation calculus,
even though a standard least fixed point argument is not applicable in this
context. A striking feature of our semantics is that it is always well-defined,
even if the expected values do not exist. We show that the calculus is sound
and allows for compositional reasoning, and we present an invariant-based
approach for reasoning about pre-expectations of loops.
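
To make the backward style of reasoning concrete, here is a small sketch (our own toy example, not the paper's calculus) of the standard weakest pre-expectation rule for probabilistic choice, wp(c1 [p] c2)(f) = p·wp(c1)(f) + (1−p)·wp(c2)(f), applied to a mixed-sign post-expectation:

```python
# Standard wp rule for a probabilistic choice, applied to the mixed-sign
# post-expectation f(x) = x after "x := x + 1 [1/2] x := x - 1".
def wp_choice(p, wp1, wp2):
    """Pre-expectation of a probabilistic choice between two branches."""
    return lambda x: p * wp1(x) + (1 - p) * wp2(x)

f = lambda x: x  # mixed-sign post-expectation: x may be negative
pre = wp_choice(0.5, lambda x: f(x + 1), lambda x: f(x - 1))

# For this symmetric step the pre-expectation coincides with f itself:
assert pre(3) == 3.0 and pre(-5) == -5.0
```

The interesting cases treated in the paper arise when such expectations are unbounded in both directions, so that plain least-fixed-point arguments for loops no longer apply.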

### Optimistic Value Iteration

Markov decision processes are widely used for planning and verification in
settings that combine controllable or adversarial choices with probabilistic
behaviour. The standard analysis algorithm, value iteration, only provides a
lower bound on infinite-horizon probabilities or reward values. Two "sound"
variations, which also deliver an upper bound, have recently appeared. In this
paper, we present optimistic value iteration, a new sound approach that
leverages value iteration's ability to usually deliver tight lower bounds: we
obtain a lower bound via standard value iteration, use the result to "guess" an
upper bound, and prove the latter's correctness. Optimistic value iteration is
easy to implement, does not require extra precomputations or a priori state
space transformations, and works for computing reachability probabilities as
well as expected rewards. It is also fast, as we show via an extensive
experimental evaluation using our publicly available implementation within the
Modest Toolset.
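
The guess-and-verify idea can be sketched on a tiny Markov chain (all states and numbers below are invented for illustration; the real algorithm also retries with a larger bump if verification fails):

```python
# Toy optimistic value iteration on a 3-state Markov chain
# (state 0 = start, 1 = goal, 2 = sink), computing P(reach goal).
def bellman(v):
    # From state 0: reach goal w.p. 0.5, stay w.p. 0.3, get stuck w.p. 0.2.
    return [0.5 * 1.0 + 0.3 * v[0] + 0.2 * 0.0, 1.0, 0.0]

# Step 1: standard value iteration yields a (usually tight) lower bound.
lo = [0.0, 1.0, 0.0]
for _ in range(200):
    lo = bellman(lo)

# Step 2: "guess" an upper bound by bumping the lower bound ...
eps = 1e-6
hi = [min(1.0, x + eps) for x in lo]

# Step 3: ... and verify it: if one Bellman step maps hi (pointwise) below
# itself, hi is an inductive upper bound on the reachability probability.
assert all(n <= h for n, h in zip(bellman(hi), hi))
assert abs(lo[0] - 0.5 / 0.7) < 1e-9  # analytic fixed point for state 0
```

The verification step is what makes the result sound: a failed check falsifies only the guess, never the lower bound.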

### A New Proof Rule for Almost-Sure Termination

An important question for a probabilistic program is whether the probability
mass of all its diverging runs is zero, that is, that it terminates "almost
surely". Proving that can be hard, and this paper presents a new method for
doing so; it is expressed in a program logic, and so applies directly to source
code. The programs may contain both probabilistic and demonic choice, and the
probabilistic choices may depend on the current state.
As do other researchers, we use variant functions (a.k.a.
"super-martingales") that are real-valued and may decrease probabilistically
on each loop iteration; our key innovation is that both the amount and the
probability of the decrease are parametric.
We prove the soundness of the new rule, indicate where its applicability goes
beyond existing rules, and explain its connection to classical results on
denumerable (non-demonic) Markov chains.

Comment: V1 to appear in POPL'18. This version collects some existing text
into a new example subsection 5.5, adds a new example 5.6, and makes further
remarks about uncountable branching. The new example 5.6 relates to work on
lexicographic termination methods, also to appear in POPL'18 [Agrawal et al.,
2018].
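
The variant-function idea behind such rules can be illustrated empirically on a toy loop (our own example, not the paper's new rule): for the symmetric walk `while x > 0: x := x - 1 [1/2] x := x + 1`, the variant V(x) = x decreases by at least 1 with probability 1/2 on every iteration, which is exactly the kind of side condition such proofs check.

```python
# Empirically check the variant condition "V decreases by >= 1 with
# probability >= 1/2 on each iteration" for a symmetric random walk.
import random

def step(x):
    # one loop iteration of "x := x - 1 [1/2] x := x + 1"
    return x - 1 if random.random() < 0.5 else x + 1

random.seed(0)
trials = 20000
decreases = sum(step(5) <= 5 - 1 for _ in range(trials))
frac = decreases / trials
assert 0.45 < frac < 0.55  # empirically, P(V decreases by >= 1) ~ 1/2
```

The paper's contribution is that both the decrease amount and its probability may vary with the state, rather than being fixed constants as in this sketch.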

### How long, O Bayesian network, will I sample thee? A program analysis perspective on expected sampling times

Bayesian networks (BNs) are probabilistic graphical models for describing
complex joint probability distributions. The main problem for BNs is inference:
Determine the probability of an event given observed evidence. Since exact
inference is often infeasible for large BNs, popular approximate inference
methods rely on sampling.
We study the problem of determining the expected time to obtain a single
valid sample from a BN. To this end, we translate the BN together with
observations into a probabilistic program. We provide proof rules that yield
the exact expected runtime of this program in a fully automated fashion. We
implemented our approach and successfully analyzed various real-world BNs taken
from the Bayesian network repository.
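
The quantity being analyzed can be made concrete with a minimal invented example (a two-node BN with made-up probabilities, not one of the paper's benchmarks): under rejection sampling with evidence B = 1, the expected number of attempts is 1 / P(B = 1).

```python
# Rejection sampling from a tiny BN A -> B with evidence B = 1.
import random

def sample_until_valid(rng):
    """Sample (A, B) repeatedly until the evidence B = 1 holds."""
    attempts = 0
    while True:
        attempts += 1
        a = rng.random() < 0.5                   # P(A = 1) = 0.5
        b = rng.random() < (0.9 if a else 0.1)   # P(B = 1 | A)
        if b:                                    # observation: B = 1
            return attempts

rng = random.Random(42)
n = 50_000
mean = sum(sample_until_valid(rng) for _ in range(n)) / n
# P(B = 1) = 0.5 * 0.9 + 0.5 * 0.1 = 0.5, so expected attempts = 1/0.5 = 2.
assert abs(mean - 2.0) < 0.1
```

The paper's proof rules derive such expected sampling times exactly and symbolically, rather than estimating them by simulation as done here.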

### A Deductive Verification Infrastructure for Probabilistic Programs

This paper presents a quantitative program verification infrastructure for
discrete probabilistic programs. Our infrastructure can be viewed as the
probabilistic analogue of Boogie: its central components are an intermediate
verification language (IVL) together with a real-valued logic. Our IVL provides
a programming-language-style notation for expressing verification conditions
whose validity implies the correctness of the program under investigation. As
our focus
is on verifying quantitative properties such as bounds on expected outcomes,
expected run-times, or termination probabilities, off-the-shelf IVLs based on
Boolean first-order logic do not suffice. Instead, a paradigm shift from the
standard Boolean to a real-valued domain is required.
Our IVL features quantitative generalizations of standard verification
constructs such as assume- and assert-statements. Verification conditions are
generated by a weakest-precondition-style semantics, based on our real-valued
logic. We show that our verification infrastructure supports natural encodings
of numerous verification techniques from the literature. With our SMT-based
implementation, we automatically verify a variety of benchmarks. To the best of
our knowledge, this establishes the first deductive verification infrastructure
for expectation-based reasoning about probabilistic programs.
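
A flavor of real-valued verification conditions can be given by a toy backward pass over a miniature language (entirely our own sketch; the paper's IVL is far richer, and the min-based quantitative assert below is just one common generalization):

```python
# Toy verification-condition generator: post-expectations are functions
# env -> real, pushed backward through a tiny probabilistic language.
def wp(stmt, post):
    kind = stmt[0]
    if kind == "assign":   # ("assign", var, fun): substitute into post
        _, var, fun = stmt
        return lambda env: post({**env, var: fun(env)})
    if kind == "choice":   # ("choice", p, s1, s2): convex combination
        _, p, s1, s2 = stmt
        return lambda env: p * wp(s1, post)(env) + (1 - p) * wp(s2, post)(env)
    if kind == "assert":   # ("assert", f): quantitative assert as minimum
        _, f = stmt
        return lambda env: min(f(env), post(env))
    if kind == "seq":      # ("seq", s1, s2): composition
        _, s1, s2 = stmt
        return wp(s1, wp(s2, post))
    raise ValueError(kind)

# wp of "x := x + 1 [1/2] x := 0" w.r.t. post-expectation x:
prog = ("choice", 0.5,
        ("assign", "x", lambda e: e["x"] + 1),
        ("assign", "x", lambda e: 0))
pre = wp(prog, lambda e: e["x"])
assert pre({"x": 4}) == 2.5   # 0.5 * (4 + 1) + 0.5 * 0
```

Note how Boolean assertions reappear as the special case of 0/∞-valued quantities, which is the paradigm shift the abstract describes.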

### Quantitative Separation Logic - A Logic for Reasoning about Probabilistic Programs

We present quantitative separation logic (QSL). In contrast to classical
separation logic, QSL employs quantities which evaluate to
real numbers instead of predicates which evaluate to Boolean values. The
connectives of classical separation logic, separating conjunction and
separating implication, are lifted from predicates to quantities. This
extension is conservative: Both connectives are backward compatible to their
classical analogs and obey the same laws, e.g. modus ponens, adjointness, etc.
Furthermore, we develop a weakest precondition calculus for quantitative
reasoning about probabilistic pointer programs in QSL. This calculus
is a conservative extension of both Reynolds' separation logic for
heap-manipulating programs and Kozen's / McIver and Morgan's weakest
preexpectations for probabilistic programs. Soundness is proven with respect to
an operational semantics based on Markov decision processes. Our calculus
preserves O'Hearn's frame rule, which enables local reasoning. We demonstrate
that our calculus enables reasoning about quantities such as the probability of
terminating with an empty heap, the probability of reaching a certain array
permutation, or the expected length of a list.
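
The lifting of separating conjunction from predicates to quantities can be sketched in a heavily simplified model, with heaps as dicts and the connective ranging over all splits of the heap (this sup-of-products form is one natural lift for illustration; see the paper for the authoritative definition):

```python
# Toy model: quantities are functions heap -> real; the lifted separating
# conjunction picks the best split of the heap between its two operands.
from itertools import combinations

def splits(heap):
    """All ways to split a heap (dict) into two disjoint sub-heaps."""
    locs = list(heap)
    for r in range(len(locs) + 1):
        for part in combinations(locs, r):
            h1 = {l: heap[l] for l in part}
            h2 = {l: v for l, v in heap.items() if l not in part}
            yield h1, h2

def sep_con(f, g):
    return lambda h: max(f(h1) * g(h2) for h1, h2 in splits(h))

# Boolean predicates embed as 0/1-valued quantities, recovering the
# classical separating conjunction on points-to assertions:
pts_1 = lambda h: 1.0 if h == {1: "a"} else 0.0   # "1 |-> a"
pts_2 = lambda h: 1.0 if h == {2: "b"} else 0.0   # "2 |-> b"
star = sep_con(pts_1, pts_2)
assert star({1: "a", 2: "b"}) == 1.0  # split {1:a} | {2:b} satisfies both
assert star({1: "a"}) == 0.0          # no split provides both cells
```

On 0/1-valued quantities this reduces to the classical connective, which is the backward compatibility the abstract emphasizes.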

### A Calculus for Amortized Expected Runtimes

We develop a weakest-precondition-style calculus à la Dijkstra for reasoning about amortized expected runtimes of randomized algorithms with access to dynamic memory: the aert calculus. Our calculus is truly quantitative, i.e., instead of Boolean-valued predicates, it manipulates real-valued functions.

En route to the aert calculus, we study the ert calculus for reasoning about expected runtimes of Kaminski et al. [2018], extended by capabilities for handling dynamic memory, thus enabling compositional and local reasoning about randomized data structures. This extension employs runtime separation logic, which was foreshadowed by Matheja [2020] and then implemented in Isabelle/HOL by Haslbeck [2021]. In addition to Haslbeck's results, we further prove soundness of the so-extended ert calculus with respect to an operational Markov decision process model featuring countably branching nondeterminism, provide extensive intuitive explanations, and provide proof rules enabling separation-logic-style verification of upper bounds on expected runtimes.

Finally, we build the so-called potential method for amortized analysis into the ert calculus, thus obtaining the aert calculus. Soundness of the aert calculus is obtained from the soundness of the ert calculus and a probabilistic form of telescoping. Since one needs to handle changes in potential which can in principle be both positive and negative, the aert calculus must, essentially, be capable of handling certain signed random variables. A particularly pleasing feature of our solution is that, unlike e.g. Kozen [1985], we obtain a loop rule for our signed random variables, and furthermore, unlike e.g. Kaminski and Katoen [2017], the aert calculus makes do without involved technical machinery for tracking the integrability of the random variables.
Finally, we present case studies, including a formal analysis of a randomized delete-insert-find-any set data structure [Brodal et al. 1996], which yields a constant expected runtime per operation, whereas no deterministic algorithm can achieve this.
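
For readers unfamiliar with the potential method that the aert calculus internalizes, here is the classic deterministic textbook instance (our refresher, not the paper's calculus): with potential Φ(n, cap) = max(0, 2n − cap) for a doubling dynamic array, every append has amortized cost actual + ΔΦ ≤ 3, even though an occasional append copies the entire array.

```python
# Classic potential-method analysis of a doubling dynamic array.
def amortized_costs(num_appends):
    """Per-append amortized costs (actual + delta-potential)."""
    n, cap = 0, 1
    phi = lambda n, cap: max(0, 2 * n - cap)   # classic potential function
    costs = []
    for _ in range(num_appends):
        before = phi(n, cap)
        if n == cap:            # table full: copy all n elements, then insert
            cap *= 2
            actual = n + 1
        else:                   # room left: a single write
            actual = 1
        n += 1
        costs.append(actual + phi(n, cap) - before)
    return costs

# Although individual appends cost up to n + 1, amortized cost stays <= 3:
assert all(c <= 3 for c in amortized_costs(100))
```

The aert calculus carries this bookkeeping into the expected-runtime setting, where ΔΦ becomes a signed random variable, hence the need for the mixed-sign machinery described above.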

### Lower Bounds for Possibly Divergent Probabilistic Programs

We present a new proof rule for verifying lower bounds on quantities of probabilistic programs. Our proof rule is not confined to almost-surely terminating programs -- as is the case for existing rules -- and can be used to establish non-trivial lower bounds on, e.g., termination probabilities and expected values, for possibly divergent probabilistic loops, e.g., the well-known three-dimensional random walk on a lattice.
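
To see why lower bounds for divergent loops are delicate, consider a toy example (our illustration, simpler than the paper's 3D random walk): the biased walk `while x > 0: x := x - 1 [0.4] x := x + 1` terminates from x = 1 only with probability 0.4/0.6 = 2/3. Finitely unrolling the loop's characteristic operator always under-approximates this probability, giving a family of sound lower bounds that converge to it from below.

```python
# Lower-bound the termination probability of a possibly divergent loop
# by finite unrolling of its characteristic (Bellman-style) operator.
def termination_lb(x0, unrollings, width=200):
    # t[x] after k rounds ~ P(loop terminates within k iterations from x);
    # the cut-off at `width` only lowers the bound further.
    t = [0.0] * width
    for _ in range(unrollings):
        t = ([1.0]
             + [0.4 * t[x - 1] + 0.6 * t[x + 1] for x in range(1, width - 1)]
             + [0.0])
    return t[x0]

lb100, lb200 = termination_lb(1, 100), termination_lb(1, 200)
exact = 0.4 / 0.6   # gambler's-ruin termination probability (2/3) from x = 1
assert lb100 < lb200 <= exact < 1.0
```

Unrolling gives ever-better bounds but no certificate; the point of an invariant-style lower-bound rule is to justify such a bound once and for all.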