Stochastic Invariants for Probabilistic Termination
Termination is one of the basic liveness properties, and we study the
termination problem for probabilistic programs with real-valued variables.
Previous works focused on the qualitative problem that asks whether an input
program terminates with probability 1 (almost-sure termination). A powerful
approach for this qualitative problem is the notion of ranking supermartingales
with respect to a given set of invariants. The quantitative problem
(probabilistic termination) asks for bounds on the termination probability. A
fundamental conceptual drawback of the existing approaches to probabilistic
termination is that, even though the supermartingales account for the
probabilistic behavior of the programs, the invariants are obtained while
completely ignoring this probabilistic aspect.
In this work we address the probabilistic termination problem for
linear-arithmetic probabilistic programs with nondeterminism. We define the
notion of stochastic invariants, which are constraints together with a bound
on the probability that the constraints hold. We introduce the concept of
repulsing supermartingales. First, we show that repulsing supermartingales can
be used to obtain bounds on the probability of the stochastic invariants.
Second, we show the effectiveness of repulsing supermartingales in the
following three ways: (1)~With a combination of ranking and repulsing
supermartingales we can compute lower bounds on the probability of termination;
(2)~repulsing supermartingales provide witnesses for refutation of almost-sure
termination; and (3)~with a combination of ranking and repulsing
supermartingales we can establish persistence properties of probabilistic
programs.
We also present results on related computational problems and an experimental
evaluation of our approach on academic examples.
Comment: Full version of a paper published at POPL 2017. 20 pages.
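To make the supermartingale idea concrete (a hypothetical toy example, not one from the paper): a ranking supermartingale is a function whose expected value decreases by at least some fixed ε > 0 in every loop iteration. For the made-up loop below with candidate f(x) = x, the expected per-step change can be checked empirically:

```python
import random

def step(x):
    # one iteration of a toy loop "while x > 0: x := x - 1 [3/4] x := x + 1"
    return x - 1 if random.random() < 0.75 else x + 1

def empirical_drift(x0, trials=100_000):
    # estimate E[f(x') - f(x)] for the candidate supermartingale f(x) = x
    random.seed(0)
    return sum(step(x0) - x0 for _ in range(trials)) / trials

drift = empirical_drift(10)
# a ranking supermartingale needs drift <= -eps for some fixed eps > 0;
# here the true drift is 0.75 * (-1) + 0.25 * (+1) = -0.5
assert drift < -0.45
```

Of course, the papers establish such conditions symbolically for all reachable states, not by sampling; the simulation only illustrates what the condition asserts.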
Cost Analysis of Nondeterministic Probabilistic Programs
We consider the problem of expected cost analysis over nondeterministic
probabilistic programs, which aims at automated methods for analyzing the
resource-usage of such programs. Previous approaches for this problem could
only handle nonnegative bounded costs. However, in many scenarios, such as
queuing networks or analysis of cryptocurrency protocols, both positive and
negative costs are necessary and the costs are unbounded as well.
In this work, we present a sound and efficient approach to obtain polynomial
bounds on the expected accumulated cost of nondeterministic probabilistic
programs. Our approach can handle (a) general positive and negative costs with
bounded updates in variables; and (b) nonnegative costs with general updates to
variables. We show that several natural examples which could not be handled by
previous approaches are captured in our framework.
Moreover, our approach leads to an efficient polynomial-time algorithm, while
no previous approach for cost analysis of probabilistic programs could
guarantee polynomial runtime. Finally, we show the effectiveness of our
approach by presenting experimental results on a variety of programs, motivated
by real-world applications, for which we efficiently synthesize tight
resource-usage bounds.
Comment: A conference version will appear in the 40th ACM Conference on
Programming Language Design and Implementation (PLDI 2019).
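As a hedged illustration of the kind of program this analysis targets (the loop, costs, and probabilities below are invented, not the paper's benchmarks): a loop with bounded variable updates and unit cost per iteration has expected accumulated cost linear in the input, which a Monte Carlo estimate confirms against the closed-form value 3n:

```python
import random

def run(n):
    # toy loop: while x > 0: pay cost 1; x := x - 1 [2/3] x := x + 1
    x, cost = n, 0
    while x > 0:
        cost += 1
        x += -1 if random.random() < 2 / 3 else 1
    return cost

random.seed(1)
trials = 50_000
est = sum(run(5) for _ in range(trials)) / trials
# the drift of x is -1/3 per step, so the expected cost from x = 5 is 3 * 5 = 15
assert 14.5 < est < 15.5
```

The approach in the abstract derives such polynomial bounds statically; the simulation merely shows the quantity being bounded.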
Aiming Low Is Harder -- Induction for Lower Bounds in Probabilistic Program Verification
We present a new inductive rule for verifying lower bounds on expected values of random variables after execution of probabilistic loops as well as on their expected runtimes. Our rule is simple in the sense that loop body semantics need to be applied only finitely often in order to verify that the candidates are indeed lower bounds. In particular, it is not necessary to find the limit of a sequence as in many previous rules.
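To make the verification target concrete (a standard toy example, not the paper's rule): for the geometric loop below, the expected final value of c is p/(1-p) = 1 for p = 1/2, so any candidate lower bound up to 1, say 0.9, is valid, as a simulation suggests:

```python
import random

def run(p=0.5):
    # toy loop: while a coin with bias p shows heads, increment c
    c = 0
    while random.random() < p:
        c += 1
    return c

random.seed(2)
trials = 200_000
est = sum(run() for _ in range(trials)) / trials
# E[c] is geometric: p / (1 - p) = 1 for p = 1/2, so 0.9 is a valid lower bound
assert est > 0.9
assert abs(est - 1.0) < 0.05
```

The paper's inductive rule certifies such lower bounds deductively, with only finitely many applications of the loop-body semantics; the simulation only sanity-checks the candidate.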
Template-Based Static Posterior Inference for Bayesian Probabilistic Programming
In Bayesian probabilistic programming, a central problem is to estimate the
normalised posterior distribution (NPD) of a probabilistic program with
conditioning. Prominent approximate approaches to address this problem include
Markov chain Monte Carlo and variational inference, but neither can generate
guaranteed outcomes within limited time. Moreover, most existing formal
approaches that perform exact inference for NPD are restricted to programs with
closed-form solutions or bounded loops/recursion. A recent work (Beutner et
al., PLDI 2022) derived guaranteed bounds for NPD over programs with unbounded
recursion. However, as this approach requires recursion unrolling, it suffers
from the path explosion problem. Furthermore, previous approaches do not
consider score-recursive probabilistic programs that allow score statements
inside loops, which is non-trivial and requires careful treatment to ensure the
integrability of the normalising constant in NPD.
In this work, we propose a novel automated approach to derive bounds for NPD
via polynomial templates. Our approach can handle probabilistic programs with
unbounded while loops and continuous distributions with infinite supports. The
novelties in our approach are three-fold: First, we use polynomial templates to
circumvent the path explosion problem from recursion unrolling; Second, we
derive a novel multiplicative variant of the Optional Stopping Theorem that
addresses the integrability issue in score-recursive programs; Third, to
increase the accuracy of the derived bounds via polynomial templates, we
propose a novel truncation technique that restricts a program to a bounded
range of program values. Experiments over a wide range of benchmarks
demonstrate that our approach is time-efficient and can derive bounds for NPD
that are comparable with (or tighter than) the recursion-unrolling approach
(Beutner et al., PLDI 2022).
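For intuition only (the model below is a generic textbook example, not one of the paper's benchmarks), the NPD quantity whose bounds are derived can be estimated by self-normalised importance sampling: sample from the prior, weight each sample by the score, and normalise by the sum of weights:

```python
import math
import random

def normal_pdf(y, mu, sigma=1.0):
    # Gaussian density used as a soft-conditioning score
    return math.exp(-((y - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

random.seed(3)
num = den = 0.0
for _ in range(200_000):
    x = random.uniform(0.0, 3.0)   # prior: x ~ Uniform(0, 3)
    w = normal_pdf(1.1, x)         # score: observe y = 1.1 ~ Normal(x, 1)
    num += w * x
    den += w
posterior_mean = num / den         # estimate of E[x | y = 1.1] under the NPD
assert 1.2 < posterior_mean < 1.36
```

Such sampling estimates come without guarantees in limited time, which is precisely the gap the template-based bounds in the abstract aim to close.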
Programmatic Strategy Synthesis: Resolving Nondeterminism in Probabilistic Programs
We consider imperative programs that involve both randomization and pure
nondeterminism. The central question is how to find a strategy resolving the
pure nondeterminism such that the so-obtained determinized program satisfies a
given quantitative specification, i.e., bounds on expected outcomes such as the
expected final value of a program variable or the probability to terminate in a
given set of states. We show how memoryless and deterministic (MD) strategies
can be obtained in a semi-automatic fashion using deductive verification
techniques. For loop-free programs, the MD strategies resulting from our
weakest precondition-style framework are correct by construction. This extends
to loopy programs, provided the loops are equipped with suitable loop
invariants - just like in program verification. We show how our technique
relates to the well-studied problem of obtaining strategies in countably
infinite Markov decision processes with reachability-reward objectives.
Finally, we apply our technique to several case studies.
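The MDP connection can be illustrated on a small finite instance (all states, actions, and probabilities below are made up): value iteration for maximal reachability yields both the optimal values and a memoryless deterministic (MD) strategy choosing an optimal action per state:

```python
# tiny MDP; mdp[state][action] = list of (probability, successor)
mdp = {
    "s0": {"a": [(0.6, "goal"), (0.4, "dead")], "b": [(1.0, "s1")]},
    "s1": {"a": [(0.9, "goal"), (0.1, "dead")]},
}
V = {"s0": 0.0, "s1": 0.0, "goal": 1.0, "dead": 0.0}

# value iteration for the maximal probability of reaching "goal"
for _ in range(100):
    for s, acts in mdp.items():
        V[s] = max(sum(p * V[t] for p, t in succ) for succ in acts.values())

# an MD strategy: in each state, pick an action attaining the optimal value
strategy = {s: max(acts, key=lambda a: sum(p * V[t] for p, t in acts[a]))
            for s, acts in mdp.items()}
assert abs(V["s0"] - 0.9) < 1e-9
assert strategy == {"s0": "b", "s1": "a"}
```

The abstract's contribution goes further: it obtains such strategies deductively, via weakest-precondition reasoning and loop invariants, rather than by enumerating a finite state space.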
A Deductive Verification Infrastructure for Probabilistic Programs
This paper presents a quantitative program verification infrastructure for
discrete probabilistic programs. Our infrastructure can be viewed as the
probabilistic analogue of Boogie: its central components are an intermediate
verification language (IVL) together with a real-valued logic. Our IVL provides
a programming-language-style notation for expressing verification conditions whose
validity implies the correctness of a program under investigation. As our focus
is on verifying quantitative properties such as bounds on expected outcomes,
expected run-times, or termination probabilities, off-the-shelf IVLs based on
Boolean first-order logic do not suffice. Instead, a paradigm shift from the
standard Boolean to a real-valued domain is required.
Our IVL features quantitative generalizations of standard verification
constructs such as assume- and assert-statements. Verification conditions are
generated by a weakest-precondition-style semantics, based on our real-valued
logic. We show that our verification infrastructure supports natural encodings
of numerous verification techniques from the literature. With our SMT-based
implementation, we automatically verify a variety of benchmarks. To the best of
our knowledge, this establishes the first deductive verification infrastructure
for expectation-based reasoning about probabilistic programs.
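A quantitative weakest-precondition transformer for a loop-free fragment can be sketched in a few lines (the statement forms and encoding below are illustrative, not the paper's IVL syntax); expectations are represented as functions from program states to reals:

```python
# hedged sketch of weakest-preexpectation computation for loop-free programs;
# statements are tuples, expectations are functions from state dicts to floats
def wp(stmt, post):
    kind = stmt[0]
    if kind == "assign":            # ("assign", var, expr_fn)
        _, var, expr = stmt
        return lambda s: post({**s, var: expr(s)})
    if kind == "choice":            # ("choice", p, left, right): prob. branch
        _, p, left, right = stmt
        wl, wr = wp(left, post), wp(right, post)
        return lambda s: p * wl(s) + (1 - p) * wr(s)
    if kind == "seq":               # ("seq", first, second)
        _, first, second = stmt
        return wp(first, wp(second, post))
    raise ValueError(kind)

# program: x := x + 1 ; (x := 2 * x [1/2] x := 0), post-expectation f(s) = x
prog = ("seq",
        ("assign", "x", lambda s: s["x"] + 1),
        ("choice", 0.5,
         ("assign", "x", lambda s: 2 * s["x"]),
         ("assign", "x", lambda s: 0)))
pre = wp(prog, lambda s: s["x"])
# from x = 4: after x := 5, the branch gives 0.5 * 10 + 0.5 * 0 = 5.0
assert pre({"x": 4}) == 5.0
```

This mirrors the shift the abstract describes: assertions denote real-valued expectations rather than Boolean predicates, and verification conditions are generated backwards through the program.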