10 research outputs found
ParaGnosis: A Tool for Parallel Knowledge Compilation
ParaGnosis (https://doi.org/10.5281/zenodo.7312034, https://zenodo.org/badge/latestdoi/560170574, Alternative url: https://github.com/gisodal/paragnosis, Demo url: https://github.com/gisodal/paragnosis/blob/main/DEMO.md ) is an open-source tool that supports inference queries on Bayesian networks through weighted model counting. In the knowledge compilation step, the input Bayesian network is encoded as propositional logic and then compiled into a knowledge base in decision diagram representation. The tool supports various diagram formats, including the Weighted-Positive Binary Decision Diagram (WPBDD), which can concisely represent discrete probability distributions. Once compiled, the probabilistic knowledge base can be queried in the inference step. To implement both steps efficiently, ParaGnosis uses simulated annealing to split the knowledge base into a number of partitions. This further reduces the decision diagram size and, crucially, enables parallelism in both the compilation and the inference steps. Experiments demonstrate that this partitioned approach, in combination with the WPBDD representation, can outperform other approaches in the knowledge compilation step, at the cost of slightly more expensive inference queries. Additionally, the tool can attain 15-fold parallel speedups using 64 cores.
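The core idea of inference by weighted model counting can be illustrated independently of ParaGnosis's compiled decision diagrams. The sketch below is a naive enumeration over a tiny, invented two-node network (rain influences wet), not the tool's WPBDD pipeline: a query probability is the sum of the weights of all satisfying assignments.

```python
from itertools import product

def wmc(weight, n_vars, constraint):
    """Weighted model count: sum weights of assignments satisfying the constraint."""
    total = 0.0
    for bits in product([False, True], repeat=n_vars):
        if constraint(bits):
            total += weight(bits)
    return total

# Toy two-node Bayesian network: rain -> wet (probabilities invented for illustration).
# P(rain) = 0.2, P(wet | rain) = 0.9, P(wet | ~rain) = 0.1.
def weight(bits):
    rain, wet = bits
    w = 0.2 if rain else 0.8
    w *= (0.9 if wet else 0.1) if rain else (0.1 if wet else 0.9)
    return w

p_wet = wmc(weight, 2, lambda b: b[1])                  # marginal P(wet)
p_rain_and_wet = wmc(weight, 2, lambda b: b[0] and b[1])
print(p_wet)                   # 0.2*0.9 + 0.8*0.1 = 0.26
print(p_rain_and_wet / p_wet)  # posterior P(rain | wet)
```

Knowledge compilation avoids this exponential enumeration by sharing subcomputations in the diagram, which is exactly where the WPBDD representation and partitioning pay off.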
Automated Sensitivity Analysis for Probabilistic Loops
We present an exact approach to analyze and quantify the sensitivity of
higher moments of probabilistic loops with symbolic parameters, polynomial
arithmetic and potentially uncountable state spaces. Our approach integrates
methods from symbolic computation, probability theory, and static analysis in
order to automatically capture sensitivity information about probabilistic
loops. Sensitivity information allows us to formally establish how value
distributions of probabilistic loop variables influence the functional behavior
of loops, which can in particular be helpful when choosing values of loop
variables in order to ensure efficient/expected computations. Our work uses
algebraic techniques to model higher moments of loop variables via linear
recurrence equations and introduces the notion of sensitivity recurrences. We
show that sensitivity recurrences precisely model loop sensitivities, even in
cases where the moments of loop variables do not satisfy a system of linear
recurrences. As such, we enlarge the class of probabilistic loops for which
sensitivity analysis was so far feasible. We demonstrate the success of our
approach by analyzing the sensitivities of probabilistic loops.
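The moment-recurrence view can be illustrated on a toy loop (this is a minimal hand-worked sketch, not the paper's sensitivity-recurrence machinery): with probability 1/2 the loop body executes x := x + p for a symbolic parameter p, so the first moment satisfies E[x_{n+1}] = E[x_n] + p/2, and differentiating its closed form with respect to p yields the sensitivity.

```python
import sympy as sp

p, n = sp.symbols('p n')
E = sp.Function('E')

# Loop body: with probability 1/2, x := x + p; otherwise x is unchanged.
# First-moment recurrence: E[x_{n+1}] = E[x_n] + p/2, with E[x_0] = 0.
closed = sp.rsolve(E(n + 1) - E(n) - p / 2, E(n), {E(0): 0})
print(closed)                  # closed form: n*p/2

# Sensitivity of the first moment with respect to the symbolic parameter p.
sensitivity = sp.diff(closed, p)
print(sensitivity)             # n/2
```

For this simple loop the moment recurrence is linear and solvable directly; the paper's contribution is precisely the harder case where the moments themselves do not satisfy a linear system but dedicated sensitivity recurrences still do.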
Density Elicitation with applications in Probabilistic Loops
Probabilistic loops can be employed to implement and to model different
processes ranging from software to cyber-physical systems. One main challenge
is how to automatically estimate the distribution of the underlying continuous
random variables symbolically and without sampling. We develop an approach,
which we call K-series estimation, to approximate statically the joint and
marginal distributions of a vector of random variables updated in a
probabilistic non-nested loop with polynomial and non-polynomial assignments.
Our approach is a general estimation method for an unknown probability density
function with bounded support. It naturally complements algorithms for
automatic derivation of moments in probabilistic loops such
as~\cite{BartocciKS19,Moosbruggeretal2022}. Its only requirement is a finite
number of moments of the unknown density. We show that Gram-Charlier (GC)
series, a widely used estimation method, is a special case of K-series when the
normal probability density function is used as the reference distribution. We
also provide a formulation suitable for estimating both univariate and
multivariate distributions. We demonstrate the feasibility of our approach
using multiple examples from the literature.
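The special case mentioned above, a Gram-Charlier type-A expansion around the standard normal reference, can be sketched in a few lines (an illustrative univariate sketch, not the paper's K-series method): the normal density is corrected by Hermite-polynomial terms whose coefficients come from the skewness and excess kurtosis.

```python
import math

def hermite_prob(k, x):
    """Probabilists' Hermite polynomial He_k via the recurrence
    He_{k+1}(x) = x*He_k(x) - k*He_{k-1}(x)."""
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for i in range(1, k):
        h0, h1 = h1, x * h1 - i * h0
    return h1

def gram_charlier(x, skew=0.0, exkurt=0.0):
    """Gram-Charlier type-A density for a standardized variable:
    the standard normal corrected by skewness and excess-kurtosis terms."""
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    corr = 1 + skew / 6 * hermite_prob(3, x) + exkurt / 24 * hermite_prob(4, x)
    return phi * corr

# With zero corrections the series reduces to the standard normal density.
print(gram_charlier(0.0))             # 1/sqrt(2*pi) ≈ 0.3989
print(gram_charlier(1.0, skew=0.5))   # skew-corrected density at x = 1
```

K-series generalizes this picture by allowing reference densities other than the normal, which is why only finitely many moments of the unknown density are required.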
A Deductive Verification Infrastructure for Probabilistic Programs
This paper presents a quantitative program verification infrastructure for discrete probabilistic programs. Our infrastructure can be viewed as the probabilistic analogue of Boogie: its central components are an intermediate verification language (IVL) together with a real-valued logic. Our IVL provides a programming-language-style notation for expressing verification conditions whose validity implies the correctness of a program under investigation. As our focus is on verifying quantitative properties such as bounds on expected outcomes, expected run-times, or termination probabilities, off-the-shelf IVLs based on Boolean first-order logic do not suffice. Instead, a paradigm shift from the standard Boolean to a real-valued domain is required.
Our IVL features quantitative generalizations of standard verification constructs such as assume- and assert-statements. Verification conditions are generated by a weakest-precondition-style semantics, based on our real-valued logic. We show that our verification infrastructure supports natural encodings of numerous verification techniques from the literature. With our SMT-based implementation, we automatically verify a variety of benchmarks. To the best of our knowledge, this establishes the first deductive verification infrastructure for expectation-based reasoning about probabilistic programs.
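The weakest-precondition-style semantics over a real-valued domain can be illustrated on a tiny expectation transformer (a hand-rolled sketch for a three-construct toy language, not the paper's IVL): wp(P, f) maps a post-expectation f to the expected value of f after running P.

```python
import sympy as sp

x = sp.Symbol('x')

# Weakest pre-expectation transformer for a toy probabilistic language.
# Programs are tuples: ('assign', var, expr), ('choice', p, P1, P2),
# ('seq', P1, P2). wp(P, f) is the expected value of f after running P.
def wp(prog, f):
    kind = prog[0]
    if kind == 'assign':
        _, var, expr = prog
        return f.subs(var, expr)          # substitute, as in classical wp
    if kind == 'choice':
        _, p, left, right = prog
        return p * wp(left, f) + (1 - p) * wp(right, f)   # convex combination
    if kind == 'seq':
        _, first, second = prog
        return wp(first, wp(second, f))   # compose transformers
    raise ValueError(kind)

# Program: { x := x + 1 } [1/2] { x := 0 } ; x := 2 * x
prog = ('seq',
        ('choice', sp.Rational(1, 2),
         ('assign', x, x + 1),
         ('assign', x, sp.Integer(0))),
        ('assign', x, 2 * x))
print(sp.expand(wp(prog, x)))   # expected final value of x: x + 1
```

The probabilistic choice is what forces the real-valued domain: pre-expectations are convex combinations of branch results, which Boolean first-order logic cannot express.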
Exact Bayesian Inference for Loopy Probabilistic Programs
We present an exact Bayesian inference method for inferring posterior
distributions encoded by probabilistic programs featuring possibly unbounded
looping behaviors. Our method is built on an extended denotational semantics
represented by probability generating functions, which resolves semantic
intricacies induced by intertwining discrete probabilistic loops with
conditioning (for encoding posterior observations). We implement our method in
a tool called Prodigy; it augments existing computer algebra systems with the
theory of generating functions for the (semi-)automatic inference and
quantitative verification of conditioned probabilistic programs. Experimental
results show that Prodigy can handle various infinite-state loopy programs and
outperforms state-of-the-art exact inference tools over benchmarks of loop-free
programs.
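The generating-function view underlying this approach can be sketched for a single geometric loop (a worked toy example, not Prodigy's automated semantics): the distribution of a loop variable is packaged as a probability generating function, from which probabilities, moments, and conditioning all become algebraic operations.

```python
import sympy as sp

z = sp.Symbol('z')

# PGF of the loop `x := 0; while flip(1/2): x := x + 1`:
# x is geometric with P(x = k) = (1/2)^(k+1), so G(z) = (1/2) / (1 - z/2).
G = sp.Rational(1, 2) / (1 - z / 2)

# Probabilities are Taylor coefficients; moments come from derivatives at z = 1.
p0 = G.subs(z, 0)                   # P(x = 0) = 1/2
mean = sp.diff(G, z).subs(z, 1)     # E[x] = 1

# Conditioning on the observation x >= 1: drop the z^0 term and renormalize.
G_cond = (G - p0) / (1 - p0)
print(p0, mean, sp.simplify(G_cond.subs(z, 1)))   # 1/2, 1, 1 (PGFs sum to 1)
```

Because the PGF is a closed-form rational function, the unbounded loop poses no problem: the infinite-state distribution is manipulated exactly, without sampling or truncation.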
Inference of Probabilistic Programs with Moment-Matching Gaussian Mixtures
Computing the posterior distribution of a probabilistic program is a hard
task for which no one-size-fits-all solution exists. We propose Gaussian
Semantics, which approximates the exact probabilistic semantics of a bounded
program by means of Gaussian mixtures. It is parametrized by a map that
associates each program location with the moment order to be matched in the
approximation. We provide two main contributions. The first is a universal
approximation theorem stating that, under mild conditions, Gaussian Semantics
can approximate the exact semantics arbitrarily closely. The second is an
approximation that matches up to second-order moments analytically in the face
the generally difficult problem of matching moments of Gaussian mixtures with
arbitrary moment order. We test our second-order Gaussian approximation (SOGA)
on a number of case studies from the literature. We show that it can provide
accurate estimates in models not supported by other approximation methods or
when exact symbolic techniques fail because of complex expressions or
non-simplified integrals. On two notable classes of problems, namely
collaborative filtering and programs involving mixtures of continuous and
discrete distributions, we show that SOGA significantly outperforms alternative
techniques in terms of accuracy and computational time.
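The second-order moment matching at the heart of such approximations can be sketched in one dimension (an illustrative collapse of a mixture to a single Gaussian, not the SOGA implementation itself): the matched Gaussian takes the mixture's mean and variance, computed from the components' weighted moments.

```python
def collapse_mixture(components):
    """Match a 1-D Gaussian mixture up to its first two moments with a
    single Gaussian. components: list of (weight, mean, variance)."""
    mean = sum(w * m for w, m, _ in components)
    # E[X^2] of a mixture is the weighted sum of each component's E[X^2] = v + m^2.
    second = sum(w * (v + m * m) for w, m, v in components)
    return mean, second - mean * mean     # matched mean and variance

# Mixture 0.5*N(-1, 1) + 0.5*N(1, 1): mean 0, variance 1 + 1 = 2.
m, v = collapse_mixture([(0.5, -1.0, 1.0), (0.5, 1.0, 1.0)])
print(m, v)    # 0.0 2.0
```

Matching only up to second order keeps every step analytic, which is exactly the trade-off the abstract describes: higher-order moment matching of Gaussian mixtures is generally hard, while second-order matching stays in closed form.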
Symbolic Execution for Randomized Programs
We propose a symbolic execution method for programs that can draw random
samples. In contrast to existing work, our method can verify randomized
programs with unknown inputs and can prove probabilistic properties that
universally quantify over all possible inputs. Our technique augments standard
symbolic execution with a new class of \emph{probabilistic symbolic variables},
which represent the results of random draws, and computes symbolic expressions
representing the probability of taking individual paths. We implement our
method on top of the \textsc{KLEE} symbolic execution engine alongside multiple
optimizations and use it to prove properties about probabilities and expected
values for a range of challenging case studies written in C++, including
Freivalds' algorithm, randomized quicksort, and a randomized property-testing
algorithm for monotonicity. We evaluate our method against \textsc{Psi}, an
exact probabilistic symbolic inference engine, and \textsc{Storm}, a
probabilistic model checker, and show that our method significantly outperforms
both tools.
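The path-probability bookkeeping that probabilistic symbolic variables enable can be sketched with plain enumeration (a toy model of the idea, not the KLEE-based implementation): each random draw becomes a symbolic variable with known outcome probabilities, each execution path's probability is the product of its draws' probabilities, and properties over the unknown program inputs must hold on every path.

```python
from fractions import Fraction
from itertools import product

def explore(coins):
    """Enumerate paths of a toy program that counts heads over a sequence
    of biased coin flips. coins: list of P(heads) per draw. Returns a map
    from outcome tuple to (heads_count, path_probability)."""
    paths = {}
    for outcome in product([True, False], repeat=len(coins)):
        pr = Fraction(1)
        for heads, p in zip(outcome, coins):
            pr *= p if heads else 1 - p   # product of branch probabilities
        paths[outcome] = (sum(outcome), pr)
    return paths

paths = explore([Fraction(1, 2), Fraction(1, 3)])
# Probability of a path property: sum over the paths where it holds.
p_some_head = sum(pr for h, pr in paths.values() if h >= 1)
print(p_some_head)   # 1 - (1/2)*(2/3) = 2/3
```

Exact rational arithmetic over path probabilities is what lets such a method prove, rather than estimate, quantitative properties; the engineering challenge the paper addresses is doing this at the scale of real C++ programs with unknown inputs.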
Computer Aided Verification
This open access two-volume set LNCS 13371 and 13372 constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: Invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: Probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.