
    A Weakest Pre-Expectation Semantics for Mixed-Sign Expectations

    We present a weakest-precondition-style calculus for reasoning about the expected values (pre-expectations) of \emph{mixed-sign unbounded} random variables after execution of a probabilistic program. The semantics of a while-loop is well-defined as the limit of iteratively applying a functional to a zero-element, just as in the traditional weakest pre-expectation calculus, even though a standard least-fixed-point argument is not applicable in this context. A striking feature of our semantics is that it is always well-defined, even if the expected values do not exist. We show that the calculus is sound and allows for compositional reasoning, and we present an invariant-based approach for reasoning about pre-expectations of loops.
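
    To sketch the shape of this definition in standard notation (ours, not necessarily the paper's): for a loop $\mathtt{while}(\varphi)\{C\}$ with post-expectation $f$, the traditional calculus sets
    \[
    \Phi_f(X) \;=\; [\varphi]\cdot\mathrm{wp}[\![C]\!](X) \;+\; [\neg\varphi]\cdot f,
    \qquad
    \mathrm{wp}[\![\mathtt{while}(\varphi)\{C\}]\!](f) \;=\; \lim_{n\to\infty}\Phi_f^n(0),
    \]
    where $[\cdot]$ is the Iverson bracket. For non-negative $f$, $\Phi_f$ is monotone and the limit is the least fixed point; for mixed-sign $f$ that argument breaks down, which is exactly the gap the paper's semantics closes by establishing the limit directly.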

    On the Hardness of Almost-Sure Termination

    This paper considers the computational hardness of computing expected outcomes and of deciding (universal) (positive) almost-sure termination of probabilistic programs. It is shown that computing lower and upper bounds of expected outcomes is $\Sigma_1^0$- and $\Sigma_2^0$-complete, respectively. Deciding (universal) almost-sure termination, as well as deciding whether the expected outcome of a program equals a given rational value, is shown to be $\Pi_2^0$-complete. Finally, it is shown that deciding positive almost-sure termination is $\Sigma_2^0$-complete, and deciding universal positive almost-sure termination is $\Pi_3^0$-complete. (MFCS 2015)
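
    A rough intuition for these levels (our sketch, not taken from the paper): each property's place in the arithmetical hierarchy matches the quantifier prefix needed to state it over facts about finite execution prefixes, which are decidable for programs with computable (e.g., rational) probabilities. For example,
    \[
    \text{AST:}\;\; \forall \varepsilon > 0\;\exists n:\; \Pr[\text{termination within } n \text{ steps}] \ge 1-\varepsilon \quad(\Pi_2^0),
    \qquad
    \text{PAST:}\;\; \exists c\;\forall n:\; \sum_{k\le n} k\cdot \Pr[\text{termination in step } k] \le c \quad(\Sigma_2^0).
    \]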

    Optimistic Value Iteration

    Markov decision processes are widely used for planning and verification in settings that combine controllable or adversarial choices with probabilistic behaviour. The standard analysis algorithm, value iteration, only provides a lower bound on unbounded probabilities or reward values. Two "sound" variations, which also deliver an upper bound, have recently appeared. In this paper, we present optimistic value iteration, a new sound approach that leverages value iteration's ability to usually deliver tight lower bounds: we obtain a lower bound via standard value iteration, use the result to "guess" an upper bound, and prove the latter's correctness. Optimistic value iteration is easy to implement, does not require extra precomputations or a priori state space transformations, and works for computing reachability probabilities as well as expected rewards. It is also fast, as we show via an extensive experimental evaluation using our publicly available implementation within the Modest Toolset.
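
    As a minimal sketch of the idea (our own toy code, not the Modest Toolset implementation; the MDP encoding and all names are invented for illustration):

        # Toy sketch of optimistic value iteration (OVI) for maximal reachability
        # probabilities in an MDP.
        def bellman(mdp, goal, v):
            # One Bellman step: goal states are worth 1, others take the best action
            # (each action is a dict mapping successor states to probabilities).
            return {s: 1.0 if s in goal
                    else max(sum(p * v[t] for t, p in act.items()) for act in mdp[s])
                    for s in mdp}

        def ovi(mdp, goal, eps=1e-6, guess=1e-4):
            # Phase 1: plain value iteration from below. Every iterate is a valid
            # lower bound, but convergence checks alone give no error guarantee.
            lo = {s: 0.0 for s in mdp}
            while True:
                new = bellman(mdp, goal, lo)
                converged = max(abs(new[s] - lo[s]) for s in mdp) < eps
                lo = new
                if converged:
                    break
            # Phase 2: optimistically guess an upper bound and try to verify it.
            for _ in range(50):
                hi = {s: min(1.0, lo[s] + guess) for s in mdp}
                post = bellman(mdp, goal, hi)
                if all(post[s] <= hi[s] + 1e-12 for s in mdp):
                    # hi is inductive: one Bellman step does not increase it, so by
                    # monotonicity it dominates the least fixed point (the true values).
                    return lo, hi
                guess *= 2  # real OVI refines the candidate rather than just enlarging it
            raise RuntimeError("no inductive upper bound found")

        mdp = {"a": [{"goal": 0.9, "sink": 0.1}],
               "goal": [{"goal": 1.0}],
               "sink": [{"sink": 1.0}]}
        print(ovi(mdp, {"goal"}))  # bounds around the true value Pr(reach goal from a) = 0.9

    The verification step relies on the Bellman operator being monotone: if applying it to the guessed vector does not increase any component, the guess dominates the least fixed point and is hence a sound upper bound.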

    Quantitative Strongest Post: A Calculus for Reasoning about the Flow of Quantitative Information

    We present a novel strongest-postcondition-style calculus for quantitative reasoning about non-deterministic programs with loops. Whereas existing quantitative weakest pre allows reasoning about the value of a quantity after a program terminates on a given initial state, quantitative strongest post allows reasoning about the value that a quantity had before the program was executed and reached a given final state. We show how strongest post enables reasoning about the flow of quantitative information through programs. Similarly to weakest liberal preconditions, we also develop a quantitative strongest liberal post. As a byproduct, we obtain the entirely unexplored notion of strongest liberal postconditions and show how these foreshadow a potential new program logic - partial incorrectness logic - which would be a more liberal version of O'Hearn's recent incorrectness logic.
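
    As a first approximation of what this means (our sketch; the paper's actual definitions are more refined, in particular regarding nondeterminism, nontermination, and the liberal variant): where the classical strongest postcondition collects the final states reachable from states satisfying a precondition, quantitative strongest post maximizes an initial-state quantity over the possible pre-histories of a final state,
    \[
    \mathrm{sp}[\![C]\!](f)(\tau) \;=\; \sup\,\{\, f(\sigma) \;\mid\; \text{some execution of } C \text{ leads from } \sigma \text{ to } \tau \,\}.
    \]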

    A New Proof Rule for Almost-Sure Termination

    An important question for a probabilistic program is whether the probability mass of all its diverging runs is zero, that is, whether it terminates "almost surely". Proving that can be hard, and this paper presents a new method for doing so; it is expressed in a program logic, and so applies directly to source code. The programs may contain both probabilistic and demonic choice, and the probabilistic choices may depend on the current state. As do other researchers, we use variant functions (a.k.a. "super-martingales") that are real-valued and probabilistically might decrease on each loop iteration; but our key innovation is that the amount as well as the probability of the decrease are parametric. We prove the soundness of the new rule, indicate where its applicability goes beyond existing rules, and explain its connection to classical results on denumerable (non-demonic) Markov chains.
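
    The shape of the rule, paraphrased from the abstract (side conditions such as loop invariants are omitted here): one exhibits a real-valued variant $V$ together with functions $p$ and $d$ (antitone, as we recall from the published rule) such that, from any state of the loop with $V = v > 0$,
    \[
    \Pr\big[\, V \text{ decreases by at least } d(v) \text{ in one iteration} \,\big] \;\ge\; p(v),
    \]
    i.e. both the probability and the magnitude of the decrease may depend on - are parametric in - the current value of the variant, rather than being fixed constants.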

    How long, O Bayesian network, will I sample thee? A program analysis perspective on expected sampling times

    Bayesian networks (BNs) are probabilistic graphical models for describing complex joint probability distributions. The main problem for BNs is inference: determining the probability of an event given observed evidence. Since exact inference is often infeasible for large BNs, popular approximate inference methods rely on sampling. We study the problem of determining the expected time to obtain a single valid sample from a BN. To this end, we translate the BN together with observations into a probabilistic program. We provide proof rules that yield the exact expected runtime of this program in a fully automated fashion. We implemented our approach and successfully analyzed various real-world BNs taken from the Bayesian network repository.
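
    As a toy illustration of the quantity being computed (our own example, not the paper's automated proof-rule approach): for rejection sampling, the expected number of attempts until the evidence holds is the reciprocal of the evidence's probability.

        import random

        # Toy rejection sampler for a two-node BN  Rain -> WetGrass
        # with evidence WetGrass = true.
        P_RAIN = 0.2
        P_WET_GIVEN_RAIN = 0.9
        P_WET_GIVEN_NO_RAIN = 0.1

        def sample_until_evidence():
            """Sample the full network until the evidence holds;
            return the number of attempts (the 'sampling time')."""
            attempts = 0
            while True:
                attempts += 1
                rain = random.random() < P_RAIN
                wet = random.random() < (P_WET_GIVEN_RAIN if rain else P_WET_GIVEN_NO_RAIN)
                if wet:  # evidence satisfied: valid sample obtained
                    return attempts

        # Expected attempts = 1 / P(WetGrass = true).
        analytic = 1 / (P_RAIN * P_WET_GIVEN_RAIN + (1 - P_RAIN) * P_WET_GIVEN_NO_RAIN)
        empirical = sum(sample_until_evidence() for _ in range(100_000)) / 100_000
        print(f"expected attempts: analytic = {analytic:.3f}, empirical = {empirical:.3f}")

    For unlikely evidence this expectation blows up, which is precisely why computing exact expected sampling times in advance is worthwhile.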

    Probabilistic Data with Continuous Distributions

    Statistical models of real world data typically involve continuous probability distributions such as normal, Laplace, or exponential distributions. Such distributions are supported by many probabilistic modelling formalisms, including probabilistic database systems. Yet, the traditional theoretical framework of probabilistic databases focusses entirely on finite probabilistic databases. Only recently, we set out to develop the mathematical theory of infinite probabilistic databases. The present paper is an exposition of two recent papers which are cornerstones of this theory. In (Grohe, Lindner; ICDT 2020) we propose a very general framework for probabilistic databases, possibly involving continuous probability distributions, and show that queries have a well-defined semantics in this framework. In (Grohe, Kaminski, Katoen, Lindner; PODS 2020) we extend the declarative probabilistic programming language Generative Datalog, proposed by (Bárány et al. 2017) for discrete probability distributions, to continuous probability distributions and show that such programs yield generative models of continuous probabilistic databases.
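
    To give a flavour of what a generative model of a continuous probabilistic database looks like (our own toy rendering in Python; the papers use Generative Datalog, not Python):

        import random

        # Each sensor deterministically yields one Reading fact whose value is
        # drawn from a normal distribution, so the "database" is a probability
        # distribution over instances with continuous attributes.
        sensors = [("s1", 20.0), ("s2", 23.5)]  # (id, true temperature)

        def sample_instance():
            # Analogous to a rule  Reading(s, Normal(mu, 0.5)) :- Sensor(s, mu).
            return [(sid, random.gauss(mu, 0.5)) for sid, mu in sensors]

        # Each call yields one possible world; queries then get their semantics
        # as probabilities or expectations over these worlds.
        print(sample_instance())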

    A Deductive Verification Infrastructure for Probabilistic Programs

    This paper presents a quantitative program verification infrastructure for discrete probabilistic programs. Our infrastructure can be viewed as the probabilistic analogue of Boogie: its central components are an intermediate verification language (IVL) together with a real-valued logic. Our IVL provides programming-language-style constructs for expressing verification conditions whose validity implies the correctness of a program under investigation. As our focus is on verifying quantitative properties such as bounds on expected outcomes, expected run-times, or termination probabilities, off-the-shelf IVLs based on Boolean first-order logic do not suffice. Instead, a paradigm shift from the standard Boolean to a real-valued domain is required. Our IVL features quantitative generalizations of standard verification constructs such as assume- and assert-statements. Verification conditions are generated by a weakest-precondition-style semantics, based on our real-valued logic. We show that our verification infrastructure supports natural encodings of numerous verification techniques from the literature. With our SMT-based implementation, we automatically verify a variety of benchmarks. To the best of our knowledge, this establishes the first deductive verification infrastructure for expectation-based reasoning about probabilistic programs.
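
    To convey the flavour of these quantitative constructs (our sketch in the expectation-based style; we do not claim these are the paper's exact definitions): with quantities ordered pointwise, an assert contributes via a minimum and an assume via a Heyting-style implication,
    \[
    \mathrm{vc}[\![\mathtt{assert}\ f]\!](g) \;=\; f \sqcap g,
    \qquad
    \mathrm{vc}[\![\mathtt{assume}\ f]\!](g) \;=\; f \to g,
    \quad\text{where}\quad
    (f \to g)(\sigma) \;=\; \begin{cases}\top & \text{if } f(\sigma) \le g(\sigma),\\ g(\sigma) & \text{otherwise,}\end{cases}
    \]
    which specializes to the familiar Boolean assert and assume when $f$ and $g$ only take the values $\bot$ and $\top$.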

    A Calculus for Amortized Expected Runtimes

    We develop a weakest-precondition-style calculus à la Dijkstra for reasoning about amortized expected runtimes of randomized algorithms with access to dynamic memory — the aert calculus. Our calculus is truly quantitative, i.e., instead of Boolean-valued predicates, it manipulates real-valued functions. En route to the aert calculus, we study the ert calculus for reasoning about expected runtimes of Kaminski et al. [2018] extended by capabilities for handling dynamic memory, thus enabling compositional and local reasoning about randomized data structures. This extension employs runtime separation logic, which has been foreshadowed by Matheja [2020] and then implemented in Isabelle/HOL by Haslbeck [2021]. In addition to Haslbeck’s results, we further prove soundness of the so-extended ert calculus with respect to an operational Markov decision process model featuring countably-branching nondeterminism, provide extensive intuitive explanations, and provide proof rules enabling separation logic-style verification for upper bounds on expected runtimes. Finally, we build the so-called potential method for amortized analysis into the ert calculus, thus obtaining the aert calculus. Soundness of the aert calculus is obtained from the soundness of the ert calculus and some probabilistic form of telescoping. Since one needs to be able to handle changes in potential which can in principle be either positive or negative, the aert calculus needs to be — essentially — capable of handling certain signed random variables. A particularly pleasing feature of our solution is that, unlike e.g. Kozen [1985], we obtain a loop rule for our signed random variables, and furthermore, unlike e.g. Kaminski and Katoen [2017], the aert calculus makes do without the need for involved technical machinery keeping track of the integrability of the random variables. Finally, we present case studies, including a formal analysis of a randomized delete-insert-find-any set data structure [Brodal et al. 1996], which yields a constant expected runtime per operation, whereas no deterministic algorithm can achieve this.
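
    For orientation, the classical potential method that the calculus internalizes (our recap of textbook material, in its deterministic form): with potential $\Phi$, the amortized cost of the $i$-th operation is its actual cost plus the change in potential, and the changes telescope,
    \[
    a_i \;=\; c_i + \Phi(D_i) - \Phi(D_{i-1}),
    \qquad
    \sum_{i=1}^{n} c_i \;=\; \sum_{i=1}^{n} a_i \;+\; \Phi(D_0) - \Phi(D_n) \;\le\; \sum_{i=1}^{n} a_i \quad \text{whenever } \Phi(D_n) \ge \Phi(D_0).
    \]
    The potential differences $\Phi(D_i) - \Phi(D_{i-1})$ are exactly the mixed-sign quantities mentioned above, here arising as signed random variables because the operations are randomized.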

    Quantitative Separation Logic - A Logic for Reasoning about Probabilistic Programs

    We present quantitative separation logic ($\mathsf{QSL}$). In contrast to classical separation logic, $\mathsf{QSL}$ employs quantities which evaluate to real numbers instead of predicates which evaluate to Boolean values. The connectives of classical separation logic, separating conjunction and separating implication, are lifted from predicates to quantities. This extension is conservative: Both connectives are backward compatible to their classical analogs and obey the same laws, e.g. modus ponens, adjointness, etc. Furthermore, we develop a weakest precondition calculus for quantitative reasoning about probabilistic pointer programs in $\mathsf{QSL}$. This calculus is a conservative extension of both Reynolds' separation logic for heap-manipulating programs and Kozen's / McIver and Morgan's weakest preexpectations for probabilistic programs. Soundness is proven with respect to an operational semantics based on Markov decision processes. Our calculus preserves O'Hearn's frame rule, which enables local reasoning. We demonstrate that our calculus enables reasoning about quantities such as the probability of terminating with an empty heap, the probability of reaching a certain array permutation, or the expected length of a list.
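
    The central lifting, as we recall its shape (the paper works with extended-real-valued quantities and correspondingly uses suprema and infima in general): separating conjunction maximizes over heap splits,
    \[
    (f \star g)(s,h) \;=\; \max\,\{\, f(s,h_1)\cdot g(s,h_2) \;\mid\; h = h_1 \star h_2 \,\},
    \]
    and separating implication is characterized as its adjoint: $f \star g \preceq z$ if and only if $g \preceq (f \mathbin{-\!\!\star} z)$. On $\{0,1\}$-valued predicates this collapses to the classical reading: there exists a split of the heap whose parts satisfy the two conjuncts.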