A Weakest Pre-Expectation Semantics for Mixed-Sign Expectations
We present a weakest-precondition-style calculus for reasoning about the
expected values (pre-expectations) of \emph{mixed-sign unbounded} random
variables after execution of a probabilistic program. The semantics of a
while-loop is well-defined as the limit of iteratively applying a functional to
a zero-element just as in the traditional weakest pre-expectation calculus,
even though a standard least fixed point argument is not applicable in this
context. A striking feature of our semantics is that it is always well-defined,
even if the expected values do not exist. We show that the calculus is sound,
allows for compositional reasoning, and present an invariant-based approach for
reasoning about pre-expectations of loops
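The loop semantics sketched above, iterating a characteristic functional starting from a zero element, can be made concrete. The following Python sketch is illustrative only (the program, the post-expectation, and the name `phi_n` are assumptions, not taken from the paper): it approximates the pre-expectation of `x` for the loop `while c == 1: { x := x + 1; c := flip(1/2) }` by n-fold application of the functional to zero.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def phi_n(n, c, x):
    # Phi^n(0)(c, x): n-fold application of the loop's characteristic
    # functional to the zero expectation, for the (hypothetical) loop
    #   while c == 1: { x := x + 1; c := flip(1/2) }
    # with post-expectation f(c, x) = x.
    if n == 0:
        return 0.0                      # the zero element
    if c == 0:                          # guard false: loop has terminated
        return float(x)                 # yield the post-expectation f = x
    # one body step, then average over the coin flip assigning c
    return 0.5 * phi_n(n - 1, 0, x + 1) + 0.5 * phi_n(n - 1, 1, x + 1)

print(phi_n(60, 1, 0))                  # approx 2.0, the expected final x
```

From the initial state `c = 1, x = 0` the number of loop iterations is geometric with mean 2, so the iterates converge to 2 from below, mirroring the limit construction described in the abstract.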
How long, O Bayesian network, will I sample thee? A program analysis perspective on expected sampling times
Bayesian networks (BNs) are probabilistic graphical models for describing
complex joint probability distributions. The main problem for BNs is inference:
Determine the probability of an event given observed evidence. Since exact
inference is often infeasible for large BNs, popular approximate inference
methods rely on sampling.
We study the problem of determining the expected time to obtain a single
valid sample from a BN. To this end, we translate the BN together with
observations into a probabilistic program. We provide proof rules that yield
the exact expected runtime of this program in a fully automated fashion. We
implemented our approach and successfully analyzed various real-world BNs taken
from the Bayesian network repository
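The cost driver behind expected sampling times is easy to see in miniature: under rejection sampling, the number of attempts until the evidence holds is geometrically distributed, so its mean is 1/P(evidence). A minimal sketch, assuming a hypothetical two-node network (Rain, Wet) and evidence Wet = 1; this is not the paper's translation or proof-rule machinery:

```python
import random

# Hypothetical BN: Rain ~ Bern(0.2); P(Wet=1 | Rain=1) = 0.9,
# P(Wet=1 | Rain=0) = 0.1. Evidence: Wet = 1.
P_RAIN = 0.2
P_WET = {True: 0.9, False: 0.1}

# Exact expected number of attempts = 1 / P(evidence)
p_evidence = P_RAIN * P_WET[True] + (1 - P_RAIN) * P_WET[False]
print(1 / p_evidence)                 # approx 3.846

def sample_until_evidence(rng):
    """Rejection sampling: retry until Wet = 1; return attempts used."""
    attempts = 0
    while True:
        attempts += 1
        rain = rng.random() < P_RAIN
        wet = rng.random() < P_WET[rain]
        if wet:                       # evidence observed: sample is valid
            return attempts

rng = random.Random(0)
trials = 100_000
avg = sum(sample_until_evidence(rng) for _ in range(trials)) / trials
print(avg)                            # Monte-Carlo estimate, close to 1/p_evidence
```

The exact value 1/0.26 is what an automated expected-runtime analysis would report for this toy program; the simulation is just a sanity check.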
Relatively Complete Verification of Probabilistic Programs: An Expressive Language for Expectation-Based Reasoning
We study a syntax for specifying quantitative "assertions" - functions mapping program states to numbers - for probabilistic program verification. We prove that our syntax is expressive in the following sense: Given any probabilistic program C, if a function f is expressible in our syntax, then the function mapping each initial state σ to the expected value of f evaluated in the final states reached after termination of C on σ (also called the weakest preexpectation wp[C](f)) is also expressible in our syntax. As a consequence, we obtain a relatively complete verification system for verifying expected values and probabilities in the sense of Cook: Apart from a single reasoning step about the inequality of two functions given as syntactic expressions in our language, given f, g, and C, we can check whether g ≤ wp[C](f)
Lower Bounds for Possibly Divergent Probabilistic Programs
We present a new proof rule for verifying lower bounds on quantities of probabilistic programs. Our proof rule is not confined to almost-surely terminating programs -- as is the case for existing rules -- and can be used to establish non-trivial lower bounds on, e.g., termination probabilities and expected values, for possibly divergent probabilistic loops, e.g., the well-known three-dimensional random walk on a lattice
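The three-dimensional random walk illustrates why lower bounds for divergent loops are non-trivial: by Pólya's theorem the walk returns to the origin with probability about 0.34 < 1, so the loop diverges with positive probability. The sketch below is a plain truncated simulation (a hypothetical illustration, not the paper's proof rule); each run truncated at a step budget can only under-count returns, so the estimate targets a lower portion of the true return probability, up to statistical error.

```python
import random

def returns_within(rng, max_steps=500):
    """One 3D simple random walk; True if it revisits the origin
    within max_steps steps (truncation can only miss late returns)."""
    pos = [0, 0, 0]
    for _ in range(max_steps):
        axis = rng.randrange(3)           # pick a coordinate uniformly
        pos[axis] += rng.choice((-1, 1))  # step +1 or -1 along it
        if pos == [0, 0, 0]:
            return True
    return False

rng = random.Random(42)
trials = 10_000
est = sum(returns_within(rng) for _ in range(trials)) / trials
print(est)   # roughly 0.3, consistent with the Polya constant ~ 0.34
```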
A Pre-expectation Calculus for Probabilistic Sensitivity
Sensitivity properties describe how changes to the input of a program affect the output, typically by upper bounding the distance between the outputs of two runs by a monotone function of the distance between the corresponding inputs. When programs are probabilistic, the distance between outputs is a distance between distributions. The Kantorovich lifting provides a general way of defining a distance between distributions by lifting the distance of the underlying sample space; by choosing an appropriate distance on the base space, one can recover other usual probabilistic distances, such as the Total Variation distance. We develop a relational pre-expectation calculus to upper bound the Kantorovich distance between two executions of a probabilistic program. We illustrate our methods by proving algorithmic stability of a machine learning algorithm, convergence of a reinforcement learning algorithm, and fast mixing for card shuffling algorithms. We also consider some extensions: using our calculus to show convergence of Markov chains to the uniform distribution over states and an asynchronous extension to reason about pairs of program executions with different control flow
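The special case mentioned above, that the Kantorovich lifting of the discrete metric recovers the total-variation distance, is easy to compute directly for finite distributions. A small sketch (the dictionary encoding and the example distributions are illustrative assumptions):

```python
def tv_distance(mu, nu):
    """Total-variation distance between finite distributions given as
    dicts mapping outcomes to probabilities. For the discrete metric
    d(x, y) = [x != y], the Kantorovich lifting of d equals
    TV(mu, nu) = (1/2) * sum over x of |mu(x) - nu(x)|."""
    support = set(mu) | set(nu)
    return 0.5 * sum(abs(mu.get(x, 0.0) - nu.get(x, 0.0)) for x in support)

# Output distributions of two biased coins
mu = {"heads": 0.5, "tails": 0.5}
nu = {"heads": 0.7, "tails": 0.3}
print(tv_distance(mu, nu))   # approx 0.2
```

For richer base distances the lifting requires solving an optimal-transport problem over couplings; the relational calculus in the paper bounds that quantity without computing it explicitly.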
On continuation-passing transformations and expected cost analysis
We define a continuation-passing style (CPS) translation for a typed λ-calculus with probabilistic choice, unbounded recursion, and a tick operator - for modeling cost. The target language is a (non-probabilistic) λ-calculus, enriched with a type of extended positive reals and a fixpoint operator. We then show that applying the CPS transform of an expression M to the continuation λv. 0 yields the expected cost of M. We also introduce a formal system for higher-order logic, called EHOL, prove it sound, and show it can derive tight upper bounds on the expected cost of classic examples, including Coupon Collector and Random Walk. Moreover, we relate our translation to Kaminski et al.'s ert-calculus, showing that the latter can be recovered by applying our CPS translation to (a generalization of) the classic embedding of imperative programs into λ-calculus. Finally, we prove that the CPS transform of an expression can also be used to compute pre-expectations and to reason about almost sure termination
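The CPS reading of expected cost can be sketched in Python: a CPS term takes a continuation k mapping a result value to the expected cost still to come, and returns a total expected cost. The combinators `ret`, `tick`, and `choice` below are an illustrative encoding under these assumptions, not the paper's typed calculus; unbounded recursion is approximated from below, as with a fixpoint.

```python
def ret(v):
    return lambda k: k(v)               # pure value: no cost of its own

def tick(m):
    return lambda k: 1 + m(k)           # one unit of cost, then run m

def choice(p, m1, m2):
    return lambda k: p * m1(k) + (1 - p) * m2(k)  # average the branches

def geometric_cost(n):
    """n-th lower approximation of 'tick; flip; stop on heads, else repeat'."""
    if n == 0:
        return lambda k: 0.0            # bottom approximation of the recursion
    return tick(choice(0.5, ret("heads"),
                       lambda k: geometric_cost(n - 1)(k)))

zero = lambda v: 0.0                    # the continuation  lambda v. 0
print(geometric_cost(50)(zero))         # approx 2.0 expected ticks
```

Applying the term to the zero continuation discards the result value and leaves only the accumulated expected cost, matching the abstract's central claim on this toy example.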
Quantum Expectation Transformers for Cost Analysis
We introduce a new kind of expectation transformer for a mixed classical-quantum programming language. Our semantic approach relies on a new notion of a cost structure, which we introduce and which can be seen as a specialisation of the Kegelspitzen of Keimel and Plotkin. We show that our weakest precondition analysis is both sound and adequate with respect to the operational semantics of the language. Using the induced expectation transformer, we provide formal analysis methods for the expected cost analysis and expected value analysis of classical-quantum programs. We illustrate the usefulness of our techniques by computing the expected cost of several well-known quantum algorithms and protocols, such as coin tossing, repeat until success, entangled state preparation, and quantum walks
Quantitative Strongest Post: A Calculus for Reasoning about the Flow of Quantitative Information
We present a novel strongest-postcondition-style calculus for quantitative
reasoning about non-deterministic programs with loops. Whereas existing
quantitative weakest pre allows reasoning about the value of a quantity after a
program terminates on a given initial state, quantitative strongest post allows
reasoning about the value that a quantity had before the program was executed
and reached a given final state. We show how strongest post enables reasoning
about the flow of quantitative information through programs. Similarly to
weakest liberal preconditions, we also develop a quantitative strongest liberal
post. As a byproduct, we obtain the entirely unexplored notion of strongest
liberal postconditions and show how these foreshadow a potential new program
logic - partial incorrectness logic - which would be a more liberal version of
O'Hearn's recent incorrectness logic
- …