
    An expectation transformer approach to predicate abstraction and data independence for probabilistic programs

    In this paper we revisit the well-known technique of predicate abstraction to characterise performance attributes of system models incorporating probability. We recast the theory using expectation transformers, and identify transformer properties which correspond to abstractions that nevertheless yield exact bounds on the performance of infinite-state probabilistic systems. In addition, we extend the developed technique to the special case of "data independent" programs incorporating probability. Finally, we demonstrate the subtlety of the extended technique by using the PRISM model checking tool to analyse an infinite-state protocol, obtaining exact bounds on its performance.
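
    To make the expectation-transformer idea concrete, here is a minimal Python sketch (our own illustration, not the paper's formalism or tooling): expectations are modelled as functions from program states to non-negative reals, and each command denotes a transformer taking a post-expectation to a pre-expectation.

```python
# A minimal expectation-transformer sketch: expectations are functions from
# program states (dicts) to non-negative reals, and each command denotes a
# transformer mapping a post-expectation to a pre-expectation (its wp).

def assign(var, expr):
    """wp of 'var := expr': substitute expr into the post-expectation."""
    return lambda post: (lambda s: post({**s, var: expr(s)}))

def prob_choice(p, left, right):
    """wp of 'left [p] right': the p-weighted average of the two branches."""
    return lambda post: (lambda s: p * left(post)(s) + (1 - p) * right(post)(s))

def seq(first, second):
    """wp of 'first; second': compose the transformers."""
    return lambda post: first(second(post))

# Example program:  x := 0 ; ( x := x + 2  [1/2]  x := x + 1 )
prog = seq(assign("x", lambda s: 0),
           prob_choice(0.5,
                       assign("x", lambda s: s["x"] + 2),
                       assign("x", lambda s: s["x"] + 1)))

post = lambda s: s["x"]                  # post-expectation: the final value of x
print(prog(post)({"x": 99}))             # pre-expectation = 0.5*2 + 0.5*1 = 1.5
```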

    Probabilistic Rely-guarantee Calculus

    Jones' rely-guarantee calculus for shared-variable concurrency is extended to include probabilistic behaviours. We use an algebraic approach which combines and adapts probabilistic Kleene algebras with concurrent Kleene algebra. Soundness of the algebra is shown relative to a general probabilistic event-structure semantics. The main contribution of this paper is a collection of rely-guarantee rules built on top of that semantics. In particular, we show how to obtain bounds on probabilities by deriving rely-guarantee rules within the true-concurrent denotational semantics. The use of these rules is illustrated by a detailed verification of a simple probabilistic concurrent program: a faulty Eratosthenes sieve. (Comment: preprint submitted to TCS-QAP.)
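
    As a rough, purely illustrative picture of the case study (the paper verifies a concurrent program algebraically; the sketch below is our own sequential Monte-Carlo model, with invented parameter names such as fail_prob):

```python
import random

def faulty_sieve(n, fail_prob, rng):
    """Sieve of Eratosthenes in which each individual 'cross out a multiple'
    step is skipped with probability fail_prob (a toy model of a faulty step)."""
    marked = [False] * (n + 1)
    for i in range(2, int(n ** 0.5) + 1):
        if not marked[i]:
            for multiple in range(i * i, n + 1, i):
                if rng.random() >= fail_prob:       # this step may silently fail
                    marked[multiple] = True
    return {k for k in range(2, n + 1) if not marked[k]}

def primes(n):
    return {k for k in range(2, n + 1)
            if all(k % d for d in range(2, int(k ** 0.5) + 1))}

# Monte-Carlo estimate of the probability that the faulty sieve is still correct.
rng, n, fail_prob, trials = random.Random(0), 50, 0.01, 10_000
expected = primes(n)
correct = sum(faulty_sieve(n, fail_prob, rng) == expected for _ in range(trials))
print(f"estimated P(correct output) = {correct / trials:.3f}")
```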

    Abstract Hidden Markov Models: a monadic account of quantitative information flow

    Hidden Markov Models, HMM's, are mathematical models of Markov processes with state that is hidden, but from which information can leak. They are typically represented as 3-way joint-probability distributions. We use HMM's as denotations of probabilistic hidden-state sequential programs: for that, we recast them as `abstract' HMM's, computations in the Giry monad $\mathbb{D}$, and we equip them with a partial order of increasing security. However, to encode the monadic type with hiding over some state $\mathcal{X}$ we use $\mathbb{D}\mathcal{X}\to\mathbb{D}^2\mathcal{X}$ rather than the conventional $\mathcal{X}\to\mathbb{D}\mathcal{X}$ that suffices for Markov models whose state is not hidden. We illustrate the $\mathbb{D}\mathcal{X}\to\mathbb{D}^2\mathcal{X}$ construction with a small Haskell prototype. We then present uncertainty measures as a generalisation of the extant diversity of probabilistic entropies, with characteristic analytic properties for them, and show how the new entropies interact with the order of increasing security. Furthermore, we give a `backwards' uncertainty-transformer semantics for HMM's that is dual to the `forwards' abstract HMM's: it is an analogue of the duality between forwards, relational semantics and backwards, predicate-transformer semantics for imperative programs with demonic choice. Finally, we argue that, from this new denotational-semantic viewpoint, one can see that the Dalenius desideratum for statistical databases is actually an issue in compositionality. We propose a means for taking it into account.
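
    The abstract mentions a small Haskell prototype; purely as our own illustration of the $\mathbb{D}\mathcal{X}\to\mathbb{D}^2\mathcal{X}$ idea (not that prototype), the following Python sketch pushes a prior through a channel and keeps only the resulting hyper-distribution, i.e. a distribution over the observer's posteriors.

```python
from collections import defaultdict

# Distributions are plain dicts {value: probability}.

def push_channel(prior, channel):
    """Given a prior over hidden states and a channel  state -> {obs: prob},
    return the hyper-distribution {posterior (as frozen items): probability}.
    This is a toy rendition of the D X -> D^2 X idea: only the posterior
    induced by each observation is kept, never the hidden state itself."""
    joint = defaultdict(float)                   # joint[(obs, state)]
    for state, p_state in prior.items():
        for obs, p_obs in channel[state].items():
            joint[(obs, state)] += p_state * p_obs

    marginal = defaultdict(float)                # probability of each observation
    for (obs, _), p in joint.items():
        marginal[obs] += p

    hyper = defaultdict(float)
    for obs, p_obs in marginal.items():
        posterior = {s: joint[(obs, s)] / p_obs for s in prior if joint[(obs, s)] > 0}
        hyper[frozenset(posterior.items())] += p_obs
    return dict(hyper)

# Example: a secret bit observed through a channel that reports it correctly 3/4 of the time.
prior = {0: 0.5, 1: 0.5}
channel = {0: {"low": 0.75, "high": 0.25},
           1: {"low": 0.25, "high": 0.75}}
for posterior, p in push_channel(prior, channel).items():
    print(f"with probability {p:.2f} the observer's posterior is {dict(posterior)}")
```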

    A New Proof Rule for Almost-Sure Termination

    An important question for a probabilistic program is whether the probability mass of all its diverging runs is zero, that is, whether it terminates "almost surely". Proving that can be hard, and this paper presents a new method for doing so; it is expressed in a program logic, and so applies directly to source code. The programs may contain both probabilistic and demonic choice, and the probabilistic choices may depend on the current state. As do other researchers, we use variant functions (a.k.a. "super-martingales") that are real-valued and probabilistically might decrease on each loop iteration; but our key innovation is that the amount as well as the probability of the decrease are parametric. We prove the soundness of the new rule, indicate where its applicability goes beyond existing rules, and explain its connection to classical results on denumerable (non-demonic) Markov chains. (Comment: V1 to appear in POPL 2018. This version collects some existing text into new example subsection 5.5, adds a new example 5.6, and makes further remarks about uncountable branching. The new example 5.6 relates to work on lexicographic termination methods, also to appear in POPL 2018 [Agrawal et al., 2018].)
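
    As a purely illustrative companion (our own toy, not the paper's rule or examples), the sketch below runs the symmetric random walk loop, a standard example in this area: the variant V(x) = x decreases by at least 1 with probability at least 1/2 on each iteration, and a parametric rule of the kind described above would also allow that amount and that probability to vary with the variant's current value.

```python
import random

# Loop under study:   while x > 0:  x := x - 1  [1/2]  x := x + 1
# Variant V(x) = x decreases by at least d = 1 with probability p = 1/2 per
# iteration; the walk terminates almost surely, yet no bound on running time holds.

def run_loop(x, rng, cap=1_000_000):
    """Execute the loop from x; return the iteration count, or None if 'cap' is hit."""
    steps = 0
    while x > 0:
        x += -1 if rng.random() < 0.5 else 1
        steps += 1
        if steps == cap:
            return None        # almost-sure termination does not bound any run's length
    return steps

rng = random.Random(1)
runs = [run_loop(3, rng) for _ in range(500)]
finished = [r for r in runs if r is not None]
print(f"terminated within cap: {len(finished)}/{len(runs)}, "
      f"median iterations: {sorted(finished)[len(finished) // 2]}")
```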

    Hidden-Markov Program Algebra with iteration

    We use Hidden Markov Models to motivate a quantitative compositional semantics for noninterference-based security with iteration, including a refinement- or "implements" relation that compares two programs with respect to their information leakage; and we propose a program algebra for source-level reasoning about such programs, in particular as a means of establishing that an "implementation" program leaks no more than its "specification" program. This joins two themes: we extend our earlier work, which had iteration but was only qualitative, by making it quantitative; and we extend our earlier quantitative work by including iteration. We advocate stepwise refinement and source-level program algebra, both as conceptual reasoning tools and as targets for automated assistance. A selection of algebraic laws is given to support this view in the case of quantitative noninterference; and it is demonstrated on a simple iterated password-guessing attack.
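
    The password-guessing setting can be pictured quantitatively; the sketch below is our own minimal calculation (not the paper's algebra) of Bayes vulnerability before and after one observed guess, using an invented uniform prior over four passwords.

```python
from fractions import Fraction

# One round of password guessing: the adversary tries a single guess and only
# observes whether it succeeded.  Leakage is measured here with Bayes
# vulnerability, a standard measure in quantitative noninterference.

def bayes_vulnerability(prior):
    """Probability of guessing the secret in one try, given only the prior."""
    return max(prior.values())

def posterior_vulnerability(prior, guess):
    """Expected guessing probability after observing whether 'guess' succeeded."""
    p_hit = prior.get(guess, Fraction(0))
    p_miss = 1 - p_hit
    if p_miss == 0:
        return Fraction(1)
    # If the guess succeeds the secret is known; otherwise renormalise the rest.
    posterior_miss = {pw: p / p_miss for pw, p in prior.items() if pw != guess}
    return p_hit * 1 + p_miss * max(posterior_miss.values())

prior = {pw: Fraction(1, 4) for pw in ["1234", "0000", "qwerty", "hunter2"]}
print("prior vulnerability:    ", bayes_vulnerability(prior))               # 1/4
print("posterior vulnerability:", posterior_vulnerability(prior, "1234"))   # 1/2
```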

    Directional Privacy for Deep Learning

    Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method for applying privacy in the training of deep learning models. It applies isotropic Gaussian noise to gradients during training, which can perturb these gradients in any direction, damaging utility. Metric DP, however, can provide alternative mechanisms, based on arbitrary metrics, that might be more suitable. In this paper we apply directional privacy, via a mechanism based on the von Mises-Fisher (VMF) distribution, to perturb gradients in terms of angular distance, so that gradient direction is broadly preserved. We show that this provides $\epsilon d$-privacy for deep learning training, rather than the $(\epsilon, \delta)$-privacy of the Gaussian mechanism; and that experimentally, on key datasets, the VMF mechanism can outperform the Gaussian in the utility-privacy trade-off.
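
    A rough sketch of the contrast on a single clipped gradient, assuming SciPy >= 1.11 (which provides scipy.stats.vonmises_fisher); this is our own illustration, not the paper's mechanism, and the parameter names and values (clip_norm, sigma, kappa) are invented and not calibrated to any privacy budget.

```python
import numpy as np
from scipy.stats import vonmises_fisher   # available in SciPy 1.11 and later

rng = np.random.default_rng(0)

# A single gradient, clipped as in standard DP-SGD.
grad = rng.normal(size=8)
clip_norm = 1.0
grad *= min(1.0, clip_norm / np.linalg.norm(grad))

# (1) Gaussian mechanism: isotropic noise added to the clipped gradient.
sigma = 0.8
gauss_grad = grad + rng.normal(scale=sigma * clip_norm, size=grad.shape)

# (2) Directional perturbation: resample the *direction* from a von Mises-Fisher
#     distribution centred on the true direction, keeping the magnitude.
kappa = 20.0                                # larger kappa = tighter angular spread
direction = grad / np.linalg.norm(grad)
sample = vonmises_fisher(mu=direction, kappa=kappa).rvs(random_state=rng)
vmf_grad = np.linalg.norm(grad) * np.atleast_2d(sample)[0]   # handle either rvs shape

def angle_deg(u, v):
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(f"angle to true gradient -- Gaussian: {angle_deg(grad, gauss_grad):5.1f} deg, "
      f"VMF: {angle_deg(grad, vmf_grad):5.1f} deg")
```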