
    Asymmetric distances for approximate differential privacy

    Differential privacy is a widely studied notion of privacy for various models of computation, based on measuring differences between probability distributions. We consider (epsilon,delta)-differential privacy in the setting of labelled Markov chains. For a given epsilon, the parameter delta can be captured by a variant of the total variation distance, which we call lv_{alpha} (where alpha = e^{epsilon}). First, we study lv_{alpha} directly, showing that it cannot be computed exactly. However, the associated approximation problem turns out to be in PSPACE and #P-hard. Next, we introduce a new bisimilarity distance for bounding lv_{alpha} from above, which provides a tighter bound than previously known distances while remaining computable with the same complexity (polynomial time with an NP oracle). We also propose an alternative bound that can be computed in polynomial time. Finally, we illustrate the distances on case studies.
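For finite output distributions, a quantity of the shape described above admits a direct "hockey-stick" formula: the smallest admissible delta against a fixed pair of distributions mu, nu is sum_x max(0, mu(x) - alpha*nu(x)) with alpha = e^epsilon. A minimal Python sketch (the function and variable names are ours, not from the paper):

```python
# Illustrative sketch: an alpha-skewed asymmetric distance between two finite
# distributions, sum over the joint support of max(0, mu(x) - alpha * nu(x)).
def lv_alpha(mu, nu, alpha):
    """mu, nu: dicts mapping outcomes to probabilities; alpha = e^epsilon."""
    support = set(mu) | set(nu)
    return sum(max(0.0, mu.get(x, 0.0) - alpha * nu.get(x, 0.0)) for x in support)

# Randomised response with bias 3/4 on two adjacent inputs:
p = {'yes': 0.75, 'no': 0.25}
q = {'yes': 0.25, 'no': 0.75}
print(lv_alpha(p, q, 3.0))  # 0.0: (ln 3, 0)-differential privacy holds
print(lv_alpha(p, q, 1.0))  # 0.5: alpha = 1 gives a total-variation-style distance
```

At alpha = 1 the two directions coincide in this example; for alpha > 1 the quantity is asymmetric in general, matching the asymmetric role of the two distributions in the DP condition.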

    Generalized bisimulation metrics

    The pseudometric based on the Kantorovich lifting is one of the most popular notions of distance between probabilistic processes proposed in the literature. However, its application in verification is limited to linear properties. We propose a generalization that allows us to deal with a wider class of properties, such as those used in security and privacy. More precisely, we propose a family of pseudometrics, parametrized by a notion of distance that depends on the property we want to verify. Furthermore, we show that the members of this family still characterize bisimilarity in terms of their kernel, and we provide a bound on the corresponding distance between trace distributions. Finally, we study the instance corresponding to differential privacy, and we show that it has a dual form that is easier to compute. We also prove that the typical process-algebra constructs are non-expansive, thus paving the way to a modular approach to verification.
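As a concrete anchor for the lifting: instantiated with the discrete ground metric (distance 1 between distinct states), the Kantorovich lifting collapses to the total variation distance, which for finite distributions has a simple closed form. A small stdlib sketch (our own illustration, not code from the paper):

```python
# The Kantorovich lifting of the discrete metric d(x, y) = [x != y]
# equals total variation distance: (1/2) * sum_x |mu(x) - nu(x)|.
def total_variation(mu, nu):
    support = set(mu) | set(nu)
    return 0.5 * sum(abs(mu.get(x, 0.0) - nu.get(x, 0.0)) for x in support)

mu = {'a': 0.5, 'b': 0.5}
nu = {'a': 0.2, 'b': 0.3, 'c': 0.5}
print(total_variation(mu, nu))  # ≈ 0.5
```

The generalized pseudometrics in the paper replace this ground distance with property-dependent notions, e.g. a multiplicative distance in the differential-privacy instance.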

    Metrics for Differential Privacy in Concurrent Systems

    Part 3: Security Analysis
    Originally proposed for privacy protection in the context of statistical databases, differential privacy is now widely adopted in various models of computation. In this paper we investigate techniques for proving differential privacy in the context of concurrent systems. Our motivation stems from the work of Tschantz et al., who proposed a verification method based on proving the existence of a stratified family of relations between states that can track the privacy leakage, ensuring that it does not exceed a given leakage budget. We improve this technique by investigating a state property which is more permissive and still implies differential privacy. We consider two pseudometrics on probabilistic automata: the first is essentially a reformulation of the notion proposed by Tschantz et al.; the second is a more liberal variant that relaxes the relation between states by integrating the notion of amortisation, which results in a more parsimonious use of the privacy budget. We show that metrical closeness of automata guarantees the preservation of differential privacy, which makes the two metrics suitable for verification. Moreover, we show that process combinators are non-expansive in this pseudometric framework. We apply the framework to reason about the degree of differential privacy of protocols, using the example of the Dining Cryptographers Protocol with biased coins.
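The privacy-oriented pseudometrics here bound leakage by worst-case log-ratios between output distributions: when every pair of distributions an adversary must distinguish lies within distance ε in such a metric, ε-differential privacy follows. A minimal sketch of the underlying multiplicative distance (names and numbers are ours, purely illustrative):

```python
import math

# Multiplicative (DP-style) distance between two finite distributions with
# the same support: the maximum over outcomes of |ln(p(o) / q(o))|.
def multiplicative_distance(p, q):
    return max(abs(math.log(p[o] / q[o])) for o in p)

# A biased coin announcement seen under two adjacent secrets (illustrative):
b = 0.6
p = {'agree': b, 'disagree': 1 - b}
q = {'agree': 1 - b, 'disagree': b}
print(multiplicative_distance(p, q))  # ln(0.6/0.4) ≈ 0.405
```

The amortised variant described in the abstract lets positive and negative log-ratios accumulated along a run compensate each other, rather than charging the budget with their absolute values step by step.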

    On the complexity of verifying differential privacy

    This thesis contributes to the understanding of the computational complexity of verifying differential privacy. The problem is considered in two constrained but expressive models, namely labelled Markov chains and randomised circuits. In the setting of labelled Markov chains (LMCs), it is shown that most relevant decision problems are undecidable when considered directly and exactly. Given an LMC and an ε, consider the problem of finding the least value of δ such that the chain is (ε, δ)-differentially private. This value of δ can be expressed as a variant of the total variation distance. Whilst finding the exact value is not possible, it can be approximated, with a complexity between #P and PSPACE. Instead, bisimilarity distances are studied as over-estimates of δ; these can be computed in polynomial time assuming access to an NP oracle, and a slightly weaker distance can be computed in polynomial time. One may also wish to estimate the minimal value of ε such that the LMC is ε-differentially private. The question of whether such an ε even exists is studied through the big-O problem: does there exist a constant C such that the probability of each word in one system is at most C times its probability in the other? In general this problem is undecidable, but it is decidable on unary chains (where it is coNP-complete). On chains with bounded language (that is, when there exist w_1, …, w_m ∈ Σ* such that all words are of the form w_1^*…w_m^*) the problem is decidable subject to Schanuel's conjecture, by invoking the first-order theory of the reals with the exponential function. The minimal such constant C corresponds exactly to exp(ε), and approximating this value is not possible, even when it is known to exist. A bisimilarity distance over-estimating exp(ε) can be computed in PSPACE.
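The big-O question can be made concrete on unary chains, where the probability of the word a^n is a matrix-power expression and the ratio sequence can at least be inspected. A heuristic illustration only (it does not decide the problem, and all names are ours):

```python
# Probability of the word a^n in a unary labelled Markov chain, modelled as
# init · M^n · final with M a substochastic transition matrix.
def word_prob(u, M, v, n):
    vec = list(u)
    for _ in range(n):
        vec = [sum(vec[i] * M[i][j] for i in range(len(M))) for j in range(len(M))]
    return sum(vec[j] * v[j] for j in range(len(v)))

# Chain 1 halts with probability 1/2 at each step, chain 2 with probability 1/3:
u1, M1, v1 = [1.0], [[0.5]], [0.5]
u2, M2, v2 = [1.0], [[2 / 3]], [1 / 3]
ratios = [word_prob(u1, M1, v1, n) / word_prob(u2, M2, v2, n) for n in range(8)]
print(ratios)  # (3/2) * (3/4)^n: bounded, so a constant C = 3/2 exists
```

Here the ratio sequence is decreasing, so the supremum C = 3/2 is attained at n = 0 and exp(ε) = 3/2; in general a finite prefix of the sequence proves nothing, which is exactly why the decidability results above are non-trivial.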
In the setting of randomised circuits, the complexity of verifying pure differential privacy is fully captured as coNP^#P-complete, formalising the intuition that differential privacy is a universal quantification followed by a condition on probabilities. Verifying approximate differential privacy, however, lies between coNP^#P and coNP^#P^#P, and is coNP^#P-complete when the number of output bits is small (poly-logarithmic) relative to the total size of the circuit. Further, neither parameter can be approximated given the other in polynomial time (assuming P ≠ NP).

    Contextual Behavioural Metrics


    Contextual Behavioural Metrics (Extended Version)

    We introduce contextual behavioural metrics (CBMs) as a novel way of measuring the discrepancy in behaviour between processes, taking into account both quantitative aspects and contextual information. This way, process distances by construction take the environment into account: two (non-equivalent) processes may still exhibit very similar behaviour in some contexts, e.g., when certain actions are never performed. We first show how CBMs capture many well-known notions of equivalence and metric, including Larsen's environment-parametrized bisimulation. We then study compositional properties of CBMs with respect to some common process-algebraic operators, namely prefixing, restriction, non-deterministic sum, parallel composition and replication.
    Comment: Extended version of a paper accepted for publication in proc. CONCUR 202

    Efficient Local Computation of Differential Bisimulations via Coupling and Up-to Methods

    We introduce polynomial couplings, a generalization of probabilistic couplings, to develop an algorithm for the computation of equivalence relations that can be interpreted as a lifting of probabilistic bisimulation to polynomial differential equations, a ubiquitous model of dynamical systems across science and engineering. The algorithm enjoys polynomial time complexity and complements classical partition-refinement approaches because: (a) it implements a local exploration of the system, possibly yielding equivalences that do not require inspecting the whole system of differential equations; (b) it can be enhanced by up-to techniques; and (c) it allows the specification of pairs which ought not to be included in the output. Using a prototype, these advantages are demonstrated on case studies from systems biology, with applications to model reduction and comparison. Notably, we report runtimes four orders of magnitude smaller than those of partition-refinement approaches when disproving equivalences between Markov chains.
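To give a flavour of lifting equivalences to differential equations in the simplest, linear, case (the polynomial couplings of the paper go well beyond this), one can check exact lumpability of x' = Ax: a partition of the variables supports aggregation by summation exactly when column sums within each block agree. A simplified sketch under these assumptions, with names of our own:

```python
# Exact lumpability of the linear system x' = A x: the aggregates
# y_B = sum_{i in B} x_i evolve autonomously iff for every pair of blocks
# (B, C) the column sums sum_{i in B} A[i][j] coincide for all j in C.
def is_lumpable(A, blocks, tol=1e-9):
    for B in blocks:
        for C in blocks:
            sums = [sum(A[i][j] for i in B) for j in C]
            if max(sums) - min(sums) > tol:
                return False
    return True

# x1 and x2 are interchangeable; x3 feeds both at the same rate.
A = [[-1.0,  0.0,  0.5],
     [ 0.0, -1.0,  0.5],
     [ 0.0,  0.0, -1.0]]
print(is_lumpable(A, [[0, 1], [2]]))  # True:  x1 + x2 can be aggregated
print(is_lumpable(A, [[0], [1, 2]]))  # False: x2 and x3 behave differently
```

Checking a single candidate partition like this is local and cheap; the algorithm in the paper instead computes the equivalence itself, via couplings and up-to techniques, without a global partition refinement.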

    The complexity of verifying loop-free programs as differentially private

    We study the problem of verifying differential privacy for loop-free programs with probabilistic choice. Programs in this class can be seen as randomized Boolean circuits, which we use as a formal model to answer two different questions: first, deciding whether a program satisfies a prescribed level of privacy; second, approximating the privacy parameters a program realizes. We show that the problem of deciding whether a program satisfies ε-differential privacy is coNP^#P-complete. In fact, this is the case when either the input domain or the output range of the program is large. Further, we show that deciding whether a program is (ε,δ)-differentially private is coNP^#P-hard, in coNP^#P for small output domains, but always in coNP^#P^#P. Finally, we show that the problem of approximating the level of differential privacy is both NP-hard and coNP-hard. These results complement previous results by Murtagh and Vadhan, who showed that deciding the optimal composition of differentially private components is #P-complete and that approximating this optimal composition is in P.
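The coNP^#P shape of the decision problem (quantify universally over adjacent inputs, then count the random seeds producing each output) can be seen directly by brute force on toy circuits. A sketch with our own names, exponential in the number of random bits and therefore illustrative only:

```python
from itertools import product

# Exact output histogram over all 2^n_random draws of the random bits:
# this enumeration is the #P-flavoured counting step.
def output_counts(circuit, x, n_random):
    counts = {}
    for r in product([0, 1], repeat=n_random):
        o = circuit(x, r)
        counts[o] = counts.get(o, 0) + 1
    return counts

# Universal quantification over adjacent inputs (the coNP-flavoured step):
# check P[M(x) = o] <= alpha * P[M(y) = o] for every output o, where
# alpha = e^eps. Comparing integer counts keeps the check exact.
def is_dp(circuit, adjacent, n_random, alpha):
    for x, y in adjacent:
        cx = output_counts(circuit, x, n_random)
        cy = output_counts(circuit, y, n_random)
        for o in set(cx) | set(cy):
            if cx.get(o, 0) > alpha * cy.get(o, 0):
                return False
    return True

# Randomised response: flip the answer only when both fair coins land 1,
# so the truth is reported with probability 3/4 and eps = ln 3.
def rr(x, r):
    return x ^ (r[0] & r[1])

adjacent = [(0, 1), (1, 0)]
print(is_dp(rr, adjacent, 2, 3))   # True:  (ln 3)-differential privacy holds
print(is_dp(rr, adjacent, 2, 2))   # False: (ln 2)-differential privacy fails
```

The hardness results above say that, in the worst case, one cannot do fundamentally better than this kind of counting, while the approximation hardness rules out efficiently estimating ε or δ given the other parameter.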