
    Formal Verification of Differential Privacy for Interactive Systems

    Differential privacy is a promising approach to privacy-preserving data analysis, with a well-developed theory for functions. Despite recent work on implementing systems that aim to provide differential privacy, the problem of formally verifying that these systems have differential privacy has not been adequately addressed. This paper presents the first results towards automated verification of source code for differentially private interactive systems. We develop a formal probabilistic automaton model of differential privacy for systems by adapting prior work on differential privacy for functions. The main technical result of the paper is a sound proof technique, based on a form of probabilistic bisimulation relation, for proving that a system modeled as a probabilistic automaton satisfies differential privacy. The novelty lies in the way we track quantitative privacy leakage bounds using a relation family instead of a single relation. We illustrate the proof technique on a representative automaton motivated by PINQ, an implemented system that is intended to provide differential privacy. To make our proof technique easier to apply to realistic systems, we prove a form of refinement theorem and apply it to show that a refinement of the abstract PINQ automaton also satisfies our differential privacy definition. Finally, we begin the process of automating our proof technique by providing an algorithm for mechanically checking a restricted class of relations from the proof technique. (65 pages, 1 figure.)
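
    As a point of reference for the quantitative bound being tracked, the sketch below (not taken from the paper) checks the pointwise e^ε ratio condition that ε-differential privacy imposes on two discrete output distributions; the distributions and the value of ε are made-up toy numbers.

```python
import math

def within_eps(p, q, eps):
    """Check the pointwise e^eps ratio bound that epsilon-differential privacy
    imposes on two discrete output distributions (dicts: outcome -> probability).
    This is the kind of quantitative leakage condition a relation family would
    have to certify at each pair of related states."""
    for outcome in set(p) | set(q):
        a, b = p.get(outcome, 0.0), q.get(outcome, 0.0)
        # Both directions must hold: a <= e^eps * b and b <= e^eps * a.
        if a > math.exp(eps) * b or b > math.exp(eps) * a:
            return False
    return True

# Two toy output distributions on a small support, standing in for the observable
# behaviour of the system on two adjacent (differing-in-one-record) databases.
p = {0: 0.40, 1: 0.35, 2: 0.25}
q = {0: 0.35, 1: 0.37, 2: 0.28}
print(within_eps(p, q, eps=0.2))   # True for these toy numbers
```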

    Modular Reasoning about Differential Privacy in a Probabilistic Process Calculus

    The verification of systems for protecting sensitive and confidential information is becoming an increasingly important issue. Differential privacy is a promising notion of privacy that originated in the statistical databases community and is now widely adopted in various models of computation. We consider a probabilistic process calculus as a specification formalism for concurrent systems, and we propose a framework for reasoning about the degree of differential privacy provided by such systems. In particular, we investigate the preservation of the degree of privacy under composition via the various operators. We illustrate our idea by proving an anonymity-preservation property for a variant of the Crowds protocol for which the standard analyses from the literature are inapplicable. Finally, we make some preliminary steps towards automatically computing the degree of privacy of a system in a compositional way.
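
    For readers unfamiliar with compositional budget accounting, the sketch below shows the two classic composition rules from the differential-privacy literature (sequential and parallel composition). It is only an illustration of the kind of degree-of-privacy bookkeeping involved, not the paper's operator-specific preservation results.

```python
def compose_sequential(epsilons):
    """Classic sequential composition: mechanisms with budgets eps_1, ..., eps_n
    run on the same data are jointly (eps_1 + ... + eps_n)-differentially private."""
    return sum(epsilons)

def compose_parallel(epsilons):
    """Classic parallel composition: mechanisms run on disjoint parts of the
    data are jointly (max_i eps_i)-differentially private."""
    return max(epsilons)

# Budget accounting for a small pipeline of three components (toy numbers).
budgets = [0.1, 0.3, 0.5]
print(compose_sequential(budgets))  # total budget 0.9 on shared data
print(compose_parallel(budgets))    # total budget 0.5 on disjoint data
```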

    Metrics for Differential Privacy in Concurrent Systems

    Originally proposed for privacy protection in the context of statistical databases, differential privacy is now widely adopted in various models of computation. In this paper we investigate techniques for proving differential privacy in the context of concurrent systems. Our motivation stems from the work of Tschantz et al., who proposed a verification method based on proving the existence of a stratified family of relations between states that can track the privacy leakage, ensuring that it does not exceed a given leakage budget. We improve this technique by investigating a state property which is more permissive and still implies differential privacy. We consider two pseudometrics on probabilistic automata: the first is essentially a reformulation of the notion proposed by Tschantz et al.; the second is a more liberal variant that integrates the notion of amortisation, resulting in a more parsimonious use of the privacy budget. We show that metrical closeness of automata guarantees the preservation of differential privacy, which makes the two metrics suitable for verification. Moreover, we show that process combinators are non-expansive in this pseudometric framework. We apply the pseudometric framework to reason about the degree of differential privacy of protocols, using the Dining Cryptographers protocol with biased coins as an example.
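
    The sketch below (an illustration under our own assumptions, not the paper's construction) computes the max-log-ratio distance between two discrete distributions, the base "multiplicative" notion that differential-privacy pseudometrics on automata typically lift to states; the biased and fair coins echo the Dining Cryptographers example.

```python
import math

def multiplicative_distance(p, q):
    """Max-log-ratio ('multiplicative') distance between two distributions:
    d(p, q) = max over outcomes o of |ln p(o) - ln q(o)|.
    This is only the base notion on distributions, not the lifting to automata."""
    d = 0.0
    for o in set(p) | set(q):
        a, b = p.get(o, 0.0), q.get(o, 0.0)
        if (a == 0.0) != (b == 0.0):
            return math.inf          # one assigns zero probability, the other does not
        if a > 0.0:
            d = max(d, abs(math.log(a) - math.log(b)))
    return d

p = {"heads": 0.6, "tails": 0.4}   # biased coin
q = {"heads": 0.5, "tails": 0.5}   # fair coin
print(multiplicative_distance(p, q))   # ~0.223
```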

    Statistical verification and differential privacy in cyber-physical systems

    This thesis studies statistical verification and differential privacy in cyber-physical systems. The first part focuses on the statistical verification of stochastic hybrid systems, a class of formal models for cyber-physical systems. Model reduction techniques are applied to both discrete-time and continuous-time stochastic hybrid systems to reduce them to discrete-time and continuous-time Markov chains, respectively, and statistical verification algorithms are proposed to verify Linear Inequality LTL and Metric Interval Temporal Logic on these discrete probabilistic models. In addition, the advantage of stratified sampling in verifying Probabilistic Computation Tree Logic on labeled discrete-time Markov chains is studied; this method can potentially be extended to other statistical verification algorithms to reduce computational costs. The second part focuses on differential privacy in multi-agent systems that share information to achieve overall control goals. A general formulation of such systems and a notion of differential privacy are proposed, and a trade-off between differential privacy and the tracking performance of the systems is demonstrated. In addition, it is proved that there is a trade-off between differential privacy and the entropy of the unbiased estimator of the private data, and an optimal algorithm achieving the best trade-off is given.
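
    To make the privacy/performance trade-off concrete, the sketch below applies the generic Laplace mechanism to a value before it is shared: a smaller ε gives stronger privacy but noisier shared information, which degrades tracking. The function name, the sensitivity value, and ε are illustrative assumptions, not the thesis's specific mechanism.

```python
import random

def privatize_shared_value(x, sensitivity, eps):
    """Generic Laplace mechanism: share x + Lap(sensitivity / eps) instead of x.
    Smaller eps means stronger privacy but a noisier shared value."""
    scale = sensitivity / eps
    # The difference of two Exp(1/scale) samples is Laplace(0, scale) noise.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return x + noise

# Each agent perturbs its state before broadcasting it to the others.
true_state = 3.2
print(privatize_shared_value(true_state, sensitivity=1.0, eps=0.5))
```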

    An Accuracy-Assured Privacy-Preserving Recommender System for Internet Commerce

    Recommender systems, tools for predicting users' potential preferences from historical data and users' interests, are of increasing importance in various Internet applications such as online shopping. As a well-known recommendation method, neighbourhood-based collaborative filtering has attracted considerable attention recently. The risk of revealing users' private information during the filtering process has attracted noticeable research interest. Among the current solutions, probabilistic techniques have shown a powerful privacy-preserving effect. However, when facing the k Nearest Neighbour (kNN) attack, all the existing methods provide no data utility guarantee, owing to the introduction of global randomness. In this paper, to overcome the problem of recommendation accuracy loss, we propose a novel approach, Partitioned Probabilistic Neighbour Selection, to ensure a required prediction accuracy while maintaining high security against the kNN attack. We define the sum of the k neighbours' similarities as the accuracy metric α, and the number of user partitions across which we select the k neighbours as the security metric β. We generalise the k Nearest Neighbour attack to the βk Nearest Neighbours attack. Differing from the existing approach, which selects neighbours randomly across the entire candidate list, our method selects neighbours from each exclusive partition of size k with a decreasing probability. Theoretical and experimental analysis shows that, to provide an accuracy-assured recommendation, our Partitioned Probabilistic Neighbour Selection method yields a better trade-off between recommendation accuracy and system security. (Replacement for the previous version.)
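
    The sketch below is one plausible reading of the partition-then-sample idea: rank candidates by similarity, cut the ranking into exclusive partitions of size k, and draw neighbours from partitions with geometrically decreasing probability. The decay parameter and the exact sampling rule are assumptions made for illustration, not the paper's precise procedure.

```python
import random

def partitioned_neighbour_selection(candidates, similarities, k, decay=0.5):
    """Pick k neighbours: rank candidates by similarity, split the ranking into
    exclusive partitions of size k, and repeatedly choose a partition with a
    geometrically decreasing weight before sampling a candidate from it."""
    ranked = sorted(candidates, key=lambda c: similarities[c], reverse=True)
    partitions = [ranked[i:i + k] for i in range(0, len(ranked), k)]
    neighbours = []
    while len(neighbours) < k:
        # Empty partitions get zero weight so they can no longer be chosen.
        weights = [decay ** j if part else 0.0 for j, part in enumerate(partitions)]
        part = random.choices(partitions, weights=weights, k=1)[0]
        pick = random.choice(part)
        part.remove(pick)
        neighbours.append(pick)
    return neighbours

# Toy candidate list: 20 users with slowly decreasing similarities.
sims = {f"u{i}": 1.0 - 0.01 * i for i in range(20)}
print(partitioned_neighbour_selection(list(sims), sims, k=5))
```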

    Differentially Private Neighborhood-based Recommender Systems

    Privacy issues of recommender systems have become a hot topic for society, as such systems are appearing in every corner of our lives. While many secure multi-party computation protocols have been proposed to prevent information leakage during the computation of recommendations, very little has been done to restrict the information leakage from the recommendation results themselves. In this paper, we apply the differential privacy concept to neighborhood-based recommendation methods (NBMs) under a probabilistic framework. We first present a solution that directly calibrates Laplace noise into the training process to find the maximum a posteriori similarity parameters in a differentially private manner. We then connect differential privacy to NBMs by exploiting a recent observation that sampling from the scaled posterior distribution of a Bayesian model results in provably differentially private systems. Our experiments show that both solutions allow promising accuracy with a modest privacy budget, and the second solution yields better accuracy if the sampling asymptotically converges. We also compare our solutions to recent differentially private matrix factorization (MF) recommender systems and show that our solutions achieve better accuracy when the privacy budget is reasonably small. This is an interesting result, because MF systems often offer better accuracy when differential privacy is not applied.
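
    As a rough illustration of the first solution's main ingredient, the sketch below perturbs learned similarities with Laplace noise of scale sensitivity/ε before forming the usual similarity-weighted prediction. The sensitivity value, ε, and the toy data are assumptions, and the paper's actual MAP training and sensitivity analysis are not reproduced here.

```python
import random

def laplace(scale):
    """Sample Laplace(0, scale) noise as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_prediction(ratings, sims, sensitivity, eps):
    """Neighbourhood-based prediction from Laplace-perturbed similarities:
    each learned similarity gets noise of scale sensitivity/eps before the
    usual similarity-weighted average over the neighbours is formed."""
    noisy = {u: s + laplace(sensitivity / eps) for u, s in sims.items()}
    num = sum(noisy[u] * ratings[u] for u in ratings)
    den = sum(abs(noisy[u]) for u in ratings)
    return num / den if den else 0.0

ratings = {"u1": 4.0, "u2": 3.0, "u3": 5.0}   # neighbours' ratings of an item
sims    = {"u1": 0.9, "u2": 0.6, "u3": 0.8}   # learned similarities (toy values)
print(private_prediction(ratings, sims, sensitivity=0.5, eps=1.0))
```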