    On the relation between Differential Privacy and Quantitative Information Flow

    Differential privacy is a notion that emerged in the statistical databases community as a response to the problem of protecting the privacy of a database's participants when answering statistical queries. The idea is that a randomized query satisfies differential privacy if the likelihood of obtaining a certain answer on a database x is not too different from the likelihood of obtaining the same answer on adjacent databases, i.e. databases which differ from x in only one individual. Information flow is an area of security concerned with controlling the leakage of confidential information in programs and protocols. Nowadays, one of the most established approaches to quantifying and reasoning about leakage is based on the Rényi min-entropy version of information theory. In this paper, we critically analyze the notion of differential privacy in light of the conceptual framework provided by Rényi min-entropy information theory. We show that there is a close relation between differential privacy and leakage, due to the graph symmetries induced by the adjacency relation. Furthermore, we consider the utility of the randomized answer, which measures its expected degree of accuracy. We focus on certain kinds of utility functions, called "binary", which have a close correspondence with Rényi min mutual information. Again, it turns out that there can be a tight correspondence between differential privacy and utility, depending on the symmetries induced by the adjacency relation and by the query. Depending on these symmetries, we can also build an optimal-utility randomization mechanism while preserving the required level of differential privacy. Our main contribution is a study of the kinds of structures that can be induced by the adjacency relation and the query, and of how to use them to derive bounds on the leakage and achieve optimal utility.
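
    For reference, a minimal formalization of the two notions this abstract relates, using the standard definitions of ε-differential privacy and of Rényi min-entropy leakage; the notation (mechanism K, channel p(z|x), prior p(x)) is generic and not necessarily the paper's:

```latex
% epsilon-differential privacy: for every pair of adjacent databases x ~ x'
% and every observable answer z, the randomized query K must satisfy
\Pr[\mathcal{K}(x) = z] \;\le\; e^{\epsilon}\,\Pr[\mathcal{K}(x') = z]

% Renyi min-entropy leakage of the channel p(z|x) induced by K, under prior p(x):
\mathcal{L}(X;Z) \;=\; H_\infty(X) - H_\infty(X \mid Z)
  \;=\; \log \frac{\sum_{z} \max_{x} p(x)\, p(z \mid x)}{\max_{x} p(x)}
```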

    On the information leakage of differentially-private mechanisms

    Differential privacy aims at protecting the privacy of participants in statistical databases. Roughly, a mechanism satisfies differential privacy if the presence or value of a single individual in the database does not significantly change the likelihood of obtaining a certain answer to any statistical query posed by a data analyst. Differentially-private mechanisms are often oblivious: first the query is processed on the database to produce a true answer, and then this answer is adequately randomized before being reported to the data analyst. Ideally, a mechanism should minimize leakage, i.e. obfuscate as much as possible the link between reported answers and individuals' data, while maximizing utility, i.e. report answers as similar as possible to the true ones. These two goals, however, are in conflict with each other, thus imposing a trade-off between privacy and utility. In this paper we use quantitative information flow principles to analyze leakage and utility in oblivious differentially-private mechanisms. We introduce a technique that exploits graph symmetries of the adjacency relation on databases to derive bounds on the min-entropy leakage of the mechanism. We consider a notion of utility based on identity gain functions, which is closely related to min-entropy leakage, and we derive bounds for it. Finally, given some graph symmetries, we provide a mechanism that maximizes utility while preserving the required level of differential privacy.
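
    To make the notion of an oblivious mechanism concrete, the sketch below (my own, not the paper's code) implements a counting query whose true answer is then randomized with the truncated geometric mechanism, and computes the min-entropy leakage of the induced database-to-report channel under a uniform prior. The database model and all parameters are assumptions chosen to keep the channel small enough to enumerate.

```python
# Illustrative sketch: an oblivious epsilon-DP mechanism for a counting query
# and the min-entropy leakage of the induced channel (assumed uniform prior).
import itertools
import math

N = 4               # number of individuals (kept tiny so the channel is enumerable)
EPS = math.log(2)   # privacy parameter; alpha = e^{-eps}
ALPHA = math.exp(-EPS)

def truncated_geometric_row(t, n, alpha):
    """P(report = r | true answer = t) for the truncated geometric mechanism on {0..n}."""
    row = []
    for r in range(n + 1):
        if r == 0:
            p = alpha**t / (1 + alpha)
        elif r == n:
            p = alpha**(n - t) / (1 + alpha)
        else:
            p = (1 - alpha) / (1 + alpha) * alpha**abs(r - t)
        row.append(p)
    return row

# Oblivious channel from databases x to reported answers z: first the true
# count sum(x), then the randomization of that count.
databases = list(itertools.product([0, 1], repeat=N))
channel = {x: truncated_geometric_row(sum(x), N, ALPHA) for x in databases}

# Min-entropy leakage under a uniform prior: L = log2( sum_z max_x p(z|x) ),
# since the uniform prior cancels out of the vulnerability ratio.
leak = math.log2(sum(max(channel[x][z] for x in databases) for z in range(N + 1)))
print(f"min-entropy leakage: {leak:.3f} bits (out of {N} bits of secret)")
```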

    Differential Privacy versus Quantitative Information Flow

    Differential privacy is a notion of privacy that has become very popular in the database community. Roughly, the idea is that a randomized query mechanism provides sufficient privacy protection if the ratio between the probabilities that two different entries originate a certain answer is bounded by e^ε. In the fields of anonymity and information flow there is a similar concern for controlling information leakage, i.e. limiting the possibility of inferring the secret information from the observables. In recent years, researchers have proposed to quantify the leakage in terms of the information-theoretic notion of mutual information. There are two main approaches that fall into this category: one based on Shannon entropy, and one based on Rényi's min-entropy. The latter has a connection with the so-called Bayes risk, which expresses the probability of guessing the secret. In this paper, we show how to model the query system in terms of an information-theoretic channel, and we compare the notion of differential privacy with that of mutual information. We show that the notion of differential privacy is strictly stronger, in the sense that it implies a bound on the mutual information, but not vice versa.
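
    A minimal sketch of the channel modeling step described above, under assumptions of my own (a toy channel matrix, an assumed adjacency relation, and a uniform prior): it computes the smallest ε for which the channel satisfies the differential-privacy ratio condition, and the Shannon mutual information I(X;Z) that the abstract says is bounded by differential privacy.

```python
# Illustrative sketch: the query system as a channel matrix C[x][z] = p(z|x),
# its differential-privacy level, and its Shannon mutual information.
import math

# Toy channel: 3 databases (rows) and 3 possible answers (columns).
C = [
    [0.6, 0.3, 0.1],
    [0.3, 0.5, 0.2],
    [0.1, 0.3, 0.6],
]
adjacent = [(0, 1), (1, 2)]     # assumed adjacency relation on databases
prior = [1 / 3, 1 / 3, 1 / 3]   # assumed uniform prior on databases

# Smallest epsilon for which the channel satisfies the DP ratio condition.
eps = max(abs(math.log(C[x][z] / C[y][z]))
          for (x, y) in adjacent for z in range(3))

# Shannon mutual information I(X;Z) = sum_{x,z} p(x) p(z|x) log2( p(z|x) / p(z) ).
pz = [sum(prior[x] * C[x][z] for x in range(3)) for z in range(3)]
mi = sum(prior[x] * C[x][z] * math.log2(C[x][z] / pz[z])
         for x in range(3) for z in range(3))

print(f"epsilon = {eps:.3f}, I(X;Z) = {mi:.3f} bits")
```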

    Differential Privacy: on the trade-off between Utility and Information Leakage

    Differential privacy is a notion of privacy that has become very popular in the database community. Roughly, the idea is that a randomized query mechanism provides sufficient privacy protection if the ratio between the probabilities that two adjacent datasets give the same answer is bounded by e^ε. In the field of information flow there is a similar concern for controlling information leakage, i.e. limiting the possibility of inferring the secret information from the observables. In recent years, researchers have proposed to quantify the leakage in terms of Rényi min mutual information, a notion strictly related to the Bayes risk. In this paper, we show how to model the query system in terms of an information-theoretic channel, and we compare the notion of differential privacy with that of mutual information. We show that differential privacy implies a bound on the mutual information (but not vice versa). Furthermore, we show that our bound is tight. Then, we consider the utility of the randomization mechanism, which represents how close the randomized answers are, on average, to the real ones. We show that the notion of differential privacy implies a bound on utility, also tight, and we propose a method that under certain conditions builds an optimal randomization mechanism, i.e. a mechanism which provides the best utility while guaranteeing differential privacy.
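
    The sketch below illustrates the notion of utility used in this line of work for a binary gain function: the expected gain under the best remapping of each reported answer, which reduces to the probability of recovering the true answer from the report. The toy mechanism and the uniform prior are assumptions of mine, not the paper's example.

```python
# Illustrative sketch: binary-gain utility of an oblivious mechanism
# C[y][z] = p(reported z | true answer y), under an assumed prior on y.

def binary_utility(prior, C):
    """Expected binary gain under the best remapping of each reported answer:
    U = sum_z max_y prior[y] * C[y][z]."""
    n_out = len(C[0])
    return sum(max(prior[y] * C[y][z] for y in range(len(C))) for z in range(n_out))

# Toy oblivious mechanism over true answers {0, 1, 2} (rows) and reports (columns),
# e.g. obtained by adding symmetric noise to the true answer.
C = [
    [0.50, 0.25, 0.25],
    [0.25, 0.50, 0.25],
    [0.25, 0.25, 0.50],
]
prior = [1 / 3, 1 / 3, 1 / 3]   # assumed uniform prior on true answers
print(f"binary utility: {binary_utility(prior, C):.3f}")
```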

    Privacy Games: Optimal User-Centric Data Obfuscation

    In this paper, we design user-centric obfuscation mechanisms that impose the minimum utility loss for guaranteeing users' privacy. We optimize utility subject to a joint guarantee of differential privacy (indistinguishability) and distortion privacy (inference error). This double shield of protection limits the information leakage through the obfuscation mechanism as well as through the posterior inference. We show that the privacy achieved through joint differential-distortion mechanisms against optimal attacks is as large as the maximum privacy that can be achieved by either of these mechanisms separately. Their utility cost is also no larger than what either the differential or the distortion mechanism imposes alone. We model the optimization problem as a leader-follower game between the designer of the obfuscation mechanism and the potential adversary, and design adaptive mechanisms that anticipate and protect against optimal inference algorithms. Thus, the obfuscation mechanism is optimal against any inference algorithm.
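
    A simplified sketch of the optimization described above, formulated as a linear program over the obfuscation probabilities p(z|x). Only the expected-loss objective and the differential-privacy ratio constraints are encoded; the distortion-privacy (inference-error) constraint against an optimal adversary, which is what turns the paper's formulation into a leader-follower game, is omitted to keep the sketch short. All names, the prior, the adjacency relation, and the loss function are assumptions for illustration, not the paper's setup.

```python
# Illustrative sketch: choose obfuscation probabilities p(z|x) that minimize
# expected utility loss subject to differential-privacy ratio constraints.
import math
import numpy as np
from scipy.optimize import linprog

X = [0, 1, 2]                      # secret values (e.g. locations)
Z = [0, 1, 2]                      # observable obfuscated values
prior = np.array([0.5, 0.3, 0.2])  # assumed prior over secrets
eps = math.log(2)                  # differential-privacy parameter
adjacent = [(0, 1), (1, 2)]        # assumed adjacency relation on secrets

def loss(x, z):
    return abs(x - z)              # assumed utility-loss (distortion) function

def idx(x, z):
    return x * len(Z) + z          # flatten p(z|x) into one variable vector

n = len(X) * len(Z)

# Objective: minimize expected loss  sum_x prior[x] * sum_z p(z|x) * loss(x, z).
c = np.array([prior[x] * loss(x, z) for x in X for z in Z])

# Each row p(.|x) must be a probability distribution.
A_eq = np.zeros((len(X), n))
b_eq = np.ones(len(X))
for x in X:
    for z in Z:
        A_eq[x, idx(x, z)] = 1.0

# Differential-privacy constraints: p(z|x) <= e^eps * p(z|x') for adjacent x, x'.
rows = []
for (x, xp) in adjacent:
    for (a, b) in [(x, xp), (xp, x)]:
        for z in Z:
            row = np.zeros(n)
            row[idx(a, z)] = 1.0
            row[idx(b, z)] = -math.exp(eps)
            rows.append(row)
A_ub = np.array(rows)
b_ub = np.zeros(len(rows))

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * n, method="highs")
p = res.x.reshape(len(X), len(Z))
print("optimal obfuscation p(z|x):\n", np.round(p, 3))
```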

    Formal Verification of Differential Privacy for Interactive Systems

    Differential privacy is a promising approach to privacy-preserving data analysis with a well-developed theory for functions. Despite recent work on implementing systems that aim to provide differential privacy, the problem of formally verifying that these systems have differential privacy has not been adequately addressed. This paper presents the first results towards automated verification of source code for differentially private interactive systems. We develop a formal probabilistic automaton model of differential privacy for systems by adapting prior work on differential privacy for functions. The main technical result of the paper is a sound proof technique, based on a form of probabilistic bisimulation relation, for proving that a system modeled as a probabilistic automaton satisfies differential privacy. The novelty lies in the way we track quantitative privacy leakage bounds using a relation family instead of a single relation. We illustrate the proof technique on a representative automaton motivated by PINQ, an implemented system that is intended to provide differential privacy. To make our proof technique easier to apply to realistic systems, we prove a form of refinement theorem and apply it to show that a refinement of the abstract PINQ automaton also satisfies our differential privacy definition. Finally, we begin the process of automating our proof technique by providing an algorithm for mechanically checking a restricted class of relations from the proof technique.
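
    To make the verified property concrete, the sketch below (not the paper's proof technique) checks by brute-force trace enumeration that two tiny probabilistic automata, modeling the same interactive system run on adjacent databases, assign probabilities within a factor e^ε to every observable trace. The paper's contribution is a compositional bisimulation-style technique with a relation family that avoids such enumeration; the automaton encoding and all parameters here are assumptions for illustration.

```python
# Illustrative sketch: brute-force check of the e^eps trace-probability ratio
# for two probabilistic automata (runs of a system on adjacent databases).
# An automaton maps state -> {action: [(prob, next_state), ...]}.
import math
from itertools import product

def trace_prob(automaton, start, trace):
    """Probability of observing `trace` (a sequence of actions) from `start`."""
    states = {start: 1.0}
    for action in trace:
        nxt = {}
        for s, p in states.items():
            for q, s2 in automaton.get(s, {}).get(action, []):
                nxt[s2] = nxt.get(s2, 0.0) + p * q
        states = nxt
    return sum(states.values())

def check_dp(aut_x, aut_xp, start, actions, eps, max_len=3):
    """Check every observable trace up to length max_len against the e^eps bound."""
    for length in range(1, max_len + 1):
        for trace in product(actions, repeat=length):
            p = trace_prob(aut_x, start, trace)
            q = trace_prob(aut_xp, start, trace)
            if p > math.exp(eps) * q or q > math.exp(eps) * p:
                return False, trace
    return True, None

# Toy system: one noisy yes/no query answered from two adjacent databases.
eps = math.log(3)
aut_x  = {"s0": {"yes": [(0.75, "done")], "no": [(0.25, "done")]}}
aut_xp = {"s0": {"yes": [(0.25, "done")], "no": [(0.75, "done")]}}
print(check_dp(aut_x, aut_xp, "s0", ["yes", "no"], eps))
```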