    On the information leakage of differentially-private mechanisms

    Differential privacy aims at protecting the privacy of participants in statistical databases. Roughly, a mechanism satisfies differential privacy if the presence or value of a single individual in the database does not significantly change the likelihood of obtaining a certain answer to any statistical query posed by a data analyst. Differentially-private mechanisms are often oblivious: first the query is processed on the database to produce a true answer, and then this answer is adequately randomized before being reported to the data analyst. Ideally, a mechanism should minimize leakage, i.e., obfuscate as much as possible the link between reported answers and individuals' data, while maximizing utility, i.e., report answers as similar as possible to the true ones. These two goals, however, are in conflict with each other, thus imposing a trade-off between privacy and utility.

    In this paper we use quantitative information flow principles to analyze leakage and utility in oblivious differentially-private mechanisms. We introduce a technique that exploits graph symmetries of the adjacency relation on databases to derive bounds on the min-entropy leakage of the mechanism. We consider a notion of utility based on identity gain functions, which is closely related to min-entropy leakage, and we derive bounds for it. Finally, given some graph symmetries, we provide a mechanism that maximizes utility while preserving the required level of differential privacy.
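    As a concrete illustration of the quantity being bounded (not the paper's graph-symmetry technique), the sketch below computes min-entropy leakage under a uniform prior for a toy oblivious mechanism, with binary randomized response standing in for the randomization step; the mechanism choice and the value of epsilon are assumptions made purely for illustration.

```python
import math

def min_entropy_leakage(channel):
    """Min-entropy leakage of a channel matrix under a uniform prior:
    log2 of the sum of the column-wise maxima (a standard QIF formula)."""
    cols = range(len(channel[0]))
    return math.log2(sum(max(row[y] for row in channel) for y in cols))

# Binary randomized response as a toy oblivious mechanism: report the
# true yes/no answer with probability e^eps / (1 + e^eps), flip otherwise.
eps = math.log(3)                      # epsilon = ln 3, so p = 3/4
p = math.exp(eps) / (1 + math.exp(eps))
channel = [[p, 1 - p],                 # rows: true answer
           [1 - p, p]]                 # cols: reported answer

print(min_entropy_leakage(channel))    # log2(2p) = log2(1.5) ≈ 0.585 bits
```

    With epsilon = ln 3 the true answer is reported with probability 3/4, and the adversary's knowledge gain is log2(3/2) ≈ 0.585 bits, illustrating the privacy/leakage trade-off the paper analyzes.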

    A Quantitative Information Flow Analysis of the Topics API

    Third-party cookies have been a privacy concern since cookies were first developed in the mid-1990s, but stricter cookie policies were only introduced by Internet browser vendors in the early 2010s. More recently, due to regulatory changes, browser vendors have started to block third-party cookies entirely, with both Firefox and Safari already compliant. The Topics API is being proposed by Google as an additional and less intrusive source of information for interest-based advertising (IBA), following the upcoming deprecation of third-party cookies. Initial results published by Google estimate that the probability of a correct re-identification of a random individual would be below 3% while still supporting IBA. In this paper, we analyze the re-identification risk for individual Internet users introduced by the Topics API from the perspective of Quantitative Information Flow (QIF), an information- and decision-theoretic framework. Our model allows a theoretical analysis of both privacy and utility aspects of the API and their trade-off, and we show that the Topics API does have better privacy than third-party cookies. We leave the utility analyses for future work. Comment: WPES '23 (to appear).
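    To make "re-identification risk" concrete in QIF terms: the probability that an adversary correctly re-identifies a user after one observation is the posterior Bayes vulnerability of the user-to-observation channel. The sketch below computes it for a made-up channel; the four users and the matrix entries are hypothetical placeholders, not the Topics API's actual distributions or the paper's model.

```python
import numpy as np

def bayes_vulnerability(prior, channel):
    """Expected probability of a correct guess after one observation:
    sum over observations of the maximal joint probability
    (posterior Bayes vulnerability in QIF)."""
    joint = prior[:, None] * channel          # P(user, observation)
    return joint.max(axis=0).sum()

# Toy setting: 4 equally likely users, each emitting one of 3 observable
# topic signals with some noise (rows: users, cols: observed topics).
prior = np.full(4, 1 / 4)
channel = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.7, 0.2],
                    [0.2, 0.1, 0.7],
                    [0.4, 0.3, 0.3]])

print(bayes_vulnerability(prior, channel))    # re-identification probability
```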

    A novel analysis of utility in privacy pipelines, using Kronecker products and quantitative information flow

    We combine Kronecker products and quantitative information flow to give a novel formal analysis for the fine-grained verification of utility in complex privacy pipelines. The combination explains a surprising anomaly in the behaviour of utility in privacy-preserving pipelines: sometimes a reduction in privacy also results in a decrease in utility. We use the standard measure of utility for Bayesian analysis, introduced by Ghosh et al., to produce tractable and rigorous proofs of the fine-grained statistical behaviour leading to the anomaly. More generally, we offer the prospect of formal-analysis tools for utility that complement extant formal analyses of privacy. We demonstrate our results on a number of common privacy-preserving designs.
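    A minimal sketch of the modelling idea, assuming the usual QIF convention of mechanisms as row-stochastic channel matrices: two independent pipeline stages running side by side compose into a single channel on the joint secret via the Kronecker product. The stage matrices below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Two independent stages of a privacy pipeline, each a channel matrix
# (rows: secrets, cols: observations).
stage_a = np.array([[0.8, 0.2],
                    [0.3, 0.7]])
stage_b = np.array([[0.9, 0.1],
                    [0.4, 0.6]])

# Running the stages in parallel on a pair of secrets is modelled by the
# Kronecker product: a 4x4 channel on the joint secret space.
pipeline = np.kron(stage_a, stage_b)

# Each row is still a probability distribution over joint observations.
assert np.allclose(pipeline.sum(axis=1), 1.0)
print(pipeline)
```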

    Maximizing the Conditional Expected Reward for Reaching the Goal

    The paper addresses the problem of computing maximal conditional expected accumulated rewards until reaching a target state (briefly called maximal conditional expectations) in finite-state Markov decision processes where the condition is given as a reachability constraint. Conditional expectations of this type can, e.g., stand for the maximal expected termination time of probabilistic programs with non-determinism, under the condition that the program eventually terminates, or for the worst-case expected penalty to be paid, assuming that at least three deadlines are missed. The main results of the paper are (i) a polynomial-time algorithm to check the finiteness of maximal conditional expectations, (ii) PSPACE-completeness for the threshold problem in acyclic Markov decision processes, where the task is to check whether the maximal conditional expectation exceeds a given threshold, (iii) a pseudo-polynomial-time algorithm for the threshold problem in the general (cyclic) case, and (iv) an exponential-time algorithm for computing the maximal conditional expectation and an optimal scheduler. Comment: 103 pages, extended version with appendices of a paper accepted at TACAS 2017.
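    To pin down the quantity being maximized, the sketch below evaluates a conditional expected accumulated reward, E[reward | goal reached] = E[reward · 1_goal] / P(goal), on a tiny acyclic Markov chain by direct path enumeration. With no nondeterminism there is no scheduler to optimize, so this is only the definition at work, not the paper's algorithms, and the chain itself is invented for illustration.

```python
from fractions import Fraction as F

# A tiny acyclic Markov chain with accumulated rewards.
# transitions: state -> list of (probability, reward, next_state)
chain = {
    "s0": [(F(1, 2), 1, "s1"), (F(1, 2), 2, "s2")],
    "s1": [(F(1, 1), 3, "goal")],
    "s2": [(F(1, 3), 5, "goal"), (F(2, 3), 0, "fail")],
}

def weighted_sums(state, p, r):
    """Return (P(goal), E[reward * 1_goal]) over paths from `state`,
    where p and r are the probability and reward accumulated so far."""
    if state == "goal":
        return p, p * r
    if state == "fail":
        return F(0), F(0)
    prob, exp = F(0), F(0)
    for q, rew, nxt in chain[state]:
        dp, de = weighted_sums(nxt, p * q, r + rew)
        prob, exp = prob + dp, exp + de
    return prob, exp

p_goal, exp_goal = weighted_sums("s0", F(1), F(0))
print(exp_goal / p_goal)   # conditional expected reward: 19/4
```

    Here P(goal) = 2/3 and E[reward · 1_goal] = 19/6, so the conditional expectation is 19/4; the paper's contribution is computing the maximum of this quantity over all schedulers of an MDP.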
