
    Hypothesis Testing Interpretations and Renyi Differential Privacy

    Differential privacy is a de facto standard in data privacy, with applications in the public and private sectors. A way of explaining differential privacy that is particularly appealing to statisticians and social scientists is its statistical hypothesis testing interpretation. Informally, one cannot effectively test whether a specific individual has contributed her data by observing the output of a private mechanism: no test can have both high significance and high power. In this paper, we identify some conditions under which a privacy definition given in terms of a statistical divergence satisfies a similar interpretation. These conditions are useful for analyzing the distinguishability power of divergences, and we use them to study the hypothesis testing interpretation of some relaxations of differential privacy based on Renyi divergence. This analysis also yields an improved conversion rule between these definitions and differential privacy.
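
    As a quick illustration of converting a Renyi differential privacy guarantee into an (eps, delta)-differential privacy one, here is a minimal sketch of the standard rule from Mironov (2017); the improved conversion derived in the paper above is not reproduced here, and the function name is illustrative.

```python
import math

def rdp_to_dp(alpha: float, rdp_eps: float, delta: float) -> float:
    """Convert an (alpha, eps)-Renyi DP guarantee into (eps', delta)-DP.

    Standard rule (Mironov, 2017): eps' = eps + log(1/delta) / (alpha - 1).
    The paper above derives a tighter conversion, not shown here.
    """
    assert alpha > 1 and 0 < delta < 1
    return rdp_eps + math.log(1.0 / delta) / (alpha - 1.0)

# Example: (alpha=10, eps=0.5)-RDP implies roughly (1.78, 1e-5)-DP.
print(rdp_to_dp(alpha=10.0, rdp_eps=0.5, delta=1e-5))
```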

    Extremal Mechanisms for Local Differential Privacy

    Local differential privacy has recently surfaced as a strong measure of privacy in contexts where personal information remains private even from data analysts. Working in a setting where both the data providers and data analysts want to maximize the utility of statistical analyses performed on the released data, we study the fundamental trade-off between local differential privacy and utility. This trade-off is formulated as a constrained optimization problem: maximize utility subject to local differential privacy constraints. We introduce a combinatorial family of extremal privatization mechanisms, which we call staircase mechanisms, and show that it contains the optimal privatization mechanisms for a broad class of information-theoretic utilities such as mutual information and f-divergences. We further prove that for any utility function and any privacy level, solving the privacy-utility maximization problem is equivalent to solving a finite-dimensional linear program, the outcome of which is the optimal staircase mechanism. However, solving this linear program can be computationally expensive, since its number of variables is exponential in the size of the alphabet the data lives in. To account for this, we show that two simple privatization mechanisms, the binary and randomized response mechanisms, are universally optimal in the low and high privacy regimes, and well approximate the intermediate regime.

    Comment: 52 pages, 10 figures in JMLR 201
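
    For concreteness, here is a minimal sketch of binary randomized response (Warner, 1965), in the spirit of the simple mechanisms the abstract refers to; the function name and parameters are illustrative, not taken from the paper.

```python
import math
import random

def randomized_response(bit: int, eps: float) -> int:
    """Binary randomized response satisfying eps-local differential privacy.

    Reports the true bit with probability e^eps / (e^eps + 1) and the
    flipped bit otherwise, so the likelihood ratio of any output under
    the two possible inputs is at most e^eps.
    """
    p_truth = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

# Example: privatize a single bit at privacy level eps = 1.0.
print(randomized_response(1, eps=1.0))
```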

    "Pretty strong" converse for the private capacity of degraded quantum wiretap channels

    In the vein of the recent "pretty strong" converse for the quantum and private capacity of degradable quantum channels [Morgan/Winter, IEEE Trans. Inf. Theory 60(1):317-333, 2014], we use the same techniques, in particular the calculus of min-entropies, to show a pretty strong converse for the private capacity of degraded classical-quantum-quantum (cqq-)wiretap channels, which generalize Wyner's model of the degraded classical wiretap channel. While the result is not completely tight, leaving a gap between the region of error and privacy parameters for which the converse bound holds and a larger no-go region, it represents a further step towards an understanding of strong converses for wiretap channels [cf. Hayashi/Tyagi/Watanabe, arXiv:1410.0443, for the classical case].

    Comment: 5 pages, 1 figure, IEEEtran.cls. V2 final (conference) version, accepted for ISIT 2016 (Barcelona, 10-15 July 2016)
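
    For context, Wyner's degraded classical wiretap channel X -> Y -> Z referenced above has a well-known secrecy capacity; the expression below is the classical textbook formula, not the quantum result of this paper.

```latex
% Secrecy capacity of Wyner's degraded classical wiretap channel,
% where the eavesdropper's output Z is a degraded version of Y:
\[
  C_s = \max_{P_X} \bigl[\, I(X;Y) - I(X;Z) \,\bigr]
\]
```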