15 research outputs found

    Optimal Noise Adding Mechanisms for Approximate Differential Privacy

    We study (nearly) optimal mechanisms in (ϵ, δ)-approximate differential privacy for integer-valued query functions and vector-valued (histogram-like) query functions under a utility-maximization/cost-minimization framework. We characterize the tradeoff between ϵ and δ in the utility and privacy analysis for histogram-like query functions (ℓ^1 sensitivity), and show that (ϵ, δ)-differential privacy is a framework not much more general than (ϵ, 0)-differential privacy and (0, δ)-differential privacy in the context of ℓ^1 and ℓ^2 cost functions, i.e., minimum expected noise magnitude and noise power. In the same context of ℓ^1 and ℓ^2 cost functions, we show the near-optimality of the uniform noise mechanism and the discrete Laplacian mechanism in the high-privacy regime (as (ϵ, δ) → (0, 0)). We conclude that in (ϵ, δ)-differential privacy, the optimal noise magnitude and noise power are Θ(min(1/ϵ, 1/δ)) and Θ(min(1/ϵ², 1/δ²)), respectively, in the high-privacy regime.
    Comment: 27 pages, 1 figure
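
The discrete Laplacian mechanism named in this abstract can be sketched for an integer-valued count query; the sampler below is an illustrative implementation (function names are ours, not the paper's), using the fact that the difference of two i.i.d. geometric variables is discrete-Laplace distributed:

```python
import math
import random

def discrete_laplace(eps: float, sensitivity: int = 1) -> int:
    """Integer noise with P(k) proportional to exp(-eps * |k| / sensitivity):
    the difference of two i.i.d. geometric variables has this distribution."""
    p = 1.0 - math.exp(-eps / sensitivity)  # geometric success probability
    # Inversion sampling of a geometric variable counting failures (support 0, 1, 2, ...)
    g1 = int(math.log(1.0 - random.random()) / math.log(1.0 - p))
    g2 = int(math.log(1.0 - random.random()) / math.log(1.0 - p))
    return g1 - g2

def private_count(true_count: int, eps: float) -> int:
    """eps-DP release of an integer-valued count query (sensitivity 1)."""
    return true_count + discrete_laplace(eps)
```

The noise is symmetric around zero, so the released count is unbiased; its magnitude grows as eps shrinks, matching the Θ(1/ϵ) scaling stated above.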

    Privacy and Utility Tradeoff in Approximate Differential Privacy

    We characterize the minimum noise amplitude and power for noise-adding mechanisms in (ϵ, δ)-differential privacy for a single real-valued query function. We derive new lower bounds using the duality of linear programming, and new upper bounds by proposing a new class of (ϵ, δ)-differentially private mechanisms, the truncated Laplacian mechanisms. We show that the multiplicative gap between the lower and upper bounds goes to zero in various high-privacy regimes, proving the tightness of the bounds and thus establishing the optimality of the truncated Laplacian mechanism. In particular, our results close the previous constant multiplicative gap in the discrete setting. Numerical experiments show the improvement of the truncated Laplacian mechanism over the optimal Gaussian mechanism in all privacy regimes.
    Comment: 15 pages, 3 figures
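
A truncated Laplacian sample can be drawn by inverse-CDF sampling of the magnitude plus a random sign. This sketch leaves the scale and truncation point as free parameters; in the paper they are calibrated to (ϵ, δ) and the query sensitivity, which we do not reproduce here:

```python
import math
import random

def truncated_laplace(scale: float, bound: float) -> float:
    """Sample from a Laplace(0, scale) density restricted to [-bound, bound],
    via inverse-CDF sampling of the magnitude and a uniform random sign."""
    mass = 1.0 - math.exp(-bound / scale)  # P(|X| <= bound) before renormalizing
    magnitude = -scale * math.log(1.0 - mass * random.random())
    return magnitude if random.random() < 0.5 else -magnitude
```

Unlike the plain Laplace mechanism, the noise here is bounded, which is what yields the improved amplitude and power guarantees discussed in the abstract.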

    Optimal Mechanism for Randomized Responses under Universally Composable Security Measure

    We consider the problem of analyzing a global property of private data through randomized responses subject to a certain rule, where the private data are also used for another cryptographic protocol, e.g., authentication. For this problem, the security of the private data was evaluated by a universally composable security measure, which can be regarded as (0, δ)-differential privacy. Here we focus on the trade-off between the global accuracy and the universally composable security measure, and derive an optimal solution to the trade-off problem. More precisely, we adopt the Fisher information of a certain distribution family as the estimation accuracy of the global property, and impose (0, δ)-differential privacy on the randomization mechanism protecting the private data. Finally, we maximize the Fisher information under the (0, δ)-differential privacy constraint and obtain an optimal mechanism explicitly.
    Comment: 12 pages, 6 figures; changed the title and revised the abstract, introduction, and section

    Preserving Data-Privacy with Added Noises: Optimal Estimation and Privacy Analysis

    Networked systems often rely on distributed algorithms to achieve a global computation goal through iterative local information exchanges between neighboring nodes. To preserve data privacy, a node may add random noise to its original data before each exchange. Nevertheless, a neighboring node can estimate the other's original data from the information it receives. Estimation accuracy and data privacy can be measured in terms of (ϵ, δ)-data-privacy, defined as the requirement that the probability of an ϵ-accurate estimation (the difference between an estimate and the original data is within ϵ) is no larger than δ (the disclosure probability). How to optimize the estimation and analyze data privacy is a critical and open issue. In this paper, a theoretical framework is developed to investigate how to optimize the estimation of a neighbor's original data using the local information received, named optimal distributed estimation. Then, we study the disclosure probability under the optimal estimation for data-privacy analysis. We further apply the developed framework to analyze the data privacy of the privacy-preserving average consensus algorithm and identify the optimal noises for the algorithm.
    Comment: 32 pages, 2 figures
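
The noise-adding exchange described above can be sketched in a few lines. The weight matrix and the uniform noise below are illustrative assumptions for a minimal example; the paper derives the optimal noise distribution rather than fixing one:

```python
import random

def private_consensus_step(values, weights, noise_scale):
    """One iteration of average consensus in which every node perturbs the
    value it shares with zero-mean uniform noise before neighbors average."""
    shared = [v + random.uniform(-noise_scale, noise_scale) for v in values]
    return [sum(w * s for w, s in zip(row, shared)) for row in weights]
```

With a doubly stochastic weight matrix the (noise-free) sum of states is preserved, which is why the perturbed exchanges can still drive all nodes toward the average while hiding each node's exact initial value.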

    More Flexible Differential Privacy: The Application of Piecewise Mixture Distributions in Query Release

    There is an increasing demand to make data "open" to third parties, as data sharing has great benefits in data-driven decision making. However, with a wide variety of sensitive data collected, protecting the privacy of individuals, communities, and organizations is an essential factor in making data "open". The approaches currently adopted by industry in releasing private data are often ad hoc and prone to a number of attacks, including re-identification attacks, as they do not provide adequate privacy guarantees. While differential privacy has attracted significant interest from academia and industry by providing rigorous and reliable privacy guarantees, the reduced utility and inflexibility of current differentially private algorithms for data release are a barrier to their use in real life. This paper aims to address these two challenges. First, we propose a novel mechanism to augment the conventional utility of differential privacy by fusing two Laplace or geometric distributions together. We derive closed-form expressions for the entropy, variance of added noise, and absolute expectation of noise for the proposed piecewise mixtures. The relevant distributions are then utilised to theoretically prove the privacy and accuracy guarantees of the proposed mechanisms. Second, we show that our proposed mechanisms have greater flexibility, with three parameters to adjust, giving better utility in bounding noise and mitigating large inaccuracies, in comparison to typical one-parameter differentially private mechanisms. We then empirically evaluate the performance of the piecewise mixture distributions with extensive simulations and with a real-world dataset, for both linear count queries and histogram queries. The empirical results show an increase in all utility measures considered, while maintaining privacy, for the piecewise mixture mechanisms compared to the standard Laplace or geometric mechanisms.
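
The idea of fusing two Laplace distributions can be illustrated by sampling from a two-component mixture. Note this is only a rough sketch of the flavor of the construction: the paper's piecewise mechanism switches density branches by region of the support, not by a component draw, and its three parameters are calibrated to the privacy budget:

```python
import math
import random

def laplace_mixture(scale_inner: float, scale_outer: float, w: float) -> float:
    """Sample from a two-component Laplace mixture: with probability w use
    scale_inner, otherwise scale_outer. Each Laplace draw is the difference
    of two i.i.d. exponential variables with the chosen scale."""
    scale = scale_inner if random.random() < w else scale_outer
    e1 = -scale * math.log(1.0 - random.random())
    e2 = -scale * math.log(1.0 - random.random())
    return e1 - e2
```

Having two scales plus a mixing weight gives the extra degrees of freedom the abstract refers to: a sharp component keeps typical noise small while a heavier component carries the tail mass needed for the privacy guarantee.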

    Performance Evaluation of Differential Privacy Mechanisms in Blockchain based Smart Metering

    The concept of differential privacy emerged as a strong notion to protect database privacy in an untrusted environment. Researchers later proposed several variants of differential privacy in order to preserve privacy in certain other scenarios, such as real-time cyber-physical systems. Since then, differential privacy has rigorously been applied to other domains that need privacy preservation. One such domain is decentralized blockchain-based smart metering, in which smart meters acting as blockchain nodes send their real-time data to grid utility databases for real-time reporting. This data is further used to carry out statistical tasks, such as load forecasting and demand response calculation. However, if any intruder gets access to this data, the privacy of smart meter users can be leaked. In this context, differential privacy can be used to protect the privacy of this data. In this chapter, we carry out a comparison of four variants of differential privacy (Laplace, Gaussian, Uniform, and Geometric) in a blockchain-based smart metering scenario. We test these variants on smart metering data and carry out their performance evaluation by varying different parameters. Experimental outcomes show that at a low privacy budget (ε) and a low reading sensitivity value (δ), these privacy-preserving mechanisms provide high privacy by adding a large amount of noise. However, among these four mechanisms, the Geometric mechanism is more suitable for protecting high peak values and the Laplace mechanism is more suitable for protecting low peak values at ε = 0.01.
    Comment: Submitted

    Differentially Private Inference for Binomial Data

    We derive uniformly most powerful (UMP) tests for simple and one-sided hypotheses for a population proportion within the framework of Differential Privacy (DP), optimizing finite sample performance. We show that in general, DP hypothesis tests can be written in terms of linear constraints, and for exchangeable data can always be expressed as a function of the empirical distribution. Using this structure, we prove a 'Neyman-Pearson lemma' for binomial data under DP, where the DP-UMP only depends on the sample sum. Our tests can also be stated as a post-processing of a random variable, whose distribution we coin "Truncated-Uniform-Laplace" (Tulap), a generalization of the Staircase and discrete Laplace distributions. Furthermore, we obtain exact p-values, which are easily computed in terms of the Tulap random variable. Using the above techniques, we show that our tests can be applied to give uniformly most accurate one-sided confidence intervals and optimal confidence distributions. We also derive uniformly most powerful unbiased (UMPU) two-sided tests, which lead to uniformly most accurate unbiased (UMAU) two-sided confidence intervals. We show that our results can be applied to distribution-free hypothesis tests for continuous data. Our simulation results demonstrate that all our tests have exact type I error, and are more powerful than current techniques.
    Comment: 25 pages before references; 39 pages total. 8 figures. arXiv admin note: text overlap with arXiv:1805.0923
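
Since the abstract describes Tulap as a generalization of the discrete Laplace distribution, the central (m = 0, q = 0) case can be sketched as discrete Laplace noise smoothed by Uniform(-1/2, 1/2); the exact parameterization here is our reading of that description, not code from the paper:

```python
import math
import random

def tulap(eps: float) -> float:
    """Sample a central Tulap variable with b = exp(-eps): the difference of
    two geometric variables (a discrete Laplace draw) plus Uniform(-1/2, 1/2)."""
    b = math.exp(-eps)
    # Geometric variables counting failures, success probability 1 - b
    g1 = int(math.log(1.0 - random.random()) / math.log(b))
    g2 = int(math.log(1.0 - random.random()) / math.log(b))
    return g1 - g2 + random.random() - 0.5
```

The uniform part makes the distribution continuous, which is what allows exact p-values to be computed by post-processing a single Tulap draw of the sample sum.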

    Mathematical comparison of classical and quantum mechanisms in optimization under local differential privacy

    Let ε > 0. An n-tuple (p_i)_{i=1}^n of probability vectors is called ε-differentially private (ε-DP) if e^ε p_j − p_i has no negative entries for all i, j = 1, …, n. An n-tuple (ρ_i)_{i=1}^n of density matrices is called classical-quantum ε-differentially private (CQ ε-DP) if e^ε ρ_j − ρ_i is positive semi-definite for all i, j = 1, …, n. Denote by C_n(ε) the set of all ε-DP n-tuples, and by CQ_n(ε) the set of all CQ ε-DP n-tuples. By considering optimization problems under local differential privacy, we define the subset EC_n(ε) of CQ_n(ε) that is essentially classical. Roughly speaking, an element of EC_n(ε) is the image of some (p_i)_{i=1}^n ∈ C_n(ε) under a completely positive and trace-preserving linear map (CPTP map). A preceding study showed that EC_2(ε) = CQ_2(ε). In this paper, we show that EC_n(ε) ≠ CQ_n(ε) for every n ≥ 3, and estimate the difference between EC_n(ε) and CQ_n(ε) in a certain manner.
    Comment: 26 pages
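
The classical definition quoted in this abstract is directly checkable. The helper below tests whether a tuple of probability vectors is ε-DP, entry by entry, exactly as defined:

```python
import itertools
import math

def is_eps_dp(vectors, eps):
    """Check the definition above: (p_i) is eps-DP iff every entry of
    exp(eps) * p_j - p_i is nonnegative, for all pairs (i, j)."""
    factor = math.exp(eps)
    return all(factor * b - a >= 0
               for p, q in itertools.product(vectors, repeat=2)
               for a, b in zip(p, q))
```

For example, the two output distributions of binary randomized response with flip bias e/(1+e) form a 2-tuple that satisfies the condition for ε slightly above 1 but not below it.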

    Differentially Private Uniformly Most Powerful Tests for Binomial Data

    We derive uniformly most powerful (UMP) tests for simple and one-sided hypotheses for a population proportion within the framework of Differential Privacy (DP), optimizing finite sample performance. We show that in general, DP hypothesis tests for exchangeable data can always be expressed as a function of the empirical distribution. Using this structure, we prove a 'Neyman-Pearson lemma' for binomial data under DP, where the DP-UMP only depends on the sample sum. Our tests can also be stated as a post-processing of a random variable, whose distribution we coin "Truncated-Uniform-Laplace" (Tulap), a generalization of the Staircase and discrete Laplace distributions. Furthermore, we obtain exact p-values, which are easily computed in terms of the Tulap random variable. We show that our results also apply to distribution-free hypothesis tests for continuous data. Our simulation results demonstrate that our tests have exact type I error, and are more powerful than current techniques.
    Comment: 15 pages, 2 figures

    Cyber-Resilient Transactive Energy System Design over Insecure Communication Links

    In this paper, the privacy and security issues associated with transactive energy systems over insecure communications are addressed. In particular, it is ensured that, during market-based interactions: (1) each agent's bidding information remains private; and (2) any extraneous data injection attack can be easily detected. A unified cryptography-based approach that can simultaneously achieve both objectives is developed, where privacy preservation is realized by the Paillier encryption scheme, and attack detection is achieved by the Paillier digital signature scheme. Simulation results verify the effectiveness of the proposed cyber-resilient design for transactive energy systems.
    Comment: 11 pages, 8 figures, journal submission
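
The property of Paillier encryption that makes it suitable for private market interactions is its additive homomorphism: multiplying ciphertexts adds plaintexts. A toy implementation with tiny primes (insecure and purely didactic; real deployments use ~2048-bit moduli) shows the idea:

```python
import math
import random

def paillier_keygen(p: int, q: int):
    """Toy Paillier key generation from two small primes (insecure demo)."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                               # valid because g = n + 1
    return n, (n, lam, mu)

def paillier_encrypt(n: int, m: int) -> int:
    """E(m) = (n+1)^m * r^n mod n^2, for a random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def paillier_decrypt(priv, c: int) -> int:
    n, lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n * mu) % n
```

Because D(E(m1) · E(m2) mod n²) = m1 + m2 mod n, a market coordinator can aggregate encrypted bids without ever seeing any individual bid.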