
    Brascamp-Lieb Inequality and Its Reverse: An Information Theoretic View

    We generalize a result by Carlen and Cordero-Erausquin on the equivalence between the Brascamp-Lieb inequality and the subadditivity of relative entropy by allowing for random transformations (a broadcast channel). This leads to a unified perspective on several functional inequalities that have been gaining popularity in the context of proving impossibility results. We demonstrate that the information-theoretic dual of the Brascamp-Lieb inequality is a convenient setting for proving properties such as data processing, tensorization, convexity, and Gaussian optimality. Consequences of the latter include an extension of the Brascamp-Lieb inequality allowing for Gaussian random transformations, the determination of the multivariate Wyner common information for Gaussian sources, and a multivariate version of Nelson's hypercontractivity theorem. Finally, we present an information-theoretic characterization of a reverse Brascamp-Lieb inequality involving a random transformation (a multiple access channel).
    Comment: 5 pages; to be presented at ISIT 201
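    For orientation, a standard finite-dimensional form of the inequality and its entropic dual, paraphrasing the Carlen and Cordero-Erausquin correspondence rather than quoting the paper: for linear maps $B_j : \mathbb{R}^n \to \mathbb{R}^{n_j}$, exponents $c_j \ge 0$, and nonnegative integrable $f_j$,

    \[
    \int_{\mathbb{R}^n} \prod_{j=1}^m f_j(B_j x)^{c_j} \, dx \;\le\; C \prod_{j=1}^m \Bigl( \int_{\mathbb{R}^{n_j}} f_j \Bigr)^{c_j}
    \quad\Longleftrightarrow\quad
    h(X) \;\le\; \sum_{j=1}^m c_j \, h(B_j X) + \log C \ \text{ for all } X,
    \]

    where $h$ denotes differential entropy and $X$ ranges over random vectors on $\mathbb{R}^n$ with well-defined entropies. The paper's generalization replaces the deterministic maps $B_j$ with random transformations.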

    Are Slepian-Wolf Rates Necessary for Distributed Parameter Estimation?

    We consider a distributed parameter estimation problem in which multiple terminals send messages related to their local observations, using limited rates, to a fusion center that forms an estimate of a parameter related to the observations of all terminals. It is well known that if the transmission rates lie in the Slepian-Wolf region, the fusion center can fully recover all observations and hence can construct an estimator with the same performance as in the centralized case. A natural question is whether Slepian-Wolf rates are necessary to achieve the same estimation performance as in the centralized case. In this paper, we show that the answer is negative. We establish our result by explicitly constructing an asymptotically minimum variance unbiased estimator (MVUE) that matches the performance of the optimal centralized estimator while requiring rates lower than those demanded by the Slepian-Wolf region.
    Comment: Accepted in Allerton 201
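    For reference, the classical two-terminal Slepian-Wolf region, inside which the fusion center can losslessly recover both sources (standard background, not quoted from the paper): rates $(R_1, R_2)$ suffice for lossless recovery iff

    \[
    R_1 \ge H(X_1 \mid X_2), \qquad R_2 \ge H(X_2 \mid X_1), \qquad R_1 + R_2 \ge H(X_1, X_2).
    \]

    The paper's point is that estimating a parameter of the joint distribution can be strictly easier than this full-recovery benchmark.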

    Local Differential Privacy Is Equivalent to Contraction of $E_\gamma$-Divergence

    We investigate the local differential privacy (LDP) guarantees of a randomized privacy mechanism via its contraction properties. We first show that LDP constraints can be equivalently cast in terms of the contraction coefficient of the $E_\gamma$-divergence. We then use this equivalence to express the LDP guarantees of privacy mechanisms in terms of contraction coefficients of arbitrary $f$-divergences. When combined with standard estimation-theoretic tools (such as Le Cam's and Fano's converse methods), this result allows us to study the trade-off between privacy and utility in several hypothesis testing and minimax and Bayesian estimation problems.
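    As background for the equivalence claimed here, the $E_\gamma$-divergence (hockey-stick divergence) between distributions $P$ and $Q$ is

    \[
    E_\gamma(P \,\|\, Q) \;=\; \sup_{A} \bigl[ P(A) - \gamma \, Q(A) \bigr], \qquad \gamma \ge 1,
    \]

    and it is a standard fact that a mechanism $K$ satisfies $(\varepsilon, \delta)$-differential privacy iff $E_{e^\varepsilon}(K(\cdot \mid x) \,\|\, K(\cdot \mid x')) \le \delta$ for all neighboring inputs $x, x'$; pure $\varepsilon$-LDP is the case $\delta = 0$ with $x, x'$ ranging over all input pairs. The paper's precise reformulation in terms of the contraction coefficient of $E_\gamma$ is given in the full text.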

    Privacy Analysis of Online Learning Algorithms via Contraction Coefficients

    We propose an information-theoretic technique for analyzing the privacy guarantees of online algorithms. Specifically, we demonstrate that differential privacy guarantees of iterative algorithms can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for $f$-divergences. Our technique relies on generalizing Dobrushin's contraction coefficient for total variation distance to an $f$-divergence known as the $E_\gamma$-divergence, which in turn is equivalent to approximate differential privacy. As an example, we apply our technique to derive the differential privacy parameters of gradient descent. Moreover, we show that this framework can be tailored to batch learning algorithms that can be implemented with one pass over the training dataset.
    Comment: Submitted
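    A sketch of the mechanism behind this technique, stated as standard background rather than the paper's exact bound: Dobrushin's coefficient of a kernel $K$ is

    \[
    \eta_{\mathrm{TV}}(K) \;=\; \sup_{x, x'} \mathrm{TV}\bigl( K(\cdot \mid x), \, K(\cdot \mid x') \bigr),
    \]

    and since $E_\gamma$ is an $f$-divergence, it contracts at least as fast: $E_\gamma(PK \,\|\, QK) \le \eta_{\mathrm{TV}}(K) \, E_\gamma(P \,\|\, Q)$. Chaining this over an iterative algorithm with per-step kernels $K_1, \ldots, K_n$ multiplies the factors, so the $E_\gamma$-divergence between outputs induced by neighboring datasets, and hence the approximate-DP parameter $\delta$, is driven down by $\prod_i \eta_{\mathrm{TV}}(K_i)$; the paper's refined bounds for gradient descent follow this pattern.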