
    Privacy Analysis of Online Learning Algorithms via Contraction Coefficients

    We propose an information-theoretic technique for analyzing the privacy guarantees of online algorithms. Specifically, we demonstrate that the differential privacy guarantees of iterative algorithms can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for f-divergences. Our technique relies on generalizing Dobrushin's contraction coefficient for total variation distance to an f-divergence known as the E_γ-divergence, which in turn is equivalent to approximate differential privacy. As an example, we apply our technique to derive the differential privacy parameters of gradient descent. Moreover, we show that this framework can be tailored to batch learning algorithms that can be implemented with one pass over the training dataset.
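    As a rough numerical companion (illustrative, not taken from the paper): for the Gaussian mechanism, the approximate-DP parameter δ at a given ε equals the E_γ-divergence (hockey-stick divergence) with γ = e^ε between two Gaussians shifted by the sensitivity. A minimal sketch, assuming sensitivity `delta_sens` and noise scale `sigma`, using the known closed form due to Balle and Wang (2018):

```python
import math

def Phi(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_delta(eps: float, delta_sens: float, sigma: float) -> float:
    """Hockey-stick divergence E_{e^eps} between N(0, sigma^2) and
    N(delta_sens, sigma^2); this equals the delta for which the Gaussian
    mechanism with sensitivity delta_sens is (eps, delta)-DP."""
    a = delta_sens / (2.0 * sigma)
    b = eps * sigma / delta_sens
    return Phi(a - b) - math.exp(eps) * Phi(-a - b)

# More noise yields a smaller delta at the same eps.
print(gaussian_delta(1.0, 1.0, 1.0))   # ~0.127
print(gaussian_delta(1.0, 1.0, 2.0))   # ~0.007
```

    The monotone decrease of δ in σ is one concrete consequence of the data processing viewpoint: adding noise contracts the divergence between neighboring output distributions.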

    On facility location problem in the local differential privacy model

    In this paper we study the uncapacitated facility location problem with uniform facility cost in the model of differential privacy (DP). Specifically, we first show that, under the hierarchically well-separated tree (HST) metrics and the super-set output setting introduced in [8], there is an ∊-DP algorithm that achieves an O(1/∊) (expected multiplicative) approximation ratio; this implies an O(log n/∊) approximation ratio for the general metric case, where n is the size of the input metric. These bounds improve the best-known results given by [8]. In particular, our approximation ratio for HST metrics is independent of n, and the ratio for general metrics is independent of the aspect ratio of the input metric. On the negative side, we show that the approximation ratio of any ∊-DP algorithm is lower bounded by Ω(1/√∊), even for instances on HST metrics with uniform facility cost, under the super-set output setting. The lower bound shows that the dependence of the approximation ratio for HST metrics on ∊ cannot be removed or greatly improved. Our novel methods and techniques for both the upper and lower bound may find additional applications.
    CNS-2040249 - National Science Foundation
    https://proceedings.mlr.press/v151/cohen-addad22a/cohen-addad22a.pdf
    First author draft
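    The HST metrics in the result above can be illustrated with a toy 2-HST over the unit interval (an illustrative construction of my own, not the paper's algorithm): leaves are points of [0,1), internal nodes are dyadic intervals, and edge weights halve with each level of depth, so tree distances dominate the underlying line distances:

```python
def separation_level(x: float, y: float, max_depth: int = 32) -> int:
    """Smallest dyadic level at which x and y fall in different intervals."""
    for level in range(1, max_depth + 1):
        if int(x * 2**level) != int(y * 2**level):
            return level
    return max_depth

def hst_distance(x: float, y: float, max_depth: int = 32) -> float:
    """Tree distance in a 2-HST over [0,1): the edge from a depth-i node to
    its parent has weight 2^(1-i), so weights halve with depth; the path
    climbs from both leaves to their lowest common ancestor."""
    if x == y:
        return 0.0
    l = separation_level(x, y, max_depth)
    # Sum of edge weights from leaf depth up to the LCA, doubled (both sides).
    return 2.0 * sum(2.0 ** (1 - i) for i in range(l, max_depth + 1))

# The 2-HST distance always dominates the line distance |x - y|.
print(hst_distance(0.1, 0.9) >= abs(0.1 - 0.9))   # True
```

    Embedding a general metric into such trees loses an O(log n) distortion factor in expectation, which is where the gap between the O(1/∊) HST bound and the O(log n/∊) general-metric bound comes from.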