
    Privacy Analysis of Online Learning Algorithms via Contraction Coefficients

We propose an information-theoretic technique for analyzing the privacy guarantees of online algorithms. Specifically, we demonstrate that the differential privacy guarantees of iterative algorithms can be determined by a direct application of contraction coefficients derived from strong data processing inequalities for $f$-divergences. Our technique relies on generalizing Dobrushin's contraction coefficient for total variation distance to an $f$-divergence known as the $E_\gamma$-divergence, which in turn characterizes approximate differential privacy. As an example, we apply our technique to derive the differential privacy parameters of gradient descent. We also show that this framework can be tailored to batch learning algorithms that can be implemented with one pass over the training dataset.
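
For context, here is a minimal sketch of the standard quantities behind this abstract (these are the usual textbook definitions, not the paper's exact statements). The $E_\gamma$-divergence (hockey-stick divergence) between distributions $P$ and $Q$, for $\gamma \ge 1$, is
\[
E_\gamma(P \,\|\, Q) = \sup_{A} \bigl[ P(A) - \gamma\, Q(A) \bigr],
\]
and a mechanism $M$ is $(\varepsilon, \delta)$-differentially private if and only if $E_{e^\varepsilon}\!\bigl( M(D) \,\|\, M(D') \bigr) \le \delta$ for all neighboring datasets $D, D'$. Dobrushin's contraction coefficient of a Markov kernel $K$ for total variation,
\[
\eta_{\mathrm{TV}}(K) = \sup_{x, x'} \mathrm{TV}\bigl( K(\cdot \mid x),\, K(\cdot \mid x') \bigr),
\]
gives the strong data processing inequality $\mathrm{TV}(PK, QK) \le \eta_{\mathrm{TV}}(K)\,\mathrm{TV}(P, Q)$. The approach described in the abstract replaces $\mathrm{TV}$ with $E_\gamma$ and applies the resulting contraction coefficient to the kernel of each iteration of the algorithm.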