
    Differentially private analysis of networks with covariates via a generalized β-model

    How to achieve the tradeoff between privacy and utility is one of the fundamental problems in private data analysis. In this paper, we give a rigorous differential privacy analysis of networks in the presence of covariates via a generalized β-model, which has an n-dimensional degree parameter β and a p-dimensional homophily parameter γ. Under (k_n, ε_n)-edge differential privacy, we use the popular Laplace mechanism to release the network statistics. The method of moments is used to estimate the unknown model parameters. We establish conditions guaranteeing consistency of the differentially private estimators β̂ and γ̂ as the number of nodes n goes to infinity, which reveal an interesting tradeoff between the privacy parameter and the model parameters. Consistency is shown by applying a two-stage Newton's method to obtain an upper bound on the error between (β̂, γ̂) and the true value (β, γ) in the ℓ_∞ distance, with a convergence rate of rough order 1/n^{1/2} for β̂ and 1/n for γ̂, respectively. Further, we derive the asymptotic normality of β̂ and γ̂, whose asymptotic variances are the same as those of the non-private estimators under some conditions. Our paper sheds light on how to explore asymptotic theory under differential privacy in a principled manner; these principled methods should be applicable to a class of network models with covariates beyond the generalized β-model. Numerical studies and a real data analysis demonstrate our theoretical findings.
    Comment: 34 pages, 2 figures. arXiv admin note: substantial text overlap with arXiv:2107.10735 by other authors
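    The Laplace mechanism used in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the function name and parameters are assumptions, and only the degree sequence (not the covariate statistics) is released. The key fact is that adding or removing one edge changes two node degrees by 1 each, so the ℓ_1 sensitivity of the degree sequence is 2, and each coordinate gets independent Laplace noise with scale sensitivity/ε.

    ```python
    import numpy as np

    # Hypothetical sketch: release a degree sequence under edge differential
    # privacy via the Laplace mechanism. One edge flip changes two degrees
    # by 1 each, so the L1-sensitivity of the degree sequence is 2.
    def release_degrees(degrees, epsilon, sensitivity=2.0, seed=None):
        """Return d_i + Laplace(0, sensitivity / epsilon) for each node i."""
        rng = np.random.default_rng(seed)
        noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=len(degrees))
        return np.asarray(degrees, dtype=float) + noise

    degrees = [3, 1, 2, 2]  # degree sequence of a toy 4-node graph
    noisy = release_degrees(degrees, epsilon=1.0, seed=0)
    ```

    Smaller ε (stronger privacy) means larger noise scale, which is exactly the privacy/utility tradeoff the consistency conditions in the abstract quantify.
    
    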

    Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning

    Federated learning is a distributed framework for training machine learning models over the data residing at mobile devices, while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In particular, the overhead of the state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. In this paper, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network with N users achieves a secure aggregation overhead of O(N log N), as opposed to O(N^2), while tolerating a user dropout rate of up to 50%. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to 40× speedup over the state-of-the-art protocols with up to N = 200 users. Our experiments also demonstrate the impact of model size and bandwidth on the performance of Turbo-Aggregate.
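    The additive secret sharing primitive that Turbo-Aggregate builds on can be illustrated with a minimal sketch (function names, the modulus, and the integer-quantized updates are assumptions of this toy, not the paper's protocol): each user splits its update into random shares that sum to the update modulo a public prime, so any proper subset of shares reveals nothing, yet summing all shares from all users recovers exactly the aggregate.

    ```python
    import random

    P = 2**31 - 1  # public prime modulus (assumption for this sketch)

    def share(value, n_shares, rng):
        """Additively secret-share an integer: shares sum to value mod P."""
        shares = [rng.randrange(P) for _ in range(n_shares - 1)]
        shares.append((value - sum(shares)) % P)
        return shares

    def aggregate(all_shares):
        """Sum every received share; only the total is revealed."""
        return sum(all_shares) % P

    rng = random.Random(0)
    updates = [5, 7, 11]  # toy quantized model updates from 3 users
    per_user_shares = [share(u, n_shares=3, rng=rng) for u in updates]
    total = aggregate(s for user in per_user_shares for s in user)
    assert total == sum(updates) % P  # 23
    ```

    The actual protocol adds the multi-group circular topology and coded redundancy on top of this primitive so that the aggregate survives up to 50% dropouts; none of that machinery is shown here.
    
    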