1,137 research outputs found
Differentially private analysis of networks with covariates via a generalized β-model
How to achieve the tradeoff between privacy and utility is one of the fundamental problems in private data analysis. In this paper, we give a rigorous differential privacy analysis of networks in the presence of covariates via a generalized β-model, which has an n-dimensional degree parameter β and a p-dimensional homophily parameter γ. Under ε-edge differential privacy, we use the popular Laplace mechanism to release the network statistics. The method of moments is used to estimate the unknown model parameters. We establish the conditions guaranteeing consistency of the differentially private estimators β̂ and γ̂ as the number of nodes n goes to infinity, which reveal an interesting tradeoff between the privacy parameter and the model parameters. The consistency is shown by applying a two-stage Newton's method to obtain an upper bound on the error between (β̂, γ̂) and the true parameter value in terms of the ℓ∞ distance, with β̂ and γ̂ converging at different rough-order rates. Further, we derive the asymptotic normality of β̂ and γ̂, whose asymptotic variances are the same as those of the non-private estimators under some conditions. Our paper sheds light on how to explore asymptotic theory under differential privacy in a principled manner; these principled methods should be applicable to a class of network models with covariates beyond the generalized β-model. Numerical studies and a real data analysis demonstrate our theoretical findings.

Comment: 34 pages, 2 figures. arXiv admin note: substantial text overlap with arXiv:2107.10735 by other authors
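The Laplace-mechanism release described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the released statistic is the degree sequence of a simple undirected graph, whose ℓ1-sensitivity under edge differential privacy is 2 (adding or removing one edge changes two degrees by 1 each). The function name and interface are illustrative.

```python
import numpy as np

def laplace_release_degrees(adj, epsilon, rng=None):
    """Release a noisy degree sequence under epsilon-edge differential privacy.

    One edge flip changes two degrees by 1 each, so the L1-sensitivity of the
    degree sequence is 2 and the per-coordinate Laplace scale is 2/epsilon.
    (Illustrative sketch; not the paper's code.)
    """
    rng = np.random.default_rng(rng)
    degrees = adj.sum(axis=1).astype(float)          # true degree sequence
    noise = rng.laplace(loc=0.0, scale=2.0 / epsilon, size=degrees.shape)
    return degrees + noise                            # privatized statistics
```

Downstream, the noisy degrees (together with the covariate statistics) would feed the method-of-moments equations in place of the true statistics; a smaller ε means a larger noise scale, which is the source of the privacy–utility tradeoff the paper quantifies.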
Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning
Federated learning is a distributed framework for training machine learning models over the data residing at mobile devices, while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In particular, the overhead of the state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. In this paper, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network with N users achieves a secure aggregation overhead of O(N log N), as opposed to O(N²), while tolerating up to a user dropout rate of 50%. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques for injecting aggregation redundancy in order to handle user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to 40× speedup over the state-of-the-art protocols with up to N=200 users. Our experiments also demonstrate the impact of model size and bandwidth on the performance of Turbo-Aggregate.
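The additive-secret-sharing primitive that Turbo-Aggregate builds on can be sketched as follows. This is a simplified illustration of the general technique, not the protocol itself (it omits the multi-group circular structure and the coding for dropout redundancy); the function names, the ring modulus, and the integer-quantized updates are all assumptions for the sketch.

```python
import random

MODULUS = 2 ** 32  # shares live in a finite ring, so any single share is uniform noise

def additive_shares(secret, n_shares, modulus=MODULUS):
    """Split an integer secret into n_shares values that sum to it mod modulus."""
    shares = [random.randrange(modulus) for _ in range(n_shares - 1)]
    shares.append((secret - sum(shares)) % modulus)  # last share makes the sum work out
    return shares

def aggregate(per_user_shares, modulus=MODULUS):
    """Sum the i-th share across users, then combine the partial sums.

    Each partial sum mixes shares from every user, so it reveals nothing about
    any individual user's secret; only the final total is recovered.
    """
    partial = [sum(slot) % modulus for slot in zip(*per_user_shares)]
    return sum(partial) % modulus
```

For example, three users holding quantized updates 10, 20, and 30 would each distribute `additive_shares(update, 3)` among the group; `aggregate` then recovers the total 60 without any party ever seeing an individual update in the clear.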