Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning
Federated learning is a distributed framework for training machine learning
models over the data residing at mobile devices, while protecting the privacy
of individual users. A major bottleneck in scaling federated learning to a
large number of users is the overhead of secure model aggregation across many
users. In particular, the overhead of the state-of-the-art protocols for secure
model aggregation grows quadratically with the number of users. In this paper,
we propose the first secure aggregation framework, named Turbo-Aggregate, that
in a network with N users achieves a secure aggregation overhead of
O(N log N), as opposed to O(N^2), while tolerating up to a user dropout
rate of 50%. Turbo-Aggregate employs a multi-group circular strategy for
efficient model aggregation, and leverages additive secret sharing and novel
coding techniques for injecting aggregation redundancy in order to handle user
dropouts while guaranteeing user privacy. We experimentally demonstrate that
Turbo-Aggregate achieves a total running time that grows almost linearly in the
number of users, and provides up to a 40× speedup over the
state-of-the-art protocols with up to N = 200 users. Our experiments also
demonstrate the impact of model size and bandwidth on the performance of
Turbo-Aggregate.
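The additive secret sharing underlying this kind of secure aggregation can be illustrated with a minimal sketch. This is not the Turbo-Aggregate protocol itself (it omits the multi-group circular structure, dropout handling, and coding redundancy); it only shows the core idea, assuming each user's model update is quantized to an integer in a prime field:

```python
import random

PRIME = 2**31 - 1  # illustrative field modulus

def make_shares(value, n):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Three users, each holding a (quantized) scalar model update.
updates = [5, 11, 7]
n = len(updates)

# User i distributes one share of its update to each user; no single
# recipient learns anything about the individual update from its share.
all_shares = [make_shares(u, n) for u in updates]

# Each user sums the shares it received; summing those partial sums
# reveals only the aggregate, never any individual update.
partial = [sum(all_shares[i][j] for i in range(n)) % PRIME for j in range(n)]
aggregate = sum(partial) % PRIME
print(aggregate)  # 23 == sum(updates)
```

The privacy guarantee comes from the fact that any proper subset of a user's shares is uniformly random; only the full sum across all users is meaningful.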
Lagrange Coded Computing: Optimal Design for Resiliency, Security and Privacy
We consider a scenario involving computations over a massive dataset stored
distributedly across multiple workers, which is at the core of distributed
learning algorithms. We propose Lagrange Coded Computing (LCC), a new framework
to simultaneously provide (1) resiliency against stragglers that may prolong
computations; (2) security against Byzantine (or malicious) workers that
deliberately modify the computation for their benefit; and (3)
(information-theoretic) privacy of the dataset amidst possible collusion of
workers. LCC, which leverages the well-known Lagrange polynomial to create
computation redundancy in a novel coded form across workers, can be applied to
any computation scenario in which the function of interest is an arbitrary
multivariate polynomial of the input dataset, hence covering many computations
of interest in machine learning. LCC significantly generalizes prior works to
go beyond linear computations. It also enables secure and private computing in
distributed settings, improving the computation and communication efficiency of
the state-of-the-art. Furthermore, we prove the optimality of LCC by showing
that it achieves the optimal tradeoff between resiliency, security, and
privacy, i.e., in terms of tolerating the maximum number of stragglers and
adversaries, and providing data privacy against the maximum number of colluding
workers. Finally, we show via experiments on Amazon EC2 that LCC speeds up the
conventional uncoded implementation of distributed least-squares linear
regression by up to 13.43×, and also achieves a
2.36×–12.65× speedup over the state-of-the-art straggler
mitigation strategies.
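The Lagrange-polynomial encoding at the heart of LCC can be sketched in a few lines. This toy example (exact rational arithmetic over scalars, rather than the finite-field vector setting of the paper) shows why polynomial computations on encoded data remain decodable: if u interpolates the dataset chunks and f has degree d, then f(u(z)) is a polynomial of degree d·(k−1) that can be recovered from enough worker evaluations.

```python
from fractions import Fraction

def lagrange_interp(points, x):
    """Evaluate the interpolating polynomial through `points` at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Dataset split into k chunks (scalars here for simplicity).
X = [3, 1, 4]
k = len(X)
f = lambda x: x * x          # the polynomial computation of interest, degree 2

# Encode: u is the degree-(k-1) polynomial with u(i) = X[i-1].
data_pts = list(zip(range(1, k + 1), X))

# deg(f(u(z))) = 2 * (k - 1) = 4, so 5 worker results suffice to decode;
# any workers beyond that provide straggler resiliency.
worker_pts = [10, 11, 12, 13, 14]
worker_results = [(a, f(lagrange_interp(data_pts, a))) for a in worker_pts]

# Decode: interpolate f(u(z)) from the worker results and read off
# f(X[i]) at the original encoding points z = 1, ..., k.
recovered = [int(lagrange_interp(worker_results, i + 1)) for i in range(k)]
print(recovered)  # [9, 1, 16] == [f(x) for x in X]
```

Resiliency, security, and privacy in LCC all trade off against how many worker evaluations beyond this decoding threshold are available; this sketch uses exactly the threshold number.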
SimpleTrack: Adaptive Trajectory Compression with Deterministic Projection Matrix for Mobile Sensor Networks
Some mobile sensor network applications require the sensor nodes to transfer
their trajectories to a data sink. This paper proposes an adaptive trajectory
(lossy) compression algorithm based on compressive sensing. The algorithm has
two innovative elements. First, we propose a method to compute a deterministic
projection matrix from a learnt dictionary. Second, we propose a method for the
mobile nodes to adaptively predict the number of projections needed based on
the speed of the mobile nodes. Extensive evaluation of the proposed algorithm
using 6 datasets shows that our proposed algorithm can achieve sub-metre
accuracy. In addition, our method of computing projection matrices outperforms
two existing methods. Finally, a comparison of our algorithm against a
state-of-the-art trajectory compression algorithm shows that our algorithm can
reduce the error by 10–60 cm for the same compression ratio.
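The compressive-sensing pipeline the abstract describes can be sketched generically. This is not SimpleTrack's deterministic projection matrix or learnt dictionary; it is a minimal illustration with a random Gaussian projection and a standard Orthogonal Matching Pursuit decoder, assuming the trajectory is sparse in some dictionary (here, the identity basis):

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: recover a sparse x with y ≈ Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, s = 50, 15, 3            # signal length, projections, sparsity

x_true = np.zeros(n)           # trajectory coefficients, sparse in the dictionary
x_true[[4, 17, 31]] = [2.0, -1.5, 0.8]

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random projection matrix
y = Phi @ x_true               # m compressed measurements sent to the data sink

x_hat = omp(Phi, y, s)
# Exact recovery is likely at these dimensions, but not guaranteed.
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```

SimpleTrack's adaptive element would correspond to choosing m (the number of projections) per node as a function of its speed, and replacing the random Phi with one derived deterministically from the learnt dictionary.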
Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures
Distributed video coding (DVC) is a relatively new video coding architecture originated from two fundamental theorems namely, Slepian–Wolf and Wyner–Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.