The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation
We consider training models on private data that is distributed across user
devices. To ensure privacy, we add on-device noise and use secure aggregation
so that only the noisy sum is revealed to the server. We present a
comprehensive end-to-end system, which appropriately discretizes the data and
adds discrete Gaussian noise before performing secure aggregation. We provide a
novel privacy analysis for sums of discrete Gaussians. We also analyze the
effect of rounding the input data and the modular summation arithmetic. Our
theoretical guarantees highlight the complex tension between communication,
privacy, and accuracy. Our extensive experimental results demonstrate that our
solution achieves accuracy comparable to central differential privacy with 16
bits of precision per value.
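The pipeline described above can be sketched in a few lines: each client discretizes its vector to an integer grid, adds integer-valued noise, and the server only sees the modular sum. This is a minimal illustration, not the paper's implementation; the scale, noise level, and modulus below are arbitrary, and the discrete Gaussian is approximated by rounding continuous Gaussian samples rather than using an exact sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def discretize(x, scale):
    # Round client values to a finite integer grid (randomized rounding,
    # which the paper analyzes, is omitted here for brevity).
    return np.round(x * scale).astype(np.int64)

def add_discrete_gaussian(v, sigma):
    # Illustration only: approximate discrete Gaussian noise by rounding
    # continuous Gaussian samples; an exact discrete Gaussian sampler
    # would be used in practice.
    noise = np.round(rng.normal(0.0, sigma, size=v.shape)).astype(np.int64)
    return v + noise

def secure_aggregate(noisy_vectors, modulus):
    # Secure aggregation reveals only the modular sum of the noisy
    # client vectors to the server.
    return np.sum(noisy_vectors, axis=0) % modulus

# Three clients with 4-dimensional vectors; parameters are illustrative.
clients = [rng.normal(size=4) for _ in range(3)]
scale, sigma, modulus = 2**10, 5.0, 2**16

noisy = np.stack(
    [add_discrete_gaussian(discretize(c, scale), sigma) for c in clients]
)
agg = secure_aggregate(noisy, modulus)

# Re-center the modular sum into [-modulus/2, modulus/2) and undo the scaling.
est = ((agg + modulus // 2) % modulus - modulus // 2) / scale
```

The modular arithmetic in the last step mirrors the wrap-around effect the abstract mentions: if the noisy sum overflows the modulus, the estimate is biased, which is part of the communication/accuracy tension analyzed in the paper.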
Optimal State Estimation with Measurements Corrupted by Laplace Noise
Optimal state estimation for linear discrete-time systems is considered.
Motivated by the literature on differential privacy, the measurements are
assumed to be corrupted by Laplace noise. The optimal least mean square error
estimate of the state is approximated using a randomized method. The method
relies on the fact that Laplace noise can be rewritten as Gaussian noise scaled
by a Rayleigh random variable. The probability that the distance between the
approximation and the best estimate is smaller than a constant is determined as
a function of the number of parallel Kalman filters used in the randomized
method. This estimator is then compared with the optimal linear estimator, the
maximum a posteriori (MAP) estimate of the state, and the particle filter.
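The decomposition the abstract relies on can be checked numerically: if R is Rayleigh-distributed with unit scale and Z is standard normal and independent of R, then b·R·Z follows a Laplace(0, b) distribution (a Gaussian scale mixture with exponentially distributed variance). The sketch below verifies this by matching the variance, which for Laplace(0, b) equals 2b²; the scale b and sample size are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def laplace_via_rayleigh(b, size):
    # Laplace(0, b) as a Gaussian scale mixture: with R ~ Rayleigh(1)
    # and Z ~ N(0, 1) independent, b * R * Z is Laplace(0, b) distributed.
    r = rng.rayleigh(scale=1.0, size=size)
    z = rng.standard_normal(size)
    return b * r * z

# Sanity check against the known variance of Laplace(0, b), which is 2 b^2.
b = 1.5
samples = laplace_via_rayleigh(b, 200_000)
empirical_var = samples.var()
```

Conditioning on R reduces each measurement-update step to a Gaussian problem, which is why a bank of parallel Kalman filters (one per sampled scale) can approximate the optimal estimate under Laplace measurement noise.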