Differentially Private Distributed Optimization
In the distributed optimization and iterative consensus literature, a standard
problem is for agents to minimize a function over a subset of Euclidean
space, where the cost function is expressed as a sum of the agents' local
cost functions. In this paper,
we study the private distributed optimization (PDOP) problem with the
additional requirement that the cost function of the individual agents should
remain differentially private. The adversary attempts to infer information
about the private cost functions from the messages that the agents exchange.
Achieving differential privacy requires that any change in an individual's cost
function result only in insubstantial changes in the statistics of the
messages. We propose a class of iterative algorithms for solving PDOP, which
achieves differential privacy and convergence to the optimal value. Our
analysis reveals the dependence of the achieved accuracy and the privacy levels
on the parameters of the algorithm. We observe that, to achieve
ε-differential privacy, the accuracy of the algorithm is of the order of
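The mechanism described above (agents exchanging messages perturbed so that a change in one private cost function barely alters the message statistics) can be sketched with a toy example. This is not the paper's exact algorithm, only a minimal illustration, assuming hypothetical quadratic costs f_i(x) = (x - a_i)^2 and illustrative step-size and noise parameters:

```python
import math
import random

random.seed(0)

def laplace(b):
    # Draw Laplace(0, b) noise by inverting its CDF.
    u = random.random() - 0.5
    return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

a = [1.0, 4.0, 7.0]       # each agent's private data (hypothetical values)
x = [0.0, 0.0, 0.0]       # agent states
step, b = 0.05, 0.02      # step size and Laplace noise scale (illustrative)

for _ in range(2000):
    # Agents exchange noise-perturbed states; only these noisy messages
    # are observable, which is what (loosely) shields the private a_i.
    msg = [xi + laplace(b) for xi in x]
    avg = sum(msg) / len(msg)
    # Consensus pull toward the noisy average plus a local gradient step
    # on f_i(x) = (x - a_i)^2.
    x = [xi + 0.5 * (avg - xi) - step * 2.0 * (xi - ai)
         for xi, ai in zip(x, a)]

print(sum(x) / len(x))  # should land near the optimum, mean(a) = 4.0
```

The accuracy/privacy trade-off the abstract refers to is visible here: a larger noise scale `b` hides the private data better but leaves the iterates fluctuating further from the optimum.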
Optimal State Estimation with Measurements Corrupted by Laplace Noise
Optimal state estimation for linear discrete-time systems is considered.
Motivated by the literature on differential privacy, the measurements are
assumed to be corrupted by Laplace noise. The optimal least mean square error
estimate of the state is approximated using a randomized method. The method
relies on the fact that Laplace noise can be rewritten as Gaussian noise
scaled by a Rayleigh random variable. The probability that the distance
between the approximation and the best estimate is smaller than a constant is
determined as a function of the number of parallel Kalman filters used in
the randomized method. This estimator is then compared with the optimal linear
estimator, the maximum a posteriori (MAP) estimate of the state, and the
particle filter.
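The scale-mixture representation the method relies on can be checked empirically: a Laplace(0, b) draw is distributed as b·R·Z, with R a standard Rayleigh variable and Z standard normal. A small moment-matching sketch (sample size and scale are illustrative, not from the paper):

```python
import math
import random

random.seed(1)

# Scale-mixture representation: Laplace(0, b) equals b * R * Z in
# distribution, with R ~ Rayleigh(1) and Z ~ N(0, 1).
b, n = 1.5, 200_000

mix = []
for _ in range(n):
    # Rayleigh(1) via inverse CDF: F(r) = 1 - exp(-r**2 / 2).
    r = math.sqrt(-2.0 * math.log(1.0 - random.random()))
    z = random.gauss(0.0, 1.0)
    mix.append(b * r * z)

var = sum(v * v for v in mix) / n
print(var)  # Laplace(0, b) has variance 2 * b**2 = 4.5; var should land near it
```

This representation is what makes a bank of parallel Kalman filters natural: conditioned on a sampled Rayleigh scale, the measurement noise is Gaussian, so each filter in the bank can run an ordinary Kalman update.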