Differentially Private Distributed Optimization
In the distributed optimization and iterative consensus literature, a standard
problem is for agents to minimize a cost function, expressed as a sum of the
agents' local cost functions, over a subset of Euclidean space. In this paper,
we study the private distributed optimization (PDOP) problem with the
additional requirement that the cost function of the individual agents should
remain differentially private. The adversary attempts to infer information
about the private cost functions from the messages that the agents exchange.
Achieving differential privacy requires that any change of an individual's cost
function only results in unsubstantial changes in the statistics of the
messages. We propose a class of iterative algorithms for solving PDOP, which
achieves differential privacy and convergence to the optimal value. Our
analysis reveals how the achieved accuracy and privacy level depend on the
parameters of the algorithm. We observe that achieving ε-differential privacy
inherently limits the attainable accuracy of the algorithm.
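The message-perturbation mechanism the abstract describes can be sketched on toy local quadratics. Everything below, the objective f_i(x) = (x - a_i)^2, the mixing weights, the noise schedule, and all parameter values, is an illustrative assumption, not the paper's actual algorithm:

```python
import numpy as np

def dp_distributed_gradient(targets, weights, iters=200, noise0=0.1, seed=0):
    """Each agent i holds a private quadratic f_i(x) = (x - a_i)^2, so the
    minimizer of the sum is the mean of `targets`.  Agents broadcast
    Laplace-perturbed states, mix them with doubly stochastic weights,
    and take a diminishing (1/k) local gradient step."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(targets))                   # agent states
    for k in range(1, iters + 1):
        shared = x + rng.laplace(scale=noise0 / k, size=len(x))  # DP messages
        mixed = weights @ shared                 # consensus averaging
        x = mixed - (1.0 / k) * 2.0 * (x - targets)  # local gradient step
    return x

# three agents with doubly stochastic mixing weights (assumed topology)
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
states = dp_distributed_gradient(np.array([1.0, 2.0, 3.0]), W)
# every state should settle near the global optimum, the mean 2.0
```

Only the perturbed states cross the network, while each agent's gradient stays local; the decaying noise scale illustrates the accuracy-privacy trade-off the abstract refers to.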
Tailoring Gradient Methods for Differentially-Private Distributed Optimization
Decentralized optimization is gaining increased traction due to its
widespread applications in large-scale machine learning and multi-agent
systems. The same mechanism that enables its success, i.e., information sharing
among participating agents, however, also leads to the disclosure of individual
agents' private information, which is unacceptable when sensitive data are
involved. As differential privacy is becoming a de facto standard for privacy
preservation, results have recently emerged that integrate differential privacy
with distributed optimization. Although such differential-privacy based
approaches for distributed optimization are efficient in both computation and
communication, directly incorporating differential privacy design in existing
distributed optimization approaches significantly compromises optimization
accuracy. In this paper, we redesign and tailor gradient methods for
differentially-private distributed optimization, proposing two
differential-privacy oriented gradient methods that can ensure both privacy and
optimality. We prove that the proposed distributed algorithms can ensure almost
sure convergence to an optimal solution under any persistent and
variance-bounded differential-privacy noise, which, to the best of our
knowledge, has not been reported before. The first algorithm is based on
static-consensus based gradient methods and only shares one variable in each
iteration. The second algorithm is based on dynamic-consensus
(gradient-tracking) based distributed optimization methods and, hence, it is
applicable to general directed interaction graph topologies. Numerical
comparisons with existing counterparts confirm the effectiveness of the
proposed approaches.
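The almost-sure convergence claim under persistent, variance-bounded noise rests on stochastic-approximation stepsizes. A minimal single-agent sketch (the toy objective and all parameters here are assumed, not from the paper) shows how a diminishing stepsize absorbs noise that never dies out:

```python
import numpy as np

def noisy_gd(grad, x0, iters=20000, seed=1):
    """Gradient descent under persistent, variance-bounded noise.  With
    stepsizes alpha_k = 1/k (sum alpha_k = inf, sum alpha_k^2 < inf) the
    iterate still converges even though the injected noise persists."""
    rng = np.random.default_rng(seed)
    x = x0
    for k in range(1, iters + 1):
        noisy_grad = grad(x) + rng.normal(scale=1.0)  # persistent DP noise
        x -= (1.0 / k) * noisy_grad
    return x

# toy objective (x - 3)^2, gradient 2(x - 3), minimizer x* = 3
x_final = noisy_gd(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

The key point is that the noise variance need not decay; the square-summable stepsizes alone suppress its cumulative effect.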
Gradient-tracking Based Differentially Private Distributed Optimization with Enhanced Optimization Accuracy
Privacy protection has become an increasingly pressing requirement in
distributed optimization. However, equipping distributed optimization with
differential privacy, the state-of-the-art privacy protection mechanism, will
unavoidably compromise optimization accuracy. In this paper, we propose an
algorithm to achieve rigorous ε-differential privacy in
gradient-tracking based distributed optimization with enhanced optimization
accuracy. More specifically, to suppress the influence of differential-privacy
noise, we propose a new robust gradient-tracking based distributed optimization
algorithm that allows both stepsize and the variance of injected noise to vary
with time. Then, we establish a new analysis approach that can characterize
the convergence of the gradient-tracking based algorithm under both constant
and time-varying stepsizes. To our knowledge, this is the first analysis
framework that can treat gradient-tracking based distributed optimization under
both constant and time-varying stepsizes in a unified manner. More importantly,
the new analysis approach gives a much less conservative analytical bound on
the stepsize compared with existing proof techniques for gradient-tracking
based distributed optimization. We also theoretically characterize the
influence of differential-privacy design on the accuracy of distributed
optimization, which reveals that inter-agent interaction has a significant
impact on the final optimization accuracy. The discovery prompts us to optimize
inter-agent coupling weights to minimize the optimization error induced by the
differential-privacy design. Numerical simulation results confirm the
theoretical predictions.
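The gradient-tracking structure with time-varying noise can be sketched on toy quadratics. The objective f_i(x) = (x - a_i)^2, the weight matrix, the stepsize, and the noise schedule below are all illustrative assumptions, not the paper's design:

```python
import numpy as np

def dp_gradient_tracking(targets, weights, alpha=0.1, iters=300,
                         noise0=0.05, seed=2):
    """Gradient tracking on toy quadratics f_i(x) = (x - a_i)^2 with
    Laplace noise of decaying scale noise0/k added to every shared
    variable.  x is the decision variable; y tracks the network-average
    gradient via a dynamic-consensus update."""
    rng = np.random.default_rng(seed)
    n = len(targets)
    x = np.zeros(n)
    g = 2.0 * (x - targets)              # current local gradients
    y = g.copy()                         # gradient tracker
    for k in range(1, iters + 1):
        s = noise0 / k
        x_new = weights @ (x + rng.laplace(scale=s, size=n)) - alpha * y
        g_new = 2.0 * (x_new - targets)
        y = weights @ (y + rng.laplace(scale=s, size=n)) + g_new - g
        x, g = x_new, g_new
    return x

W = np.array([[0.50, 0.25, 0.25],        # assumed doubly stochastic weights
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
states = dp_gradient_tracking(np.array([1.0, 2.0, 3.0]), W)
```

Because both x and the tracker y are shared, both must be perturbed for privacy, which is why the inter-agent coupling weights influence the final accuracy.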
Differentially Private Distributed Stochastic Optimization with Time-Varying Sample Sizes
Differentially private distributed stochastic optimization has become a hot
topic due to the urgent need for privacy protection in distributed stochastic
optimization. In this paper, two-time-scale stochastic approximation-type
algorithms for differentially private distributed stochastic optimization with
time-varying sample sizes are proposed using gradient- and output-perturbation
methods. For both the gradient- and output-perturbation cases, the convergence
of the algorithm and differential privacy with a finite cumulative privacy
budget over an infinite number of iterations are established simultaneously,
which differs substantially from existing works. The time-varying sample-size
method enhances the privacy level and is what makes the finite cumulative
budget attainable. By properly choosing a
Lyapunov function, the algorithm achieves almost-sure and mean-square
convergence even when the added privacy noises have an increasing variance.
Furthermore, we rigorously provide the mean-square convergence rates of the
algorithm and show how the added privacy noise affects the convergence rate of
the algorithm. Finally, numerical examples including distributed training on a
benchmark machine learning dataset are presented to demonstrate the efficiency
and advantages of the algorithms.
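The gradient-perturbation side of the time-varying sample-size idea can be sketched on a toy mean-estimation objective. The dataset, objective, batch-growth schedule, and budget value below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def dp_sgd_growing_batches(data, iters=200, eps_per_iter=0.5, seed=3):
    """Gradient perturbation with a time-varying (growing) sample size.
    Toy objective mean((x - d)^2) over private data d in [0, 1]; since
    the batch gradient is an average, replacing one sample changes it by
    at most 2/b, so the Laplace noise needed for a fixed per-iteration
    budget shrinks as the batch grows."""
    rng = np.random.default_rng(seed)
    x = 0.0
    for k in range(1, iters + 1):
        b = min(10 * k, len(data))              # time-varying sample size
        batch = rng.choice(data, size=b, replace=False)
        grad = 2.0 * (x - batch.mean())         # averaged stochastic gradient
        sensitivity = 2.0 / b                   # one-sample influence bound
        grad += rng.laplace(scale=sensitivity / eps_per_iter)
        x -= (1.0 / k) * grad
    return x

data = np.linspace(0.0, 1.0, 5000)              # stand-in "private" dataset
x_hat = dp_sgd_growing_batches(data)
# x_hat should approach the data mean, 0.5
```

Growing batches shrink the per-iteration sensitivity, which is the mechanism by which a larger sample size lets the cumulative privacy budget stay finite while the noise still decays.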
Novel gradient-based methods for data distribution and privacy in data science
With the increasing need to store data at different locations, designing algorithms that can analyze distributed data is becoming more important. In this thesis, we present several gradient-based algorithms customized for data distribution and privacy. First, we propose a provably convergent, second-order incremental, and inherently parallel algorithm that works with distributed data. By using a local quadratic approximation, we exploit curvature information to speed up convergence. We also illustrate that the parallel implementation of our algorithm outperforms a parallel stochastic gradient descent method on a large-scale data science problem.

This first algorithm addresses the use of data residing at different locations; however, that setting alone is not enough for data privacy. To guarantee the privacy of the data, we propose differentially private optimization algorithms in the second part of the thesis. The first of these employs a smoothing approach based on weighted averages of the history of gradients, which decreases the variance of the noise. This variance reduction is important for iterative optimization algorithms, since increasing the amount of noise in the algorithm can harm its performance. We also present a differentially private version of a recent multistage accelerated algorithm. These extensions use noise-related parameter selection, and the proposed stepsizes are proportional to the variance of the noisy gradient. Numerical experiments show that our algorithms outperform some well-known differentially private algorithms.
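The smoothing idea, averaging the gradient history to shrink the effective noise variance, can be sketched in a few lines. The exponential-moving-average form, the toy objective, and all parameter values below are illustrative assumptions, not the thesis's exact scheme:

```python
import numpy as np

def smoothed_dp_gd(grad, x0, iters=5000, beta=0.9, noise_std=1.0, seed=4):
    """Instead of stepping along the raw noisy gradient, step along an
    exponential moving average of the gradient history; averaging i.i.d.
    noise cuts its steady-state variance by a factor (1-beta)/(1+beta)."""
    rng = np.random.default_rng(seed)
    x, avg = x0, 0.0
    for k in range(1, iters + 1):
        noisy = grad(x) + rng.normal(scale=noise_std)  # DP-perturbed gradient
        avg = beta * avg + (1.0 - beta) * noisy        # weighted gradient history
        x -= (1.0 / k) * avg
    return x

# minimize (x - 2)^2 under heavy gradient noise
x_smooth = smoothed_dp_gd(lambda x: 2.0 * (x - 2.0), x0=0.0)
```

With beta = 0.9 the steady-state noise variance drops to about 5% of the raw value, at the cost of some lag in tracking the true gradient.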