
    Differentially Private Distributed Optimization

    In the distributed optimization and iterative consensus literature, a standard problem is for $N$ agents to minimize a function $f$ over a subset of Euclidean space, where the cost function is expressed as a sum $\sum_i f_i$. In this paper, we study the private distributed optimization (PDOP) problem with the additional requirement that the cost functions of the individual agents remain differentially private. The adversary attempts to infer information about the private cost functions from the messages that the agents exchange. Achieving differential privacy requires that any change in an individual's cost function results in only insubstantial changes in the statistics of the messages. We propose a class of iterative algorithms for solving PDOP that achieve differential privacy and convergence to the optimal value. Our analysis reveals how the achieved accuracy and the privacy levels depend on the parameters of the algorithm. We observe that to achieve $\epsilon$-differential privacy, the accuracy of the algorithm is of the order $O(1/\epsilon^2)$.
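
    As a rough illustration of the mechanism this abstract describes, the sketch below runs a consensus-plus-gradient iteration in which each agent broadcasts a Laplace-noise-perturbed copy of its state. The quadratic local costs, complete-graph mixing weights, step sizes, and noise scale are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Illustrative sketch only: noisy consensus + gradient steps in which each
# agent perturbs the message it shares. The quadratic costs f_i(x) = (x - a_i)^2,
# the mixing matrix W, the step sizes, and the Laplace noise scale are all
# assumptions made for this example, not the algorithm from the paper.

rng = np.random.default_rng(0)

N = 5                          # number of agents
a = rng.normal(size=N)         # each agent's private target; optimum of the sum is mean(a)
W = np.full((N, N), 1.0 / N)   # doubly stochastic mixing weights (complete graph)

x = np.zeros(N)                # agents' local estimates
eps = 1.0                      # privacy parameter (illustrative)
noise_scale = 1.0 / eps        # larger eps -> less noise, weaker privacy

T = 200
for t in range(1, T + 1):
    step = 1.0 / t
    # Each agent broadcasts a noise-perturbed copy of its state.
    messages = x + rng.laplace(scale=noise_scale * step, size=N)
    # Consensus on the received messages, then a local gradient step.
    grad = 2.0 * (x - a)       # gradient of the private quadratic cost
    x = W @ messages - step * grad

print("estimates:", x)
print("optimum of the sum:", a.mean())
```
    Scaling the injected noise with the diminishing step size is just one simple choice here; how noise level, step sizes, and accuracy interact is exactly the trade-off the abstract quantifies.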

    Non-Convex Distributed Optimization

    We study distributed non-convex optimization on a time-varying multi-agent network. Each node has access to its own smooth local cost function, and the collective goal is to minimize the sum of these functions. We generalize results obtained previously to the case of non-convex functions. Under some additional technical assumptions on the gradients, we prove convergence of the distributed push-sum algorithm to a critical point of the objective function. By introducing perturbations into the update process, we show almost sure convergence of the perturbed dynamics to a local minimum of the global objective function. Our analysis shows that this perturbed procedure converges at a rate of $O(1/t)$.
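
    As a minimal sketch of perturbed push-sum (gradient-push) dynamics of the kind this abstract refers to, the snippet below runs the iteration on a fixed directed ring with column-stochastic weights. The particular non-convex local costs, step-size schedule, and Gaussian perturbation are assumptions made for this example, not the exact scheme analyzed in the paper.

```python
import numpy as np

# Sketch of a perturbed push-sum / gradient-push iteration. The fixed
# column-stochastic mixing matrix, the local costs, the step sizes, and the
# vanishing Gaussian perturbation are illustrative assumptions.

rng = np.random.default_rng(1)

N = 4
A = np.array([[0.5, 0.0, 0.0, 0.5],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5]])   # column-stochastic: directed ring with self-loops

c = rng.uniform(-1.0, 1.0, size=N)

def grad_fi(z, i):
    # Gradient of the smooth non-convex local cost f_i(z) = (z - c_i)^2 + cos(z).
    return 2.0 * (z - c[i]) - np.sin(z)

x = rng.normal(size=N)     # push-sum numerators
y = np.ones(N)             # push-sum weights
T = 500
for t in range(1, T + 1):
    step = 1.0 / t
    x_mix = A @ x
    y_mix = A @ y
    z = x_mix / y_mix                        # de-biased local estimates
    grads = np.array([grad_fi(z[i], i) for i in range(N)])
    noise = 0.1 * step * rng.normal(size=N)  # vanishing perturbation
    x = x_mix - step * grads + noise
    y = y_mix

print("local estimates:", x / y)
```
    The push-sum weights $y$ correct for the directedness of the communication graph, while the decaying noise plays the role of the perturbations the abstract uses to escape saddle points.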

    Distributed Delayed Stochastic Optimization

    We analyze the convergence of gradient-based optimization algorithms that base their updates on delayed stochastic gradient information. The main application of our results is to the development of gradient-based distributed optimization algorithms where a master node performs parameter updates while worker nodes compute stochastic gradients based on local information in parallel, which may give rise to delays due to asynchrony. We take motivation from statistical problems where the size of the data is so large that it cannot fit on one computer; with the advent of huge datasets in biology, astronomy, and the internet, such problems are now common. Our main contribution is to show that for smooth stochastic problems, the delays are asymptotically negligible and we can achieve order-optimal convergence results. In application to distributed optimization, we develop procedures that overcome communication bottlenecks and synchronization requirements. We show $n$-node architectures whose optimization error in stochastic problems, in spite of asynchronous delays, scales asymptotically as $O(1/\sqrt{nT})$ after $T$ iterations. This rate is known to be optimal for a distributed system with $n$ nodes even in the absence of delays. We additionally complement our theoretical results with numerical experiments on a statistical machine learning task. (Comment: 27 pages, 4 figures)
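
    To make the master/worker setup concrete, the toy simulation below applies stochastic gradients that reach the master only after a fixed delay. The quadratic objective, the delay model, and the $1/\sqrt{t}$ step size are illustrative assumptions, not the procedure developed in the paper.

```python
import collections
import numpy as np

# Toy simulation of delayed stochastic gradient descent: a master applies
# gradients that workers computed on stale parameters. The quadratic
# stochastic objective, fixed delay, and step-size schedule are illustrative
# assumptions for this sketch.

rng = np.random.default_rng(2)

d = 10
theta_star = rng.normal(size=d)           # minimizer of the expected objective

def stochastic_grad(theta):
    # Noisy gradient of f(theta) = 0.5 * ||theta - theta_star||^2.
    return (theta - theta_star) + 0.1 * rng.normal(size=d)

n_workers = 4
delay = n_workers                         # each gradient arrives `delay` steps late
theta = np.zeros(d)
pending = collections.deque()             # gradients currently in flight

T = 2000
for t in range(1, T + 1):
    # A worker computes a gradient at the current (soon-to-be stale) parameters.
    pending.append(stochastic_grad(theta))
    # The master only applies a gradient once it has "arrived" after the delay.
    if len(pending) > delay:
        g = pending.popleft()
        theta -= (1.0 / np.sqrt(t)) * g   # diminishing step size ~ 1/sqrt(t)

print("distance to optimum:", np.linalg.norm(theta - theta_star))
```
    With a bounded delay and a diminishing step size, the stale gradients still point roughly toward the optimum, which is the intuition behind the delays becoming asymptotically negligible in the abstract's result.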