Privacy-Preserving Distributed Optimization via Subspace Perturbation: A General Framework
As the modern world becomes increasingly digitized and interconnected,
distributed signal processing has proven effective in handling the resulting
large volumes of data. However, a major challenge limiting the broad use of
distributed signal processing techniques is the issue of privacy in handling
sensitive data. To address this privacy issue, we propose a novel yet general
subspace perturbation method for privacy-preserving distributed optimization,
which allows each node to obtain the desired solution while protecting its
private data. In particular, we show that the dual variables introduced in each
distributed optimizer will not converge in a certain subspace determined by the
graph topology. Additionally, the optimization variable is ensured to converge
to the desired solution, because it is orthogonal to this non-convergent
subspace. We therefore propose to insert noise in the non-convergent subspace
through the dual variable such that the private data are protected, and the
accuracy of the desired solution is completely unaffected. Moreover, the
proposed method is shown to be secure under two widely-used adversary models:
passive and eavesdropping. Furthermore, we consider several distributed
optimizers such as ADMM and PDMM to demonstrate the general applicability of
the proposed method. Finally, we test the performance through a set of
applications. Numerical tests indicate that the proposed method outperforms
existing methods in terms of several metrics, including estimation accuracy,
privacy level, communication cost, and convergence rate.
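The orthogonality argument above can be illustrated in a few lines. This is a minimal sketch, not the paper's full algorithm: the matrix `C` is a hypothetical edge-constraint matrix for a 3-node cycle graph, and the point shown is only that noise inserted in the null space of C^T is invisible to any primal update that consumes C^T times the dual variable.

```python
import numpy as np

# Hypothetical edge-constraint matrix for a 3-node cycle (one row per edge).
# In primal-dual optimizers of this kind, the primal update only sees
# C^T @ lam, so dual noise in null(C^T) cannot affect the solution.
C = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0],
              [1.0, 0.0, -1.0]])

# Basis of null(C^T) via SVD: right singular vectors with ~zero singular value.
_, s, Vt = np.linalg.svd(C.T)
null_basis = Vt[np.sum(s > 1e-10):]        # shape (1, 3) for this graph

rng = np.random.default_rng(0)
noise = null_basis.T @ rng.normal(size=null_basis.shape[0])

lam = np.ones(3)                            # some dual iterate
# The perturbed dual produces the identical primal input:
print(np.allclose(C.T @ (lam + noise), C.T @ lam))   # True
```

The same check fails for noise drawn outside this subspace, which is why the perturbation must be confined to the non-convergent directions.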
Theoretical Analysis of Primal-Dual Algorithm for Non-Convex Stochastic Decentralized Optimization
In recent years, decentralized learning has emerged as a powerful tool not
only for large-scale machine learning, but also for preserving privacy. One of
the key challenges in decentralized learning is that the data distribution held
by each node is statistically heterogeneous. To address this challenge, a
primal-dual algorithm called Edge-Consensus Learning (ECL) was proposed and
experimentally shown to be robust to heterogeneous data distributions.
However, the convergence rate of the ECL has been established only for convex
objective functions and remains unknown in the standard machine learning
setting where the objective is non-convex. Furthermore, the
intuitive reason why the ECL is robust to the heterogeneity of data
distributions has not been investigated. In this work, we first investigate the
relationship between the ECL and Gossip algorithm and show that the update
formulas of the ECL can be regarded as correcting the local stochastic gradient
in the Gossip algorithm. Then, we propose the Generalized ECL (G-ECL), which
contains the ECL as a special case, and provide the convergence rates of the
G-ECL in both (strongly) convex and non-convex settings, which do not depend on
the heterogeneity of data distributions. Through synthetic experiments, we
demonstrate that the empirical behavior of both the G-ECL and the ECL agrees
with the theoretical convergence rate of the G-ECL.
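For context, the Gossip baseline the abstract contrasts with can be sketched as follows. This is a minimal illustration under assumed conditions, not the ECL or G-ECL themselves: each node holds a hypothetical scalar quadratic f_i(x) = 0.5(x - a_i)^2 with heterogeneous a_i standing in for heterogeneous data, and gradients are exact rather than stochastic.

```python
import numpy as np

# Per-node targets; the minimizer of sum_i 0.5*(x - a_i)^2 is mean(a) = 3.
a = np.array([0.0, 3.0, 6.0])

# Doubly stochastic mixing matrix for a 3-node ring.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

eta = 0.1
x = np.zeros(3)                      # one scalar model per node
for _ in range(500):
    x = W @ x - eta * (x - a)        # mix with neighbors, then local step

# Nodes reach approximate consensus near mean(a); the residual spread
# reflects the constant step size times the data heterogeneity.
print(x)
```

With this constant step size the iterates settle near 3.0 but retain a small node-to-node spread proportional to the heterogeneity of the a_i, which is the effect the ECL's dual-variable correction is designed to remove.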
Derivation and Analysis of the Primal-Dual Method of Multipliers Based on Monotone Operator Theory
In this paper, we present a novel derivation of an existing algorithm for distributed optimization termed the primal-dual method of multipliers (PDMM). In contrast to its initial derivation, monotone operator theory is used to connect PDMM with other first-order methods such as Douglas-Rachford splitting and the alternating direction method of multipliers, thus providing insight into its operation. In particular, we show how PDMM combines a lifted dual form in conjunction with Peaceman-Rachford splitting to facilitate distributed optimization in undirected networks. We additionally demonstrate sufficient conditions for primal convergence for strongly convex differentiable functions and strengthen this result for strongly convex functions with Lipschitz continuous gradients by introducing a primal geometric convergence bound.
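The Peaceman-Rachford splitting the derivation builds on can be sketched in the scalar case. This is a minimal illustration with hypothetical quadratics, not the PDMM algorithm itself: minimize f(x) + g(x) with f(x) = 0.5(x - a)^2 and g(x) = 0.5(x - b)^2 by iterating the composition of the two reflected resolvents, which is a contraction here because both functions are strongly convex with Lipschitz gradients.

```python
# Peaceman-Rachford splitting for min_x f(x) + g(x) with scalar
# quadratics f(x) = 0.5*(x - a)^2, g(x) = 0.5*(x - b)^2;
# the minimizer is (a + b) / 2.
a, b, c = 1.0, 5.0, 0.5          # c is the step-size parameter

def prox_quad(v, t, c):
    """Proximal operator of 0.5*(x - t)^2 with parameter c."""
    return (v + c * t) / (1.0 + c)

def reflect(v, t, c):
    """Reflected resolvent 2*prox - I."""
    return 2.0 * prox_quad(v, t, c) - v

z = 0.0
for _ in range(60):
    # PRS iteration z <- R_f(R_g(z)); contractive for these quadratics,
    # so z converges geometrically to its fixed point.
    z = reflect(reflect(z, b, c), a, c)

x = prox_quad(z, b, c)           # recover the primal point from z
print(x)                         # close to (a + b) / 2 = 3.0
```

The geometric contraction of z toward its fixed point mirrors, in miniature, the primal geometric convergence bound the abstract establishes for strongly convex functions with Lipschitz continuous gradients.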