1,985 research outputs found
Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging
In this paper, we study distributed big-data nonconvex optimization in
multi-agent networks. We consider the (constrained) minimization of the sum of
a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a
convex (possibly) nonsmooth regularizer. Our interest is in big-data problems
wherein there is a large number of variables to optimize. If treated by means
of standard distributed optimization algorithms, these large-scale problems may
be intractable, due to the prohibitive local computation and communication
burden at each node. We propose a novel distributed solution method whereby at
each iteration agents optimize and then communicate (in an uncoordinated
fashion) only a subset of their decision variables. To deal with the
nonconvexity of the cost function, the scheme hinges on Successive Convex
Approximation (SCA) techniques, coupled with i) a tracking mechanism
instrumental in locally estimating gradient averages; and ii) a novel
block-wise consensus-based protocol to perform local block-averaging
operations and gradient tracking. Asymptotic convergence to stationary
solutions of the
nonconvex problem is established. Finally, numerical results show the
effectiveness of the proposed algorithm and highlight how the block dimension
impacts the communication overhead and the practical convergence speed.
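To make the block-wise pattern concrete, here is a minimal Python sketch, not the paper's exact algorithm: it assumes separable quadratic local costs (a convex stand-in for the nonconvex sum-utility), a fixed ring network with a doubly stochastic mixing matrix, and a single block drawn at random each round, whereas the paper allows uncoordinated, agent-specific block selections.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, n_blocks = 5, 12, 4
blocks = np.split(np.arange(dim), n_blocks)

# Separable quadratic local costs f_i(x) = 0.5 * sum_k d_ik (x_k - c_ik)^2
# (illustrative assumption; separability keeps block-wise tracking exact).
d = rng.uniform(0.5, 2.0, (n_agents, dim))
c = rng.standard_normal((n_agents, dim))
grad = lambda i, x: d[i] * (x - c[i])

# Doubly stochastic mixing matrix of a ring network (assumed topology).
W = np.eye(n_agents) / 2
for i in range(n_agents):
    W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))                            # local copies
y = np.array([grad(i, x[i]) for i in range(n_agents)])   # gradient trackers
g_old = y.copy()
alpha = 0.1

for t in range(3000):
    blk = blocks[rng.integers(n_blocks)]    # block processed this round
    x[:, blk] -= alpha * y[:, blk]          # local step on the block only
    x[:, blk] = W @ x[:, blk]               # block-wise consensus averaging
    g_new = np.array([grad(i, x[i])[blk] for i in range(n_agents)])
    y[:, blk] = W @ y[:, blk] + g_new - g_old[:, blk]   # block-wise tracking
    g_old[:, blk] = g_new

print("optimum per coordinate:", (d * c).sum(0) / d.sum(0))
print("consensus estimate    :", x.mean(axis=0))
```

Only the entries of the active block travel over the network in each round, which is where the communication savings discussed above come from.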
Differentially Private Distributed Optimization
In distributed optimization and iterative consensus literature, a standard
problem is for agents to minimize a function $f$ over a subset of Euclidean
space, where the cost function is expressed as a sum $f = \sum_i f_i$ of the
agents' individual cost functions $f_i$. In this paper,
we study the private distributed optimization (PDOP) problem with the
additional requirement that the cost function of the individual agents should
remain differentially private. The adversary attempts to infer information
about the private cost functions from the messages that the agents exchange.
Achieving differential privacy requires that any change in an individual's cost
function results in only insubstantial changes in the statistics of the
messages. We propose a class of iterative algorithms for solving PDOP, which
achieves differential privacy and convergence to the optimal value. Our
analysis reveals the dependence of the achieved accuracy and the privacy levels
on the parameters of the algorithm. We observe that to achieve
$\epsilon$-differential privacy the accuracy of the algorithm has the order of
$O(1/\epsilon^2)$.
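As an illustration of the general recipe, here is a hedged Python sketch of one such iteration, not the paper's exact scheme: agents perturb the states they broadcast with Laplace noise whose scale is governed by an assumed privacy parameter $\epsilon$, then take a consensus-plus-gradient step; the costs, network, step size, and noise schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, dim = 5, 3
epsilon = 0.5                                # assumed privacy parameter

# Private quadratic costs f_i(x) = 0.5 * ||x - c_i||^2 (illustrative).
c = rng.standard_normal((n_agents, dim))
grad = lambda i, x: x - c[i]

# Doubly stochastic mixing matrix of a ring network (assumed).
W = np.eye(n_agents) / 2
for i in range(n_agents):
    W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))
for t in range(1, 500):
    # Each agent broadcasts a Laplace-perturbed state; the noise, scaled by
    # 1/epsilon and decaying geometrically (assumed schedule), is what hides
    # the private cost function from an adversary watching the messages.
    scale = (1.0 / epsilon) * 0.95 ** t
    msg = x + rng.laplace(scale=scale, size=x.shape)
    g = np.array([grad(i, x[i]) for i in range(n_agents)])
    x = W @ msg - (1.0 / t) * g              # consensus + diminishing-step descent

print("agents' final average:", x.mean(axis=0))
print("true optimum         :", c.mean(axis=0))
```

The accuracy-privacy trade-off is visible directly: shrinking epsilon inflates the injected noise and degrades the final estimate, consistent with the $O(1/\epsilon^2)$ dependence above.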
Distributed optimization over time-varying directed graphs
We consider distributed optimization by a collection of nodes, each having
access to its own convex function, whose collective goal is to minimize the sum
of the functions. The communications between nodes are described by a
time-varying sequence of directed graphs, which is uniformly strongly
connected. For such communications, assuming that every node knows its
out-degree, we develop a broadcast-based algorithm, termed the
subgradient-push, which steers every node to an optimal value under a standard
assumption of subgradient boundedness. Implementing the subgradient-push
requires knowledge of neither the number of agents nor the graph sequence.
Our analysis shows that the subgradient-push algorithm converges at a rate of
$O(\ln t/\sqrt{t})$, where the constant depends on the initial values at the
nodes, the subgradient norms, and, more interestingly, on both the consensus
speed and the imbalances of influence among the nodes.
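The push-sum mechanics are simple enough to sketch directly. Below is a minimal Python version on a fixed directed graph (a time-varying, uniformly strongly connected sequence works the same way); the local costs, graph, and step-size schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dim = 4, 2

# Convex local costs f_i(z) = |a_i . z - b_i| with bounded subgradients,
# matching the abstract's standard assumption (illustrative choice).
a = rng.standard_normal((n, dim))
b = rng.standard_normal(n)
subgrad = lambda i, z: np.sign(a[i] @ z - b[i]) * a[i]

# Fixed directed graph as out-neighbor lists, self-loops included (assumed);
# each node only ever uses len(out[j]), i.e., its own out-degree.
out = {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [3, 0, 1]}

x = np.zeros((n, dim))      # push-sum numerators
y = np.ones(n)              # push-sum weights, y_i(0) = 1

for t in range(1, 3000):
    x_new, y_new = np.zeros_like(x), np.zeros(n)
    for j in range(n):                      # broadcast 1/out-degree shares
        share = 1.0 / len(out[j])
        for i in out[j]:
            x_new[i] += share * x[j]
            y_new[i] += share * y[j]
    z = x_new / y_new[:, None]              # de-biased local estimates
    g = np.array([subgrad(i, z[i]) for i in range(n)])
    x = x_new - g / np.sqrt(t)              # diminishing-step subgradient correction
    y = y_new

print("local estimates (should agree):\n", z)
```

Each node divides its outgoing values by its own out-degree, the only piece of global information a node needs, and the running weights y de-bias the imbalance of influence that directed edges introduce.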
- …