Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging
In this paper, we study distributed big-data nonconvex optimization in
multi-agent networks. We consider the (constrained) minimization of the sum of
a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a
convex (possibly) nonsmooth regularizer. Our interest is in big-data problems
wherein there is a large number of variables to optimize. If treated by means
of standard distributed optimization algorithms, these large-scale problems may
be intractable, due to the prohibitive local computation and communication
burden at each node. We propose a novel distributed solution method whereby at
each iteration agents optimize and then communicate (in an uncoordinated
fashion) only a subset of their decision variables. To deal with the
nonconvexity of the cost function, the novel scheme hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a tracking mechanism
instrumental in locally estimating gradient averages; and ii) a novel block-wise
consensus-based protocol to perform local block-averaging operations and
gradient tracking. Asymptotic convergence to stationary solutions of the
nonconvex problem is established. Finally, numerical results show the
effectiveness of the proposed algorithm and highlight how the block dimension
impacts the communication overhead and the practical convergence speed.
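
As a reading aid, the block-iterative idea can be sketched in a few lines of Python. This is a hedged, minimal rendering under simplifying assumptions, not the paper's exact algorithm: complete-graph averaging, convex least-squares local losses standing in for the nonconvex sum-utility, an l1 regularizer, and cyclic rather than uncoordinated block selection; all names and constants are illustrative.

import numpy as np

np.random.seed(0)
n_agents, dim, n_blocks = 5, 12, 4
blocks = np.split(np.arange(dim), n_blocks)        # 4 blocks of 3 variables
A = [np.random.randn(8, dim) for _ in range(n_agents)]   # local data
b = [np.random.randn(8) for _ in range(n_agents)]
lam, tau, step = 0.1, 1.0, 0.5                     # l1 weight, prox weight, step
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # complete-graph averaging

def grad(i, x):                                    # gradient of smooth local loss
    return A[i].T @ (A[i] @ x - b[i]) / len(b[i])

def soft(v, t):                                    # prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros((n_agents, dim))
g_mem = np.array([grad(i, x[i]) for i in range(n_agents)])  # last used gradients
y = g_mem.copy()                                   # block-wise gradient trackers

for k in range(400):
    bk = blocks[k % n_blocks]                      # block processed this round
    for i in range(n_agents):
        # SCA subproblem on block bk: linearization + prox via the tracker y
        x_hat = soft(x[i, bk] - y[i, bk] / tau, lam / tau)
        x[i, bk] += step * (x_hat - x[i, bk])
    x[:, bk] = W @ x[:, bk]                        # block-wise consensus averaging
    g_new = np.array([grad(i, x[i]) for i in range(n_agents)])
    y[:, bk] = W @ y[:, bk] + g_new[:, bk] - g_mem[:, bk]  # tracking update
    g_mem[:, bk] = g_new[:, bk]

print("consensus error:", float(np.linalg.norm(x - x.mean(axis=0))))

Only the active block is averaged and tracked per round, which is what keeps the per-iteration communication proportional to the block dimension rather than to the full problem size.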
Randomized dual proximal gradient for large-scale distributed optimization
In this paper we consider distributed optimization problems in which the cost
function is separable (i.e., a sum of possibly non-smooth functions all
sharing a common variable) and can be split into a strongly convex term and a
convex one. The second term is typically used to encode constraints or to
regularize the solution. We propose an asynchronous, distributed optimization
algorithm over an undirected topology, based on a proximal gradient update on
the dual problem. We show that by means of a proper choice of primal
variables, the dual problem is separable and the dual variables can be stacked
into separate blocks. This allows us to show that a distributed
gossip update can be obtained by means of a randomized block-coordinate
proximal gradient on the dual function.
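
A hedged sketch of the mechanism, under simplifying assumptions: scalar strongly convex quadratics, a cycle graph, and the nonsmooth term dropped so the proximal step reduces to a plain dual gradient step. Waking up one random edge and updating its dual variable is exactly a gossip exchange between the two incident agents; all names are illustrative, not the paper's exact algorithm.

import numpy as np

np.random.seed(1)
c = np.array([1.0, 4.0, -2.0, 3.0])            # minimizers of f_i(x) = 0.5*(x-c_i)^2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # undirected cycle graph
lam = {e: 0.0 for e in edges}                  # one dual variable per edge
alpha = 0.2                                    # dual step size

def primal(i):
    # closed-form minimizer of f_i(x) + x * (signed sum of incident duals)
    s = sum(lam[e] * (1.0 if e[0] == i else -1.0) for e in edges if i in e)
    return c[i] - s

for _ in range(2000):
    e = edges[np.random.randint(len(edges))]   # wake up one random edge (gossip)
    i, j = e
    lam[e] += alpha * (primal(i) - primal(j))  # block dual gradient ascent

print("primal values:", [round(primal(i), 3) for i in range(4)],
      "consensus value:", c.mean())

Because the dual variables decouple edge by edge, each randomized block update touches only the two agents incident to that edge, which is what makes the scheme asynchronous.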
Dynamic and Distributed Online Convex Optimization for Demand Response of Commercial Buildings
We extend the regret analysis of the online distributed weighted dual
averaging (DWDA) algorithm [1] to the dynamic setting and provide the tightest
dynamic regret bound known to date with respect to the time horizon for a
distributed online convex optimization (OCO) algorithm. Our bound is linear in
the cumulative difference between consecutive optima and does not depend
explicitly on the time horizon. We then use the dynamic online DWDA algorithm
(D-ODWDA) to formulate a performance-guaranteed distributed online demand response approach
for heating, ventilation, and air-conditioning (HVAC) systems of commercial
buildings. We show the performance of our approach for fast-timescale demand
response in numerical simulations and obtain demand response decisions that
closely reproduce the centralized optimal ones.
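
A minimal, hedged sketch of distributed weighted dual averaging in a dynamic setting (illustrative, not the exact D-ODWDA recursion of the paper): agents track a drifting optimum with quadratic losses, mix neighbors' accumulated dual variables through a doubly stochastic matrix, and play a projected step with a decaying rate.

import numpy as np

np.random.seed(2)
n, T = 4, 300
W = np.array([[.50, .25, .00, .25],
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])   # doubly stochastic ring weights
d = np.array([0.5, -0.5, 1.0, -1.0])   # agent offsets (sum to zero)
z = np.zeros(n)                        # accumulated dual variables
x = np.zeros(n)                        # current plays

for t in range(1, T + 1):
    theta = np.sin(0.02 * t)           # slowly drifting network-wide optimum
    g = x - theta - d                  # gradients of 0.5*(x - theta - d_i)^2
    z = W @ z + g                      # weighted dual-averaging mixing step
    x = np.clip(-z / np.sqrt(t), -2.0, 2.0)   # projected step, rate 1/sqrt(t)

print("final decisions:", np.round(x, 3), "target:", round(np.sin(0.02 * T), 3))

When the optimum drifts slowly, the cumulative distance between consecutive optima stays small, which is exactly the quantity the dynamic regret bound of the paper is linear in.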
Distributed Stochastic Optimization over Time-Varying Noisy Network
This paper is concerned with a distributed stochastic multi-agent optimization
problem over a class of time-varying networks with slowly decreasing
communication noise. The problem is considered in a composite optimization
setting, which is more general for noisy network optimization. It is
noteworthy that existing methods for noisy network optimization are
Euclidean-projection based. We present two related classes of non-Euclidean
methods and investigate their convergence behavior. One is a distributed
stochastic composite mirror-descent method (DSCMD-N), which provides a more
general algorithmic framework than earlier works in this literature. As a
counterpart, we also consider a composite dual-averaging method (DSCDA-N) for
noisy network optimization. The main error bounds for DSCMD-N and DSCDA-N are
obtained. The trade-off among step sizes, noise-decay rates, and convergence
rates of the algorithms is analyzed in detail. To the best of our knowledge,
this is the first work to analyze and derive convergence rates of
optimization algorithms in noisy network optimization. We show that an optimal
rate of $O(1/\sqrt{T})$ in nonsmooth convex optimization can be obtained for the
proposed methods under appropriate communication-noise conditions. Moreover,
convergence rates of different orders are comprehensively derived in both the
expectation and the high-probability sense.
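
A hedged sketch of the non-Euclidean flavor of such a method, under simplifying assumptions (negative-entropy mirror map on the simplex, linear losses, complete-graph mixing, additive Gaussian link noise decaying as 1/t); the multiplicative exponentiated-gradient update below is one standard instance of mirror descent, not the paper's exact DSCMD-N.

import numpy as np

np.random.seed(3)
n, dim, T = 4, 5, 400
W = np.full((n, n), 0.25)                  # complete-graph mixing weights
c = np.random.rand(n, dim)                 # local linear losses f_i(x) = <c_i, x>
x = np.full((n, dim), 1.0 / dim)           # start at the simplex center

for t in range(1, T + 1):
    logx = np.log(x)                       # dual (mirror) coordinates
    noise = np.random.randn(n, n, dim) / t # slowly decreasing link noise
    mixed = np.stack([sum(W[i, j] * (logx[j] + noise[i, j]) for j in range(n))
                      for i in range(n)])  # consensus on noisy dual variables
    g = c + 0.1 * np.random.randn(n, dim)  # stochastic gradient oracles
    x = np.exp(mixed - g / np.sqrt(t))     # exponentiated-gradient step
    x /= x.sum(axis=1, keepdims=True)      # Bregman projection onto the simplex

print("mean loss:", round(float(np.mean(x @ c.mean(axis=0))), 3),
      "best achievable:", round(float(c.mean(axis=0).min()), 3))

Mixing in the dual (log) domain rather than averaging iterates directly is what distinguishes the non-Euclidean scheme from the Euclidean-projection-based methods the paper contrasts against.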
D-SVM over Networked Systems with Non-Ideal Linking Conditions
This paper considers distributed optimization algorithms, with application in
binary classification via distributed support-vector-machines (D-SVM) over
multi-agent networks subject to link nonlinearities. The agents cooperatively
solve a consensus-constrained distributed optimization problem via
continuous-time dynamics, while the links are subject to strongly
sign-preserving odd nonlinearities. Logarithmic quantization and clipping
(saturation) are two examples
of such nonlinearities. In contrast to existing literature that mostly
considers ideal links and perfect information exchange over linear channels, we
show how general sector-bounded models affect the convergence to the optimizer
(i.e., the SVM classifier) over dynamic balanced directed networks. In general,
any odd sector-bounded nonlinear mapping can be applied to our dynamics. The
main challenge is to show that the proposed system dynamics always have one
zero eigenvalue (associated with the consensus) and the other eigenvalues all
have negative real parts. This is done by recalling arguments from matrix
perturbation theory. Then, the solution is shown to converge to the agreement
state under certain conditions. For example, the admissible gradient-tracking
(GT) step size is tighter than in the linear case by factors related to the
upper and lower sector bounds. To the best of our knowledge, no existing work
in the distributed optimization and learning literature considers such
non-ideal link conditions.
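
A hedged, discretized sketch of the idea (illustrative; the paper analyzes the continuous-time dynamics and the SVM objective, while local quadratics stand in here): every exchanged difference passes through an odd, sector-bounded clipping map, and a gradient-tracking variable drives the consensus flow.

import numpy as np

np.random.seed(4)
n = 5
c = np.linspace(-2.0, 2.0, n)              # local quadratic minimizers
x = np.random.randn(n)
y = x - c                                  # gradient trackers, init at local grads

def phi(v):                                # odd, sector-bounded link nonlinearity
    return np.clip(v, -0.5, 0.5)

def lap(v):                                # nonlinear consensus coupling
    return np.array([sum(phi(v[i] - v[j]) for j in range(n)) for i in range(n)])

h, alpha = 0.05, 0.5                       # Euler step and GT gain
for _ in range(4000):
    g_old = x - c
    x = x - h * (lap(x) + alpha * y)       # consensus flow driven by the tracker
    y = y - h * lap(y) + (x - c) - g_old   # tracker: consensus + gradient change

print("agents:", np.round(x, 3), "optimizer:", c.mean())

Because phi is odd, the nonlinear coupling still sums to zero across agents, so the tracker preserves the average gradient; the clipping only slows mixing, matching the tighter step-size bounds described above.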
Distributed Online Convex Optimization with an Aggregative Variable
This paper investigates distributed online convex optimization in the
presence of an aggregative variable without any global/central coordinators
over a multi-agent network, where each individual agent is only able to access
partial information of time-varying global loss functions, thus requiring local
information exchanges between neighboring agents. Motivated by many
real-world applications, the considered local loss functions depend not only on
their own decision variables, but also on an aggregative variable, such as the
average of all decision variables. To handle this problem, an Online
Distributed Gradient Tracking algorithm (O-DGT) is proposed with exact gradient
information, and it is shown that the dynamic regret is upper bounded by three
terms: a sublinear term, a path variation term, and a gradient variation term.
Meanwhile, the O-DGT algorithm is also analyzed with stochastic/noisy
gradients, showing that the expected dynamic regret has the same upper bound as
the exact gradient case. To the best of our knowledge, this paper is the first to
study online convex optimization in the presence of an aggregative variable,
which enjoys new characteristics in comparison with the conventional scenario
without the aggregative variable. Finally, a numerical experiment is provided
to corroborate the obtained theoretical results.
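
A minimal, hedged sketch of the online gradient-tracking idea under simplifying assumptions (scalar decisions, identical aggregate loss terms across agents, exact gradients; names are illustrative, not the exact O-DGT recursion): each agent runs dynamic average consensus to estimate the aggregate sigma = mean(x) and descends a composite gradient that includes the aggregate term.

import numpy as np

np.random.seed(5)
n, T, step = 4, 500, 0.05
W = np.array([[.50, .25, .00, .25],
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])   # doubly stochastic ring weights
x = np.zeros(n)
s = x.copy()                            # local estimates of sigma = mean(x)

for t in range(T):
    r = np.sin(0.01 * t)                # drifting reference inside the losses
    # f_{i,t}(x_i, sigma) = 0.5*(x_i - r)^2 + 0.5*(sigma - r)^2
    g_own = x - r                       # partial gradient w.r.t. own decision
    g_agg = s - r                       # aggregate part: with identical aggregate
                                        # terms, (1/n)*sum_j df_j/dsigma = sigma - r
    x_new = W @ x - step * (g_own + g_agg)
    s = W @ s + x_new - x               # dynamic average consensus on mean(x)
    x = x_new

print("decisions:", np.round(x, 3), "reference:", round(np.sin(0.01 * (T - 1)), 3))

The dynamic-average-consensus update for s is what lets each agent act on the network-wide aggregate without any central coordinator, which is the distinguishing feature of the aggregative setting.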