Distributed Nonconvex Multiagent Optimization Over Time-Varying Networks
We study nonconvex distributed optimization in multiagent networks where the communication between nodes is modeled as a time-varying sequence of arbitrary digraphs.
digraphs. We introduce a novel broadcast-based distributed algorithmic
framework for the (constrained) minimization of the sum of a smooth (possibly
nonconvex and nonseparable) function, i.e., the agents' sum-utility, plus a
convex (possibly nonsmooth and nonseparable) regularizer. The latter is usually
employed to enforce some structure in the solution, typically sparsity. The
proposed method hinges on Successive Convex Approximation (SCA) techniques
coupled with i) a tracking mechanism instrumental in locally estimating the gradients of the agents' cost functions; and ii) a novel broadcast protocol to
disseminate information and distribute the computation among the agents.
Asymptotic convergence to stationary solutions is established. A key feature of the proposed algorithm is that it requires neither double-stochasticity of the consensus matrices (column stochasticity suffices) nor knowledge of the graph sequence to be implemented. To the best of our knowledge, the proposed
framework is the first broadcast-based distributed algorithm for convex and
nonconvex constrained optimization over arbitrary, time-varying digraphs.
Numerical results show that our algorithm outperforms current schemes on both
convex and nonconvex problems.
Comment: Copyright 2001 SS&C. Published in the Proceedings of the 50th Annual Asilomar Conference on Signals, Systems, and Computers, Nov. 6-9, 2016, CA, US.
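To make the kind of iteration described above concrete, the following is a minimal sketch of gradient tracking over time-varying, column-stochastic digraphs in the style of Push-DIGing; it is not the paper's algorithm. The constrained SCA surrogate and the nonsmooth regularizer are omitted, and the least-squares costs, random digraphs, and step size are toy assumptions made only for illustration.

import numpy as np

rng = np.random.default_rng(0)
N, d, alpha = 10, 5, 0.005                      # agents, variables, step size (assumed)
A = [rng.standard_normal((20, d)) for _ in range(N)]
b = [rng.standard_normal(20) for _ in range(N)]

def grad(i, x):                                 # gradient of the local cost 0.5*||A_i x - b_i||^2
    return A[i].T @ (A[i] @ x - b[i])

def column_stochastic_weights(n):               # random digraph; each column sums to one
    mask = (rng.random((n, n)) < 0.5) | np.eye(n, dtype=bool)
    W = mask.astype(float)
    return W / W.sum(axis=0, keepdims=True)

u = np.zeros((N, d))                            # biased local estimates
phi = np.ones(N)                                # push-sum scaling weights
z = u.copy()                                    # de-biased estimates
y = np.array([grad(i, z[i]) for i in range(N)]) # gradient trackers

for _ in range(3000):
    W = column_stochastic_weights(N)            # a new digraph at every iteration
    u = W @ (u - alpha * y)                     # mix locally corrected estimates
    phi = W @ phi
    z_new = u / phi[:, None]                    # push-sum de-biasing
    y = W @ y + np.array([grad(i, z_new[i]) - grad(i, z[i]) for i in range(N)])
    z = z_new

print("max disagreement across agents:", np.ptp(z, axis=0).max())

Only column stochasticity of the weights and no knowledge of the graph sequence are used here, which is the structural point of the abstract; the broadcast protocol and the convergence guarantees are the paper's own contribution.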
Transformation Method for Solving Hamilton-Jacobi-Bellman Equation for Constrained Dynamic Stochastic Optimal Allocation Problem
In this paper we propose and analyze a method based on the Riccati
transformation for solving the evolutionary Hamilton-Jacobi-Bellman equation
arising from the stochastic dynamic optimal allocation problem. We show how the
fully nonlinear Hamilton-Jacobi-Bellman equation can be transformed into a
quasi-linear parabolic equation whose diffusion function is obtained as the value function of a certain parametric convex optimization problem. Although the diffusion function need not be sufficiently smooth, we are able to prove existence and uniqueness of classical Hölder-smooth solutions and to derive useful bounds on them.
solutions. We furthermore construct a fully implicit iterative numerical scheme
based on finite volume approximation of the governing equation. A numerical
solution is compared to a semi-explicit traveling wave solution by means of the
convergence ratio of the method. We compute optimal strategies for a portfolio
investment problem motivated by the German DAX 30 Index as an example application of the method.
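As a rough illustration of the numerical side, the sketch below takes fully implicit finite-volume steps for a quasi-linear equation of the assumed form phi_tau = (alpha(phi))_xx, where alpha(.) is the value of a toy parametric convex problem with a closed-form minimizer. The equation itself, the form of alpha, the frozen boundary values, and all data (mu, sigma, grid, initial profile) are assumptions for illustration only; the paper's transformed equation and its boundary conditions are richer.

import numpy as np
from scipy.optimize import fsolve

mu, sigma = 0.08, 0.3                        # assumed drift and volatility

def alpha(phi):
    # value of min over theta in [0, 1] of 0.5*phi*sigma^2*theta^2 - mu*theta,
    # a toy stand-in for the paper's parametric convex optimization problem
    theta = np.clip(mu / (np.maximum(phi, 1e-8) * sigma**2), 0.0, 1.0)
    return 0.5 * phi * sigma**2 * theta**2 - mu * theta

nx, L, dt, nt = 100, 4.0, 1e-3, 200          # grid size, domain length, time step, steps
dx = L / nx
x = np.linspace(-L / 2, L / 2, nx)
phi = 1.0 + 0.5 * np.exp(-x**2)              # assumed initial profile

def residual(phi_new, phi_old):
    a = alpha(phi_new)
    lap = np.zeros_like(phi_new)
    lap[1:-1] = (a[2:] - 2 * a[1:-1] + a[:-2]) / dx**2   # second difference of alpha(phi)
    r = phi_new - phi_old - dt * lap
    r[0], r[-1] = phi_new[0] - phi_old[0], phi_new[-1] - phi_old[-1]  # frozen boundary values
    return r

for _ in range(nt):
    phi = fsolve(residual, phi, args=(phi,)) # fully implicit nonlinear solve per time level

print("phi range after integration:", phi.min(), phi.max())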
A stochastic approximation algorithm for stochastic semidefinite programming
Motivated by applications to multi-antenna wireless networks, we propose a
distributed and asynchronous algorithm for stochastic semidefinite programming.
This algorithm is a stochastic approximation of a continuous-time matrix
exponential scheme regularized by the addition of an entropy-like term to the
problem's objective function. We show that the resulting algorithm converges
almost surely to an ε-approximation of the optimal solution,
requiring only an unbiased estimate of the gradient of the problem's stochastic
objective. When applied to throughput maximization in wireless multiple-input
and multiple-output (MIMO) systems, the proposed algorithm retains its
convergence properties under a wide array of mobility impediments such as user
update asynchronicities, random delays and/or ergodically changing channels.
Our theoretical analysis is complemented by extensive numerical simulations
which illustrate the robustness and scalability of the proposed method in
realistic network conditions.
Comment: 25 pages, 4 figures.
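A hedged sketch of the flavor of update involved: matrix exponential learning, i.e., exponentiated gradient ascent over positive semidefinite matrices with trace P, driven by an unbiased stochastic gradient from random channel draws, applied to single-user MIMO throughput maximization. The single-user setting, the channel model, the step sizes, and all function names are assumptions; the paper treats a general stochastic semidefinite program with asynchronous, distributed updates and an entropy-like regularization of the objective.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n_tx, n_rx, P = 4, 4, 1.0                      # antennas and power budget (assumed)
Y = np.zeros((n_tx, n_tx), dtype=complex)      # score matrix accumulating gradient estimates

def sample_gradient(Q):
    # unbiased gradient of log det(I + H Q H^H) at a random channel draw
    H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
    M = np.linalg.inv(np.eye(n_rx) + H @ Q @ H.conj().T)
    return H.conj().T @ M @ H

def project(Y):
    # matrix exponential step, shifted by the top eigenvalue for numerical stability
    X = expm(Y - np.linalg.eigvalsh(Y).max() * np.eye(n_tx))
    X = (X + X.conj().T) / 2
    return P * X / np.trace(X).real            # scale onto the trace-P spectrahedron

Q = project(Y)
for k in range(1, 1001):
    Y = Y + (0.1 / np.sqrt(k)) * sample_gradient(Q)   # stochastic ascent in the dual variable
    Q = project(Y)

H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
print("sample rate with learned covariance:",
      np.log(np.linalg.det(np.eye(n_rx) + H @ Q @ H.conj().T)).real)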
Distributed Big-Data Optimization via Block-Iterative Convexification and Averaging
In this paper, we study distributed big-data nonconvex optimization in
multi-agent networks. We consider the (constrained) minimization of the sum of
a smooth (possibly) nonconvex function, i.e., the agents' sum-utility, plus a
convex (possibly) nonsmooth regularizer. Our interest is in big-data problems
wherein there is a large number of variables to optimize. If treated by means
of standard distributed optimization algorithms, these large-scale problems may
be intractable, due to the prohibitive local computation and communication
burden at each node. We propose a novel distributed solution method whereby at
each iteration agents optimize and then communicate (in an uncoordinated
fashion) only a subset of their decision variables. To deal with non-convexity
of the cost function, the novel scheme hinges on Successive Convex
Approximation (SCA) techniques coupled with i) a tracking mechanism instrumental in locally estimating gradient averages; and ii) a novel block-wise
consensus-based protocol to perform local block-averaging operations and
gradient tracking. Asymptotic convergence to stationary solutions of the
nonconvex problem is established. Finally, numerical results show the
effectiveness of the proposed algorithm and highlight how the block dimension impacts the communication overhead and the practical convergence speed.
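A minimal sketch of the block-wise idea, not the paper's algorithm: at each iteration every agent updates one randomly drawn block with a proximal-linearized step (a crude SCA surrogate), then averages estimates and tracks gradients only on that block. The common block selection, the fixed doubly stochastic ring weights, the l1 regularizer, and all data and step sizes are simplifying assumptions.

import numpy as np

rng = np.random.default_rng(1)
N, B, blk = 8, 10, 20                          # agents, number of blocks, block size
d = B * blk                                    # total number of variables
A = [rng.standard_normal((50, d)) / np.sqrt(d) for _ in range(N)]
b = [rng.standard_normal(50) for _ in range(N)]
lam, alpha = 0.01, 0.05                        # l1 weight and step size (assumed)

def grad(i, x):                                # gradient of the local cost 0.5*||A_i x - b_i||^2
    return A[i].T @ (A[i] @ x - b[i])

def prox_l1(v, t):                             # proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# fixed symmetric, doubly stochastic mixing weights on a ring (assumed topology)
W = np.zeros((N, N))
for i in range(N):
    W[i, i], W[i, (i - 1) % N], W[i, (i + 1) % N] = 0.5, 0.25, 0.25

x = np.zeros((N, d))
y = np.array([grad(i, x[i]) for i in range(N)])            # gradient trackers

for _ in range(3000):
    j = rng.integers(B)                                    # block drawn this round (common by assumption)
    sl = slice(j * blk, (j + 1) * blk)
    g_old = np.array([grad(i, x[i]) for i in range(N)])
    x_half = x.copy()
    x_half[:, sl] = prox_l1(x[:, sl] - alpha * y[:, sl], alpha * lam)   # local proximal-linearized step
    x[:, sl] = W @ x_half[:, sl]                           # block-wise averaging
    g_new = np.array([grad(i, x[i]) for i in range(N)])
    y[:, sl] = W @ y[:, sl] + (g_new - g_old)[:, sl]       # block-wise gradient tracking

obj = np.mean([0.5 * np.sum((A[i] @ x[i] - b[i])**2) + lam * np.abs(x[i]).sum() for i in range(N)])
print("disagreement:", np.ptp(x, axis=0).max(), "average objective:", obj)

Per iteration each agent exchanges only blk of the d variables with its neighbors, which is the communication saving the abstract refers to.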