Solving Linear Equations with Separable Problem Data over Directed Networks
This paper deals with linear algebraic equations where the global coefficient
matrix and constant vector are given, respectively, by the sums of the
coefficient matrices and constant vectors of the individual agents. Our
approach is based on reformulating the original problem as an unconstrained
optimization problem. Based on this exact reformulation, we first provide a
gradient-based, centralized algorithm which serves as a reference for the
ensuing design of distributed algorithms. We propose two sets of exponentially
stable continuous-time distributed algorithms that do not require the
individual agent matrices to be invertible, and are based on estimating
non-distributed terms in the centralized algorithm using dynamic average
consensus. The first algorithm works over time-varying, weight-balanced directed
networks; the second works over general directed networks whose communication
graphs may be unbalanced. Numerical simulations
illustrate our results.
Comment: 6 pages, 2 figures
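To make the setup concrete, here is a minimal sketch (hypothetical data, dimensions, and step size; the paper's own reformulation and distributed designs are not reproduced) that casts the separable system (sum_i A_i) x = sum_i b_i as the unconstrained least-squares problem min_x (1/2)||Ax - b||^2 and runs a centralized gradient iteration of the kind the distributed algorithms emulate:

```python
import numpy as np

# Hypothetical separable data: agent i holds (A_i, b_i); the network must
# solve (sum_i A_i) x = sum_i b_i without any agent knowing the global sums.
rng = np.random.default_rng(0)
n_agents, dim = 5, 3
A_locals = [rng.standard_normal((dim, dim)) for _ in range(n_agents)]
b_locals = [rng.standard_normal(dim) for _ in range(n_agents)]

A = sum(A_locals)   # global coefficient matrix (centralized reference only)
b = sum(b_locals)   # global constant vector

# Gradient descent on the least-squares reformulation (1/2)||A x - b||^2,
# whose minimizers coincide with the solutions when the system is consistent.
x = np.zeros(dim)
step = 0.5 / np.linalg.norm(A, 2) ** 2   # below 1/L, with L = sigma_max(A)^2
for _ in range(20000):
    x -= step * A.T @ (A @ x - b)

print("residual norm:", np.linalg.norm(A @ x - b))
```

In the distributed algorithms, the global sums A and b are never formed; instead, each agent estimates the non-distributed terms of this iteration through dynamic average consensus with its neighbors.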
Nesterov Acceleration for Equality-Constrained Convex Optimization via Continuously Differentiable Penalty Functions
We propose a framework to use Nesterov's accelerated method for constrained
convex optimization problems. Our approach consists of first reformulating the
original problem as an unconstrained optimization problem using a continuously
differentiable exact penalty function. This reformulation is based on replacing
the Lagrange multipliers in the augmented Lagrangian of the original problem by
Lagrange multiplier functions. The expressions of these Lagrange multiplier
functions, which depend upon the gradients of the objective function and the
constraints, can render the unconstrained penalty function non-convex in
general, even when the original problem is convex. We establish sufficient conditions on
the objective function and the constraints of the original problem under which
the unconstrained penalty function is convex. This enables us to use Nesterov's
accelerated gradient method for unconstrained convex optimization and achieve a
guaranteed convergence rate that improves on the state-of-the-art first-order
algorithms for constrained convex optimization. Simulations
illustrate our results.
Comment: 7 pages, 1 figure
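As an illustration of the acceleration step only (the continuously differentiable exact penalty built from Lagrange multiplier functions is not reconstructed here), the following sketch applies Nesterov's accelerated gradient to a plain quadratic penalty for an equality-constrained quadratic program; all data and the penalty weight rho are hypothetical:

```python
import numpy as np

# Stand-in smooth unconstrained problem: min f(x) + (rho/2)||G x - d||^2 with
# f(x) = (1/2) x'Qx - c'x. Note this quadratic penalty is NOT exact, so its
# minimizer only approximates the constrained solution; it merely lets us run
# the accelerated method on a smooth convex surrogate.
rng = np.random.default_rng(1)
n, m = 20, 5
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)            # strongly convex objective
c = rng.standard_normal(n)
G = rng.standard_normal((m, n))    # equality constraints G x = d
d = rng.standard_normal(m)
rho = 100.0

def grad_F(x):
    return Q @ x - c + rho * G.T @ (G @ x - d)

L = np.linalg.norm(Q, 2) + rho * np.linalg.norm(G, 2) ** 2  # Lipschitz bound
x = y = np.zeros(n)
t = 1.0
for _ in range(2000):
    x_next = y - grad_F(y) / L                        # step from extrapolation
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # Nesterov momentum
    x, t = x_next, t_next

print("constraint violation:", np.linalg.norm(G @ x - d))
```

An exact penalty, by contrast, recovers the true constrained minimizer at a finite penalty parameter, which removes the residual constraint violation this quadratic surrogate leaves behind and makes the unconstrained acceleration meaningful.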
Network Optimization via Smooth Exact Penalty Functions Enabled by Distributed Gradient Computation
This paper proposes a distributed algorithm for a network of agents to solve
an optimization problem with separable objective function and locally coupled
constraints. Our strategy is based on reformulating the original constrained
problem as the unconstrained optimization of a smooth (continuously
differentiable) exact penalty function. Computing the gradient of this penalty
function in a distributed way is challenging even under the separability
assumptions on the original optimization problem. Our technical approach shows
that the distributed computation problem for the gradient can be formulated as
a system of linear algebraic equations defined by separable problem data. To
solve it, we design an exponentially fast, input-to-state stable distributed
algorithm that does not require the individual agent matrices to be invertible.
We employ this strategy to compute the gradient of the penalty function at the
current network state. Our distributed solver for the original constrained
optimization problem interconnects this gradient estimation with the
prescription that the agents follow the resulting direction. Numerical
simulations illustrate the convergence and robustness properties of the
proposed algorithm.
Comment: 12 pages, 3 figures
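As a structural illustration of this interconnection (toy Euler-discretized dynamics with hypothetical data, not the paper's algorithm), the sketch below pairs an inner linear-equation solver, standing in for the exponentially fast, input-to-state stable distributed solver, with an outer loop in which the state follows the current gradient estimate:

```python
import numpy as np

# Toy interconnection: the gradient g(x) of a penalty F is characterized
# implicitly as the solution of a linear system A g = b(x); an inner solver
# tracks it while the outer loop moves x along the current, inexact estimate.
# Input-to-state stability of the inner solver is what keeps this estimation
# error from destabilizing the overall loop.
rng = np.random.default_rng(2)
n = 4
Q = np.diag(rng.uniform(1.0, 3.0, size=n))  # toy penalty F(x) = (1/2) x'Qx

M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                     # invertible system matrix

def b(x):
    return A @ (Q @ x)                      # chosen so A g = b(x) gives g = Qx

x = rng.standard_normal(n)
g = np.zeros(n)                             # inner estimate of grad F(x)
dt, k_inner = 1e-3, 20.0
for _ in range(50000):
    g += dt * k_inner * (b(x) - A @ g)      # fast inner solve of A g = b(x)
    x -= dt * g                             # outer: follow estimated gradient

print("distance to minimizer:", np.linalg.norm(x))  # should be near zero
```

Because the inner estimate g settles much faster than x moves, the outer dynamics behave like the exact gradient flow, which mirrors the intuition behind interconnecting the two subsystems.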