A distributed primal-dual interior-point method for loosely coupled problems using ADMM
In this paper we propose an efficient distributed algorithm for solving
loosely coupled convex optimization problems. The algorithm is based on a
primal-dual interior-point method in which we use the alternating direction
method of multipliers (ADMM) to compute the primal-dual directions at each
iteration of the method. This enables us to combine the exceptional convergence
properties of primal-dual interior-point methods with the remarkable
parallelizability of ADMM. The resulting algorithm has superior computational
properties compared with ADMM applied directly to the problem: far less
computation is required from each computing agent. In
particular, the updates for all variables can be expressed in closed form,
irrespective of the type of optimization problem. The most expensive
computations of the algorithm occur in the updates of the primal variables and
can be precomputed in each iteration of the interior-point method. We verify
our method and compare it to ADMM in numerical experiments.
Comment: extended version, 50 pages, 9 figures
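The closed-form, parallel per-agent updates that the abstract highlights can be illustrated with a minimal consensus-ADMM sketch. This is a generic ADMM example, not the paper's interior-point scheme; the data a, the penalty rho, and the quadratic local objectives are all illustrative.

```python
import numpy as np

# Consensus ADMM for: minimize sum_i 0.5*(x_i - a_i)^2 subject to x_i = z.
# Each agent holds a local copy x_i driven to the common value z; its
# x-update is closed form and can run in parallel across agents.
a = np.array([1.0, 3.0, 5.0])   # local data held by three agents (illustrative)
rho = 1.0                        # ADMM penalty parameter
x = np.zeros(3)                  # local primal copies
z = 0.0                          # consensus variable
u = np.zeros(3)                  # scaled dual variables

for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)   # closed-form local updates (parallel)
    z = np.mean(x + u)                       # consensus (averaging) step
    u = u + x - z                            # dual update

# the minimizer of sum_i 0.5*(x - a_i)^2 over a common x is the mean of a
```

Each agent touches only its own data a_i; the only coordination is the averaging step, which is what makes ADMM-style splittings attractive for loosely coupled problems.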
Lagrangian-based methods in convex optimization: prediction-correction frameworks with non-ergodic convergence rates
Lagrangian-based methods are classical methods for solving convex
optimization problems with equality constraints. We present novel
prediction-correction frameworks for such methods and their variants, which can
achieve non-ergodic convergence rates for general convex optimization, as well
as under the assumption that the objective function is strongly convex or has a
Lipschitz-continuous gradient. We give two approaches to designing algorithms
satisfying the presented prediction-correction frameworks. As applications, we
establish non-ergodic convergence rates for some well-known Lagrangian-based
methods (in particular, ADMM-type and multi-block ADMM-type methods).
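The prediction-correction frameworks above generalize classical Lagrangian-based schemes such as the method of multipliers, which a short sketch makes concrete. The problem data A, b and the penalty beta below are illustrative, and this plain multiplier iteration is only the base scheme, not the paper's framework.

```python
import numpy as np

# Method of multipliers for: minimize 0.5*||x||^2 subject to A @ x = b.
# The x-update minimizes the augmented Lagrangian
#   0.5*||x||^2 + lam^T (A x - b) + (beta/2)*||A x - b||^2,
# which is a linear solve here; the multiplier then takes a dual ascent step.
A = np.array([[1.0, 1.0]])       # illustrative constraint: x1 + x2 = 1
b = np.array([1.0])
beta = 1.0                        # penalty parameter
lam = np.zeros(1)                 # Lagrange multiplier
x = np.zeros(2)

for _ in range(50):
    H = np.eye(2) + beta * A.T @ A                      # Hessian of aug. Lagrangian
    x = np.linalg.solve(H, -A.T @ lam + beta * A.T @ b)  # primal minimization
    lam = lam + beta * (A @ x - b)                       # multiplier (dual) update

# the solution of min 0.5*||x||^2 s.t. x1 + x2 = 1 is x = [0.5, 0.5]
```

A prediction-correction variant would interleave a correction step between these two updates; the point of the sketch is only the primal-minimization/multiplier-ascent structure shared by the ADMM-type methods the abstract analyzes.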
ADMM-based Adaptive Sampling Strategy for Nonholonomic Mobile Robotic Sensor Networks
This paper discusses the adaptive sampling problem in a nonholonomic mobile
robotic sensor network for efficiently monitoring a spatial field. It is
proposed to employ a Gaussian process to model the spatial phenomenon and
predict it at unmeasured positions, which enables the sampling optimization
problem to be formulated using the log determinant of the predicted covariance
matrix at the next sampling locations. The control, movement and nonholonomic
dynamics constraints of the mobile sensors are also considered in the adaptive
sampling optimization problem. To tackle the nonlinearity and nonconvexity of
the objective function, we first exploit the linearized alternating direction
method of multipliers (L-ADMM), which effectively simplifies the objective
function but is computationally expensive, since a nonconvex problem must be
solved exactly in each iteration. We then propose a novel approach, called
successive convexified ADMM (SC-ADMM), that sequentially convexifies the
nonlinear dynamic constraints so that the original optimization problem can be
split into convex subproblems. Both the L-ADMM algorithm and our SC-ADMM
approach can solve the sampling optimization problem in either a centralized or
a distributed manner. We validated the proposed approaches in 1000 experiments
in a synthetic environment with a real-world dataset, where the obtained
results suggest that both the L-ADMM and SC-ADMM techniques can provide good
accuracy for monitoring purposes. However, our proposed SC-ADMM approach
computationally outperforms its L-ADMM counterpart, demonstrating better
practicality.
Comment: submitted to IEEE Sensors Journal, revised version
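The log-determinant sampling criterion described in the abstract can be sketched directly: score candidate next locations by the log det of the GP predictive covariance, where smaller means less remaining uncertainty. The squared-exponential kernel, its length scale, and the toy positions below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def kernel(X1, X2, ell=0.5):
    """Squared-exponential kernel (assumed here for illustration)."""
    d = X1[:, None, :] - X2[None, :, :]
    return np.exp(-np.sum(d**2, axis=-1) / (2 * ell**2))

def logdet_pred_cov(X_train, X_next, noise=1e-2):
    """Log det of the GP predictive covariance at candidate locations X_next."""
    K = kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = kernel(X_train, X_next)
    Kss = kernel(X_next, X_next)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)   # GP predictive covariance
    sign, logdet = np.linalg.slogdet(cov + noise * np.eye(len(X_next)))
    return logdet

X_train = np.array([[0.0, 0.0], [1.0, 0.0]])    # already-sampled positions
near = np.array([[0.1, 0.0]])                   # candidate close to existing data
far = np.array([[0.5, 1.0]])                    # candidate far from existing data
# sampling far from existing data leaves a larger predictive covariance,
# so the criterion (to be minimized after sampling there) is larger for "near"
# in the sense that "near" reduces little new uncertainty:
# logdet_pred_cov(X_train, near) < logdet_pred_cov(X_train, far)
```

The papers' hard part, absent from this sketch, is minimizing such a criterion jointly with the robots' nonholonomic dynamics constraints, which is what the L-ADMM and SC-ADMM splittings are for.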