Parallel Direction Method of Multipliers
We consider the problem of minimizing block-separable convex functions
subject to linear constraints. While the Alternating Direction Method of
Multipliers (ADMM) for two-block linear constraints has been intensively
studied both theoretically and empirically, effective generalizations of ADMM
to multiple blocks remain unclear despite some preliminary work. In this
paper, we propose a randomized block coordinate method named Parallel Direction
Method of Multipliers (PDMM) to solve the optimization problems with
multi-block linear constraints. PDMM randomly updates some primal and dual
blocks in parallel, behaving like parallel randomized block coordinate descent.
We establish the global convergence and the iteration complexity for PDMM with
constant step size. We also show that PDMM can do randomized block coordinate
descent on overlapping blocks. Experimental results show that PDMM performs
better than state-of-the-art methods in two applications, robust principal
component analysis and overlapping group lasso.
Comment: This paper has been withdrawn by the authors. There are errors in
Equations from 139-19
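The two-block ADMM baseline that PDMM generalizes can be illustrated on a standard problem. The sketch below is not the paper's PDMM; it is a minimal two-block ADMM for the lasso, with illustrative parameter choices (`lam`, `rho`, iteration count are assumptions), alternating a ridge-type x-update, a soft-thresholding z-update, and a dual update:

```python
import numpy as np

def soft_threshold(v, k):
    """Elementwise proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=500):
    """Minimize (1/2)||Ax - b||^2 + lam*||z||_1  s.t.  x = z, via two-block ADMM."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A
    Atb = A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse every iteration
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update: ridge-type solve
        z = soft_threshold(x + u, lam / rho)               # z-update: l1 prox
        u = u + x - z                                      # scaled dual update
    return z
```

PDMM's departure from this template is that, with many blocks, it updates a random subset of primal and dual blocks in parallel rather than sweeping all blocks alternately.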
Parallel Algorithms for Constrained Tensor Factorization via the Alternating Direction Method of Multipliers
Tensor factorization has proven useful in a wide range of applications, from
sensor array processing to communications, speech and audio signal processing,
and machine learning. With few recent exceptions, all tensor factorization
algorithms were originally developed for centralized, in-memory computation on
a single machine; and the few that break away from this mold do not easily
incorporate practically important constraints, such as nonnegativity. A new
constrained tensor factorization framework is proposed in this paper, building
upon the Alternating Direction Method of Multipliers (ADMoM). It is shown that
this simplifies computations, bypassing the need to solve constrained
optimization problems in each iteration; and it naturally leads to distributed
algorithms suitable for parallel implementation on regular high-performance
computing (e.g., mesh) architectures. This opens the door for many emerging big
data-enabled applications. The methodology is exemplified using nonnegativity
as a baseline constraint, but the proposed framework can more-or-less readily
incorporate many other types of constraints. Numerical experiments are very
encouraging, indicating that the ADMoM-based nonnegative tensor factorization
(NTF) has high potential as an alternative to state-of-the-art approaches.
Comment: Submitted to the IEEE Transactions on Signal Processing
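The key point above, bypassing constrained subproblems via splitting, can be sketched on the simplest building block: a nonnegative least-squares solve of the kind that arises in each factor update of NTF. This is a hedged illustration, not the paper's ADMoM algorithm; the `admm_nnls` name and its parameters are assumptions:

```python
import numpy as np

def admm_nnls(A, B, rho=1.0, iters=500):
    """Solve min_{H >= 0} ||A H - B||_F^2 by splitting H = Z:
    the H-update is an *unconstrained* least-squares solve and the
    Z-update is a simple projection, so no constrained optimization
    problem is ever solved inside the loop."""
    n = A.shape[1]
    k = B.shape[1]
    Z = np.zeros((n, k)); U = np.zeros((n, k))
    M = A.T @ A + rho * np.eye(n)
    AtB = A.T @ B
    for _ in range(iters):
        H = np.linalg.solve(M, AtB + rho * (Z - U))  # unconstrained LS solve
        Z = np.maximum(H + U, 0.0)                   # projection onto H >= 0
        U = U + H - Z                                # scaled dual update
    return Z
```

In a tensor setting the same pattern applies matricized-factor by matricized-factor, and the per-column independence of the updates is what makes distributed, mesh-parallel implementations natural.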
Parallel ADMM for robust quadratic optimal resource allocation problems
An alternating direction method of multipliers (ADMM) solver is described for
optimal resource allocation problems with separable convex quadratic costs and
constraints and linear coupling constraints. We describe a parallel
implementation of the solver on a graphics processing unit (GPU) using a
bespoke quartic function minimizer. An application to robust optimal energy
management in hybrid electric vehicles is described, and the results of
numerical simulations comparing the computation times of the parallel GPU
implementation with those of an equivalent serial implementation are presented.
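The structure being exploited, separable per-block costs coupled by a linear constraint, can be shown on a toy allocation problem. The following is a minimal sketch, not the paper's GPU solver: it assumes purely quadratic per-resource costs and a single coupling constraint, and the `admm_alloc` name, step size, and iteration count are illustrative choices. Each x-update is an independent closed-form proximal step, which is exactly what maps well onto one GPU thread per block:

```python
import numpy as np

def admm_alloc(a, b, C, rho=1.0, iters=500):
    """Minimize sum_i (a_i/2) x_i^2 + b_i x_i  s.t.  sum_i x_i = C,
    using ADMM in 'sharing' form. The per-block x-updates below are
    mutually independent, hence trivially parallelizable."""
    n = len(a)
    x = np.zeros(n)
    u = 0.0          # scaled dual variable for the coupling constraint
    zbar = C / n     # average value forced by sum_i x_i = C
    for _ in range(iters):
        v = x - x.mean() + zbar - u      # per-block proximal targets
        x = (rho * v - b) / (a + rho)    # closed-form quadratic prox, block-wise
        u = u + x.mean() - zbar          # dual ascent on the coupling residual
    return x
```

For the separable quadratic case the optimum can be checked against the KKT conditions: a_i x_i + b_i is equal across blocks, with the common value chosen so the allocations sum to C.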
Block-separable linking constraints in augmented Lagrangian coordination
Augmented Lagrangian coordination (ALC) is a provably convergent coordination method for multidisciplinary design optimization (MDO) that is able to treat both linking variables and linking functions (i.e. system-wide objectives and constraints). Contrary to quasi-separable problems with only linking variables, the presence of linking functions may hinder the parallel solution of subproblems and the use of the efficient alternating direction method of multipliers. We show that this unfortunate situation does not arise for MDO problems with block-separable linking constraints. We derive a centralized formulation of ALC for block-separable constraints, which does allow parallel solution of subproblems. Similarly, we derive a distributed coordination variant for which subproblems cannot be solved in parallel, but which still enables the use of the alternating direction method of multipliers. The approach can also be applied to other existing MDO coordination strategies so that they can accommodate block-separable linking constraints.