
    On the Global Linear Convergence of the ADMM with Multi-Block Variables

    The alternating direction method of multipliers (ADMM) has been widely used for solving structured convex optimization problems. In particular, the ADMM can solve convex programs that minimize the sum of N convex functions with N-block variables linked by some linear constraints. While the convergence of the ADMM for N = 2 was well established in the literature, it remained an open problem for a long time whether the ADMM for N ≥ 3 is still convergent. Recently, it was shown in [3] that without further conditions the ADMM for N ≥ 3 may actually fail to converge. In this paper, we show that under some easily verifiable and reasonable conditions the global linear convergence of the ADMM when N ≥ 3 can still be assured, which is important since the ADMM is a popular method for solving large-scale multi-block optimization models and is known to perform very well in practice even when N ≥ 3. Our study aims to offer an explanation for this phenomenon.
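
    The abstract refers to the multi-block ADMM iteration without restating it; the sketch below shows its basic structure on a toy problem. This is a minimal illustration, not the scheme or the convergence conditions analyzed in the paper: the quadratic objectives, the matrices A_i, the penalty rho, and the fixed iteration count are all illustrative assumptions, chosen so that each block update has a closed form.

```python
import numpy as np

# Minimal multi-block ADMM sketch (illustrative, not the paper's method) for
#     minimize   sum_i (1/2)||x_i - c_i||^2
#     subject to sum_i A_i x_i = b,
# i.e. the N-block setting the abstract describes. Quadratic objectives are
# an assumption made here so each block update is a linear solve.

rng = np.random.default_rng(0)
N, m, n = 3, 4, 5                       # blocks, constraint rows, block size
A = [rng.standard_normal((m, n)) for _ in range(N)]
c = [rng.standard_normal(n) for _ in range(N)]
b = rng.standard_normal(m)

rho = 1.0                               # illustrative penalty parameter
x = [np.zeros(n) for _ in range(N)]
lam = np.zeros(m)                       # multiplier for the coupling constraint

for it in range(200):
    for i in range(N):
        # Coupling-constraint residual with block i removed, using the
        # freshest value of every other block (Gauss-Seidel sweep).
        r_i = sum(A[j] @ x[j] for j in range(N) if j != i) - b
        # Closed-form minimizer of the augmented Lagrangian in x_i:
        #   (I + rho A_i^T A_i) x_i = c_i + A_i^T (lam - rho r_i)
        lhs = np.eye(n) + rho * A[i].T @ A[i]
        rhs = c[i] + A[i].T @ (lam - rho * r_i)
        x[i] = np.linalg.solve(lhs, rhs)
    # Dual update on the coupling constraint.
    residual = sum(A[j] @ x[j] for j in range(N)) - b
    lam -= rho * residual

print("final constraint violation:", np.linalg.norm(residual))
```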

    A decomposition procedure based on approximate Newton directions

    The efficient solution of large-scale linear and nonlinear optimization problems may require exploiting any special structure present in them. We describe and analyze some cases in which this special structure can be used at very little cost to obtain search directions from decomposed subproblems. We also study how to correct these directions using (decomposable) preconditioned conjugate gradient methods to ensure local convergence in all cases. The choice of appropriate preconditioners follows in a natural manner from the structure of the problem. Finally, we conduct computational experiments to compare the resulting procedures with direct methods, as well as to study the impact of different preconditioner choices.
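
    The correction step described here relies on preconditioned conjugate gradients. The sketch below is a minimal generic PCG implementation, with a Jacobi (diagonal) preconditioner standing in for the structure-derived preconditioners the abstract alludes to; the function name, tolerance, and test problem are illustrative assumptions, not the paper's code.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient for a symmetric positive definite A.

    M_inv applies the inverse of the preconditioner to a vector.
    Hypothetical helper for illustration only.
    """
    x = np.zeros_like(b) if x0 is None else x0.astype(float)
    r = b - A @ x                  # residual
    z = M_inv(r)                   # preconditioned residual
    p = z.copy()                   # search direction
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        beta = rz_new / rz         # conjugacy correction
        p = z + beta * p
        rz = rz_new
    return x

# Toy SPD system; a diagonal (Jacobi) preconditioner is the simplest example
# of a preconditioner read off directly from the matrix structure.
rng = np.random.default_rng(1)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)      # SPD test matrix (illustrative)
b = rng.standard_normal(50)
d = np.diag(A)
x = pcg(A, b, M_inv=lambda r: r / d)
print("residual norm:", np.linalg.norm(b - A @ x))
```

    Replacing M_inv with a block-diagonal solve is the natural variant when the problem has the decomposable structure the abstract describes.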
