6,689 research outputs found

    Convergence property of the Iri-Imai algorithm for some smooth convex programming problems

    In this paper, the Iri-Imai algorithm for solving linear and convex quadratic programming is extended to solve some other smooth convex programming problems. The global linear convergence rate of this extended algorithm is proved under the condition that the objective and constraint functions satisfy a certain type of convexity, called harmonic convexity in this paper. A characterization of this convexity condition is given. The same convexity condition was used by Mehrotra and Sun to prove the convergence of a path-following algorithm. The Iri-Imai algorithm is a natural generalization of the original Newton algorithm to constrained convex programming. Other known convergent interior-point algorithms for smooth convex programming are mainly based on the path-following approach.
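
    To make the flavor of the method concrete, below is a minimal sketch of a damped Newton iteration on the logarithm of the multiplicative barrier potential associated with the Iri-Imai method for a linear program min c'x subject to Ax >= b, assuming the optimal value has been shifted to zero and a strictly feasible starting point is available. The regularization, line search, and stopping rule are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def iri_imai_lp(c, A, b, x0, max_iter=100, tol=1e-8):
    """Illustrative damped Newton iteration on the logarithm of the
    multiplicative barrier potential used by the Iri-Imai method for the
    linear program  min c'x  s.t.  Ax >= b,  assuming the optimal value has
    been shifted to 0 and x0 is strictly feasible with c'x0 > 0.
    A sketch only; the paper's algorithm and analysis differ in detail."""
    m = A.shape[0]
    x = np.asarray(x0, dtype=float)

    def potential(z):
        s, v = A @ z - b, c @ z
        if np.any(s <= 0) or v <= 0:
            return np.inf
        # log F(z) with F(z) = (c'z)^(m+1) / prod_i (a_i'z - b_i)
        return (m + 1) * np.log(v) - np.sum(np.log(s))

    for _ in range(max_iter):
        s, v = A @ x - b, c @ x
        grad = (m + 1) * c / v - A.T @ (1.0 / s)
        if v < tol or np.linalg.norm(grad) < tol:
            break
        hess = A.T @ ((1.0 / s**2)[:, None] * A) - (m + 1) * np.outer(c, c) / v**2
        # Small regularization in case the Hessian is indefinite at x.
        hess += 1e-10 * np.eye(len(x))
        try:
            d = np.linalg.solve(hess, -grad)
        except np.linalg.LinAlgError:
            d = -grad
        # Backtracking line search that keeps the iterate strictly feasible.
        t, g0 = 1.0, potential(x)
        while potential(x + t * d) >= g0 and t > 1e-12:
            t *= 0.5
        x = x + t * d
    return x
```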

    On the Global Linear Convergence of the ADMM with Multi-Block Variables

    The alternating direction method of multipliers (ADMM) has been widely used for solving structured convex optimization problems. In particular, the ADMM can solve convex programs that minimize the sum of N convex functions with N-block variables linked by some linear constraints. While the convergence of the ADMM for N = 2 was well established in the literature, it remained an open problem for a long time whether or not the ADMM for N ≥ 3 is still convergent. Recently, it was shown in [3] that without further conditions the ADMM for N ≥ 3 may actually fail to converge. In this paper, we show that under some easily verifiable and reasonable conditions the global linear convergence of the ADMM when N ≥ 3 can still be assured, which is important since the ADMM is a popular method for solving large-scale multi-block optimization models and is known to perform very well in practice even when N ≥ 3. Our study aims to offer an explanation for this phenomenon.
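
    For illustration, here is a minimal sketch of the Gauss-Seidel multi-block ADMM iteration the abstract refers to, specialized to simple quadratic blocks so that each subproblem has a closed form. The objectives, the penalty parameter rho, and the iteration counts are assumptions of this sketch, not the conditions studied in the paper.

```python
import numpy as np

def multiblock_admm(A_list, c_list, b, rho=1.0, num_iter=200):
    """Gauss-Seidel multi-block ADMM sketch for
        min  sum_i 0.5*||x_i - c_i||^2   s.t.  sum_i A_i x_i = b.
    The simple quadratic blocks are an assumption of this sketch (they give
    each block update a closed form); the paper's setting is more general."""
    N = len(A_list)
    sizes = [A.shape[1] for A in A_list]
    x = [np.zeros(n) for n in sizes]
    lam = np.zeros(b.shape[0])

    for _ in range(num_iter):
        # Sequentially minimize the augmented Lagrangian over each block.
        for i in range(N):
            A_i, c_i = A_list[i], c_list[i]
            # Coupling-constraint residual with block i removed.
            r = sum(A_list[j] @ x[j] for j in range(N) if j != i) - b
            lhs = np.eye(sizes[i]) + rho * A_i.T @ A_i
            rhs = c_i - A_i.T @ lam - rho * A_i.T @ r
            x[i] = np.linalg.solve(lhs, rhs)
        # Dual ascent step on the coupling constraint.
        lam = lam + rho * (sum(A_list[j] @ x[j] for j in range(N)) - b)
    return x, lam
```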

    Message-Passing Algorithms for Quadratic Programming Formulations of MAP Estimation

    Computing maximum a posteriori (MAP) estimates in graphical models is an important inference problem with many applications. We present message-passing algorithms for quadratic programming (QP) formulations of MAP estimation for pairwise Markov random fields. In particular, we use the concave-convex procedure (CCCP) to obtain a locally optimal algorithm for the non-convex QP formulation. A similar technique is used to derive a globally convergent algorithm for the convex QP relaxation of MAP. We also show that a recently developed expectation-maximization (EM) algorithm for the QP formulation of MAP can be derived from the CCCP perspective. Experiments on synthetic and real-world problems confirm that our new approach is competitive with max-product and its variations. Compared with CPLEX, we achieve more than an order-of-magnitude speedup in optimally solving the convex QP relaxation.
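
    As a rough illustration of the concave-convex procedure mentioned above, the sketch below applies CCCP to a generic box-constrained nonconvex QP, splitting the quadratic into convex and concave parts via an eigendecomposition. The box constraint and the projected-gradient inner solver are simplifications standing in for the simplex-constrained MAP QP and the paper's message-passing updates.

```python
import numpy as np

def cccp_box_qp(Q, c, x0, outer_iters=50, inner_iters=200):
    """Concave-convex procedure (CCCP) sketch for the nonconvex QP
        min_{x in [0,1]^n}  0.5*x'Qx + c'x.
    The box constraint stands in for the per-node simplex constraints of the
    MAP QP formulation; this illustrates CCCP itself, not the paper's
    message-passing solver."""
    # Split Q into a convex (PSD) part and a concave part: Q = Q_plus - Q_minus.
    w, V = np.linalg.eigh(Q)
    Q_plus = V @ np.diag(np.maximum(w, 0.0)) @ V.T
    Q_minus = V @ np.diag(np.maximum(-w, 0.0)) @ V.T

    # Lipschitz constant of the convex surrogate's gradient (for the step size).
    L = max(np.max(np.maximum(w, 0.0)), 1e-12)
    x = np.clip(np.asarray(x0, dtype=float), 0.0, 1.0)

    for _ in range(outer_iters):
        # Linearize the concave part at the current iterate.
        lin = c - Q_minus @ x
        # Solve the convex surrogate by projected gradient descent.
        y = x.copy()
        for _ in range(inner_iters):
            grad = Q_plus @ y + lin
            y = np.clip(y - grad / L, 0.0, 1.0)
        x = y
    return x
```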

    Bregman Proximal Gradient Algorithm with Extrapolation for a class of Nonconvex Nonsmooth Minimization Problems

    In this paper, we consider an accelerated method for solving nonconvex and nonsmooth minimization problems. We propose a Bregman Proximal Gradient algorithm with extrapolation (BPGe). This algorithm extends and accelerates the Bregman Proximal Gradient algorithm (BPG), which circumvents the restrictive global Lipschitz gradient continuity assumption needed in Proximal Gradient algorithms (PG). The BPGe algorithm is more general than the recently introduced Proximal Gradient algorithm with extrapolation (PGe) and, thanks to the extrapolation step, converges faster than the BPG algorithm. Analyzing the convergence, we prove that, with properly chosen parameters, any limit point of the sequence generated by BPGe is a stationary point of the problem. Moreover, assuming the Kurdyka-Łojasiewicz property, we prove that the whole sequence generated by BPGe converges to a stationary point. Finally, to illustrate the potential of the new method, we apply BPGe to two important practical problems that arise in many fundamental applications and that do not satisfy the global Lipschitz gradient continuity assumption: Poisson linear inverse problems and quadratic inverse problems. In these tests the accelerated BPGe algorithm shows faster convergence, yielding an interesting new algorithm.
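
    To illustrate the shape of the iteration, here is a hedged sketch of a Bregman proximal gradient step with extrapolation in the special case g ≡ 0, where the Bregman proximal step reduces to a mirror-descent-style closed form. The kernel, step size, constant extrapolation weight, and toy objective in the example are assumptions made for concreteness, not the paper's parameter choices or test problems.

```python
import numpy as np

def bpge(grad_f, grad_h, grad_h_inv, x0, step=0.1, theta=0.3, num_iter=200):
    """Sketch of a Bregman proximal gradient iteration with extrapolation for
    min_x f(x) over the domain of the kernel h, with the nonsmooth term g
    taken to be zero so the Bregman step has the closed form
        x+ = (grad h)^(-1)( grad h(y) - step * grad f(y) ).
    The fixed step size and constant extrapolation weight theta are
    simplifications; the paper's parameter rules are more refined."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(num_iter):
        # Extrapolation step; fall back to x if it leaves the domain of h.
        y = x + theta * (x - x_prev)
        with np.errstate(invalid="ignore", divide="ignore"):
            gy = grad_h(y)
        if not np.all(np.isfinite(gy)):
            y, gy = x, grad_h(x)
        x_prev = x
        x = grad_h_inv(gy - step * grad_f(y))
    return x

# Example with the Shannon entropy kernel h(x) = sum(x*log(x) - x) on the
# positive orthant, whose mirror maps are log and exp; the quadratic f below
# is only a toy stand-in, not one of the paper's test problems.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([1.0, 2.0])
grad_f = lambda x: A.T @ (A @ x - b)      # f(x) = 0.5 * ||Ax - b||^2
x_opt = bpge(grad_f, np.log, np.exp, np.array([1.0, 1.0]))
```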