
    Deterministic Versus Randomized Kaczmarz Iterative Projection

    Kaczmarz's alternating projection method has been widely used for solving a consistent (typically over-determined) linear system of equations Ax = b. Because of its simple iterative nature and light per-step computation, this method has been successfully applied in computerized tomography. Since tomography generates a matrix A with highly coherent rows, the randomized Kaczmarz algorithm is expected to provide faster convergence, as it picks a row for each iteration at random according to a certain probability distribution. It was recently shown that picking a row at random, with probability proportional to its norm, makes the iteration converge exponentially in expectation, with a decay constant that depends on the scaled condition number of A and not on the number of equations. Since Kaczmarz's method is a subspace projection method, the convergence rate of the simple Kaczmarz algorithm can be expressed in terms of subspace angles. This paper provides analyses of the simple and randomized Kaczmarz algorithms and explains the link between them. It also proposes new versions of randomization that may speed up convergence.
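
    To make the contrast concrete, here is a minimal NumPy sketch of the randomized variant, assuming the standard Strohmer-Vershynin sampling in which row i is drawn with probability proportional to its squared norm; the simple Kaczmarz method is recovered by sweeping the rows cyclically instead.

    import numpy as np

    def randomized_kaczmarz(A, b, iters=10_000, seed=0):
        # Each step projects the iterate orthogonally onto the hyperplane
        # {x : <a_i, x> = b_i} of one randomly chosen equation.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        row_sq = np.einsum("ij,ij->i", A, A)      # squared row norms ||a_i||^2
        probs = row_sq / row_sq.sum()             # sampling distribution
        x = np.zeros(n)
        for _ in range(iters):
            i = rng.choice(m, p=probs)
            x += (b[i] - A[i] @ x) / row_sq[i] * A[i]
        return x

    # Toy consistent over-determined system; the error should be tiny.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 20))
    x_true = rng.standard_normal(20)
    print(np.linalg.norm(randomized_kaczmarz(A, A @ x_true) - x_true))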

    Least Squares Based Iterative Algorithm for the Coupled Sylvester Matrix Equations

    By analyzing the eigenvalues of the related matrices, this paper gives a convergence analysis of the least-squares-based iteration for solving the coupled Sylvester equations AX + YB = C and DX + YE = F. The analysis shows that the optimal convergence factor of this iterative algorithm is 1. In addition, the proposed iterative algorithm can solve the generalized Sylvester equation AXB + CXD = F. The analysis demonstrates that if the matrix equation has a unique solution, then the least-squares-based iterative solution converges to the exact solution for any initial values. A numerical example illustrates the effectiveness of the proposed algorithm.
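
    As a rough illustration of this family of methods, the sketch below runs a plain gradient iteration on the squared residual of the coupled equations; this is a hypothetical stand-in rather than the paper's exact least-squares-based scheme, and the step size mu is a conservative bound chosen for safety rather than speed.

    import numpy as np

    def coupled_sylvester_gradient(A, B, C, D, E, F, iters=50_000):
        # Gradient descent on f(X, Y) = 0.5*(||A X + Y B - C||_F^2
        #                               +    ||D X + Y E - F||_F^2).
        n = A.shape[0]
        X, Y = np.zeros((n, n)), np.zeros((n, n))
        s = lambda M: np.linalg.norm(M, 2)        # spectral norm
        mu = 1.0 / (s(A)**2 + s(B)**2 + s(D)**2 + s(E)**2)  # safe step size
        for _ in range(iters):
            R1 = C - A @ X - Y @ B                # residual of equation 1
            R2 = F - D @ X - Y @ E                # residual of equation 2
            X = X + mu * (A.T @ R1 + D.T @ R2)
            Y = Y + mu * (R1 @ B.T + R2 @ E.T)
        return X, Y

    # Toy instance with a known solution; both residuals should shrink
    # toward zero (the rate depends on the conditioning of the system).
    rng = np.random.default_rng(0)
    n = 5
    A, B, D, E = (rng.standard_normal((n, n)) for _ in range(4))
    X_true, Y_true = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    C, F = A @ X_true + Y_true @ B, D @ X_true + Y_true @ E
    X, Y = coupled_sylvester_gradient(A, B, C, D, E, F)
    print(np.linalg.norm(A @ X + Y @ B - C), np.linalg.norm(D @ X + Y @ E - F))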

    An Efficient Re-Scaled Perceptron Algorithm for Conic Systems

    The classical perceptron algorithm is an elementary row-action/relaxation algorithm for solving a homogeneous linear inequality system Ax > 0. A natural condition measure associated with this algorithm is the Euclidean width T of the cone of feasible solutions, and the iteration complexity of the perceptron algorithm is bounded by 1/T^2 (Rosenblatt, 1962). Dunagan and Vempala have developed a re-scaled version of the perceptron algorithm with an improved complexity of O(n ln(1/T)) iterations (with high probability), which is theoretically efficient in T and, in particular, is polynomial-time in the bit-length model. We explore extensions of the concepts of these perceptron methods to the general homogeneous conic system Ax ∈ int K, where K is a regular convex cone. We provide a conic extension of the re-scaled perceptron algorithm based on the notion of a deep-separation oracle of a cone, which essentially computes a certificate of strong separation. We give a general condition under which the re-scaled perceptron algorithm is itself theoretically efficient; this includes the cases when K is the cross-product of half-spaces, second-order cones, and the positive semi-definite cone.
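
    For reference, the classical (un-rescaled) perceptron that the paper starts from fits in a few lines; the 1/T^2 iteration bound is the familiar Block-Novikoff argument. The re-scaled and conic extensions replace the simple violated-row step below with calls to a deep-separation oracle.

    import numpy as np

    def perceptron(A, max_iters=100_000):
        # Row-action method for A x > 0: whenever some inequality
        # <a_i, x> <= 0 is violated, step toward that (normalized) row.
        rows = A / np.linalg.norm(A, axis=1, keepdims=True)
        x = np.zeros(A.shape[1])
        for _ in range(max_iters):
            violated = rows @ x <= 0
            if not violated.any():
                return x                          # strictly feasible point
            x += rows[np.argmax(violated)]        # first violated row
        raise RuntimeError("iteration budget exhausted")

    # Feasible toy instance: flip rows so a hidden z satisfies A z > 0.
    rng = np.random.default_rng(2)
    z = rng.standard_normal(10)
    A = rng.standard_normal((50, 10))
    A *= np.sign(A @ z)[:, None]
    print((A @ perceptron(A) > 0).all())          # True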

    A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization

    We propose a new first-order primal-dual optimization framework for a convex optimization template with broad applications. Our optimization algorithms feature optimal convergence guarantees under a variety of common structure assumptions on the problem template. Our analysis relies on a novel combination of three classic ideas applied to the primal-dual gap function: smoothing, acceleration, and homotopy. The algorithms due to the new approach achieve the best known convergence rate results, in particular when the template consists of only non-smooth functions. We also outline a restart strategy for the acceleration scheme to significantly enhance practical performance. We demonstrate relations with the augmented Lagrangian method and show how to exploit strongly convex objectives with rigorous convergence rate guarantees. We provide numerical evidence on two examples and illustrate that the new methods can outperform the state-of-the-art, including the Chambolle-Pock and alternating direction method of multipliers algorithms.
    Comment: 35 pages, accepted for publication in SIAM J. Optimization. Tech. report, Oct. 2015 (last update Sept. 2016).
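
    The paper's framework operates on the primal-dual gap function itself; purely as a loose illustration of the three ingredients, and not of the paper's method, the sketch below applies Moreau/Huber smoothing, FISTA-style acceleration, and a simple homotopy on the smoothing parameter to the model problem min_x ||A x - b||_1.

    import numpy as np

    def huber_grad(r, mu):
        # Gradient of the Moreau (Huber) smoothing of the absolute value.
        return np.clip(r / mu, -1.0, 1.0)

    def smoothed_accelerated_l1(A, b, mu0=1.0, stages=8, iters=500):
        x = np.zeros(A.shape[1])
        mu, normA2 = mu0, np.linalg.norm(A, 2) ** 2
        for _ in range(stages):
            L = normA2 / mu                       # Lipschitz constant of surrogate
            y, x_prev, t = x.copy(), x.copy(), 1.0
            for _ in range(iters):                # FISTA on the smoothed objective
                x = y - (A.T @ huber_grad(A @ y - b, mu)) / L
                t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                y = x + (t - 1.0) / t_next * (x - x_prev)
                x_prev, t = x, t_next
            mu *= 0.5                             # homotopy: tighten the smoothing
        return x

    rng = np.random.default_rng(3)
    A, b = rng.standard_normal((60, 30)), rng.standard_normal(60)
    print(np.abs(A @ smoothed_accelerated_l1(A, b) - b).sum())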

    Accelerated Linearized Bregman Method

    In this paper, we propose and analyze an accelerated linearized Bregman (ALB) method for solving the basis pursuit problem and related sparse optimization problems. This accelerated algorithm is based on the fact that the linearized Bregman (LB) algorithm is equivalent to a gradient descent method applied to a certain dual formulation. We show that the LB method requires O(1/ε) iterations to obtain an ε-optimal solution, and that the ALB algorithm reduces this iteration complexity to O(1/√ε) while requiring almost the same computational effort per iteration. Numerical results on compressed sensing and matrix completion problems demonstrate that the ALB method can be significantly faster than the LB method.
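
    To make the dual view concrete: for the regularized formulation min_x mu*||x||_1 + 0.5*||x||^2 subject to A x = b, the dual gradient is b - A*shrink(A^T y, mu), linearized Bregman is plain gradient ascent on this dual, and the accelerated variant adds Nesterov extrapolation on the dual variable. The sketch below follows that recipe; the step size and the value of mu are illustrative assumptions.

    import numpy as np

    def shrink(v, mu):
        # Soft-thresholding, the proximal map of mu*||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

    def accelerated_linearized_bregman(A, b, mu=5.0, iters=5_000):
        m, _ = A.shape
        tau = 1.0 / np.linalg.norm(A, 2) ** 2     # safe dual step size
        y_old = np.zeros(m); z = y_old.copy(); t = 1.0
        for _ in range(iters):
            x = shrink(A.T @ z, mu)               # primal point from the dual
            y_new = z + tau * (b - A @ x)         # dual gradient ascent step
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = y_new + (t - 1.0) / t_new * (y_new - y_old)  # extrapolation
            y_old, t = y_new, t_new
        return shrink(A.T @ y_old, mu)

    # Sparse recovery toy; mu must be large enough for exact regularization.
    rng = np.random.default_rng(4)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
    print(np.linalg.norm(accelerated_linearized_bregman(A, A @ x_true) - x_true))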