
    Alternating Projections and Douglas-Rachford for Sparse Affine Feasibility

    The problem of finding a vector with the fewest nonzero elements that satisfies an underdetermined system of linear equations is an NP-complete problem that is typically solved numerically via convex heuristics or nicely behaved nonconvex relaxations. In this work we consider elementary methods based on projections for solving a sparse feasibility problem without employing convex heuristics. In a recent paper, Bauschke, Luke, Phan and Wang (2014) showed that, locally, the fundamental method of alternating projections must converge linearly to a solution of the sparse feasibility problem with an affine constraint. In this paper we apply different analytical tools that allow us to show global linear convergence of alternating projections under familiar constraint qualifications. These analytical tools can also be applied to other algorithms. This is demonstrated with the prominent Douglas-Rachford algorithm, for which we establish local linear convergence when applied to the sparse affine feasibility problem.
    Comment: 29 pages, 2 figures, 37 references. Much expanded version from the last submission; title changed to reflect new developments.
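    Both methods iterate two projectors: onto the affine set {x : Ax = b} and onto the nonconvex sparsity set {x : ||x||_0 <= s}. The following NumPy sketch shows what those iterations look like; the function names, the pseudoinverse-based affine projection, and the iteration counts are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def proj_affine(x, A_pinv, A, b):
        # Orthogonal projection onto the affine set {x : A x = b}.
        return x - A_pinv @ (A @ x - b)

    def proj_sparse(x, s):
        # Projection onto {x : ||x||_0 <= s}: keep the s largest-magnitude
        # entries (one selection among possibly many nearest points).
        y = np.zeros_like(x)
        keep = np.argsort(np.abs(x))[-s:]
        y[keep] = x[keep]
        return y

    def alternating_projections(A, b, s, iters=500):
        A_pinv = np.linalg.pinv(A)
        x = A_pinv @ b                       # least-norm point of the affine set
        for _ in range(iters):
            x = proj_affine(proj_sparse(x, s), A_pinv, A, b)
        return x

    def douglas_rachford(A, b, s, iters=500):
        # x_{k+1} = x_k + P_M(2 P_S(x_k) - x_k) - P_S(x_k), with M the affine
        # set and S the sparsity set; a candidate solution is read off as
        # P_S of the (near-)fixed point.
        A_pinv = np.linalg.pinv(A)
        x = A_pinv @ b
        for _ in range(iters):
            p = proj_sparse(x, s)
            x = x + proj_affine(2 * p - x, A_pinv, A, b) - p
        return proj_sparse(x, s)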

    Convergence in Distribution of Randomized Algorithms: The Case of Partially Separable Optimization

    We present a Markov-chain analysis of blockwise-stochastic algorithms for solving partially block-separable optimization problems. Our main contributions to the extensive literature on these methods are statements about the Markov operators and distributions behind the iterates of stochastic algorithms, and in particular about the regularity of the Markov operators and the rates of convergence of the distributions of the corresponding Markov chains. This provides a detailed characterization of the moments of the iterate sequences beyond just their expected behavior. It also serves as a case study of how randomization restores favorable properties that iterating with only partial information destroys. We demonstrate this on stochastic blockwise implementations of the forward-backward and Douglas-Rachford algorithms for nonconvex (and, as a special case, convex) nonsmooth optimization.
    Comment: 25 pages, 43 references.
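    To make the object of study concrete, here is a minimal sketch of a blockwise-stochastic forward-backward iteration on a convex model problem (least squares with an l1 term); the model, step-size rule, and all names are illustrative assumptions rather than the paper's setup. Note how the iterates form a Markov chain: each update depends only on the current point and the randomly sampled block.

    import numpy as np

    rng = np.random.default_rng(0)

    def soft_threshold(v, tau):
        # Prox of tau * ||.||_1 (the backward step for the l1 term).
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def blockwise_stochastic_fb(A, b, lam, n_blocks, iters=2000):
        # min_x 0.5*||A x - b||^2 + lam*||x||_1, updating one random
        # coordinate block per iteration: forward (gradient) step on the
        # smooth part, then the separable prox on the updated block.
        m, n = A.shape
        blocks = np.array_split(np.arange(n), n_blocks)
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, a conservative choice
        x = np.zeros(n)
        r = A @ x - b                                # residual, updated in place
        for _ in range(iters):
            B = blocks[rng.integers(n_blocks)]       # sample a block uniformly
            g = A[:, B].T @ r                        # block of the full gradient
            x_new = soft_threshold(x[B] - step * g, step * lam)
            r += A[:, B] @ (x_new - x[B])            # keep the residual current
            x[B] = x_new
        return x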