
    Accelerating Random Kaczmarz Algorithm Based on Clustering Information

    The Kaczmarz algorithm is an efficient iterative algorithm for solving overdetermined consistent systems of linear equations. During each update step, Kaczmarz chooses the hyperplane defined by an individual equation and projects the current estimate of the exact solution onto that hyperplane to obtain a new estimate. Many variants of the Kaczmarz algorithm have been proposed that choose better hyperplanes. Using the properties of randomly sampled data in high-dimensional space, we propose an accelerated algorithm based on clustering information to improve block Kaczmarz and Kaczmarz via the Johnson-Lindenstrauss lemma. Additionally, we theoretically demonstrate a convergence improvement for the block Kaczmarz algorithm.
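
    A minimal sketch of the projection step described in this abstract, assuming a consistent overdetermined system Ax = b and using only NumPy with uniformly random row selection; the function and variable names are illustrative and not taken from the paper.

    import numpy as np

    def kaczmarz(A, b, n_iters=1000, seed=0):
        """Classical Kaczmarz iteration with uniformly random row selection."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        x = np.zeros(n)
        row_norms_sq = np.einsum("ij,ij->i", A, A)   # squared norms of the rows a_i
        for _ in range(n_iters):
            i = rng.integers(m)                      # pick one equation a_i^T x = b_i
            a_i = A[i]
            # project the current estimate onto the hyperplane {x : a_i^T x = b_i}
            x = x + (b[i] - a_i @ x) / row_norms_sq[i] * a_i
        return x

    For a consistent system, for example b = A @ x_true with A having more rows than columns, the iterates converge to x_true; the variants surveyed here differ mainly in how the row i is chosen at each step.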

    Acceleration of Randomized Kaczmarz Method via the Johnson-Lindenstrauss Lemma

    The Kaczmarz method is an algorithm for finding the solution to an overdetermined consistent system of linear equations Ax = b by iteratively projecting onto the solution spaces. The randomized version put forth by Strohmer and Vershynin yields provably exponential convergence in expectation, which for highly overdetermined systems even outperforms the conjugate gradient method. In this article we present a modified version of the randomized Kaczmarz method which at each iteration selects the optimal projection from a randomly chosen set, which in most cases significantly improves the convergence rate. We utilize a Johnson-Lindenstrauss dimension reduction technique to keep the runtime on the same order as the original randomized version, adding only extra preprocessing time. We present a series of empirical studies which demonstrate the remarkable acceleration in convergence to the solution using this modified approach.
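
    A hedged sketch of the "best projection from a random set" idea combined with a Johnson-Lindenstrauss sketch for estimating residuals cheaply. The base row-sampling distribution proportional to squared row norms follows Strohmer and Vershynin; the sketch dimension d, batch size, and all names are assumptions rather than the authors' exact formulation.

    import numpy as np

    def rk_jl(A, b, n_iters=2000, batch=10, d=20, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        Phi = rng.normal(scale=1.0 / np.sqrt(d), size=(n, d))  # JL map, E[Phi Phi^T] = I
        AS = A @ Phi                      # sketched rows, computed once as preprocessing
        row_norms_sq = np.einsum("ij,ij->i", A, A)
        probs = row_norms_sq / row_norms_sq.sum()   # Strohmer-Vershynin row sampling
        x = np.zeros(n)
        xs = Phi.T @ x                    # sketched iterate, kept in sync incrementally
        for _ in range(n_iters):
            S = rng.choice(m, size=batch, p=probs)
            # approximate distances |b_i - <a_i, x>| / ||a_i|| from the low-dimensional sketch
            est = np.abs(b[S] - AS[S] @ xs) / np.sqrt(row_norms_sq[S])
            i = S[np.argmax(est)]         # row whose hyperplane looks farthest from x
            t = (b[i] - A[i] @ x) / row_norms_sq[i]
            x = x + t * A[i]              # exact Kaczmarz projection onto that hyperplane
            xs = xs + t * AS[i]           # cheap O(d) update keeps xs equal to Phi^T x
        return x

    The incremental update of the sketched iterate is what keeps the per-iteration cost on the same order as plain randomized Kaczmarz: only the batch of sketched residuals and one exact projection are computed each step.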

    A subspace constrained randomized Kaczmarz method for structure or external knowledge exploitation

    We study a subspace constrained version of the randomized Kaczmarz algorithm for solving large linear systems in which the iterates are confined to the space of solutions of a selected subsystem. We show that the subspace constraint leads to an accelerated convergence rate, especially when the system has structure such as having coherent rows or being approximately low-rank. On Gaussian-like random data, it results in a form of dimension reduction that effectively improves the aspect ratio of the system. Furthermore, this method serves as a building block for a second, quantile-based algorithm for the problem of solving linear systems with arbitrary sparse corruptions, which is able to efficiently exploit partial external knowledge about uncorrupted equations and achieve convergence in difficult settings such as almost-square systems. Numerical experiments on synthetic and real-world data support our theoretical results and demonstrate the validity of the proposed methods for even more general data models than guaranteed by the theory.
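
    One way to realize the subspace constraint described above is to reparametrize the iterate as x = x0 + N y, where x0 satisfies the selected subsystem A[J] x = b[J] and N spans its null space, and then run randomized Kaczmarz on the remaining rows in the reduced variable y. This is a minimal illustrative sketch under that assumption, not necessarily the paper's exact scheme; it presumes the rows indexed by J form a consistent (e.g. uncorrupted) subsystem.

    import numpy as np

    def subspace_constrained_rk(A, b, J, n_iters=2000, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        B, c = A[J], b[J]
        x0, *_ = np.linalg.lstsq(B, c, rcond=None)       # a point satisfying the subsystem
        _, s, Vt = np.linalg.svd(B)
        rank = int((s > s.max() * max(B.shape) * np.finfo(float).eps).sum())
        N = Vt[rank:].T                                   # orthonormal basis of null(A[J])
        rest = np.setdiff1d(np.arange(m), J)
        Ar = A[rest] @ N                                  # remaining rows in the reduced variable y
        br = b[rest] - A[rest] @ x0
        row_norms_sq = np.einsum("ij,ij->i", Ar, Ar)
        y = np.zeros(N.shape[1])
        for _ in range(n_iters):
            i = rng.integers(len(rest))
            if row_norms_sq[i] > 0:
                y = y + (br[i] - Ar[i] @ y) / row_norms_sq[i] * Ar[i]
        return x0 + N @ y                                 # every iterate satisfies A[J] x = b[J]

    Because the iteration runs in the null space of the subsystem, the effective number of unknowns drops by the rank of A[J], which is the "improved aspect ratio" effect the abstract refers to.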