Randomly sparsified Richardson iteration is really fast

Abstract

Recently, a class of algorithms combining classical fixed-point iterations with repeated random sparsification of approximate solution vectors has been successfully applied to eigenproblems with matrices as large as $10^{108} \times 10^{108}$. So far, a complete mathematical explanation for their success has proven elusive. Additionally, the methods have not been extended to linear system solves. In this paper we propose a new scheme based on repeated random sparsification that is capable of solving linear systems in extremely high dimensions. We provide a complete mathematical analysis of this new algorithm. Our analysis establishes a faster-than-Monte-Carlo convergence rate and justifies use of the scheme even when the solution vector itself is too large to store.

Comment: 27 pages, 2 figures
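To make the idea concrete, the sketch below combines a plain Richardson iteration with a simplified unbiased random sparsification step and iterate averaging. It is only an illustration of the general approach under assumed choices: the function names (`sparsify`, `sparse_richardson`), the i.i.d. sampling rule, and all parameter values are illustrative, not the paper's exact compression scheme or analysis.

```python
import numpy as np


def sparsify(x, m, rng):
    """Unbiased random sparsification keeping at most m sampled entries.

    Simplified i.i.d. scheme (assumed here for illustration): sample indices
    with probability proportional to |x_i| and reweight so that E[output] = x.
    """
    norm1 = np.abs(x).sum()
    if norm1 == 0.0:
        return x.copy()
    p = np.abs(x) / norm1
    idx = rng.choice(x.size, size=m, p=p)            # i.i.d. index draws
    y = np.zeros_like(x)
    np.add.at(y, idx, np.sign(x[idx]) * norm1 / m)   # unbiased reweighting
    return y


def sparse_richardson(A, b, m, n_iters, alpha=1.0, seed=0):
    """Richardson iteration x <- x + alpha*(b - A x) with sparsified iterates.

    Returns the running average of the iterates, which damps the noise
    introduced by the sparsification step.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b)
    avg = np.zeros_like(b)
    for k in range(n_iters):
        x = sparsify(x + alpha * (b - A @ x), m, rng)
        avg += (x - avg) / (k + 1)                   # running mean of iterates
    return avg


# Small dense demo.  In the extreme-dimensional regime targeted by such methods,
# A @ x would instead be evaluated column-by-column on the nonzeros of x, and
# neither A nor a dense solution vector would ever be stored.
n = 2000
demo_rng = np.random.default_rng(1)
A = np.eye(n) + 0.5 * demo_rng.random((n, n)) / n   # spectral radius of I - A < 1
b = demo_rng.random(n)
x_hat = sparse_richardson(A, b, m=500, n_iters=500)
print("relative residual:", np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b))
```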
