
    Author Index Volume 231 (2009)


    Randomly sparsified Richardson iteration is really fast

    Recently, a class of algorithms combining classical fixed-point iterations with repeated random sparsification of approximate solution vectors has been successfully applied to eigenproblems with matrices as large as $10^{108} \times 10^{108}$. So far, a complete mathematical explanation for their success has proven elusive. Additionally, the methods have not been extended to linear system solves. In this paper we propose a new scheme based on repeated random sparsification that is capable of solving linear systems in extremely high dimensions. We provide a complete mathematical analysis of this new algorithm. Our analysis establishes a faster-than-Monte Carlo convergence rate and justifies use of the scheme even when the solution vector itself is too large to store. Comment: 27 pages, 2 figures.
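    The idea the abstract describes, a fixed-point iteration whose iterate is randomly sparsified at every step, can be sketched as follows. This is an illustrative reconstruction, not the paper's actual algorithm: the unbiased sampling-based sparsifier, the iterate averaging, and all parameter names are assumptions.

    ```python
    import numpy as np

    def sparsify(x, m, rng):
        """Unbiased random sparsification: sample m indices i.i.d. with
        probability |x_i| / ||x||_1 and reweight, so E[sparsify(x)] = x
        while the result has at most m nonzero entries."""
        norm1 = np.abs(x).sum()
        if norm1 == 0.0:
            return np.zeros_like(x)
        p = np.abs(x) / norm1
        idx = rng.choice(len(x), size=m, p=p)
        out = np.zeros_like(x)
        np.add.at(out, idx, np.sign(x[idx]) * norm1 / m)
        return out

    def sparsified_richardson(A, b, omega, m, iters, burn_in, seed=0):
        """Richardson iteration x <- x + omega*(b - A@x) for Ax = b, with
        the iterate randomly sparsified at every step; the answer is the
        average of the post-burn-in iterates."""
        rng = np.random.default_rng(seed)
        x = np.zeros_like(b)
        avg = np.zeros_like(b)
        for k in range(iters):
            x = sparsify(x + omega * (b - A @ x), m, rng)
            if k >= burn_in:
                avg += x
        return avg / (iters - burn_in)
    ```

    Because the sparsifier is unbiased, the iteration still contracts toward the true solution in expectation, and averaging the iterates suppresses the sparsification noise; each individual iterate needs only O(m) storage, which is what makes such schemes viable when the full solution vector is too large to store.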

    Adaptive Monte Carlo methods for matrix equations with applications

    Abstract: This paper discusses empirical studies with both the adaptive correlated sequential sampling method and the adaptive importance sampling method, which can be used to solve matrix and integral equations. Both methods achieve geometric convergence (provided the number of random walks per stage is large enough) in the sense that e_ν ≤ c·λ^ν, where e_ν is the error at stage ν, λ ∈ (0,1) is a constant, and c > 0 is also a constant. Thus, both methods converge much faster than the conventional Monte Carlo method. Our extensive numerical test results show that the adaptive importance sampling method converges faster than the adaptive correlated sequential sampling method, even with many fewer random walks per stage for the same problem. The methods can be applied to problems involving large-scale matrix equations with non-sparse coefficient matrices. We also provide an application of the adaptive importance sampling method to the numerical solution of integral equations, where the integral equations are converted into matrix equations (with order up to 8192×8192) after discretization. By using Niederreiter's sequence instead of a pseudo-random sequence when generating the nodal point set used in discretizing the phase space Γ, we find that the average absolute errors or relative errors at nodal points can be reduced by a factor of more than one hundred.
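    The stage-wise structure behind the geometric convergence bound e_ν ≤ c·λ^ν can be sketched as follows: at each stage, a fresh batch of random walks estimates the correction for the residual equation, so the error shrinks by a constant factor per stage. This is a minimal sketch of adaptive staging built on the classical von Neumann–Ulam random-walk estimator, not the paper's exact correlated-sampling or importance-sampling estimators; all function and parameter names are assumptions.

    ```python
    import numpy as np

    def mc_component(H, f, i, n_walks, rng):
        """Von Neumann-Ulam estimate of component i of the solution of
        x = H x + f, assuming the row sums of |H| are < 1. A walk at
        state s survives with probability sum_j |H[s, j]|, moves to j
        with probability |H[s, j]| / sum_j |H[s, j]|, and the sign of
        the traversed entries supplies the weight, so the accumulated
        score is an unbiased estimate of x_i."""
        absH = np.abs(H)
        survive = absH.sum(axis=1)
        n = len(f)
        total = 0.0
        for _ in range(n_walks):
            s, w, score = i, 1.0, f[i]
            while rng.random() < survive[s]:
                t = rng.choice(n, p=absH[s] / survive[s])
                w *= np.sign(H[s, t])
                s = t
                score += w * f[s]
            total += score
        return total / n_walks

    def adaptive_mc(H, f, stages, n_walks, seed=0):
        """Stage-wise Monte Carlo: at each stage, estimate the correction
        d solving d = H d + r for the current residual r, then update the
        running approximation. Since r shrinks with the error, the Monte
        Carlo noise per stage shrinks too, giving geometric convergence."""
        rng = np.random.default_rng(seed)
        y = np.zeros_like(f)
        for _ in range(stages):
            r = f + H @ y - y  # residual of the current approximation
            y = y + np.array([mc_component(H, r, i, n_walks, rng)
                              for i in range(len(f))])
        return y
    ```

    Note that nothing here requires H to be sparse: each walk step touches one row, which is why such schemes remain usable for the large non-sparse coefficient matrices the abstract mentions.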