
    TR-2008005: Weakly Random Additive Preconditioning for Matrix Computations

    Our weakly random additive preconditioners facilitate the solution of linear systems of equations and other fundamental matrix computations. Compared to the popular SVD-based multiplicative preconditioners, these preconditioners are generated more readily and for a much wider class of input matrices. Furthermore, they better preserve matrix structure and sparseness and have a wider range of applications, in particular to linear systems with rectangular coefficient matrices. We study the generation of such preconditioners and their impact on the conditioning of the input matrix. Our analysis and experiments show the power of our approach even where w…
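
    As a rough illustration of the idea (not the paper's actual construction), the sketch below adds a scaled random term of small rank to an ill-conditioned matrix and compares condition numbers. The matrix size, the rank r, the dense Gaussian perturbation, and the scaling are assumptions made purely for the example.

        import numpy as np

        rng = np.random.default_rng(0)
        n, r = 200, 2                                  # size and numerical nullity (assumed)

        # Ill-conditioned test matrix: its last r singular values are tiny.
        U, _ = np.linalg.qr(rng.standard_normal((n, n)))
        V, _ = np.linalg.qr(rng.standard_normal((n, n)))
        s = np.logspace(0, -1, n)
        s[-r:] = 1e-14
        A = (U * s) @ V.T                              # A = U diag(s) V^T

        # Additive preconditioner: C = A + scale * P Q^T with random n-by-r P, Q,
        # scaled so the low-rank term is comparable to ||A|| in norm.
        P = rng.standard_normal((n, r))
        Q = rng.standard_normal((n, r))
        scale = np.linalg.norm(A, 2) / (np.linalg.norm(P, 2) * np.linalg.norm(Q, 2))
        C = A + scale * (P @ Q.T)

        print("cond(A) =", np.linalg.cond(A))          # roughly 1e14 here
        print("cond(C) =", np.linalg.cond(C))          # typically modest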

    Robust and parallel scalable iterative solutions for large-scale finite cell analyses

    The finite cell method is a highly flexible discretization technique for numerical analysis on domains with complex geometries. By using a non-boundary-conforming computational domain that can be easily meshed, automated computations on a wide range of geometrical models can be performed. Application of the finite cell method, and other immersed methods, to large real-life and industrial problems is often limited by the conditioning problems associated with these methods. These conditioning problems have caused researchers to resort to direct solution methods, which significantly limit the maximum size of solvable systems. Iterative solvers are better suited for large-scale computations than their direct counterparts due to their lower memory requirements and suitability for parallel computing. These benefits can, however, only be exploited when systems are properly conditioned. In this contribution we present an additive Schwarz type preconditioner that enables efficient and parallel scalable iterative solutions of large-scale multi-level hp-refined finite cell analyses. Comment: 32 pages, 17 figures
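
    A minimal sketch of a one-level additive Schwarz preconditioner used with conjugate gradients is given below; the 1D Poisson test matrix and the overlapping index blocks are illustrative assumptions and do not reproduce the paper's multi-level hp-refined finite cell setting.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 400
        A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # SPD model problem
        b = np.ones(n)

        # Overlapping subdomains: contiguous index blocks with a small overlap.
        block, overlap = 50, 5
        subdomains = [np.arange(max(0, i - overlap), min(n, i + block + overlap))
                      for i in range(0, n, block)]

        # Pre-factorize each local submatrix A_i = R_i A R_i^T.
        local_solves = [spla.factorized(A[np.ix_(idx, idx)].tocsc()) for idx in subdomains]

        def apply_schwarz(r):
            """Additive Schwarz: z = sum_i R_i^T A_i^{-1} R_i r."""
            z = np.zeros_like(r)
            for idx, solve in zip(subdomains, local_solves):
                z[idx] += solve(r[idx])
            return z

        M = spla.LinearOperator((n, n), matvec=apply_schwarz)
        x, info = spla.cg(A, b, M=M)
        print("CG converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))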

    Preconditioning For Matrix Computation

    Preconditioning is a classical subject in the numerical solution of linear systems of equations. The goal is to turn a linear system into another one that is easier to solve. The two central subjects of numerical matrix computations are LIN-SOLVE, that is, the solution of linear systems of equations, and EIGEN-SOLVE, that is, the approximation of the eigenvalues and eigenvectors of a matrix. We focus on the former subject of LIN-SOLVE and show an application to EIGEN-SOLVE. We achieve our goal by applying randomized additive and multiplicative preconditioning. We facilitate the numerical solution by decreasing the condition number of the coefficient matrix of the linear system, which enables reliable numerical solution of LIN-SOLVE. After the introduction in Chapter 1 we recall the definitions and auxiliary results in Chapter 2. Then in Chapter 3 we precondition the linear systems of equations solved at every iteration of the Inverse Power Method applied to EIGEN-SOLVE. These systems are ill conditioned, that is, have large condition numbers, and we decrease these condition numbers by applying randomized additive preconditioning. This is our first subject. Our second subject is randomized multiplicative preconditioning for LIN-SOLVE. In this way we support the application of GENP, that is, Gaussian elimination with no pivoting, and block Gaussian elimination. We prove that the proposed preconditioning methods are efficient when we apply Gaussian random matrices as preconditioners. We confirm these results with our extensive numerical tests. The tests also show that the same methods work as efficiently on average when we use random structured, in particular circulant, preconditioners instead, but we show both formally and experimentally that these preconditioners fail in the case of LIN-SOLVE for the unitary matrix of the discrete Fourier transform, for which Gaussian preconditioners work efficiently.
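
    The sketch below illustrates the multiplicative part of this program under simplifying assumptions: GENP breaks down on a matrix whose leading entry is zero, but runs without trouble after premultiplication by a Gaussian random matrix. The test matrix, the sizes, and the premultiplication-only setup are hypothetical choices for the example.

        import numpy as np

        def genp_solve(A, b):
            """Solve Ax = b by LU elimination with NO pivoting (GENP)."""
            A = A.astype(float).copy()
            b = b.astype(float).copy()
            n = len(b)
            for k in range(n - 1):                 # forward elimination, no row swaps
                if A[k, k] == 0.0:
                    raise ZeroDivisionError("GENP breaks down: zero pivot")
                factors = A[k + 1:, k] / A[k, k]
                A[k + 1:, k + 1:] -= np.outer(factors, A[k, k + 1:])
                b[k + 1:] -= factors * b[k]
            x = np.zeros(n)
            for k in range(n - 1, -1, -1):         # back substitution
                x[k] = (b[k] - A[k, k + 1:] @ x[k + 1:]) / A[k, k]
            return x

        rng = np.random.default_rng(1)
        n = 100
        A = rng.standard_normal((n, n))
        A[0, 0] = 0.0                              # zero leading pivot: GENP fails on A itself
        b = rng.standard_normal(n)

        G = rng.standard_normal((n, n))            # Gaussian multiplicative preconditioner
        x = genp_solve(G @ A, G @ b)               # GENP applied to the preconditioned system
        print("residual:", np.linalg.norm(A @ x - b))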

    Spatially Distributed Stochastic Systems: equation-free and equation-assisted preconditioned computation

    Spatially distributed problems are often approximately modelled in terms of partial differential equations (PDEs) for appropriate coarse-grained quantities (e.g. concentrations). The derivation of accurate PDEs of this kind, starting from finer-scale, atomistic models and using suitable averaging, is often a challenging task; approximate PDEs are typically obtained through mathematical closure procedures (e.g. mean-field approximations). In this paper, we show how such approximate macroscopic PDEs can be exploited in constructing preconditioners to accelerate stochastic simulations for spatially distributed particle-based process models. We illustrate how such preconditioning can improve the convergence of equation-free coarse-grained methods based on coarse timesteppers. Our model problem is a stochastic reaction-diffusion model capable of exhibiting Turing instabilities. Comment: 8 pages, 6 figures, submitted to Journal of Chemical Physics
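
    A hedged sketch of the basic mechanism follows: the Jacobian of an approximate mean-field PDE preconditions a matrix-free Newton-Krylov solve for a fixed point of a coarse timestepper. The reaction term, the noisy explicit-Euler timestepper standing in for a stochastic fine-scale simulation, and all parameters are assumptions for illustration only, not the paper's model.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla
        from scipy.optimize import newton_krylov

        n, L, T, D = 100, 1.0, 0.05, 1e-3
        dx = L / n
        lap = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n), format="csc") / dx**2

        def coarse_timestepper(u, steps=50):
            """Stand-in coarse timestepper: explicit Euler steps of a reaction-diffusion
            system plus tiny frozen noise mimicking a stochastic particle simulation."""
            rng = np.random.default_rng(0)         # frozen noise keeps F deterministic
            dt = T / steps
            for _ in range(steps):
                u = u + dt * (D * (lap @ u) + u - u**3) + 1e-8 * rng.standard_normal(n)
            return u

        def F(u):
            return u - coarse_timestepper(u)       # coarse fixed-point residual

        # Equation-assisted preconditioner: (I - T * J_pde)^{-1}, with J_pde the Jacobian
        # of the approximate mean-field PDE linearized at the initial guess.
        u0 = 0.1 * np.cos(2 * np.pi * np.arange(n) * dx / L)
        J_pde = D * lap + sp.diags(1.0 - 3.0 * u0**2)
        M = spla.LinearOperator((n, n),
                                matvec=spla.factorized((sp.identity(n) - T * J_pde).tocsc()))

        u_star = newton_krylov(F, u0, inner_M=M, f_tol=1e-8)
        print("coarse steady state found, max residual:", np.abs(F(u_star)).max())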

    Accelerating Staggered Fermion Dynamics with the Rational Hybrid Monte Carlo (RHMC) Algorithm

    Improved staggered fermion formulations are a popular choice for lattice QCD calculations. Historically, the algorithm used for such calculations has been the inexact R algorithm, which has systematic errors that only vanish as the square of the integration step-size. We describe how the exact Rational Hybrid Monte Carlo (RHMC) algorithm may be used in this context, and show that for parameters corresponding to current state-of-the-art computations it reduces the cost by a factor of approximately seven while having no step-size errors. Comment: 4 pages, 2 figures, 1 table
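
    The sketch below illustrates the rational-approximation machinery that RHMC relies on: a fractional matrix power is applied in partial-fraction form, so that only shifted linear solves are needed (performed with a multi-shift solver in real lattice QCD codes). Here the partial-fraction coefficients come from Gauss-Legendre quadrature of an integral representation of A^{-1/2} rather than from the Remez algorithm used in practice, and the small dense SPD test matrix is an assumption, not a staggered fermion matrix.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 60
        Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
        eigs = np.linspace(0.1, 10.0, n)                 # assumed spectrum of an SPD operator
        A = (Q * eigs) @ Q.T
        phi = rng.standard_normal(n)

        # Partial-fraction coefficients from A^{-1/2} = (2/pi) * int_0^inf (A + t^2 I)^{-1} dt,
        # discretized by Gauss-Legendre quadrature after the substitution t = tan(theta).
        nodes, weights = np.polynomial.legendre.leggauss(16)
        theta = 0.25 * np.pi * (nodes + 1.0)             # map [-1, 1] -> [0, pi/2]
        shifts = np.tan(theta) ** 2                      # s_k
        coeffs = 0.5 * weights * (1.0 + shifts)          # w_k, absorbs (2/pi)*(pi/4)*sec^2

        # Apply r(A) phi = sum_k w_k (A + s_k I)^{-1} phi via shifted solves.
        approx = sum(w * np.linalg.solve(A + s * np.eye(n), phi)
                     for w, s in zip(coeffs, shifts))

        exact = (Q * (1.0 / np.sqrt(eigs))) @ Q.T @ phi  # exact A^{-1/2} phi for comparison
        print("relative error of rational A^{-1/2} phi:",
              np.linalg.norm(approx - exact) / np.linalg.norm(exact))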