    Least squares residuals and minimal residual methods

    We study Krylov subspace methods for solving unsymmetric linear algebraic systems that minimize the norm of the residual at each step (minimal residual (MR) methods). MR methods are often formulated in terms of a sequence of least squares (LS) problems of increasing dimension. We present several basic identities and bounds for the LS residual. These results are interesting in the general context of solving LS problems. When applied to MR methods, they show that the size of the MR residual is strongly related to the conditioning of different bases of the same Krylov subspace. Using different bases is useful in theory because relating convergence to the characteristics of different bases offers new insight into the behavior of MR methods. Different bases also lead to different implementations which are mathematically equivalent but can differ numerically. Our theoretical results are used for a finite precision analysis of implementations of the GMRES method [Y. Saad and M. H. Schultz, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856--869]. We explain that the choice of the basis is fundamental for the numerical stability of the implementation. As demonstrated in the case of Simpler GMRES [H. F. Walker and L. Zhou, Numer. Linear Algebra Appl., 1 (1994), pp. 571--581], even the best orthogonalization technique used for computing the basis cannot compensate for the loss of accuracy due to an inappropriate choice of the basis. In particular, we prove that Simpler GMRES is inherently less numerically stable than the classical GMRES implementation due to Saad and Schultz [SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856--869].
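    The MR/LS connection described in this abstract can be sketched in a few lines: build an orthonormal Krylov basis by Arnoldi, then minimize the residual norm by solving the small Hessenberg least squares problem. The sketch below is illustrative only (dense NumPy, zero initial guess, no restarting); it is not one of the implementations analyzed in the paper, and the function name is made up.

```python
import numpy as np

def gmres_ls(A, b, m=30):
    """Minimal GMRES-style MR sketch: build an Arnoldi basis of the
    Krylov subspace, then minimize the residual norm by solving the
    small (m+1) x m least squares problem with the Hessenberg matrix."""
    n = len(b)
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:         # lucky breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # LS problem: minimize || beta*e1 - H y || over y
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return V[:, :m] @ y
```

    Note that the orthonormal Arnoldi basis used here is exactly the kind of basis choice whose conditioning the paper relates to the size of the MR residual.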

    Scalability Analysis of Parallel GMRES Implementations

    Applications involving large sparse nonsymmetric linear systems encourage parallel implementations of robust iterative solution methods, such as GMRES(k). Two parallel versions of GMRES(k) based on different data distributions and using Householder reflections in the orthogonalization phase, and variations of these which adapt the restart value k, are analyzed with respect to scalability (their ability to maintain fixed efficiency with an increase in problem size and number of processors). A theoretical algorithm-machine model for scalability is derived and validated by experiments on three parallel computers, each with different machine characteristics.
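    As a usage-level illustration of restarted GMRES(k), the sketch below solves a small nonsymmetric tridiagonal system (a made-up test problem, not one from the paper) with SciPy's serial implementation; the restart value k caps the memory and per-cycle orthogonalization cost at the price of possibly more restart cycles.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Illustrative nonsymmetric, diagonally dominant tridiagonal system.
n = 200
A = diags([-1.2, 2.5, -0.8], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# GMRES(10): restart every 10 Arnoldi steps.
x, info = gmres(A, b, restart=10)
assert info == 0  # 0 signals convergence
```

    Adaptive variants such as those analyzed in the paper would, in effect, adjust the restart value between cycles based on observed convergence behavior.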

    An assessment of two classes of variational multiscale methods for the simulation of incompressible turbulent flows

    A numerical assessment of two classes of variational multiscale (VMS) methods for the simulation of incompressible flows is presented. Two types of residual-based VMS methods and two types of projection-based VMS methods are included in this assessment. The numerical simulations are performed for turbulent channel flow problems at various friction Reynolds numbers. It turns out that the residual-based VMS methods, in particular when used with a pair of inf-sup stable finite elements, usually give the most accurate results for second-order statistics. For this pair of finite element spaces, a flexible GMRES method with a Least Squares Commutator (LSC) preconditioner proved to be an efficient solver.

    Can coercive formulations lead to fast and accurate solution of the Helmholtz equation?

    A new, coercive formulation of the Helmholtz equation was introduced in [Moiola, Spence, SIAM Rev. 2014]. In this paper we investigate $h$-version Galerkin discretisations of this formulation, and the iterative solution of the resulting linear systems. We find that the coercive formulation behaves similarly to the standard formulation in terms of the pollution effect (i.e. to maintain accuracy as $k\to\infty$, $h$ must decrease with $k$ at the same rate as for the standard formulation). We prove $k$-explicit bounds on the number of GMRES iterations required to solve the linear system of the new formulation when it is preconditioned with a prescribed symmetric positive-definite matrix. Even though the number of iterations grows with $k$, these are the first such rigorous bounds on the number of GMRES iterations for a preconditioned formulation of the Helmholtz equation, where the preconditioner is a symmetric positive-definite matrix.
    Comment: 27 pages, 7 figures

    A fast semi-direct least squares algorithm for hierarchically block separable matrices

    We present a fast algorithm for linear least squares problems governed by hierarchically block separable (HBS) matrices. Such matrices are generally dense but data-sparse and can describe many important operators including those derived from asymptotically smooth radial kernels that are not too oscillatory. The algorithm is based on a recursive skeletonization procedure that exposes this sparsity and solves the dense least squares problem as a larger, equality-constrained, sparse one. It relies on a sparse QR factorization coupled with iterative weighted least squares methods. In essence, our scheme consists of a direct component, comprised of matrix compression and factorization, followed by an iterative component to enforce certain equality constraints. At most two iterations are typically required for problems that are not too ill-conditioned. For an $M \times N$ HBS matrix with $M \geq N$ having bounded off-diagonal block rank, the algorithm has optimal $\mathcal{O}(M + N)$ complexity. If the rank increases with the spatial dimension as is common for operators that are singular at the origin, then this becomes $\mathcal{O}(M + N)$ in 1D, $\mathcal{O}(M + N^{3/2})$ in 2D, and $\mathcal{O}(M + N^{2})$ in 3D. We illustrate the performance of the method on both over- and underdetermined systems in a variety of settings, with an emphasis on radial basis function approximation and efficient updating and downdating.
    Comment: 24 pages, 8 figures, 6 tables; to appear in SIAM J. Matrix Anal. Appl.
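    The semi-direct structure described here (factor once, then a few cheap refinement sweeps) can be illustrated on a small dense problem. In the sketch below a dense QR stands in for the paper's sparse QR of the skeletonized system, and refinement via corrected semi-normal equations plays the role of the iterative component; the function name and test problem are made up for illustration.

```python
import numpy as np

def ls_seminormal_refined(A, b, iters=2):
    """Direct-plus-iterative least squares sketch: factor once (the
    "direct component"), then apply a few refinement sweeps solving
    R^T R dx = A^T r, where r is the current LS residual."""
    R = np.linalg.qr(A, mode='r')        # one-time factorization
    x = np.zeros(A.shape[1])
    for _ in range(iters):               # cheap iterative component
        r = b - A @ x                    # current LS residual
        rhs = A.T @ r
        dx = np.linalg.solve(R, np.linalg.solve(R.T, rhs))
        x += dx
    return x
```

    As in the paper's observation, for well-conditioned problems very few sweeps are needed: the first sweep already produces the normal-equations solution, and the second merely polishes it.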