134 research outputs found

    Minimizing the Euclidean Condition Number

    This paper considers the problem of determining the row and/or column scaling of a matrix A that minimizes the condition number of the scaled matrix. The problem has been studied by many authors. For the ∞-norm and the 1-norm, the scaling problem was completely solved in the 1960s. It is the Euclidean-norm case that has widespread application in robust control analyses: for example, in integral controllability tests based on steady-state information, in the selection of sensors and actuators based on dynamic information, and in studying the sensitivity of stability to uncertainty in control systems. Minimizing the scaled Euclidean condition number has remained an open question; researchers have proposed numerical approaches, but none of them guaranteed convergence to the true minimum. This paper provides a convex optimization procedure to determine the scalings that minimize the Euclidean condition number. The optimization can be solved in polynomial time with off-the-shelf software.
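    How much diagonal scaling can shrink the Euclidean condition number is easy to see numerically. The sketch below uses a cheap one-pass row/column 2-norm equilibration as a stand-in heuristic (this is not the paper's convex procedure, which computes the optimal scaling); the matrix A is a hypothetical badly scaled example.

    ```python
    import numpy as np

    # Hypothetical badly scaled matrix: entries span 12 orders of magnitude.
    A = np.array([[1e6, 2.0],
                  [1.0, 1e-6]])

    # One-pass equilibration heuristic (NOT the paper's optimal method):
    # scale rows by their 2-norms, then columns of the result by theirs.
    d1 = 1.0 / np.linalg.norm(A, axis=1)   # row scaling D1
    B = d1[:, None] * A
    d2 = 1.0 / np.linalg.norm(B, axis=0)   # column scaling D2
    B = B * d2[None, :]

    print(np.linalg.cond(A))   # on the order of 1e12
    print(np.linalg.cond(B))   # single digits: many orders of magnitude smaller
    ```

    Even this crude heuristic collapses the condition number from about 1e12 to single digits here; the paper's contribution is a convex formulation whose solution provably attains the minimum over all such D1, D2.
    
    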

    ILU Smoothers for AMG with Scaled Triangular Factors

    ILU smoothers are effective in the algebraic multigrid (AMG) V-cycle for reducing high-frequency components of the residual error. However, direct triangular solves are comparatively slow on GPUs. Previous work by Chow and Patel (2015) and Anzt et al. (2015) demonstrated the advantages of Jacobi relaxation as an alternative. Depending on the threshold and fill-level parameters chosen, the factors are highly non-normal, and Jacobi is then unlikely to converge in a small number of iterations. The Ruiz algorithm applies row or row/column scaling to U in order to reduce the departure from normality, and the inherently sequential solve is replaced with a Richardson iteration. There are several advantages beyond the lower compute time. Scaling is performed locally for a diagonal block of the global matrix because it is applied directly to the factor. An ILUT Schur complement smoother maintains a constant GMRES iteration count as the number of MPI ranks increases, and thus parallel strong scaling is improved. The new algorithms are included in hypre, and achieve improved time to solution for several exascale applications, including the Nalu-Wind and PeleLM pressure solvers. For large problem sizes, GMRES+AMG with iterative triangular solves executes at least five times faster than with direct solves on massively parallel GPUs.
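    The scaling step can be sketched as follows: a minimal dense-matrix Ruiz-style equilibration in Python (illustrative only; the hypre implementation works on sparse triangular factors, and the function name and iteration count here are our own).

    ```python
    import numpy as np

    def ruiz_scale(A, iters=30):
        """Ruiz equilibration sketch: repeatedly divide each row and column
        by the square root of its infinity norm until all norms balance at 1.
        Returns the scaled matrix and the accumulated diagonal scalings, so
        that A_scaled = diag(Dr) @ A @ diag(Dc)."""
        A = np.array(A, dtype=float)
        m, n = A.shape
        Dr, Dc = np.ones(m), np.ones(n)
        for _ in range(iters):
            r = np.sqrt(np.abs(A).max(axis=1))   # row inf-norms
            c = np.sqrt(np.abs(A).max(axis=0))   # column inf-norms
            r[r == 0] = 1.0                      # leave zero rows/cols alone
            c[c == 0] = 1.0
            A = A / r[:, None] / c[None, :]
            Dr /= r
            Dc /= c
        return A, Dr, Dc
    ```

    After convergence every row and column of the scaled matrix has infinity norm 1, which is what reduces the departure from normality enough for the Richardson/Jacobi iteration to converge quickly.
    
    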

    Computing the singular value decomposition with high relative accuracy

    We analyze when it is possible to compute the singular values and singular vectors of a matrix with high relative accuracy. This means that each computed singular value is guaranteed to have some correct digits, even if the singular values have widely varying magnitudes. This is in contrast to the absolute accuracy provided by conventional backward stable algorithms, which in general only guarantee correct digits in the singular values with large enough magnitudes. It is of interest to compute the tiniest singular values with several correct digits, because in some cases, such as finite element problems and quantum mechanics, it is the smallest singular values that have physical meaning and should be determined accurately by the data. Many recent papers have identified special classes of matrices where high relative accuracy is possible, since it is not possible in general. The perturbation theory and algorithms for these matrix classes have been quite different, motivating us to seek a common perturbation theory and common algorithm. We provide these in this paper, and show that high relative accuracy is possible in many new cases as well. The briefest way to describe our results is that we can compute the SVD of G to high relative accuracy provided we can accurately factor G = XDY^T, where D is diagonal and X and Y are any well-conditioned matrices; furthermore, the LDU factorization frequently does the job. We provide many examples of matrix classes permitting such an LDU decomposition.
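    The key premise can be illustrated numerically (a sketch with hypothetical randomly generated factors, not the paper's algorithm): if G = XDY^T with D diagonal and X, Y well conditioned, then each singular value of G is pinned to the corresponding |d_i|, up to factors bounded by the extreme singular values of X and Y, even across many orders of magnitude.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    # Well-conditioned (but not orthogonal) factors: orthogonal + small noise.
    X = np.linalg.qr(rng.standard_normal((n, n)))[0] + 0.1 * rng.standard_normal((n, n))
    Y = np.linalg.qr(rng.standard_normal((n, n)))[0] + 0.1 * rng.standard_normal((n, n))
    d = 10.0 ** -np.arange(0.0, 2 * n, 2)   # graded: 1, 1e-2, ..., 1e-8
    G = X @ np.diag(d) @ Y.T

    sigma = np.linalg.svd(G, compute_uv=False)   # descending order
    sx = np.linalg.svd(X, compute_uv=False)
    sy = np.linalg.svd(Y, compute_uv=False)

    # Multiplicative singular value bounds:
    #   sigma_min(X) * sigma_min(Y) * d_i  <=  sigma_i(G)  <=  sigma_max(X) * sigma_max(Y) * d_i
    lo = sx[-1] * sy[-1] * d
    hi = sx[0] * sy[0] * d
    assert np.all(lo * (1 - 1e-10) <= sigma) and np.all(sigma <= hi * (1 + 1e-10))
    ```

    So an accurate factorization of G determines every singular value, including the tiniest, to a relative accuracy governed only by the conditioning of X and Y, which is the structural fact the paper's common algorithm exploits.
    
    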

    Scaling algorithms for matrices

    We present an iterative algorithm, called SCALGM, which asymptotically scales both rows and columns of any given matrix such that each element of the scaled matrix is in the interval [-1, 1] and the elements of minimum magnitude are maximized. The objective is to make the condition number reasonably small, thus causing the pivoting process in Gaussian elimination to work well, and to diagnose any instability in the elimination process. Numerical evidence is presented showing the effectiveness of the algorithm.
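    SCALGM itself is not reproduced here, but a toy geometric-mean row/column scaling in the same spirit (the function name, iteration scheme, and parameters below are our own, not the published algorithm) shows how alternating scalings can pull all entries into [-1, 1] while pushing up the smallest magnitudes.

    ```python
    import numpy as np

    def geometric_mean_scale(A, iters=10):
        """Sketch in the spirit of SCALGM (not the published algorithm):
        repeatedly scale each row, then each column, by the reciprocal square
        root of (max |a_ij| * min nonzero |a_ij|), which centers each row's and
        column's magnitude range around 1; finally normalize into [-1, 1]."""
        A = np.array(A, dtype=float)
        for _ in range(iters):
            for axis in (1, 0):                      # rows first, then columns
                mags = np.abs(A)
                mx = mags.max(axis=axis)
                mn = np.where(mags > 0, mags, np.inf).min(axis=axis)
                s = 1.0 / np.sqrt(mx * mn)
                s = np.where(np.isfinite(s), s, 1.0)  # skip all-zero rows/cols
                A = s[:, None] * A if axis == 1 else A * s[None, :]
        return A / np.abs(A).max()                    # land in [-1, 1]
    ```

    On a graded example such as [[1e4, 2], [1, 1e-4]], the smallest entry magnitude rises from 1e-8 (after naive normalization) to about 0.7, which is the kind of balance that keeps Gaussian elimination pivoting well behaved.
    
    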

    Survey and Comparison of Matrix Scaling Methods

    Computer Science

    Accurate computation of singular values and eigenvalues of symmetric matrices

    We review recent results in relative perturbation theory for eigenvalue and singular value problems, together with highly accurate algorithms which compute eigenvalues and singular values to the highest possible relative accuracy.
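    One classical building block behind such highly accurate algorithms is the Jacobi eigenvalue method, which for well-scaled symmetric positive definite matrices attains high relative accuracy. A minimal cyclic-Jacobi sketch (an illustration of the technique, not any specific surveyed algorithm):

    ```python
    import numpy as np

    def jacobi_eigenvalues(A, tol=1e-12, max_sweeps=50):
        """Cyclic Jacobi method: repeatedly annihilate each off-diagonal entry
        A[p, q] with a Givens rotation until the off-diagonal norm is tiny.
        Returns the eigenvalues in ascending order."""
        A = np.array(A, dtype=float)
        n = A.shape[0]
        for _ in range(max_sweeps):
            off = np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))
            if off < tol:
                break
            for p in range(n - 1):
                for q in range(p + 1, n):
                    if A[p, q] == 0.0:
                        continue
                    # Rotation angle that zeroes A[p, q] in J.T @ A @ J.
                    theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                    c, s = np.cos(theta), np.sin(theta)
                    J = np.eye(n)
                    J[p, p] = J[q, q] = c
                    J[p, q], J[q, p] = s, -s
                    A = J.T @ A @ J
        return np.sort(np.diag(A))
    ```

    Unlike tridiagonalization-based solvers, every update here is a tiny orthogonal similarity applied to the full matrix, which is why Jacobi-type methods can preserve relative accuracy in the small eigenvalues.
    
    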