
    Alan Turing y los orígenes de la eliminación gaussiana moderna

    Proceedings of the International Symposium "The Alan Turing Legacy", held in Madrid (Spain) on October 23-24, 2012. The symposium was organized and funded by the Real Academia de Ciencias Exactas, Físicas y Naturales of Spain and the Fundación Ramón Areces.
    The solution of a system of linear equations is by far the most important problem in Applied Mathematics. It is important both in itself and because it is an intermediate step in many other important problems. Gaussian elimination is nowadays the standard method for solving this problem numerically on a computer, and it was the first numerical algorithm to be subjected to rounding error analysis. In 1948, Alan Turing published a remarkable paper on this topic: "Rounding-off errors in matrix processes" (Quart. J. Mech. Appl. Math. 1, pp. 287-308). In this paper, Turing formulated Gaussian elimination as the matrix LU factorization and introduced the condition number of a matrix, both fundamental notions of modern Numerical Analysis. In addition, Turing presented an error analysis of Gaussian elimination for general matrices that deeply influenced the spirit of the definitive analysis developed by James Wilkinson in 1961. Alan Turing's work on Gaussian elimination appeared during a fascinating period for modern Numerical Analysis. Other giants of mathematics, such as John von Neumann, Herman Goldstine, and Harold Hotelling, were also working on Gaussian elimination in the mid-1940s. The goal of these researchers was to find an efficient and reliable method for solving systems of linear equations on the modern "automatic computers" then under development. At that time, it was not at all clear whether Gaussian elimination was the right choice.
    The purpose of this paper is to review, at an introductory level, the contributions of Alan Turing and other authors to the error analysis of Gaussian elimination, the historical context of these contributions, and their influence on modern Numerical Analysis. This work was partially supported by the Ministerio de Economía y Competitividad of Spain through grant MTM-2009-09281.
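The two notions the abstract credits to Turing can be made concrete in a few lines. The sketch below is illustrative only (our own helper names, not Turing's 1948 notation): it performs Gaussian elimination with partial pivoting expressed as the factorization PA = LU, and evaluates the infinity-norm condition number kappa(A) = ||A|| * ||A^-1|| on a tiny example.

```python
def lu_factor(A):
    """Gaussian elimination with partial pivoting, as PA = LU.

    Returns (perm, M) where M packs unit-lower L (below the diagonal)
    and U (on and above the diagonal) into one matrix.
    """
    n = len(A)
    M = [row[:] for row in A]
    perm = list(range(n))
    for k in range(n):
        # Partial pivoting: bring the largest remaining entry to the diagonal.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if p != k:
            M[k], M[p] = M[p], M[k]
            perm[k], perm[p] = perm[p], perm[k]
        for i in range(k + 1, n):
            M[i][k] /= M[k][k]          # multiplier, stored in L's slot
            for j in range(k + 1, n):
                M[i][j] -= M[i][k] * M[k][j]
    return perm, M

def lu_solve(perm, LU, b):
    """Solve Ax = b using the packed PA = LU factors."""
    n = len(LU)
    y = [b[p] for p in perm]             # apply the permutation P to b
    for i in range(n):                   # forward substitution with unit L
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    x = y[:]
    for i in reversed(range(n)):         # back substitution with U
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x

def cond_inf(A):
    """Infinity-norm condition number via explicit inverse (fine for tiny n)."""
    n = len(A)
    perm, LU = lu_factor(A)
    inv_cols = [lu_solve(perm, LU, [float(i == j) for i in range(n)])
                for j in range(n)]
    inv_rows = [[inv_cols[j][i] for j in range(n)] for i in range(n)]
    norm = lambda M: max(sum(abs(v) for v in row) for row in M)
    return norm(A) * norm(inv_rows)

A = [[4.0, 3.0], [6.0, 3.0]]
perm, LU = lu_factor(A)
print(lu_solve(perm, LU, [10.0, 12.0]))  # solves Ax = b; exact answer is x = [1, 2]
print(cond_inf(A))                       # infinity-norm condition number (about 15 here)
```

Turing's insight was that this condition number, not the algorithm alone, governs how much accuracy one can expect from the computed solution.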

    LU factorization with panel rank revealing pivoting and its communication avoiding version

    We present LU decomposition with panel rank-revealing pivoting (LU_PRRP), an LU factorization algorithm based on strong rank-revealing QR panel factorization. LU_PRRP is more stable than Gaussian elimination with partial pivoting (GEPP). Our extensive numerical experiments show that the new factorization scheme is as numerically stable as GEPP in practice, but it is more resistant to pathological cases and easily solves the Wilkinson matrix and the Foster matrix. We also present CALU_PRRP, a communication-avoiding version of LU_PRRP that minimizes communication. CALU_PRRP is based on tournament pivoting, with the selection of the pivots at each step of the tournament performed via strong rank-revealing QR factorization. CALU_PRRP is more stable than CALU, the communication-avoiding version of GEPP, and is resistant to pathological cases on which GEPP and CALU fail. Comment: No. RR-7867 (2012)
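The Wilkinson matrix mentioned above is the classic example on which GEPP is vulnerable: its element growth during elimination is exponential in the matrix size. The sketch below (our own helper names, plain Python, not the paper's algorithm) builds that matrix and measures the growth factor of Gaussian elimination with partial pivoting, which reaches 2^(n-1).

```python
def growth_factor(A):
    """Ratio of the largest entry seen during GEPP to the largest entry of A."""
    n = len(A)
    M = [row[:] for row in A]
    a0 = max(abs(v) for row in M for v in row)
    g = a0
    for k in range(n):
        # Partial pivoting: swap in the row with the largest entry in column k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= m * M[k][j]
        g = max(g, max(abs(v) for row in M for v in row))
    return g / a0

def wilkinson(n):
    """Wilkinson's matrix: 1 on the diagonal and last column, -1 below the diagonal."""
    return [[1.0 if i == j or j == n - 1 else (-1.0 if i > j else 0.0)
             for j in range(n)] for i in range(n)]

print(growth_factor(wilkinson(8)))  # prints 128.0, i.e. 2^(n-1) for n = 8
```

On this matrix partial pivoting never actually swaps rows, and the last column doubles at every elimination step; this exponential growth is exactly the pathological behavior that LU_PRRP is designed to avoid.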

    A Class of Parallel Tiled Linear Algebra Algorithms for Multicore Architectures

    As multicore systems continue to gain ground in the High Performance Computing world, linear algebra algorithms have to be reformulated, or new algorithms developed, to take advantage of the architectural features of these new processors. Fine-grain parallelism becomes a major requirement and introduces the need for loose synchronization in the parallel execution of an operation. This paper presents algorithms for the Cholesky, LU, and QR factorizations in which the operations are represented as a sequence of small tasks that operate on square blocks of data. These tasks can be dynamically scheduled for execution based on the dependencies among them and on the availability of computational resources. This may result in out-of-order execution of the tasks, which completely hides the presence of intrinsically sequential tasks in the factorization. Performance comparisons are presented with the LAPACK algorithms, where parallelism can only be exploited at the level of BLAS operations, and with vendor implementations.
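The core idea, dependency-driven scheduling of tile tasks, can be sketched independently of the numerical kernels. The toy below (our own names; the task labels follow the usual tile-Cholesky kernel vocabulary POTRF/TRSM/SYRK, but the "work" is symbolic) executes any task as soon as all of its dependencies have finished, rather than in a fixed loop order.

```python
from collections import defaultdict, deque

def schedule(tasks, deps):
    """Return one valid execution order: a task runs once all its deps finished."""
    indeg = {t: 0 for t in tasks}
    out = defaultdict(list)
    for t, needs in deps.items():
        for d in needs:
            indeg[t] += 1
            out[d].append(t)
    ready = deque(t for t in tasks if indeg[t] == 0)  # tasks with no pending deps
    order = []
    while ready:
        t = ready.popleft()          # a runtime would dispatch t to a free core
        order.append(t)
        for s in out[t]:             # completing t may unblock successors
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order

# Task graph for a 2x2-tile Cholesky panel:
# POTRF0 -> TRSM10 -> SYRK1 -> POTRF1
tasks = ["POTRF0", "TRSM10", "SYRK1", "POTRF1"]
deps = {"TRSM10": ["POTRF0"], "SYRK1": ["TRSM10"], "POTRF1": ["SYRK1"]}
print(schedule(tasks, deps))  # prints ['POTRF0', 'TRSM10', 'SYRK1', 'POTRF1']
```

With more tiles the ready queue typically holds many independent tasks at once, which is where the out-of-order execution described in the abstract pays off on multicore hardware.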

    A weakly stable algorithm for general Toeplitz systems

    We show that a fast algorithm for the QR factorization of a Toeplitz or Hankel matrix A is weakly stable in the sense that R^T R is close to A^T A. Thus, when the algorithm is used to solve the semi-normal equations R^T R x = A^T b, we obtain a weakly stable method for the solution of a nonsingular Toeplitz or Hankel linear system Ax = b. The algorithm also applies to the solution of the full-rank Toeplitz or Hankel least squares problem. Comment: 17 pages. An old Technical Report with postscript added. For further details, see http://wwwmaths.anu.edu.au/~brent/pub/pub143.htm
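The semi-normal equations R^T R x = A^T b need only the triangular factor R, never Q. The hedged sketch below (our own helper names; it uses plain modified Gram-Schmidt rather than the fast Toeplitz algorithm of the paper) recovers x from R with two triangular solves on a small Toeplitz system.

```python
def qr_r(A):
    """Modified Gram-Schmidt on the columns of A; return only the factor R."""
    m, n = len(A), len(A[0])
    V = [row[:] for row in A]            # working copy; columns are orthogonalized
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        R[j][j] = sum(V[i][j] ** 2 for i in range(m)) ** 0.5
        q = [V[i][j] / R[j][j] for i in range(m)]
        for k in range(j + 1, n):
            R[j][k] = sum(q[i] * V[i][k] for i in range(m))
            for i in range(m):
                V[i][k] -= R[j][k] * q[i]
    return R

def seminormal_solve(A, R, b):
    """Solve R^T R x = A^T b: forward solve with R^T, back solve with R."""
    m, n = len(A), len(R)
    atb = [sum(A[i][j] * b[i] for i in range(m)) for j in range(n)]
    y = [0.0] * n                        # forward: R^T y = A^T b (R^T is lower)
    for i in range(n):
        y[i] = (atb[i] - sum(R[k][i] * y[k] for k in range(i))) / R[i][i]
    x = [0.0] * n                        # back: R x = y (R is upper)
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(R[i][k] * x[k] for k in range(i + 1, n))) / R[i][i]
    return x

# A small Toeplitz matrix (constant along each diagonal), with b = A * [1, 1, 1].
A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [3.0, 4.0, 3.0]
print(seminormal_solve(A, qr_r(A), b))  # recovers x close to [1, 1, 1]
```

The weak-stability result of the paper is precisely the statement that, even when the fast algorithm's R is not as accurate as a dense QR's, R^T R stays close to A^T A, so this solve remains trustworthy for well-conditioned systems.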

    Fast and accurate con-eigenvalue algorithm for optimal rational approximations

    The need to compute small con-eigenvalues and the associated con-eigenvectors of positive-definite Cauchy matrices naturally arises when constructing rational approximations with a (near) optimally small L^∞ error. Specifically, given a rational function with n poles in the unit disk, a rational approximation with m ≪ n poles in the unit disk may be obtained from the m-th con-eigenvector of an n×n Cauchy matrix, where the associated con-eigenvalue λ_m > 0 gives the approximation error in the L^∞ norm. Unfortunately, standard algorithms do not accurately compute small con-eigenvalues (and the associated con-eigenvectors) and, in particular, yield few or no correct digits for con-eigenvalues smaller than the machine roundoff. We develop a fast and accurate algorithm for computing con-eigenvalues and con-eigenvectors of positive-definite Cauchy matrices, yielding even the tiniest con-eigenvalues with high relative accuracy. The algorithm computes the m-th con-eigenvalue in O(m^2 n) operations and, since the con-eigenvalues of positive-definite Cauchy matrices decay exponentially fast, we obtain (near) optimal rational approximations in O(n (log δ^{-1})^2) operations, where δ is the approximation error in the L^∞ norm. We derive error bounds demonstrating high relative accuracy of the computed con-eigenvalues and the high accuracy of the unit con-eigenvectors. We also provide examples of using the algorithm to compute (near) optimal rational approximations of functions with singularities and sharp transitions, where approximation errors close to machine precision are obtained. Finally, we present numerical tests on random (complex-valued) Cauchy matrices to show that the algorithm computes all the con-eigenvalues and con-eigenvectors with nearly full precision.