Numerical methods and accurate computations with structured matrices
This doctoral thesis is a compendium of 11 scientific articles. Its central topic is Numerical Linear Algebra, with emphasis on two classes of structured matrices: totally positive matrices and M-matrices. For some subclasses of these matrices, it is possible to develop algorithms that numerically solve several of the most common linear algebra problems with high relative accuracy, independently of the condition number of the matrix. The key to achieving accurate computations lies in using a different parametrization that represents the special structure of the matrix, and in developing adapted algorithms that work with that parametrization.

Nonsingular totally positive matrices admit a unique factorization as a product of nonnegative bidiagonal matrices, called the bidiagonal factorization. If this representation is known with high relative accuracy, it can be used to solve certain systems of equations and to compute the inverse, the eigenvalues, and the singular values with high relative accuracy. Our contribution in this field has been obtaining, with high relative accuracy, the bidiagonal factorization of collocation matrices of generalized Laguerre polynomials, of collocation matrices of Bessel polynomials, of classes of matrices generalizing the Pascal matrix, and of matrices of q-integers. We have also studied the extension of several optimal properties of collocation matrices of normalized B-bases (which in particular are totally positive matrices). In particular, we have proved optimality properties of the collocation matrices of tensor products of normalized B-bases.

If the row sums and the off-diagonal entries of a nonsingular diagonally dominant M-matrix are known with high relative accuracy, then its inverse, determinant, and singular values can also be computed with high relative accuracy.
We have sought new methods to achieve accurate computations with new classes of M-matrices or related matrices. We have proposed a parametrization for Nekrasov Z-matrices with positive diagonal entries that can be used to compute their inverse and determinant with high relative accuracy. We have also studied the class of so-called B-matrices, which is closely related to M-matrices. We have obtained a method to compute determinants of this class with high relative accuracy, and another to compute determinants of B-Nekrasov matrices, also with high relative accuracy. Based on two scaling matrices that we have introduced, we have derived new bounds for the infinity norm of the inverse of a Nekrasov matrix, and for the error of the linear complementarity problem when its associated matrix is a Nekrasov matrix. We have also obtained new bounds for the infinity norm of the inverses of Bpi-matrices, a class that extends B-matrices, and used them to derive new error bounds for the linear complementarity problem whose associated matrix is a Bpi-matrix. Some classes of matrices have been generalized to the higher-dimensional case in order to develop a theory for tensors that extends the one known for matrices. For example, the definition of the class of B-matrices has been extended to the class of B-tensors, yielding a simple criterion to identify a new class of positive definite tensors. We have proposed an extension of the class of Bpi-matrices to Bpi-tensors, thereby defining a new class of positive definite tensors that can be identified by a simple criterion involving only computations with the entries of the tensor.
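The infinity-norm bounds mentioned here can be illustrated with the classical Varah bound for strictly diagonally dominant matrices, which the Nekrasov-matrix bounds refine. A minimal numpy sketch of the classical case only; the 3x3 matrix is an arbitrary illustrative example, not taken from the thesis:

```python
import numpy as np

# Varah's classical bound for a strictly diagonally dominant matrix A:
#   ||A^{-1}||_inf <= 1 / min_i ( |a_ii| - sum_{j != i} |a_ij| ).
# The Nekrasov-matrix bounds discussed above refine this kind of bound;
# this sketch illustrates only the classical SDD case.
A = np.array([[10., 1., 2.],
              [ 1., 8., 1.],
              [ 2., 1., 9.]])
r = np.abs(A).sum(axis=1) - np.abs(np.diag(A))   # off-diagonal row sums
varah = 1.0 / np.min(np.abs(np.diag(A)) - r)     # upper bound: 1/6 here
true_norm = np.linalg.norm(np.linalg.inv(A), ord=np.inf)
assert true_norm <= varah
```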
Finally, we have characterized the cases in which tridiagonal Toeplitz matrices are P-matrices, and we have studied when they can be represented in terms of a bidiagonal factorization that serves as a parametrization for achieving computations with high relative accuracy.
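The bidiagonal factorization underlying these results is obtained by Neville elimination, which creates zeros using adjacent rows only and records the multipliers as the parametrization. A minimal numpy sketch of the mechanism, applied to the symmetric Pascal matrix (a classical totally positive example); this illustrates the factorization itself, not the thesis's high-relative-accuracy algorithms, which evaluate the parameters by subtraction-free formulas:

```python
import numpy as np

def neville_lower(A):
    """Neville elimination: zero out the subdiagonal entries using
    adjacent-row operations; returns U and the multiplier sequence."""
    U = A.astype(float).copy()
    n = U.shape[0]
    ops = []  # (row index i, multiplier), in the order applied
    for j in range(n - 1):
        for i in range(n - 1, j, -1):
            m = U[i, j] / U[i - 1, j]
            U[i, :] -= m * U[i - 1, :]
            ops.append((i, m))
    return U, ops

def reconstruct(U, ops):
    """Undo the recorded row operations to recover A from U."""
    A = U.copy()
    for i, m in reversed(ops):
        A[i, :] += m * A[i - 1, :]
    return A

# Symmetric Pascal matrix: P[i, j] = C(i + j, i), totally positive
n = 5
P = np.zeros((n, n))
P[0, :] = P[:, 0] = 1.0
for i in range(1, n):
    for j in range(1, n):
        P[i, j] = P[i - 1, j] + P[i, j - 1]

U, ops = neville_lower(P)
# For totally positive matrices all multipliers are nonnegative, and
# the recorded bidiagonal parameters reproduce P exactly.
```

The multipliers and the diagonal of U are exactly the bidiagonal parameters; when they are computed to high relative accuracy, they serve as the accurate representation of the matrix.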
Algebraic, Block and Multiplicative Preconditioners based on Fast Tridiagonal Solves on GPUs
This thesis contributes to the field of sparse linear algebra, graph applications, and preconditioners for Krylov iterative solvers of sparse linear equation systems by providing a (block) tridiagonal solver library, a generalized sparse matrix-vector implementation, a linear forest extraction, and a multiplicative preconditioner based on tridiagonal solves. The tridiagonal library, which supports (scaled) partial pivoting, outperforms cuSPARSE's tridiagonal solver by a factor of five while fully utilizing the available GPU memory bandwidth. For performance-optimized solving of multiple right-hand sides, the explicit factorization of the tridiagonal matrix can be computed. The extraction of a weighted linear forest (a union of disjoint paths) from a general graph is used to build algebraic (block) tridiagonal preconditioners and deploys this thesis's generalized sparse matrix-vector implementation for preconditioner construction. During linear forest extraction, a new parallel bidirectional scan pattern, which can operate on doubly linked list structures, identifies the path ID and the position of each vertex. The algebraic preconditioner construction is also used to build more advanced preconditioners, containing multiple tridiagonal factors, based on generalized ILU factorizations. Additionally, other preconditioners based on tridiagonal factors are presented and evaluated against ILU and ILU incomplete sparse approximate inverse (ILU-ISAI) preconditioners for the solution of large sparse linear equation systems from the Sparse Matrix Collection. For every problem addressed in this thesis, an efficient parallel algorithm and its CUDA implementation for single-GPU systems are provided.
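The core primitive here, a tridiagonal solve, can be sketched serially with the classical Thomas algorithm; this is what the GPU library parallelizes, shown here without the (scaled) partial pivoting the thesis supports. A minimal numpy sketch for a diagonally dominant system:

```python
import numpy as np

def thomas(dl, d, du, b):
    """Serial Thomas algorithm for a tridiagonal system A x = b.
    dl, du: sub- and superdiagonal (length n-1); d: diagonal (length n).
    No pivoting: assumes e.g. diagonal dominance (the thesis library
    additionally supports scaled partial pivoting)."""
    n = len(d)
    c = np.zeros(n - 1)   # modified superdiagonal
    y = np.zeros(n)       # modified right-hand side
    c[0] = du[0] / d[0]
    y[0] = b[0] / d[0]
    for i in range(1, n):  # forward elimination
        denom = d[i] - dl[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = du[i] / denom
        y[i] = (b[i] - dl[i - 1] * y[i - 1]) / denom
    x = np.zeros(n)       # back substitution
    x[-1] = y[-1]
    for i in range(n - 2, -1, -1):
        x[i] = y[i] - c[i] * x[i + 1]
    return x

# demo on a small diagonally dominant system
n = 6
dl = np.ones(n - 1)
du = np.ones(n - 1)
d = 4.0 * np.ones(n)
b = np.arange(1.0, n + 1.0)
x = thomas(dl, d, du, b)
A = np.diag(d) + np.diag(dl, -1) + np.diag(du, 1)
```

The forward sweep is inherently sequential, which is why GPU libraries replace it with parallel schemes such as cyclic reduction.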
Approximate inverse based multigrid solution of large sparse linear systems
In this thesis we study the approximate inverse based multigrid algorithm FAPIN
for the solution of large sparse linear systems of equations.
This algorithm, which is closely related to the well-known multigrid V-cycle, has
proven successful in the numerical solution of several second order boundary value problems.
Here we are mainly concerned with its application to fourth order problems. In
particular, we demonstrate good multigrid performance with discrete problems arising
from the beam equation and the biharmonic (plate) equation. The work presented also
represents new experience with FAPIN using cubic B-spline, bicubic B-spline and piecewise
bicubic Hermite basis functions. We recast a convergence proof in matrix notation
for the nonsingular case.
Central to our development are the concepts of an approximate inverse and an
approximate pseudo-inverse of a matrix. In particular, we use least squares approximate
inverses (and related approximate pseudo-inverses) found by solving a Frobenius
matrix norm minimization problem. These approximate inverses are used in the multigrid
smoothers of our FAPIN algorithms.
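The Frobenius-norm minimization mentioned above decouples into one small least-squares problem per column of the approximate inverse M: minimizing ||AM - I||_F reduces, for each column j, to minimizing ||A m_j - e_j||_2 over a prescribed sparsity pattern. A dense numpy sketch of this idea; the function name and the toy matrix are illustrative, not from the thesis:

```python
import numpy as np

def lsq_approx_inverse(A, pattern):
    """Least-squares approximate inverse: minimise ||A M - I||_F with
    column j of M restricted to the index set pattern[j].  A dense
    sketch of the column-wise decoupling (real implementations solve
    small sparse subproblems instead)."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        J = pattern[j]                 # allowed nonzero rows of column j
        e = np.zeros(n)
        e[j] = 1.0
        sol, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
        M[J, j] = sol
    return M

A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 5.]])
full = [list(range(3))] * 3
M = lsq_approx_inverse(A, full)   # the full pattern recovers A^{-1}
```

With a restricted pattern the columns stay sparse and the least-squares residuals measure how far M is from the true inverse.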
The Foundations of Infinite-Dimensional Spectral Computations
Spectral computations in infinite dimensions are ubiquitous in the sciences. However, their many applications and theoretical studies depend on computations which are infamously difficult. This thesis, therefore, addresses the broad question,
“What is computationally possible within the field of spectral theory of separable Hilbert spaces?”
The boundaries of what computers can achieve in computational spectral theory and mathematical physics are unknown, leaving many open questions that have been unsolved for decades. This thesis provides solutions to several such long-standing problems.
To determine these boundaries, we use the Solvability Complexity Index (SCI) hierarchy, an idea which has its roots in Smale's comprehensive programme on the foundations of computational mathematics. The Smale programme led to a real-number counterpart of the Turing machine, yet left a substantial gap between theory and practice. The SCI hierarchy encompasses both these models and provides universal bounds on what is computationally possible. What makes spectral problems particularly delicate is that many of the problems can only be computed by using several limits, a phenomenon also shared in the foundations of polynomial root-finding as shown by McMullen. We develop and extend the SCI hierarchy to prove optimality of algorithms and construct a myriad of different methods for infinite-dimensional spectral problems, solving many computational spectral problems for the first time.
For arguably almost any operator of applicable interest, we solve the long-standing computational spectral problem and construct algorithms that compute spectra with error control. This is done for partial differential operators with coefficients of locally bounded total variation and also for discrete infinite matrix operators. We also show how to compute spectral measures of normal operators (when the spectrum is a subset of a regular enough Jordan curve), including spectral measures of classes of self-adjoint operators with error control and the construction of high-order rational kernel methods. We classify the problems of computing measures, measure decompositions, types of spectra (pure point, absolutely continuous, singular continuous), functional calculus, and Radon–Nikodym derivatives in the SCI hierarchy. We construct algorithms for, and classify, fractal dimensions of spectra, Lebesgue measures of spectra, spectral gaps, discrete spectra, eigenvalue multiplicities, capacity, different spectral radii and the problem of detecting algorithmic failure of previous methods (finite section method). The infinite-dimensional QR algorithm is also analysed, recovering extremal parts of spectra, corresponding eigenvectors, and invariant subspaces, with convergence rates and error control. Finally, we analyse pseudospectra of pseudoergodic operators (a generalisation of random operators) on vector-valued spaces.
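One concrete way to see why the finite section method can fail, and why detecting its failure matters, is the shift operator on l^2(Z): its spectrum is the unit circle, yet every finite truncation is a nilpotent Jordan block whose eigenvalues are all exactly zero. A small numpy illustration of this standard example (not code from the thesis):

```python
import numpy as np

# Shift operator on l^2(Z): the true spectrum is the unit circle.
# Every n x n finite section is a nilpotent Jordan block, so its
# eigenvalues are all 0 in exact arithmetic -- severe spectral
# pollution that the finite section method cannot detect by itself.
n = 8
S = np.diag(np.ones(n - 1), k=-1)   # truncated shift operator
eig = np.linalg.eigvals(S)
# the computed eigenvalues cluster near 0, far from the unit circle
```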
All of the algorithms developed in this thesis are sharp in the sense of the SCI hierarchy. In other words, we prove that they are optimal, realising the boundaries of what digital computers can achieve. They are also implementable and practical, and the majority are parallelisable. Extensive numerical examples are given throughout, demonstrating efficiency and tackling difficult problems taken from mathematics and also physical applications.
In summary, this thesis allows scientists to rigorously and efficiently compute many spectral properties for the first time. The framework provided by this thesis also encompasses a vast number of areas in computational mathematics, including the classical problem of polynomial root-finding, as well as optimisation, neural networks, PDEs and computer-assisted proofs. This framework will be explored in the future work of the author within these settings.