66 research outputs found

    Variant of the Thomas Algorithm for opposite-bordered tridiagonal systems of equations

    To solve tridiagonal systems of linear equations, the Thomas Algorithm is a much more efficient method than, for instance, Gaussian elimination. The algorithm uses a series of elementary row operations and can solve a system of n equations in O(n) operations instead of O(n^3). Many variations of the Thomas Algorithm have been developed over the years to solve very specific near-tridiagonal matrices. However, none of these methods address the situation of a system of linear equations that could easily be solved if elementary operations were applied to columns instead of rows. The present paper proposes an efficient method that allows the use of elementary column operations to solve linear systems of equations using vector multiplication techniques, such as the one proposed by Thomas. Copyright © 2008 John Wiley & Sons, Ltd. Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/75764/1/1172_ftp.pd
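
    For concreteness, a minimal NumPy sketch of the classical row-based Thomas algorithm follows (the paper's column-operation variant is not reproduced here); the function name and the diagonally dominant test system are illustrative choices.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n) operations.

    a: sub-diagonal    (length n, a[0] unused)
    b: main diagonal   (length n)
    c: super-diagonal  (length n, c[n-1] unused)
    d: right-hand side (length n)
    """
    n = len(d)
    cp = np.empty(n)  # modified super-diagonal
    dp = np.empty(n)  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward sweep: one elementary row operation per row eliminates the sub-diagonal.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a diagonally dominant 5x5 system; the residual should vanish.
n = 5
a = np.full(n, -1.0); b = np.full(n, 4.0); c = np.full(n, -1.0)
d = np.arange(1.0, n + 1)
x = thomas_solve(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(A @ x, d)
```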

    Numerical Solution of Linear and Nonlinear Eigenvalue Problems

    Given a real parameter-dependent matrix, we obtain an algorithm for computing the value of the parameter and the corresponding eigenvalue for which two eigenvalues of the matrix coalesce to form a 2-dimensional Jordan block. Our algorithms are based on extended versions of the implicit determinant method of Spence and Poulton [55]. We consider both real and complex eigenvalues, which leads to solving systems of nonlinear equations by Newton's method or the Gauss-Newton method. Our algorithms rely on good initial guesses, but if these are available, we obtain quadratic convergence. Next, we describe two quadratically convergent algorithms for computing a nearby defective matrix that are cheaper than previously known ones. The first approach extends the implicit determinant method in [55] to find parameter values for which a certain Hermitian matrix is singular subject to a constraint. This results in using Newton's method to solve a real system of three nonlinear equations. The second approach involves simply writing down all the nonlinear equations and solving a real over-determined system using the Gauss-Newton method. We only consider the case where the nearest defective matrix is real. Finally, we consider the computation of an algebraically simple complex eigenpair of a nonsymmetric matrix where the eigenvector is normalised using the natural 2-norm, which produces only a single real normalising equation. We obtain an under-determined system of nonlinear equations which is solved by the Gauss-Newton method. We show how to obtain an equivalent square linear system of equations for the computation of the desired eigenpairs. This square system is exactly what would have been obtained had we ignored the non-uniqueness and non-differentiability of the normalisation. EThOS - Electronic Theses Online Service, United Kingdom.
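
    As a hedged illustration of the last ingredient, the sketch below implements a generic Gauss-Newton iteration for an over-determined system F(x) = 0 in NumPy; the residual F, Jacobian J, tolerances, and test problem are placeholder choices, not the specific nonlinear systems derived in the thesis.

```python
import numpy as np

def gauss_newton(F, J, x0, tol=1e-12, max_iter=50):
    """Gauss-Newton for an over-determined nonlinear system F(x) = 0,
    F: R^n -> R^m with m >= n. Each step solves the linear least-squares
    problem min_s ||J(x) s + F(x)||_2; for consistent systems and good
    initial guesses the convergence is quadratic."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step, *_ = np.linalg.lstsq(J(x), -F(x), rcond=None)
        x = x + step
        if np.linalg.norm(step) < tol * (1.0 + np.linalg.norm(x)):
            break
    return x

# Example: three consistent equations in two unknowns; the solution is
# x = (sqrt(2)/2, sqrt(2)/2).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1], x[0]*x[1] - 0.5])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0], [x[1], x[0]]])
print(gauss_newton(F, J, [1.0, 0.5]))
```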

    Zerofinding of analytic functions by structured matrix methods

    We propose a fast and numerically robust algorithm, based on structured numerical linear algebra technology, for the computation of the zeros of an analytic function inside the unit circle in the complex plane. At the core of our method are two matrix algorithms: (a) a fast reduction of a certain linearization of the zerofinding problem to a matrix eigenvalue computation involving a perturbed CMV-like matrix, and (b) a fast variant of the QR eigenvalue algorithm suited to exploiting the structural properties of this latter matrix. We illustrate the reliability of the proposed method with several numerical examples.
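
    The linearization idea can be imitated with dense tools: approximate f by a polynomial interpolating at roots of unity and read off its zeros as companion-matrix eigenvalues. The sketch below is this naive O(n^3) analogue (the degree n and the cutoff are arbitrary choices), not the fast CMV-based method of the paper.

```python
import numpy as np

def zeros_in_unit_disk(f, n=64):
    """Approximate the zeros of an analytic f inside the unit circle:
    interpolate f at the n-th roots of unity, then find the roots of the
    interpolating polynomial as companion-matrix eigenvalues (np.roots)."""
    k = np.arange(n)
    nodes = np.exp(2j * np.pi * k / n)
    c = np.fft.fft(f(nodes)) / n     # coefficients c_j of p(z) = sum_j c_j z^j
    roots = np.roots(c[::-1])        # np.roots expects highest degree first
    # Spurious roots caused by tiny high-order coefficients land far outside
    # the disk, so keeping only |z| < 1 discards them as well.
    return roots[np.abs(roots) < 1 - 1e-8]

# Example: exp(z) - 2 has exactly one zero, log(2) ~ 0.6931, in the unit disk.
print(zeros_in_unit_disk(lambda z: np.exp(z) - 2.0))
```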

    Row Compression and Nested Product Decomposition of a Hierarchical Representation of a Quasiseparable Matrix

    This research introduces a row compression and nested product decomposition of an n × n hierarchical representation of a rank-structured matrix A, which extends the compression and nested product decomposition of a quasiseparable matrix. The hierarchical parameter extraction algorithm of a quasiseparable matrix is efficient, requiring only O(n log n) operations, and is proven backward stable. The row compression is comprised of a sequence of small Householder transformations that are formed from the low-rank, lower triangular, off-diagonal blocks of the hierarchical representation. The row compression forms a factorization of matrix A, where A = QC, Q is the product of the Householder transformations, and C preserves the low-rank structure in both the lower and upper triangular parts of matrix A. The nested product decomposition is accomplished by applying a sequence of orthogonal transformations to the low-rank, upper triangular, off-diagonal blocks of the compressed matrix C. Both the compression and decomposition algorithms are stable and require O(n log n) operations. At this point, the matrix-vector product and solver algorithms are the only ones fully proven to be backward stable for quasiseparable matrices. By combining the fast matrix-vector product and system solver, linear systems given in the hierarchical representation or its nested product decomposition are solved directly with linear complexity and unconditional stability. Applications in image deblurring and compression that capitalize on the concepts from the row compression and nested product decomposition algorithms will be shown.
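
    As a point of reference for the building block, here is a plain dense Householder QR in NumPy; the thesis applies reflectors of this kind only to the small low-rank off-diagonal blocks, which is how the O(n log n) cost is reached, so this O(n^3) sketch is illustrative only.

```python
import numpy as np

def householder_qr(A):
    """Dense Householder QR, A = Q C with Q orthogonal and C upper triangular,
    built from the same elementary reflectors H = I - 2 v v^T that the row
    compression applies blockwise."""
    m, n = A.shape
    C = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(min(m, n)):
        x = C[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        nv = np.linalg.norm(v)
        if nv == 0.0:
            continue  # column is already zero below the diagonal
        v /= nv
        C[k:, :] -= 2.0 * np.outer(v, v @ C[k:, :])   # apply H from the left
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)   # accumulate Q = H_1 ... H_k
    return Q, C

# Example: factor a random 6x6 matrix; verify A = QC, Q orthogonal, C triangular.
A = np.random.default_rng(0).standard_normal((6, 6))
Q, C = householder_qr(A)
assert np.allclose(Q @ C, A) and np.allclose(Q.T @ Q, np.eye(6))
assert np.allclose(np.tril(C, -1), 0.0)
```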

    Hybrid methods for solving large sparse linear systems on parallel computers

    We are interested in solving large sparse systems of linear equations in parallel. Computing the solution of such systems requires a large amount of memory and computational power. The two main ways to obtain the solution are direct and iterative approaches. The former is fast and accurate but has a large memory footprint, while the latter is memory-friendly but can be slow to reach a solution of sufficient quality. In this work we first combine both techniques to create a hybrid solver that is memory efficient while remaining fast and robust. We then improve this solver by introducing a novel pseudo-direct method that circumvents some drawbacks of the earlier approach. In the first chapters we examine row projection techniques, in particular the block Cimmino method, some of their numerical aspects, and how these affect convergence. We then analyse the acceleration of these techniques with the conjugate gradient method and how this acceleration can be improved with a block version of the conjugate gradient. Next, we look at how partitioning the linear system also affects convergence and how to improve the quality of the partitioning. We finish with the parallel implementation of the hybrid solver, its performance, and possible improvements. The last two chapters introduce an improvement to this hybrid solver: the numerical properties of the linear system are improved so that convergence occurs in a single iteration, which yields a pseudo-direct solver. We first examine the numerical properties of the resulting system, analyse the parallel solution, and see how it behaves against the hybrid solver and against a direct solver. We finally introduce possible improvements to the pseudo-direct solver. This work led to the implementation of a hybrid solver, the "ABCD solver" (Augmented Block Cimmino Distributed solver), which can work either in an iterative mode or in a pseudo-direct mode.
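
    A toy NumPy version of the unaccelerated block Cimmino iteration may help fix ideas; the dense pseudoinverses, the relaxation parameter omega, and the two-block test system are illustrative simplifications, not the distributed kernels of the actual ABCD solver.

```python
import numpy as np

def block_cimmino(A_blocks, b_blocks, x0, omega=0.9, tol=1e-10, max_iter=500):
    """Stationary block Cimmino iteration
        x <- x + omega * sum_i A_i^+ (b_i - A_i x),
    where the A_i are row blocks of A. The block projections are mutually
    independent, which is the source of the method's parallelism. Converges
    for 0 < omega < 2 / lambda_max(sum_i A_i^+ A_i)."""
    x = np.asarray(x0, dtype=float)
    pinvs = [np.linalg.pinv(Ai) for Ai in A_blocks]   # dense stand-in
    for _ in range(max_iter):
        dx = sum(P @ (bi - Ai @ x)
                 for P, Ai, bi in zip(pinvs, A_blocks, b_blocks))
        x = x + omega * dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example: a 4x4 system split into two row blocks; exact solution is all ones.
A = np.array([[4., 1, 0, 0], [1, 4, 1, 0], [0, 1, 4, 1], [0, 0, 1, 4]])
b = A @ np.ones(4)
print(block_cimmino([A[:2], A[2:]], [b[:2], b[2:]], np.zeros(4)))
```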

    The solution of large sparse linear systems on parallel computers using a hybrid implementation of the block Cimmino method

    Get PDF
    We are interested in solving large sparse systems of linear equations in parallel. Computing the solution of such systems requires a large amount of memory and computational power. The two main ways to obtain the solution are direct and iterative approaches. The former achieves this goal fast but with a large memory footprint, while the latter is memory-friendly but can be slow to converge. In this work we first try to combine both approaches to create a hybrid solver that can be memory efficient while being fast. Then we discuss a novel approach that creates a pseudo-direct solver compensating for the drawback of the earlier approach. In the first chapters we take a look at row projection techniques, especially the block Cimmino method, and examine some of their numerical aspects and how they affect the convergence. We then discuss the acceleration of convergence using conjugate gradients and show that a block version improves the convergence. Next, we see how partitioning the linear system affects the convergence and show how to improve its quality. We finish by discussing the parallel implementation of the hybrid solver, its performance, and how it can be improved. The last two chapters focus on an improvement to this hybrid solver. We try to improve the numerical properties of the linear system so that we converge in a single iteration, which results in a pseudo-direct solver. We first discuss the numerical properties of the new system, see how it works in parallel, and see how it performs versus the iterative version and versus a direct solver. We finally consider some possible improvements to the solver. This work led to the implementation of a hybrid solver, our "ABCD solver" (Augmented Block Cimmino Distributed solver), that can work either in a fully iterative mode or in a pseudo-direct mode.
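
    The conjugate-gradient acceleration mentioned above rests on the observation that the Cimmino fixed point solves a symmetric positive definite system. The sketch below illustrates this under the assumption that A has full column rank, again with dense pseudoinverses standing in for the solver's sparse augmented-system machinery.

```python
import numpy as np

def cimmino_cg(A_blocks, b_blocks, x0, tol=1e-10, max_iter=500):
    """CG-accelerated block Cimmino: the fixed point satisfies H x = xi with
    H = sum_i A_i^+ A_i (symmetric positive definite when A has full column
    rank) and xi = sum_i A_i^+ b_i, so plain conjugate gradients applies."""
    pinvs = [np.linalg.pinv(Ai) for Ai in A_blocks]       # dense stand-in
    H = lambda v: sum(P @ (Ai @ v) for P, Ai in zip(pinvs, A_blocks))
    xi = sum(P @ bi for P, bi in zip(pinvs, b_blocks))
    x = np.asarray(x0, dtype=float)
    r = xi - H(x)
    p = r.copy()
    rs = r @ r
    if np.sqrt(rs) < tol:
        return x
    for _ in range(max_iter):
        Hp = H(p)
        alpha = rs / (p @ Hp)
        x = x + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example: a small system partitioned into two row blocks, no omega tuning needed.
A = np.array([[4., 1, 0, 0], [1, 4, 1, 0], [0, 1, 4, 1], [0, 0, 1, 4]])
b = A @ np.ones(4)
print(cimmino_cg([A[:2], A[2:]], [b[:2], b[2:]], np.zeros(4)))
```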