
    On the Role of Ill-conditioning: Biharmonic Eigenvalue Problem and Multigrid Algorithms

    Very fine discretizations of differential operators often lead to large, sparse matrices A with large condition numbers. Such ill-conditioning has well-known effects on both solving linear systems and computing eigenvalues, and computing solutions with relative accuracy independent of the condition number is highly desirable. This dissertation is divided into two parts. In the first part, we discuss a method of preconditioning, developed by Ye, that allows solutions of Ax = b to be computed accurately; this, in turn, allows for accurate eigenvalue computations. We then use this method to develop discretizations that yield accurate computations of the smallest eigenvalue of the biharmonic operator on several domains. Numerical results from the various schemes are provided to demonstrate the performance of the methods. In the second part, we address the role of the condition number of A in the context of multigrid algorithms. Under various assumptions, we use rigorous Fourier analysis of two- and three-grid iteration operators to analyze round-off errors in floating-point arithmetic. To make the general results concrete, we provide detailed bounds for a particular algorithm applied to the one-dimensional Poisson equation. Numerical results are provided and compared with those obtained by the schemes discussed in the first part.
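    A minimal NumPy/SciPy sketch (not the dissertation's preconditioning scheme) of the setting the abstract describes: the condition number of a one-dimensional clamped biharmonic discretization grows like O(n^4), while shift-invert Lanczos recovers its smallest eigenvalue. The stencil and boundary treatment here are a standard textbook choice, assumed for illustration only.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def biharmonic_1d(n):
    """[1, -4, 6, -4, 1]/h^4 stencil for u'''' on (0, 1), clamped ends."""
    h = 1.0 / (n + 1)
    A = sp.diags([1.0, -4.0, 6.0, -4.0, 1.0], [-2, -1, 0, 1, 2],
                 shape=(n, n), format="lil")
    # Reflection u_{-1} = u_1 from the clamped condition u'(0) = 0 bumps
    # the first and last diagonal entries from 6 to 7.
    A[0, 0] = A[n - 1, n - 1] = 7.0
    return (A / h**4).tocsc()

for n in [32, 64, 128]:
    A = biharmonic_1d(n)
    # cond(A) grows like O(n^4): each grid refinement is roughly 16x worse.
    kappa = np.linalg.cond(A.toarray())
    # Shift-invert targets the eigenvalue nearest 0; the continuous smallest
    # eigenvalue is k^4 with cos(k)cosh(k) = 1, roughly 500.56.
    lam = spla.eigsh(A, k=1, sigma=0.0, return_eigenvectors=False)[0]
    print(f"n={n:4d}  cond(A)={kappa:9.2e}  lambda_min={lam:10.4f}")
```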

    Performance Improvements of Common Sparse Numerical Linear Algebra Computations

    Computer hardware manufacturers continue to sustain an unprecedented pace of progress in the computing speed of their products, owing partly to increased clock rates but also to ever more complex chip designs. With new processor families appearing every few years, it is increasingly difficult to achieve high performance in sparse matrix computations. This research proposes new methods for sparse matrix factorization and, in an iterative code, applies generalizations of known concepts from related disciplines. The proposed solutions and extensions are implemented in ways that deliver efficiency while retaining the ease of use of existing solutions. The implementations are thoroughly timed and analyzed using a commonly accepted set of test matrices. The tests were conducted on modern processors that have gained an appreciable level of popularity and are fairly representative of the wider range of processor types available on the market now or in the near future. The factorization technique formally introduced in the early chapters is later shown to be competitive with state-of-the-art software. Although not superior in all cases (as no single approach could be), the new factorization algorithm exhibits several promising features. In addition, a comprehensive optimization effort is applied to an iterative algorithm that stands out for its robustness; this, too, yields satisfactory performance improvements on the tested platforms. The same set of test matrices is used for both investigated techniques to enable an easy comparison, even though they are customarily treated separately in the literature. Possible extensions of the presented work are discussed, ranging from readily conceivable mergers with existing solutions to more involved schemes that depend on hard-to-predict progress in theoretical and algorithmic research.
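    The thesis's factorization technique and iterative code are its own; as a hedged illustration of the two solver families it benchmarks, the SciPy sketch below times a sparse direct factorization against a Krylov iteration (CG here, since the stand-in test matrix is symmetric positive definite) on a standard 2-D Poisson matrix.

```python
import time
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_2d(m):
    """Standard 5-point Laplacian on an m-by-m grid, a common test matrix."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
    return (sp.kron(sp.identity(m), T) + sp.kron(T, sp.identity(m))).tocsc()

A = poisson_2d(64)                   # 4096 unknowns
b = np.ones(A.shape[0])

t0 = time.perf_counter()
x_lu = spla.splu(A).solve(b)         # direct: sparse LU factorization
t_lu = time.perf_counter() - t0

t0 = time.perf_counter()
x_cg, info = spla.cg(A, b)           # iterative: conjugate gradients (A is SPD)
t_cg = time.perf_counter() - t0

print(f"sparse LU: {t_lu:.3f} s, residual {np.linalg.norm(b - A @ x_lu):.2e}")
print(f"CG       : {t_cg:.3f} s, residual {np.linalg.norm(b - A @ x_cg):.2e}, info={info}")
```

    Timing both families on the same matrix mirrors the abstract's point about using a single test set for an easy comparison.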

    Introduction to Linear Algebra: Models, Methods, and Theory

    This book develops linear algebra around matrices. Vector spaces in the abstract are not considered, only vector spaces associated with matrices. The book puts problem solving and an intuitive treatment of theory first, with a proof-oriented approach intended for a second course, much as calculus is taught. The organization is straightforward: Chapter 1 presents introductory linear models; Chapter 2 covers the basics of matrix algebra; Chapter 3 develops different ways to solve a system of equations; Chapter 4 contains applications; and Chapter 5 presents the vector-space theory associated with matrices, together with related topics such as pseudoinverses and orthogonalization. Many linear algebra textbooks start immediately with Gaussian elimination, before any matrix algebra. Here we first pose problems in Chapter 1, then develop a mathematical language for representing and recasting them in Chapter 2, and then examine ways to solve them in Chapter 3, where four different solution methods are presented with an analysis of the strengths and weaknesses of each.
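    The book's four solution methods are its own; as a hedged stand-in, this NumPy/SciPy sketch solves one small system four common ways, including the pseudoinverse mentioned for Chapter 5, to illustrate the idea of weighing several routes to the same x.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

x1 = np.linalg.solve(A, b)        # Gaussian elimination (LAPACK gesv)
x2 = lu_solve(lu_factor(A), b)    # factor once, then solve; reusable for many b
x3 = np.linalg.inv(A) @ b         # explicit inverse: simple but costlier, less stable
x4 = np.linalg.pinv(A) @ b        # pseudoinverse; also handles rank-deficient A

for name, x in [("solve", x1), ("lu", x2), ("inv", x3), ("pinv", x4)]:
    print(f"{name:5s} residual = {np.linalg.norm(A @ x - b):.2e}")
```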