90 research outputs found

    Schwarz Preconditioners for Krylov Methods: Theory and Practice

    Full text link
    Several numerical methods were produced and analyzed. The main thrust of the work relates to inexact Krylov subspace methods for the solution of linear systems of equations arising from the discretization of partial differential equations. These are iterative methods, i.e., methods in which an approximation is obtained and improved at each step; usually, a matrix-vector product is needed at each iteration. In the inexact methods, this product (or the application of a preconditioner) can be performed inexactly. Schwarz methods, based on domain decompositions, are excellent preconditioners for these systems. We contributed towards their understanding from an algebraic point of view, developed new ones, and studied their performance in the inexact setting. We also worked on combinatorial problems to help define the algebraic partition of the domains, with the needed overlap, as well as on PDE-constrained optimization using the above-mentioned inexact Krylov subspace methods.
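    As a concrete illustration of the kind of algebraic Schwarz preconditioning described above, the following is a minimal sketch (not the project's code): a one-level additive Schwarz preconditioner built from overlapping index blocks of a 1D Poisson matrix and supplied to SciPy's GMRES. The problem size, number of subdomains, and overlap width are illustrative choices.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 200
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Algebraic "subdomains": contiguous index blocks extended by `overlap` nodes.
    nsub, overlap = 8, 4
    size = n // nsub
    subdomains = []
    for i in range(nsub):
        lo, hi = max(0, i * size - overlap), min(n, (i + 1) * size + overlap)
        idx = np.arange(lo, hi)
        local = A[idx, :][:, idx].tocsc()           # overlapping diagonal block A_ii
        subdomains.append((idx, spla.splu(local)))  # pre-factor each local block

    def apply_additive_schwarz(v):
        # M^{-1} v = sum_i R_i^T A_ii^{-1} R_i v  (one-level additive Schwarz)
        v = np.asarray(v).ravel()
        z = np.zeros_like(v)
        for idx, lu in subdomains:
            z[idx] += lu.solve(v[idx])
        return z

    M = spla.LinearOperator((n, n), matvec=apply_additive_schwarz)
    x, info = spla.gmres(A, b, M=M)
    print("converged:", info == 0, "residual:", np.linalg.norm(b - A @ x))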

    Analytical Methods for Structured Matrix Computations

    Get PDF
    The design of fast algorithms is not only about achieving faster speeds but also about retaining the ability to control the error and numerical stability. This is crucial to the reliability of computed numerical solutions. This dissertation studies topics related to structured matrix computations, with an emphasis on their numerical analysis aspects and algorithms. The methods discussed here are all based on rich analytical results that are mathematically justified. In chapter 2, we present a series of comprehensive error analyses of an analytical matrix compression method, which serve as a theoretical explanation of the proxy point method. These results also provide important guidance for optimizing performance. In chapter 3, we propose a non-Hermitian eigensolver by combining HSS matrix techniques with a contour-integral based method. Moreover, probabilistic analysis enables further acceleration of the method, in addition to manipulating the HSS representation algebraically. An application of HSS matrices is discussed in chapter 4, where we design a structured preconditioner for linear systems generated by AIIM. We improve the numerical stability of the matrix-free HSS construction process and make some additional modifications tailored to this particular problem.
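    The proxy point method itself is not reproduced here, but the core operation such compression schemes accelerate, namely compressing a numerically low-rank off-diagonal kernel block of the kind an HSS construction relies on, can be sketched generically with a randomized range finder. The point sets, kernel, and rank below are illustrative assumptions, not the dissertation's setup.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two well-separated point clusters and the kernel 1/|x - y| between them:
    # the resulting off-diagonal block is numerically low rank.
    x = rng.uniform(0.0, 1.0, 300)
    y = rng.uniform(3.0, 4.0, 300)
    B = 1.0 / np.abs(x[:, None] - y[None, :])

    def randomized_low_rank(B, k, p=10):
        """Return U, V with B ~= U @ V via a randomized range finder."""
        G = rng.standard_normal((B.shape[1], k + p))  # Gaussian test matrix
        Q, _ = np.linalg.qr(B @ G)                    # orthonormal basis for range(B G)
        return Q, Q.T @ B                             # U = Q, V = Q^T B

    U, V = randomized_low_rank(B, k=15)
    print("relative error:", np.linalg.norm(B - U @ V) / np.linalg.norm(B))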

    Constraint interface preconditioning for topology optimization problems

    Get PDF
    The discretization of constrained nonlinear optimization problems arising in the field of topology optimization yields algebraic systems which are challenging to solve in practice, due to pathological ill-conditioning, strong nonlinearity and size. In this work we propose a methodology which brings together existing fast algorithms, namely, an interior-point method for the optimization problem and a novel substructuring domain decomposition method for the ensuing large-scale linear systems. The main contribution is the choice of interface preconditioner, which allows for the acceleration of the domain decomposition method, leading to performance independent of problem size.
    Comment: To be published in SIAM J. Sci. Comput.
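    The paper's actual interface preconditioner is not reproduced here; the sketch below only illustrates, under generic assumptions, the substructuring pattern it is designed to accelerate: eliminate the interior unknowns, solve the interface Schur complement system iteratively with an (here deliberately crude) interface preconditioner, and back-substitute. The matrix, block sizes, and preconditioner are illustrative.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n_i, n_g = 400, 40                        # interior / interface unknowns
    A = sp.random(n_i + n_g, n_i + n_g, density=0.01, random_state=0)
    A = (A + A.T + 20.0 * sp.eye(n_i + n_g)).tocsr()   # shift to make it symmetric positive definite
    b = np.ones(n_i + n_g)

    I, G = np.arange(n_i), np.arange(n_i, n_i + n_g)
    A_II = spla.splu(A[I, :][:, I].tocsc())
    A_IG, A_GI, A_GG = A[I, :][:, G], A[G, :][:, I], A[G, :][:, G]

    # Schur complement S = A_GG - A_GI A_II^{-1} A_IG, applied matrix-free.
    S = spla.LinearOperator((n_g, n_g),
                            matvec=lambda v: A_GG @ v - A_GI @ A_II.solve(A_IG @ v))
    # Crude interface preconditioner: an exact solve with the interface block A_GG.
    M = spla.LinearOperator((n_g, n_g), matvec=spla.splu(A_GG.tocsc()).solve)

    g = b[G] - A_GI @ A_II.solve(b[I])        # reduced interface right-hand side
    x_G, info = spla.cg(S, g, M=M)
    x_I = A_II.solve(b[I] - A_IG @ x_G)       # recover the interior unknowns
    print("interface solve converged:", info == 0)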

    Performance Modeling and Prediction for the Scalable Solution of Partial Differential Equations on Unstructured Grids

    Get PDF
    This dissertation studies the sources of poor performance in scientific computing codes based on partial differential equations (PDEs), which typically perform at a computational rate well below other scientific simulations (e.g., those with dense linear algebra or N-body kernels) on modern architectures with deep memory hierarchies. We identify that the primary factors responsible for this relatively poor performance are: insufficient available memory bandwidth, a low ratio of work to data size (good algorithmic efficiency), and the nonscaling cost of synchronization and gather/scatter operations (for fixed-problem-size scaling). This dissertation also illustrates how to reuse legacy scientific and engineering software within a library framework. Specifically, a three-dimensional unstructured grid incompressible Euler code from NASA has been parallelized with the Portable Extensible Toolkit for Scientific Computing (PETSc) library for distributed memory architectures. Using this newly instrumented code (called PETSc-FUN3D) as an example of a typical PDE solver, we demonstrate some strategies that are effective in tolerating the latencies arising from the hierarchical memory system and the network. Even on a single processor from each of the major contemporary architectural families, the PETSc-FUN3D code runs from 2.5 to 7.5 times faster than the legacy code on a medium-sized data set (with approximately 10^5 degrees of freedom). The major source of performance improvement is the increased locality in data reference patterns achieved through blocking, interlacing, and edge reordering. To explain these performance gains, we provide simple performance models based on memory bandwidth and instruction issue rates. Experimental evidence, in terms of translation lookaside buffer (TLB) and data cache miss rates, achieved memory bandwidth, and graduated floating point instructions per memory reference, is provided through accurate measurements with hardware counters. The performance models and experimental results motivate algorithmic and software practices that lead to improvements in both parallel scalability and per-node performance. We identify the bottlenecks to scalability (algorithmic as well as implementation) for a fixed-size problem when the number of processors grows to several thousands (the expected level of concurrency on terascale architectures). We also evaluate the hybrid programming model (mixed distributed/shared) from a performance standpoint.
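    The memory-bandwidth argument above can be summarized by a simple bound of the form rate = min(peak, bandwidth x arithmetic intensity). The sketch below uses illustrative machine numbers, not the dissertation's measurements, together with a rough byte count for a CSR sparse matrix-vector product.

    def attainable_gflops(stream_bw_gbs, flops_per_byte, peak_gflops):
        """Bandwidth-limited performance model: min(peak, bandwidth * intensity)."""
        return min(peak_gflops, stream_bw_gbs * flops_per_byte)

    # CSR sparse matvec: ~2 flops per nonzero, while each nonzero streams roughly
    # 12 bytes (8-byte value + 4-byte column index), ignoring vector traffic.
    intensity = 2.0 / 12.0

    # Illustrative machine: 10 GB/s sustained bandwidth, 50 Gflop/s peak.
    print(attainable_gflops(10.0, intensity, 50.0))   # ~1.7 Gflop/s: bandwidth-bound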

    An introduction to Multitrace Formulations and Associated Domain Decomposition Solvers

    Get PDF
    Multitrace formulations (MTFs) are based on a decomposition of the problem domain into subdomains, and thus domain decomposition solvers are of interest. The fully rigorous mathematical MTF can, however, be daunting for the non-specialist. We introduce in this paper MTFs on a simple model problem using concepts familiar to researchers in domain decomposition. This allows us to get a new understanding of MTFs and a natural block Jacobi iteration, for which we determine optimal relaxation parameters. We then show how iterative multitrace formulation solvers are related to a well-known domain decomposition method called the optimal Schwarz method: a method which uses Dirichlet-to-Neumann maps in the transmission condition. We finally show that the insight gained from the simple model problem leads to remarkable identities for Calderon projectors and related operators, and that the convergence results and optimal choice of the relaxation parameter we obtained are independent of the geometry, the space dimension of the problem, and the precise form of the spatial elliptic operator, as for optimal Schwarz methods. We illustrate our analysis with numerical experiments.
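    The relaxed block Jacobi iteration mentioned above has the generic form x_{k+1} = x_k + theta * D^{-1}(b - A x_k), with D the block-diagonal part of A. The sketch below applies it to an arbitrary well-conditioned 2x2 block system, not to the multitrace operator itself; the matrix, block sizes, relaxation parameter, and iteration count are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    A = np.eye(2 * n) + 0.3 * rng.standard_normal((2 * n, 2 * n)) / np.sqrt(2 * n)
    b = rng.standard_normal(2 * n)
    blocks = [slice(0, n), slice(n, 2 * n)]

    def relaxed_block_jacobi(A, b, blocks, theta=0.5, iters=200):
        """x_{k+1} = x_k + theta * D^{-1} (b - A x_k), D = block diagonal of A."""
        D_inv = [np.linalg.inv(A[s, s]) for s in blocks]
        x = np.zeros_like(b)
        for _ in range(iters):
            r = b - A @ x
            for s, Dinv in zip(blocks, D_inv):
                x[s] += theta * (Dinv @ r[s])
        return x

    x = relaxed_block_jacobi(A, b, blocks)
    print("residual:", np.linalg.norm(b - A @ x))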

    Equations and systems of nonlinear equations: from high order numerical methods to fast Eigensolvers for structured matrices and applications

    Get PDF
    A parametrized multi-step Newton method is constructed to widen the region of convergence of the classical multi-step Newton method. A second improvement is proposed in the context of multi-step Newton methods by introducing preconditioners to enhance their accuracy, without disturbing their original order of convergence or, in most cases, the related computational cost. Preconditioners are also effective when applied to the Newton method for finding roots with unknown multiplicities. Frozen-Jacobian higher-order multi-step iterative methods for the solution of systems of nonlinear equations are developed, and the related results are better than those obtained with the classical frozen-Jacobian multi-step Newton method. To benefit from the past information produced by the iterative method, we construct iterative methods with memory for solving systems of nonlinear equations. Iterative methods with memory have a higher rate of convergence than iterative methods without memory, and in terms of computational cost they are marginally superior. Numerical methods are also introduced for approximating all the eigenvalues of banded symmetric Toeplitz and preconditioned Toeplitz matrices. Our proposed numerical methods work very efficiently when the generating symbols of the considered Toeplitz matrices are bijective.
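    As a generic illustration of the frozen-Jacobian idea mentioned above (not the specific higher-order schemes of the thesis): the Jacobian is factorized once per outer cycle and the factorization is reused for m inner correction steps. The example system, step counts, and tolerance are illustrative.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def frozen_jacobian_newton(F, J, x0, m=3, outer=20, tol=1e-12):
        x = np.asarray(x0, dtype=float)
        for _ in range(outer):
            lu = lu_factor(J(x))           # factorize the Jacobian once per cycle
            for _ in range(m):             # m cheap steps reuse the factorization
                x = x - lu_solve(lu, F(x))
            if np.linalg.norm(F(x)) < tol:
                break
        return x

    # Example: x^2 + y^2 = 1, x = y, whose solution is x = y = 1/sqrt(2).
    F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]])
    J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
    print(frozen_jacobian_newton(F, J, x0=[1.0, 0.5]))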