
    Inner-outer Iterative Methods for Eigenvalue Problems - Convergence and Preconditioning

    Many methods for computing eigenvalues of a large sparse matrix involve shift-invert transformations, which require the solution of a shifted linear system at each step. This thesis deals with shift-invert iterative techniques for solving eigenvalue problems in which the arising linear systems are solved inexactly by a second, inner iterative method, leading to an inner-outer algorithm. We provide convergence results for the outer eigenvalue iteration as well as techniques for efficient inner solves. In particular, eigenvalue computations using inexact inverse iteration, the Jacobi-Davidson method without subspace expansion, and the shift-invert Arnoldi method as a subspace method are investigated in detail. A general convergence result for inexact inverse iteration applied to the non-Hermitian generalised eigenvalue problem is given under minimal assumptions. This result is obtained in two different ways: on the one hand, we use an equivalence between inexact inverse iteration applied to the generalised eigenproblem and a modified Newton's method; on the other hand, we use a splitting method which generalises the idea of orthogonal decomposition. Both approaches also yield a convergence theory for a version of the inexact Jacobi-Davidson method, exploiting equivalences between Newton's method, inverse iteration and the Jacobi-Davidson method. To improve the efficiency of the inner iterative solves we introduce a new tuning strategy which can be applied to any standard preconditioner. We give a detailed analysis of this preconditioning idea and show how the number of inner iterations, and hence the total number of iterations, can be reduced significantly by applying the tuning strategy. The analysis of the tuned preconditioner is carried out for both Hermitian and non-Hermitian eigenproblems.
We show how the preconditioner can be implemented efficiently and illustrate its performance on various numerical examples. An equivalence result between the preconditioned simplified Jacobi-Davidson method and inexact inverse iteration with the tuned preconditioner is given. Finally, we discuss the shift-invert Arnoldi method in both its standard and restarted forms. First, existing relaxation strategies for the inner iterative solves are extended to the implicitly restarted Arnoldi method. Second, we apply the idea of tuning the preconditioner to the inner iterative solve. As for inexact inverse iteration, the tuned preconditioner for the inexact Arnoldi method is shown to provide significant savings in the number of inner iterations. The theory in this thesis is supported by many numerical examples.
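The inner-outer structure described above can be sketched in a few lines. This is an illustrative simplification, not the thesis's algorithm: plain inexact inverse iteration with a hand-rolled conjugate-gradient inner solver, a fixed shift, and no tuned preconditioner; the test matrix, shift, and tolerances are invented for the example.

```python
import numpy as np

def cg(M, b, tol, maxiter=400):
    """Unpreconditioned conjugate gradients for an SPD system M x = b."""
    x = np.zeros_like(b)
    r = b - M @ x
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for _ in range(maxiter):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * bnorm:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inexact_inverse_iteration(A, sigma, x0, inner_tol=1e-2, maxit=100):
    """Eigenpair of A nearest sigma; the inner solves are deliberately inexact."""
    M = A - sigma * np.eye(A.shape[0])   # shifted matrix (SPD in this example)
    x = x0 / np.linalg.norm(x0)
    lam = x @ (A @ x)
    for _ in range(maxit):
        y = cg(M, x, tol=inner_tol)      # inexact inner solve
        x = y / np.linalg.norm(y)
        lam = x @ (A @ x)                # Rayleigh quotient estimate
        res = np.linalg.norm(A @ x - lam * x)
        inner_tol = min(inner_tol, res)  # tighten the inner tolerance as we converge
        if res < 1e-10:
            break
    return lam, x

# smallest eigenvalue of the 1-D discrete Laplacian (known: 2 - 2*cos(pi/(n+1)))
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, v = inexact_inverse_iteration(A, sigma=0.0, x0=np.ones(n))
```

Tightening the inner tolerance in step with the outer residual mirrors the general principle, analysed in the thesis, that the accuracy demanded of the inner solves governs the convergence of the outer iteration.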

    Inexact inverse iteration using Galerkin Krylov solvers


    High Resolution Numerical Methods for Coupled Non-linear Multi-physics Simulations with Applications in Reactor Analysis

    The modeling of nuclear reactors involves the solution of a multi-physics problem with widely varying time and length scales. This translates mathematically to solving a system of coupled, non-linear, and stiff partial differential equations (PDEs). Multi-physics applications possess the added complexity that most of the solution fields participate in various physics components, potentially yielding spatial and/or temporal coupling errors. This dissertation deals with the verification aspects associated with such a multi-physics code, i.e., the substantiation that the multi-physics equations are solved correctly in both time and space. Conventional paradigms employed in reactor analysis to couple the various physics components are often non-iterative and can be inconsistent in their treatment of the non-linear terms. This leads to the use of smaller time steps to maintain stability and accuracy requirements, thereby increasing the overall computational time for simulation. The inconsistencies of these weakly coupled solution methods can be overcome using tighter coupling strategies, which yield a better approximation to the coupled non-linear operator by resolving the dominant spatial and temporal scales involved in the multi-physics simulation. A multi-physics framework, KARMA (K(c)ode for Analysis of Reactor and other Multi-physics Applications), is presented. KARMA uses tight coupling strategies for various physical models based on a Matrix-free Nonlinear-Krylov (MFNK) framework in order to attain high-order spatio-temporal accuracy for all solution fields in reasonable wall-clock times, for various test problems. The framework also utilizes traditional loosely coupled methods as lower-order solvers, which serve as efficient preconditioners for the tightly coupled solution.
Since the software platform employs both lower- and higher-order coupling strategies, it can easily be used to test and evaluate different coupling strategies and numerical methods and to compare their efficiency for problems of interest. Multi-physics code verification efforts pertaining to reactor applications are described, and the associated numerical results obtained with the developed multi-physics framework are provided. The versatility of the numerical methods used here for coupled problems, and the feasibility of general non-linear solvers with appropriate physics-based preconditioners in the KARMA framework, offer significantly more efficient techniques to solve multi-physics problems in reactor analysis.
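The matrix-free Newton-Krylov idea at the core of such a framework can be illustrated on a toy nonlinear boundary-value problem. This is a hypothetical stand-in, not a KARMA reactor problem; it uses SciPy's `newton_krylov`, which approximates Jacobian-vector products by finite differences so the Jacobian of the coupled system is never assembled.

```python
import numpy as np
from scipy.optimize import newton_krylov

# toy nonlinear "physics": -u'' + u**3 = 1 on (0, 1), u(0) = u(1) = 0,
# discretized with central finite differences (illustrative stand-in only)
n = 50
h = 1.0 / (n + 1)

def residual(u):
    """Nonlinear residual of the discretized boundary-value problem."""
    r = np.empty_like(u)
    r[0]    = (2 * u[0] - u[1]) / h**2 + u[0]**3 - 1.0
    r[1:-1] = (2 * u[1:-1] - u[:-2] - u[2:]) / h**2 + u[1:-1]**3 - 1.0
    r[-1]   = (2 * u[-1] - u[-2]) / h**2 + u[-1]**3 - 1.0
    return r

# newton_krylov approximates J @ v by finite differences of the residual,
# so no Jacobian matrix is ever formed (the matrix-free ingredient of MFNK)
u = newton_krylov(residual, np.zeros(n), f_tol=1e-9)
```

In a multi-physics setting the same mechanism applies with `residual` assembling the coupled residual of all physics components, and a loosely coupled sweep can be wrapped as a preconditioner for the inner Krylov solve, as the abstract describes.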

    Efficient interior point algorithms for large scale convex optimization problems

    Interior point methods (IPMs) are among the most widely used algorithms for convex optimization problems. They are applicable to a wide range of problems, including linear, quadratic, nonlinear, conic and semidefinite programming problems, and require a polynomial number of iterations to find an accurate approximation of the primal-dual solution. The formidable convergence properties of IPMs come with a fundamental drawback: the numerical linear algebra involved becomes progressively more challenging as the IPM converges towards optimality. In particular, solving the linear systems to find the Newton directions requires most of the computational effort of an IPM. Proposed remedies to alleviate this phenomenon include regularization techniques, predictor-corrector schemes, purposely developed preconditioners, and low-rank update strategies, to mention a few. For problems of very large scale, this unpleasant characteristic of IPMs becomes increasingly problematic, since any technique used must be efficient and scalable in order to maintain acceptable computational requirements. In this thesis, we deal with convex linear and quadratic problems of large “dimension”: we use this term in a broader sense than just a synonym for “size” of the problem. The instances considered can be either problems with a large number of variables and/or constraints but with a sparse structure, or problems with a moderate number of variables and/or constraints but with a dense structure. Both types of problems require very efficient strategies, even though the corresponding difficulties arise for different reasons. The first application that we consider deals with a moderate-size quadratic problem whose quadratic term is fully dense; this problem arises from X-ray tomographic imaging reconstruction, in particular with the goal of separating the distribution of two materials present in the observed sample.
A novel non-convex regularizer is introduced for this purpose; convexity of the overall problem is maintained by a careful choice of the parameters. We derive a specialized interior point method for this problem and an appropriate preconditioner for the normal equations linear system, to be used without ever forming the fully dense matrices involved. The next major contribution is related to the issue of efficiently computing the Newton direction during IPMs. When an iterative method is applied to solve the linear system in an IPM, attention is usually focused on accelerating its convergence by designing appropriate preconditioners, while the linear solver is applied as a black box with a standard termination criterion which asks for a sufficient reduction of the residual. Such an approach often leads to unnecessary “over-solving” of the linear equations. We propose new indicators for the early termination of the inner iterations and test them on a set of large scale quadratic optimization problems. Evidence gathered from these computational experiments shows that the new technique delivers significant improvements in terms of inner (linear) iterations, and these translate into significant savings in the IPM solution time. The last application considered is discrete optimal transport (OT) problems; problems of this kind give rise to very large linear programs with highly structured matrices. Solutions of such problems are expected to be sparse, that is, only a small subset of entries in the optimal solution is expected to be nonzero. We derive an IPM for the standard OT formulation, which exploits a column-generation-like technique to force all intermediate iterates to be as sparse as possible. We prove theoretical results about the sparsity pattern of the optimal solution and we propose to mix iterative and direct linear solvers in an efficient way, to keep computational time and memory requirements as low as possible.
We compare the proposed method with two state-of-the-art solvers and show that it can compete with the best network optimization tools in terms of computational time and memory usage. We perform experiments with problems reaching more than four billion variables and demonstrate the robustness of the proposed method. We also consider the optimal transport problem on sparse graphs and present a primal-dual regularized IPM to solve it. We prove that the introduction of the regularization allows us to use sparsified versions of the normal equations to inexpensively generate inexact IPM directions. The proposed method is shown to have polynomial complexity and to outperform a very efficient network simplex implementation, for problems with up to 50 million variables.
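The Newton-system bottleneck this thesis targets can be seen in a bare-bones primal-dual interior point sketch for linear programming. This is illustrative only, none of the thesis's contributions (no regularization, predictor-corrector, column generation, or iterative inner solver) are included: each iteration solves the normal equations, which is exactly the step where, at large scale, an inexact Krylov solve with early-termination indicators would replace the direct factorization.

```python
import numpy as np

def steplen(v, dv, frac=0.995):
    """Largest step in [0, 1] keeping v + alpha*dv positive (fraction-to-boundary)."""
    neg = dv < 0
    return 1.0 if not neg.any() else min(1.0, frac * np.min(-v[neg] / dv[neg]))

def ipm_lp(A, b, c, tol=1e-8, maxit=200):
    """Primal-dual IPM for min c@x s.t. A@x = b, x >= 0 (dense toy version)."""
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)
    for _ in range(maxit):
        mu = (x @ s) / n
        rp = b - A @ x                 # primal residual
        rd = c - A.T @ y - s           # dual residual
        if max(np.linalg.norm(rp), np.linalg.norm(rd), mu) < tol:
            break
        sigma = 0.1                    # centering parameter
        rc = sigma * mu - x * s        # complementarity target: x_i s_i -> sigma*mu
        theta = x / s
        # normal equations A*Theta*A^T dy = rhs -- the solve that grows
        # ill-conditioned near optimality; replaced here by a direct solve
        rhs = rp - A @ (rc / s) + A @ (theta * rd)
        dy = np.linalg.solve(A @ (theta[:, None] * A.T), rhs)
        ds = rd - A.T @ dy
        dx = rc / s - theta * ds
        ap, ad = steplen(x, dx), steplen(s, ds)
        x, y, s = x + ap * dx, y + ad * dy, s + ad * ds
    return x, y, s

# tiny example: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum at x = (1, 0))
A = np.array([[1.0, 1.0]])
x, y, s = ipm_lp(A, np.array([1.0]), np.array([1.0, 2.0]))
```

Note how the diagonal scaling `theta = x / s` spreads over many orders of magnitude as complementarity pairs split into basic and non-basic variables; this is the source of the progressive ill-conditioning that motivates the preconditioning and early-termination work described above.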

    Analysis and massively parallel implementation of the 2-Lagrange multiplier methods and optimized Schwarz methods

    Engineering and Physical Sciences Research Council (EPSRC) grant EP/G036136/1

    Spectral and High Order Methods for Partial Differential Equations ICOSAHOM 2018

    This open access book features a selection of high-quality papers from the presentations at the International Conference on Spectral and High-Order Methods 2018, offering an overview of the depth and breadth of the activities within this important research area. The carefully reviewed papers provide a snapshot of the state of the art, while the extensive bibliography helps initiate new research directions.