
    Convergence acceleration for vector sequences and applications to computational fluid dynamics

    Some recent developments in convergence acceleration methods for vector sequences are reviewed. The methods considered are minimal polynomial extrapolation (MPE), reduced rank extrapolation (RRE), and modified minimal polynomial extrapolation (MMPE). The vector sequences to be accelerated are those obtained from the iterative solution of linear or nonlinear systems of equations. The convergence and stability properties of these methods, as well as different approaches to their numerical implementation, are discussed in detail. Based on the convergence and stability results, strategies useful in practical applications are suggested. Two applications to computational fluid mechanics involving the three-dimensional Euler equations for ducted and external flows are considered. The numerical results demonstrate the usefulness of the methods in accelerating the convergence of time-marching techniques for steady-state problems.
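    A minimal NumPy sketch of one of these methods, MPE, may help make the mechanics concrete. It is an illustration under stated assumptions, not code from the paper: the function `mpe`, the diagonal test iteration, and all names are invented here. MPE forms the first differences u_j = x_{j+1} - x_j of the iterates, solves a small least-squares problem for weights gamma_j summing to one, and returns the weighted combination of iterates as the estimate of the limit.

```python
import numpy as np

def mpe(X):
    """Minimal polynomial extrapolation (MPE), illustrative sketch.

    X : array of shape (k+2, n) holding iterates x_0, ..., x_{k+1}.
    Returns the extrapolated limit estimate s = sum_j gamma_j x_j.
    """
    U = np.diff(X, axis=0).T                # first differences, shape (n, k+1)
    # Least squares: U[:, :k] c ~= -u_k, then fix c_k = 1.
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)
    gamma = c / c.sum()                     # weights normalised to sum to 1
    return gamma @ X[:-1]                   # extrapolated solution

# Toy linearly convergent iteration x_{j+1} = A x_j + b (invented test case).
rng = np.random.default_rng(0)
n = 50
A = 0.9 * np.diag(rng.uniform(0.1, 1.0, n))     # spectral radius < 1
b = rng.standard_normal(n)
x_true = np.linalg.solve(np.eye(n) - A, b)

X = [np.zeros(n)]
for _ in range(10):
    X.append(A @ X[-1] + b)
X = np.asarray(X)

print("plain iterate error:", np.linalg.norm(X[-1] - x_true))
print("MPE estimate error :", np.linalg.norm(mpe(X) - x_true))
```

    RRE and MMPE differ mainly in how the weights are computed from the difference vectors; the stability considerations discussed in the paper govern how many iterates can safely be combined.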

    Hybrid direct and iterative solvers for sparse indefinite and overdetermined systems on future exascale architectures

    In scientific computing, the numerical simulation of systems is crucial to gaining a deep understanding of the physics underlying real-world applications. The models used in simulation are often based on partial differential equations (PDEs) which, after fine discretisation, give rise to huge sparse systems of equations. Historically, two classes of methods were designed for the solution of such systems: direct methods, robust but expensive in both computation and memory; and iterative methods, cheap but with very problem-dependent convergence properties. In the context of high-performance computing, hybrid direct-iterative methods were then introduced in order to combine the advantages of both, while efficiently using the increasingly large and fast supercomputing facilities. In this thesis, we focus on the latter type of methods along two complementary research axes.

    In the first chapter, we detail the mechanisms behind the efficient implementation of multigrid methods, which use several levels of increasingly refined grids to solve linear systems with a combination of fine-grid smoothing and coarse-grid corrections. The efficient parallel implementation of such a scheme is a difficult task. We focus on the solution of the problem on the coarse grid, whose scalability is often observed to be the limiting factor at very large scales. We propose an agglomeration technique that gathers the data of the coarse-grid problem on a subset of the computing resources in order to minimise the execution time of a direct solver. Combined with a relaxation of the solution accuracy, we demonstrate an increased overall scalability of the multigrid scheme when using our approach compared to classical iterative methods, when the problem is numerically difficult. At extreme scale, this study is carried out in the HHG (Hierarchical Hybrid Grids) framework for the solution of a Stokes problem with jumping coefficients, inspired by Earth's mantle convection simulation. The direct solver used on the coarse grid is MUMPS, combined with block low-rank approximation and single-precision arithmetic.

    In the following chapters, we study hybrid methods derived from the classical row-projection method block Cimmino, interpreted as domain decomposition methods. These methods are based on the partitioning of the matrix into blocks of rows. Due to its known slow convergence, the original iterative scheme is accelerated with a stabilised block version of the conjugate gradient algorithm. While an optimal choice of block size improves the efficiency of this approach, the convergence remains problem dependent. An alternative solution is then introduced which enforces convergence in one iteration by embedding the linear system into a carefully augmented space. These two approaches are extended to compute the minimum-norm solution of indefinite systems and the solution of least-squares problems; the latter require a partitioning into blocks of columns. We show how to improve the numerical properties of the iterative and pseudo-direct methods with scaling, partitioning and better augmentation methods. Both methods are implemented in the parallel solver ABCD-Solver (Augmented Block Cimmino Distributed solver), whose parallelisation we improve through a combination of load balancing and communication-minimising techniques.

    Finally, for the solution of discretised PDE problems, we propose a new approach which augments the linear system using a coarse representation of the space. The size of the augmentation is controlled by the choice of a more or less refined mesh. We obtain an iterative method with fast linear convergence, demonstrated on Helmholtz and convection-diffusion problems. The central point of the approach is the iterative construction and solution of a Schur complement.
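    The basic (unaccelerated) block Cimmino iteration at the heart of these chapters can be sketched in a few lines of NumPy. This is an illustration under stated assumptions, not the ABCD-Solver implementation: the function name `block_cimmino`, the damping omega = 1/p for p blocks, and the random test system are all invented here. Each sweep adds, for every row block A_i, the minimum-norm correction A_i^+(b_i - A_i x).

```python
import numpy as np

def block_cimmino(A, b, blocks, omega=None, iters=2000):
    """Basic block Cimmino row-projection iteration, illustrative sketch.

    blocks : list of row-index arrays partitioning the rows of A.
    omega  : damping; 1/p for p blocks keeps the sum of the p orthogonal
             projections a convergent update (its eigenvalues lie in (0, 1]).
    """
    if omega is None:
        omega = 1.0 / len(blocks)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        step = np.zeros_like(x)
        for rows in blocks:
            Ai, bi = A[rows], b[rows]
            # Minimum-norm correction A_i^+ (b_i - A_i x) via least squares.
            step += np.linalg.lstsq(Ai, bi - Ai @ x, rcond=None)[0]
        x = x + omega * step
    return x

# Invented consistent test system, rows split into three blocks.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40))
x_true = rng.standard_normal(40)
b = A @ x_true
blocks = np.array_split(np.arange(60), 3)
print("error:", np.linalg.norm(block_cimmino(A, b, blocks) - x_true))
```

    The slow convergence of such sweeps is exactly why the thesis accelerates this fixed-point map with a stabilised block conjugate gradient, and why the pseudo-direct variant instead augments the system so that a single iteration suffices.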

    GMRES-Accelerated ADMM for Quadratic Objectives

    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined within a Krylov subspace, and the Generalized Minimal RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a $\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations for an order-of-magnitude reduction in iterations, despite a worst-case bound of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like $O(1/k^{2})$, instead of $O(1/k)$, where $k$ is the iteration index.
    Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT).
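    The structural fact the paper exploits, that an affine update z_{k+1} = T(z_k) = M z_k + c has its fixed point characterized by (I - M) z = c, can be sketched generically. The code below is not the paper's SeDuMi embedding: `accelerate_fixed_point`, the toy contraction T, and the use of SciPy's `gmres` (with the `rtol` keyword of SciPy >= 1.12) are assumptions for illustration. GMRES only ever evaluates the map T, which in the paper's setting would be one linear ADMM step.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def accelerate_fixed_point(T, z0, tol=1e-10):
    """Find the fixed point of an affine map T(z) = M z + c with GMRES.

    Solves (I - M) z = c using only evaluations of T: since T is affine,
    M v = T(v) - T(0), hence (I - M) v = v - (T(v) - T(0)).
    """
    n = z0.size
    c = T(np.zeros(n))                       # c = T(0)
    op = LinearOperator((n, n), matvec=lambda v: v - (T(v) - c))
    # restart=n gives full (unrestarted) GMRES for this small example.
    z, info = gmres(op, c, x0=z0, rtol=tol, restart=n)
    assert info == 0, "GMRES did not converge"
    return z

# Invented affine contraction standing in for a linear ADMM update.
rng = np.random.default_rng(2)
n = 100
M = rng.standard_normal((n, n))
M *= 0.5 / np.max(np.abs(np.linalg.eigvals(M)))  # spectral radius < 1

def T(z):
    return M @ z + rng_c

rng_c = rng.standard_normal(n)
z_star = accelerate_fixed_point(T, np.zeros(n))
print("fixed-point residual:", np.linalg.norm(T(z_star) - z_star))  # ~ tol
```

    Because the plain iteration explores exactly the Krylov subspace of (I - M) generated from c, GMRES is optimal over that same subspace, which is the sense in which the abstract calls it optimal in its ability to accelerate convergence.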