
    Preconditioning complex symmetric linear systems

    A new polynomial preconditioner for complex symmetric linear systems, based on the Hermitian and skew-Hermitian splitting (HSS), is presented. It applies to the Conjugate Orthogonal Conjugate Gradient (COCG) and Conjugate Orthogonal Conjugate Residual (COCR) iterative solvers and does not require any estimate of the spectrum of the coefficient matrix. An upper bound on the condition number of the preconditioned linear system is provided. Moreover, to reduce the computational cost, an inexact variant based on incomplete Cholesky decomposition or orthogonal polynomials is proposed. Numerical results show that the present preconditioner and its inexact variant are efficient and robust solvers for this class of linear systems. A stability analysis of the method completes the description of the preconditioner.
    Comment: 26 pages, 4 figures, 4 tables
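    The preconditioner above is built on the HSS splitting of a complex symmetric matrix A = W + iT into its Hermitian part H and skew-Hermitian part S. As a point of reference, the following minimal NumPy sketch implements the classical two-step HSS iteration (not the paper's polynomial preconditioner); the shift parameter alpha and the test problem are illustrative assumptions.

```python
import numpy as np

def hss_split(A):
    """Hermitian / skew-Hermitian splitting A = H + S."""
    H = 0.5 * (A + A.conj().T)   # Hermitian part
    S = 0.5 * (A - A.conj().T)   # skew-Hermitian part
    return H, S

def hss_iteration(A, b, alpha=1.0, tol=1e-8, maxit=500):
    """Classical two-step HSS iteration for Ax = b (dense, illustrative only)."""
    n = A.shape[0]
    H, S = hss_split(A)
    I = np.eye(n, dtype=A.dtype)
    x = np.zeros(n, dtype=A.dtype)
    for _ in range(maxit):
        # first half-step: shifted Hermitian solve
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # second half-step: shifted skew-Hermitian solve
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x

# small complex symmetric test problem: A = W + iT with W, T real symmetric
rng = np.random.default_rng(0)
W = rng.standard_normal((50, 50)); W = W + W.T + 50 * np.eye(50)
T = rng.standard_normal((50, 50)); T = T + T.T
A = W + 1j * T
b = rng.standard_normal(50) + 1j * rng.standard_normal(50)
x = hss_iteration(A, b)
print(np.linalg.norm(A @ x - b))
```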

    Instabilities and Patterns in Coupled Reaction-Diffusion Layers

    We study instabilities and pattern formation in reaction-diffusion layers that are diffusively coupled. For two-layer systems of identical two-component reactions, we analyze the stability of homogeneous steady states by exploiting the block symmetric structure of the linear problem. There are eight possible primary bifurcation scenarios, including a Turing-Turing bifurcation that involves two disparate length scales whose ratio may be tuned via the inter-layer coupling. For systems of n-component layers and non-identical layers, the linear problem's block form allows approximate decomposition into lower-dimensional linear problems if the coupling is sufficiently weak. As an example, we apply these results to a two-layer Brusselator system. The competing length scales engineered within the linear problem are readily apparent in numerical simulations of the full system. Selecting a \sqrt{2}:1 length scale ratio produces an unusual steady square pattern.
    Comment: 13 pages, 5 figures, accepted for publication in Phys. Rev.
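    For two identical diffusively coupled layers, the linearized problem at wavenumber k has the block form [[J - k^2 D, C], [C, J - k^2 D]], which decouples exactly into the two smaller blocks (J - k^2 D) + C and (J - k^2 D) - C. The sketch below computes the resulting dispersion relation for a two-layer Brusselator; the parameter values and the assumption of equal coupling strength c for both species are illustrative, not taken from the paper.

```python
import numpy as np

# Brusselator kinetics u' = a - (b+1)u + u^2 v, v' = b u - u^2 v,
# steady state (u*, v*) = (a, b/a); parameter values are illustrative assumptions.
a, b = 2.0, 4.5
Du, Dv = 1.0, 8.0          # intra-layer diffusion coefficients
c = 0.1                    # inter-layer diffusive coupling (assumed equal for u and v)

J = np.array([[b - 1.0, a**2],
              [-b,      -a**2]])       # kinetics Jacobian at the steady state
D = np.diag([Du, Dv])
C = c * np.eye(2)                      # inter-layer coupling block

def growth_rate(k):
    """Largest real part over the two decoupled 2x2 blocks (J - k^2 D) +/- C."""
    rates = []
    for sign in (+1.0, -1.0):
        M = J - k**2 * D + sign * C
        rates.append(np.linalg.eigvals(M).real.max())
    return max(rates)

ks = np.linspace(0.0, 2.0, 400)
sigma = np.array([growth_rate(k) for k in ks])
print("most unstable wavenumber:", ks[sigma.argmax()], "growth rate:", sigma.max())
```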

    On the construction of preconditioners by subspace decomposition

    A preconditioner for the iterative solution of symmetric linear systems which arise in Galerkin's method is obtained by decomposition of the space into orthogonal subspaces. The preconditioner corresponds to a particular bilinear form and is actually a 2-level multigrid method. Applications are presented to the numerical solution of elliptic problems by the finite-element method and to the numerical solution of first-kind Fredholm integral equations by Tikhonov regularization.
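    The construction above pairs a coarse-subspace solve with a fine-level correction. The following sketch shows a generic additive two-level preconditioner of that flavour (a Jacobi smoother plus a Galerkin coarse solve) applied to a 1D model stiffness matrix; it illustrates the idea, not the paper's specific bilinear-form construction, and the aggregation-based restriction R is an assumption.

```python
import numpy as np

def two_level_preconditioner(A, R):
    """Additive two-level preconditioner: M^{-1} r = D^{-1} r + R^T (R A R^T)^{-1} R r.

    A : SPD system matrix, R : restriction onto a coarse subspace (rows span it).
    Illustrative additive variant only."""
    D_inv = 1.0 / np.diag(A)                 # Jacobi smoother on the fine level
    A_c = R @ A @ R.T                        # Galerkin coarse operator
    def apply(r):
        return D_inv * r + R.T @ np.linalg.solve(A_c, R @ r)
    return apply

# 1D Laplacian as a model Galerkin (finite-element) stiffness matrix
n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
# piecewise-constant aggregation onto n//4 coarse unknowns
R = np.kron(np.eye(n // 4), np.ones((1, 4)))
M = two_level_preconditioner(A, R)

# rough conditioning check of the preconditioned operator (illustrative)
Minv = np.column_stack([M(e) for e in np.eye(n)])
w = np.linalg.eigvals(Minv @ A).real
print("condition estimate:", w.max() / w.min())
```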

    Symbolic preconditioning techniques for linear systems of partial differential equations

    Some algorithmic aspects of simulations based on systems of PDEs can be better clarified by means of symbolic computation techniques. This is very important since numerical simulations rely heavily on solving systems of PDEs. For the large-scale problems we deal with in today's standard applications, it is necessary to rely on iterative Krylov methods that are scalable (i.e., weakly dependent on the number of degrees of freedom and the number of subdomains) and have limited memory requirements. They are preconditioned by domain decomposition methods, incomplete factorizations and multigrid preconditioners. These techniques are well understood and efficient for scalar symmetric equations (e.g., Laplacian, biLaplacian) and to some extent for non-symmetric equations (e.g., convection-diffusion). But they perform poorly and lack robustness when used for symmetric systems of PDEs, and even more so for non-symmetric complex systems (fluid mechanics, porous media ...). As a general rule, the study of iterative solvers for systems of PDEs, as opposed to scalar PDEs, is an underdeveloped subject. We aim at building new robust and efficient solvers, such as domain decomposition methods and preconditioners, for some well-known linear systems of PDEs.

    An efficient null space inexact Newton method for hydraulic simulation of water distribution networks

    Null space Newton algorithms are efficient in solving the nonlinear equations arising in hydraulic analysis of water distribution networks. In this article, we propose and evaluate an inexact Newton method that relies on partial updates of the network pipes' frictional headloss computations to solve the linear systems more efficiently and with numerical reliability. The update set parameters are studied to propose appropriate values. Different null space basis generation schemes are analysed to choose methods that yield sparse and well-conditioned null space bases and hence a smaller update set. The Newton steps are computed in the null space by solving sparse, symmetric positive definite systems with sparse Cholesky factorizations. Because the null space system matrices have constant structure, a single symbolic factorization in the Cholesky decomposition is reused across solves, reducing the computational cost of the linear solves. The algorithms and analyses are validated using medium to large-scale water network models.
    Comment: 15 pages, 9 figures. Preprint extension of Abraham and Stoianov, 2015 (https://dx.doi.org/10.1061/(ASCE)HY.1943-7900.0001089), September 2015. Includes extended exposition, additional case studies, and new simulations and analysis.
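    A single null-space Newton step reduces to a symmetric positive definite system on the null space of the linear (mass-balance) constraint matrix. The sketch below shows this reduction for a generic constrained system F(q) + A h = b, A^T q = d with a diagonal Jacobian G = dF/dq; it is dense and schematic, whereas the paper works with sparse Cholesky factorizations and partial Jacobian updates, and all names here are illustrative.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, null_space

def null_space_newton(F, dF, A, b, d, q0, tol=1e-10, maxit=50):
    """Schematic null-space Newton iteration for
         F(q) + A @ h = b,   A.T @ q = d,
    where dF(q) returns the diagonal of the (diagonal) Jacobian of F.
    The head variables h drop out of the reduced step because A.T @ Z = 0."""
    Z = null_space(A.T)                          # basis of {z : A.T z = 0}
    x0, *_ = np.linalg.lstsq(A.T, d, rcond=None) # particular solution of A.T q = d
    q = x0 + Z @ (Z.T @ (q0 - x0))               # project the start onto the constraint
    for _ in range(maxit):
        G = np.diag(dF(q))                       # headloss-gradient Jacobian (diagonal)
        r = Z.T @ (b - F(q))                     # reduced residual
        K = Z.T @ G @ Z                          # reduced SPD system
        c, low = cho_factor(K)
        dv = cho_solve((c, low), r)
        q = q + Z @ dv
        if np.linalg.norm(dv) < tol:
            break
    return q
```

    Because G is diagonal, Z^T G Z keeps the same sparsity pattern at every iteration, which is what allows a single symbolic Cholesky factorization to be reused across Newton steps in the sparse setting described by the abstract.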

    Nonoverlapping domain decomposition preconditioners for discontinuous Galerkin approximations of Hamilton--Jacobi--Bellman equations

    We analyse a class of nonoverlapping domain decomposition preconditioners for nonsymmetric linear systems arising from discontinuous Galerkin finite element approximation of fully nonlinear Hamilton--Jacobi--Bellman (HJB) partial differential equations. These nonsymmetric linear systems are uniformly bounded and coercive with respect to a related symmetric bilinear form that is associated with a matrix A. In this work, we construct a nonoverlapping domain decomposition preconditioner P, based on A, and we then show that the effectiveness of the preconditioner for solving the nonsymmetric problems can be studied in terms of the condition number \kappa(P^{-1}A). In particular, we establish the bound \kappa(P^{-1}A) \lesssim 1 + p^6 H^3 / (q^3 h^3), where H and h are respectively the coarse and fine mesh sizes, and q and p are respectively the coarse and fine mesh polynomial degrees. This represents the first such result for this class of methods that explicitly accounts for the dependence of the condition number on q; our analysis is founded upon an original optimal order approximation result between fine and coarse discontinuous finite element spaces. Numerical experiments demonstrate the sharpness of this bound. Although the preconditioners are not robust with respect to the polynomial degree, our bounds quantify the effect of the coarse and fine space polynomial degrees. Furthermore, we show computationally that these methods are effective in practical applications to nonsymmetric, fully nonlinear HJB equations under h-refinement for moderate polynomial degrees.
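    The quantity \kappa(P^{-1}A) that drives the analysis can be estimated numerically from the generalized eigenvalues of A x = \lambda P x. The sketch below does this for a simple nonoverlapping block-Jacobi preconditioner on a 1D model Laplacian; it only illustrates how the condition number of the preconditioned system is measured, and it does not reproduce the DG/HJB discretization or the coarse space of the paper.

```python
import numpy as np
from scipy.linalg import eigh

# 1D Laplacian as a stand-in for the symmetric matrix A of the abstract
n = 128
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# nonoverlapping block-Jacobi preconditioner: one diagonal block per "subdomain"
n_sub = 8
size = n // n_sub
P = np.zeros_like(A)
for s in range(n_sub):
    i, j = s * size, (s + 1) * size
    P[i:j, i:j] = A[i:j, i:j]

# effectiveness is measured through the generalized eigenvalues of A x = lambda P x
lam = eigh(A, P, eigvals_only=True)
print("kappa(P^{-1} A) ~", lam.max() / lam.min())
```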

    Communication-optimal Parallel and Sequential Cholesky Decomposition

    Numerical algorithms have two kinds of costs: arithmetic and communication, by which we mean either moving data between levels of a memory hierarchy (in the sequential case) or over a network connecting processors (in the parallel case). Communication costs often dominate arithmetic costs, so it is of interest to design algorithms minimizing communication. In this paper we first extend known lower bounds on the communication cost (both for bandwidth and for latency) of conventional (O(n^3)) matrix multiplication to Cholesky factorization, which is used for solving dense symmetric positive definite linear systems. Second, we compare the costs of various Cholesky decomposition implementations to these lower bounds and identify the algorithms and data structures that attain them. In the sequential case, we consider both the two-level and hierarchical memory models. Combined with prior results in [13, 14, 15], this gives a set of communication-optimal algorithms for O(n^3) implementations of the three basic factorizations of dense linear algebra: LU with pivoting, QR and Cholesky. But it goes beyond this prior work on sequential LU by optimizing communication for any number of levels of memory hierarchy.
    Comment: 29 pages, 2 tables, 6 figures
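    The implementations studied above are blocked (tiled) algorithms: working on b x b tiles lets each tile stay in fast memory while the dominant trailing-matrix update runs as a BLAS-3 operation. The sketch below mirrors the arithmetic structure of a right-looking blocked Cholesky in NumPy/SciPy; the block size and test matrix are illustrative, and the sketch says nothing about the data layouts the paper uses to attain the communication lower bounds.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def blocked_cholesky(A, b=64):
    """Right-looking blocked Cholesky (lower triangular) of an SPD matrix A."""
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, b):
        kb = min(b, n - k)
        # factor the diagonal tile
        A[k:k+kb, k:k+kb] = cholesky(A[k:k+kb, k:k+kb], lower=True)
        L_kk = A[k:k+kb, k:k+kb]
        if k + kb < n:
            # panel update: triangular solves for the tiles below the diagonal
            A[k+kb:, k:k+kb] = solve_triangular(
                L_kk, A[k+kb:, k:k+kb].T, lower=True).T
            # trailing submatrix update (the dominant, BLAS-3 step)
            A[k+kb:, k+kb:] -= A[k+kb:, k:k+kb] @ A[k+kb:, k:k+kb].T
    return np.tril(A)

rng = np.random.default_rng(1)
G = rng.standard_normal((300, 300))
A = G @ G.T + 300 * np.eye(300)       # SPD test matrix
L = blocked_cholesky(A)
print(np.linalg.norm(L @ L.T - A) / np.linalg.norm(A))
```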