
    A domain decomposing parallel sparse linear system solver

    The solution of large sparse linear systems is often the most time-consuming part of many science and engineering applications. Computational fluid dynamics, circuit simulation, power network analysis, and material science are just a few examples of the application areas in which large sparse linear systems need to be solved effectively. In this paper we introduce a new parallel hybrid sparse linear system solver for distributed memory architectures that contains both direct and iterative components. We show that by using our solver one can alleviate the drawbacks of direct and iterative solvers, achieving better scalability than with direct solvers and more robustness than with classical preconditioned iterative solvers. Comparisons to well-known direct and iterative solvers on a parallel architecture are provided. Comment: to appear in the Journal of Computational and Applied Mathematics.
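
    A minimal sketch of the hybrid idea described above, not the solver from the paper: the diagonal blocks of a sparse matrix are factorized with a direct method (SuperLU) and the resulting block-Jacobi preconditioner is applied inside a Krylov iteration (GMRES). The matrix, block partition, and sizes are placeholders chosen for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nblocks = 400, 4
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Direct component: factorize each diagonal block once.
edges = np.linspace(0, n, nblocks + 1, dtype=int)
blocks = list(zip(edges[:-1], edges[1:]))
factors = [spla.splu(A[i:j, i:j].tocsc()) for i, j in blocks]

def apply_prec(r):
    """Block-Jacobi preconditioner: direct solves on the subdomain blocks."""
    z = np.empty_like(r)
    for (i, j), lu in zip(blocks, factors):
        z[i:j] = lu.solve(r[i:j])
    return z

M = spla.LinearOperator(A.shape, matvec=apply_prec, dtype=float)

# Iterative component: preconditioned GMRES over the global system.
x, info = spla.gmres(A, b, M=M)
print("gmres info:", info, "residual:", np.linalg.norm(A @ x - b))
```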

    On preconditioning strategies for geotechnics

    Iterative solvers are of increasing interest in geomechanics with the move towards 3D finite element modelling. Potentially, these methods can lead to reduced computational complexity as, unlike direct methods, they do not require the full system matrix to be assembled. In general, however, iterative solvers have not been widely adopted in geomechanics due to problems with convergence. This paper reviews the background to iterative methods for elastic and elasto-plastic material models. In some cases, existing numerical methods can be taken from research in the mathematics community; for other systems, further work is needed. The paper provides demonstrations of the capabilities of some of these strategies.
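
    The point above about not needing the assembled system matrix can be illustrated with a matrix-free preconditioned conjugate gradient solve. The sketch below is not taken from the paper: a 1D Laplacian stencil stands in for an element-by-element stiffness product, and a simple Jacobi preconditioner stands in for the strategies the paper reviews.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 1000
diag = 2.0 * np.ones(n)  # assumed diagonal entries (e.g. lumped element contributions)

def matvec(u):
    """Action of the unassembled SPD operator: tridiagonal (-1, 2, -1) stencil."""
    v = 2.0 * u
    v[:-1] -= u[1:]
    v[1:] -= u[:-1]
    return v

A = LinearOperator((n, n), matvec=matvec, dtype=float)
M = LinearOperator((n, n), matvec=lambda r: r / diag, dtype=float)  # Jacobi preconditioner

b = np.ones(n)
x, info = cg(A, b, M=M)
print("cg info:", info, "residual:", np.linalg.norm(matvec(x) - b))
```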

    Probabilistic Linear Solvers: A Unifying View

    Several recent works have developed a new, probabilistic interpretation of numerical algorithms for solving linear systems, in which the solution is inferred in a Bayesian framework, either directly or by inferring the unknown action of the matrix inverse. These approaches have typically focused on replicating the behavior of the conjugate gradient method as a prototypical iterative method. In this work, surprisingly general conditions for the equivalence of these disparate methods are presented. We also describe connections between probabilistic linear solvers and projection methods for linear systems, providing a probabilistic interpretation of a far more general class of iterative methods. In particular, this provides such an interpretation of the generalised minimum residual method. A probabilistic view of preconditioning is also introduced. These developments unify the literature on probabilistic linear solvers and provide foundational connections to the literature on iterative solvers for linear systems.
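
    A toy sketch of the probabilistic viewpoint, not the formulation from the paper: place a Gaussian prior on the solution of Ax = b, observe the projections S^T A x = S^T b along a few search directions S, and condition the prior on those observations. The posterior mean is then a projection-method iterate and the posterior covariance quantifies the remaining uncertainty. The identity prior covariance and random search directions below are arbitrary choices; particular choices reproduce conjugate gradient behavior, as the abstract notes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)            # SPD test matrix
b = rng.standard_normal(n)

x0 = np.zeros(n)                       # prior mean
Sigma0 = np.eye(n)                     # prior covariance (an arbitrary choice)
S = rng.standard_normal((n, m))        # search directions (random, not Krylov)

# Condition the Gaussian prior on the linear observations S^T A x = S^T b.
H = S.T @ A                            # observation operator
y = S.T @ b                            # observed data, available without solving the system
G = H @ Sigma0 @ H.T
gain = Sigma0 @ H.T @ np.linalg.inv(G)
x_post = x0 + gain @ (y - H @ x0)          # posterior mean: a projection-method iterate
Sigma_post = Sigma0 - gain @ H @ Sigma0    # posterior covariance: remaining uncertainty

print("residual norm:", np.linalg.norm(b - A @ x_post))
print("posterior covariance trace:", np.trace(Sigma_post))
```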

    Nonlinear Preconditioning: How to use a Nonlinear Schwarz Method to Precondition Newton's Method

    For linear problems, domain decomposition methods can be used directly as iterative solvers, but also as preconditioners for Krylov methods. In practice, Krylov acceleration is almost always used, since the Krylov method finds a much better residual polynomial than the stationary iteration, and thus converges much faster. We show in this paper that, for non-linear problems as well, domain decomposition methods can either be used directly as iterative solvers or as preconditioners for Newton's method. For the concrete case of the parallel Schwarz method, we show that we obtain a preconditioner we call RASPEN (Restricted Additive Schwarz Preconditioned Exact Newton), which is similar to ASPIN (Additive Schwarz Preconditioned Inexact Newton), but with all components directly defined by the iterative method. This has the advantage that RASPEN already converges when used as an iterative solver, in contrast to ASPIN, and we thus get a substantially better preconditioner for Newton's method. The iterative construction also allows us to naturally define a coarse correction using the multigrid full approximation scheme, which leads to a convergent two-level non-linear iterative domain decomposition method and a two-level RASPEN non-linear preconditioner. We illustrate our findings with numerical results on the Forchheimer equation and a non-linear diffusion problem.
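
    A toy sketch of the nonlinear-preconditioning idea, not RASPEN itself: take a convergent fixed-point map x -> G(x) for a nonlinear system and apply Newton's method to the preconditioned equation F(x) = x - G(x) = 0, here with a finite-difference Jacobian. The map G below is a damped Picard sweep for a small nonlinear diffusion-type problem; in RASPEN, G would be one parallel restricted additive Schwarz iteration.

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
f = np.ones(n)

def residual(u):
    """Discretization of -u'' + u**3 = 1 on (0, 1) with homogeneous Dirichlet ends."""
    r = 2.0 * u
    r[:-1] -= u[1:]
    r[1:] -= u[:-1]
    return r / h**2 + u**3 - f

def G(u, damping=0.1):
    """Stand-in fixed-point map: one damped residual-correction sweep."""
    return u - damping * h**2 * residual(u)

def F(u):
    """Preconditioned nonlinear system to which Newton's method is applied."""
    return u - G(u)

u = np.zeros(n)
for it in range(20):
    Fu = F(u)
    if np.linalg.norm(Fu) < 1e-10:
        break
    # Jacobian of F by forward differences (dense; fine for a toy problem).
    eps = 1e-7
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (F(u + e) - Fu) / eps
    u = u - np.linalg.solve(J, Fu)

print("Newton steps:", it, "nonlinear residual:", np.linalg.norm(residual(u)))
```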

    Fast iterative solvers for convection-diffusion control problems

    In this manuscript, we describe effective solvers for the optimal control of stabilized convection-diffusion problems. We employ the local projection stabilization, which we show to give the same matrix system whether the discretize-then-optimize or optimize-then-discretize approach for this problem is used. We then derive two effective preconditioners for this problem, the first to be used with MINRES and the second to be used with the Bramble-Pasciak Conjugate Gradient method. The key components of both preconditioners are an accurate mass matrix approximation, a good approximation of the Schur complement, and an appropriate multigrid process to enact this latter approximation. We present numerical results to demonstrate that these preconditioners result in convergence in a small number of iterations, which is robust with respect to the mesh size h and the regularization parameter β, for a range of problems.
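
    A generic sketch of block-diagonal preconditioning for a symmetric saddle-point system with MINRES, in the spirit of the preconditioners described above but not taken from the paper: the (1,1) block is approximated by its diagonal, the Schur complement by B diag(A)^{-1} B^T, and each approximation is applied by a direct solve standing in for the multigrid process.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 200, 50
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # SPD (1,1) block
B = sp.random(m, n, density=0.1, format="csr", random_state=0)           # constraint block
K = sp.bmat([[A, B.T], [B, None]], format="csc")                         # saddle-point matrix
rhs = np.ones(n + m)

Ahat = sp.diags(A.diagonal()).tocsc()                    # (1,1)-block approximation
Shat = (B @ sp.diags(1.0 / A.diagonal()) @ B.T).tocsc()  # Schur complement approximation
Ahat_lu, Shat_lu = spla.splu(Ahat), spla.splu(Shat)

def apply_prec(r):
    """Block-diagonal preconditioner diag(Ahat, Shat) applied via direct solves."""
    return np.concatenate([Ahat_lu.solve(r[:n]), Shat_lu.solve(r[n:])])

P = spla.LinearOperator((n + m, n + m), matvec=apply_prec, dtype=float)
x, info = spla.minres(K, rhs, M=P)
print("minres info:", info, "residual:", np.linalg.norm(K @ x - rhs))
```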

    Linear iterative solvers for implicit ODE methods

    The numerical solution of stiff initial value problems, which leads to the problem of solving large systems of mildly nonlinear equations, is considered. For many problems derived from engineering and science, a solution is possible only with methods derived from iterative linear equation solvers. A common approach to solving the nonlinear equations is to employ an approximate solution obtained from an explicit method. The error is examined to determine how it is distributed among the stiff and non-stiff components, which bears on the choice of an iterative method. The conclusion is that the error is (roughly) uniformly distributed, a fact that suggests the Chebyshev method (and the accompanying Manteuffel adaptive parameter algorithm). This method is described, with comments on Richardson's method and its advantages for large problems. Richardson's method and the Chebyshev method with the Manteuffel algorithm are applied to the solution of the nonlinear equations by Newton's method.
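
    A minimal sketch of the Chebyshev iteration for a symmetric positive definite system whose eigenvalues lie in a known interval [lmin, lmax], following the standard three-term recurrence. The spectral bounds are fixed here, whereas the Manteuffel algorithm mentioned above estimates and adapts the parameters during the iteration, which this toy version does not attempt.

```python
import numpy as np

def chebyshev(A, b, lmin, lmax, x0=None, maxiter=1000, tol=1e-8):
    """Chebyshev iteration for SPD A with eigenvalues in [lmin, lmax]."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    theta, delta = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)
    r = b - A @ x
    sigma = theta / delta
    rho = 1.0 / sigma
    d = r / theta
    for _ in range(maxiter):
        x = x + d
        r = r - A @ d
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_next = 1.0 / (2.0 * sigma - rho)
        d = rho_next * rho * d + (2.0 * rho_next / delta) * r
        rho = rho_next
    return x

# Toy usage: 1D Laplacian, whose eigenvalues 4 sin^2(k pi / (2(n+1))) are known.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lmin = 4.0 * np.sin(np.pi / (2 * (n + 1))) ** 2
lmax = 4.0 * np.sin(n * np.pi / (2 * (n + 1))) ** 2
b = np.ones(n)
x = chebyshev(A, b, lmin, lmax, maxiter=2000)
print("residual:", np.linalg.norm(b - A @ x))
```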