
    A general framework for nonlinear multigrid inversion


    Composing Scalable Nonlinear Algebraic Solvers

    Most efficient linear solvers use composable algorithmic components, with the most common model being the combination of a Krylov accelerator and one or more preconditioners. A similar set of concepts may be used for nonlinear algebraic systems, where nonlinear composition of different nonlinear solvers may significantly improve the time to solution. We describe the basic concepts of nonlinear composition and preconditioning and present a number of solvers applicable to nonlinear partial differential equations. We have developed a software framework in order to easily explore the possible combinations of solvers. We show that the performance gains from using composed solvers can be substantial compared with gains from standard Newton-Krylov methods.
    Comment: 29 pages, 14 figures, 13 tables
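
    As a concrete illustration of nonlinear composition (not the paper's software framework), the sketch below applies a multiplicative composition on a toy 2-D system: a few damped Picard sweeps act as an inner nonlinear solver before each outer Newton correction. The system, damping factor, and iteration counts are illustrative assumptions.

```python
# A minimal sketch of multiplicative nonlinear composition, assuming a toy
# 2-D system F(x) = 0 and illustrative solver parameters (not the paper's
# framework or test problems).
import numpy as np

def F(x):
    # toy nonlinear residual; root at x = (1, 2)
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def J(x):
    # analytic Jacobian of F
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def picard_sweep(x, nsweeps=2, omega=0.1):
    # inner solver: damped nonlinear Richardson (Picard) iteration
    for _ in range(nsweeps):
        x = x - omega * F(x)
    return x

def composed_solve(x, tol=1e-10, maxit=50):
    # multiplicative composition: inner Picard sweep, then an outer Newton step
    for _ in range(maxit):
        x = picard_sweep(x)                    # inner nonlinear solver
        x = x - np.linalg.solve(J(x), F(x))    # outer Newton correction
        if np.linalg.norm(F(x)) < tol:
            break
    return x

print(composed_solve(np.array([1.0, 1.0])))    # converges to [1, 2]
```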

    A semi-implicit Hall-MHD solver using whistler wave preconditioning

    The dispersive character of the Hall-MHD solutions, in particular the whistler waves, is a strong restriction on numerical treatments of this system. Numerical stability demands a time step dependence of the form $\Delta t \propto (\Delta x)^2$ for explicit calculations. A new semi-implicit scheme for integrating the induction equation is proposed and applied to a reconnection problem. It is based on a fixed-point iteration with a physically motivated preconditioning. Due to its convergence properties, short wavelengths converge faster than long ones, so the iteration can be used as a smoother in a nonlinear multigrid method.
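
    The quadratic time-step restriction follows from the quadratic (whistler-like) dispersion relation. The back-of-the-envelope sketch below assumes $\omega = C k^2$ with an illustrative constant C and folds the scheme-dependent stability factor into an O(1) constant, so only the scaling with $\Delta x$ is meaningful.

```python
# Back-of-the-envelope sketch of the explicit time-step restriction, assuming a
# whistler-like quadratic dispersion relation omega = C * k^2. The constant C
# and the O(1) stability factor are illustrative; only the dx^2 scaling matters.
import numpy as np

C = 1.0                          # illustrative dispersion coefficient (assumption)
for dx in (0.1, 0.05, 0.025):
    k_max = np.pi / dx           # highest wavenumber resolved on a grid of spacing dx
    omega_max = C * k_max**2     # fastest whistler-like mode
    dt_max = 1.0 / omega_max     # explicit stability limit, up to a scheme-dependent O(1) factor
    print(f"dx = {dx:5.3f}  ->  dt_max ~ {dt_max:.2e}")   # dt_max quarters when dx halves
```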

    Multigrid elliptic equation solver with adaptive mesh refinement

    In this paper we describe in detail the computational algorithm used by our parallel multigrid elliptic equation solver with adaptive mesh refinement. Our code uses truncation error estimates to adaptively refine the grid as part of the solution process. The presentation includes a discussion of the orders of accuracy that we use for prolongation and restriction operators to ensure second order accurate results and to minimize computational work. Code tests are presented that confirm the overall second order accuracy and demonstrate the savings in computational resources provided by adaptive mesh refinement.
    Comment: 12 pages, 9 figures. Modified in response to reviewer suggestions; added figure, added references. Accepted for publication in J. Comp. Phys.
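
    The sketch below assembles the building blocks the abstract mentions (smoothing, restriction, prolongation, coarse-grid correction) into a plain two-grid cycle for a 1-D Poisson problem on uniform grids. It omits the paper's adaptive mesh refinement, parallelism, and truncation-error estimates; all parameters are illustrative.

```python
# A minimal two-grid sketch for 1-D Poisson on uniform grids, assuming linear
# prolongation, full-weighting restriction, and damped Jacobi smoothing.
import numpy as np

def laplacian(n, h):
    # standard 3-point discretization of -d^2/dx^2 with Dirichlet boundaries
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def smooth(A, x, b, nsweeps=3, omega=2.0 / 3.0):
    # damped Jacobi smoother
    D = np.diag(A)
    for _ in range(nsweeps):
        x = x + omega * (b - A @ x) / D
    return x

def restrict(r):
    # full-weighting restriction: fine residual -> coarse grid
    return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(ec):
    # linear interpolation: coarse correction -> fine grid
    ef = np.zeros(2 * ec.size + 1)
    ef[1::2] = ec
    ef[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    ef[0], ef[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    return ef

def two_grid(Af, Ac, x, b):
    x = smooth(Af, x, b)                 # pre-smoothing
    rc = restrict(b - Af @ x)            # restricted fine-grid residual
    ec = np.linalg.solve(Ac, rc)         # exact coarse-grid solve
    x = x + prolong(ec)                  # coarse-grid correction
    return smooth(Af, x, b)              # post-smoothing

nc, nf = 31, 63                          # coarse and fine interior points (nf = 2*nc + 1)
hf, hc = 1.0 / (nf + 1), 1.0 / (nc + 1)
Af, Ac = laplacian(nf, hf), laplacian(nc, hc)
xs = np.linspace(hf, 1.0 - hf, nf)
b = np.pi**2 * np.sin(np.pi * xs)        # right-hand side for exact solution sin(pi x)
x = np.zeros(nf)
for _ in range(10):
    x = two_grid(Af, Ac, x, b)
print("max error vs exact solution:", np.max(np.abs(x - np.sin(np.pi * xs))))
```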

    A fully discrete framework for the adaptive solution of inverse problems

    We investigate and contrast the differences between the discretize-then-differentiate and differentiate-then-discretize approaches to the numerical solution of parameter estimation problems. The former approach is attractive in practice due to the use of automatic differentiation for the generation of the dual and optimality equations in the first-order KKT system. The latter strategy is more versatile, in that it allows one to formulate efficient mesh-independent algorithms over suitably chosen function spaces. However, it is significantly more difficult to implement, since automatic code generation is no longer an option. The starting point is a classical elliptic inverse problem. An a priori error analysis for the discrete optimality equation shows that consistency and stability are not inherited automatically from the primal discretization. Similar to the concept of dual consistency, we introduce the concept of optimality consistency. The convergence properties can, however, be restored through suitable consistent modifications of the target functional. Numerical tests confirm the theoretical convergence order for the optimal solution. We then derive a posteriori error estimates for the infinite-dimensional optimal solution error through a suitably chosen error functional. These estimates are constructed using second-order derivative information for the target functional. For computational efficiency, the Hessian is replaced by a low-order BFGS approximation. The efficiency of the error estimator is confirmed by a numerical experiment with multigrid optimization.
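
    A hedged sketch of the discretize-then-differentiate approach the abstract contrasts: for an assumed 1-D elliptic model problem with a single scalar diffusion coefficient, the gradient of the discrete misfit is computed exactly from the discrete adjoint equation and fed to L-BFGS-B (standing in for the low-order BFGS approximation mentioned above). The model problem, synthetic data, and optimizer choice are illustrative assumptions, not the paper's test case.

```python
# A minimal sketch of the discretize-then-differentiate approach, assuming a
# 1-D elliptic model problem with a single scalar diffusion coefficient q,
# synthetic observations, and L-BFGS-B in place of the paper's BFGS-based
# machinery. The gradient below is the exact derivative of the *discrete*
# objective, obtained from the discrete adjoint equation.
import numpy as np
from scipy.optimize import minimize

n = 50
h = 1.0 / (n + 1)
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2          # discrete -d^2/dx^2
f = np.ones(n)                                      # source term
q_true = 2.0
d = np.linalg.solve(q_true * K, f)                  # synthetic observations

def objective_and_gradient(q):
    q = float(q[0])
    A = q * K                                       # discretized operator A(q)
    u = np.linalg.solve(A, f)                       # discrete state equation A(q) u = f
    r = u - d
    J = 0.5 * r @ r                                 # discrete misfit functional
    lam = np.linalg.solve(A.T, r)                   # discrete adjoint equation A(q)^T lam = u - d
    dJdq = -lam @ (K @ u)                           # dA/dq = K gives the exact discrete gradient
    return J, np.array([dJdq])

res = minimize(objective_and_gradient, x0=[1.0], jac=True,
               method="L-BFGS-B", bounds=[(0.1, 10.0)])
print("recovered q:", res.x[0])                     # approaches q_true = 2.0
```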