
    Globally convergent techniques in nonlinear Newton-Krylov

    Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods in which the approximate Newton direction is taken from a subspace of small dimension. The main focus is on analyzing these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.

    All-at-once solution of time-dependent PDE-constrained optimization problems

    Time-dependent partial differential equations (PDEs) play an important role in applied mathematics and many other areas of science. One-shot methods try to compute the solution to these problems in a single iteration that solves for all time-steps at the same time. In this paper, we look at one-shot approaches for the optimal control of time-dependent PDEs and focus on the fast solution of these problems. The use of Krylov subspace solvers together with an efficient preconditioner allows for minimal storage requirements. We solve only approximate time-evolutions for both the forward and the adjoint problem, and compute accurate solutions of a given control problem only at convergence of the overall Krylov subspace iteration. We show that our approach can give competitive results for a variety of problem formulations.
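The all-at-once idea — assembling the coupled state/control/adjoint optimality system and handing it to a preconditioned Krylov solver — can be illustrated on a stationary toy problem. The sketch below is not the paper's formulation: it uses a 1D Laplacian as a stand-in PDE operator, a made-up target state, and the standard block-diagonal Schur-complement preconditioner for saddle-point systems; all names and parameters are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres, splu, LinearOperator

n = 50
beta = 1e-2                      # control regularization (assumed value)
# Stand-in PDE operator: 1D Dirichlet Laplacian on n interior points.
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") * (n + 1)**2
I = sp.identity(n, format="csc")
yd = np.ones(n)                  # hypothetical target state

# All-at-once KKT system for: min 1/2||y - yd||^2 + beta/2||u||^2  s.t.  K y = u,
# with unknowns (y, u, p) stacked into one symmetric indefinite system.
A = sp.bmat([[I,    None,     K.T],
             [None, beta * I, -I],
             [K,    -I,       None]], format="csc")
b = np.concatenate([yd, np.zeros(n), np.zeros(n)])

# Block-diagonal preconditioner: the (1,1) blocks plus the Schur
# complement S = K K^T + (1/beta) I, factored once up front.
S = (K @ K.T + (1.0 / beta) * I).tocsc()
lu_S = splu(S)

def apply_prec(v):
    out = np.empty_like(v)
    out[:n] = v[:n]                    # inverse of I
    out[n:2 * n] = v[n:2 * n] / beta   # inverse of beta I
    out[2 * n:] = lu_S.solve(v[2 * n:])  # inverse of Schur block
    return out

M = LinearOperator(A.shape, matvec=apply_prec, dtype=np.float64)
x, info = minres(A, b, M=M)
y, u, p = x[:n], x[n:2 * n], x[2 * n:]
```

In the time-dependent setting, K would be replaced by the space-time discretization of the evolution, so the same solve covers all time-steps at once; the preconditioner then only needs approximate (rather than exact) evolutions, as the abstract describes.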

    Deliberate Ill-Conditioning of Krylov Matrices

    This paper starts off with studying simple extrapolation methods for classical iteration schemes such as Richardson, Jacobi and Gauss-Seidel iteration. The extrapolation procedures can be interpreted as approximate minimal residual methods in a Krylov subspace. It therefore seems logical to consider, conversely, classical methods as pre-processors for Krylov subspace methods, as was done by Zítko (1996) for the Conjugate Gradient method. The observation made by Ipsen (1998) that small residuals necessarily imply an ill-conditioned Krylov matrix explains the success of such pre-processing schemes: residuals of classical methods are (unscaled) power method iterates, and building a Krylov subspace on such a classical residual will therefore lead to expansion vectors that are at a small angle to the previous Krylov vectors. This results in an ill-conditioned Krylov matrix. In this paper, we present a large number of experiments that support this claim, and give theoretical interpretations of the pre-processing. The results are mainly of interest for Krylov subspace methods for non-Hermitian matrices based on long recurrences, and in particular for applications with heavy memory limitations. Also, in applications in which minimal residual methods stagnate due to a lack of ill-conditioning, the use of a classical pre-processor can be a cheap and easily parallelizable remedy.
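The pre-processing scheme the abstract describes is easy to demonstrate: a few Richardson sweeps turn the residual into an (unscaled) power-method iterate of the iteration matrix I - wA, and the Krylov method is then started from the pre-processed iterate. The sketch below uses a hypothetical nonsymmetric tridiagonal test matrix and an assumed relaxation parameter; it is an illustration of the idea, not the paper's experiments.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n = 200
# Hypothetical nonsymmetric tridiagonal test matrix with spectrum in
# roughly [1, 3], so Richardson with w = 0.4 converges.
A = sp.diags([np.full(n - 1, -0.4), np.full(n, 2.0), np.full(n - 1, -0.6)],
             [-1, 0, 1], format="csr")
b = rng.standard_normal(n)

# Classical pre-processing: Richardson iteration x <- x + w (b - A x).
# Its residuals satisfy r_k = (I - w A)^k r_0, i.e. they are unscaled
# power-method iterates of I - w A.
x = np.zeros(n)
w = 0.4
for _ in range(10):
    x = x + w * (b - A @ x)

# Start the Krylov method from the pre-processed iterate: the initial
# residual is now close to a dominant eigendirection of I - w A, so new
# Krylov vectors make small angles with previous ones (the deliberate
# ill-conditioning the paper exploits).
sol, info = gmres(A, b, x0=x)
```

For long-recurrence methods such as full GMRES, the practical payoff is that the cheap classical sweeps reduce how much of the (memory-hungry) Krylov basis has to be stored.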