201 research outputs found

    Error bounds for Lanczos-based matrix function approximation

    We analyze the Lanczos method for matrix function approximation (Lanczos-FA), an iterative algorithm for computing $f(\mathbf{A})\mathbf{b}$ when $\mathbf{A}$ is a Hermitian matrix and $\mathbf{b}$ is a given vector. Assuming that $f : \mathbb{C} \rightarrow \mathbb{C}$ is piecewise analytic, we give a framework, based on the Cauchy integral formula, which can be used to derive a priori and a posteriori error bounds for Lanczos-FA in terms of the error of Lanczos used to solve linear systems. Unlike many error bounds for Lanczos-FA, these bounds account for fine-grained properties of the spectrum of $\mathbf{A}$, such as clustered or isolated eigenvalues. Our results are derived assuming exact arithmetic, but we show that they are easily extended to finite precision computations using existing theory about the Lanczos algorithm in finite precision. We also provide generalized bounds for the Lanczos method used to approximate quadratic forms $\mathbf{b}^{\mathsf{H}} f(\mathbf{A}) \mathbf{b}$, and demonstrate the effectiveness of our bounds with numerical experiments.
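    To make the algorithm under analysis concrete, the following is a minimal NumPy sketch of Lanczos-FA (not the paper's code): k steps of Lanczos with full reorthogonalization build an orthonormal basis Q and a tridiagonal matrix T, and f(A)b is approximated by ||b|| Q f(T) e_1, with f(T) evaluated through an eigendecomposition of T. The function name and the A^{-1/2} example are illustrative choices.

```python
import numpy as np

def lanczos_fa(A, b, f, k):
    """Sketch of Lanczos-FA: approximate f(A) @ b for Hermitian A using k
    Lanczos steps (full reorthogonalization for clarity, not efficiency)."""
    n = b.shape[0]
    Q = np.zeros((n, k), dtype=np.result_type(A.dtype, b.dtype))
    alpha, beta = np.zeros(k), np.zeros(k)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = np.real(np.vdot(Q[:, j], w))
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # full reorthogonalization against all previous Lanczos vectors
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].conj().T @ w)
        beta[j] = np.linalg.norm(w)
        if j + 1 < k:
            if beta[j] < 1e-14:          # invariant subspace found; stop early
                Q, alpha, beta = Q[:, : j + 1], alpha[: j + 1], beta[: j + 1]
                break
            Q[:, j + 1] = w / beta[j]
    m = Q.shape[1]
    T = np.diag(alpha[:m]) + np.diag(beta[: m - 1], 1) + np.diag(beta[: m - 1], -1)
    evals, S = np.linalg.eigh(T)          # T = S diag(evals) S^T
    fT_e1 = S @ (f(evals) * S[0, :])      # f(T) e_1
    return np.linalg.norm(b) * (Q @ fT_e1)

# Illustrative use: approximate A^{-1/2} b for a random SPD matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)
b = rng.standard_normal(200)
approx = lanczos_fa(A, b, lambda x: 1.0 / np.sqrt(x), 30)
```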

    On the preconditioning for weak constraint four-dimensional variational data assimilation

    Data assimilation is used to obtain an improved estimate (analysis) of the state of a dynamical system by combining a previous estimate with observations of the system. A weak constraint four-dimensional variational assimilation (4D-Var) method accounts for the dynamical model error and is of great interest in numerical weather prediction. The analysis can be approximated by solving a series of large sparse symmetric positive definite (SPD) or saddle point linear systems of equations. The iterative solvers used for these systems require preconditioning for satisfactory performance. In this thesis, we use randomised numerical methods to construct effective preconditioners that are cheap to construct and apply. We employ a randomised eigenvalue decomposition to construct limited memory preconditioners (LMPs) for a forcing formulation of 4D-Var independently of the previously solved systems; this preconditioning remains effective even if the subsequent systems change significantly (a sketch of the idea follows this abstract). We propose a randomised approximation of a control variable transform technique (CVT) to precondition the SPD system of the state formulation, which preserves potential for a time-parallel model integration. A new way to include the observation information in the approximation of the inverse Schur complement in the block diagonal preconditioner for the saddle point formulation is introduced, namely applying the randomised LMPs. Numerical experiments with idealised systems show that the proposed preconditioners improve the performance of the iterative solvers. We provide theoretical results describing the change of the extreme eigenvalues of the unpreconditioned and preconditioned coefficient matrices when new observations of the dynamical system are added. These show that small positive eigenvalues can cause convergence issues. New eigenvalue bounds for the SPD and saddle point coefficient matrices in the state formulation emphasise their sensitivities to the observations. These results can guide the design of other preconditioners.
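    As referenced above, here is a minimal sketch (not the thesis code) of the two ingredients described in the abstract: a randomised eigenvalue decomposition of a matrix-free SPD operator, followed by a spectral limited memory preconditioner P = I + V(diag(1/lambda) - I)V^T built from the approximate eigenpairs. Function names, the oversampling parameter p, and the stand-in SPD matrix are illustrative assumptions; the forcing-formulation Hessian, the CVT approximation, and the Schur-complement variants are not reproduced here.

```python
import numpy as np

def randomized_eig(A_mv, n, k, p=10, rng=None):
    """Randomised approximation of the k leading eigenpairs of a symmetric
    operator available only through matrix-vector products A_mv
    (range finder plus a small dense eigenproblem)."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((n, k + p))           # Gaussian test matrix
    Y = np.column_stack([A_mv(w) for w in Omega.T])   # sample the range of A
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ np.column_stack([A_mv(q) for q in Q.T])
    B = 0.5 * (B + B.T)                               # symmetrize roundoff
    evals, U = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:k]                 # keep the k largest
    return evals[idx], Q @ U[:, idx]

def spectral_lmp(evals, V):
    """Spectral limited memory preconditioner from approximate eigenpairs:
    apply P = I + V (diag(1/evals) - I) V^T matrix-free."""
    def apply(x):
        return x + V @ ((1.0 / evals - 1.0) * (V.T @ x))
    return apply

# Hypothetical usage on a stand-in SPD matrix playing the role of a
# 4D-Var-style Hessian; P would then precondition a CG solve with A.
n, k = 500, 20
rng = np.random.default_rng(1)
G = rng.standard_normal((n, n))
A = np.eye(n) + 0.1 * (G @ G.T) / n
lam, V = randomized_eig(lambda x: A @ x, n, k)
P = spectral_lmp(lam, V)
```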

    BayesCG As An Uncertainty Aware Version of CG

    The Bayesian Conjugate Gradient method (BayesCG) is a probabilistic generalization of the Conjugate Gradient method (CG) for solving linear systems with real symmetric positive definite coefficient matrices. We present a CG-based implementation of BayesCG with a structure-exploiting prior distribution. The BayesCG output consists of CG iterates and posterior covariances that can be propagated to subsequent computations. The covariances are low-rank and maintained in factored form. This allows easy generation of accurate samples to probe uncertainty in subsequent computations. Numerical experiments confirm the effectiveness of the posteriors and their low-rank approximations.
    Comment: 31 pages including supplementary material (main paper is 22 pages, supplement is 9 pages). Computer codes are available at https://github.com/treid5/ProbNumCG_Sup
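    As a rough illustration of how CG iterates and factored posterior covariances can be produced together, the sketch below assumes the inverse prior Sigma_0 = A^{-1}, under which (per the BayesCG literature) the posterior mean coincides with the CG iterate and the posterior covariance is the low-rank downdate A^{-1} - F F^T, with F built from the A-normalised search directions. This is not the paper's structure-exploiting prior or its implementation; names are illustrative.

```python
import numpy as np

def bayescg_sketch(A, b, x0, m):
    """Sketch of a BayesCG-style iteration under the assumed inverse prior
    Sigma_0 = A^{-1}: the mean follows standard CG, and the posterior
    covariance Sigma_m = A^{-1} - F @ F.T is kept in factored form via F."""
    x = x0.copy()
    r = b - A @ x
    d = r.copy()
    cols = []                                  # columns of the low-rank factor F
    for _ in range(m):
        Ad = A @ d
        dAd = d @ Ad
        alpha = (r @ r) / dAd
        cols.append(d / np.sqrt(dAd))          # A-normalised search direction
        x += alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    F = np.column_stack(cols)                  # Sigma_m = A^{-1} - F F^T
    return x, F

# Toy usage on a small SPD system
rng = np.random.default_rng(2)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x_m, F = bayescg_sketch(A, b, np.zeros(50), 10)
```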