
    Single-pass Nyström approximation in mixed precision

    Low rank matrix approximations appear in a number of scientific computing applications. We consider the Nyström method for approximating a positive semidefinite matrix A. The computational cost of its single-pass version can be decreased by running it in mixed precision, where the expensive products with A are computed in a precision lower than the working precision. We bound the additional error due to finite precision, compare it to the error of the Nyström approximation in exact arithmetic, and develop a heuristic to identify when the approximation quality is not affected by the low precision computation. Further, the mixed precision Nyström method can be used to inexpensively construct a limited memory preconditioner for the conjugate gradient method. We bound the condition number of the resulting preconditioned coefficient matrix, and experimentally show that such a preconditioner can be effective.
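    The sketch below illustrates the kind of computation the abstract describes: a single-pass Nyström approximation in which only the expensive product with A is evaluated in low precision, while the small core factorisation stays in the working precision. The function name, the Gaussian sketch, and the pinv-based core solve are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def nystrom_mixed_precision(A, k, low_dtype=np.float32, rng=None):
        """Single-pass Nystrom approximation A ~ Y @ F @ Y.T of a PSD matrix.

        Only the expensive product A @ Omega is evaluated in `low_dtype`;
        the small dense computations stay in the working precision (float64).
        """
        rng = np.random.default_rng(rng)
        n = A.shape[0]
        Omega = rng.standard_normal((n, k))        # Gaussian sketch (assumed)
        # The single pass over A, carried out in low precision
        Y = (A.astype(low_dtype) @ Omega.astype(low_dtype)).astype(np.float64)
        C = Omega.T @ Y                            # k-by-k core matrix
        F = np.linalg.pinv(C)                      # core solve in working precision
        return Y, F                                # A_hat = Y @ F @ Y.T

    Eigendecomposing Y @ F @ Y.T then yields the low rank factors from which a limited memory preconditioner for the conjugate gradient method can be assembled, as the abstract suggests. A numerically robust implementation would also regularise the core matrix C, for example with a small shift, before inverting it.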

    Randomised preconditioning for the forcing formulation of weak constraint 4D-Var

    There is growing awareness that errors in the model equations cannot be ignored in data assimilation methods such as four-dimensional variational assimilation (4D-Var). If they are allowed for, more information can be extracted from observations, longer time windows are possible, and the minimisation process is easier, at least in principle. Weak constraint 4D-Var estimates the model error and minimises a series of linear least-squares cost functions, which can be achieved using the conjugate gradient (CG) method; minimising each cost function is called an inner loop. CG needs preconditioning to improve its performance. In previous work, limited memory preconditioners (LMPs) have been constructed using approximations of the eigenvalues and eigenvectors of the Hessian in the previous inner loop. If the Hessian changes significantly between consecutive inner loops, the LMP may be of limited usefulness. To circumvent this, we propose using randomised methods for low rank eigenvalue decomposition and use these approximations to cheaply construct LMPs using information from the current inner loop. Three randomised methods are compared. Numerical experiments in idealised systems show that the resulting LMPs perform better than the existing LMPs. Using these methods may allow more efficient and robust implementations of incremental weak constraint 4D-Var.
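    As a rough illustration of the approach described above, the sketch below pairs one generic randomised eigenvalue decomposition (a Gaussian sketch followed by orthonormalisation and projection, in the style of Halko et al.) with a spectral LMP of the form P = I + V(diag(1/lam) - I)V^T. The function names and this particular LMP formula are assumptions for illustration; the paper compares three randomised methods, and this is only one standard variant.

    import numpy as np

    def randomised_eig(S_apply, n, k, p=5, rng=None):
        """Randomised low rank eigendecomposition of a symmetric operator.

        S_apply applies the Hessian to a block of vectors; only such
        products are needed, which keeps the construction cheap inside
        a single inner loop.
        """
        rng = np.random.default_rng(rng)
        Omega = rng.standard_normal((n, k + p))   # Gaussian sketch with oversampling
        Q, _ = np.linalg.qr(S_apply(Omega))       # orthonormal basis for the range
        B = Q.T @ S_apply(Q)                      # small projected Hessian
        lam, W = np.linalg.eigh(B)
        idx = np.argsort(lam)[::-1][:k]           # keep the k largest eigenpairs
        return lam[idx], Q @ W[:, idx]

    def spectral_lmp(lam, V):
        """Return a function applying P = I + V (diag(1/lam) - I) V^T."""
        def apply(r):
            return r + V @ ((1.0 / lam - 1.0) * (V.T @ r))
        return apply

    With (lam, V) estimated from the current inner loop's Hessian, applying this P inside preconditioned CG maps the k estimated eigenvalues towards one, which is the clustering effect the abstract attributes to the proposed LMPs.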