4 research outputs found

    Efficient methods for computing observation impact in 4D-Var data assimilation

    This paper presents a practical computational approach to quantifying the effect of individual observations on the estimate of the state of a system. Such an analysis can be used to prune redundant measurements and to design future sensor networks. The mathematical approach is based on computing the sensitivity of the reanalysis (the unconstrained optimization solution) with respect to the data. The computational cost is dominated by the solution of a linear system whose matrix is the Hessian of the cost function and is available only in operator form. The right-hand side is the gradient of a scalar cost function that quantifies the forecast error of the numerical model. The use of adjoint models to obtain the necessary first- and second-order derivatives is discussed. We study several strategies to accelerate the computation, including matrix-free iterative solvers, preconditioners, and an in-house multigrid solver. Experiments are conducted on both a small shallow-water-equations model and a large-scale numerical weather prediction model in order to illustrate the capabilities of the new methodology.
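
    A minimal sketch of the kind of computation this abstract describes: solving the Hessian system with a matrix-free Krylov method. The operator and right-hand side below are synthetic stand-ins; in the paper they would be supplied by second-order adjoint runs and by the forecast-error gradient, respectively.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        n = 1000                                  # size of the discretised model state
        rng = np.random.default_rng(0)

        # Synthetic symmetric positive definite stand-in for the 4D-Var Hessian;
        # in practice only Hessian-vector products from a second-order adjoint
        # model are available, never an explicit matrix.
        C = rng.standard_normal((n, n))
        C = C @ C.T / n

        def hessian_vector_product(v):
            return v + C @ v                      # I + C is symmetric positive definite

        A = LinearOperator((n, n), matvec=hessian_vector_product)

        # Right-hand side: gradient of a scalar forecast-error measure (synthetic here).
        grad_e = rng.standard_normal(n)

        mu, info = cg(A, grad_e, maxiter=500)     # matrix-free conjugate-gradient solve
        print("converged" if info == 0 else f"CG info = {info}")
        # mu is the sensitivity vector that is subsequently mapped into observation
        # space to attribute forecast-error changes to individual observations.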

    Low-rank Approximations for Computing Observation Impact in 4D-Var Data Assimilation

    We present an efficient computational framework to quantify the impact of individual observations in four-dimensional variational data assimilation. The proposed methodology uses first- and second-order adjoint sensitivity analysis, together with matrix-free algorithms, to obtain low-rank approximations of the observation impact matrix. We illustrate the application of this methodology to important tasks such as data pruning and the identification of faulty sensors for a two-dimensional shallow water test system.
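
    A minimal sketch, under simplifying assumptions, of a matrix-free randomized low-rank approximation of the kind the abstract mentions: the matrix is reached only through products (here a synthetic impact_matvec; in the paper those products would come from first- and second-order adjoint runs).

        import numpy as np

        rng = np.random.default_rng(1)
        n, k, p = 500, 10, 5                       # state size, target rank, oversampling

        # Exactly rank-k synthetic stand-in for the observation impact matrix.
        L = rng.standard_normal((n, k))
        M = L @ L.T

        def impact_matvec(V):
            return M @ V                           # operator access only

        # Randomized range finder + projection (Halko/Martinsson/Tropp style).
        Omega = rng.standard_normal((n, k + p))    # Gaussian test matrix
        Q, _ = np.linalg.qr(impact_matvec(Omega))  # orthonormal basis for the range
        B = Q.T @ impact_matvec(Q)                 # small projected matrix
        w, U = np.linalg.eigh(B)                   # exploit symmetry
        idx = np.argsort(np.abs(w))[::-1][:k]      # keep the k dominant modes
        Uk = Q @ U[:, idx]
        M_k = (Uk * w[idx]) @ Uk.T                 # rank-k approximation of M
        print(np.linalg.norm(M - M_k) / np.linalg.norm(M))   # near machine precision here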

    An Optimization Framework to Improve 4D-Var Data Assimilation System Performance

    This paper develops a computational framework for optimizing the parameters of data assimilation systems in order to improve their performance. The approach formulates a continuous meta-optimization problem for the parameters; the meta-optimization is constrained by the original data assimilation problem. The numerical solution process employs adjoint models and iterative solvers. The proposed framework is applied to optimize observation values, data weighting coefficients, and sensor locations for a test problem. The ability to optimize a distributed measurement network is crucial for reducing operating costs and detecting malfunctions.
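
    A toy sketch of the bilevel idea, not the paper's system: outer parameters (observation weights) are tuned so that the inner analysis minimises a verification error. The inner problem is available in closed form here, and a gradient-free outer search stands in for the adjoint-based machinery of the paper; x_true, obs_std, and the verification samples are hypothetical.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        n, m = 5, 50                                   # state size, verification samples
        x_true = rng.standard_normal(n)                # synthetic "truth"
        x_b = x_true + 0.5 * rng.standard_normal(n)    # background (error std 0.5)
        obs_std = np.array([0.1, 0.1, 2.0, 0.1, 0.1])  # the third sensor is faulty
        Y = x_true + obs_std * rng.standard_normal((m, n))   # observation realisations

        def analysis(w, y):
            # Inner assimilation problem with H = identity: the minimiser of
            # 0.5*||x - x_b||^2 + 0.5*sum(w * (x - y)**2), written in closed form.
            return (x_b + w * y) / (1.0 + w)

        def outer_cost(theta):
            # Meta-optimization objective: mean verification error of the analyses.
            w = np.exp(theta)                          # keep weights positive
            return np.mean([np.sum((analysis(w, y) - x_true) ** 2) for y in Y])

        res = minimize(outer_cost, np.zeros(n), method="Nelder-Mead")
        print(np.round(np.exp(res.x), 3))              # the faulty sensor is typically down-weighted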

    Numerical Linear Algebra in Data Assimilation

    Data assimilation is a method that combines observations (that is, real-world data) of the state of a system with model output for that system in order to improve the estimate of the state and thereby the model output. The model is usually represented by a discretised partial differential equation. The data assimilation problem can be formulated as a large-scale Bayesian inverse problem. Based on this interpretation we will derive the most important variational and sequential data assimilation approaches, in particular three-dimensional and four-dimensional variational data assimilation (3D-Var and 4D-Var) and the Kalman filter. We will then consider more advanced methods, which are extensions of the Kalman filter and of variational data assimilation, and pay particular attention to their advantages and disadvantages. The data assimilation problem usually results in a very large optimisation problem and/or a very large linear system to solve (due to the inclusion of the time and space dimensions). Therefore, the second part of this article aims to review advances and challenges, in particular from the numerical linear algebra perspective, within the various data assimilation approaches.
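
    For reference, the variational formulations reviewed in the article take the standard form (standard notation, not quoted from the text):

        3D-Var:  J(x) = \tfrac{1}{2}(x - x_b)^T B^{-1}(x - x_b) + \tfrac{1}{2}(y - H(x))^T R^{-1}(y - H(x))

        4D-Var:  J(x_0) = \tfrac{1}{2}(x_0 - x_b)^T B^{-1}(x_0 - x_b) + \tfrac{1}{2}\sum_{i=0}^{N}(y_i - H_i(x_i))^T R_i^{-1}(y_i - H_i(x_i)),  with  x_i = M_{0\to i}(x_0),

    where B and R_i are the background- and observation-error covariance matrices, H_i the observation operators, and M_{0\to i} the model propagating the state from t_0 to t_i. In the incremental (Gauss-Newton) setting, minimising either cost leads to large symmetric positive (semi-)definite linear systems of the form (B^{-1} + H^T R^{-1} H)\,\delta x = b, which is where the numerical linear algebra discussed in the article enters.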