
    Degenerate Kalman filter error covariances and their convergence onto the unstable subspace

    The characteristics of the model dynamics are critical to the performance of (ensemble) Kalman filters. In particular, as emphasized in the seminal work of Anna Trevisan and coauthors, the error covariance matrix is asymptotically supported by the unstable-neutral subspace only, i.e., it is spanned by the backward Lyapunov vectors with nonnegative exponents. This behavior is at the core of algorithms known as assimilation in the unstable subspace, although a formal proof was still missing. This paper provides an analytical proof of the convergence of the Kalman filter covariance matrix onto the unstable-neutral subspace when the dynamics and the observation operator are linear and the dynamical model is error-free, for any, possibly rank-deficient, initial error covariance matrix. The rate of convergence is provided as well. The derivation is based on an expression that explicitly relates the error covariances at an arbitrary time to the initial ones. It is also shown that if the unstable and neutral directions of the model are sufficiently observed, and if the column space of the initial covariance matrix has a nonzero projection onto all of the forward Lyapunov vectors associated with the unstable and neutral directions of the dynamics, the covariance matrix of the Kalman filter collapses onto an asymptotic sequence that is independent of the initial covariances. Numerical results are shown to illustrate and support these theoretical findings.
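    The collapse described in this abstract can be illustrated with a toy example (my own sketch with arbitrary numbers, not the paper's construction): a two-dimensional, error-free linear model with one unstable and one stable mode, where only the unstable direction is observed. Iterating the Kalman filter Riccati recursion, the variance along the stable direction vanishes and the covariance collapses onto the unstable subspace.

```python
import numpy as np

# Toy 2-D illustration (my own numbers, not the paper's construction):
# error-free linear dynamics with one unstable mode (growth factor 2)
# and one stable mode (decay factor 0.5); only the unstable direction
# is observed.
M = np.diag([2.0, 0.5])        # model dynamics
H = np.array([[1.0, 0.0]])     # observation operator
R = np.array([[0.1]])          # observation error covariance
P = np.eye(2)                  # full-rank initial error covariance

for _ in range(50):
    # Kalman analysis step, then error-free forecast (no Q term).
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    Pa = (np.eye(2) - K @ H) @ P
    P = M @ Pa @ M.T

# The variance along the stable direction has vanished: the covariance
# is now supported by the unstable subspace span{e1} only.
print(P)
```

    In this example the stable-direction variance decays by a factor 0.25 per cycle, while the unstable-direction variance converges to the fixed point of the scalar Riccati recursion (here 0.3), matching the claimed collapse onto the unstable-neutral subspace.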

    Revision of TR-09-25: A Hybrid Variational/Ensemble Filter Approach to Data Assimilation

    Two families of methods are widely used in data assimilation: the four-dimensional variational (4D-Var) approach and the ensemble Kalman filter (EnKF) approach. The two families have been developed largely through parallel research efforts, and each has its advantages and disadvantages. It is therefore of interest to develop hybrid data assimilation algorithms that combine the relative strengths of the two approaches. This paper proposes a subspace approach to investigate the theoretical equivalence between the suboptimal 4D-Var method (where only a small number of optimization iterations are performed) and the practical EnKF method (where only a small number of ensemble members are used) in a linear Gaussian setting. The analysis motivates a new hybrid algorithm: the optimization directions obtained from a short-window 4D-Var run are used to construct the EnKF initial ensemble. The proposed hybrid method is computationally less expensive than a full 4D-Var, as only short assimilation windows are considered, and it has the potential to perform better than the regular EnKF due to its look-ahead property. Numerical results show that the proposed hybrid ensemble filter performs better than the regular EnKF for both linear and nonlinear test problems.
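    A minimal sketch of the hybrid idea in the linear Gaussian setting (all dimensions and matrices below are illustrative assumptions, not the paper's experiments): an incremental 4D-Var iteration reduces to an iterative solve of a quadratic cost, so the first few conjugate-gradient search directions can be reused as initial EnKF ensemble perturbations.

```python
import numpy as np

# Hedged sketch: run a few conjugate-gradient iterations on a quadratic
# 4D-Var-like cost 0.5 x'Ax - b'x, and reuse the normalized search
# directions as EnKF initial-ensemble perturbations. A, b, and the
# number of iterations are arbitrary illustrative choices.
rng = np.random.default_rng(0)
n = 5
A = np.diag([5.0, 4.0, 3.0, 2.0, 1.0])   # Hessian of the quadratic cost
b = rng.standard_normal(n)               # gradient (innovation) term

x = np.zeros(n)
r = b - A @ x                            # initial residual
p = r.copy()
directions = []
for _ in range(3):                       # short window: few iterations
    alpha = (r @ r) / (p @ A @ p)        # standard CG step length
    x = x + alpha * p
    directions.append(p / np.linalg.norm(p))
    r_new = r - alpha * A @ p
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new

# Initial ensemble: approximate analysis mean plus scaled CG directions.
ensemble = np.column_stack([x + 0.1 * d for d in directions])
print(ensemble.shape)
```

    The 0.1 scaling of the perturbations is an arbitrary choice here; in practice the spread would be tuned to the expected analysis uncertainty.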

    Joint Covariance Estimation with Mutual Linear Structure

    We consider the problem of joint estimation of structured covariance matrices. Assuming the structure is unknown, estimation is achieved using heterogeneous training sets: given groups of measurements coming from centered populations with different covariances, our aim is to determine the mutual structure of these covariance matrices and estimate them. Supposing that the covariances span a low-dimensional affine subspace in the space of symmetric matrices, we develop a new, efficient algorithm that discovers this structure and uses it to improve the estimation. Our technique is based on the application of principal component analysis in the matrix space. We also derive an upper performance bound for the proposed algorithm in the Gaussian scenario and compare it with the Cramér-Rao lower bound. Numerical simulations are presented to illustrate the performance benefits of the proposed method.
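    The core idea of PCA in the matrix space can be sketched as follows (dimensions, bases, and coefficients are my own assumptions, not the paper's experiments): several population covariances share a 2-D affine structure among symmetric matrices, and the spectrum of the vectorized, centered sample covariances reveals that structure.

```python
import numpy as np

# Illustrative sketch: K covariance matrices lie on a 2-D affine
# subspace of symmetric matrices; PCA on their vectorized sample
# estimates concentrates the energy in two components.
rng = np.random.default_rng(1)
d, K, N = 4, 6, 4000

def sym_basis():
    # Random normalized symmetric basis matrix.
    X = rng.standard_normal((d, d))
    B = (X + X.T) / 2
    return B / np.linalg.norm(B)

B1, B2 = sym_basis(), sym_basis()
a = np.linspace(-1.0, 1.0, K)
bcoef = np.array([0.8, -0.5, 0.3, -0.9, 0.6, -0.2])
covs = [2 * np.eye(d) + ai * B1 + bi * B2 for ai, bi in zip(a, bcoef)]

# Heterogeneous training sets: one sample covariance per population.
S = [np.cov(rng.multivariate_normal(np.zeros(d), C, size=N).T)
     for C in covs]

# PCA in matrix space: vectorize, center, and inspect the spectrum.
V = np.stack([Sk.ravel() for Sk in S])
s = np.linalg.svd(V - V.mean(axis=0), compute_uv=False)
print(s)  # energy concentrates in the first two singular values
```

    The residual singular values beyond the first two reflect sampling noise only; projecting each sample covariance onto the recovered affine subspace is what improves the estimates.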

    Asymptotic forecast uncertainty and the unstable subspace in the presence of additive model error

    It is well understood that dynamic instability is among the primary drivers of forecast uncertainty in chaotic, physical systems. Data assimilation techniques have been designed to exploit this phenomenon, reducing the effective dimension of the data assimilation problem to the directions of rapidly growing errors. Recent mathematical work has, moreover, provided formal proofs of the central hypothesis of the assimilation in the unstable subspace methodology of Anna Trevisan and her collaborators: for filters and smoothers in perfect, linear, Gaussian models, the distribution of forecast errors asymptotically conforms to the unstable-neutral subspace. Specifically, the column spans of the forecast and posterior error covariances asymptotically align with the span of the backward Lyapunov vectors with nonnegative exponents. Earlier mathematical studies have focused on perfect models, and the current work explores the relationship between dynamical instability, the precision of observations, and the evolution of forecast error in linear models with additive model error. We prove bounds for the asymptotic uncertainty, explicitly relating the rate of dynamical expansion, model precision, and observational accuracy. Formalizing this relationship, we provide a novel, necessary criterion for the boundedness of forecast errors. Furthermore, we numerically explore the relationship between observational design, dynamical instability, and filter boundedness. Additionally, we include a detailed introduction to the multiplicative ergodic theorem and to the theory and construction of Lyapunov vectors. While forecast error in the stable subspace may not generically vanish, we show that even without filtering, uncertainty remains uniformly bounded due to its dynamical dissipation. However, the continuous reinjection of uncertainty from model errors may be excited by transient instabilities in stable modes of high variance, rendering forecast uncertainty impractically large. In the context of ensemble data assimilation, this requires rectifying the rank of the ensemble-based gain to account for the growth of uncertainty beyond the unstable and neutral subspace, additionally correcting stable modes with frequent occurrences of positive local Lyapunov exponents that excite model errors.
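    The boundedness results can be illustrated with scalar modes (a toy example of my own, not the paper's general bounds): with additive model error variance q, an unfiltered stable mode with multiplier |m| < 1 has forecast variance converging to q / (1 - m**2), while an unstable mode diverges without assimilation but stays bounded when observed.

```python
# Scalar toy illustration (my own numbers, not the paper's bounds).
# Stable mode, no observations: variance stays bounded by dissipation.
m, q = 0.5, 0.2          # stable multiplier |m| < 1, model error variance
p = 10.0                 # large initial uncertainty
for _ in range(100):
    p = m**2 * p + q     # forecast-only recursion, no analysis step
# p has converged to the fixed point q / (1 - m**2) = 0.2 / 0.75

# Unstable mode: divergent without assimilation, but the filtered
# variance stays bounded when the mode is observed.
m_u, r = 2.0, 0.1        # unstable multiplier, observation error variance
pu = 1.0
for _ in range(100):
    pa = pu * r / (pu + r)     # scalar Kalman analysis variance
    pu = m_u**2 * pa + q       # forecast with additive model error
print(p, pu)
```

    The stable-mode limit q / (1 - m**2) is exactly the uniform bound obtained by dissipation alone; the unstable mode's bounded fixed point exists only because the analysis step contracts the variance each cycle.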

    Chaotic dynamics and the role of covariance inflation for reduced rank Kalman filters with model error

    The ensemble Kalman filter and its variants have been shown to be robust for data assimilation in high-dimensional geophysical models with localization, using ensembles of extremely small size relative to the model dimension. However, a reduced-rank representation of the estimated covariance leaves a large-dimensional complementary subspace unfiltered. Utilizing the dynamical properties of the filtration for the backward Lyapunov vectors, this paper explores a previously unexplained mechanism, providing a novel theoretical interpretation of the role of covariance inflation in ensemble-based Kalman filters. Our derivation of the forecast error evolution describes the dynamic upwelling of unfiltered error from outside the span of the anomalies into the filtered subspace. Analytical results for linear systems explicitly describe the mechanism for this upwelling and the associated recursive Riccati equation for the forecast error, while nonlinear approximations are explored numerically.
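    The upwelling mechanism can be sketched in two dimensions (my own illustration with arbitrary numbers, not the paper's derivation): non-normal dynamics map error from the unfiltered complement of the ensemble span back into the tracked subspace, and multiplicative inflation of the tracked covariance is used to absorb that incoming error.

```python
import numpy as np

# Toy 2-D sketch: a reduced-rank filter tracks only span{e1}; the
# non-normal dynamics couple the unfiltered complement span{e2} into
# the tracked subspace ("upwelling").
M = np.array([[1.1, 0.5],
              [0.0, 0.6]])          # stable mode feeds the tracked one
err = np.array([0.0, 1.0])          # error entirely outside span{e1}
err = M @ err                       # one forecast step: [0.5, 0.6]

# Multiplicative inflation of the rank-deficient tracked covariance,
# with an arbitrary illustrative inflation factor rho.
rho = 1.2
P_tracked = np.diag([0.3, 0.0])     # rank-1 covariance on span{e1}
P_inflated = rho**2 * P_tracked     # inflated tracked covariance
print(err, np.trace(P_inflated))
```

    After the forecast, a nonzero component of the formerly unfiltered error lies along e1, inside the tracked span; inflation raises the tracked variance so the gain does not underweight it.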