
    Rank deficiency of Kalman error covariance matrices in linear time-varying system with deterministic evolution

    We prove that for a linear, discrete, time-varying, deterministic system (perfect model) with noisy outputs, the Riccati transformation in the Kalman filter asymptotically bounds the rank of the forecast and analysis error covariance matrices to be less than or equal to the number of nonnegative Lyapunov exponents of the system. Further, the support of these error covariance matrices is shown to be confined to the space spanned by the unstable-neutral backward Lyapunov vectors, providing theoretical justification for algorithms that perform assimilation only in the unstable-neutral subspace. The equivalent property for autonomous systems is investigated as a special case.
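    A minimal numerical sketch of the rank bound (illustrative, not from the paper): running the perfect-model Kalman recursion on a diagonal system with one expanding and two contracting modes, i.e. one nonnegative Lyapunov exponent, drives the asymptotic rank of the analysis error covariance to one.

```python
import numpy as np

# Hypothetical 3-state linear system: one expanding mode (Lyapunov exponent
# log 2 > 0) and two contracting modes (negative exponents). Perfect model:
# no process noise; only the observations are noisy.
M = np.diag([2.0, 0.5, 0.4])      # state transition matrix
H = np.eye(3)                     # observe the full state
R = 0.1 * np.eye(3)               # observation-noise covariance

P = np.eye(3)                     # initial analysis error covariance
for _ in range(200):
    Pf = M @ P @ M.T              # forecast step (deterministic evolution)
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    P = (np.eye(3) - K @ H) @ Pf  # analysis step (Riccati transformation)

eigvals = np.linalg.eigvalsh(P)
rank = int(np.sum(eigvals > 1e-10))
print(rank)  # 1 == number of nonnegative Lyapunov exponents
```

    The covariance in the contracting directions decays geometrically to zero, while the expanding direction settles at a positive fixed point of the scalar Riccati map.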

    Chaotic dynamics and the role of covariance inflation for reduced rank Kalman filters with model error

    The ensemble Kalman filter and its variants have been shown to be robust for data assimilation in high-dimensional geophysical models, using localization and ensembles of extremely small size relative to the model dimension. However, a reduced-rank representation of the estimated covariance leaves a large-dimensional complementary subspace unfiltered. Utilizing the dynamical properties of the filtration for the backward Lyapunov vectors, this paper explores a previously unexplained mechanism, providing a novel theoretical interpretation of the role of covariance inflation in ensemble-based Kalman filters. Our derivation of the forecast-error evolution describes the dynamic upwelling of unfiltered error from outside the span of the anomalies into the filtered subspace. Analytical results for linear systems explicitly describe the mechanism of this upwelling and the associated recursive Riccati equation for the forecast error, while nonlinear approximations are explored numerically.
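    Multiplicative covariance inflation, the mechanism whose role the paper reinterprets, can be sketched generically (toy state and ensemble sizes; the inflation factor is an arbitrary choice): anomalies are rescaled about the ensemble mean, so the sample covariance grows by the square of the factor while the mean is untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

def inflate(ensemble, rho):
    """Multiplicative covariance inflation: rescale anomalies about the
    ensemble mean by rho >= 1, leaving the mean unchanged."""
    mean = ensemble.mean(axis=1, keepdims=True)
    return mean + rho * (ensemble - mean)

X = rng.normal(size=(5, 20))     # 5-dim state, 20 members (toy sizes)
Xi = inflate(X, rho=1.1)

print(np.allclose(np.cov(Xi), 1.1**2 * np.cov(X)))   # True: cov scaled by rho^2
print(np.allclose(Xi.mean(axis=1), X.mean(axis=1)))  # True: mean preserved
```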

    On dimension reduction in Gaussian filters

    A priori dimension reduction is a widely adopted technique for reducing the computational complexity of stationary inverse problems. In this setting, the solution of an inverse problem is parameterized by a low-dimensional basis that is often obtained from the truncated Karhunen-Loève expansion of the prior distribution. For high-dimensional inverse problems equipped with smoothing priors, this technique can lead to drastic reductions in parameter dimension and significant computational savings. In this paper, we extend the concept of a priori dimension reduction to non-stationary inverse problems, in which the goal is to sequentially infer the state of a dynamical system. Our approach proceeds in an offline-online fashion. We first identify a low-dimensional subspace in the state space before solving the inverse problem (the offline phase), using either the method of "snapshots" or regularized covariance estimation. Then this subspace is used to reduce the computational complexity of various filtering algorithms - including the Kalman filter, extended Kalman filter, and ensemble Kalman filter - within a novel subspace-constrained Bayesian prediction-and-update procedure (the online phase). We demonstrate the performance of our new dimension reduction approach on various numerical examples. In some test cases, our approach reduces the dimensionality of the original problem by orders of magnitude and yields up to two orders of magnitude in computational savings.
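    A toy sketch of the offline-online split described above (the dimensions, the synthetic prior, and the single-update online phase are all illustrative assumptions, not the authors' procedure): build a truncated Karhunen-Loève basis from a prior with a fast-decaying spectrum offline, then perform a Kalman-style update entirely in the reduced coordinates online.

```python
import numpy as np

rng = np.random.default_rng(1)

# Offline phase: a hypothetical smoothing prior with a rapidly decaying
# spectrum, so a truncated Karhunen-Loeve basis captures most prior variance.
n, r = 100, 5
U_full, _ = np.linalg.qr(rng.normal(size=(n, n)))
spectrum = 2.0 ** -np.arange(n)                 # decaying prior eigenvalues
C_prior = (U_full * spectrum) @ U_full.T

w, V = np.linalg.eigh(C_prior)
U = V[:, np.argsort(w)[::-1][:r]]               # n x r leading KL basis
captured = spectrum[:r].sum() / spectrum.sum()
print(captured > 0.96)                          # True: 5 modes, >96% variance

# Online phase: Kalman update performed in the r-dimensional subspace.
H = rng.normal(size=(10, n))                    # 10 observations of the state
R_obs = 0.01 * np.eye(10)
Hr = H @ U                                      # 10 x r reduced observation operator
Cr = U.T @ C_prior @ U                          # r x r reduced prior covariance
Kr = Cr @ Hr.T @ np.linalg.inv(Hr @ Cr @ Hr.T + R_obs)

x_true = U_full[:, 0]                           # toy truth in the leading mode
z = Kr @ (H @ x_true)                           # reduced analysis (zero prior mean)
x_hat = U @ z                                   # lift back to the full state space
```

    All matrix operations in the online phase scale with the reduced dimension r rather than the state dimension n.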

    On the Mathematical Theory of Ensemble (Linear-Gaussian) Kalman-Bucy Filtering

    The purpose of this review is to present a comprehensive overview of the theory of ensemble Kalman-Bucy filtering for linear-Gaussian signal models. We present a system of equations that describe the flow of individual particles and the flow of the sample covariance and the sample mean in continuous-time ensemble filtering. We consider these equations and their characteristics in a number of popular ensemble Kalman filtering variants. Given these equations, we study their asymptotic convergence to the optimal Bayesian filter. We also study in detail some non-asymptotic time-uniform fluctuation, stability, and contraction results on the sample covariance and sample mean (or sample error track). We focus on testable signal/observation model conditions, and we accommodate fully unstable (latent) signal models. We discuss the relevance and importance of these results in characterising the filter's behaviour, e.g. its signal tracking performance, and we contrast these results with those in classical studies of stability in Kalman-Bucy filtering. We provide intuition for how these results extend to nonlinear signal models and comment on their consequences for some typical filter behaviours seen in practice, e.g. catastrophic divergence.
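    The continuous-time particle flow can be sketched by an Euler discretisation of one deterministic-transport variant of the ensemble Kalman-Bucy filter for a scalar linear-Gaussian model (a generic illustration; the coefficients and the choice of variant are assumptions, not taken from the review):

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar linear-Gaussian signal/observation model; all constants illustrative.
a, h, q, r = -0.5, 1.0, 0.1, 0.05   # signal drift, obs operator, noise levels
dt, steps, N = 1e-3, 5000, 50       # time step, horizon, ensemble size

x = 1.0                              # latent signal
X = rng.normal(size=N)               # ensemble of particles

for _ in range(steps):
    x += a * x * dt + np.sqrt(q * dt) * rng.normal()
    dy = h * x * dt + np.sqrt(r * dt) * rng.normal()   # observation increment
    m, p = X.mean(), X.var()                           # sample mean/covariance
    # Particle flow: signal drift plus Kalman-Bucy gain times innovation,
    # with the symmetrised (deterministic-transport) innovation term.
    X += a * X * dt + (p * h / r) * (dy - h * (X + m) / 2 * dt)

print(abs(X.mean() - x))   # typically small: ensemble mean tracks the signal
```

    The sample mean and sample covariance computed inside the loop are exactly the quantities whose flow equations the review analyses.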

    Reduced rank filtering in chaotic systems with application in geophysical sciences

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references. Recent technological advancements have enabled us to collect large volumes of noisy geophysical measurements that need to be combined with model forecasts, which capture all of the known properties of the underlying system. This problem is best formulated in a stochastic optimization framework which, when solved recursively, is known as filtering. Due to the large dimensions of geophysical models, optimal filtering algorithms cannot be implemented within the constraints of available computational resources. As a result, most applications use suboptimal reduced-rank algorithms. Successful implementation of reduced-rank filters depends on the dynamical properties of the underlying system. Here, the focus is on geophysical systems with chaotic behavior, defined as extreme sensitivity of the dynamics to perturbations in the state or parameters of the system. In particular, uncertainties in a chaotic system experience growth and instability along a particular set of directions in the state space that are continually subject to large and abrupt state-dependent changes. Therefore, any successful reduced-rank filter has to continually identify the important directions of uncertainty in order to properly estimate the true state of the system. In this thesis, we introduce two efficient reduced-rank filtering algorithms for chaotic systems, scalable to large geophysical applications.
    Firstly, a geometric approach is taken to identify the growing directions of uncertainty, which translate to the leading singular vectors of the state transition matrix over the forecast period, so long as the linear approximation of the dynamics is valid. The singular vectors are computed via iterations of the linear forward and adjoint models of the system and used in a filter with a linear Kalman-based update. Secondly, the dynamical stability of the estimation error in a filter with a linear update is analyzed, assuming that error propagation can be approximated using the state transition matrix of the system over the forecast period. The unstable directions of the error dynamics are identified as the Floquet vectors of an auxiliary periodic system defined from the forecast trajectory. These vectors are computed by iterations of the forward nonlinear model and used in a Kalman-based filter. Both filters are tested on a chaotic Lorenz 95 system with dynamic model error, against the ensemble Kalman filter. Results show that when enough directions are considered, the filters perform at the optimal level, defined by an ensemble Kalman filter with a very large ensemble size. Additionally, both filters perform equally well when the dynamic model error is absent and ensemble filters fail. The number of iterations for computing the vectors can be set a priori based on the available computational resources and desired accuracy. To investigate the scalability of the algorithms, they are implemented in a quasi-geostrophic ocean circulation model. The results are promising for future extensions to realistic geophysical applications with large models. by Adel Ahanin. Ph.D.
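    Extracting leading singular vectors from iterations of the forward and adjoint models, as the first filter does, amounts to power iteration on M^T M. A toy sketch, with a random matrix standing in for the tangent-linear propagator over the forecast period:

```python
import numpy as np

# Power iteration on M^T M recovers the leading right singular vector of M
# using only forward products (M v) and adjoint products (M^T w), i.e. one
# tangent-linear and one adjoint model run per iteration. The random M here
# is an illustrative stand-in for the state transition matrix.
rng = np.random.default_rng(3)
M = rng.normal(size=(8, 8))

v = rng.normal(size=8)
for _ in range(500):
    w = M @ v            # forward (tangent-linear) model run
    v = M.T @ w          # adjoint model run
    v /= np.linalg.norm(v)

sigma = np.linalg.norm(M @ v)        # estimated leading singular value
print(np.isclose(sigma, np.linalg.svd(M, compute_uv=False)[0]))  # True
```

    Deflation against already-found vectors yields the next singular directions, so the number of iterations directly trades accuracy against model runs, as the thesis notes.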

    Ensemble Kalman methods for high-dimensional hierarchical dynamic space-time models

    We propose a new class of filtering and smoothing methods for inference in high-dimensional, nonlinear, non-Gaussian, spatio-temporal state-space models. The main idea is to combine the ensemble Kalman filter and smoother, developed in the geophysics literature, with state-space algorithms from the statistics literature. Our algorithms address a variety of estimation scenarios, including on-line and off-line state and parameter estimation. We take a Bayesian perspective, for which the goal is to generate samples from the joint posterior distribution of states and parameters. The key benefit of our approach is the use of ensemble Kalman methods for dimension reduction, which allows inference for high-dimensional state vectors. We compare our methods to existing ones, including ensemble Kalman filters, particle filters, and particle MCMC. Using a real data example of cloud motion and data simulated under a number of nonlinear and non-Gaussian scenarios, we show that our approaches outperform these existing methods.
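    For reference, the geophysics building block being combined with statistical state-space algorithms is the ensemble Kalman analysis step. A generic textbook sketch of the stochastic (perturbed-observation) form, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_update(X, y, H, R, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.
    X: n x N prior ensemble, y: observation vector,
    H: observation operator, R: observation-noise covariance."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)       # observed anomalies
    S = HA @ HA.T / (N - 1) + R                    # innovation covariance
    K = (A @ HA.T / (N - 1)) @ np.linalg.inv(S)    # ensemble Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return X + K @ (Y - HX)                        # updated ensemble

n, N = 40, 20                                      # toy sizes
X = rng.normal(size=(n, N)) + 2.0                  # prior ensemble centred near 2
H = np.eye(5, n)                                   # observe first 5 components
y = np.zeros(5)                                    # observations say ~0 there
Xa = enkf_update(X, y, H, 0.01 * np.eye(5), rng)

print(abs(Xa[:5].mean()) < abs(X[:5].mean()))      # True: mean pulled toward y
```

    Only sample covariances of the N-member ensemble appear, which is the dimension-reduction property the paper exploits for high-dimensional states.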

    Accounting for Model Error from Unresolved Scales in Ensemble Kalman Filters by Stochastic Parameterization

    The use of discrete-time stochastic parameterization to account for model error due to unresolved scales in ensemble Kalman filters is investigated by numerical experiments. The parameterization quantifies the model error and produces an improved non-Markovian forecast model, which generates high-quality forecast ensembles and improves filter performance. Results are compared with the methods of dealing with model error through covariance inflation and localization (IL), using as an example the two-layer Lorenz-96 system. The numerical results show that when the ensemble size is sufficiently large, the parameterization is more effective in accounting for the model error than IL; if the ensemble size is small, IL is needed to reduce sampling error, but the parameterization further improves the performance of the filter. This suggests that in real applications where the ensemble size is relatively small, the filter can achieve better performance than pure IL if stochastic parameterization methods are combined with IL.
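    A minimal sketch of the idea (illustrative coefficients, not the paper's two-layer Lorenz-96 setup): treat the tendency that a reduced model omits as a stochastic process, and estimate its memory from one-step forecast residuals, yielding a non-Markovian correction term.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "truth": a scalar AR model driven by an unresolved AR(1) process u.
# The reduced forecast model knows the resolved coefficient but omits u.
a_true, a_model = 0.9, 0.9
steps = 5000
x = np.zeros(steps)
u = 0.0
for k in range(1, steps):
    u = 0.8 * u + 0.1 * rng.normal()               # unresolved-scale process
    x[k] = a_true * x[k - 1] + u                   # full dynamics

# Discrete-time stochastic parameterization: one-step residuals of the
# reduced model reveal the omitted tendency; fit its AR(1) memory by
# least squares and use it as a stochastic model-error term.
resid = x[1:] - a_model * x[:-1]
phi = resid[1:] @ resid[:-1] / (resid[:-1] @ resid[:-1])
print(abs(phi - 0.8) < 0.1)                        # True: recovers memory ~0.8
```

    Carrying the fitted residual process alongside the state gives each ensemble member a memory of its own past model error, which is what makes the improved forecast model non-Markovian.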