
    Controlling overestimation of error covariance in ensemble Kalman filters with sparse observations: A variance limiting Kalman filter

    We consider the problem of an ensemble Kalman filter when only partial observations are available. In particular, we consider the situation where the observational space consists of variables which are directly observable with known observational error, and of variables for which only the climatic mean and variance are given. To limit the variance of the latter poorly resolved variables, we derive a variance limiting Kalman filter (VLKF) in a variational setting. We analyze the variance limiting Kalman filter for a simple linear toy model and determine its range of optimal performance. We explore the variance limiting Kalman filter in an ensemble transform setting for the Lorenz-96 system, and show that incorporating information on the variance of some unobservable variables can improve the skill and also increase the stability of the data assimilation procedure. Comment: 32 pages, 11 figures
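    The variance-limiting idea can be illustrated with a minimal sketch: a standard perturbed-observation ensemble Kalman analysis step, followed by rescaling the ensemble anomalies of the unobserved variables whenever their variance exceeds the prescribed climatic variance. This is an illustrative toy, not the authors' ensemble-transform formulation; the function name and arguments are hypothetical.

```python
import numpy as np

def vlkf_analysis(ensemble, y_obs, H, R, clim_var, unobs_idx, seed=None):
    """One hypothetical variance-limiting analysis step (a sketch).

    ensemble  : (n_dim, n_ens) forecast ensemble
    y_obs     : observed components
    H, R      : observation operator and observation-error covariance
    clim_var  : climatic variances of the poorly resolved variables
    unobs_idx : indices of the unobserved variables
    """
    rng = np.random.default_rng(seed)
    n_dim, n_ens = ensemble.shape
    x_mean = ensemble.mean(axis=1, keepdims=True)
    X = ensemble - x_mean                       # forecast anomalies
    Pf = X @ X.T / (n_ens - 1)                  # forecast covariance
    # standard perturbed-observation Kalman update for the observed part
    S = H @ Pf @ H.T + R
    K = Pf @ H.T @ np.linalg.inv(S)
    for i in range(n_ens):
        y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R)
        ensemble[:, i] += K @ (y_pert - H @ ensemble[:, i])
    # variance limiting: shrink anomalies of unobserved variables whose
    # ensemble variance exceeds the climatic variance
    x_mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - x_mean
    var = A.var(axis=1, ddof=1)
    for j in unobs_idx:
        if var[j] > clim_var[j]:
            A[j] *= np.sqrt(clim_var[j] / var[j])
    return x_mean + A
```

    After the limiting step, the analysis variance of each unobserved variable is capped at its climatic value, which is the stabilizing mechanism the abstract describes.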

    Linear theory for filtering nonlinear multiscale systems with model error

    We study filtering of multiscale dynamical systems with model error arising from unresolved smaller scale processes. The analysis assumes continuous-time noisy observations of all components of the slow variables alone. For a linear model with Gaussian noise, we prove existence of a unique choice of parameters in a linear reduced model for the slow variables. The linear theory extends to a non-Gaussian, nonlinear test problem, where we assume we know the optimal stochastic parameterization and the correct observation model. We show that when the parameterization is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa. Given the correct parameterization, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that parameters estimated online, as part of a filtering procedure, produce accurate filtering and equilibrium statistical prediction. In contrast, a linear-regression-based offline method, which fits the parameters to a given training data set independently from the filter, yields filter estimates which are worse than the observations or even divergent when the slow variables are not fully observed.
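    The online estimation strategy mentioned here is commonly implemented by state augmentation: the unknown parameter is appended to the state vector and updated jointly at each analysis step. The following toy sketch does this for a scalar AR(1) model with an ensemble filter; it is a hypothetical illustration of the principle, not the paper's two-layer Lorenz-96 setup, and all names are invented.

```python
import numpy as np

def estimate_ar1_online(ys, r_var, q_var, n_ens=300, seed=0):
    """Estimate the AR(1) coefficient `a` online by augmenting the state
    ensemble with the parameter (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_ens)        # state ensemble
    a = rng.uniform(0.2, 1.0, n_ens)       # broad prior on the parameter
    for y in ys:
        # forecast: each member propagates with its own parameter draw;
        # a small random walk on `a` prevents premature ensemble collapse
        x = a * x + rng.normal(0.0, np.sqrt(q_var), n_ens)
        a = a + rng.normal(0.0, 0.01, n_ens)
        # joint analysis: gain from the ensemble cross-covariance of
        # (x, a) with the predicted observation x
        z = np.vstack([x, a])
        Zp = z - z.mean(axis=1, keepdims=True)
        hp = x - x.mean()
        s = hp @ hp / (n_ens - 1) + r_var
        K = (Zp @ hp) / ((n_ens - 1) * s)
        innov = y + rng.normal(0.0, np.sqrt(r_var), n_ens) - x
        z = z + K[:, None] * innov[None, :]
        x, a = z[0], z[1]
    return x.mean(), a.mean()
```

    Because the parameter is corrected through its sampled correlation with the predicted observation, the nonlinear feedback of the parameter into the filter estimates is accounted for automatically, which is the point of contrast with offline regression.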

    Kalman-Takens filtering in the presence of dynamical noise

    The use of data assimilation for the merging of observed data with dynamical models is becoming standard in modern physics. If a parametric model is known, methods such as Kalman filtering have been developed for this purpose. If no model is known, a hybrid Kalman-Takens method has recently been introduced in order to exploit the advantages of optimal filtering in a nonparametric setting. This procedure replaces the parametric model with dynamics reconstructed from delay coordinates, while using the Kalman update formulation to assimilate new observations. We find that this hybrid approach results in comparable efficiency to parametric methods in identifying underlying dynamics, even in the presence of dynamical noise. By combining the Kalman-Takens method with an adaptive filtering procedure we are able to estimate the statistics of the observational and dynamical noise. This solves a long-standing problem of separating dynamical and observational noise in time series data, which is especially challenging when no dynamical model is specified.
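    The core idea, replacing the parametric forecast model with delay-coordinate dynamics while keeping the Kalman update, can be sketched minimally: forecast the next value from the successors of the nearest analogs in delay space, then blend it with the new observation via a scalar Kalman gain. This is a hypothetical toy, not the authors' implementation; the function name, the nearest-neighbor forecast, and the crude forecast variance are all assumptions for illustration.

```python
import numpy as np

def kalman_takens_step(history, y_obs, r_var, q_var, d=3, k=5):
    """One sketch of a Kalman-Takens step: nonparametric forecast from
    delay-coordinate analogs, then a scalar Kalman update."""
    h = np.asarray(history, float)
    # delay vectors [h[i], ..., h[i+d-1]] and their successors h[i+d]
    lib = np.array([h[i:i + d] for i in range(len(h) - d)])
    succ = h[d:]
    query = h[-d:]                      # most recent delay vector
    # nonparametric forecast: mean successor of the k nearest analogs
    idx = np.argsort(np.linalg.norm(lib - query, axis=1))[:k]
    x_f = succ[idx].mean()
    p_f = succ[idx].var(ddof=1) + q_var  # crude forecast variance
    # scalar Kalman update blending forecast and observation
    K = p_f / (p_f + r_var)
    return x_f + K * (y_obs - x_f)
```

    On a signal with recurrent structure, the analog forecast supplies the model-like prior that the Kalman update needs, so the filtered estimate can beat the raw observation without any parametric model.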

    A Nonparametric Adaptive Nonlinear Statistical Filter

    We use statistical learning methods to construct an adaptive state estimator for nonlinear stochastic systems. Optimal state estimation, in the form of a Kalman filter, requires knowledge of the system's process and measurement uncertainty. We propose that these uncertainties can be estimated from (conditioned on) past observed data, and without making any assumptions about the system's prior distribution. The system's prior distribution at each time step is constructed from an ensemble of least-squares estimates on sub-sampled sets of the data via jackknife sampling. As new data are acquired, the state estimates, process uncertainty, and measurement uncertainty are updated accordingly, as described in this manuscript. Comment: Accepted at the 2014 IEEE Conference on Decision and Control
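    The jackknife construction mentioned in the abstract, an ensemble of least-squares estimates on subsampled data that yields both a point estimate and an empirical covariance, can be sketched as follows. This is a generic leave-one-out jackknife for a linear least-squares fit, offered as an illustration of the sampling idea rather than the paper's filter.

```python
import numpy as np

def jackknife_lstsq(X, y):
    """Leave-one-out jackknife for a least-squares fit: returns the mean
    of the subsample estimates and the jackknife covariance."""
    n = len(y)
    thetas = np.array([
        np.linalg.lstsq(np.delete(X, i, axis=0),
                        np.delete(y, i), rcond=None)[0]
        for i in range(n)
    ])
    theta_bar = thetas.mean(axis=0)
    # jackknife covariance: (n - 1)/n times the sum of squared deviations
    dev = thetas - theta_bar
    cov = (n - 1) / n * dev.T @ dev
    return theta_bar, cov
```

    The resulting covariance plays the role of a data-driven uncertainty estimate, which is what lets the filter dispense with an assumed prior distribution.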