
    Balanced data assimilation for highly-oscillatory mechanical systems

    Data assimilation algorithms are used to estimate the state of a dynamical system from partial and noisy observations. The ensemble Kalman filter has become a popular data assimilation scheme due to its simplicity and robustness across a wide range of application areas. Nevertheless, the ensemble Kalman filter also has limitations due to its inherent Gaussian and linearity assumptions, which can manifest themselves in dynamically inconsistent state estimates. We investigate this issue for highly oscillatory Hamiltonian systems whose dynamical behavior satisfies certain balance relations. We first demonstrate that the standard ensemble Kalman filter can lead to estimates that violate those balance relations, ultimately leading to filter divergence. We then propose two remedies for this phenomenon: blended time-stepping schemes and ensemble-based penalty methods. The effects of these modifications on the standard ensemble Kalman filter are discussed and demonstrated numerically for two model scenarios. First, we consider balanced motion for highly oscillatory Hamiltonian systems; second, we investigate thermally embedded highly oscillatory Hamiltonian systems. The first scenario is relevant for applications in meteorology, while the second is relevant for applications of data assimilation to molecular dynamics.
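The standard ensemble Kalman filter analysis step that the abstract refers to can be sketched as follows. This is a minimal stochastic EnKF with perturbed observations; the function name and the linear observation operator `H` are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """One stochastic EnKF analysis step with perturbed observations.

    ensemble : (n, N) array of N state vectors of dimension n
    y        : (m,) observation vector
    H        : (m, n) linear observation operator
    R        : (m, m) observation error covariance
    """
    n, N = ensemble.shape
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = anomalies @ anomalies.T / (N - 1)          # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # one independently perturbed copy of the observation per member
    m = y.shape[0]
    Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
    return ensemble + K @ (Y - H @ ensemble)
```

Because this update is linear and built on Gaussian assumptions, nothing in it enforces nonlinear balance relations, which is exactly the kind of dynamical inconsistency the paper addresses.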

    A comparison of assimilation results from the ensemble Kalman Filter and a reduced-rank extended Kalman Filter

    The goal of this study is to compare the performance of the ensemble Kalman filter and a reduced-rank extended Kalman filter applied to different dynamical regimes. Data assimilation experiments are performed using an eddy-resolving quasi-geostrophic model of the wind-driven ocean circulation. By changing the eddy viscosity, this model exhibits two qualitatively distinct behaviors: strongly chaotic for the low-viscosity case and quasi-periodic for the high-viscosity case. In the reduced-rank extended Kalman filter algorithm, the model is linearized with respect to the time mean of a long model run without assimilation, and a reduced state space is obtained from a small number (100 for the low-viscosity case and 20 for the high-viscosity case) of leading empirical orthogonal functions (EOFs) derived from that run. Corrections to the forecasts are made only in the reduced state space at the analysis time, and a steady-state filter is assumed to exist, yielding a faster filter algorithm. The ensemble Kalman filter is based on estimating the state-dependent forecast error statistics using Monte Carlo methods, and it is computationally more expensive than the reduced-rank extended Kalman filter. The results show that for the strongly nonlinear, chaotic regime, about 32 ensemble members are sufficient to accurately describe the non-stationary, inhomogeneous, and anisotropic structure of the forecast error covariance; the performance of the reduced-rank extended Kalman filter is very similar to simple optimal interpolation, and the ensemble Kalman filter greatly outperforms the reduced-rank extended Kalman filter. For the high-viscosity case, both the reduced-rank extended Kalman filter and the ensemble Kalman filter are able to significantly reduce the analysis error, and their performances are similar.
Moreover, in the high-viscosity case the model has three preferred regimes, each with a distinct energy level. The probability density of the system is therefore multi-modal, and the error of the ensemble mean for the ensemble Kalman filter can be larger with larger ensembles than with smaller ones.
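The reduced state space described above (leading EOFs of a long free model run, with corrections confined to that subspace) can be sketched as follows; the function names are illustrative, not from the paper:

```python
import numpy as np

def leading_eofs(snapshots, k):
    """Leading EOFs from a long model run without assimilation.

    snapshots : (n, T) array whose columns are model states
    k         : number of EOFs to retain (e.g. 100 or 20 in the paper)
    """
    anomalies = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(anomalies, full_matrices=False)
    return U[:, :k]                      # (n, k) orthonormal EOF basis

def reduced_correction(E, increment):
    """Restrict a full-space analysis increment to the EOF subspace."""
    return E @ (E.T @ increment)
```

Confining the analysis increment to the span of a fixed EOF basis is what makes the reduced-rank filter cheap, but it is also why its error statistics cannot track the flow-dependent structures that the ensemble filter captures in the chaotic regime.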

    Data Assimilation Fundamentals

    This open-access textbook's significant contribution is the unified derivation of data-assimilation techniques from a common fundamental and optimal starting point, namely Bayes' theorem. Unique to this book is the "top-down" derivation of the assimilation methods: it starts from Bayes' theorem and gradually introduces the assumptions and approximations needed to arrive at today's popular data-assimilation methods. This strategy is the opposite of most textbooks and reviews on data assimilation, which typically take a bottom-up approach to derive a particular assimilation method, for example, deriving the Kalman filter from control theory, or the ensemble Kalman filter as a low-rank approximation of the standard Kalman filter. The bottom-up approach derives the assimilation methods from different mathematical principles, making it difficult to compare them; it can be unclear which assumptions are made to derive a method, and sometimes even which problem it aspires to solve. The book's top-down approach allows categorizing data-assimilation methods based on the approximations used, enabling the reader to choose the most suitable method for a particular problem or application. Have you ever wondered about the difference between the ensemble 4DVar and the ensemble randomized maximum likelihood (EnRML) methods? Do you know the differences between the ensemble smoother and the ensemble Kalman smoother? Would you like to understand how a particle flow is related to a particle filter? This book provides clear answers to several such questions. It provides the basis for an advanced course in data assimilation, focuses on the unified derivation of the methods, and illustrates their properties in multiple examples. It is suitable for graduate students, postdocs, scientists, and practitioners working in data assimilation.
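The common starting point from which the book derives everything is Bayes' theorem for the state $x$ given observations $y$, with the evidence $p(y)$ acting only as a normalization constant:

```latex
p(x \mid y) \;=\; \frac{p(y \mid x)\,p(x)}{p(y)} \;\propto\; p(y \mid x)\,p(x)
```

Each assimilation method then corresponds to a particular set of approximations (e.g. Gaussian priors, linearized observation operators, ensemble representations) applied to this one expression.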

    Controlling overestimation of error covariance in ensemble Kalman filters with sparse observations: A variance limiting Kalman filter

    We consider the problem of an ensemble Kalman filter when only partial observations are available. In particular, we consider the situation where the observational space consists of variables that are directly observable with known observational error, and of variables for which only their climatic variance and mean are given. To limit the variance of the latter, poorly resolved variables, we derive a variance limiting Kalman filter (VLKF) in a variational setting. We analyze the variance limiting Kalman filter for a simple linear toy model and determine its range of optimal performance. We explore the variance limiting Kalman filter in an ensemble transform setting for the Lorenz-96 system, and show that incorporating the information of the variance of some unobservable variables can improve the skill and also increase the stability of the data assimilation procedure.
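A crude sketch of the effect the VLKF aims at, keeping the ensemble variance of poorly resolved variables from exceeding their known climatic variance, is given below. This is not the paper's variational derivation, just an illustrative anomaly-shrinking step with made-up names:

```python
import numpy as np

def limit_variance(ensemble, clim_var, idx):
    """Shrink ensemble anomalies of the variables in `idx` so their
    sample variance does not exceed the given climatic variance.

    ensemble : (n, N) array of N state vectors
    clim_var : (n,) climatic variances
    idx      : indices of the poorly resolved variables
    """
    ens = ensemble.copy()
    for i in idx:
        v = ens[i].var(ddof=1)
        if v > clim_var[i]:
            mean_i = ens[i].mean()
            scale = np.sqrt(clim_var[i] / v)
            ens[i] = mean_i + scale * (ens[i] - mean_i)  # cap at climatic variance
    return ens
```

The paper's actual filter derives the constraint variationally inside the Kalman analysis rather than rescaling afterwards, but the target, bounded variance for the unresolved variables, is the same.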

    Long-time stability and accuracy of the ensemble Kalman--Bucy filter for fully observed processes and small measurement noise

    The ensemble Kalman filter has become a popular data assimilation technique in the geosciences. However, little is known theoretically about its long-term stability and accuracy. In this paper, we investigate the behavior of an ensemble Kalman--Bucy filter applied to continuous-time filtering problems. We derive mean-field limiting equations as the ensemble size goes to infinity, as well as uniform-in-time accuracy and stability results for finite ensemble sizes. The latter results require that the process is fully observed and that the measurement noise is small. We also demonstrate that our ensemble Kalman--Bucy filter is consistent with the classic Kalman--Bucy filter for linear systems and Gaussian processes. We finally verify our theoretical findings for the Lorenz-63 system.
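One common deterministic formulation of the ensemble Kalman--Bucy filter evolves each member by a mean-field equation of the form $\dot X_i = F X_i - \tfrac12 P H^\top R^{-1}\,(H X_i + H\bar x - 2y)$, where $P$ is the empirical ensemble covariance. An Euler discretization of this formulation (a sketch under these assumptions; it is not claimed to be the exact scheme of the paper) looks like:

```python
import numpy as np

def enkbf_euler_step(X, y, F, H, Rinv, dt):
    """One Euler step of a deterministic ensemble Kalman--Bucy filter.

    X    : (n, N) ensemble of N state vectors
    y    : (m,) current observation
    F    : (n, n) linear drift matrix
    H    : (m, n) observation operator
    Rinv : (m, m) inverse measurement-noise covariance
    """
    _, N = X.shape
    xbar = X.mean(axis=1, keepdims=True)
    A = X - xbar
    P = A @ A.T / (N - 1)                           # empirical covariance
    innovation = H @ X + H @ xbar - 2.0 * y[:, None]
    return X + dt * (F @ X - 0.5 * P @ H.T @ (Rinv @ innovation))
```

In the fully observed, small-noise regime analyzed in the paper, the gain term contracts the ensemble toward the observed trajectory, which is the mechanism behind the uniform-in-time accuracy results.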
