
    Evaluating Data Assimilation Algorithms

    Data assimilation leads naturally to a Bayesian formulation in which the posterior probability distribution of the system state, given the observations, plays a central conceptual role. The aim of this paper is to use this Bayesian posterior probability distribution as a gold standard against which to evaluate various commonly used data assimilation algorithms. A key aspect of geophysical data assimilation is the high dimensionality and low predictability of the computational model. With this in mind, yet with the goal of allowing an explicit and accurate computation of the posterior distribution, we study the 2D Navier-Stokes equations in a periodic geometry. We compute the posterior probability distribution by state-of-the-art statistical sampling techniques. The commonly used algorithms that we evaluate against this accurate gold standard, as quantified by the relative error in reproducing its moments, are 4DVAR and a variety of sequential filtering approximations based on 3DVAR and on extended and ensemble Kalman filters. The primary conclusions are that: (i) with appropriate parameter choices, approximate filters can perform well in reproducing the mean of the desired probability distribution; (ii) however, they typically perform poorly when attempting to reproduce the covariance; (iii) this poor performance is compounded by the need to modify the covariance in order to induce stability. Thus, whilst filters can be a useful tool in predicting mean behavior, they should be viewed with caution as predictors of uncertainty. These conclusions are intrinsic to the algorithms and will not change if the model complexity is increased, for example by employing a smaller viscosity or by using a detailed NWP model.
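    The moment-comparison metric used in this evaluation can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the function and variable names are hypothetical, and the toy posterior is a 2D Gaussian standing in for the sampled Navier-Stokes posterior. It mimics the paper's finding by giving the "filter" the correct mean but a badly scaled covariance.

```python
import numpy as np

def relative_moment_errors(posterior_samples, filter_mean, filter_cov):
    """Relative error of a filter's first two moments against a
    gold-standard posterior represented by samples (e.g. from MCMC)."""
    gold_mean = posterior_samples.mean(axis=0)
    gold_cov = np.cov(posterior_samples, rowvar=False)
    mean_err = np.linalg.norm(filter_mean - gold_mean) / np.linalg.norm(gold_mean)
    cov_err = np.linalg.norm(filter_cov - gold_cov) / np.linalg.norm(gold_cov)
    return mean_err, cov_err

# Toy example: the filter reproduces the mean exactly but halves the covariance,
# so the mean error is near zero while the covariance error is about 0.5.
rng = np.random.default_rng(0)
true_cov = np.array([[2.0, 0.3], [0.3, 1.0]])
samples = rng.multivariate_normal([1.0, -2.0], true_cov, size=50000)
m_err, c_err = relative_moment_errors(samples, np.array([1.0, -2.0]), 0.5 * true_cov)
```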

    Well-Posedness And Accuracy Of The Ensemble Kalman Filter In Discrete And Continuous Time

    The ensemble Kalman filter (EnKF) is a method for combining a dynamical model with data in a sequential fashion. Despite its widespread use, there has been little analysis of its theoretical properties. Many of the algorithmic innovations associated with the filter, which are required to make a usable algorithm in practice, are derived in an ad hoc fashion. The aim of this paper is to initiate the development of a systematic analysis of the EnKF, in particular to do so in the small ensemble size limit. The perspective is to view the method as a state estimator, and not as an algorithm which approximates the true filtering distribution. The perturbed observation version of the algorithm is studied, both without and with variance inflation. Without variance inflation, well-posedness of the filter is established; with variance inflation, accuracy of the filter, with respect to the true signal underlying the data, is established. The algorithm is considered in discrete time, and also in a continuous time limit arising when observations are frequent and subject to large noise. The underlying dynamical model, and the assumptions made about it, are sufficiently general to include the Lorenz '63 and '96 models, together with the incompressible Navier-Stokes equation on a two-dimensional torus. The analysis is limited to the case of complete observation of the signal with additive white noise. Numerical results are presented for the Navier-Stokes equation on a two-dimensional torus for both complete and partial observations of the signal with additive white noise.
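    The perturbed-observation EnKF with multiplicative variance inflation, as analyzed above, can be sketched in a minimal form. This is a generic textbook version on a toy linear model with full observations, not the paper's implementation; the dynamics, noise levels, and inflation factor are illustrative choices.

```python
import numpy as np

def enkf_step(ensemble, y, H, R, forward, inflation=1.0, rng=None):
    """One perturbed-observation EnKF cycle with multiplicative
    variance inflation applied to the forecast ensemble."""
    rng = rng if rng is not None else np.random.default_rng()
    X = np.array([forward(x) for x in ensemble])      # forecast ensemble (N, d)
    m = X.mean(axis=0)
    X = m + inflation * (X - m)                       # variance inflation
    C = np.cov(X, rowvar=False)                       # forecast covariance
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)      # Kalman gain
    # each member assimilates an independently perturbed observation
    Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=len(X))
    return X + (Y - X @ H.T) @ K.T

# Toy example: contracting 2D linear dynamics, fully observed.
A = np.array([[0.9, 0.1], [-0.1, 0.9]])
H = np.eye(2)
R = 0.01 * np.eye(2)
rng = np.random.default_rng(1)
truth = np.array([1.0, 0.0])
ens = rng.normal(size=(50, 2))                        # initial ensemble
for _ in range(20):
    truth = A @ truth
    y = truth + rng.multivariate_normal([0.0, 0.0], R)
    ens = enkf_step(ens, y, H, R, lambda x: A @ x, inflation=1.02, rng=rng)
```

    After a few cycles the ensemble mean tracks the true signal; setting `inflation` below 1 or to exactly 1 with a small ensemble illustrates the collapse that inflation is designed to counteract.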

    Accuracy and stability of filters for dissipative PDEs

    Data assimilation methodologies are designed to incorporate noisy observations of a physical system into an underlying model in order to infer the properties of the state of the system. Filters refer to a class of data assimilation algorithms designed to update the estimate of the state in an on-line fashion, as data is acquired sequentially. For linear problems subject to Gaussian noise, filtering can be performed exactly using the Kalman filter. For nonlinear systems, filtering can be approximated in a systematic way by particle filters. However, in high dimensions these particle filtering methods can break down. Hence, for the large nonlinear systems arising in applications such as oceanography and weather forecasting, various ad hoc filters are used, mostly based on making Gaussian approximations. The purpose of this work is to study the accuracy and stability properties of these ad hoc filters. We work in the context of the 2D incompressible Navier-Stokes equation, although the ideas readily generalize to a range of dissipative partial differential equations (PDEs). By working in this infinite dimensional setting we provide an analysis which is useful for the understanding of high dimensional filtering, and is robust to mesh refinement. We describe theoretical results showing that, in the small observational noise limit, the filters can be tuned to perform accurately in tracking the signal itself (filter accuracy), provided the system is observed in a sufficiently large low dimensional space; roughly speaking, this space should be large enough to contain the unstable modes of the linearized dynamics. The tuning corresponds to what is known as variance inflation in the applied literature. Numerical results are given which illustrate the theory. The positive results herein concerning filter stability complement recent numerical studies which demonstrate that the ad hoc filters can perform poorly in reproducing statistical variation about the true signal.
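    The mechanism described above, that observing the unstable modes with a suitably inflated gain is enough for tracking, can be illustrated with a minimal 3DVAR-type filter. This is a toy two-mode linear caricature (one unstable, one stable direction), not the paper's PDE setting; the gain value and noise level are illustrative.

```python
import numpy as np

def three_dvar(m, y, Psi, H, K):
    """One 3DVAR cycle: propagate the estimate with the model Psi,
    then nudge toward the data with a fixed gain K (no covariance update)."""
    mf = Psi(m)
    return mf + K @ (y - H @ mf)

# Toy system: one unstable mode (growth 1.1) and one stable mode (decay 0.5).
# Observing only the unstable component, with a large ("inflated") gain on it,
# makes the error map contractive even though the dynamics are unstable.
A = np.diag([1.1, 0.5])
H = np.array([[1.0, 0.0]])        # observe the unstable component only
K = np.array([[0.9], [0.0]])      # fixed gain, acting on that component
rng = np.random.default_rng(2)
sigma = 1e-3                      # small observational noise
truth = np.array([1.0, 1.0])
m = np.array([5.0, -5.0])         # badly initialized estimate
for _ in range(100):
    truth = A @ truth
    y = H @ truth + sigma * rng.normal(size=1)
    m = three_dvar(m, y, lambda x: A @ x, H, K)
```

    The error in the observed unstable direction contracts by a factor 0.11 per step, and the unobserved stable direction decays on its own, so the estimate tracks a signal that itself grows without bound.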

    Regularization modeling for large-eddy simulation of homogeneous isotropic decaying turbulence

    Inviscid regularization modeling of turbulent flow is investigated. Homogeneous, isotropic, decaying turbulence is simulated at a range of filter widths. A coarse-graining of turbulent flow arises from the direct regularization of the convective nonlinearity in the Navier–Stokes equations. The regularization is translated into its corresponding sub-filter model to close the equations for large-eddy simulation (LES). The accuracy with which primary turbulent flow features are captured by this modeling is investigated for the Leray regularization, the Navier–Stokes-α formulation (NS-α), the simplified Bardina model and a modified Leray approach. On the PDE level, each regularization principle is known to possess a unique, strong solution with known regularity properties. When used as turbulence closures for numerical simulations, significant differences between these models are observed. Through a comparison with direct numerical simulation (DNS) results, a detailed assessment of these regularization principles is made. The regularization models retain much of the small-scale variability in the solution. The smaller resolved scales are dominated by the specific sub-filter model adopted. As far as accuracy is concerned, we find that the Leray model is in general closest to the filtered DNS results, the modified Leray model is least accurate, and the simplified Bardina and NS-α models lie in between. This rough ordering is based on the energy decay, the Taylor Reynolds number and the velocity skewness, and on detailed characteristics of the energy dynamics, including spectra of the energy, the energy transfer and the transfer power. At filter widths up to about 10% of the computational domain size, the Leray and NS-α predictions were found to correlate well with the filtered DNS data. Each of the regularization models underestimates the energy decay rate and overestimates the tail of the energy spectrum. For several of the regularization models, the correspondence with unfiltered DNS spectra was often observed to be closer than with the filtered DNS spectra.
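    The core idea of the Leray regularization, replacing the advecting velocity in the convective nonlinearity by its filtered counterpart, can be sketched in one dimension. This is a schematic 1D periodic analogue with a spectral Gaussian filter, not the paper's 3D Navier–Stokes setup; the field, filter width, and function names are illustrative.

```python
import numpy as np

def gaussian_filter_1d(u, delta, L=2 * np.pi):
    """Spectral Gaussian filter of width delta on a periodic grid,
    with the standard LES transfer function exp(-(delta*k)^2 / 24)."""
    k = np.fft.fftfreq(len(u), d=L / len(u)) * 2 * np.pi
    return np.real(np.fft.ifft(np.exp(-(delta * k) ** 2 / 24.0) * np.fft.fft(u)))

def convective_term(u, u_advect, L=2 * np.pi):
    """Convection u_advect * du/dx via spectral differentiation.
    Leray regularization passes the filtered velocity as u_advect."""
    k = np.fft.fftfreq(len(u), d=L / len(u)) * 2 * np.pi
    dudx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return u_advect * dudx

N = 128
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(5 * x)          # field with a small-scale component
ubar = gaussian_filter_1d(u, delta=0.5)       # filtered (coarse-grained) velocity
leray = convective_term(u, ubar)              # Leray: advect with filtered velocity
standard = convective_term(u, u)              # standard: advect with u itself
```

    The filter damps the high-wavenumber content of the advecting velocity, which is the mechanism by which the regularization tempers the nonlinearity while leaving the advected field fully resolved.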

    Dynamic Model for LES Without Test Filtering: Quantifying the Accuracy of Taylor Series Approximations

    The dynamic model for large-eddy simulation (LES) of turbulent flows requires test filtering the resolved velocity fields in order to determine model coefficients. However, test filtering is costly to perform in large-eddy simulation of complex-geometry flows, especially on unstructured grids. The objective of this work is to develop and test an approximate but less costly dynamic procedure which does not require test filtering. The proposed method is based on Taylor series expansions of the resolved velocity fields. Accuracy is governed by the derivative schemes used in the calculation and the number of terms retained in the approximation to the test filtering operator. The expansion is developed up to fourth order, and results are tested a priori based on direct numerical simulation data of forced isotropic turbulence in the context of the dynamic Smagorinsky model. The tests compare the dynamic Smagorinsky coefficient obtained from explicit test filtering with those obtained from application of the Taylor series expansion. They show that the expansion up to second order provides a reasonable approximation to the true dynamic coefficient (with errors on the order of 5% for c_s^2), but that including higher-order terms does not necessarily improve the results, due to inherent limitations in accurately evaluating high-order derivatives. A posteriori tests using the Taylor series approximation in LES of forced isotropic turbulence and channel flow confirm that it yields accurate results for the dynamic coefficient. Moreover, the simulations are stable and yield accurate resolved velocity statistics.
    Comment: submitted to Theoretical and Computational Fluid Dynamics, 20 pages, 11 figures
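    The second-order Taylor surrogate for the test filter can be sketched and checked against an explicit filter in one dimension. This assumes a Gaussian test filter on a 1D periodic field, for which the standard expansion is filtered u ≈ u + (Δ²/24) u''; it is an illustrative verification, not the paper's multi-dimensional unstructured-grid procedure.

```python
import numpy as np

# Second-order Taylor approximation of a Gaussian test filter:
# replaces explicit filtering by u + (Delta^2 / 24) * d2u/dx2.
N = 256
L = 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
u = np.sin(x) + 0.2 * np.cos(3 * x)
delta = 0.3                                   # test-filter width (illustrative)

u_hat = np.fft.fft(u)
d2u = np.real(np.fft.ifft(-(k ** 2) * u_hat))
u_taylor = u + delta ** 2 / 24.0 * d2u        # Taylor-series surrogate

# Exact Gaussian filter (transfer function exp(-(delta*k)^2 / 24)) for comparison.
u_exact = np.real(np.fft.ifft(np.exp(-(delta * k) ** 2 / 24.0) * u_hat))
err = np.max(np.abs(u_taylor - u_exact)) / np.max(np.abs(u_exact))
```

    For filter widths small relative to the resolved scales the surrogate is very close to the explicit filter; the gap grows with Δ·k, consistent with the a priori error behavior reported above.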