Model error and sequential data assimilation. A deterministic formulation
Data assimilation schemes are confronted with the presence of model errors arising from the imperfect description of atmospheric dynamics. These errors are usually modeled on the basis of simple assumptions such as bias, white noise, or a first-order Markov process. In the present work, a formulation of the sequential extended Kalman filter is proposed, based on recent findings on the universal deterministic behavior of model errors (Nicolis, 2004), in contrast with previous approaches. This new scheme is applied in the context of a spatially distributed system proposed by Lorenz (1996). It is found that (i) for short times, the estimation error is accurately approximated by an evolution law in which the variance of the model error (assumed to be a deterministic process) evolves according to a quadratic law, in agreement with the theory; moreover, the correlation with the initial condition error appears to play a secondary role in the short-time dynamics of the estimation error covariance; and (ii) the deterministic description of the model error evolution, incorporated into the classical extended Kalman filter equations, reveals that substantial improvements in filter accuracy can be gained compared with the classical white-noise assumption. The universal, short-time, quadratic law for the evolution of the model error covariance matrix seems very promising for modeling estimation error dynamics in sequential data assimilation.
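As a worked illustration of the quadratic law mentioned above, a minimal sketch follows; the notation (true tendency f, model tendency f̂, tendency mismatch δμ) is assumed here for illustration rather than taken from the abstract.

```latex
% Sketch, assuming the model error source is the tendency mismatch
% \delta\mu(x) = f(x) - \hat{f}(x) between the true and model dynamics.
\[
  \delta x^{m}(t) \simeq \delta\mu\!\left(x(t_0)\right)(t - t_0)
  \quad\Longrightarrow\quad
  \mathbf{P}^{m}(t) \simeq \mathbf{Q}\,(t - t_0)^{2},
  \qquad
  \mathbf{Q} = \left\langle \delta\mu \, \delta\mu^{\mathsf{T}} \right\rangle .
\]
```

In words: over short times the model-error contribution grows linearly along the tendency mismatch, so its covariance grows quadratically in time.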
Controlling instabilities along a 3DVar analysis cycle by assimilating in the unstable subspace: a comparison with the EnKF
A hybrid scheme obtained by combining 3DVar with Assimilation in the Unstable Subspace (3DVar-AUS) is tested in a quasi-geostrophic model, under perfect-model conditions, with a fixed observational network, with and without observational noise. The AUS scheme, originally formulated to assimilate adaptive observations, is used here to assimilate the fixed observations that are found in the regions of local maxima of BDAS vectors (Bred vectors subject to assimilation), while the remaining observations are assimilated by 3DVar. The performance of the hybrid scheme is compared with that of 3DVar and of an EnKF. The improvement gained by 3DVar-AUS and the EnKF with respect to 3DVar alone is similar in the present model and observational configuration, while 3DVar-AUS outperforms the EnKF during the forecast stage. The 3DVar-AUS algorithm is easy to implement, and the results obtained in the idealized conditions of this study encourage further investigation toward an implementation in more realistic contexts.
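To make the AUS idea concrete, here is a minimal sketch of an analysis increment confined to the span of a few unstable directions; the function name, array shapes and the reduced-space covariance `Gamma` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def aus_increment(x_f, y, H, R, E, Gamma):
    """Analysis increment confined to span(E), the columns of E being a few
    unstable directions (e.g. bred vectors).  Illustrative sketch only: a
    3DVar-like least-squares update solved in the reduced subspace."""
    HE = H @ E                         # unstable directions as seen by the observations
    d = y - H @ x_f                    # innovation vector
    # Reduced-space analysis: (Gamma^-1 + (HE)^T R^-1 HE) a = (HE)^T R^-1 d
    A = np.linalg.inv(Gamma) + HE.T @ np.linalg.solve(R, HE)
    a = np.linalg.solve(A, HE.T @ np.linalg.solve(R, d))
    return E @ a                       # the increment lies entirely in span(E)
```

With only a handful of columns in E, the linear algebra is performed in that small subspace, which is part of the computational appeal of the approach.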
Developing a dynamically based assimilation method for targeted and standard observations
In a recent study, a new method for assimilating observations has been proposed and applied to a small-size nonlinear model. The assimilation is obtained by confining the analysis increment in the unstable subspace of the Observation-Analysis-Forecast (OAF) cycle system, in order to systematically eliminate the dynamically unstable components, present in the forecast error, which are responsible for error growth. Based on the same ideas, applications to more complex models and to different, standard and adaptive, observation networks are in progress. Observing System Simulation Experiments (OSSE), performed with an atmospheric quasi-geostrophic model with a restricted "land" area where vertical profiles are systematically observed and a wider "ocean" area where a single supplementary observation is taken at each analysis time, are reviewed. The adaptive observation is assimilated either with the proposed method or, for comparison, with a 3DVar scheme. The performance of the dynamic assimilation is very good: a reduction of the error by almost an order of magnitude is obtained in the data-void region. The same method is applied to a primitive equation ocean model, where "satellite altimetry" observations are assimilated. In this standard observational configuration, preliminary results show a less spectacular but significant improvement obtained by the introduction of the dynamical assimilation.
Full-field and anomaly initialization using a low-order climate model: a comparison and proposals for advanced formulations
Initialization techniques for seasonal-to-decadal climate predictions fall into two main categories: full-field initialization (FFI) and anomaly initialization (AI). In the FFI case the initial model state is replaced by the best available estimate of the real state. By doing so the initial error is efficiently reduced but, due to the unavoidable presence of model deficiencies, once the model runs freely to produce a prediction, its trajectory drifts away from the observations no matter how small the initial error is. This problem is partly overcome with AI, where the aim is to forecast future anomalies by assimilating observed anomalies on an estimate of the model climate.
The large variety of experimental setups, models and observational networks adopted worldwide makes it difficult to draw firm conclusions on the respective advantages and drawbacks of FFI and AI, or to identify distinctive lines for improvement. The lack of a unified mathematical framework adds a further difficulty to the design of adequate initialization strategies that fit the desired forecast horizon, observational network and model at hand.
Here we compare FFI and AI using a low-order climate model of nine ordinary differential equations and use the notation and concepts of data assimilation theory to highlight their error scaling properties. This analysis suggests better performance with FFI when a good observational network is available, and reveals the direct relation of its skill to the observational accuracy. The skill of AI appears, however, mostly related to the model quality, and clear increases in skill can only be expected in coincidence with model upgrades.
We have compared FFI and AI in experiments in which either the full system or the atmosphere and ocean were independently initialized. In the former case FFI shows better and longer-lasting improvements, with skillful predictions until month 30. In the initialization of single compartments, the best performance is obtained when the more stable component of the model (the ocean) is initialized, but with FFI it is possible to have some predictive skill even when the most unstable compartment (the extratropical atmosphere) is observed.
Two advanced formulations, least-square initialization (LSI) and exploring parameter uncertainty (EPU), are introduced. With LSI the initialization makes use of model statistics to propagate information from observation locations to the entire model domain. Numerical results show that LSI improves the performance of FFI in all situations in which only a portion of the system's state is observed. EPU is an online drift-correction method in which the drift caused by the parametric error is estimated using a short-time evolution law and is then removed during the forecast run. Its implementation in conjunction with FFI allows us to improve the prediction skill within the first forecast year.
Finally, the application of these results in the context of realistic climate models is discussed.
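Schematically, and with notation assumed here rather than taken from the text (y₀ the observed state at the initial time, x̄ᵐ and ȳᵒ the model and observed climatologies), the two initialization strategies can be contrasted as follows.

```latex
% Schematic contrast (notation assumed, not from the text):
% y_0 = observed state at the initial time, \bar{x}^{m} = model climatology,
% \bar{y}^{o} = observed climatology.
\[
  x_0^{\mathrm{FFI}} = y_0,
  \qquad
  x_0^{\mathrm{AI}} = \bar{x}^{m} + \left( y_0 - \bar{y}^{o} \right).
\]
```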
DADA: data assimilation for the detection and attribution of weather and climate-related events
A new nudging method for data assimilation, delay-coordinate nudging, is presented. Delay-coordinate nudging makes explicit use of present and past observations in the formulation of the forcing that drives the model evolution at each time step. Numerical experiments with a low-order chaotic system show that the new method systematically outperforms standard nudging in different model and observational scenarios, even when using an unoptimized formulation of the delay-nudging coefficients. A connection between the optimal delay and the dominant Lyapunov exponent of the dynamics is found on the basis of heuristic arguments and is confirmed by the numerical results, providing a guideline for the practical implementation of the algorithm. Delay-coordinate nudging preserves the ease of implementation, the intuitive functioning and the reduced computational cost of standard nudging, making it a potential alternative especially in the field of seasonal-to-decadal predictions with large Earth system models, which limit the use of more sophisticated data assimilation procedures.
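A minimal sketch of the kind of forcing the abstract describes, assuming a single past delay and a toy chaotic model; the nudging coefficients and the Lorenz-63 test system are illustrative choices, not the paper's setup.

```python
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Toy chaotic model, used here only to give the nudging term something to act on."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def delay_nudging_tendency(x_now, x_past, y_now, y_past, g0=5.0, g1=5.0):
    """Model tendency relaxed toward the present observation and toward one
    past observation (the delay-coordinate term).  g0, g1 and the single
    delay are illustrative assumptions."""
    return (lorenz63(x_now)
            + g0 * (y_now - x_now)      # standard nudging term
            + g1 * (y_past - x_past))   # delayed term using past data
```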
Rank deficiency of Kalman error covariance matrices in linear time-varying system with deterministic evolution
We prove that, for a linear, discrete, time-varying, deterministic system (perfect model) with noisy outputs, the Riccati transformation in the Kalman filter asymptotically bounds the rank of the forecast and analysis error covariance matrices to be less than or equal to the number of nonnegative Lyapunov exponents of the system. Further, the support of these error covariance matrices is shown to be confined to the space spanned by the unstable-neutral backward Lyapunov vectors, providing the theoretical justification for the methodology of algorithms that perform assimilation only in the unstable-neutral subspace. The equivalent property of the autonomous system is investigated as a special case.
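A small numerical sketch of the statement, under assumed illustrative dynamics: a deterministic (zero model noise) time-varying linear system with one growing, one neutral and one decaying direction, fully observed with noisy outputs. The analysis covariance should numerically lose rank in the decaying direction.

```python
import numpy as np

n, steps = 3, 200
R = 0.1 * np.eye(n)                     # observation-error covariance
H = np.eye(n)                           # observe the full state
P = np.eye(n)                           # initial analysis error covariance

for k in range(steps):
    # Deterministic (Q = 0), time-varying dynamics with one growing,
    # one neutral and one decaying direction (illustrative choice).
    M = np.diag([1.05 + 0.01 * np.sin(k), 1.0, 0.5 + 0.1 * np.sin(k)])
    Pf = M @ P @ M.T                                   # forecast step, no model noise
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    P = (np.eye(n) - K @ H) @ Pf                       # Riccati (analysis) update

print(np.linalg.eigvalsh(P))                 # variance in the decaying direction ~ 0
print(np.linalg.matrix_rank(P, tol=1e-10))   # effectively 2 = number of nonnegative exponents
```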
The Role of Scanning Electron Microscopy in Periodontal Research
During recent years a great amount of research has led to a better understanding of the etiology, pathogenesis and pattern of progression of periodontal diseases. Scanning electron microscopy (SEM) has contributed to this improvement, mainly with respect to the histology of periodontal tissues, the description of the morphology and distribution of bacteria on the exposed root surface, the analysis of host-parasite interactions on the gingival pocket wall, and the morphological evaluation of root treatment. This review deals with all these topics. Unusual types of SEM research, as well as uncommon sample preparation techniques for SEM in periodontal research, are also described and discussed. SEM should find wide application in periodontal research in the near future. Cathodoluminescence, backscattered emission and immunolabelling techniques will be formidable tools in this field of dentistry.
Data assimilation as a learning tool to infer ordinary differential equation representations of dynamical models
Recent progress in machine learning has shown how to forecast and, to some extent, learn the dynamics of a model from its output, resorting in particular to neural networks and deep learning techniques. We show how the same goal can be achieved directly using data assimilation techniques, without relying on machine learning software libraries, and with a view to high-dimensional models. The dynamics of a model are learned from observations of the model, and an ordinary differential equation (ODE) representation of this model is inferred using a recursive nonlinear regression. Because the method is embedded in a Bayesian data assimilation framework, it can learn from partial and noisy observations of a state trajectory of the physical model. Moreover, a space-wise local representation of the ODE system is introduced and is key to coping with high-dimensional models. It has recently been suggested that neural network architectures could be interpreted as dynamical systems; reciprocally, we show that our ODE representations are reminiscent of deep learning architectures. Furthermore, numerical analysis considerations of stability shed light on the assets and limitations of the method. The method is illustrated on several chaotic discrete and continuous models of various dimensions, with or without noisy observations, with the goal of identifying or improving the model dynamics, building a surrogate or reduced model, or producing forecasts solely from observations of the physical model.
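As a much-simplified sketch of the regression idea only (not the paper's Bayesian data assimilation scheme), one can fit the tendency of a densely observed, lightly noisy three-variable trajectory as a linear combination of monomials. The dictionary, ridge term and finite-difference tendency are assumptions made for illustration.

```python
import numpy as np

def monomials(x):
    """Dictionary of candidate terms (up to second order) for a 3-variable state.
    The choice of dictionary is an illustrative assumption."""
    x1, x2, x3 = x
    return np.array([1.0, x1, x2, x3,
                     x1 * x1, x1 * x2, x1 * x3, x2 * x2, x2 * x3, x3 * x3])

def fit_ode(traj, dt, ridge=1e-6):
    """Fit dx/dt ~ A @ monomials(x) by ridge-regularized least squares from a
    sampled trajectory `traj` of shape (T, 3).  Finite differences stand in
    for the tendency; the ridge term tames observational noise."""
    dX = (traj[1:] - traj[:-1]) / dt                    # (T-1, 3) approximate tendencies
    Phi = np.array([monomials(x) for x in traj[:-1]])   # (T-1, 10) dictionary matrix
    A = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]), Phi.T @ dX)
    return A.T                                          # (3, 10) ODE coefficients
```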
Evaluating Data Assimilation Algorithms
Data assimilation leads naturally to a Bayesian formulation in which the posterior probability distribution of the system state, given the observations, plays a central conceptual role. The aim of this paper is to use this Bayesian posterior probability distribution as a gold standard against which to evaluate various commonly used data assimilation algorithms.
A key aspect of geophysical data assimilation is the high dimensionality and low predictability of the computational model. With this in mind, yet with the goal of allowing an explicit and accurate computation of the posterior distribution, we study the 2D Navier-Stokes equations in a periodic geometry. We compute the posterior probability distribution by state-of-the-art statistical sampling techniques. The commonly used algorithms that we evaluate against this accurate gold standard, as quantified by comparing the relative error in reproducing its moments, are 4DVAR and a variety of sequential filtering approximations based on 3DVAR and on extended and ensemble Kalman filters.
The primary conclusions are that (i) with appropriate parameter choices, approximate filters can perform well in reproducing the mean of the desired probability distribution; (ii) however, they typically perform poorly when attempting to reproduce the covariance; and (iii) this poor performance is compounded by the need to modify the covariance in order to induce stability. Thus, whilst filters can be a useful tool in predicting mean behavior, they should be viewed with caution as predictors of uncertainty. These conclusions are intrinsic to the algorithms and will not change if the model complexity is increased, for example by employing a smaller viscosity or by using a detailed NWP model.
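Schematically, the comparison against the gold standard can be summarized by relative errors in the first two posterior moments; the notation below is assumed for illustration (m, C the mean and covariance of the sampled posterior; m̂, Ĉ those produced by a given algorithm).

```latex
% Schematic evaluation metrics (notation assumed): m, C are the mean and
% covariance of the sampled gold-standard posterior; \hat{m}, \hat{C} are
% those produced by a given assimilation algorithm.
\[
  e_{\mathrm{mean}} = \frac{\lVert \hat{m} - m \rVert}{\lVert m \rVert},
  \qquad
  e_{\mathrm{cov}} = \frac{\lVert \hat{C} - C \rVert_{F}}{\lVert C \rVert_{F}}.
\]
```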
Improving weather and climate predictions by training of supermodels
Recent studies demonstrate that weather and climate predictions potentially improve by dynamically combining different models into a so-called "supermodel". Here, we focus on the weighted supermodel, in which the supermodel's time derivative is a weighted superposition of the time derivatives of the imperfect models, an approach referred to as weighted supermodeling. A crucial step is to train the weights of the supermodel on the basis of historical observations. Here, we apply two different training methods to a supermodel of up to four different versions of the global atmosphere-ocean-land model SPEEDO. The standard version is regarded as truth. The first training method is based on an idea called cross pollination in time (CPT), where models exchange states during the training. The second method is a synchronization-based learning rule, originally developed for parameter estimation. We demonstrate that both training methods yield climate simulations and weather predictions of superior quality compared to the individual model versions. Supermodel predictions also outperform predictions based on the commonly used multi-model ensemble (MME) mean. Furthermore, we find evidence that negative weights can improve predictions in cases where model errors do not cancel (for instance, when all models are warm with respect to the truth). In principle, the proposed training schemes are applicable to state-of-the-art models and historical observations. A prime advantage of the proposed training schemes is that, in the present context, relatively short training periods suffice to find good solutions. Additional work needs to be done to assess the limitations due to incomplete and noisy data, to combine models that are structurally different (with different resolution and state representation, for instance), and to evaluate cases for which the truth falls outside of the model class.
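Schematically, the weighted supermodel evolves according to a weighted superposition of the imperfect-model tendencies; the notation below is assumed for illustration (fᵢ the individual model tendencies, wᵢ the trained weights, which may be negative).

```latex
% Schematic weighted-supermodel tendency (notation assumed): f_i are the
% tendencies of the imperfect models, w_i the trained weights (possibly negative).
\[
  \frac{\mathrm{d}x_s}{\mathrm{d}t} = \sum_{i} w_i \, f_i(x_s).
\]
```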
