
    Model error and sequential data assimilation. A deterministic formulation

    Data assimilation schemes are confronted with model errors arising from the imperfect description of atmospheric dynamics. These errors are usually modeled on the basis of simple assumptions such as a bias, white noise, or a first-order Markov process. In the present work, a formulation of the sequential extended Kalman filter is proposed, based on recent findings on the universal, deterministic behavior of model errors (Nicolis, 2004), in sharp contrast with previous approaches. This new scheme is applied in the context of the spatially distributed system proposed by Lorenz (1996). It is found that (i) for short times, the estimation error is accurately approximated by an evolution law in which the variance of the model error (assumed to be a deterministic process) evolves according to a quadratic law, in agreement with the theory; moreover, the correlation with the initial condition error appears to play a secondary role in the short-time dynamics of the estimation error covariance. (ii) The deterministic description of the model error evolution, incorporated into the classical extended Kalman filter equations, shows that substantial improvements in filter accuracy can be gained compared with the classical white-noise assumption. The universal, short-time, quadratic law for the evolution of the model error covariance matrix seems very promising for modeling estimation error dynamics in sequential data assimilation.
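As an illustrative sketch of the contrast this abstract describes — not the paper's scheme — the following toy scalar example propagates an estimation error variance under a deterministic model error whose variance grows quadratically in time, versus the classical white-noise assumption with a constant per-step variance. All function names and numerical values are invented for the demonstration.

```python
# Toy scalar illustration: propagate the estimation error variance P under
# two model-error assumptions. "quadratic=True" mimics the deterministic
# error whose variance grows ~ t^2; "quadratic=False" mimics the classical
# white-noise assumption (constant per-step variance). Values are invented.

def forecast_variance(P0, M, steps, dt, alpha, quadratic=True):
    """P_{k+1} = M^2 * P_k + Q_k, with Q_k chosen by the error model."""
    P = P0
    for k in range(steps):
        t = (k + 1) * dt
        Q = alpha * t ** 2 if quadratic else alpha * dt
        P = M ** 2 * P + Q
    return P

P_quadratic = forecast_variance(P0=0.1, M=1.02, steps=10, dt=0.05, alpha=0.5)
P_white = forecast_variance(P0=0.1, M=1.02, steps=10, dt=0.05, alpha=0.5,
                            quadratic=False)
```

Treating the model-error variance as time-dependent in the forecast step is the key difference from the constant-Q white-noise filter.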

    Controlling instabilities along a 3DVar analysis cycle by assimilating in the unstable subspace: a comparison with the EnKF

    A hybrid scheme obtained by combining 3DVar with Assimilation in the Unstable Subspace (3DVar-AUS) is tested in a quasi-geostrophic (QG) model, under perfect-model conditions, with a fixed observational network, with and without observational noise. The AUS scheme, originally formulated to assimilate adaptive observations, is used here to assimilate the fixed observations that fall in the regions of local maxima of BDAS vectors (bred vectors subject to assimilation), while the remaining observations are assimilated by 3DVar. The performance of the hybrid scheme is compared with that of 3DVar and of an EnKF. The improvement gained by 3DVar-AUS and the EnKF with respect to 3DVar alone is similar in the present model and observational configuration, while 3DVar-AUS outperforms the EnKF during the forecast stage. The 3DVar-AUS algorithm is easy to implement, and the results obtained in the idealized conditions of this study encourage further investigation toward an implementation in more realistic contexts.

    Estimating model evidence using data assimilation

    We review the field of data assimilation (DA) from a Bayesian perspective and show that, in addition to its by now common application to state estimation, DA may be used for model selection. An important special case of the latter is the discrimination between a factual model, which corresponds, to the best of the modeller's knowledge, to the situation in the actual world in which a sequence of events has occurred, and a counterfactual model, in which a particular forcing or process might be absent or just quantitatively different from the actual world. Three different ensemble-DA methods are reviewed for this purpose: the ensemble Kalman filter (EnKF), the ensemble four-dimensional variational smoother (En-4D-Var), and the iterative ensemble Kalman smoother (IEnKS). An original contextual formulation of model evidence (CME) is introduced. It is shown how to apply these three methods to compute CME, using the approximated time-dependent probability distribution functions (pdfs) each of them provides in the process of state estimation. The theoretical formulae so derived are applied to two simplified nonlinear and chaotic models: (i) the Lorenz three-variable convection model (L63), and (ii) the Lorenz 40-variable midlatitude atmospheric dynamics model (L95). The numerical results of these three DA-based methods and those of an integration based on importance sampling are compared. It is found that better CME estimates are obtained by using DA, and the IEnKS method appears to be best among the DA methods. Differences among the performance of the three DA-based methods are discussed as a function of model properties. Finally, the methodology is implemented for parameter estimation and for event attribution.
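The basic ingredient of ensemble-based model evidence can be sketched in a few lines: each model assigns a likelihood to the observed data through the forecast pdf approximated from its ensemble, and the factual/counterfactual comparison reduces to comparing those likelihoods. The toy example below (not the paper's CME algorithm; ensembles and observation values are invented) fits a Gaussian to each ensemble and scores a single observation.

```python
import math

# Toy sketch: compare two models by the log-likelihood each assigns to an
# observation, with the forecast pdf approximated as a Gaussian fitted to
# the model's ensemble and inflated by the observation-error variance.

def gaussian_log_likelihood(ensemble, y, obs_var):
    """Log p(y | model) under a Gaussian fit to the ensemble forecast."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1) + obs_var
    return -0.5 * (math.log(2 * math.pi * var) + (y - mean) ** 2 / var)

y_obs = 1.0
ll_factual = gaussian_log_likelihood([0.9, 1.1, 1.0, 0.95], y_obs, 0.01)
ll_counterfactual = gaussian_log_likelihood([2.0, 2.2, 1.9, 2.1], y_obs, 0.01)
# The factual ensemble brackets the observation and scores higher.
```

In the full methods reviewed above, this single-time likelihood is extended to a sequence of observations using the time-dependent pdfs produced during state estimation.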

    The Role of Scanning Electron Microscopy in Periodontal Research

    During recent years, a great amount of research has led to a better understanding of the etiology, pathogenesis and pattern of progression of periodontal diseases. Scanning electron microscopy (SEM) has contributed to this improvement, mainly with respect to the histology of periodontal tissues, the description of the morphology and distribution of bacteria on the exposed root surface, the analysis of host-parasite interactions on the gingival pocket wall, and the morphological evaluation of root treatment. This review deals with all of these topics. Unusual types of SEM research are also described and discussed, along with uncommon sample preparation techniques for SEM in periodontal research. SEM is expected to find broad application in periodontal research in the near future: cathodoluminescence, backscattered emission and immunolabelling techniques will be formidable tools in this field of dentistry.

    Data assimilation as a learning tool to infer ordinary differential equation representations of dynamical models

    Recent progress in machine learning has shown how to forecast and, to some extent, learn the dynamics of a model from its output, resorting in particular to neural networks and deep learning techniques. We show how the same goal can be achieved directly using data assimilation techniques, without relying on machine learning software libraries, with a view to high-dimensional models. The dynamics of a model are learned from observations of it, and an ordinary differential equation (ODE) representation of this model is inferred using recursive nonlinear regression. Because the method is embedded in a Bayesian data assimilation framework, it can learn from partial and noisy observations of a state trajectory of the physical model. Moreover, a space-wise local representation of the ODE system is introduced and is key to coping with high-dimensional models. It has recently been suggested that neural network architectures could be interpreted as dynamical systems; reciprocally, we show that our ODE representations are reminiscent of deep learning architectures. Furthermore, numerical analysis considerations of stability shed light on the assets and limitations of the method. The method is illustrated on several chaotic discrete and continuous models of various dimensions, with or without noisy observations, with the goal of identifying or improving the model dynamics, building a surrogate or reduced model, or producing forecasts solely from observations of the physical model.
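The core idea — inferring an ODE representation by regressing observed tendencies on candidate terms — can be sketched in its simplest (fully observed, linear-in-parameters) form. The example below is an invented illustration, not the paper's Bayesian DA scheme: it recovers the coefficients of a logistic ODE from a trajectory by least squares on finite-difference derivatives.

```python
import numpy as np

# Toy sketch: infer an ODE representation dx/dt = a*x + b*x^2 from an
# observed trajectory by linear regression of finite-difference derivatives
# on candidate monomials. True coefficients are invented for demonstration.

a_true, b_true = 1.0, -1.0            # logistic growth: dx/dt = x - x^2
dt = 0.01
x = [0.1]
for _ in range(500):                   # generate a trajectory (Euler scheme)
    x.append(x[-1] + dt * (a_true * x[-1] + b_true * x[-1] ** 2))
x = np.array(x)

dxdt = (x[1:] - x[:-1]) / dt           # finite-difference derivative
basis = np.column_stack([x[:-1], x[:-1] ** 2])   # candidate terms x, x^2
coef, *_ = np.linalg.lstsq(basis, dxdt, rcond=None)
a_est, b_est = coef                    # close to (1.0, -1.0)
```

The paper's contribution lies in what this sketch omits: handling partial, noisy observations through the DA framework, and a space-wise local parameterization that keeps the regression tractable in high dimension.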

    Improving weather and climate predictions by training of supermodels

    Recent studies demonstrate that weather and climate predictions can potentially be improved by dynamically combining different models into a so-called "supermodel". Here, we focus on weighted supermodeling, in which the supermodel's time derivative is a weighted superposition of the time derivatives of the imperfect models. A crucial step is to train the weights of the supermodel on the basis of historical observations. Here, we apply two different training methods to a supermodel of up to four different versions of the global atmosphere-ocean-land model SPEEDO, with the standard version regarded as truth. The first training method is based on an idea called cross pollination in time (CPT), where models exchange states during the training. The second method is a synchronization-based learning rule, originally developed for parameter estimation. We demonstrate that both training methods yield climate simulations and weather predictions of superior quality compared to the individual model versions. Supermodel predictions also outperform predictions based on the commonly used multi-model ensemble (MME) mean. Furthermore, we find evidence that negative weights can improve predictions in cases where model errors do not cancel (for instance, when all models are warm with respect to the truth). In principle, the proposed training schemes are applicable to state-of-the-art models and historical observations. A prime advantage of the proposed training schemes is that, in the present context, relatively short training periods suffice to find good solutions. Additional work needs to be done to assess the limitations due to incomplete and noisy data, to combine models that are structurally different (of different resolution and state representation, for instance) and to evaluate cases for which the truth falls outside of the model class.
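The defining equation of a weighted supermodel — its tendency is a weighted sum of the imperfect models' tendencies — admits a very small sketch. The example below is purely illustrative (scalar models, least-squares fitting of the weights against "truth" tendencies; it is neither SPEEDO nor the CPT/synchronization training rules of the paper).

```python
import numpy as np

# Minimal weighted-supermodel sketch: the supermodel tendency is
# w1 * f_a(x) + w2 * f_b(x), with the weights fitted by least squares to
# the tendencies of a "truth" model on sample states. The fitted weights
# need not be positive or sum to one. All dynamics are invented.

truth = lambda x: -1.0 * x            # "true" dynamics dx/dt = -x
model_a = lambda x: -0.7 * x          # imperfect model 1 (too weakly damped)
model_b = lambda x: -1.6 * x          # imperfect model 2 (too strongly damped)

xs = np.linspace(-2.0, 2.0, 21)       # sample states for training
A = np.column_stack([model_a(xs), model_b(xs)])
w, *_ = np.linalg.lstsq(A, truth(xs), rcond=None)

def supermodel(x):
    return w[0] * model_a(x) + w[1] * model_b(x)   # reproduces -x
```

Here the imperfect models err in opposite directions, so a positive-weight combination suffices; as the abstract notes, when model errors do not cancel, negative weights can become beneficial.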

    A study on the forecast quality of the mediterranean cyclones

    Communication presented at the 4th Plinius Conference on Mediterranean Storms, held from 2 to 4 October 2002 in Palma de Mallorca.

    The main general objective of MEDEX is stated to be the improvement of knowledge and forecasting of cyclones that produce high-impact weather in the Mediterranean area. To this end, one of the intermediate goals of the project concerns the development of an objective method to evaluate the quality of cyclone forecasts. The aim of the present study is to investigate cyclone forecast errors in that area and to propose an objective methodology to quantify them. An investigation of the performance of the HIRLAM(INM)-0.5 model in the forecast of cyclonic centres has been carried out. Databases of analysed and forecast cyclones for the Western Mediterranean have been used in this study. The "distance" between the analysed and forecast cyclone has been measured by calculating the differences in the values of the parameters chosen to describe them at sea level. Results on the characteristics of the errors are shown. An index constructed from these differences has been introduced to evaluate, and quantify, the ability of the model to forecast cyclones. From this index, two other indices have been derived in order to discriminate whether the forecast has overestimated or underestimated some magnitudes in the description of the cyclone. Three forecast ranges, H+12, H+24 and H+48, have been considered to investigate temporal trends in forecast quality. Finally, to check this methodology, it has been applied to some MEDEX cases.
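An index of the kind described — a single number built from differences in the parameters describing the analysed and forecast cyclone — might be sketched as follows. The parameter names, scales and index form below are hypothetical illustrations, not the study's actual definition.

```python
import math

# Hypothetical sketch of a cyclone forecast-error index: root-mean-square
# of scaled differences in descriptive parameters (here central pressure
# and position). Signed versions of the individual terms would tell
# overestimation from underestimation. Names and scales are invented.

def cyclone_error_index(analysed, forecast, scales):
    """RMS of parameter differences, each normalised by a chosen scale."""
    terms = [((forecast[k] - analysed[k]) / scales[k]) ** 2 for k in scales]
    return math.sqrt(sum(terms) / len(terms))

analysed = {"pressure_hPa": 998.0, "lat": 40.1, "lon": 3.0}
forecast = {"pressure_hPa": 1002.0, "lat": 40.6, "lon": 2.5}
scales = {"pressure_hPa": 5.0, "lat": 1.0, "lon": 1.0}
idx = cyclone_error_index(analysed, forecast, scales)
```

Computing such an index separately at H+12, H+24 and H+48 would expose the temporal trend in forecast quality that the study investigates.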

    Data assimilation using adaptive, non-conservative, moving mesh models

    Numerical models solved on adaptive moving meshes have become increasingly prevalent in recent years. Motivating problems include the study of fluids in a Lagrangian frame and the presence of highly localized structures such as shock waves or interfaces. In the former case, Lagrangian solvers move the nodes of the mesh with the dynamical flow; in the latter, mesh resolution is increased in the proximity of the localized structure. Mesh adaptation can include remeshing, a procedure that adds or removes mesh nodes according to specific rules reflecting constraints in the numerical solver. In this case, the number of mesh nodes will change during the integration and, as a result, the dimension of the model's state vector will not be conserved. This work presents a novel approach to the formulation of ensemble data assimilation (DA) for models with this underlying computational structure. The challenge lies in the fact that remeshing entails a different state space dimension across members of the ensemble, thus impeding the usual computation of consistent ensemble-based statistics. Our methodology adds one forward and one backward mapping step before and after the ensemble Kalman filter (EnKF) analysis, respectively. This mapping takes all the ensemble members onto a fixed, uniform reference mesh where the EnKF analysis can be performed. We consider a high-resolution (HR) and a low-resolution (LR) fixed uniform reference mesh, whose resolutions are determined by the remeshing tolerances. This way the reference meshes embed the model numerical constraints and are also upper and lower uniform meshes bounding the resolutions of the individual ensemble meshes. Numerical experiments are carried out using 1-D prototypical models: Burgers and Kuramoto-Sivashinsky equations and both Eulerian and Lagrangian synthetic observations. 
    While the HR strategy generally outperforms the LR one, their skill difference can be reduced substantially by optimal tuning of the data assimilation parameters. The LR case is appealing in high dimensions because of its lower computational burden. Lagrangian observations prove very effective, in that fewer of them suffice to keep the analysis error at a level comparable to that obtained with the more numerous Eulerian observations. This study is motivated by the development of suitable EnKF strategies for 2-D models of sea ice that are numerically solved on a Lagrangian mesh with remeshing.
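The mapping step at the heart of this formulation — bringing ensemble members that live on different meshes onto one fixed reference mesh so that consistent ensemble statistics can be formed — can be illustrated in 1-D with linear interpolation. The example below is an invented sketch (meshes, states and the choice of interpolant are illustrative), showing only the forward and backward mappings around where the EnKF analysis would sit.

```python
import numpy as np

# Sketch of the forward/backward mapping: two members with different node
# counts (as left by remeshing) are interpolated onto a fixed uniform
# reference mesh, where every member has the same dimension and a standard
# EnKF analysis can be formed; each result is then mapped back to the
# member's own mesh. All meshes and state values are invented.

ref_mesh = np.linspace(0.0, 1.0, 11)             # fixed reference mesh

meshes = [np.linspace(0.0, 1.0, 8), np.linspace(0.0, 1.0, 13)]
states = [np.sin(np.pi * m) for m in meshes]     # member states on own mesh

# Forward mapping: every member onto the reference mesh (same dimension).
E = np.stack([np.interp(ref_mesh, m, s) for m, s in zip(meshes, states)])

mean = E.mean(axis=0)                            # consistent statistics
# ... the EnKF analysis would update E here; then the backward mapping
# returns each (analysed) member to its own mesh:
analyses = [np.interp(m, ref_mesh, mean) for m in meshes]
```

Choosing the reference resolution from the remeshing tolerances, as the abstract describes for the HR and LR variants, makes the reference mesh respect the same numerical constraints as the model meshes it bounds.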