316 research outputs found
Model error and sequential data assimilation. A deterministic formulation
Data assimilation schemes are confronted with the presence of model errors
arising from the imperfect description of atmospheric dynamics. These errors
are usually modeled on the basis of simple assumptions such as a bias, white
noise, or a first-order Markov process. In the present work, a formulation of the
sequential extended Kalman filter is proposed, based on recent findings on the
universal deterministic behavior of model errors (Nicolis, 2004), in marked
contrast with previous approaches. This new scheme is applied in the context of a
spatially distributed system proposed by Lorenz (1996). It is found that (i)
for short times, the estimation error is accurately approximated by an
evolution law in which the variance of the model error (assumed to be a
deterministic process) evolves according to a quadratic law, in agreement with
the theory. Moreover, the correlation with the initial condition error appears
to play a secondary role in the short time dynamics of the estimation error
covariance. (ii) The deterministic description of the model error evolution,
incorporated into the classical extended Kalman filter equations, reveals that
substantial improvements of the filter accuracy can be gained as compared with
the classical white noise assumption. The universal, short time, quadratic law
for the evolution of the model error covariance matrix seems very promising for
modeling estimation error dynamics in sequential data assimilation.
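The short-time quadratic law can be slotted into a standard extended Kalman filter cycle by replacing the usual constant white-noise covariance with a lead-time-dependent term. The sketch below uses a toy two-variable linear system; all operators and numbers are illustrative assumptions, not the configuration of the paper:

```python
import numpy as np

def model_error_cov(S0, tau):
    # Short-time deterministic law: the model-error covariance grows
    # quadratically with the forecast lead time tau, Q(tau) ~ tau^2 * S0,
    # instead of being a constant as under the white-noise assumption.
    return tau**2 * S0

def ekf_step(x, P, y, M, H, R, Q):
    # One forecast/analysis cycle of the (extended) Kalman filter,
    # written here with linear operators M and H for simplicity.
    x_f = M @ x                        # forecast state
    P_f = M @ P @ M.T + Q              # forecast error covariance
    S = H @ P_f @ H.T + R              # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_a = x_f + K @ (y - H @ x_f)      # analysis state
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

# Tiny illustration (all matrices are assumptions).
M = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.eye(2)
R = 0.01 * np.eye(2)
S0 = 0.05 * np.eye(2)
x, P = np.zeros(2), np.eye(2)
tau = 0.5
x_a, P_a = ekf_step(x, P, np.array([0.1, -0.2]), M, H, R,
                    model_error_cov(S0, tau))
```

The only change with respect to a classical EKF is the `model_error_cov` call: doubling the lead time quadruples the injected model-error covariance rather than doubling it.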
Controlling instabilities along a 3DVar analysis cycle by assimilating in the unstable subspace: a comparison with the EnKF
A hybrid scheme obtained by combining 3DVar with the Assimilation in the
Unstable Subspace (3DVar-AUS) is tested in a QG model, under perfect model
conditions, with a fixed observational network, with and without observational
noise. The AUS scheme, originally formulated to assimilate adaptive
observations, is used here to assimilate the fixed observations that are found
in the region of local maxima of BDAS vectors (Bred vectors subject to
assimilation), while the remaining observations are assimilated by 3DVar.
The performance of the hybrid scheme is compared with that of 3DVar and of an
EnKF. The improvement gained by 3DVar-AUS and the EnKF with respect to 3DVar
alone is similar in the present model and observational configuration, while
3DVar-AUS outperforms the EnKF during the forecast stage. The 3DVar-AUS
algorithm is easy to implement and the results obtained in the idealized
conditions of this study encourage further investigation toward an
implementation in more realistic contexts.
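The core of the AUS idea is to confine the analysis increment to a low-dimensional unstable subspace. The sketch below solves a least-squares problem in a subspace spanned by a few bred-vector-like columns; it is a minimal illustration under idealized assumptions, not the operational 3DVar-AUS implementation:

```python
import numpy as np

def aus_increment(x_b, y, H, E, R):
    # Confine the analysis increment to the subspace spanned by the
    # columns of E (e.g. bred vectors): solve the weighted least-squares
    # problem min_a || y - H @ (x_b + E @ a) ||_{R^-1} and return E @ a.
    d = y - H @ x_b                 # innovation
    G = H @ E                       # observation operator restricted to the subspace
    Rinv = np.linalg.inv(R)
    A = G.T @ Rinv @ G
    b = G.T @ Rinv @ d
    return E @ np.linalg.solve(A, b)

# Toy 4-variable state with a 2-dimensional "unstable" subspace
# (E, H, R and the innovation are illustrative assumptions).
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])
x_b = np.zeros(4)
H = np.eye(4)
R = 0.04 * np.eye(4)
y = x_b + E @ np.array([1.0, -2.0])   # innovation lying in the subspace
inc = aus_increment(x_b, y, H, E, R)
```

By construction the increment never projects outside `span(E)`: the fourth state component, which no column of `E` touches, is left unchanged regardless of the observations.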
Developing a dynamically based assimilation method for targeted and standard observations
In a recent study, a new method for assimilating observations has been proposed and applied to a small-size nonlinear model. The assimilation is obtained by confining the analysis increment to the unstable subspace of the Observation-Analysis-Forecast (OAF) cycle system, in order to systematically eliminate the dynamically unstable components, present in the forecast error, which are responsible for error growth. Based on the same ideas, applications to more complex models and to different, standard and adaptive, observation networks are in progress. Observing System Simulation Experiments (OSSE), performed with an atmospheric quasi-geostrophic model, with a restricted "land" area where vertical profiles are systematically observed, and a wider "ocean" area where a single supplementary observation is taken at each analysis time, are reviewed. The adaptive observation is assimilated either with the proposed method or, for comparison, with a 3D-Var scheme. The performance of the dynamic assimilation is very good: a reduction of the error by almost an order of magnitude is obtained in the data-void region. The same method is applied to a primitive equation ocean model, where "satellite altimetry" observations are assimilated. In this standard observational configuration, preliminary results show a less spectacular but significant improvement obtained by the introduction of the dynamical assimilation.
Estimating model evidence using data assimilation
We review the field of data assimilation (DA) from a Bayesian perspective and show that, in addition to its by now common application to state estimation, DA may be used for model selection. An important special case of the latter is the discrimination between a factual model, which corresponds, to the best of the modeller's knowledge, to the situation in the actual world in which a sequence of events has occurred, and a counterfactual model, in which a particular forcing or process might be absent or just quantitatively different from the actual world. Three different ensemble-DA methods are reviewed for this purpose: the ensemble Kalman filter (EnKF), the ensemble four-dimensional variational smoother (En-4D-Var), and the iterative ensemble Kalman smoother (IEnKS). An original contextual formulation of model evidence (CME) is introduced. It is shown how to apply these three methods to compute CME, using the approximated time-dependent probability distribution functions (pdfs) each of them provides in the process of state estimation. The theoretical formulae so derived are applied to two simplified nonlinear and chaotic models: (i) the Lorenz three-variable convection model (L63), and (ii) the Lorenz 40-variable midlatitude atmospheric dynamics model (L95). The numerical results of these three DA-based methods and those of an integration based on importance sampling are compared. It is found that better CME estimates are obtained by using DA, and the IEnKS method appears to be best among the DA methods. Differences among the performance of the three DA-based methods are discussed as a function of model properties. Finally, the methodology is implemented for parameter estimation and for event attribution.
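DA-based evidence estimates rest on the fact that the joint likelihood of the observations factorises over assimilation cycles into Gaussian innovation terms. The sketch below shows that generic decomposition, not the paper's specific CME formulae; the innovation sequences at the end are made-up numbers:

```python
import numpy as np

def innovation_log_likelihood(d, S):
    # Gaussian log-likelihood of one innovation d with covariance S:
    # log p = -0.5 * (d^T S^-1 d + log det S + k log 2*pi)
    k = len(d)
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (d @ np.linalg.solve(S, d)
                   + logdet + k * np.log(2 * np.pi))

def log_evidence(innovations, covariances):
    # Log model evidence over a DA window: the joint likelihood of the
    # observations factorises as a product over assimilation cycles,
    # i.e. a sum of per-cycle log-likelihoods.
    return sum(innovation_log_likelihood(d, S)
               for d, S in zip(innovations, covariances))

# Two hypothetical candidate models; the one whose forecasts sit closer
# to the observations produces smaller innovations.
covs = [np.eye(1), np.eye(1)]
ev_factual = log_evidence([np.array([0.1]), np.array([0.2])], covs)
ev_counterfactual = log_evidence([np.array([1.0]), np.array([2.0])], covs)
```

Under this decomposition, model comparison reduces to comparing accumulated innovation log-likelihoods: the model with consistently smaller (covariance-normalised) innovations receives the larger evidence.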
The Role of Scanning Electron Microscopy in Periodontal Research
During recent years a great amount of research has led to a better understanding of the etiology, pathogenesis and pattern of progression of periodontal diseases. Scanning electron microscopy (SEM) has contributed to this improvement, mainly with respect to the histology of periodontal tissues, the description of the morphology and distribution of bacteria on the exposed root surface, analysis of the host-parasite interactions on the gingival pocket wall, and morphological evaluation of root treatment. This review deals with all these topics. Unusual types of SEM research are also described and discussed, as are uncommon sample preparation techniques for SEM in periodontal research. SEM should find wide application in periodontal research in the near future. Cathodoluminescence, backscattered emission and immunolabelling techniques will be formidable tools in this field of dentistry.
Data assimilation as a learning tool to infer ordinary differential equation representations of dynamical models
Recent progress in machine learning has shown how to forecast and, to some extent, learn the dynamics of a model from its output, resorting in particular to neural networks and deep learning techniques. We will show how the same goal can be directly achieved using data assimilation techniques without leveraging machine learning software libraries, with a view to high-dimensional models. The dynamics of a model are learned from its observations and an ordinary differential equation (ODE) representation of this model is inferred using a recursive nonlinear regression. Because the method is embedded in a Bayesian data assimilation framework, it can learn from partial and noisy observations of a state trajectory of the physical model. Moreover, a space-wise local representation of the ODE system is introduced and is key to coping with high-dimensional models. It has recently been suggested that neural network architectures could be interpreted as dynamical systems. Reciprocally, we show that our ODE representations are reminiscent of deep learning architectures. Furthermore, numerical analysis considerations of stability shed light on the assets and limitations of the method. The method is illustrated on several chaotic discrete and continuous models of various dimensions, with or without noisy observations, with the goal of identifying or improving the model dynamics, building a surrogate or reduced model, or producing forecasts solely from observations of the physical model.
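The regression at the heart of such approaches can be illustrated in a few lines: estimate the tendency from the observed trajectory by finite differences, then regress it on a library of candidate terms. This is a simplified stand-in for the paper's recursive Bayesian scheme; the monomial library and the toy scalar model are assumptions:

```python
import numpy as np

def fit_ode_coeffs(X, dt, features):
    # Estimate dx/dt by centred finite differences and regress it on a
    # library of candidate feature functions, yielding an explicit ODE
    # representation of the observed dynamics.
    dXdt = (X[2:] - X[:-2]) / (2 * dt)
    Phi = np.column_stack([f(X[1:-1]) for f in features])
    coeffs, *_ = np.linalg.lstsq(Phi, dXdt, rcond=None)
    return coeffs

# Noise-free trajectory of dx/dt = -0.5 * x, the coefficient to recover.
dt = 0.001
t = np.arange(0.0, 5.0, dt)
x = np.exp(-0.5 * t)
coeffs = fit_ode_coeffs(x, dt, [lambda x: x, lambda x: x**2])
```

The regression should attribute the tendency almost entirely to the linear term (coefficient near -0.5) and essentially nothing to the spurious quadratic term; the Bayesian DA framing of the paper additionally handles partial and noisy observations, which this plain least-squares sketch does not.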
Full-field and anomaly initialization using a low-order climate model: a comparison and proposals for advanced formulations
Initialization techniques for seasonal-to-decadal climate predictions fall into two main categories: full-field initialization (FFI) and anomaly initialization (AI). In the FFI case the initial model state is replaced by the best available estimate of the real state. By doing so the initial error is efficiently reduced but, due to the unavoidable presence of model deficiencies, once the model is left free to run a prediction, its trajectory drifts away from the observations no matter how small the initial error is. This problem is partly overcome with AI, where the aim is to forecast future anomalies by assimilating observed anomalies on an estimate of the model climate.
The large variety of experimental setups, models and observational networks adopted worldwide make it difficult to draw firm conclusions on the respective advantages and drawbacks of FFI and AI, or to identify distinctive lines for improvement. The lack of a unified mathematical framework adds an additional difficulty toward the design of adequate initialization strategies that fit the desired forecast horizon, observational network and model at hand.
Here we compare FFI and AI using a low-order climate model of nine ordinary differential equations and use the notation and concepts of data assimilation theory to highlight their error scaling properties. This analysis suggests better performance with FFI when a good observational network is available and reveals a direct relation between its skill and the observational accuracy. The skill of AI appears, however, mostly related to the model quality, and clear increases of skill can only be expected in conjunction with model upgrades.
We have compared FFI and AI in experiments in which either the full system or the atmosphere and ocean were independently initialized. In the former case FFI shows better and longer-lasting improvements, with skillful predictions until month 30. In the initialization of single compartments, the best performance is obtained when the more stable component of the model (the ocean) is initialized, but with FFI it is possible to have some predictive skill even when the most unstable compartment (the extratropical atmosphere) is observed.
Two advanced formulations, least-square initialization (LSI) and exploring parameter uncertainty (EPU), are introduced. Using LSI the initialization makes use of model statistics to propagate information from observation locations to the entire model domain. Numerical results show that LSI improves the performance of FFI in all the situations when only a portion of the system's state is observed. EPU is an online drift correction method in which the drift caused by the parametric error is estimated using a short-time evolution law and is then removed during the forecast run. Its implementation in conjunction with FFI allows us to improve the prediction skill within the first forecast year.
Finally, the application of these results in the context of realistic climate models is discussed.
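The two initialization strategies described above reduce to a one-line difference: FFI replaces the model state with the observed state, while AI adds the observed anomaly to the model's own climatology so that the run starts close to the model attractor. A minimal sketch with made-up climatologies:

```python
import numpy as np

def full_field_init(obs):
    # FFI: the initial model state is the best available estimate of
    # the real state, i.e. the observations themselves.
    return obs.copy()

def anomaly_init(obs, obs_clim, model_clim):
    # AI: assimilate the observed anomaly onto the model climate, so the
    # initial state stays consistent with the (biased) model climatology.
    return model_clim + (obs - obs_clim)

# Illustrative numbers only: a two-variable state with a biased model.
obs = np.array([2.0, 0.5])
obs_clim = np.array([1.5, 0.0])
model_clim = np.array([1.0, -0.2])
x_ffi = full_field_init(obs)
x_ai = anomaly_init(obs, obs_clim, model_clim)
```

The trade-off discussed in the abstract is visible even here: `x_ffi` is closer to the observations but sits off the model climatology (inviting drift), while `x_ai` carries the model bias but preserves the observed anomaly.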
Improving weather and climate predictions by training of supermodels
Recent studies demonstrate that weather and climate predictions potentially improve by dynamically combining different models into a so-called "supermodel". Here, we focus on weighted supermodeling, in which the supermodel's time derivative is a weighted superposition of the time derivatives of the imperfect models. A crucial step is to train the weights of the supermodel on the basis of historical observations. Here, we apply two different training methods to a supermodel of up to four different versions of the global atmosphere-ocean-land model SPEEDO. The standard version is regarded as truth. The first training method is based on an idea called cross pollination in time (CPT), where models exchange states during the training. The second method is a synchronization-based learning rule, originally developed for parameter estimation. We demonstrate that both training methods yield climate simulations and weather predictions of superior quality compared to the individual model versions. Supermodel predictions also outperform predictions based on the commonly used multi-model ensemble (MME) mean. Furthermore, we find evidence that negative weights can improve predictions in cases where model errors do not cancel (for instance, when all models are warm with respect to the truth). In principle, the proposed training schemes are applicable to state-of-the-art models and historical observations. A prime advantage of the proposed training schemes is that, in the present context, relatively short training periods suffice to find good solutions. Additional work needs to be done to assess the limitations due to incomplete and noisy data, to combine models that are structurally different (with different resolution and state representation, for instance) and to evaluate cases for which the truth falls outside of the model class.
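The weighted-supermodel tendency itself is just a weighted combination of the member tendencies (with weights that, as noted above, may even be negative). A toy sketch with hand-chosen weights; the two scalar "models" are illustrative assumptions, not trained SPEEDO versions:

```python
def supermodel_tendency(x, tendencies, weights):
    # Weighted supermodel: the time derivative is a weighted
    # superposition of the imperfect models' time derivatives.
    return sum(w * f(x) for w, f in zip(weights, tendencies))

# Two imperfect versions of the "true" dynamics dx/dt = -x:
f1 = lambda x: -0.8 * x   # damping too weak
f2 = lambda x: -1.3 * x   # damping too strong
# Weights chosen by hand here (in the paper they are trained on
# observations): 0.6 * (-0.8) + 0.4 * (-1.3) = -1.0, the true rate.
w = [0.6, 0.4]
```

In this one-dimensional case the errors of the two members have opposite signs, so positive weights suffice; when all members err in the same direction (all too warm, say), no convex combination can reach the truth, which is why negative weights can help.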
A study on the forecast quality of Mediterranean cyclones
Paper presented at the 4th Plinius Conference on Mediterranean Storms, held 2 to 4 October 2002 in Palma de Mallorca. The main general objective of MEDEX is the improvement of knowledge and forecasting of cyclones that produce high-impact weather in the Mediterranean area. To this end, one of the intermediate goals of the project concerns the development of an objective method to evaluate the quality of cyclone forecasts. The aim of the present study is to investigate cyclone forecast errors in that area and to propose an objective methodology to quantify them. The performance of the HIRLAM(INM)-0.5 model in the forecast of cyclonic centres has been investigated, using databases of analysed and forecast cyclones for the Western Mediterranean. The "distance" between the analysed and forecast cyclone has been measured by calculating the differences in the values of the parameters chosen to describe them at sea level. Results on the characteristics of the errors are shown. An index constructed from these differences has been introduced to evaluate and quantify the model's ability to forecast cyclones. From this index, two other indexes have been derived in order to discriminate whether the forecast has overestimated or underestimated some magnitudes in the description of the cyclone. Three forecast ranges, H+12, H+24 and H+48, have been considered to investigate temporal trends in forecast quality. Finally, to check this methodology, it has been applied to some MEDEX cases.
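The abstract does not give the exact definition of the index, so the sketch below is only one plausible reading of it: a normalised distance built from the differences of the parameters describing the cyclone, plus counts of over- and underestimated magnitudes (the parameter names, scales and values are all assumptions):

```python
def cyclone_error_index(analysed, forecast, scales):
    # "Distance" between analysed and forecast cyclone, built from the
    # differences of the descriptive parameters at sea level. The
    # normalisation by per-parameter scales and the quadratic form are
    # assumptions, not the published MEDEX definition.
    diffs = [(forecast[k] - analysed[k]) / scales[k] for k in analysed]
    index = sum(d * d for d in diffs) ** 0.5
    over = sum(1 for d in diffs if d > 0)   # overestimated parameters
    under = sum(1 for d in diffs if d < 0)  # underestimated parameters
    return index, over, under

# Hypothetical analysed vs forecast cyclone descriptions.
analysed = {"central_pressure_hPa": 995.0, "radius_km": 300.0}
forecast = {"central_pressure_hPa": 998.0, "radius_km": 270.0}
scales = {"central_pressure_hPa": 5.0, "radius_km": 100.0}
idx, over, under = cyclone_error_index(analysed, forecast, scales)
```

The `over`/`under` counts play the role of the two derived indexes mentioned in the abstract, discriminating overestimated from underestimated magnitudes, while `idx` aggregates the overall forecast error.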
Tailoring data assimilation to discontinuous Galerkin models
In recent years discontinuous Galerkin (DG) methods have received increased interest from the geophysical community. In these methods the solution in each grid cell is approximated as a linear combination of basis functions. Ensemble data assimilation (DA) aims to approximate the true state by combining model outputs with observations, using error statistics estimated from an ensemble of model runs. Ensemble data assimilation in geophysical models faces several well-documented issues. In this work we exploit the expansion of the solution in DG basis functions to address some of these issues. Specifically, it is investigated whether a DA-DG combination (a) mitigates the need for observation thinning, (b) reduces errors in the field's gradients, and (c) can be used to set up scale-dependent localisation. Numerical experiments are carried out using stochastically generated ensembles of model states, with different noise properties, and with Legendre polynomials as basis functions. It is found that a strong reduction in the analysis error is achieved by using DA-DG and that the benefit increases with increasing DG order. This is especially the case when small scales dominate the background error. The DA improvement in the first derivative is, on the other hand, marginal. We attribute this to the ability of DG to fit the observations closely, which can deteriorate the estimates of the derivatives. Applying optimal localisation to the different polynomial orders, thus exploiting their different spatial length scales, is beneficial: it results in a covariance matrix closer to the true covariance than the matrix obtained using traditional optimal localisation in state space.
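The starting point of a DA-DG combination is that the state in each cell is a short vector of basis-function coefficients rather than point values, so assimilation (and localisation) can act per coefficient. A minimal sketch of the projection onto Legendre polynomials, the basis used in the experiments; the grid and test field are illustrative:

```python
import numpy as np
from numpy.polynomial import legendre

def dg_coefficients(u, x, order):
    # Project a field sampled on a uniform grid x in [-1, 1] onto the
    # Legendre polynomials P_0..P_order (the DG basis in one cell),
    # using trapezoidal quadrature and the orthogonality relation
    # <P_n, P_n> = 2 / (2n + 1).
    dx = x[1] - x[0]
    w = np.full_like(x, dx)
    w[0] = w[-1] = dx / 2.0            # trapezoidal end weights
    coeffs = []
    for n in range(order + 1):
        Pn = legendre.Legendre.basis(n)(x)
        coeffs.append((w @ (u * Pn)) * (2 * n + 1) / 2.0)
    return np.array(coeffs)

x = np.linspace(-1.0, 1.0, 2001)
u = 1.0 + 0.5 * x                      # exactly P_0 + 0.5 * P_1
c = dg_coefficients(u, x, order=3)
```

Because each coefficient `c[n]` carries a distinct spatial scale (higher `n` means finer structure), a scale-dependent localisation can assign a different localisation radius to each polynomial order, which is the idea tested in point (c) of the abstract.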