The transformation of earth-system observations into information of socio-economic value in GEOSS
The Group on Earth Observations System of Systems, GEOSS, is a co-ordinated initiative by many nations to address the needs for earth-system information expressed by the 2002 World Summit on Sustainable Development. We discuss the role of earth-system modelling and data assimilation in transforming earth-system observations into the predictive and status-assessment products required by GEOSS, across many areas of socio-economic interest. First we review recent gains in the predictive skill of operational global earth-system models, on time-scales of days to several seasons. We then discuss recent work to develop from the global predictions a diverse set of end-user applications which can meet GEOSS requirements for information of socio-economic benefit; examples include forecasts of coastal storm surges, floods in large river basins, seasonal crop yield forecasts and seasonal lead-time alerts for malaria epidemics. We note ongoing efforts to extend operational earth-system modelling and assimilation capabilities to atmospheric composition, in support of improved services for air-quality forecasts and for treaty assessment. We next sketch likely GEOSS observational requirements in the coming decades. In concluding, we reflect on the cost of earth observations relative to the modest cost of transforming the observations into information of socio-economic value.
A 4D-Var Method with Flow-Dependent Background Covariances for the Shallow-Water Equations
The 4D-Var method for filtering partially observed nonlinear chaotic
dynamical systems consists of finding the maximum a-posteriori (MAP) estimator
of the initial condition of the system given observations over a time window,
and propagating it forward to the current time via the model dynamics. This
method forms the basis of most currently operational weather forecasting
systems. In practice the optimization becomes infeasible if the time window is
too long due to the non-convexity of the cost function, the effect of model
errors, and the limited precision of the ODE solvers. Hence the window has to
be kept sufficiently short, and the observations in the previous windows can be
taken into account via a Gaussian background (prior) distribution. The choice
of the background covariance matrix is an important question that has received
much attention in the literature. In this paper, we define the background
covariances in a principled manner, based on observations in the previous
assimilation windows, for a parameter . The method is at most times
more computationally expensive than using fixed background covariances,
requires little tuning, and greatly improves the accuracy of 4D-Var. As a
concrete example, we focus on the shallow-water equations. The proposed method
is compared against state-of-the-art approaches in data assimilation and is
shown to perform favourably on simulated data. We also illustrate our approach
on data from the recent tsunami of 2011 in Fukushima, Japan.
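The 4D-Var idea described above, finding the MAP estimate of the initial condition by minimizing a background-plus-observation cost over a window and propagating it forward, can be sketched in a few lines. Everything below (a linear two-variable toy model, identity observation operator, the chosen covariances) is an illustrative assumption, not the paper's shallow-water configuration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy strong-constraint 4D-Var: linear model, identity observation operator.
# All matrices and data here are illustrative assumptions.
rng = np.random.default_rng(0)
n, K = 2, 5                                   # state dimension, window length
M = np.array([[1.0, 0.1], [0.0, 1.0]])        # model propagator x_{k+1} = M x_k
B = np.eye(n)                                 # background (prior) covariance
R = 0.1 * np.eye(n)                           # observation-error covariance

x_true = np.array([1.0, -0.5])
xb = x_true + rng.normal(0.0, 1.0, n)         # background estimate of x_0

# Synthetic observations over the assimilation window
x, obs = x_true.copy(), []
for _ in range(K):
    x = M @ x
    obs.append(x + rng.normal(0.0, np.sqrt(0.1), n))

def cost(x0):
    """4D-Var cost: background misfit plus observation misfits over the window."""
    J = 0.5 * (x0 - xb) @ np.linalg.solve(B, x0 - xb)
    xk = np.asarray(x0)
    for y in obs:
        xk = M @ xk
        d = y - xk
        J += 0.5 * d @ np.linalg.solve(R, d)
    return J

x_map = minimize(cost, xb).x                      # MAP initial condition
analysis = np.linalg.matrix_power(M, K) @ x_map   # propagate to current time
```

The background term is where the choice of B discussed in the abstract enters: a flow-dependent B reweights how strongly the analysis is pulled back toward the prior.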
Data Assimilation by Artificial Neural Networks for an Atmospheric General Circulation Model: Conventional Observation
This paper presents an approach for employing artificial neural networks (NN)
to emulate an ensemble Kalman filter (EnKF) as a method of data assimilation.
The assimilation methods are tested in the Simplified Parameterizations
PrimitivE-Equation Dynamics (SPEEDY) model, an atmospheric general circulation
model (AGCM), using synthetic observational data simulating localization of
balloon soundings. For the data assimilation scheme, a supervised NN, the multilayer perceptron (MLP-NN), is applied; the MLP-NN is able to emulate the analysis from the local ensemble transform Kalman filter (LETKF). After the training process, the MLP-NN acts as a surrogate for the data assimilation step. The NN were trained with data from the first three months of 1982, 1983, and 1984. A hindcasting experiment for the 1985 data assimilation cycle using the MLP-NN was performed with synthetic observations for January 1985. The numerical results demonstrate the effectiveness of the NN technique for atmospheric data assimilation: the NN analyses are very close to the LETKF analyses, with differences in the monthly average of the absolute temperature analyses of order 0.02. The simulations show that the major advantage of the MLP-NN is better computational performance at similar analysis quality: the assimilation cycle with the MLP-NN is 90 times faster in CPU time than the cycle with the LETKF for this numerical experiment.
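The emulation idea can be sketched minimally: train a network on (background, observation) pairs to reproduce the analysis produced by a reference assimilation scheme, then use the network as a cheap surrogate. Here a scalar Kalman update stands in for the LETKF and scikit-learn's MLPRegressor for the paper's MLP; both are assumptions for illustration, not the SPEEDY/LETKF setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Scalar Kalman update standing in for the LETKF analysis (an illustrative
# assumption): x_a = x_b + K (y - x_b), with gain K from the two variances.
rng = np.random.default_rng(1)
sigma_b2, sigma_o2 = 1.0, 0.5
gain = sigma_b2 / (sigma_b2 + sigma_o2)

xb = rng.normal(0.0, 2.0, 5000)                      # background states
y = xb + rng.normal(0.0, np.sqrt(sigma_o2), 5000)    # synthetic observations
xa = xb + gain * (y - xb)                            # reference analyses (targets)

# Train an MLP to map (background, observation) -> analysis.
X = np.column_stack([xb, y])
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X, xa)

# After training, the network replaces the analysis step at prediction time.
pred = mlp.predict(np.array([[1.0, 1.5]]))[0]
exact = 1.0 + gain * (1.5 - 1.0)
```

The speed-up the abstract reports comes from exactly this substitution: a forward pass through a trained network is far cheaper than running an ensemble filter at every cycle.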
Snow model verification using ensemble prediction and operational benchmarks
Hydrologic model evaluations have traditionally focused on measuring how closely the model can simulate various characteristics of historical observations. Although advancing hydrologic forecasting is an often-stated goal of numerous modeling studies, testing in a forecasting mode is seldom undertaken, limiting information derived from these analyses. One can overcome this limitation through generation, and subsequent analysis, of ensemble hindcasts. In this study, long-range ensemble hindcasts are generated for the available period of record for a basin in southwestern Idaho for the purpose of evaluating the Snow-Atmosphere-Soil Transfer (SAST) model against the current operational benchmark, the National Weather Service's (NWS) snow accumulation and ablation model SNOW17. Both snow models were coupled with the NWS operational rainfall runoff model and ensembles of seasonal discharge and weekly snow water equivalent (SWE) were evaluated. Ensemble predictions from both the SAST and SNOW17 models were better than climatology forecasts, for the period studied. In most cases, the accuracy of the SAST-generated predictions was similar to the SNOW17-generated predictions, except during periods of significant melting. Differences in model performance are partially attributed to initial condition errors. After updating the SWE state in the snow models with the observed SWE, the forecasts were improved during the first 2-4 weeks of the forecast window and the skills were essentially equal in both forecasting systems for the study watershed. Climate dominated the forecast uncertainty in the latter part of the forecast window while initial conditions controlled the forecast skill in the first 3-4 weeks of the forecast. The use of hindcasting in the snow model analysis revealed that, given the dominance of the initial conditions on forecast skill, streamflow predictions will be most improved through the use of state updating. © 2008 American Meteorological Society
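The benchmark comparison above reduces to measuring forecast skill against climatology. A minimal sketch with made-up numbers, using the standard MSE-based skill score (the study itself evaluates full ensembles, not single deterministic forecasts):

```python
import numpy as np

# Hypothetical numbers: skill of hindcasts relative to a climatology benchmark,
# with the skill score SS = 1 - MSE_forecast / MSE_climatology.
obs = np.array([310.0, 250.0, 420.0, 180.0, 390.0])   # e.g. seasonal discharge
fcst = np.array([300.0, 270.0, 400.0, 200.0, 370.0])  # model hindcasts
clim = np.full_like(obs, obs.mean())                  # climatology forecast

mse_f = np.mean((fcst - obs) ** 2)
mse_c = np.mean((clim - obs) ** 2)
skill = 1.0 - mse_f / mse_c   # SS > 0: the model beats climatology
```

A score above zero means the model adds value over simply forecasting the long-term mean, which is the sense in which both SAST and SNOW17 "were better than climatology" in the abstract.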
DADA: data assimilation for the detection and attribution of weather and climate-related events
A new nudging method for data assimilation, delay-coordinate nudging, is presented. Delay-coordinate nudging makes explicit use of present and past observations in the formulation of the forcing driving the model evolution at each time step. Numerical experiments with a low-order chaotic system show that the new method systematically outperforms standard nudging in different model and observational scenarios, also when using an unoptimized formulation of the delay-nudging coefficients. A connection between the optimal delay and the dominant Lyapunov exponent of the dynamics is found based on heuristic arguments and is confirmed by the numerical results, providing a guideline for the practical implementation of the algorithm. Delay-coordinate nudging preserves the ease of implementation, the intuitive functioning and the reduced computational cost of standard nudging, making it a potential alternative especially in the field of seasonal-to-decadal predictions with large Earth system models that limit the use of more sophisticated data assimilation procedures.
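Standard nudging, the baseline the abstract compares against, relaxes the model state toward current observations at each step. A minimal sketch on the Lorenz-63 system (gain, noise level and initial conditions are illustrative choices; the delay-coordinate variant would additionally feed past observations into the forcing):

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz-63 tendencies, a standard low-order chaotic test system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run(s0, obs, k, dt=0.01):
    """Euler integration with a relaxation (nudging) term on the observed x."""
    s = s0.copy()
    for yo in obs:
        forcing = np.array([k * (yo - s[0]), 0.0, 0.0])
        s = s + dt * (lorenz(s) + forcing)
    return s

rng = np.random.default_rng(2)
dt, steps = 0.01, 1000
truth0 = np.array([1.0, 1.0, 20.0])

# Generate a truth trajectory and noisy observations of x only.
t = truth0.copy()
obs = []
for _ in range(steps):
    t = t + dt * lorenz(t)
    obs.append(t[0] + rng.normal(0.0, 0.1))

wrong = truth0 + np.array([5.0, -5.0, 5.0])   # perturbed initial condition
free = run(wrong, obs, k=0.0, dt=dt)          # no assimilation
nudged = run(wrong, obs, k=20.0, dt=dt)       # standard nudging

err_free = np.linalg.norm(free - t)
err_nudged = np.linalg.norm(nudged - t)
```

Because the free run from a perturbed initial condition diverges on this chaotic system while the nudged run is continually pulled back toward the observed component, the nudged final-state error should be the smaller of the two.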
Comment on "Can assimilation of crowdsourced data in hydrological modelling improve flood prediction?" by Mazzoleni et al. (2017)
Citizen science and crowdsourcing are gaining increasing attention among hydrologists. In a recent contribution, Mazzoleni et al. (2017) investigated the integration of crowdsourced data (CSD) into hydrological models to improve the accuracy of real-time flood forecasts. The authors used synthetic CSD (i.e. not actually measured), because real CSD were not available at the time of the study. In their proof-of-concept study, Mazzoleni et al. (2017) showed that assimilation of CSD improves the overall model performance; the impacts of irregular frequency of available CSD and of data uncertainty were also assessed in depth. However, the use of synthetic CSD in conjunction with (semi-)distributed hydrological models deserves further discussion. As a result of equifinality, poor model identifiability, and deficiencies in model structure, the internal states of (semi-)distributed models can hardly mimic the actual states of complex systems away from calibration points. Accordingly, the use of synthetic CSD drawn from model internal states under best-fit conditions can lead to overestimation of the effectiveness of CSD assimilation in improving flood prediction. Operational flood forecasting, which results in decisions of high societal value, requires robust knowledge of the model behaviour and an in-depth assessment of both model structure and forcing data. Additional guidelines are given that are useful for the a priori evaluation of CSD for real-time flood forecasting and, hopefully, for planning apt design strategies for both model calibration and collection of CSD.