
    The transformation of earth-system observations into information of socio-economic value in GEOSS

    The Group on Earth Observations System of Systems, GEOSS, is a co-ordinated initiative by many nations to address the needs for earth-system information expressed by the 2002 World Summit on Sustainable Development. We discuss the role of earth-system modelling and data assimilation in transforming earth-system observations into the predictive and status-assessment products required by GEOSS, across many areas of socio-economic interest. First we review recent gains in the predictive skill of operational global earth-system models, on time-scales of days to several seasons. We then discuss recent work to develop from the global predictions a diverse set of end-user applications which can meet GEOSS requirements for information of socio-economic benefit; examples include forecasts of coastal storm surges, floods in large river basins, seasonal crop yield forecasts and seasonal lead-time alerts for malaria epidemics. We note ongoing efforts to extend operational earth-system modelling and assimilation capabilities to atmospheric composition, in support of improved services for air-quality forecasts and for treaty assessment. We next sketch likely GEOSS observational requirements in the coming decades. In concluding, we reflect on the cost of earth observations relative to the modest cost of transforming the observations into information of socio-economic value.

    A 4D-Var Method with Flow-Dependent Background Covariances for the Shallow-Water Equations

    The 4D-Var method for filtering partially observed nonlinear chaotic dynamical systems consists of finding the maximum a-posteriori (MAP) estimator of the initial condition of the system given observations over a time window, and propagating it forward to the current time via the model dynamics. This method forms the basis of most currently operational weather forecasting systems. In practice the optimization becomes infeasible if the time window is too long, due to the non-convexity of the cost function, the effect of model errors, and the limited precision of the ODE solvers. Hence the window has to be kept sufficiently short, and the observations in the previous windows can be taken into account via a Gaussian background (prior) distribution. The choice of the background covariance matrix is an important question that has received much attention in the literature. In this paper, we define the background covariances in a principled manner, based on observations in the previous b assimilation windows, for a parameter b ≥ 1. The method is at most b times more computationally expensive than using fixed background covariances, requires little tuning, and greatly improves the accuracy of 4D-Var. As a concrete example, we focus on the shallow-water equations. The proposed method is compared against state-of-the-art approaches in data assimilation and is shown to perform favourably on simulated data. We also illustrate our approach on data from the 2011 tsunami in Fukushima, Japan.
    Comment: 32 pages, 5 figures
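The strong-constraint 4D-Var idea described above, minimizing a background-plus-observation cost over the initial condition and propagating the minimizer forward, can be sketched on a toy two-state linear model. None of this is the paper's shallow-water setup: the rotation dynamics, the fixed covariances, and the noise levels below are illustrative choices only.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear dynamics x_{k+1} = M x_k: a rotation standing in for the model;
# H observes only the first state component (illustrative choices).
theta = 0.7
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
H = np.array([[1.0, 0.0]])
B = 0.5 * np.eye(2)                  # fixed background covariance
R = np.array([[0.01]])               # observation-error covariance

def forecast(x0, k):
    """Propagate the initial condition k steps forward with the model."""
    x = x0.copy()
    for _ in range(k):
        x = M @ x
    return x

def cost(x0, xb, ys):
    """Strong-constraint 4D-Var cost: background misfit + observation misfits."""
    d = x0 - xb
    J = 0.5 * d @ np.linalg.solve(B, d)
    for k, y in enumerate(ys):
        r = H @ forecast(x0, k) - y
        J += 0.5 * r @ np.linalg.solve(R, r)
    return J

rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.0])
# Noisy observations of the first component over an 8-step window
ys = [H @ forecast(x_true, k) + rng.normal(0.0, 0.1, 1) for k in range(8)]
xb = x_true + np.array([0.8, -0.6])  # background with unit-norm error

# MAP estimate of the initial condition over the assimilation window
x_map = minimize(cost, xb, args=(xb, ys)).x
```

A flow-dependent scheme in the spirit of the paper would replace the fixed `B` above with covariances estimated from the previous b assimilation windows rather than keep it static.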

    Data Assimilation by Artificial Neural Networks for an Atmospheric General Circulation Model: Conventional Observation

    This paper presents an approach for employing artificial neural networks (NN) to emulate an ensemble Kalman filter (EnKF) as a method of data assimilation. The assimilation methods are tested in the Simplified Parameterizations PrimitivE-Equation Dynamics (SPEEDY) model, an atmospheric general circulation model (AGCM), using synthetic observational data simulating the locations of balloon soundings. For the data assimilation scheme, a supervised NN, the multilayer perceptron (MLP-NN), is applied. The MLP-NN is able to emulate the analysis from the local ensemble transform Kalman filter (LETKF): after the training process, the MLP-NN acts as the data-assimilation function. The NN were trained with data from the first three months of 1982, 1983, and 1984. A hind-casting experiment for the 1985 data assimilation cycle using the MLP-NN was performed with synthetic observations for January 1985. The numerical results demonstrate the effectiveness of the NN technique for atmospheric data assimilation: the NN analyses are very close to the LETKF analyses, with differences in the monthly averages of the absolute temperature analyses of order 0.02. The simulations show that the major advantage of the MLP-NN is its computational performance, since the analyses are of similar quality; in this experiment, an assimilation cycle with the MLP-NN is 90 times faster in CPU time than one with the LETKF.
    Comment: 17 pages, 16 figures, Monthly Weather Review
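The idea of training a network to emulate a filter's analysis can be sketched with a scalar toy problem. This is not the paper's SPEEDY/LETKF setup: the reference "analysis" below is a simple Kalman update with a fixed gain, and the tiny numpy MLP, its size, and its training schedule are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar analysis step standing in for the LETKF: a Kalman
# update xa = xb + K (y - xb) with fixed gain K. The MLP learns this map.
Bvar, Rvar = 1.0, 0.25
K = Bvar / (Bvar + Rvar)

n = 512
xb = rng.normal(0.0, 0.5, n)            # background states
y = xb + rng.normal(0.0, 0.25, n)       # noisy observations
xa = xb + K * (y - xb)                  # reference analyses (training labels)

X = np.column_stack([xb, y])            # network input: (background, observation)
t = xa.reshape(-1, 1)

# One-hidden-layer tanh MLP trained by full-batch gradient descent
h, lr = 8, 0.05
W1 = rng.normal(0, 0.5, (2, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, (h, 1)); b2 = np.zeros(1)
for _ in range(3000):
    a1 = np.tanh(X @ W1 + b1)           # hidden activations
    out = a1 @ W2 + b2                  # network's analysis estimate
    e = out - t
    # Backpropagation for the mean-squared-error loss
    gW2 = a1.T @ e / n; gb2 = e.mean(0)
    da1 = (e @ W2.T) * (1 - a1 ** 2)
    gW1 = X.T @ da1 / n; gb1 = da1.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

xa_nn = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
rmse = np.sqrt(np.mean((xa_nn - xa) ** 2))
```

Once trained, the network maps (background, observation) pairs straight to an analysis in a few matrix products, which illustrates where the large CPU-time saving over an ensemble update comes from.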

    DADA: data assimilation for the detection and attribution of weather and climate-related events

    A new nudging method for data assimilation, delay-coordinate nudging, is presented. Delay-coordinate nudging makes explicit use of present and past observations in the formulation of the forcing driving the model evolution at each time step. Numerical experiments with a low-order chaotic system show that the new method systematically outperforms standard nudging in different model and observational scenarios, even when using an unoptimized formulation of the delay-nudging coefficients. A connection between the optimal delay and the dominant Lyapunov exponent of the dynamics is found based on heuristic arguments and is confirmed by the numerical results, providing a guideline for the practical implementation of the algorithm. Delay-coordinate nudging preserves the ease of implementation, the intuitive functioning and the reduced computational cost of standard nudging, making it a potential alternative especially for seasonal-to-decadal predictions with large Earth system models, where computational cost limits the use of more sophisticated data assimilation procedures.
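A minimal sketch of the scheme (not the paper's code): a Lorenz-63 truth run is observed in its first component, and the nudging forcing combines the present innovation with one delayed innovation. The delay length and the coefficients g0, g1 below are arbitrary, unoptimized choices; the paper's guideline would relate the optimal delay to the dominant Lyapunov exponent.

```python
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz-63 right-hand side, a standard low-order chaotic test system."""
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

dt, n_steps, delay = 0.01, 2000, 5      # delay measured in steps (illustrative)
g0, g1 = 10.0, 2.0                      # unoptimized nudging coefficients

rng = np.random.default_rng(2)
x_true = np.array([1.0, 1.0, 1.0])
x_mod = x_true + np.array([4.0, -3.0, 5.0])   # poorly initialized model run
err0 = np.linalg.norm(x_mod - x_true)

obs_hist, mod_hist = [], []             # past observations / model values of x[0]
for k in range(n_steps):
    obs_hist.append(x_true[0] + rng.normal(0.0, 0.1))
    mod_hist.append(x_mod[0])
    # Standard nudging term from the present innovation ...
    f = g0 * (obs_hist[-1] - x_mod[0])
    # ... plus a delay-coordinate term from a past innovation
    if k >= delay:
        f += g1 * (obs_hist[-1 - delay] - mod_hist[-1 - delay])
    x_true = x_true + dt * lorenz63(x_true)                  # truth (Euler step)
    x_mod = x_mod + dt * (lorenz63(x_mod) + np.array([f, 0.0, 0.0]))

err = np.linalg.norm(x_mod - x_true)    # far below the initial error once synced
```

The per-step cost is just one extra forcing evaluation over a free model run, which is the low-overhead property that makes nudging-type schemes attractive for large Earth system models.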

    Comment on “Can assimilation of crowdsourced data in hydrological modelling improve flood prediction?” by Mazzoleni et al. (2017)

    Citizen science and crowdsourcing are gaining increasing attention among hydrologists. In a recent contribution, Mazzoleni et al. (2017) investigated the integration of crowdsourced data (CSD) into hydrological models to improve the accuracy of real-time flood forecasts. The authors used synthetic CSD (i.e. not actually measured), because real CSD were not available at the time of the study. In their work, which is a proof-of-concept study, Mazzoleni et al. (2017) showed that assimilation of CSD improves the overall model performance; the impacts of the irregular frequency of available CSD and of data uncertainty were also assessed in depth. However, the use of synthetic CSD in conjunction with (semi-)distributed hydrological models deserves further discussion. As a result of equifinality, poor model identifiability, and deficiencies in model structure, the internal states of (semi-)distributed models can hardly mimic the actual states of complex systems away from calibration points. Accordingly, the use of synthetic CSD drawn from model internal states under best-fit conditions can lead to overestimation of the effectiveness of CSD assimilation in improving flood prediction. Operational flood forecasting, which results in decisions of high societal value, requires robust knowledge of the model behaviour and an in-depth assessment of both model structure and forcing data. Additional guidelines are given that are useful for the a priori evaluation of CSD for real-time flood forecasting and, hopefully, for planning apt design strategies for both model calibration and collection of CSD.