
    Statistical emulation of a tsunami model for sensitivity analysis and uncertainty quantification

    Due to the catastrophic consequences of tsunamis, early warnings need to be issued quickly in order to mitigate the hazard. Additionally, there is a need to represent the uncertainty in the predictions of tsunami characteristics arising from the uncertain trigger features (e.g. the position, shape and speed of a landslide, or the sea floor deformation associated with an earthquake). Unfortunately, computer models are expensive to run. This leads to significant delays in predictions and makes the uncertainty quantification impractical. Statistical emulators run almost instantaneously and can represent the outputs of the computer model well. In this paper, we use the Outer Product Emulator to build a fast statistical surrogate of a landslide-generated tsunami computer model. This Bayesian framework enables us to build the emulator by combining prior knowledge of the computer model's properties with a few carefully chosen model evaluations. The good performance of the emulator is validated using the Leave-One-Out method.
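    The paper's surrogate is the Outer Product Emulator; as a rough illustration of the validation step only, the sketch below fits a generic Gaussian process emulator with scikit-learn and checks it by Leave-One-Out cross-validation. The toy `wave_height` function and its two inputs are hypothetical stand-ins for the landslide-tsunami simulator and its trigger features, not the model used in the paper.

```python
# Minimal sketch: Leave-One-Out validation of a generic GP emulator.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)

def wave_height(x):
    # Hypothetical cheap stand-in for the expensive tsunami simulator:
    # x = (landslide position, landslide speed), output = peak wave height.
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

X = rng.uniform(0, 1, size=(30, 2))    # 30 carefully chosen design points
y = wave_height(X)

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X[train_idx], y[train_idx])
    mean, sd = gp.predict(X[test_idx], return_std=True)
    # Standardised prediction error; most should lie within +/- 2.
    errors.append((y[test_idx][0] - mean[0]) / sd[0])

print("fraction of standardised LOO errors within 2 sd:",
      np.mean(np.abs(errors) < 2))
```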

    Global sensitivity analysis of the climate–vegetation system to astronomical forcing: an emulator-based approach

    A global sensitivity analysis is performed to describe the effects of astronomical forcing on the climate–vegetation system simulated by the model of intermediate complexity LOVECLIM in interglacial conditions. The methodology relies on the estimation of sensitivity measures, using a Gaussian process emulator as a fast surrogate of the climate model, calibrated on a set of well-chosen experiments. The outputs considered are the annual mean temperature and precipitation and the growing degree days (GDD). The experiments were run on two distinct land surface schemes to estimate the importance of vegetation feedbacks on climate variance. This analysis provides a spatial description of the variance due to the factors and their combinations, in the form of "fingerprints" obtained from the covariance indices. The results are broadly consistent with the current understanding of Earth's climate response to the astronomical forcing. In particular, precession and obliquity are found to contribute equally to GDD in the Northern Hemisphere in LOVECLIM, and the effect of obliquity on the response of Southern Hemisphere temperature dominates precession effects. Precession dominates precipitation changes in subtropical areas. Compared to standard approaches based on a small number of simulations, the methodology presented here allows us to identify more systematically the regions susceptible to experiencing rapid climate change in response to the smooth astronomical forcing change. In particular, we find that using interactive vegetation significantly enhances the expected rates of climate change, specifically in the Sahel (up to 50% precipitation change in 1000 years) and in the Canadian Arctic region (up to 3° in 1000 years). None of the tested astronomical configurations were found to induce multiple steady states, but, at low obliquity, we observed the development of an oscillatory pattern that has already been reported in LOVECLIM. Although the mathematics of the analysis are fairly straightforward, the emulation approach still requires considerable care in its implementation. We discuss the effect of the choice of length scales and the type of emulator, and estimate uncertainties associated with specific computational aspects, to conclude that the principal component emulator is a good option for this kind of application.
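    As a rough sketch of the emulator-based sensitivity workflow described above, the example below trains a Gaussian process surrogate on a small design and then evaluates a large Saltelli sample on the surrogate to estimate first-order Sobol indices with SALib. The two inputs (obliquity, precession), their ranges, and the toy response are illustrative placeholders rather than LOVECLIM quantities, and SALib/scikit-learn are assumed tools rather than those used in the study.

```python
# Minimal sketch: Sobol indices estimated on a cheap GP surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {"num_vars": 2,
           "names": ["obliquity", "precession"],
           "bounds": [[22.0, 25.0], [0.0, 360.0]]}

rng = np.random.default_rng(1)

def climate_response(x):
    # Hypothetical stand-in for an expensive climate-model output (e.g. GDD).
    return (2.0 * x[:, 0] + np.sin(np.radians(x[:, 1]))
            + 0.1 * x[:, 0] * np.cos(np.radians(x[:, 1])))

# Small training design on which the emulator is calibrated.
X_train = rng.uniform([22.0, 0.0], [25.0, 360.0], size=(40, 2))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 90.0]),
                              normalize_y=True)
gp.fit(X_train, climate_response(X_train))

# Large Saltelli sample evaluated on the emulator, not the simulator.
X_big = saltelli.sample(problem, 1024)
Si = sobol.analyze(problem, gp.predict(X_big))
print(dict(zip(problem["names"], Si["S1"])))   # first-order indices
```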

    Calibrating an ice sheet model using high-dimensional binary spatial data

    Rapid retreat of ice in the Amundsen Sea sector of West Antarctica may cause drastic sea level rise, posing significant risks to populations in low-lying coastal regions. Calibration of computer models representing the behavior of the West Antarctic Ice Sheet is key for informative projections of future sea level rise. However, both the relevant observations and the model output are high-dimensional binary spatial data; existing computer model calibration methods are unable to handle such data. Here we present a novel calibration method for computer models whose output is in the form of binary spatial data. To mitigate the computational and inferential challenges posed by our approach, we apply a generalized principal-component-based dimension reduction method. To demonstrate the utility of our method, we calibrate the PSU3D-ICE model by comparing the output from a 499-member perturbed-parameter ensemble with observations from the Amundsen Sea sector of the ice sheet. Our method rigorously characterizes parameter uncertainty even in the presence of systematic data-model discrepancies and dependence in the errors, and it helps inform environmental risk analyses by contributing to improved projections of sea level rise from the ice sheets.
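    The generalized principal-component calibration itself is beyond a short example, but the sketch below illustrates the underlying idea of scoring binary spatial model output against a binary observation with a Bernoulli log-likelihood and weighting ensemble members accordingly. All quantities (the one-parameter ensemble, grid, and observation) are synthetic placeholders, and the paper's dimension-reduction and discrepancy modelling are omitted.

```python
# Minimal sketch: likelihood weighting of an ensemble with binary spatial output.
import numpy as np

rng = np.random.default_rng(2)
n_grid, n_members = 500, 50                  # grid cells, ensemble size

theta = rng.uniform(0.0, 1.0, n_members)     # one uncertain model parameter
grid = np.linspace(0, 1, n_grid)
# Each member's probability of "ice present" in each cell depends on theta.
probs = 1.0 / (1.0 + np.exp(-(theta[:, None] - grid[None, :]) * 10))
ensemble = rng.random((n_members, n_grid)) < probs     # binary model output

# Synthetic binary observation generated with a "true" parameter near 0.6.
obs = rng.random(n_grid) < 1.0 / (1.0 + np.exp(-(0.6 - grid) * 10))

# Bernoulli log-likelihood of the observation under each member, with a small
# epsilon so deterministic 0/1 fields do not give -inf.
eps = 1e-3
p = np.clip(ensemble.astype(float), eps, 1 - eps)
loglik = (obs * np.log(p) + (1 - obs) * np.log(1 - p)).sum(axis=1)

weights = np.exp(loglik - loglik.max())
weights /= weights.sum()
print("posterior mean of theta:", np.sum(weights * theta))   # should be near 0.6
```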

    Comparison of surrogate-based uncertainty quantification methods for computationally expensive simulators

    Polynomial chaos and Gaussian process emulation are methods for surrogate-based uncertainty quantification, and have been developed independently in their respective communities over the last 25 years. Despite tackling similar problems in the field, to our knowledge there has yet to be a critical comparison of the two approaches in the literature. We begin by providing a detailed description of polynomial chaos and Gaussian process approaches for building a surrogate model of a black-box function. The accuracy of each surrogate method is then tested and compared for two simulators used in industry: a land-surface model (adJULES) and a launch vehicle controller (VEGACONTROL). We analyse surrogates built on experimental designs of various size and type to investigate their performance in a range of modelling scenarios. Specifically, polynomial chaos and Gaussian process surrogates are built on Sobol sequence and tensor grid designs. Their accuracy is measured by their ability to estimate the mean, standard deviation, exceedance probabilities and probability density function of the simulator output, as well as by a root mean square error metric, based on an independent validation design. We find that neither method unanimously outperforms the other, but advantages can be gained in some cases, such that the preferred method depends on the modelling goals of the practitioner. Our conclusions are likely to depend somewhat on the modelling choices for the surrogates as well as the design strategy. We hope that this work will spark future comparisons of the two methods in their more advanced formulations and for different sampling strategies.
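    A minimal sketch of such a comparison framework follows: two surrogates are fitted on the same Sobol-sequence design and scored by root-mean-square error on an independent validation set. An ordinary polynomial regression stands in for a full polynomial chaos expansion, the Gaussian process uses scikit-learn defaults, and the 2-D test function is a synthetic placeholder for adJULES and VEGACONTROL.

```python
# Minimal sketch: polynomial vs. GP surrogate compared by validation RMSE.
import numpy as np
from scipy.stats import qmc
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    # Cheap stand-in for an expensive black-box simulator.
    return np.sin(2 * np.pi * x[:, 0]) * np.exp(x[:, 1])

# Training design: 64-point Sobol sequence; validation: independent sample.
X_train = qmc.Sobol(d=2, scramble=True, seed=0).random_base2(m=6)
X_valid = qmc.Sobol(d=2, scramble=True, seed=1).random(200)
y_train, y_valid = simulator(X_train), simulator(X_valid)

poly = make_pipeline(PolynomialFeatures(degree=4),
                     LinearRegression()).fit(X_train, y_train)
gp = GaussianProcessRegressor(kernel=RBF([0.3, 0.3]),
                              normalize_y=True).fit(X_train, y_train)

for name, model in [("polynomial", poly), ("gaussian process", gp)]:
    rmse = np.sqrt(np.mean((model.predict(X_valid) - y_valid) ** 2))
    print(f"{name} surrogate RMSE: {rmse:.4f}")
```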

    Tuning without over-tuning: parametric uncertainty quantification for the NEMO ocean model

    In this paper we discuss climate model tuning and present an iterative automatic tuning method from the statistical science literature. The method, which we refer to here as iterative refocussing (though it is also known as history matching), avoids many of the common pitfalls of automatic tuning procedures that are based on optimisation of a cost function, principally the over-tuning of a climate model due to using only partial observations. This avoidance comes by seeking to rule out parameter choices that we are confident could not reproduce the observations, rather than seeking the model that is closest to them (a procedure that risks over-tuning). We comment on the state of climate model tuning and illustrate our approach through three waves of iterative refocussing of the NEMO ORCA2 global ocean model run at 2° resolution. We show how at certain depths the anomalies of global mean temperature and salinity in a standard configuration of the model exceed 10 standard deviations away from observations, and show the extent to which this can be alleviated by iterative refocussing without compromising model performance spatially. We show how model improvements can be achieved by simultaneously perturbing multiple parameters, and illustrate the potential of using low resolution ensembles to tune NEMO ORCA configurations at higher resolutions.
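    The sketch below illustrates one wave of iterative refocussing in its simplest form: an emulator predicts a single model output across parameter space, and any parameter setting whose implausibility exceeds the customary threshold of 3 is ruled out. The toy "ocean metric", observation, and variance terms are hypothetical placeholders for the NEMO quantities; the real procedure works with many outputs over several waves.

```python
# Minimal sketch: one history-matching wave using an implausibility cutoff.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def ocean_metric(x):
    # Cheap stand-in for e.g. global mean temperature at some depth.
    return 3.0 * x[:, 0] ** 2 + np.cos(4 * x[:, 1])

X_design = rng.uniform(0, 1, size=(40, 2))          # wave-1 simulator runs
gp = GaussianProcessRegressor(kernel=RBF([0.3, 0.3]), normalize_y=True)
gp.fit(X_design, ocean_metric(X_design))

obs, obs_var, discrepancy_var = 2.0, 0.05, 0.1      # observation + uncertainties

candidates = rng.uniform(0, 1, size=(100_000, 2))   # dense scan of parameter space
mean, sd = gp.predict(candidates, return_std=True)
implausibility = np.abs(obs - mean) / np.sqrt(sd ** 2 + obs_var + discrepancy_var)

not_ruled_out = candidates[implausibility < 3.0]    # the usual 3-sigma cutoff
print(f"{len(not_ruled_out) / len(candidates):.1%} of parameter space retained "
      "for the next wave of runs")
```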

    Developing Efficient Strategies For Global Sensitivity Analysis Of Complex Environmental Systems Models

    Complex Environmental Systems Models (CESMs) have been developed and applied as vital tools to tackle the ecological, water, food, and energy crises that humanity faces, and have been used widely to support decision-making about management of the quality and quantity of Earth's resources. CESMs are often controlled by many interacting and uncertain parameters, and typically integrate data from multiple sources at different spatio-temporal scales, which makes them highly complex. Global Sensitivity Analysis (GSA) techniques have proven promising for deepening our understanding of model complexity and the interactions between various parameters, and for providing helpful recommendations for further model development and data acquisition. Aside from the complexity issue, the computationally expensive nature of CESMs precludes effective application of existing GSA techniques to quantifying the global influence of each parameter on the variability of CESM outputs, because a comprehensive sensitivity analysis often requires a very large number of model runs. Therefore, there is a need to break down this barrier through the development of more efficient strategies for sensitivity analysis. The research undertaken in this dissertation focuses on alleviating the computational burden associated with GSA of computationally expensive CESMs by developing efficiency-increasing strategies for robust sensitivity analysis. This is accomplished by: (1) proposing an efficient sequential sampling strategy for robust sampling-based analysis of CESMs; (2) developing an automated parameter grouping strategy for high-dimensional CESMs; (3) introducing a new robustness measure for convergence assessment of GSA methods; and (4) investigating time-saving strategies for handling simulation failures/crashes during the sensitivity analysis of computationally expensive CESMs. This dissertation provides a set of innovative numerical techniques that can be used in conjunction with any GSA algorithm and integrated into model building and systems analysis procedures in any field where models are used. A range of analytical test functions and environmental models with varying complexity and dimensionality are used throughout this research to test the performance of the proposed methods. These methods, which are embedded in the VARS-TOOL software package, can also provide information useful for diagnostic testing, parameter identifiability analysis, model simplification, model calibration, and experimental design. They can be further applied to address a range of decision-making problems, such as characterizing the main causes of risk in the context of probabilistic risk assessment and exploring CESMs' sensitivity to a wide range of plausible future changes (e.g., hydrometeorological conditions) in the context of scenario analysis.
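    As a generic illustration of one ingredient of such strategies, the sketch below monitors the convergence of a sensitivity estimate by recomputing it on growing sample sizes and bootstrapping its spread. The toy model, the crude correlation-based sensitivity proxy, and the sample sizes are illustrative assumptions; this is not the robustness measure or the VARS-TOOL algorithms developed in the dissertation.

```python
# Minimal sketch: bootstrap-based convergence check of a sensitivity estimate.
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    # Hypothetical cheap stand-in for a CESM; two inputs, one output.
    return x[:, 0] ** 2 + 0.3 * x[:, 1]

def sensitivity_proxy(X, y, column):
    # Crude first-order sensitivity proxy: squared correlation of input with output.
    return np.corrcoef(X[:, column], y)[0, 1] ** 2

for n in (100, 400, 1600, 6400):
    X = rng.uniform(0, 1, size=(n, 2))
    y = model(X)
    # Bootstrap the estimate to judge its robustness at this sample size.
    boot = []
    for _ in range(200):
        idx = rng.integers(0, n, n)
        boot.append(sensitivity_proxy(X[idx], y[idx], column=0))
    print(f"n={n:5d}  S1(x1) = {np.mean(boot):.3f} +/- {np.std(boot):.3f}")
```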