476 research outputs found

    Linked Deep Gaussian Process Emulation for Model Networks

    Modern scientific problems are often multi-disciplinary and require the integration of computer models from different disciplines, each with distinct functional complexities, programming environments, and computation times. Linked Gaussian process (LGP) emulation tackles this challenge through a divide-and-conquer strategy that integrates Gaussian process emulators of the individual computer models in a network. However, the required stationarity of the component Gaussian process emulators within the LGP framework limits its applicability in many real-world applications. In this work, we conceptualize a network of computer models as a deep Gaussian process (DGP) with partial exposure of its hidden layers. We develop an inference method for these partially exposed deep networks that retains a key strength of the LGP framework, whereby each model can be emulated separately using a DGP and then linked together. We show in both synthetic and empirical examples that our linked deep Gaussian process emulators exhibit significantly better predictive performance than standard LGP emulators in terms of accuracy and uncertainty quantification. They also outperform single DGPs fitted to the network as a whole because they are able to integrate information from the partially exposed hidden layers. Our methods are implemented in an R package, dgpsi, that is freely available on CRAN.
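    The linking idea behind this abstract can be sketched with a toy Monte Carlo version: a first emulator's predictive distribution is propagated through a second emulator by sampling the hidden layer. This is only an illustration of the concept, not the dgpsi package's API; the emulator functions, their closed forms, and the variances below are all made up, and the actual framework uses analytically tractable moments rather than sampling.

    ```python
    import math
    import random

    random.seed(0)

    # Toy "trained emulators" for two chained computer models: each returns a
    # predictive mean and variance at an input (simple closed forms stand in
    # for Gaussian process posteriors).
    def emulator_model1(x):
        return math.sin(x), 0.01       # mean, variance of model 1's output

    def emulator_model2(y):
        return y ** 2, 0.02            # mean, variance of model 2's output

    def linked_predict(x, n_samples=20000):
        """Propagate model-1 predictive uncertainty through model 2 by sampling
        the hidden layer (model 1's output) and averaging model-2 draws."""
        m1, v1 = emulator_model1(x)
        outputs = []
        for _ in range(n_samples):
            y = random.gauss(m1, math.sqrt(v1))      # sample the hidden layer
            m2, v2 = emulator_model2(y)
            outputs.append(random.gauss(m2, math.sqrt(v2)))
        mean = sum(outputs) / n_samples
        var = sum((o - mean) ** 2 for o in outputs) / n_samples
        return mean, var

    mean, var = linked_predict(1.0)
    ```

    The linked predictive mean exceeds sin(1)² by the hidden layer's variance, which is exactly the extra information a naive plug-in composition of the two emulators would miss.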

    Deep Gaussian Process Emulation using Stochastic Imputation

    We propose a novel deep Gaussian process (DGP) inference method for computer model emulation using stochastic imputation. By stochastically imputing the latent layers, the approach transforms the DGP into the linked GP, a state-of-the-art surrogate model formed by linking a system of feed-forward coupled GPs. This transformation yields a simple yet efficient DGP training procedure that involves only optimizations of conventional stationary GPs. In addition, the analytically tractable mean and variance of the linked GP allow one to implement predictions from DGP emulators in a fast and accurate manner. We demonstrate the method in a series of synthetic examples and real-world applications, and show that it is a competitive candidate for efficient DGP surrogate modeling in comparison to variational inference and fully Bayesian approaches. A Python package, dgpsi, implementing the method is also produced and available at https://github.com/mingdeyu/DGP.
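    The prediction step after stochastic imputation can be illustrated with the law of total variance: each imputation of the latent layers yields one linked-GP predictive mean and variance, and these are pooled across imputations. The numbers below are invented purely to show the combination rule; the real method obtains each pair from a trained linked GP.

    ```python
    # Toy combination of predictions across stochastic imputations of a DGP's
    # latent layers (one predictive mean/variance pair per imputation; the
    # values here are made up for illustration).
    imputation_means = [1.10, 0.95, 1.05, 0.98]     # per-imputation predictive means
    imputation_vars = [0.040, 0.050, 0.045, 0.048]  # per-imputation predictive variances

    n = len(imputation_means)
    combined_mean = sum(imputation_means) / n
    within = sum(imputation_vars) / n                # average within-imputation variance
    between = sum((m - combined_mean) ** 2 for m in imputation_means) / n
    combined_var = within + between                  # law of total variance
    ```

    The between-imputation term is what carries the uncertainty about the latent layers into the final DGP prediction; averaging only the variances would understate it.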

    Managing uncertainty in integrated environmental modelling: the UncertWeb framework

    Web-based distributed modelling architectures are gaining increasing recognition as potentially useful tools to build holistic environmental models, combining individual components in complex workflows. However, existing web-based modelling frameworks currently offer no support for managing uncertainty. On the other hand, the rich array of modelling frameworks and simulation tools which support uncertainty propagation in complex and chained models typically lack the benefits of web-based solutions such as ready publication, discoverability and easy access. In this article we describe the developments within the UncertWeb project which are designed to provide uncertainty support in the context of the proposed ‘Model Web’. We give an overview of uncertainty in modelling, review uncertainty management in existing modelling frameworks and consider the semantic and interoperability issues raised by integrated modelling. We describe the scope and architecture required to support uncertainty management as developed in UncertWeb. This includes tools which support elicitation, aggregation/disaggregation, visualisation and uncertainty/sensitivity analysis. We conclude by highlighting areas that require further research and development in UncertWeb, such as model calibration and inference within complex environmental models.

    Crustal constraint through complete model space screening for diverse geophysical datasets facilitated by emulation

    Deep crustal constraint is often carried out using deterministic inverse methods, sometimes using seismic refraction, gravity and electromagnetic datasets in a complementary or “joint” scheme. With increasingly powerful parallel computer systems it is now possible to apply joint inversion schemes to derive an optimum model from diverse input data. These methods are highly effective where the uncertainty in the system is small. However, given the complex nature of these schemes it is often difficult to discern the uniqueness of the output model given the noise in the data, and the application of necessary regularization and weighting in the inversion process means that the extent of user prejudice pertaining to the final result may be unclear. We can rigorously address the subject of uncertainty using standard statistical tools, but these methods also become less feasible if the prior model space is large or the forward simulations are computationally expensive. We present a simple Monte Carlo scheme to screen model space in a fully joint fashion, in which we replace the forward simulation with a fast and uncertainty-calibrated mathematical function, or emulator. This emulator is used as a proxy to run the very large number of models necessary to fully explore the plausible model space. We develop the method using a simple synthetic dataset, then demonstrate its use on a joint dataset comprising first-arrival seismic refraction, magnetotelluric (MT) and scalar gravity data over a diapiric salt body. This study demonstrates the value of a forward Monte Carlo approach (as distinct from a search-based or conventional inverse approach) in incorporating all kinds of uncertainty in the modelling process and exploring the entire model space, and shows the potential value of applying emulator technology throughout geophysics. Though the target here is relatively shallow, the methodology can be readily extended to address the whole crust.
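    The screening loop this abstract describes can be sketched in a few lines: draw candidate models from the prior ranges, evaluate a cheap emulator instead of the forward simulation, and keep models whose standardized misfit (folding emulator uncertainty into the data noise) falls under a cutoff. Everything below is a toy stand-in under invented assumptions: the two parameters, the `depth / velocity` surrogate, the datum, and the noise levels are all illustrative, not the paper's actual geophysical setup.

    ```python
    import random

    random.seed(1)

    # Illustrative screening of a 2-parameter model space.
    OBSERVED = 3.0       # observed datum (e.g. a travel time) - made up
    DATA_SD = 0.1        # observational noise, standard deviation
    EMULATOR_SD = 0.05   # emulator (code) uncertainty, standard deviation

    def emulator(depth, velocity):
        # cheap surrogate standing in for an expensive forward simulation
        return depth / velocity

    N = 100_000
    plausible = []
    for _ in range(N):
        depth = random.uniform(1.0, 10.0)      # prior range, illustrative
        velocity = random.uniform(1.0, 5.0)    # prior range, illustrative
        pred = emulator(depth, velocity)
        # standardized distance between prediction and data, with the
        # emulator's own uncertainty added to the observational noise
        implausibility = abs(pred - OBSERVED) / (DATA_SD ** 2 + EMULATOR_SD ** 2) ** 0.5
        if implausibility < 3.0:               # conventional 3-sigma cutoff
            plausible.append((depth, velocity))

    fraction = len(plausible) / N
    ```

    The surviving set is the plausible region of model space; because the emulator is fast, the loop can afford the very large sample counts that the true simulator could not.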

    Linking Remote Sensing with APSIM through Emulation and Bayesian Optimization to Improve Yield Prediction

    The enormous increase in the volume of Earth Observations (EOs) has provided the scientific community with unprecedented temporal, spatial, and spectral information. However, this increase in the volume of EOs has not yet resulted in proportional progress in our ability to forecast agricultural systems. This study examines the applicability of EOs obtained from Sentinel-2 and Landsat-8 for constraining the APSIM-Maize model parameters. We leveraged leaf area index (LAI) retrieved from Sentinel-2 and Landsat-8 NDVI (Normalized Difference Vegetation Index) to constrain a series of APSIM-Maize model parameters in three different Bayesian multi-criteria optimization frameworks across 13 different calibration sites in the U.S. Midwest. The novelty of the current study lies in providing a mathematical framework to directly integrate EOs into process-based models for improved parameter estimation and system representation. First, a time-variant sensitivity analysis was performed to identify the most influential parameters driving the LAI estimates in the APSIM-Maize model. Then surrogate models were developed, using random samples taken from the parameter space by Latin hypercube sampling, to emulate APSIM's behavior in simulating NDVI and LAI at all sites. Site-level, global and hierarchical Bayesian optimization models were then developed using the site-level emulators to simultaneously constrain all parameters and estimate the site-to-site variability in crop parameters. For within-sample predictions, site-level optimization showed the largest predictive uncertainty around LAI and crop yield, whereas global optimization showed the most constrained predictions for these variables. The lowest within-sample yield-prediction RMSE was found for the hierarchical optimization scheme (1423 kg ha−1), while the largest was found for the site-level scheme (1494 kg ha−1).
In out-of-sample predictions within the spatio-temporal extent of the training sites, global optimization showed lower RMSE (1627 kg ha−1) compared to the hierarchical approach (1822 kg ha−1) across 90 independent sites in the U.S. Midwest. In a comparison between these two optimization schemes across another 242 independent sites outside the spatio-temporal extent of the training sites, global optimization also showed substantially lower RMSE (1554 kg ha−1) than the hierarchical approach (2532 kg ha−1). Overall, EOs demonstrated their value for constraining process-based crop models and showed results comparable to model calibration exercises using only field measurements.
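    The Latin hypercube step used to build the surrogates above can be sketched in pure Python: each parameter's range is cut into as many equal strata as there are design points, one point is drawn per stratum, and the strata are shuffled independently across dimensions. The parameter names and ranges below are placeholders, not actual APSIM-Maize parameters.

    ```python
    import random

    random.seed(2)

    def latin_hypercube(n_samples, bounds):
        """Draw a Latin hypercube sample: each parameter range is split into
        n_samples equal strata and each stratum is used exactly once."""
        columns = []
        for lo, hi in bounds:
            width = (hi - lo) / n_samples
            # one uniformly placed point per stratum, per dimension
            points = [lo + (i + random.random()) * width for i in range(n_samples)]
            random.shuffle(points)        # decouple the strata across dimensions
            columns.append(points)
        return [tuple(col[i] for col in columns) for i in range(n_samples)]

    # two illustrative crop-model parameters with made-up ranges
    bounds = [(100.0, 200.0), (0.5, 1.5)]
    design = latin_hypercube(10, bounds)
    ```

    Compared with plain random sampling, this stratification spreads the training runs evenly across every parameter's range, which is why it is a common default for building emulators of expensive simulators.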

    Modelling the interaction between induced pluripotent stem cells derived cardiomyocytes patches and the recipient hearts

    Cardiovascular diseases are the main cause of death worldwide. The single biggest killer is ischemic heart disease. Myocardial infarction causes the formation of non-conductive and non-contractile, scar-like tissue in the heart, which can hamper the heart's physiological function and cause pathologies ranging from arrhythmias to heart failure. The heart cannot recover the tissue lost to myocardial infarction because of the myocardium's limited ability to regenerate. The only available treatment is heart transplant, which is limited by the number of donors and can elicit an adverse response from the recipient's immune system. Recently, regenerative medicine has been proposed as an alternative approach to help post-myocardial infarction hearts recover their functionality. Among the various techniques, the application of cardiac patches of engineered heart tissue in combination with electroactive materials constitutes a promising technology. However, many challenges need to be faced in the development of this treatment. One of the main concerns is the immature phenotype of the stem cell-derived cardiomyocytes used to fabricate the engineered heart tissue. Their electrophysiological differences with respect to the host myocardium may contribute to an increased arrhythmia risk. A large number of animal experiments are needed to optimize the patches' characteristics and to better understand the implications of the electrical interaction between patches and host myocardium. In this Thesis we leveraged cardiac computational modelling to simulate in silico electrical propagation in scarred heart tissue in the presence of a patch of engineered heart tissue and conductive polymer engrafted at the epicardium. This work is composed of two studies.
In the first study we designed a tissue model with simplified geometry and used machine learning and global sensitivity analysis techniques to identify engineered heart tissue patch design variables that are important for restoring physiological electrophysiology in the host myocardium. Additionally, we showed how engineered heart tissue properties could be tuned to restore physiological activation while reducing arrhythmic risk. In the second study we moved to more realistic geometries and devised a way to manipulate ventricle meshes obtained from magnetic resonance images to apply in silico engineered heart tissue epicardial patches. We then investigated how patches with different conduction velocities and action potential durations influence the host ventricle's electrophysiology. Specifically, we showed that appropriately located patches can reduce the predisposition to anatomical-isthmus-mediated re-entry and that patches with a physiological action potential duration and higher conduction velocity were most effective in reducing this risk. We also demonstrated that patches with conduction velocity and action potential duration typical of immature stem cell-derived cardiomyocytes were associated with the onset of sustained functional re-entry in an ischemic cardiomyopathy model with a large transmural scar. Finally, we demonstrated that patches electrically coupled to the host myocardium reduce the likelihood of propagation of focal ectopic impulses. This Thesis demonstrates how computational modelling can be successfully applied to the field of regenerative medicine and constitutes the first step towards the creation of patient-specific models for developing and testing patches for cardiac regeneration.