
    Toward improved identifiability of hydrologic model parameters: The information content of experimental data

    We have developed a sequential optimization methodology, entitled the parameter identification method based on the localization of information (PIMLI), that increases information retrieval from the data by inferring the location and type of measurements that are most informative for the model parameters. The PIMLI approach merges the strengths of the generalized sensitivity analysis (GSA) method [Spear and Hornberger, 1980], the Bayesian recursive estimation (BARE) algorithm [Thiemann et al., 2001], and the Metropolis algorithm [Metropolis et al., 1953]. Three case studies with increasing complexity are used to illustrate the usefulness and applicability of the PIMLI methodology. The first two case studies consider the identification of soil hydraulic parameters using soil water retention data and a transient multistep outflow (MSO) experiment, whereas the third study involves the calibration of a conceptual rainfall-runoff model.
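
    To make the Metropolis ingredient concrete, the sketch below shows a minimal random-walk Metropolis sampler of the kind such schemes build on; the toy retention model, the Gaussian error scale, and the function names are illustrative assumptions, not the PIMLI implementation.

        import numpy as np

        def metropolis(log_post, theta0, n_steps=5000, step=0.1, rng=None):
            """Random-walk Metropolis sampler (illustrative sketch).

            log_post : callable returning the log posterior density of a parameter vector.
            theta0   : starting parameter vector.
            """
            rng = np.random.default_rng() if rng is None else rng
            theta = np.asarray(theta0, dtype=float)
            lp = log_post(theta)
            samples = []
            for _ in range(n_steps):
                prop = theta + step * rng.standard_normal(theta.shape)  # symmetric proposal
                lp_prop = log_post(prop)
                # Metropolis acceptance: accept with probability min(1, pi(prop)/pi(theta))
                if np.log(rng.uniform()) < lp_prop - lp:
                    theta, lp = prop, lp_prop
                samples.append(theta.copy())
            return np.array(samples)

        # Hypothetical Gaussian likelihood around synthetic retention data (illustration only).
        obs = np.array([0.40, 0.32, 0.21, 0.12])

        def log_post(theta):
            pred = theta[0] * np.exp(-theta[1] * np.arange(len(obs)))  # toy retention model
            return -0.5 * np.sum((obs - pred) ** 2) / 0.01 ** 2

        draws = metropolis(log_post, theta0=np.array([0.4, 0.5]))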

    Approximate Models and Robust Decisions

    Decisions based partly or solely on predictions from probabilistic models may be sensitive to model misspecification. Statisticians are taught from an early stage that "all models are wrong", but little formal guidance exists on how to assess the impact of model approximation on decision making, or how to proceed when optimal actions appear sensitive to model fidelity. This article presents an overview of recent developments across different disciplines to address this. We review diagnostic techniques, including graphical approaches and summary statistics, that help highlight when decisions obtained by minimising expected loss are sensitive to model misspecification. We then consider formal methods for decision making under model misspecification, which quantify the stability of optimal actions to perturbations of the model within a neighbourhood of model space. This neighbourhood is defined in one of two ways: first, in a strong sense, via an information (Kullback-Leibler) divergence around the approximating model; or second, using a nonparametric model extension, again centred at the approximating model, in order to 'average out' over possible misspecifications. This is presented in the context of recent work in the robust control, macroeconomics and financial mathematics literature. We adopt a Bayesian approach throughout, although the methods are agnostic to this position.
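
    The Kullback-Leibler neighbourhood mentioned above admits a well-known convex dual, which makes the worst-case expected loss easy to estimate from samples drawn under the approximating model. The sketch below is an illustration of that dual, not code from the article; the radius eps and the toy loss distribution are assumed for the example.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def worst_case_expected_loss(losses, eps):
            """Worst-case E_Q[loss] over all Q with KL(Q || P) <= eps, via the dual
            sup_Q E_Q[L] = inf_{tau > 0} tau*eps + tau*log E_P[exp(L / tau)],
            with E_P estimated from Monte Carlo samples of the loss under P."""
            losses = np.asarray(losses, dtype=float)

            def dual(log_tau):
                tau = np.exp(log_tau)  # optimise over log(tau) so tau stays positive
                log_mean_exp = np.logaddexp.reduce(losses / tau) - np.log(losses.size)
                return tau * eps + tau * log_mean_exp

            return minimize_scalar(dual).fun

        # Illustration: simulated losses of one candidate action under the approximating model P.
        losses = np.random.default_rng(0).normal(loc=1.0, scale=0.5, size=10_000)
        print(losses.mean())                                # nominal expected loss under P
        print(worst_case_expected_loss(losses, eps=0.1))    # robust bound over the KL ball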

    On digital twins, mirrors and virtualisations

    A powerful new idea in the computational representation of structures is that of the digital twin. The concept of the digital twin emerged and developed over the last two decades, and has been identified by many industries as a highly desired technology. The current situation is that individual companies often have their own definitions of a digital twin, and no clear consensus has emerged. In particular, there is no current mathematical formulation of a digital twin. A companion paper to the current one will attempt to present the essential components of the desired formulation. One of those components is identified as a rigorous representation theory of models, how they are validated, and how validation information can be transferred between models. The current paper will outline the basic ingredients of such a theory, based on the introduction of two new concepts: mirrors and virtualisations. The paper is not intended as a passive wish-list; it is intended as a rallying call. The new theory will require the active participation of researchers across a number of domains including: pure and applied mathematics, physics, computer science and engineering. The paper outlines the main objects of the theory and gives examples of the sort of theorems and hypotheses that might be proved in the new framework.

    Modelling discrepancy in Bayesian calibration of reservoir models

    Simulation models of physical systems such as oil field reservoirs are subject to numerous uncertainties such as observation errors and inaccurate initial and boundary conditions. However, after accounting for these uncertainties, it is usually observed that a mismatch between the simulator output and the observations remains and the model is still inadequate. This inability of computer models to reproduce real-life processes is referred to as model inadequacy. This thesis presents a comprehensive framework for modelling discrepancy in the Bayesian calibration and probabilistic forecasting of reservoir models. The framework efficiently implements data-driven approaches to handle the uncertainty caused by ignoring modelling discrepancy in reservoir predictions, using two major hierarchical strategies: parametric and non-parametric hierarchical models. The central focus of the thesis is on an appropriate way of modelling discrepancy and on the importance of model selection in controlling overfitting, rather than on different solutions for different noise models. The thesis employs a model selection code to obtain the best candidate solutions for the form of the non-parametric error models. This enables us, first, to interpolate the error over the history period and, second, to propagate it to unseen data (i.e. error generalisation). The error models constructed by inferring the parameters of the selected models can predict the response variable (e.g. oil rate) at any point in the input space (e.g. time), together with the corresponding generalisation uncertainty. In real field applications, the error models reliably track the uncertainty regardless of the type of sampling method and achieve a better model prediction score than models that ignore the discrepancy. All the case studies confirm that prediction of field variables improves when the discrepancy is modelled. As for the model parameters, hierarchical error models show less global bias with respect to the reference case; however, in the considered case studies, the evidence that error modelling improves prediction of each individual model parameter is inconclusive.
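
    A common way to formalise the discrepancy term described above is the Kennedy-and-O'Hagan-style decomposition, observation = simulator output + discrepancy + noise, with the discrepancy given a non-parametric (e.g. Gaussian process) model. The sketch below illustrates that idea on a toy decline curve; the simulator, data, and kernel settings are assumptions made for the example and are not the thesis's reservoir models.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(1)

        # Hypothetical history data: observed oil rate vs. a (biased) simulator over time.
        t_hist = np.linspace(0.0, 3.0, 30).reshape(-1, 1)                          # years
        y_obs = 100.0 * np.exp(-0.4 * t_hist).ravel() + rng.normal(0.0, 1.5, 30)   # observations
        y_sim = 100.0 * np.exp(-0.5 * t_hist).ravel()                              # simulator output

        # Non-parametric error model: fit a GP to the discrepancy delta(t) = y_obs - y_sim.
        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t_hist, y_obs - y_sim)

        # Generalisation: propagate the error model to the forecast period with uncertainty.
        t_fore = np.linspace(3.0, 6.0, 20).reshape(-1, 1)
        delta_mean, delta_std = gp.predict(t_fore, return_std=True)
        y_forecast = 100.0 * np.exp(-0.5 * t_fore).ravel() + delta_mean  # simulator + discrepancy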

    Stochastic ordinary differential equations in applied and computational mathematics

    Using concrete examples, we discuss the current and potential use of stochastic ordinary differential equations (SDEs) from the perspective of applied and computational mathematics. Assuming only a minimal background knowledge in probability and stochastic processes, we focus on aspects that distinguish SDEs from their deterministic counterparts. To illustrate a multiscale modelling framework, we explain how SDEs arise naturally as diffusion limits in the type of discrete-valued stochastic models used in chemical kinetics, population dynamics, and, most topically, systems biology. We outline some key issues in existence, uniqueness and stability that arise when SDEs are used as physical models, and point out possible pitfalls. We also discuss the use of numerical methods to simulate trajectories of an SDE and explain how both weak and strong convergence properties are relevant for highly efficient multilevel Monte Carlo simulations. We flag up what we believe to be key topics for future research, focussing especially on nonlinear models, parameter estimation, model comparison and multiscale simulation.
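
    As a concrete illustration of the simulation issues the article surveys, the sketch below applies the Euler-Maruyama method to geometric Brownian motion, a standard test SDE; the parameter values are arbitrary choices for the example, not taken from the paper.

        import numpy as np

        def euler_maruyama_gbm(x0, mu, sigma, T, n_steps, rng):
            """Euler-Maruyama path for geometric Brownian motion dX = mu*X dt + sigma*X dW."""
            dt = T / n_steps
            x = np.empty(n_steps + 1)
            x[0] = x0
            dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)  # Brownian increments
            for k in range(n_steps):
                x[k + 1] = x[k] + mu * x[k] * dt + sigma * x[k] * dW[k]
            return x

        rng = np.random.default_rng(42)
        path = euler_maruyama_gbm(x0=1.0, mu=0.05, sigma=0.2, T=1.0, n_steps=1_000, rng=rng)
        # Weak convergence concerns errors in E[f(X_T)]; strong convergence concerns
        # E|X_T - X_T^h|, which is what multilevel Monte Carlo exploits across step sizes.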

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments (including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers) is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade.
    Comment: Major revision, to appear in SIAM Review.

    Inverse Uncertainty Quantification using the Modular Bayesian Approach based on Gaussian Process, Part 1: Theory

    In nuclear reactor system design and safety analysis, the Best Estimate plus Uncertainty (BEPU) methodology requires that computer model output uncertainties be quantified in order to prove that the investigated design stays within acceptance criteria. "Expert opinion" and "user self-evaluation" have been widely used to specify computer model input uncertainties in previous uncertainty, sensitivity and validation studies. Inverse Uncertainty Quantification (UQ) is the process of inversely quantifying input uncertainties based on experimental data, in order to make such ad hoc specifications of the input uncertainty information more precise. In this paper, we used Bayesian analysis to establish the inverse UQ formulation, with systematic and rigorously derived metamodels constructed with Gaussian Processes (GP). Due to incomplete or inaccurate underlying physics, as well as numerical approximation errors, computer models always have discrepancy/bias in representing reality, which can cause overfitting if neglected in the inverse UQ process. The model discrepancy term is accounted for in our formulation through the "model updating equation". We provided a detailed introduction to and comparison of the full and modular Bayesian approaches for inverse UQ, and pointed out their limitations when extrapolated to the validation/prediction domain. Finally, we proposed an improved modular Bayesian approach that avoids extrapolating the model discrepancy learnt from the inverse UQ domain to the validation/prediction domain.
    Comment: 27 pages, 10 figures, article.
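
    For readers unfamiliar with the term, the "model updating equation" referred to above is usually written, in the standard Kennedy-and-O'Hagan form (generic notation, not necessarily the paper's), as

        y^E(x) = y^M(x, θ*) + δ(x) + ε,

    where y^E(x) is the experimental observation at inputs x, y^M(x, θ*) the computer-model output at the best-fitting calibration parameters θ*, δ(x) the model discrepancy (typically given a Gaussian-process prior), and ε the measurement error. Neglecting δ(x) forces θ* to absorb the model bias, which is the overfitting the abstract warns against.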