Bayesian Inference of Model Error in Imprecise Models

Abstract

Modern science makes use of computer models to reproduce and predict complex physical systems. Every model involves parameters, some of which can be measured experimentally (e.g., the mass of a solid) and some of which cannot (e.g., the coefficients of the k − ε turbulence model). The latter can be inferred from experimental data through a procedure called calibration of the computer model. However, a model may be unable to represent reality accurately because of its limited structure: this is the definition of model error. The "best value" of a model's parameters is traditionally defined as the best fit to the data; it depends on the experiment, on the quantities of interest considered, and on the assumed statistical structure of the error. Bayesian methods allow the model to be calibrated while taking its error into account: the fit to the data is balanced against the complexity of the model, following Occam's principle. Kennedy and O'Hagan's innovative method [1], which represents model error with a Gaussian process, is a reference in this field. Recently, Tuo and Wu [3] proposed a frequentist extension of this method to deal with the identifiability problem between model error and the calibration parameters. Plumlee [2] applied the method to simple situations and demonstrated the potential of the approach. In this work, we compare Kennedy and O'Hagan's method with its frequentist version, which involves an optimization problem, on several numerical examples with varying degrees of model error. The calibration provides estimates of the model parameters and model predictions, while also inferring the model error in both observed and unobserved regions of the experimental design space. The case of non-linear, costly computer models is also considered, and we propose a new algorithm to reduce the numerical complexity of Bayesian calibration techniques.
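For readers unfamiliar with the framework, the Kennedy and O'Hagan formulation [1] referenced above is commonly written as follows; the notation here is a standard presentation of the model, not taken from this paper's own text:

% Kennedy--O'Hagan statistical model (standard form; notation is
% illustrative, not drawn from the abstract itself).
\[
  y_i = \eta(x_i, \theta) + \delta(x_i) + \varepsilon_i,
  \qquad
  \delta(\cdot) \sim \mathcal{GP}\bigl(0, k(\cdot, \cdot)\bigr),
  \qquad
  \varepsilon_i \sim \mathcal{N}(0, \sigma^2),
\]

where $y_i$ is the experimental observation at design point $x_i$, $\eta(x_i, \theta)$ is the computer model evaluated at the calibration parameters $\theta$, $\delta$ is the model-error (discrepancy) term represented by a Gaussian process with covariance kernel $k$, and $\varepsilon_i$ is the observation noise. Calibration then amounts to inferring $\theta$ jointly with the discrepancy $\delta$, which is the source of the identifiability problem addressed by Tuo and Wu [3].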
