Validating Predictions of Unobserved Quantities
The ultimate purpose of most computational models is to make predictions,
commonly in support of some decision-making process (e.g., for design or
operation of some system). The quantities that need to be predicted (the
quantities of interest or QoIs) are generally not experimentally observable
before the prediction, since otherwise no prediction would be needed. Assessing
the validity of such extrapolative predictions, which is critical to informed
decision-making, is challenging. In classical approaches to validation, model
outputs for observed quantities are compared to observations to determine if
they are consistent. By itself, this consistency only ensures that the model
can predict the observed quantities under the conditions of the observations.
This limitation dramatically reduces the utility of the validation effort for
decision-making because it implies nothing about predictions of unobserved QoIs
or about scenarios outside the range of observations. However, there is no
agreement in the scientific community today regarding best practices for
validation of extrapolative predictions made using computational models. The
purpose of this paper is to propose and explore a validation and predictive
assessment process that supports extrapolative predictions for models with
known sources of error. The process includes stochastic modeling, calibration,
validation, and predictive assessment phases where representations of known
sources of uncertainty and error are built, informed, and tested. The proposed
methodology is applied to an illustrative extrapolation problem involving a
misspecified nonlinear oscillator.
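
A minimal sketch of the kind of extrapolation problem described above, under assumptions of my own: data come from a "true" Duffing-type oscillator, a deliberately misspecified linear oscillator is calibrated to early-time observations, and an extrapolative QoI (the late-time response) is then assessed. The model forms, parameter values, and noise level are illustrative choices, not the paper's actual formulation.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def duffing(t, y, k):
    # "True" system: x'' + k*x + 0.4*x^3 = 0 (hypothetical ground truth)
    x, v = y
    return [v, -k * x - 0.4 * x**3]

def linear(t, y, k):
    # Misspecified model: x'' + k*x = 0 (cubic term omitted)
    x, v = y
    return [v, -k * x]

def simulate(rhs, k, t_eval):
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [1.0, 0.0],
                    t_eval=t_eval, args=(k,), rtol=1e-8, atol=1e-10)
    return sol.y[0]

rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 5.0, 40)      # observed (calibration) regime
t_ext = np.linspace(0.0, 15.0, 120)    # extrapolation regime

data = simulate(duffing, 1.0, t_obs) + 0.01 * rng.normal(size=t_obs.size)

# Calibration phase: fit the misspecified model's stiffness to the data.
fit = least_squares(lambda p: simulate(linear, p[0], t_obs) - data, x0=[0.5])
k_hat = fit.x[0]

# Validation phase (schematic): check residual consistency on observables.
print("calibrated k:", k_hat)
print("RMS residual:", np.sqrt(np.mean(fit.fun ** 2)))

# Predictive assessment: the extrapolative QoI degrades because the
# calibrated parameter has absorbed structural (model-form) error.
err = np.abs(simulate(duffing, 1.0, t_ext) - simulate(linear, k_hat, t_ext))
print("max extrapolation error:", err.max())

Running this shows a good fit over the observed window while the late-time error grows, which is exactly the gap between classical validation of observed quantities and assessment of extrapolative predictions that the paper targets.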
Understanding from Machine Learning Models
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction, using opaque machine learning models to make predictions and draw inferences, which suggests that scientists are opting for models with less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black-box nature of a model that limits how much understanding the model provides. Instead, what primarily prohibits understanding is the lack of scientific and empirical evidence supporting the link between the model and the target phenomenon.
Screening and metamodeling of computer experiments with functional outputs. Application to thermal-hydraulic computations
To perform uncertainty, sensitivity, or optimization analysis on scalar variables calculated by a CPU-time-expensive computer code, a widely accepted methodology consists of first identifying the most influential uncertain inputs (by screening techniques) and then replacing the expensive model with an inexpensive mathematical function, called a metamodel. This paper extends this methodology to the functional-output case, for instance when the model output variables are curves. The screening approach is based on analysis of variance and principal component analysis of the output curves. The functional metamodeling consists of a curve classification step, a dimension reduction step, and then a classical metamodeling step. An industrial nuclear reactor application (dealing with uncertainties in the pressurized thermal shock analysis) illustrates all of these steps.
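
A minimal sketch of the functional metamodeling chain described above (dimension reduction of the output curves followed by classical scalar metamodels), assuming a toy analytic simulator in place of the expensive thermal-hydraulic code; the screening and curve classification steps are omitted for brevity, and all names and parameter values here are illustrative.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def simulator(x, t):
    # Stand-in for the expensive code: each input vector yields a curve.
    return x[0] * np.sin(2.0 * np.pi * t) + x[1] * t ** 2

t = np.linspace(0.0, 1.0, 100)              # common discretization grid
X = rng.uniform(0.0, 1.0, size=(50, 2))     # design of experiments
Y = np.array([simulator(x, t) for x in X])  # 50 output curves

# Dimension reduction step: represent each curve by a few PCA scores.
pca = PCA(n_components=2)
scores = pca.fit_transform(Y)

# Classical metamodeling step: one Gaussian-process model per score.
gps = [GaussianProcessRegressor(kernel=RBF()).fit(X, scores[:, j])
       for j in range(scores.shape[1])]

# Predict a full curve for a new input by inverting the PCA map.
x_new = np.array([[0.3, 0.7]])
scores_new = np.column_stack([gp.predict(x_new) for gp in gps])
curve_new = pca.inverse_transform(scores_new)[0]
print("max metamodel error:", np.max(np.abs(curve_new - simulator(x_new[0], t))))

Predicting a few scores rather than the full 100-point curve is what keeps the metamodel cheap: only a handful of scalar Gaussian processes are trained, and the fixed PCA basis reconstructs the whole curve from their predictions.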
- …