Optimal uncertainty quantification for legacy data observations of Lipschitz functions
We consider the problem of providing optimal uncertainty quantification (UQ)
--- and hence rigorous certification --- for partially-observed functions. We
present a UQ framework within which the observations may be small or large in
number, and need not carry information about the probability distribution of
the system in operation. The UQ objectives are posed as optimization problems,
the solutions of which are optimal bounds on the quantities of interest; we
consider two typical settings, namely parameter sensitivities (McDiarmid
diameters) and output deviation (or failure) probabilities. The solutions of
these optimization problems depend non-trivially (even non-monotonically and
discontinuously) upon the specified legacy data. Furthermore, the extreme
values are often determined by only a few members of the data set; in our
principal physically-motivated example, the bounds are determined by just 2 out
of 32 data points, and the remainder carry no information and could be
neglected without changing the final answer. We propose an analogue of the
simplex algorithm from linear programming that uses these observations to offer
efficient and rigorous UQ for high-dimensional systems with high-cardinality
legacy data. These findings suggest natural methods for selecting optimal
(maximally informative) next experiments.

Comment: 38 pages
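For intuition on why only a few observations can determine the bounds: for a real-valued function known to be L-Lipschitz and observed at points (x_i, y_i), the pointwise-optimal upper and lower envelopes are min_i (y_i + L|x - x_i|) and max_i (y_i - L|x - x_i|), so at any query point the bounds are set by whichever observations attain the min/max. The following Python sketch illustrates this standard envelope construction; it is our illustration of the active-data phenomenon under synthetic data, not the paper's simplex-type algorithm.

import numpy as np

def lipschitz_envelopes(x, x_data, y_data, L):
    # Pointwise-optimal bounds on any L-Lipschitz function that
    # interpolates the legacy observations (x_data, y_data).
    d = np.abs(x[:, None] - x_data[None, :])          # |x - x_i|
    upper = np.min(y_data[None, :] + L * d, axis=1)   # tightest upper envelope
    lower = np.max(y_data[None, :] - L * d, axis=1)   # tightest lower envelope
    return lower, upper

# Synthetic legacy data; at each query point only the observations
# achieving the min/max above are "active": the rest could be
# discarded without changing the bounds there.
x_data = np.array([0.0, 0.3, 0.7, 1.0])
y_data = np.array([0.1, 0.5, 0.4, 0.9])
xs = np.linspace(0.0, 1.0, 101)
lower, upper = lipschitz_envelopes(xs, x_data, y_data, L=2.0)
assert np.all(lower <= upper)  # the data are consistent with L = 2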
Optimal Uncertainty Quantification
We propose a rigorous framework for Uncertainty Quantification (UQ) in which
the UQ objectives and the assumptions/information set are brought to the
forefront. This framework, which we call \emph{Optimal Uncertainty
Quantification} (OUQ), is based on the observation that, given a set of
assumptions and information about the problem, there exist optimal bounds on
uncertainties: these are obtained as values of well-defined optimization
problems corresponding to extremizing probabilities of failure, or of
deviations, subject to the constraints imposed by the scenarios compatible with
the assumptions and information. In particular, this framework does not
implicitly impose inappropriate assumptions, nor does it repudiate relevant
information. Although OUQ optimization problems are extremely large, we show
that under general conditions they have finite-dimensional reductions. As an
application, we develop \emph{Optimal Concentration Inequalities} (OCI) of
Hoeffding and McDiarmid type. Surprisingly, these results show that
uncertainties in input parameters, which propagate to output uncertainties in
the classical sensitivity analysis paradigm, may fail to do so if the transfer
functions (or probability distributions) are imperfectly known. We show how,
for hierarchical structures, this phenomenon may lead to the non-propagation of
uncertainties or information across scales. In addition, a general algorithmic
framework is developed for OUQ and is tested on the Caltech surrogate model for
hypervelocity impact and on the seismic safety assessment of truss structures,
suggesting the feasibility of the framework for important complex systems. The
introduction of this paper provides both an overview of the paper and a
self-contained mini-tutorial about basic concepts and issues of UQ.

Comment: 90 pages. Accepted for publication in SIAM Review (Expository Research Papers). See SIAM Review for higher-quality figures.
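As a toy instance of such a finite-dimensional reduction (our illustration, not an example from the paper): if all that is known about a quantity X is that it takes values in [0,1] and has mean m, the least upper bound on a deviation probability is attained by a measure supported on just two points, recovering Markov's inequality:

\[
  \sup\bigl\{\, \mu[X \ge a] \;:\; \mu \in \mathcal{P}([0,1]),\ \mathbb{E}_\mu[X] = m \,\bigr\}
  \;=\; \min\!\Bigl(\frac{m}{a},\, 1\Bigr),
\]

with the supremum attained, for m \le a, by the two-point measure \mu^\star = (1 - m/a)\,\delta_0 + (m/a)\,\delta_a. The infinite-dimensional optimization over all admissible probability measures thus collapses to a search over the locations and weights of finitely many Dirac masses, which is the structure that general OUQ reduction theorems exploit.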
- …