Optimal Uncertainty Quantification
We propose a rigorous framework for Uncertainty Quantification (UQ) in which
the UQ objectives and the assumptions/information set are brought to the
forefront. This framework, which we call \emph{Optimal Uncertainty
Quantification} (OUQ), is based on the observation that, given a set of
assumptions and information about the problem, there exist optimal bounds on
uncertainties: these are obtained as values of well-defined optimization
problems corresponding to extremizing probabilities of failure, or of
deviations, subject to the constraints imposed by the scenarios compatible with
the assumptions and information. In particular, this framework does not
implicitly impose inappropriate assumptions, nor does it repudiate relevant
information. Although OUQ optimization problems are extremely large, we show
that under general conditions they have finite-dimensional reductions. As an
application, we develop \emph{Optimal Concentration Inequalities} (OCI) of
Hoeffding and McDiarmid type. Surprisingly, these results show that
uncertainties in input parameters, which propagate to output uncertainties in
the classical sensitivity analysis paradigm, may fail to do so if the transfer
functions (or probability distributions) are imperfectly known. We show how,
for hierarchical structures, this phenomenon may lead to the non-propagation of
uncertainties or information across scales. In addition, a general algorithmic
framework is developed for OUQ and is tested on the Caltech surrogate model for
hypervelocity impact and on the seismic safety assessment of truss structures,
suggesting the feasibility of the framework for important complex systems. The
introduction of this paper provides both an overview of the paper and a
self-contained mini-tutorial about basic concepts and issues of UQ.
Comment: 90 pages. Accepted for publication in SIAM Review (Expository Research Papers). See SIAM Review for higher-quality figures.
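The extremization over admissible scenarios described above can be made concrete with a toy problem (our own construction, not one from the paper): among all probability measures on [0, 1] with prescribed mean m, find the largest possible probability of exceeding a threshold t. The finite-dimensional reduction mentioned in the abstract justifies searching only over two-point measures.

```python
import numpy as np

# Toy OUQ problem (illustrative; m, t, and the grid are our choices):
# maximise P(X >= t) over all measures on [0, 1] with mean m.
# The finite-dimensional reduction lets us search two-point measures
# (mass p at x1, mass 1 - p at x2) instead of all measures.
m, t = 0.3, 0.6
best = 0.0
grid = np.linspace(0.0, 1.0, 201)
for x1 in grid:
    for x2 in grid:
        if np.isclose(x1, x2):
            continue
        p = (m - x2) / (x1 - x2)        # weight forced by the mean constraint
        if 0.0 <= p <= 1.0:
            prob = p * (x1 >= t) + (1 - p) * (x2 >= t)
            best = max(best, prob)
# best approaches the sharp Markov-type bound m / t = 0.5
# (mass 0.5 at 0 and mass 0.5 at t)
```

The extremizer being a two-point measure is exactly the kind of reduction the abstract refers to: the optimization over an infinite-dimensional set of measures collapses to a search over finitely many parameters.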
Convex Optimal Uncertainty Quantification
Optimal uncertainty quantification (OUQ) is a framework for numerical
extreme-case analysis of stochastic systems with imperfect knowledge of the
underlying probability distribution. This paper presents sufficient conditions
under which an OUQ problem can be reformulated as a finite-dimensional convex
optimization problem, for which efficient numerical solutions can be obtained.
The sufficient conditions include that the objective function is piecewise
concave and the constraints are piecewise convex. In particular, we show that
piecewise concave objective functions may appear in applications where the
objective is defined by the optimal value of a parameterized linear program.
Comment: Accepted for publication in SIAM Journal on Optimization.
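As a minimal illustration of such a convex reformulation (our toy setup, not an example from the paper), fixing the support of the measure turns a worst-case expectation with a moment constraint into a linear program in the weights:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative convex OUQ sketch (support, loss, and mean are our choices):
# with the support points fixed, the worst-case expected loss over all
# distributions with a prescribed mean is a linear program in the weights.
xs = np.linspace(0.0, 1.0, 11)              # fixed support points
loss = np.maximum(xs - 0.5, 0.0)            # a piecewise-linear loss
m = 0.3                                     # prescribed mean

res = linprog(
    c=-loss,                                # maximise loss @ p
    A_eq=np.vstack([np.ones_like(xs), xs]), # sum(p) = 1 and mean = m
    b_eq=[1.0, m],
    bounds=[(0.0, 1.0)] * len(xs),
)
worst_case = -res.fun                       # 0.15: mass 0.7 at 0, 0.3 at 1
```

Because the loss is convex, the LP pushes all mass to the extreme support points, which is the finite-dimensional picture behind the paper's sufficient conditions.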
Optimal uncertainty quantification for legacy data observations of Lipschitz functions
We consider the problem of providing optimal uncertainty quantification (UQ),
and hence rigorous certification, for partially observed functions. We
present a UQ framework within which the observations may be small or large in
number, and need not carry information about the probability distribution of
the system in operation. The UQ objectives are posed as optimization problems,
the solutions of which are optimal bounds on the quantities of interest; we
consider two typical settings, namely parameter sensitivities (McDiarmid
diameters) and output deviation (or failure) probabilities. The solutions of
these optimization problems depend non-trivially (even non-monotonically and
discontinuously) upon the specified legacy data. Furthermore, the extreme
values are often determined by only a few members of the data set; in our
principal physically-motivated example, the bounds are determined by just 2 out
of 32 data points, and the remainder carry no information and could be
neglected without changing the final answer. We propose an analogue of the
simplex algorithm from linear programming that uses these observations to offer
efficient and rigorous UQ for high-dimensional systems with high-cardinality
legacy data. These findings suggest natural methods for selecting optimal
(maximally informative) next experiments.
Comment: 38 pages.
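The role played by a handful of data points can be seen in a small sketch (our own construction, with made-up data and Lipschitz constant): for a function with Lipschitz constant L and observations f(x_i) = y_i, the sharp pointwise bounds are max_i (y_i - L|x - x_i|) <= f(x) <= min_i (y_i + L|x - x_i|), and at any given x only the extremizing observations matter.

```python
import numpy as np

# Pointwise bounds on a Lipschitz function from legacy observations
# (data and L are invented for illustration).
L = 1.0
x_data = np.array([0.0, 0.2, 0.5, 0.9])
y_data = np.array([0.1, 0.3, 0.2, 0.4])

def lipschitz_bounds(x):
    d = L * np.abs(x - x_data)
    return np.max(y_data - d), np.min(y_data + d)

lo, hi = lipschitz_bounds(0.7)   # lo = 0.2 (from x = 0.9), hi = 0.4 (from x = 0.5)
```

Here only two of the four observations determine the bounds at x = 0.7; the rest could be discarded without changing the answer, mirroring the "2 out of 32" phenomenon in the abstract.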
The Optimal Uncertainty Algorithm in the Mystic Framework
We have recently proposed a rigorous framework for Uncertainty Quantification
(UQ) in which the UQ objectives and the assumptions/information set are brought
to the forefront, providing a framework for the communication and comparison of UQ
results. In particular, this framework does not implicitly impose inappropriate
assumptions nor does it repudiate relevant information. This framework, which
we call Optimal Uncertainty Quantification (OUQ), is based on the observation
that, given a set of assumptions and information, there exist optimal bounds on
uncertainties, obtained as the extreme values of well-defined optimization
problems. It provides a uniform environment for the optimal solution of the
problems of validation, certification, experimental design, reduced-order
modeling, prediction, and extrapolation, all under aleatoric and epistemic
uncertainties. OUQ optimization problems are extremely large, and even though
under general conditions they have finite-dimensional reductions, they must
often be solved numerically. This general algorithmic framework for OUQ has
been implemented in the mystic optimization framework. We describe this
implementation, and demonstrate its use in the context of the Caltech surrogate
model for hypervelocity impact.
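A generic numerical attack on a reduced OUQ problem can be sketched with scipy's global optimizer (this is our stand-in, not the mystic API; the problem data are invented): the unknown measure is parametrized as a two-point distribution and the mean constraint is enforced with a quadratic penalty.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Numerical OUQ sketch (a scipy stand-in, not the mystic implementation;
# m, t, and the penalty weight are our choices). The reduced unknown is
# a two-point measure (x1, x2, p); we maximise P(X >= t) subject to
# mean(X) = m via a quadratic penalty.
m, t = 0.3, 0.6

def objective(z):
    x1, x2, p = z
    failure = p * (x1 >= t) + (1.0 - p) * (x2 >= t)
    penalty = 1e4 * (p * x1 + (1.0 - p) * x2 - m) ** 2
    return -failure + penalty               # minimise the negative

res = differential_evolution(objective, bounds=[(0.0, 1.0)] * 3, seed=0)
# -res.fun approaches the sharp bound m / t = 0.5
```

Population-based global solvers are a natural fit here because the failure-probability objective is discontinuous in the measure parameters, which defeats gradient-based local methods.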
A closed-form solution to estimate uncertainty in non-rigid structure from motion
Semi-Definite Programming (SDP) with low-rank prior has been widely applied
in Non-Rigid Structure from Motion (NRSfM). Based on a low-rank constraint, it
avoids the inherent ambiguity of basis number selection in conventional
base-shape or base-trajectory methods. Despite the efficiency in deformable
shape reconstruction, it remains unclear how to assess the uncertainty of the
recovered shape from the SDP process. In this paper, we present a statistical
inference on the element-wise uncertainty quantification of the estimated
deforming 3D shape points in the case of the exact low-rank SDP problem. A
closed-form uncertainty quantification method is proposed and tested. Moreover,
we extend the exact low-rank uncertainty quantification to the approximate
low-rank scenario with a numerical optimal rank selection method, which enables
solving practical applications in the SDP-based NRSfM setting. The proposed
method provides an independent module for the SDP method and only requires
statistical information about the input 2D tracked points. Extensive experiments
show that the output 3D points follow the same normal distribution as the 2D
trackings, that the proposed method quantifies the uncertainty accurately, and
that it integrates well with routine SDP low-rank based NRSfM solvers.
Comment: 9 pages, 2 figures.
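The flavour of element-wise uncertainty propagation can be sketched in a first-order linear setting (our illustration with a random operator, not the paper's closed form): if the recovered shape depends linearly on the tracked 2D points, the 2D noise covariance maps directly to a 3D covariance.

```python
import numpy as np

# First-order uncertainty propagation sketch (A and the noise level are
# invented; the paper's closed form for SDP-based NRSfM is not reproduced
# here). For x3d = A @ x2d, the 2D tracking covariance S2d propagates to
# S3d = A @ S2d @ A.T, whose diagonal gives element-wise uncertainties.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))     # hypothetical linear reconstruction map
S2d = 0.01 * np.eye(2)              # isotropic 2D tracking noise covariance
S3d = A @ S2d @ A.T                 # propagated 3D covariance
std3d = np.sqrt(np.diag(S3d))       # per-coordinate standard deviation
```

This linear-Gaussian picture is also why the output 3D points in the abstract inherit a normal distribution from normally distributed 2D trackings.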
Optimal Forecast Reconciliation with Uncertainty Quantification
We propose to estimate the weight matrix used for forecast reconciliation as
parameters in a general linear model in order to quantify its uncertainty. This
implies that forecast reconciliation can be formulated as an orthogonal
projection from the space of base-forecast errors into a coherent linear
subspace. We use variance decomposition together with the Wishart distribution
to derive the central estimator for the forecast-error covariance matrix. In
addition, we prove that distance-reducing properties apply to the reconciled
forecasts at all levels of the hierarchy as well as to the forecast-error
covariance. A covariance matrix for the reconciliation weight matrix is
derived, which leads to improved estimates of the forecast-error covariance
matrix. We show how shrinkage can be introduced in the formulated model by
imposing specific priors on the weight matrix and the forecast-error covariance
matrix. The method is illustrated in a simulation study that shows consistent
improvements in the log-score. Finally, standard errors for the weight matrix
and the variance-separation formula are illustrated using a case study of
forecasting electricity load in Sweden.
Comment: 51 pages.
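The projection view of reconciliation can be sketched for a minimal two-level hierarchy (the numbers and covariance below are invented, and this is the standard trace-minimisation form rather than the paper's estimator):

```python
import numpy as np

# Forecast reconciliation as a projection (illustrative; S, W, and the
# base forecasts are made up). For a hierarchy total = b1 + b2, the
# summing matrix S spans the coherent subspace, and reconciled forecasts
# are an oblique projection of the base forecasts onto that subspace.
S = np.array([[1.0, 1.0],    # total
              [1.0, 0.0],    # bottom series 1
              [0.0, 1.0]])   # bottom series 2
W = np.diag([2.0, 1.0, 1.0]) # assumed base forecast-error covariance
y_hat = np.array([10.0, 4.0, 5.0])  # incoherent base forecasts (10 != 4 + 5)

Winv = np.linalg.inv(W)
P = S @ np.linalg.inv(S.T @ Winv @ S) @ S.T @ Winv   # projection matrix
y_tilde = P @ y_hat          # → [9.5, 4.25, 5.25], coherent: 9.5 = 4.25 + 5.25
```

The projection matrix P is idempotent and maps any base forecast into the coherent subspace spanned by S, which is the geometric content of formulating reconciliation as an orthogonal projection of the base-forecast errors.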