High dimensional Bayesian optimization via supervised dimension reduction
Bayesian optimization (BO) has been broadly applied to computationally expensive problems, but it remains challenging to extend BO to high dimensions. Existing works usually rest on the strict assumption of an additive or a linear-embedding structure in the objective function. This paper directly introduces a supervised dimension reduction method, Sliced Inverse Regression (SIR), into high dimensional Bayesian optimization, which can effectively learn the intrinsic low-dimensional structure of the objective function during the optimization. Furthermore, a kernel trick is developed to reduce computational complexity and to learn a nonlinear subspace of the unknown function when applying SIR to extremely high dimensional BO. We present several computational benefits and derive theoretical regret bounds for our algorithm. Extensive experiments on synthetic examples and two real applications demonstrate the superiority of our algorithms for high dimensional Bayesian optimization.
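To make the SIR step concrete, below is a minimal sketch of estimating the effective-dimension-reduction directions from the BO evaluation history. The function name, slicing scheme, and regularization constant are illustrative assumptions, not the authors' implementation.

```python
# Sketch of Sliced Inverse Regression (SIR): estimate a low-dimensional
# subspace in which the response y varies, from evaluations (X, y).
import numpy as np

def sir_directions(X, y, n_slices=10, n_components=2):
    """Estimate effective-dimension-reduction directions with SIR
    (hypothetical helper; not the paper's code)."""
    n, p = X.shape
    # Whiten the inputs: choose L with L @ L.T = inv(cov(X)), so that
    # Z = (X - mean) @ L has (approximately) identity covariance.
    cov = np.cov(X, rowvar=False) + 1e-8 * np.eye(p)   # small jitter
    L = np.linalg.cholesky(np.linalg.inv(cov))
    Z = (X - X.mean(axis=0)) @ L
    # Partition the samples into slices by sorted response value.
    slices = np.array_split(np.argsort(y), n_slices)
    # Weighted covariance of the per-slice means of Z.
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors span the estimated subspace; map back to
    # the original coordinates via L.
    _, eigvecs = np.linalg.eigh(M)                     # ascending order
    return L @ eigvecs[:, ::-1][:, :n_components]
```

In the optimization loop one would project queried points onto these directions, fit the Gaussian-process surrogate in the reduced space, and re-estimate the directions as new evaluations arrive; the kernel trick mentioned in the abstract would avoid forming the p-by-p matrices explicitly.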
Bayesian Inference on Matrix Manifolds for Linear Dimensionality Reduction
We reframe linear dimensionality reduction as a problem of Bayesian inference
on matrix manifolds. This natural paradigm extends the Bayesian framework to
dimensionality reduction tasks in higher dimensions with simpler models at
greater speeds. Here an orthogonal basis is treated as a single point on a
manifold and is associated with a linear subspace on which observations vary
maximally. Throughout this paper, we employ the Grassmann and Stiefel manifolds
for various dimensionality reduction problems, explore the connection between
the two manifolds, and use Hybrid Monte Carlo for posterior sampling on the
Grassmannian for the first time. We delineate the situations in which each
manifold should be preferred. Further, matrix manifold models are used to
yield scientific insight in the context of cognitive neuroscience, and we
conclude that our methods are suitable for basic inference as well as accurate
prediction.
Comment: All datasets and computer programs are publicly available at
http://www.ics.uci.edu/~babaks/Site/Codes.htm
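As a rough illustration of the geometry involved, the following sketch implements the two primitives a gradient-based sampler such as Hybrid Monte Carlo needs on the Stiefel manifold: projecting an ambient gradient onto the tangent space at an orthonormal basis Y, and retracting a step back onto the manifold. The function names are hypothetical, and this is not the code released at the URL above.

```python
import numpy as np

def project_to_tangent(Y, G):
    """Project an ambient matrix G onto the tangent space of the
    Stiefel manifold at Y (where Y.T @ Y = I)."""
    sym = (Y.T @ G + G.T @ Y) / 2.0
    return G - Y @ sym

def retract(Y, V):
    """QR retraction: map the tangent step V back onto the manifold."""
    Q, R = np.linalg.qr(Y + V)
    # Fix column signs so the retraction is continuous in V.
    return Q * np.sign(np.sign(np.diag(R)) + 0.5)
```

A leapfrog step of the sampler would project the log-posterior gradient, move along it, and retract; on the Grassmannian one additionally quotients out rotations of the basis, for instance by working with the projector Y @ Y.T rather than Y itself.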
The ROMES method for statistical modeling of reduced-order-model error
This work presents a technique for statistically modeling errors introduced
by reduced-order models. The method employs Gaussian-process regression to
construct a mapping from a small number of computationally inexpensive 'error
indicators' to a distribution over the true error. The variance of this
distribution can be interpreted as the (epistemic) uncertainty introduced by
the reduced-order model. To model normed errors, the method employs existing
rigorous error bounds and residual norms as indicators; numerical experiments
show that the method leads to a near-optimal expected effectivity in contrast
to typical error bounds. To model errors in general outputs, the method uses
dual-weighted residuals, which are amenable to uncertainty control, as
indicators. Experiments illustrate that correcting the reduced-order-model
output with this surrogate can improve prediction accuracy by an order of
magnitude; this contrasts with existing `multifidelity correction' approaches,
which often fail for reduced-order models and suffer from the curse of
dimensionality. The proposed error surrogates also lead to a notion of
'probabilistic rigor', i.e., the surrogate bounds the error with a specified
probability.
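A minimal sketch of the core ROMES construction, with placeholder data: a Gaussian process regresses observed reduced-order-model errors on a cheap residual-norm indicator, and its predictive standard deviation plays the role of the epistemic uncertainty. The training values, log transform, and kernel choice below are assumptions for illustration, not the paper's experiments.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Offline stage: residual-norm indicators and the corresponding true ROM
# errors at a handful of training parameters (hypothetical values).
rho = np.array([[0.02], [0.05], [0.11], [0.24], [0.40]])   # indicators
err = np.array([0.015, 0.04, 0.09, 0.20, 0.35])            # true errors

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4),
    normalize_y=True,
)
gp.fit(np.log(rho), np.log(err))   # log-transform keeps errors positive

# Online stage: a new cheap indicator yields a distribution over the true
# error; the mean can correct the ROM output, the std quantifies doubt.
mean, std = gp.predict(np.log([[0.08]]), return_std=True)
print(f"predicted log-error: {mean[0]:.3f} +/- {std[0]:.3f}")
```

The distribution over the log-error is what licenses the 'probabilistic rigor' mentioned above: an upper quantile of the GP posterior bounds the true error with the corresponding probability.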
- …