A Dynamically Adaptive Sparse Grid Method for Quasi-Optimal Interpolation of Multidimensional Analytic Functions
In this work we develop a dynamically adaptive sparse grid (SG) method for
quasi-optimal interpolation of multidimensional analytic functions defined over
a product of one-dimensional bounded domains. The goal of such an approach is to
construct an interpolant in a polynomial space corresponding to the "best terms"
of the expansion, based on sharp a priori estimates of the polynomial coefficients.
In the past, SG methods have been successful in achieving this, with a traditional construction
methods have been successful in achieving this, with a traditional construction
that relies on the solution to a Knapsack problem: only the most profitable
hierarchical surpluses are added to the SG. However, this approach requires
additional sharp estimates related to the size of the analytic region and the
norm of the interpolation operator, i.e., the Lebesgue constant. Instead, we
present an iterative SG procedure that adaptively refines an estimate of the
region and accounts for the effects of the Lebesgue constant. Our approach does
not require any a priori knowledge of the analyticity or operator norm, is
easily generalized to both affine and non-affine analytic functions, and can be
applied to sparse grids built from one-dimensional rules with arbitrary growth
of the number of nodes. In several numerical examples, we utilize our
dynamically adaptive SG to interpolate quantities of interest related to the
solutions of parametrized elliptic and hyperbolic PDEs, and compare the
performance of our quasi-optimal interpolant to several alternative SG schemes.
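The traditional construction referred to above, greedily adding the most profitable hierarchical surpluses to a downward-closed index set, can be sketched in a few lines of Python. Everything in the sketch (the `estimate_profit` callback, the level indexing, the budget-based stopping rule) is an illustrative assumption, not the authors' code, and the paper's actual contribution (avoiding a priori estimates of the analyticity region and the Lebesgue constant) is not reproduced here.

```python
import heapq

def adaptive_sparse_grid(estimate_profit, dim, budget):
    """Greedy, profit-driven selection of tensor levels for a sparse grid.

    estimate_profit(idx) is assumed to return the benefit of adding the
    multi-index idx (e.g. an estimated surplus norm divided by the number of
    new nodes); it stands in for the sharp a priori coefficient estimates
    that a quasi-optimal construction relies on.
    """
    root = (0,) * dim
    accepted = set()
    active = [(-estimate_profit(root), root)]          # max-heap via negation
    while active and len(accepted) < budget:
        _, idx = heapq.heappop(active)
        if idx in accepted:
            continue
        accepted.add(idx)
        for d in range(dim):                           # forward neighbours
            nb = tuple(idx[k] + (k == d) for k in range(dim))
            # admissible only if every backward neighbour is already accepted
            admissible = all(
                tuple(nb[k] - (k == e) for k in range(dim)) in accepted
                for e in range(dim) if nb[e] > 0
            )
            if admissible and nb not in accepted:
                heapq.heappush(active, (-estimate_profit(nb), nb))
    return accepted

# Toy profit with the exponential decay typical of analytic functions:
index_set = adaptive_sparse_grid(lambda idx: 2.0 ** -sum(idx), dim=3, budget=20)
```

Keeping the index set downward-closed (admissible) is what makes the resulting grid a valid sparse grid; the quality of the interpolant then hinges on how well the profit estimate is chosen, which is exactly where the abstract's iterative refinement of the analyticity estimate comes in.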
Accelerating Asymptotically Exact MCMC for Computationally Intensive Models via Local Approximations
We construct a new framework for accelerating Markov chain Monte Carlo in
posterior sampling problems where standard methods are limited by the
computational cost of the likelihood, or of numerical models embedded therein.
Our approach introduces local approximations of these models into the
Metropolis-Hastings kernel, borrowing ideas from deterministic approximation
theory, optimization, and experimental design. Previous efforts at integrating
approximate models into inference typically sacrifice either the sampler's
exactness or efficiency; our work seeks to address these limitations by
exploiting useful convergence characteristics of local approximations. We prove
the ergodicity of our approximate Markov chain, showing that it samples
asymptotically from the \emph{exact} posterior distribution of interest. We
describe variations of the algorithm that employ either local polynomial
approximations or local Gaussian process regressors. Our theoretical results
reinforce the key observation underlying this paper: when the likelihood has
some \emph{local} regularity, the number of model evaluations per MCMC step can
be greatly reduced without biasing the Monte Carlo average. Numerical
experiments demonstrate multiple order-of-magnitude reductions in the number of
forward model evaluations used in representative ODE and PDE inference
problems, with both synthetic and real data.
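As a rough illustration of the central idea, scoring Metropolis-Hastings proposals with a cheap local surrogate built from cached exact evaluations and refining that surrogate as the chain moves, here is a minimal Python sketch. The surrogate type (a local linear fit), the refinement rule (random refreshes), and all names are assumptions made for illustration; the paper's algorithm uses local polynomial or Gaussian process models with error-driven refinement and carries the asymptotic exactness guarantees described above.

```python
import numpy as np

def local_approx_mh(log_post, x0, n_steps, prop_std=0.5, k=8,
                    refine_prob=0.1, rng=None):
    """Random-walk Metropolis-Hastings with a local linear surrogate.

    log_post(x) is the expensive log-posterior. Exact evaluations are cached;
    each point is scored with a local linear fit through its k nearest cached
    points, and with probability refine_prob the exact model is evaluated at
    the proposal and added to the cache. A caricature, not the paper's method.
    """
    rng = rng or np.random.default_rng()
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    pts, vals = [x.copy()], [log_post(x)]

    def approx(z):
        P, v = np.asarray(pts), np.asarray(vals)
        near = np.argsort(np.linalg.norm(P - z, axis=1))[:k]
        A = np.column_stack([np.ones(near.size), P[near] - z])
        coef, *_ = np.linalg.lstsq(A, v[near], rcond=None)
        return coef[0]                       # fitted value at z

    chain = [x.copy()]
    for _ in range(n_steps):
        z = x + prop_std * rng.standard_normal(x.size)
        if rng.random() < refine_prob:       # occasionally pay for the truth
            pts.append(z.copy())
            vals.append(log_post(z))
        if np.log(rng.random()) < approx(z) - approx(x):
            x = z
        chain.append(x.copy())
    return np.array(chain)
```

Even in this toy the point of the abstract is visible: the expected number of `log_post` calls per step is roughly `refine_prob` rather than one, and the paper's theory addresses when such savings come without biasing the Monte Carlo average.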
Far-Field Compression for Fast Kernel Summation Methods in High Dimensions
We consider fast kernel summations in high dimensions: given a large set of
points in a high-dimensional space and a pair-potential function (the
\emph{kernel} function), we compute a weighted sum of all pairwise kernel
interactions for each point in the set. Direct summation is equivalent to a
(dense) matrix-vector multiplication and scales quadratically with the number
of points. Fast kernel summation algorithms reduce this cost to log-linear or
linear complexity.
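For concreteness, the quadratic-cost baseline being accelerated is just a dense kernel matrix-vector product; the short sketch below assumes a Gaussian kernel purely for illustration.

```python
import numpy as np

def direct_kernel_sum(X, w, bandwidth=1.0):
    """Naive O(N^2) kernel summation: u[i] = sum_j k(x_i, x_j) * w[j].

    Forms the dense N-by-N Gaussian kernel matrix explicitly and multiplies it
    by the weight vector -- exactly the cost that treecodes, FMMs, and the
    compression scheme described below are designed to avoid.
    """
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * bandwidth ** 2))
    return K @ w
```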
Treecodes and Fast Multipole Methods (FMMs) deliver tremendous speedups by
constructing approximate representations of interactions of points that are far
from each other. In algebraic terms, these representations correspond to
low-rank approximations of blocks of the overall interaction matrix. Existing
approaches require an excessive number of kernel evaluations with increasing
dimension and number of points in the dataset.
To address this issue, we use a randomized algebraic approach in which we
first sample the rows of a block and then construct its approximate, low-rank
interpolative decomposition. We examine the feasibility of this approach
theoretically and experimentally. We provide a new theoretical result showing a
tighter bound on the reconstruction error from uniformly sampling rows than the
existing state-of-the-art. We demonstrate that our sampling approach is
competitive with existing (but prohibitively expensive) methods from the
literature. We also construct kernel matrices for the Laplacian, Gaussian, and
polynomial kernels -- all commonly used in physics and data analysis. We
explore the numerical properties of blocks of these matrices, and show that
they are amenable to our approach. Depending on the data set, our randomized
algorithm can successfully compute low-rank approximations in high dimensions.
We report results for data sets with ambient dimensions from four to 1,000.
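A minimal sketch of the sampling idea follows, under the assumption that the low-rank step is a column interpolative decomposition computed from a uniformly sampled subset of rows (via a column-pivoted QR); the function name, the fixed target rank, and the sampling rule are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def sampled_column_id(K_block, rank, n_samples, rng=None):
    """Low-rank approximation of an off-diagonal kernel block via row sampling.

    Uniformly sample n_samples rows of the block, run a column-pivoted QR on
    the sampled submatrix to pick skeleton columns J and an interpolation
    matrix T with  K[S, :] ~= K[S, J] @ T,  then reuse J and T for the whole
    block:  K ~= K[:, J] @ T.
    """
    rng = rng or np.random.default_rng()
    m, n = K_block.shape
    S = rng.choice(m, size=min(n_samples, m), replace=False)
    Q, R, piv = qr(K_block[S, :], mode='economic', pivoting=True)
    k = min(rank, R.shape[0])
    T = np.zeros((k, n))
    T[:, piv[:k]] = np.eye(k)
    if k < n:
        # express the remaining columns in terms of the k skeleton columns
        T[:, piv[k:]] = solve_triangular(R[:k, :k], R[:k, k:])
    return piv[:k], T          # K_block ~= K_block[:, piv[:k]] @ T
```

Because only the sampled rows of the block are ever formed, the number of kernel evaluations scales with the sample size rather than with the full block, which is the saving the abstract quantifies through its reconstruction-error bound.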
Improving Efficiency and Scalability of Sum of Squares Optimization: Recent Advances and Limitations
It is well-known that any sum of squares (SOS) program can be cast as a
semidefinite program (SDP) of a particular structure and that therein lies the
computational bottleneck for SOS programs, as the SDPs generated by this
procedure are large and costly to solve when the polynomials involved in the
SOS programs have many variables or high degree. In this paper, we
review SOS optimization techniques and present two new methods for improving
their computational efficiency. The first method leverages the sparsity of the
underlying SDP to obtain computational speed-ups. Further improvements can be
obtained if the coefficients of the polynomials that describe the problem have
a particular sparsity pattern, called chordal sparsity. The second method
bypasses semidefinite programming altogether and relies instead on solving a
sequence of more tractable convex programs, namely linear and second order cone
programs. This opens up the question as to how well one can approximate the
cone of SOS polynomials by second order representable cones. In the last part
of the paper, we present some recent negative results related to this question.
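The SOS-to-SDP reduction mentioned at the start can be made concrete with a tiny univariate example: certifying that p(x) = x^4 + 4x^3 + 6x^2 + 4x + 5 is a sum of squares amounts to finding a positive semidefinite Gram matrix Q with p(x) = z(x)^T Q z(x) for the monomial vector z(x) = [1, x, x^2]. The snippet below assumes cvxpy with an SDP-capable solver is installed; the paper's contributions (exploiting chordal sparsity and replacing the SDP with LP/SOCP surrogates) are not shown.

```python
import cvxpy as cp

# Gram-matrix formulation: p(x) = z(x)^T Q z(x) with z(x) = [1, x, x^2]
Q = cp.Variable((3, 3), PSD=True)
constraints = [
    Q[0, 0] == 5,                # constant term
    2 * Q[0, 1] == 4,            # coefficient of x
    Q[1, 1] + 2 * Q[0, 2] == 6,  # coefficient of x^2
    2 * Q[1, 2] == 4,            # coefficient of x^3
    Q[2, 2] == 1,                # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
print(prob.status)   # a feasible (optimal) status certifies that p is SOS
```

The five equality constraints come from matching the coefficients of 1, x, x^2, x^3, x^4; the size of Q, and hence of the SDP, grows combinatorially with the number of variables and the degree, which is precisely the bottleneck the two methods in the paper target.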
Stochastic collocation on unstructured multivariate meshes
Collocation has become a standard tool for approximation of parameterized
systems in the uncertainty quantification (UQ) community. Techniques for
least-squares regularization, compressive sampling recovery, and interpolatory
reconstruction are becoming standard tools used in a variety of applications.
Selection of a collocation mesh is frequently a challenge, but methods that
construct geometrically "unstructured" collocation meshes have shown great
potential due to attractive theoretical properties and direct, simple
generation and implementation. We investigate properties of these meshes,
presenting stability and accuracy results that can be used as guides for
generating stochastic collocation grids in multiple dimensions.
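As a one-dimensional caricature of collocation on an unstructured mesh, the sketch below draws random nodes, evaluates a model at them, and recovers a Legendre expansion by least squares. The oversampling factor and the basis are illustrative assumptions; the kind of stability and accuracy analysis surveyed in the paper is what guides these choices, especially in several dimensions.

```python
import numpy as np
from numpy.polynomial import legendre

def least_squares_collocation(model, degree, n_nodes, rng=None):
    """Fit a Legendre expansion of model() on [-1, 1] by least squares,
    using function values at randomly drawn (unstructured) collocation nodes.
    """
    rng = rng or np.random.default_rng()
    nodes = rng.uniform(-1.0, 1.0, size=n_nodes)
    values = np.array([model(x) for x in nodes])
    V = legendre.legvander(nodes, degree)        # Vandermonde-like design matrix
    coef, *_ = np.linalg.lstsq(V, values, rcond=None)
    return coef                                  # evaluate with legendre.legval

# Example: surrogate for a smooth quantity of interest, with ~2x oversampling
coef = least_squares_collocation(lambda x: np.exp(0.5 * x) / (1.0 + x ** 2),
                                 degree=8, n_nodes=18)
```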