Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations
We analyze the convergence of compressive sensing based sampling techniques
for the efficient evaluation of functionals of solutions for a class of
high-dimensional, affine-parametric, linear operator equations which depend on
possibly infinitely many parameters. The proposed algorithms are based on
so-called "non-intrusive" sampling of the high-dimensional parameter space,
reminiscent of Monte-Carlo sampling. In contrast to Monte-Carlo, however, a
functional of the parametric solution is then computed via compressive sensing
methods from samples of functionals of the solution. A key ingredient in our
analysis, which is of independent interest, is a generalization of recent results
on the approximate sparsity of generalized polynomial chaos representations
(gpc) of the parametric solution families, in terms of the gpc series with
respect to tensorized Chebyshev polynomials. In particular, we establish
sufficient conditions on the parametric inputs to the parametric operator
equation such that the Chebyshev coefficients of the gpc expansion are
contained in certain weighted $\ell_p$-spaces for $0 < p \leq 1$. Based on this we
show that reconstructions of the parametric solutions computed from the sampled
problems converge, with high probability, at the $L^2$, resp. $L^\infty$,
convergence rates afforded by best $s$-term approximations of the parametric
solution, up to logarithmic factors.
Comment: revised version, 27 pages
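The recovery step can be illustrated with a generic sparse-recovery sketch (not the paper's algorithm): a coefficient vector that is sparse in a Chebyshev basis is reconstructed from random point samples, here by orthogonal matching pursuit; the basis size, sample count, and sparsity level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (our choice): basis size N, samples m < N, sparsity s
N, m, s = 64, 48, 4

# Sparse Chebyshev coefficient vector, a stand-in for a compressible gpc expansion
x_true = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x_true[support] = rng.normal(size=s)

# "Non-intrusive" sampling: evaluate the expansion at m random parameters in [-1, 1]
t = rng.uniform(-1.0, 1.0, size=m)
A = np.cos(np.outer(np.arccos(t), np.arange(N)))  # A[i, j] = T_j(t_i)
y = A @ x_true

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily recover an s-sparse x with y ~ A x."""
    norms = np.linalg.norm(A, axis=0)
    r, idx = y.copy(), []
    for _ in range(s):
        # Pick the column most correlated with the residual, then refit
        idx.append(int(np.argmax(np.abs(A.T @ r) / norms)))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

x_hat = omp(A, y, s)
print(np.max(np.abs(x_hat - x_true)))
```

In the paper's setting the samples come from Petrov-Galerkin solves of the operator equation and the recovery uses weighted $\ell_1$-minimization; OMP is used here only to keep the sketch short.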
Multi-index Stochastic Collocation convergence rates for random PDEs with parametric regularity
We analyze the recent Multi-index Stochastic Collocation (MISC) method for
computing statistics of the solution of a partial differential equation (PDE)
with random data, where the random coefficient is parametrized by means of a
countable sequence of terms in a suitable expansion. MISC is a combination
technique based on mixed differences of spatial approximations and quadratures
over the space of random data and, naturally, the error analysis uses the joint
regularity of the solution with respect to both the physical-domain variables
and the parametric variables. In MISC, the number of problem solutions
performed at each discretization level is not determined by balancing the
spatial and stochastic components of the error, but rather by suitably
extending the knapsack-problem approach employed in the construction of the
quasi-optimal sparse-grids and Multi-index Monte Carlo methods. We use a greedy
optimization procedure to select the most effective mixed differences to
include in the MISC estimator. We apply our theoretical estimates to a linear
elliptic PDE in which the log-diffusion coefficient is modeled as a random
field, with a covariance similar to a Mat\'ern model, whose realizations have
spatial regularity determined by a scalar parameter. We conduct a complexity
analysis based on a summability argument showing algebraic rates of convergence
with respect to the overall computational work. The rate of convergence depends
on the smoothness parameter, the physical dimensionality and the efficiency of
the linear solver. Numerical experiments show the effectiveness of MISC in this
infinite-dimensional setting compared with the Multi-index Monte Carlo method
and compare the observed convergence rate with the rates predicted by our
theoretical analysis.
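The mixed-difference construction can be sketched in a toy setting (our own illustration, not the paper's discretization): tensorized trapezoid quadratures over $[0,1]^2$ stand in for the hierarchy of spatial approximations and quadratures, and a fixed simplex index set replaces the greedy knapsack selection.

```python
import numpy as np

def trap_nodes_weights(level):
    """Composite trapezoid rule on [0, 1] with 2**level + 1 points."""
    n = 2**level
    x = np.linspace(0.0, 1.0, n + 1)
    w = np.full(n + 1, 1.0 / n)
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def tensor_quad(f2, lx, ly):
    """Tensorized trapezoid quadrature at mixed level (lx, ly)."""
    x, wx = trap_nodes_weights(lx)
    y, wy = trap_nodes_weights(ly)
    X, Y = np.meshgrid(x, y, indexing="ij")
    return wx @ f2(X, Y) @ wy

def mixed_difference(f2, lx, ly):
    """First-order mixed difference Delta[lx, ly] of the tensor quadrature."""
    d = tensor_quad(f2, lx, ly)
    if lx > 0:
        d -= tensor_quad(f2, lx - 1, ly)
    if ly > 0:
        d -= tensor_quad(f2, lx, ly - 1)
    if lx > 0 and ly > 0:
        d += tensor_quad(f2, lx - 1, ly - 1)
    return d

def misc_estimate(f2, L):
    """Sum the mixed differences over the simplex index set lx + ly <= L."""
    return sum(mixed_difference(f2, lx, ly)
               for lx in range(L + 1) for ly in range(L + 1 - lx))

f2 = lambda X, Y: np.exp(X + Y)      # smooth surrogate for a quantity of interest
exact = (np.e - 1.0) ** 2
print(abs(misc_estimate(f2, 8) - exact))
```

Summing the mixed differences over the simplex reproduces the classical combination technique; MISC instead selects the most profitable differences greedily, which this fixed index set does not capture.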
Monte Carlo Greeks for financial products via approximative transition densities
In this paper we introduce efficient Monte Carlo estimators for the valuation
of high-dimensional derivatives and their sensitivities ("Greeks"). These
estimators are based on an analytical, usually approximative representation of
the underlying density. We study approximative densities obtained by the WKB
method. The results are applied in the context of a Libor market model.
Comment: 24 pages
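A minimal sketch of a density-based Greek estimator, assuming the Black-Scholes model, where the transition density is known in closed form (in the paper's setting a WKB approximation supplies the density instead): the delta of a call is estimated by weighting the payoff with the score of the lognormal transition density, then compared with the analytic value.

```python
import numpy as np
from math import log, sqrt, erf

rng = np.random.default_rng(1)

# Illustrative Black-Scholes call parameters (our choice)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n = 400_000

# Simulate terminal prices under the exact lognormal transition density
Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Z)
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)

# Likelihood-ratio delta: weight the payoff by d/dS0 of log p(ST | S0),
# where p is the (here exact, in general approximate) transition density
score = (np.log(ST / S0) - (r - 0.5 * sigma**2) * T) / (sigma**2 * T * S0)
delta_mc = float(np.mean(payoff * score))

# Closed-form Black-Scholes delta for comparison
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
delta_bs = 0.5 * (1.0 + erf(d1 / sqrt(2.0)))

print(delta_mc, delta_bs)
```

The likelihood-ratio weight requires no differentiability of the payoff, which is what makes an analytical (possibly approximative) density representation attractive for Greeks.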
Multilevel Sparse Grid Methods for Elliptic Partial Differential Equations with Random Coefficients
Stochastic sampling methods are arguably the most direct and least intrusive
means of incorporating parametric uncertainty into numerical simulations of
partial differential equations with random inputs. However, to achieve an
overall error that is within a desired tolerance, a large number of sample
simulations may be required (to control the sampling error), each of which may
need to be run at high levels of spatial fidelity (to control the spatial
error). Multilevel sampling methods aim to achieve the same accuracy as
traditional sampling methods, but at a reduced computational cost, through the
use of a hierarchy of spatial discretization models. Multilevel algorithms
coordinate the number of samples needed at each discretization level by
minimizing the computational cost, subject to a given error tolerance. They can
be applied to a variety of sampling schemes, exploit nesting when available,
can be implemented in parallel and can be used to inform adaptive spatial
refinement strategies. We extend the multilevel sampling algorithm to sparse
grid stochastic collocation methods, discuss its numerical implementation and
demonstrate its efficiency both theoretically and by means of numerical
examples.
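The cost-minimizing sample allocation described above can be sketched as follows (a generic multilevel allocation under hypothetical per-level variance and cost figures, not the paper's sparse-grid estimator): minimizing total cost subject to a variance budget gives sample counts proportional to sqrt(V_l / C_l).

```python
import math

# Hypothetical per-level variances V_l of the level corrections and costs C_l:
# variance decays, cost grows, as in a geometric discretization hierarchy
L = 5
V = [4.0 * 0.25**l for l in range(L + 1)]   # V_l ~ 4^{-l}
C = [1.0 * 4.0**l for l in range(L + 1)]    # C_l ~ 4^{l}

eps = 0.01   # target sampling error; variance budget eps**2 / 2

# Lagrange-multiplier solution of: minimize sum N_l * C_l
# subject to sum V_l / N_l <= eps**2 / 2, giving N_l ~ sqrt(V_l / C_l)
lam = sum(math.sqrt(v * c) for v, c in zip(V, C))
N = [math.ceil(2.0 / eps**2 * math.sqrt(v / c) * lam) for v, c in zip(V, C)]

total_cost = sum(n * c for n, c in zip(N, C))
sampling_var = sum(v / n for v, n in zip(V, N))
print(N)
print(total_cost, sampling_var)
```

The same allocation formula applies whether the per-level estimator is Monte Carlo or, as in this paper, a sparse-grid stochastic collocation rule; only the interpretation of V_l and C_l changes.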
Multi-level higher order QMC Galerkin discretization for affine parametric operator equations
We develop a convergence analysis of a multi-level algorithm combining higher
order quasi-Monte Carlo (QMC) quadratures with general Petrov-Galerkin
discretizations of countably affine parametric operator equations of elliptic
and parabolic type, extending both the multi-level first order analysis in
[\emph{F.Y.~Kuo, Ch.~Schwab, and I.H.~Sloan, Multi-level quasi-Monte Carlo
finite element methods for a class of elliptic partial differential equations
with random coefficient} (in review)] and the single level higher order
analysis in [\emph{J.~Dick, F.Y.~Kuo, Q.T.~Le~Gia, D.~Nuyens, and Ch.~Schwab,
Higher order QMC Galerkin discretization for parametric operator equations} (in
review)]. We cover, in particular, both definite as well as indefinite,
strongly elliptic systems of partial differential equations (PDEs) in
non-smooth domains, and discuss in detail the impact of higher order
derivatives of Karhunen-Lo\`eve eigenfunctions in the parametrization of random PDE inputs
on the convergence results. Based on our \emph{a-priori} error bounds, concrete
choices of algorithm parameters are proposed in order to achieve a prescribed
accuracy under minimal computational work. Problem classes and sufficient
conditions on data are identified where multi-level higher order QMC
Petrov-Galerkin algorithms outperform the corresponding single level versions
of these algorithms. Numerical experiments confirm the theoretical results.
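A plain first-order rank-1 lattice rule (a simpler relative of the higher-order QMC rules analyzed here) already illustrates the QMC advantage on a smooth periodic integrand; the Fibonacci generating vector below is a standard two-dimensional illustrative choice, not taken from the paper.

```python
import numpy as np

def lattice_points(n, z):
    """Rank-1 lattice rule: x_i = frac(i * z / n), i = 0..n-1."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] / n) % 1.0

# Fibonacci lattice in 2D: n = F_15 = 610 points, generating vector (1, F_14)
n, z = 610, (1, 377)
pts = lattice_points(n, z)

# Smooth periodic integrand with known integral 1 over [0, 1]^2
f = lambda x: np.prod(1.0 + np.sin(2.0 * np.pi * x), axis=1)
qmc_err = abs(float(np.mean(f(pts))) - 1.0)

# Plain Monte Carlo with the same number of points, for comparison
rng = np.random.default_rng(2)
mc_err = abs(float(np.mean(f(rng.random((n, 2))))) - 1.0)

print(qmc_err, mc_err)
```

First-order lattice rules achieve near-O(1/n) rates on such integrands; the interlaced higher-order constructions in the paper push the rate further for sufficiently smooth parametric integrands.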
Sparse Deterministic Approximation of Bayesian Inverse Problems
We present a parametric deterministic formulation of Bayesian inverse
problems with input parameters from infinite-dimensional, separable Banach
spaces. In this formulation, the forward problems are parametric, deterministic
elliptic partial differential equations, and the inverse problem is to
determine the unknown, parametric deterministic coefficients from noisy
observations comprising linear functionals of the solution.
We prove a generalized polynomial chaos representation of the posterior
density with respect to the prior measure, given noisy observational data. We
analyze the sparsity of the posterior density in terms of the summability of
the input data's coefficient sequence. To this end, we estimate the
fluctuations in the prior. We exhibit sufficient conditions on the prior model
in order for approximations of the posterior density to converge at a given
algebraic rate, in terms of the number of unknowns appearing in the
parametric representation of the prior measure. Similar sparsity and
approximation results are also exhibited for the solution and covariance of the
elliptic partial differential equation under the posterior. These results then
form the basis for efficient uncertainty quantification, in the presence of
data with noise.
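The density formulation can be sketched for a hypothetical finite-dimensional linear forward map standing in for the parametric PDE functional (all names and sizes below are illustrative): the posterior density with respect to the uniform prior is proportional to the exponential of the negative data misfit.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical truncated parametric forward model: the observed functional
# G(y) depends on finitely many parameters y_j in [-1, 1] through a decaying
# coefficient sequence (a stand-in for a functional of the PDE solution)
J = 8
psi = 0.5 ** np.arange(1, J + 1)
G = lambda y: float(psi @ y)

# Noisy observation: delta = G(y_true) + noise, noise ~ N(0, gamma^2)
gamma = 0.05
y_true = rng.uniform(-1.0, 1.0, size=J)
delta = G(y_true) + gamma * rng.standard_normal()

def posterior_density(y):
    """Unnormalized density of the posterior w.r.t. the uniform prior:
    Theta(y) = exp(-Phi(y)) with misfit Phi(y) = |delta - G(y)|^2 / (2 gamma^2)."""
    misfit = delta - G(y)
    return float(np.exp(-0.5 * misfit**2 / gamma**2))

# Prior Monte Carlo estimate of the normalization constant Z = E_prior[Theta]
ys = rng.uniform(-1.0, 1.0, size=(20_000, J))
Z = float(np.mean([posterior_density(y) for y in ys]))
print(Z)
```

The paper's point is that this density, viewed as a function of the parameter sequence, inherits the sparsity of the forward map, so the plain Monte Carlo average above can be replaced by sparse deterministic quadrature.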