Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format
We apply the Tensor Train (TT) decomposition to construct the tensor product
Polynomial Chaos Expansion (PCE) of a random field, to solve the stochastic
elliptic diffusion PDE with the stochastic Galerkin discretization, and to
compute some quantities of interest (mean, variance, exceedance probabilities).
We assume that the random diffusion coefficient is given as a smooth
transformation of a Gaussian random field. In this case, the PCE is delivered
by a complicated formula, which lacks an analytic TT representation. To
construct its TT approximation numerically, we develop the new block TT cross
algorithm, a method that computes the whole TT decomposition from a few
evaluations of the PCE formula. The new method is conceptually similar to the
adaptive cross approximation in the TT format, but is more efficient when
several tensors must be stored in the same TT representation, which is the case
for the PCE. In addition, we demonstrate how to assemble the stochastic Galerkin
matrix and compute the solution of the elliptic equation and its
post-processing entirely within the TT format.
We compare our technique with the traditional sparse polynomial chaos and the
Monte Carlo approaches. In the tensor product polynomial chaos, the polynomial
degree is bounded for each random variable independently. This provides higher
accuracy than the sparse polynomial set or the Monte Carlo method, but the
cardinality of the tensor product set grows exponentially with the number of
random variables. However, when the PCE coefficients are implicitly
approximated in the TT format, the computations with the full tensor product
polynomial set become possible. In the numerical experiments, we confirm that
the new methodology is competitive in a wide range of parameters, especially
where high accuracy and high polynomial degrees are required.
Comment: This is a major revision of the manuscript arXiv:1406.2816 with
significantly extended numerical experiments; some unused material has been removed.
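For readers new to the format, the following is a minimal sketch of the plain TT decomposition that this abstract builds on: the classical TT-SVD, which compresses a full array by sequential truncated SVDs. It is not the paper's block TT cross algorithm (cross methods avoid ever forming the full tensor); the function names and tolerance below are illustrative only.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Compress a full d-way array into TT cores by sequential truncated SVDs."""
    shape = tensor.shape
    d = len(shape)
    # Distribute the overall relative tolerance over the d-1 truncations.
    delta = eps * np.linalg.norm(tensor) / max(np.sqrt(d - 1), 1.0)
    cores, rank = [], 1
    rest = tensor.reshape(shape[0], -1)   # unfolding carried through the sweep
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(rest, full_matrices=False)
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]  # tail[r]: error if rank r kept
        ok = np.nonzero(tail <= delta)[0]
        r = max(int(ok[0]) if ok.size else len(s), 1)
        cores.append(U[:, :r].reshape(rank, shape[k], r))
        rest = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        rank = r
    cores.append(rest.reshape(rank, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full array (for testing only)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=(-1, 0))
    return out.reshape(out.shape[1:-1])

rng = np.random.default_rng(0)
# A 4-way tensor that is a sum of two rank-1 terms, so all TT ranks are <= 2.
vs = [rng.standard_normal((2, n)) for n in (5, 6, 7, 8)]
T = (np.einsum('i,j,k,l->ijkl', *(v[0] for v in vs))
     + np.einsum('i,j,k,l->ijkl', *(v[1] for v in vs)))
cores = tt_svd(T, eps=1e-12)
print([c.shape for c in cores])   # TT ranks bounded by 2
print(np.linalg.norm(tt_to_full(cores) - T) / np.linalg.norm(T))
```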
Adaptive stochastic Galerkin FEM for lognormal coefficients in hierarchical tensor representations
Stochastic Galerkin methods for non-affine coefficient representations are
known to cause major difficulties from theoretical and numerical points of
view. In this work, an adaptive Galerkin FE method for linear parametric PDEs
with lognormal coefficients discretized in Hermite chaos polynomials is
derived. It employs problem-adapted function spaces to ensure solvability of
the variational formulation. The inherently high computational complexity of
the parametric operator is made tractable by using hierarchical tensor
representations. For this, a new tensor train format of the lognormal
coefficient is derived and verified numerically. The central novelty is the
derivation of a reliable residual-based a posteriori error estimator. This can
be regarded as a unique feature of stochastic Galerkin methods. It allows for
an adaptive algorithm to steer the refinements of the physical mesh and the
anisotropic Wiener chaos polynomial degrees. For the evaluation of the error
estimator to become feasible, a numerically efficient tensor format
discretization is developed. Benchmark examples with unbounded lognormal
coefficient fields illustrate the performance of the proposed Galerkin
discretization and the fully adaptive algorithm.
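As a small numerical illustration of the lognormal/Hermite chaos setting above (not the paper's tensor train construction), one can check the classical one-dimensional identity that such representations build on: for xi ~ N(0,1), the generating function of the probabilists' Hermite polynomials gives exp(sigma*xi) = e^{sigma^2/2} * sum_k (sigma^k / k!) He_k(xi). The sketch below verifies the resulting orthonormal-basis coefficients by Gauss-Hermite quadrature; all parameter values are illustrative.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import exp, factorial, sqrt

# In the orthonormal basis h_k = He_k / sqrt(k!), the Hermite chaos
# coefficients of exp(sigma*xi) are c_k = e^{sigma^2/2} * sigma^k / sqrt(k!).
sigma, kmax, nquad = 0.8, 12, 60
x, w = He.hermegauss(nquad)          # Gauss nodes/weights for weight e^{-x^2/2}
w = w / np.sqrt(2.0 * np.pi)         # renormalize to the standard normal density

for k in range(kmax + 1):
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0 / sqrt(factorial(k))               # orthonormal h_k
    hk = He.hermeval(x, coeffs)
    c_quad = float(np.sum(w * np.exp(sigma * x) * hk)) # E[exp(sigma*xi) h_k(xi)]
    c_exact = exp(sigma**2 / 2.0) * sigma**k / sqrt(factorial(k))
    print(f"k={k:2d}  quadrature={c_quad:.10f}  closed form={c_exact:.10f}")
```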
A Dimension-Adaptive Multi-Index Monte Carlo Method Applied to a Model of a Heat Exchanger
We present an adaptive version of the Multi-Index Monte Carlo method,
introduced by Haji-Ali, Nobile and Tempone (2016), for simulating PDEs with
coefficients that are random fields. A classical technique for sampling from
these random fields is the Karhunen-Loève expansion. Our adaptive algorithm
is based on the adaptive algorithm used in sparse grid cubature as introduced
by Gerstner and Griebel (2003), and automatically chooses the number of terms
needed in this expansion, as well as the required spatial discretizations of
the PDE model. We apply the method to a simplified model of a heat exchanger
with random insulator material, where the stochastic characteristics are
modeled as a lognormal random field, and we show consistent computational
savings.
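To make the Karhunen-Loève sampling step above concrete, here is a hedged sketch of drawing one realization of a lognormal field from a truncated KL expansion. The exponential covariance, grid, and the function name are illustrative assumptions, not the paper's heat-exchanger model; the truncation `n_terms` plays the role of the expansion length that the adaptive algorithm selects automatically.

```python
import numpy as np

def sample_lognormal_field(x, corr_len=0.3, sigma=1.0, n_terms=10, rng=None):
    """Draw one realization of a 1D lognormal field exp(g(x)), where g has an
    exponential covariance, via a truncated Karhunen-Loeve expansion.

    The KL eigenpairs are approximated by a Nystrom-type discrete
    eigenproblem on the grid (a stand-in for analytic eigenpairs).
    """
    rng = rng or np.random.default_rng()
    h = x[1] - x[0]                                   # uniform grid spacing
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    lam, vec = np.linalg.eigh(h * C)                  # discretized integral operator
    idx = np.argsort(lam)[::-1][:n_terms]             # keep the n_terms largest modes
    lam, phi = lam[idx], vec[:, idx] / np.sqrt(h)     # L2-normalized eigenfunctions
    xi = rng.standard_normal(n_terms)                 # i.i.d. N(0,1) KL coordinates
    return np.exp(phi @ (np.sqrt(lam) * xi))          # lognormal coefficient sample

x = np.linspace(0.0, 1.0, 200)
a = sample_lognormal_field(x, n_terms=10, rng=np.random.default_rng(1))
print(a.min(), a.max())
```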
Hot new directions for quasi-Monte Carlo research in step with applications
This article provides an overview of some interfaces between the theory of
quasi-Monte Carlo (QMC) methods and applications. We summarize three QMC
theoretical settings: first order QMC methods in the unit cube and in
$\mathbb{R}^s$, and higher order QMC methods in the unit cube. One important
feature is that their error bounds can be independent of the dimension $s$
under appropriate conditions on the function spaces. Another important feature
is that good parameters for these QMC methods can be obtained by fast efficient
algorithms even when $s$ is large. We outline three different applications and
explain how they can tap into the different QMC theory. We also discuss three
cost saving strategies that can be combined with QMC in these applications.
Much of this recent QMC theory and methodology was developed not in isolation,
but in close connection with applications.
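The first order setting mentioned above can be illustrated with a randomly shifted rank-1 lattice rule: the points are x_i = frac(i*z/n + shift), and independent random shifts give both the estimate and a practical error bar. The sketch below uses a placeholder generating vector; a production run would take z from a fast component-by-component construction of the kind the abstract refers to.

```python
import numpy as np

def shifted_lattice(n, z, shift):
    """Points x_i = frac(i*z/n + shift) of a randomly shifted rank-1 lattice."""
    return np.mod(np.outer(np.arange(n), z) / n + shift, 1.0)

def rqmc_mean(f, n, z, n_shifts=16, rng=None):
    """Randomized QMC estimate of the integral of f over the unit cube, with a
    standard error computed from the spread over independent random shifts."""
    rng = rng or np.random.default_rng()
    means = np.array([np.mean(f(shifted_lattice(n, z, rng.random(len(z)))))
                      for _ in range(n_shifts)])
    return means.mean(), means.std(ddof=1) / np.sqrt(n_shifts)

z = np.array([1, 182667, 469891, 498753, 110745])   # placeholder generating vector
# Separable test integrand whose exact integral over the unit cube is 1.
f = lambda x: np.prod(1.0 + (x - 0.5) / (2.0 + np.arange(x.shape[1])), axis=1)
mean, se = rqmc_mean(f, n=2**13, z=z, rng=np.random.default_rng(0))
print(mean, se)
```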
Multilevel quasi-Monte Carlo for random elliptic eigenvalue problems II: Efficient algorithms and numerical results
Stochastic PDE eigenvalue problems often arise in the field of uncertainty
quantification, whereby one seeks to quantify the uncertainty in an eigenvalue,
or its eigenfunction. In this paper we present an efficient multilevel
quasi-Monte Carlo (MLQMC) algorithm for computing the expectation of the
smallest eigenvalue of an elliptic eigenvalue problem with stochastic
coefficients. Each sample evaluation requires the solution of a PDE eigenvalue
problem, and so tackling this problem in practice is notoriously
computationally difficult. We speed up the approximation of this expectation in
four ways: 1) we use a multilevel variance reduction scheme to spread the work
over a hierarchy of FE meshes and truncation dimensions; 2) we use QMC methods
to efficiently compute the expectations on each level; 3) we exploit the
smoothness in parameter space and reuse the eigenvector from a nearby QMC point
to reduce the number of iterations of the eigensolver; and 4) we utilise a
two-grid discretisation scheme to obtain the eigenvalue on the fine mesh with a
single linear solve. The full error analysis of a basic MLQMC algorithm is
given in the companion paper [Gilbert and Scheichl, 2021], and so in this paper
we focus on how to further improve the efficiency and provide theoretical
justification of the enhancement strategies 3) and 4). Numerical results are
presented that show the efficiency of our algorithm, and also show that the
four strategies we employ are complementary.
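The multilevel structure behind strategies 1) and 2) can be summarized in a short skeleton: the telescoping sum E[Q_L] = sum over levels of E[Q_l - Q_{l-1}], each term estimated with a randomly shifted lattice rule. The sketch below stubs out the PDE eigenvalue with a toy functional whose discretization error decays like O(4^{-l}); all names, the generating vector, and the sample allocation are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def shifted_lattice(n, z, shift):
    """Randomly shifted rank-1 lattice points in [0,1)^s."""
    return np.mod(np.outer(np.arange(n), z) / n + shift, 1.0)

def mlqmc(level_diff, L, n_per_level, z, n_shifts=8, rng=None):
    """Multilevel QMC skeleton: E[Q_L] ~= sum_{l=0}^L E[Q_l - Q_{l-1}].

    level_diff(l, pts) must return Q_l - Q_{l-1} (with Q_{-1} := 0) at the
    rows of pts; in the paper's setting Q_l would be the smallest eigenvalue
    on mesh level l. The spread over shifts yields a practical error estimate.
    """
    rng = rng or np.random.default_rng()
    mean, var = 0.0, 0.0
    for l in range(L + 1):
        means = np.array([np.mean(level_diff(l, shifted_lattice(
                              n_per_level[l], z, rng.random(len(z)))))
                          for _ in range(n_shifts)])
        mean += means.mean()
        var += means.var(ddof=1) / n_shifts
    return mean, np.sqrt(var)

# Toy stand-in for the eigenvalue: Q_l carries an O(4^{-l}) discretization
# error, so the level differences shrink geometrically, as in MLQMC theory.
def level_diff(l, pts):
    def q(lev):
        s = np.sum(pts, axis=1)
        return s / pts.shape[1] + 4.0 ** (-lev) * np.cos(s)
    return q(l) - (q(l - 1) if l > 0 else 0.0)

z = np.array([1, 23, 59, 101])                      # placeholder generating vector
mean, err = mlqmc(level_diff, L=4,
                  n_per_level=[2**12, 2**11, 2**10, 2**9, 2**8], z=z,
                  rng=np.random.default_rng(0))
print(mean, err)    # close to 0.5, the mean of the average of four uniforms
```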