Hot new directions for quasi-Monte Carlo research in step with applications
This article provides an overview of some interfaces between the theory of
quasi-Monte Carlo (QMC) methods and applications. We summarize three QMC
theoretical settings: first order QMC methods in the unit cube and in
the whole Euclidean space, and higher order QMC methods in the unit cube. One important
feature is that their error bounds can be independent of the dimension
under appropriate conditions on the function spaces. Another important feature
is that good parameters for these QMC methods can be obtained by fast efficient
algorithms even when the dimension is large. We outline three different applications and
explain how they can tap into the different QMC theory. We also discuss three
cost saving strategies that can be combined with QMC in these applications.
Much of this recent QMC theory and many of these methods were developed not in
isolation, but in close connection with applications.
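As a small concrete illustration of the first of these settings, a first order QMC rule in the unit cube can be realized as a rank-1 lattice rule, whose whole point set is determined by a single generating vector. A minimal sketch, assuming NumPy; the generating vector here is a placeholder, whereas in practice good vectors come from the fast component-by-component constructions mentioned above:

```python
import numpy as np

def lattice_points(n, z):
    """Rank-1 lattice rule: the n points frac(k * z / n), k = 0..n-1."""
    k = np.arange(n).reshape(-1, 1)
    return (k * np.asarray(z) / n) % 1.0

# Hypothetical generating vector for s = 4 dimensions; good vectors are
# normally obtained by fast component-by-component (CBC) algorithms.
z = [1, 433, 229, 727]
pts = lattice_points(1009, z)           # 1009 points in the 4-dim unit cube

# QMC estimate of a smooth integrand over [0,1]^4 whose exact integral is 1
f = lambda x: np.prod(1.0 + 0.1 * (x - 0.5), axis=1)
estimate = f(pts).mean()
```

Each coordinate of each point costs one multiplication and one modulo operation, so generating the point set stays cheap even when the dimension is large.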
Application of quasi-Monte Carlo methods to PDEs with random coefficients -- an overview and tutorial
This article provides a high-level overview of some recent works on the
application of quasi-Monte Carlo (QMC) methods to PDEs with random
coefficients. It is based on an in-depth survey of a similar title by the same
authors, with an accompanying software package which is also briefly discussed
here. Embedded in this article is a step-by-step tutorial of the required
analysis for the setting known as the uniform case with first order QMC rules.
The aim of this article is to provide an easy entry point for QMC experts
wanting to start research in this direction and for PDE analysts and
practitioners wanting to tap into contemporary QMC theory and methods.
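As a toy illustration of the uniform case discussed in the tutorial (not the actual setting or software of the survey): a one-dimensional diffusion problem whose coefficient depends affinely on uniformly distributed parameters, with the expected midpoint value of the solution estimated by a randomly shifted rank-1 lattice rule. The basis functions, generating vector, and discretization below are all illustrative assumptions:

```python
import numpy as np

# Estimate E_y[u(1/2; y)] for -(a(x, y) u')' = 1, u(0) = u(1) = 0, in the
# "uniform case": a(x, y) = 1 + sum_j y_j * psi_j(x), y_j ~ U(-1/2, 1/2).
# Finite differences in x, a randomly shifted rank-1 lattice rule in y.

s, m = 4, 64                                   # stochastic dim., spatial cells
xs = np.linspace(0.0, 1.0, m + 1)
xm = 0.5 * (xs[:-1] + xs[1:])                  # cell midpoints
psi = np.array([np.sin((j + 1) * np.pi * xm) / (j + 1) ** 2 for j in range(s)])

def solve(y):
    """Finite-difference solve for one parameter value y; returns u(1/2)."""
    a = 1.0 + y @ psi                          # coefficient at cell midpoints
    h = 1.0 / m
    main = (a[:-1] + a[1:]) / h ** 2           # tridiagonal stiffness matrix
    off = -a[1:-1] / h ** 2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u = np.linalg.solve(A, np.ones(m - 1))
    return u[(m - 1) // 2]                     # value at the node x = 1/2

n = 257                                        # number of lattice points
z = np.array([1, 76, 104, 55])                 # placeholder generating vector
shift = np.random.default_rng(1).random(s)     # random shift in [0,1)^s
pts = (np.outer(np.arange(n), z) / n + shift) % 1.0
est = np.mean([solve(p - 0.5) for p in pts])   # map [0,1]^s -> [-1/2,1/2]^s
```

Replacing the lattice points by pseudo-random points turns this into plain Monte Carlo; for smooth integrands of this kind, first order QMC rules typically converge at close to order one in the number of points rather than order one half.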
Ensemble Kalman filter for neural network based one-shot inversion
We study the use of novel techniques arising in machine learning for inverse
problems. Our approach replaces the complex forward model by a neural network,
which is trained simultaneously, in a one-shot sense, while the unknown
parameters are estimated from data; i.e., the neural network is trained only
for the given unknown parameter. By establishing a link to the Bayesian
approach to inverse problems,
an algorithmic framework is developed which ensures the feasibility of the
parameter estimate with respect to the forward model. We propose an efficient,
derivative-free optimization method based on variants of the ensemble Kalman
inversion. Numerical experiments show that the ensemble Kalman filter for
neural network based one-shot inversion is a promising direction combining
optimization and machine learning techniques for inverse problems.
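A minimal sketch of an ensemble Kalman inversion update on a toy linear forward model (illustrative only: in the paper the forward map is replaced by a neural network trained jointly with the estimation). The point of the scheme is visible even here: the update uses only forward-model evaluations and ensemble covariances, no derivatives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model G(u) = A u mapping 2 parameters to 3 observations.
A = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.3]])
G = lambda u: A @ u
u_true = np.array([1.0, -0.5])
y = G(u_true)                            # synthetic, noise-free data
gamma = 1e-3 * np.eye(3)                 # observation-noise covariance

J = 50                                   # ensemble size
U = rng.normal(size=(J, 2))              # initial parameter ensemble

for _ in range(30):                      # EKI iterations: derivative-free
    P = np.array([G(u) for u in U])      # predicted data, shape (J, 3)
    dU, dP = U - U.mean(0), P - P.mean(0)
    Cup = dU.T @ dP / J                  # parameter/prediction cross-cov.
    Cpp = dP.T @ dP / J                  # prediction covariance
    K = Cup @ np.linalg.inv(Cpp + gamma) # Kalman-type gain, no gradients
    U = U + (y - P) @ K.T                # move each member toward the data

u_hat = U.mean(0)                        # ensemble-mean estimate of u_true
```

Because only evaluations of G enter the loop, the same code applies verbatim when G is a black-box PDE solver or a neural network surrogate.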
Sparse Quadrature for High-Dimensional Integration with Gaussian Measure
In this work we analyze the dimension-independent convergence property of an
abstract sparse quadrature scheme for numerical integration of functions of
high-dimensional parameters with Gaussian measure. Under certain assumptions
on the exactness and boundedness of the univariate quadrature rules, as well
as the
regularity of the parametric functions with respect to the parameters, we
obtain a convergence rate in terms of the number of indices in the quadrature
scheme, where the rate is independent of the number of parameter dimensions.
Moreover, we
propose both an a priori and an a posteriori scheme for the construction of a
practical sparse quadrature rule and perform numerical experiments to
demonstrate their dimension-independent convergence rates.
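The univariate building block in such constructions is a Gaussian quadrature rule adapted to the Gaussian weight. A minimal sketch, assuming NumPy, that tensorizes Gauss-Hermite rules over two dimensions; this uses a full tensor grid rather than a genuine sparse (e.g. Smolyak-type) index set, purely to illustrate integration against a Gaussian measure:

```python
import numpy as np

def gauss_hermite_prob(n):
    """n-point rule for the standard normal density (probabilists' scaling)."""
    x, w = np.polynomial.hermite.hermgauss(n)    # weight exp(-x^2)
    return x * np.sqrt(2.0), w / np.sqrt(np.pi)  # rescale to N(0, 1)

x, w = gauss_hermite_prob(10)
X1, X2 = np.meshgrid(x, x)                       # tensor-product nodes
W = np.outer(w, w)                               # tensor-product weights

# E[exp(a1*X1 + a2*X2)] for independent standard normals has the closed
# form exp((a1^2 + a2^2)/2); compare the quadrature against it.
a1, a2 = 0.3, 0.7
approx = np.sum(W * np.exp(a1 * X1 + a2 * X2))
exact = np.exp((a1 ** 2 + a2 ** 2) / 2)
```

A sparse quadrature replaces the full grid by a downward-closed set of difference rules, which is what removes the exponential dependence on the number of dimensions.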