22 research outputs found
Hot new directions for quasi-Monte Carlo research in step with applications
This article provides an overview of some interfaces between the theory of
quasi-Monte Carlo (QMC) methods and applications. We summarize three QMC
theoretical settings: first order QMC methods in the unit cube and in
R^s, and higher order QMC methods in the unit cube. One important
feature is that their error bounds can be independent of the dimension
under appropriate conditions on the function spaces. Another important feature
is that good parameters for these QMC methods can be obtained by fast efficient
algorithms even when the dimension is large. We outline three different applications and
explain how they can tap into the different QMC theory. We also discuss three
cost saving strategies that can be combined with QMC in these applications.
Much of this recent QMC theory and many of these methods were developed not in
isolation, but in close connection with applications.
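As a small illustration of the kind of first order QMC rule in the unit cube discussed above, the sketch below compares plain Monte Carlo with a two-dimensional Halton point set on a smooth test integrand; the generator and the test function are illustrative choices, not taken from the article.

```python
# Sketch: plain Monte Carlo vs. a first order QMC rule (2-D Halton points)
# on a smooth integrand over [0,1]^2 with known integral 1.
import math
import random

def van_der_corput(n, base):
    """n-th element of the van der Corput sequence in the given base."""
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

def halton(n_points, bases=(2, 3)):
    """First n_points of the Halton sequence in len(bases) dimensions."""
    return [[van_der_corput(i, b) for b in bases] for i in range(1, n_points + 1)]

def integrand(x):
    # Product of (pi/2) sin(pi x_j); integrates to 1 over the unit cube.
    return math.prod(math.pi / 2 * math.sin(math.pi * xj) for xj in x)

N = 4096
qmc_est = sum(integrand(x) for x in halton(N)) / N

random.seed(0)
mc_est = sum(integrand([random.random(), random.random()]) for _ in range(N)) / N
print(qmc_est, mc_est)
```

For this smooth integrand the QMC estimate is typically far closer to the true value than the Monte Carlo estimate at the same sample size, reflecting the better convergence rate the article describes.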
Combining Normalizing Flows and Quasi-Monte Carlo
Recent advances in machine learning have led to the development of new
methods for enhancing Monte Carlo methods such as Markov chain Monte Carlo
(MCMC) and importance sampling (IS). One such method is normalizing flows,
which use a neural network to approximate a distribution by evaluating it
pointwise. Normalizing flows have been shown to improve the performance of MCMC
and IS. On the other hand, (randomized) quasi-Monte Carlo methods are used to
perform numerical integration. They replace the random sampling of Monte Carlo
by a deterministic sequence which covers the hypercube more uniformly, resulting
in better convergence rates for the error than plain Monte Carlo. In this work, we
combine these two methods by using quasi-Monte Carlo to sample the initial
distribution that is transported by the flow. We demonstrate through numerical
experiments that this combination can lead to an estimator with significantly
lower variance than if the flow were sampled with a classic Monte Carlo sampler.
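A minimal sketch of the idea: the base distribution pushed through the flow is sampled with low-discrepancy points mapped through the Gaussian inverse CDF, instead of i.i.d. Gaussians. The "flow" here is just a toy affine map standing in for a trained neural flow, and all names are illustrative.

```python
# Sketch: driving a (toy) normalizing flow with QMC base samples.
# Toy flow z -> 1 + 0.5*z, so the pushforward is N(1, 0.5^2) and
# E[x^2] = 1 + 0.25 = 1.25 exactly.
import math
import random
from statistics import NormalDist

def van_der_corput(n, base=2):
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

norm = NormalDist()
flow = lambda z: 1.0 + 0.5 * z      # stand-in for a trained flow
f = lambda x: x * x                 # integrand with known expectation 1.25

N = 2048
# QMC: low-discrepancy uniforms -> Gaussian base samples via inverse CDF.
qmc_est = sum(f(flow(norm.inv_cdf(van_der_corput(i)))) for i in range(1, N + 1)) / N

random.seed(0)
mc_est = sum(f(flow(random.gauss(0.0, 1.0))) for _ in range(N)) / N
print(qmc_est, mc_est)
```

The same substitution, base samples from a low-discrepancy sequence rather than a random number generator, is what the abstract combines with an actual learned flow.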
Fully Parallel Hyperparameter Search: Reshaped Space-Filling
Space-filling designs such as scrambled-Hammersley, Latin Hypercube Sampling
and Jittered Sampling have been proposed for fully parallel hyperparameter
search, and were shown to be more effective than random or grid search. In this
paper, we show that these designs only improve over random search by a constant
factor. In contrast, we introduce a new approach based on reshaping the search
distribution, which leads to substantial gains over random search, both
theoretically and empirically. We propose two flavors of reshaping. First, when
the distribution of the optimum is known, we propose Recentering, which uses
as search distribution a modified version of it, tightened closer to the
center of the domain, in a dimension-dependent and budget-dependent manner.
Second, we show that in a wide range of experiments with this distribution unknown,
using a proposed Cauchy transformation, which simultaneously has a heavier tail
(for unbounded hyperparameters) and is closer to the boundaries (for bounded
hyperparameters), leads to improved performances. Besides artificial
experiments and simple real world tests on clustering or Sammon mappings, we
check our proposed methods on expensive artificial intelligence tasks such as
attend/infer/repeat, video next frame segmentation forecasting and progressive
generative adversarial networks.
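The Cauchy reshaping idea can be sketched in a few lines: a uniform space-filling design is pushed through the standard Cauchy quantile function, which simultaneously produces heavy tails for unbounded hyperparameters and spreads mass toward the boundaries for bounded ones. The recipe below is an illustration of that transform, not the paper's exact procedure.

```python
# Sketch: reshaping a uniform design with the Cauchy inverse CDF,
# tan(pi*(u - 1/2)), so a fully parallel search budget also probes
# extreme hyperparameter values.
import math
import random

def cauchy_quantile(u, loc=0.0, scale=1.0):
    """Inverse CDF of the Cauchy distribution."""
    return loc + scale * math.tan(math.pi * (u - 0.5))

random.seed(0)
uniform_design = [random.random() for _ in range(1000)]   # plain random search
reshaped = [cauchy_quantile(u) for u in uniform_design]   # heavy-tailed search

# A sizable fraction of reshaped points lands far outside [-10, 10],
# which a uniform or Gaussian design would essentially never reach.
print(max(abs(x) for x in reshaped))
```

The same quantile transform applies point-by-point to any space-filling design (scrambled-Hammersley, LHS, jittered), since these all produce points in the unit cube.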
On the quasi-Monte Carlo quadrature with Halton points for elliptic PDEs with log-normal diffusion
This article is dedicated to the computation of the moments of the solution to elliptic partial differential equations with random, log-normally distributed diffusion coefficients by the quasi-Monte Carlo method. Our main result is that the convergence rate of the quasi-Monte Carlo method based on the Halton sequence for the moment computation depends only linearly on the dimensionality of the stochastic input parameters. In particular, we attain this rather mild dependence on the stochastic dimensionality without any randomization of the quasi-Monte Carlo method under consideration. For the proof of the main result, we require related regularity estimates for the solution and its powers. These estimates are also provided here. Numerical experiments are given to validate the theoretical findings.
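To make the unrandomized Halton-plus-moments setting concrete, the sketch below estimates the first moment of a toy lognormal quantity exp(sum_j psi_j xi_j) with Gaussian inputs mapped from Halton points; the weights psi_j and the dimension are illustrative choices, not the PDE coefficient from the article.

```python
# Sketch: unrandomized Halton QMC for a moment of a lognormal quantity,
# a toy stand-in for the lognormal diffusion coefficient.
import math
from statistics import NormalDist

PRIMES = (2, 3, 5, 7, 11)
norm = NormalDist()

def van_der_corput(n, base):
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

psi = [0.5 * 2.0 ** (-j) for j in range(5)]        # decaying weights (assumed)
# For S = sum psi_j xi_j with xi_j ~ N(0,1): E[exp(S)] = exp(sum psi_j^2 / 2).
exact = math.exp(sum(p * p for p in psi) / 2.0)

N = 4096
est = 0.0
for i in range(1, N + 1):
    # Map the i-th 5-dimensional Halton point to a standard Gaussian vector.
    xi = [norm.inv_cdf(van_der_corput(i, b)) for b in PRIMES]
    est += math.exp(sum(p * x for p, x in zip(psi, xi)))
est /= N
print(est, exact)
```

No randomization (scrambling or shifting) is applied to the Halton points here, mirroring the deterministic setting the abstract analyzes.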
Nonasymptotic Convergence Rate of Quasi-Monte Carlo: Applications to Linear Elliptic PDEs with Lognormal Coefficients and Importance Samplings
This study analyzes the nonasymptotic convergence behavior of the quasi-Monte
Carlo (QMC) method with applications to linear elliptic partial differential
equations (PDEs) with lognormal coefficients. Building upon the error analysis
presented in (Owen, 2006), we derive a nonasymptotic convergence estimate
depending on the specific integrands, the input dimensionality, and the finite
number of samples used in the QMC quadrature. We discuss the effects of the
variance and dimensionality of the input random variable. Then, we apply the
QMC method with importance sampling (IS) to approximate deterministic,
real-valued, bounded linear functionals that depend on the solution of a linear
elliptic PDE with a lognormal diffusivity coefficient in bounded domains of
R^d, where the random coefficient is modeled as a stationary
Gaussian random field parameterized by the trigonometric and wavelet-type
basis. We propose two types of IS distributions, analyze their effects on the
QMC convergence rate, and observe the improvements.
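The QMC-with-IS combination in the abstract can be sketched in one dimension: QMC points are mapped through the inverse CDF of a shifted Gaussian proposal and reweighted by the likelihood ratio. The shift and the integrand below are illustrative choices, not the proposal distributions analyzed in the paper.

```python
# Sketch: quasi-Monte Carlo with importance sampling (IS) in 1-D.
# Estimate E[f(X)] for X ~ N(0,1) with f peaked near x = 4, sampling
# from the shifted proposal N(4,1) via QMC points and reweighting.
import math
from statistics import NormalDist

def van_der_corput(n, base=2):
    q, denom = 0.0, 1.0
    while n:
        n, r = divmod(n, base)
        denom *= base
        q += r / denom
    return q

target = NormalDist(0.0, 1.0)            # nominal Gaussian input
proposal = NormalDist(4.0, 1.0)          # IS proposal shifted to the peak
f = lambda x: math.exp(-(x - 4.0) ** 2)  # integrand concentrated at 4

N = 4096
est = 0.0
for i in range(1, N + 1):
    x = proposal.inv_cdf(van_der_corput(i))   # QMC sample from the proposal
    w = target.pdf(x) / proposal.pdf(x)       # likelihood ratio weight
    est += f(x) * w
est /= N

# Closed form for this Gaussian case: exp(-16/3)/sqrt(3).
exact = math.exp(-16.0 / 3.0) / math.sqrt(3.0)
print(est, exact)
```

Without the shifted proposal, almost all nominal samples would fall where f is negligible; the IS change of measure makes the QMC integrand well-behaved, which is exactly the interaction between proposal choice and QMC convergence rate the abstract studies.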