Population Quasi-Monte Carlo
Monte Carlo methods are widely used for approximating complicated,
multidimensional integrals for Bayesian inference. Population Monte Carlo (PMC)
is an important class of Monte Carlo methods, which utilizes a population of
proposals to generate weighted samples that approximate the target
distribution. The generic PMC framework iterates over three steps: samples are
simulated from a set of proposals, weights are assigned to such samples to
correct for mismatch between the proposal and target distributions, and the
proposals are then adapted via resampling from the weighted samples. When the
target distribution is expensive to evaluate, PMC is computationally limited,
since its convergence rate is only O_P(N^{-1/2}) in the sample size N. To address
this, we propose in this paper a new Population Quasi-Monte Carlo (PQMC)
framework, which integrates Quasi-Monte Carlo ideas within the sampling and
adaptation steps of PMC. A key novelty in PQMC is the idea of importance
support points resampling, a deterministic method for finding an "optimal"
subsample from the weighted proposal samples. Moreover, within the PQMC
framework, we develop an efficient covariance adaptation strategy for
multivariate normal proposals. Lastly, a new set of correction weights is
introduced for the weighted PMC estimator to improve the efficiency from the
standard PMC estimator. We demonstrate the improved empirical convergence of
PQMC over PMC in extensive numerical simulations and a friction drilling
application.
Comment: Submitted to Journal of Computational and Graphical Statistics
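The three steps of the generic PMC loop described above (simulate from the proposals, weight to correct for mismatch, adapt by resampling) can be sketched as follows. The Gaussian-mixture target, the normal proposals, and the mean-resampling adaptation rule are illustrative assumptions, not the paper's PQMC method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized target density: a two-component Gaussian mixture (illustrative choice).
def target(x):
    return 0.5 * np.exp(-0.5 * (x + 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x - 2.0) ** 2)

def pmc(n_proposals=20, n_per_proposal=50, n_iters=30, sigma=1.0):
    """Generic PMC loop: sample from proposals, weight, adapt by resampling."""
    mus = rng.uniform(-5.0, 5.0, size=n_proposals)  # proposal locations
    for _ in range(n_iters):
        # Step 1: simulate samples from each (normal) proposal.
        x = rng.normal(loc=np.repeat(mus, n_per_proposal), scale=sigma)
        # Step 2: importance weights correct for proposal/target mismatch
        # (mixture weighting over the whole proposal population).
        q = np.mean(
            np.exp(-0.5 * ((x[:, None] - mus[None, :]) / sigma) ** 2)
            / (sigma * np.sqrt(2.0 * np.pi)),
            axis=1,
        )
        w = target(x) / q
        w /= w.sum()
        # Step 3: adapt proposal locations by resampling from the weighted samples.
        mus = rng.choice(x, size=n_proposals, p=w)
    return x, w

x, w = pmc()
est_mean = np.sum(w * x)  # self-normalized estimate of E[X]; target is symmetric about 0
```

PQMC replaces the random draws in step 1 with low-discrepancy points and the multinomial resampling in step 3 with the deterministic importance support points rule.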
Space-time multilevel quadrature methods and their application for cardiac electrophysiology
We present a novel approach which aims at high-performance uncertainty quantification for cardiac electrophysiology simulations. Employing the monodomain equation to model the transmembrane potential inside the cardiac cells, we evaluate the effect of spatially correlated perturbations of the heart fibers on the statistics of the resulting quantities of interest. Our methodology relies on a close integration of multilevel quadrature methods, parallel iterative solvers and space-time finite element discretizations, allowing for a fully parallelized framework in space, time and stochastics. Extensive numerical studies are presented to evaluate convergence rates and to compare the performance of classical Monte Carlo methods such as standard Monte Carlo (MC) and quasi-Monte Carlo (QMC), as well as multilevel strategies, i.e. multilevel Monte Carlo (MLMC) and multilevel quasi-Monte Carlo (MLQMC), on hierarchies of nested meshes. Finally, we employ a recently suggested variant of the multilevel approach for non-nested meshes to deal with a realistic heart geometry.
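The multilevel idea underlying MLMC can be illustrated on a toy problem. The geometric-Brownian-motion model and Euler-Maruyama discretization below are assumptions for illustration only, standing in for the paper's monodomain discretizations; the essential point is the telescoping sum over coupled fine/coarse levels:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem (assumed): estimate E[S_T] for geometric Brownian motion
# dS = mu*S dt + sig*S dW, discretized by Euler-Maruyama with 2^l steps on level l.
mu, sig, S0, T = 0.05, 0.2, 1.0, 1.0

def euler_pair(level, n_samples):
    """Coupled fine/coarse payoffs sharing the same Brownian increments."""
    nf = 2 ** level
    dtf = T / nf
    dW = rng.normal(0.0, np.sqrt(dtf), size=(n_samples, nf))
    Sf = np.full(n_samples, S0)
    for k in range(nf):
        Sf = Sf * (1.0 + mu * dtf + sig * dW[:, k])
    if level == 0:
        return Sf, np.zeros(n_samples)
    nc, dtc = nf // 2, 2 * dtf
    Sc = np.full(n_samples, S0)
    for k in range(nc):  # coarse path sums pairs of fine increments
        Sc = Sc * (1.0 + mu * dtc + sig * (dW[:, 2 * k] + dW[:, 2 * k + 1]))
    return Sf, Sc

def mlmc(max_level=5, n_samples=20000):
    """Telescoping MLMC estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    est = 0.0
    for level in range(max_level + 1):
        fine, coarse = euler_pair(level, n_samples)
        est += np.mean(fine - coarse)
    return est

estimate = mlmc()  # exact answer of the continuous problem is S0*exp(mu*T) ≈ 1.0513
```

Because the coupled differences have small variance, the correction levels need far fewer samples than a single-level estimator at the finest resolution; MLQMC additionally replaces the random increments with randomized low-discrepancy points per level.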
Loop integration results using numerical extrapolation for a non-scalar integral
Loop integration results have been obtained using numerical integration and
extrapolation. An extrapolation to the limit is performed with respect to a
parameter in the integrand which tends to zero. Results are given for a
non-scalar four-point diagram. Extensions to accommodate loop integration by
existing integration packages are also discussed. These include reusing
previously generated partitions of the domain and adding round-off error guards.
Comment: 4 pages, 3 figures, revised, contribution to ACAT03 (Dec. 2003)
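The "extrapolation to the limit" with respect to a parameter tending to zero can be sketched with classical Richardson extrapolation; the stand-in integrand below is an assumed polynomial model of the parameter dependence, not the paper's loop integrand:

```python
# Richardson extrapolation of I(eps) = I0 + c1*eps + c2*eps^2 + ... to eps -> 0,
# from evaluations at a geometric sequence eps_k = eps0 / ratio^k.
def richardson(f, eps0=1.0, n=6, ratio=2.0):
    table = [[f(eps0 / ratio ** k) for k in range(n)]]
    for j in range(1, n):
        prev = table[-1]
        # Each stage eliminates the O(eps^j) error term.
        table.append([
            (ratio ** j * prev[i + 1] - prev[i]) / (ratio ** j - 1.0)
            for i in range(len(prev) - 1)
        ])
    return table[-1][0]

# Stand-in for I(eps) with known eps -> 0 limit 1.0 (hypothetical coefficients).
f = lambda eps: 1.0 + 0.7 * eps + 0.3 * eps ** 2 + 0.1 * eps ** 3
limit = richardson(f)
```

Since the stand-in is a cubic, three extrapolation stages already recover the limit to machine precision; in the loop-integration setting each I(eps_k) is itself computed by adaptive numerical integration, which is why reusing domain partitions across eps values pays off.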
Memory-Based Monte Carlo Integration for Solving Partial Differential Equations Using Neural Networks
Monte Carlo integration is a widely used quadrature rule to solve Partial Differential Equations with neural networks due to its ability to guarantee overfitting-free solutions and high-dimensional scalability. However, this stochastic method produces noisy losses and gradients during training, which hinders a proper convergence diagnosis. Typically, this is overcome using an immense (disproportionate) amount of integration points, which deteriorates the training performance. This work proposes a memory-based Monte Carlo integration method that produces accurate integral approximations without requiring the high computational costs of processing large samples during training.
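The noisy-loss phenomenon described above comes from the plain Monte Carlo quadrature underlying such losses; a generic sketch (not the paper's memory-based scheme), with an assumed closed-form stand-in for the network's PDE residual:

```python
import numpy as np

rng = np.random.default_rng(2)

# The loss integral L = ∫_Ω r(x)^2 dx over Ω = [0,1]^d is approximated by an
# average of the squared residual at a fresh batch of random points, so the
# loss (and its gradient) fluctuates from batch to batch.
def mc_loss(residual, dim, n_points):
    x = rng.uniform(0.0, 1.0, size=(n_points, dim))
    return np.mean(residual(x) ** 2)  # the volume of [0,1]^d is 1

# Stand-in residual r(x) = sum_i x_i - d/2 (in practice, the PDE residual of the net).
residual = lambda x: x.sum(axis=1) - x.shape[1] / 2.0

# Repeated evaluations of the "same" loss scatter around the exact value
# E[r(X)^2] = Var(sum_i X_i) = d/12 = 1/3 for d = 4.
losses = [mc_loss(residual, dim=4, n_points=512) for _ in range(5)]
```

Averaging information across batches, rather than enlarging each batch, is the kind of remedy the proposed memory-based method pursues.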
Two-Qubit Separability Probabilities and Beta Functions
Due to recent important work of Zyczkowski and Sommers (quant-ph/0302197 and
quant-ph/0304041), exact formulas are available (both in terms of the
Hilbert-Schmidt and Bures metrics) for the (n^2-1)-dimensional and
(n(n-1)/2-1)-dimensional volumes of the complex and real n x n density
matrices. However, no comparable formulas are available for the volumes (and,
hence, probabilities) of various separable subsets of them. We seek to clarify
this situation for the Hilbert-Schmidt metric for the simplest possible case of
n=4, that is, the two-qubit systems. Making use of the density matrix (rho)
parameterization of Bloore (J. Phys. A 9, 2059 [1976]), we are able to reduce
each of the real and complex volume problems to the calculation of a
one-dimensional integral, the single relevant variable being a certain ratio of
diagonal entries, nu = (rho_{11} rho_{44})/(rho_{22} rho_{33}). The associated
integrand in each case is the product of a known (highly oscillatory near nu=1)
Jacobian and a certain unknown univariate function, which our extensive
numerical (quasi-Monte Carlo) computations indicate is very closely
proportional to an (incomplete) beta function B_{nu}(a,b), with a=1/2,
b=sqrt{3} in the real case, and a=2 sqrt{6}/5, b=3/sqrt{2} in the complex case.
Assuming the full applicability of these specific incomplete beta functions, we
undertake separable volume calculations.
Comment: 17 pages, 4 figures, paper is substantially rewritten and
reorganized, with the quasi-Monte Carlo integration sample size being greatly
increased
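A quasi-Monte Carlo evaluation of the incomplete beta function named in the abstract can be sketched with a van der Corput low-discrepancy point set; the sequence choice and sample size below are illustrative assumptions, not the paper's computation:

```python
import math

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    pts = []
    for i in range(1, n + 1):
        q, bk, x = i, 1.0 / base, 0.0
        while q > 0:
            q, r = divmod(q, base)
            x += r * bk
            bk /= base
        pts.append(x)
    return pts

def incomplete_beta(nu, a, b, n=4096):
    """QMC estimate of B_nu(a,b) = ∫_0^nu t^(a-1) (1-t)^(b-1) dt,
    via the substitution t = nu*u with u on the unit interval."""
    pts = van_der_corput(n)
    return nu * sum((nu * u) ** (a - 1) * (1.0 - nu * u) ** (b - 1) for u in pts) / n

# Real-case parameters reported in the abstract: a = 1/2, b = sqrt(3).
val = incomplete_beta(0.5, 0.5, math.sqrt(3))
```

Note that for a < 1 the integrand is singular at t = 0; the van der Corput points avoid the origin, but a singularity-adapted transformation would converge faster.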