
    Scrambled geometric net integration over general product spaces

    Quasi-Monte Carlo (QMC) sampling has been developed for integration over $[0,1]^s$, where it has superior accuracy to Monte Carlo (MC) for integrands of bounded variation. Scrambled net quadrature allows replication-based error estimation for QMC with at least the same accuracy, and for smooth enough integrands even better accuracy than plain QMC. Integration over triangles, spheres, disks and Cartesian products of such spaces is more difficult for QMC because the induced integrand on a unit cube may fail to have the desired regularity. In this paper, we present a construction of point sets for numerical integration over Cartesian products of $s$ spaces of dimension $d$, with triangles ($d=2$) being of special interest. The point sets are transformations of randomized $(t,m,s)$-nets using recursive geometric partitions. The resulting integral estimates are unbiased and their variance is $o(1/n)$ for any integrand in $L^2$ of the product space. Under smoothness assumptions on the integrand, our randomized QMC algorithm has variance $O(n^{-1-2/d}(\log n)^{s-1})$ for integration over $s$-fold Cartesian products of $d$-dimensional domains, compared to $O(n^{-1})$ for ordinary Monte Carlo. Comment: 29 pages; 5 figures
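    A minimal baseline sketch of the estimation setup, assuming scipy's scrambled Sobol' generator and the standard square-to-triangle map; it is not the paper's recursive geometric-partition construction, which is what actually yields the improved rate.

```python
# Baseline sketch: RQMC over a product of s triangles by mapping scrambled Sobol'
# points in [0,1]^{2s} onto each triangle with the standard square-to-triangle map.
import numpy as np
from scipy.stats import qmc

def to_triangle(u, v):
    """Map (u, v) in [0,1]^2 uniformly onto the triangle (0,0), (1,0), (0,1)."""
    r = np.sqrt(u)
    return r * (1.0 - v), r * v

def rqmc_product_triangle_estimate(f, s, m, seed=0):
    """Estimate the mean of f over the s-fold product of unit triangles, n = 2^m points."""
    pts = qmc.Sobol(d=2 * s, scramble=True, seed=seed).random_base2(m)
    coords = []
    for j in range(s):
        x, y = to_triangle(pts[:, 2 * j], pts[:, 2 * j + 1])
        coords.extend([x, y])
    return np.mean(f(np.column_stack(coords)))

# Example: a smooth integrand on a product of two triangles (s = 2, 4 coordinates).
f = lambda z: np.exp(z.sum(axis=1))
print(rqmc_product_triangle_estimate(f, s=2, m=12))
```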

    Asymptotic Normality of Extensible Grid Sampling

    Recently, He and Owen (2016) proposed the use of Hilbert's space-filling curve (HSFC) in numerical integration as a way of reducing the dimension from $d>1$ to $d=1$. This paper studies the asymptotic normality of the HSFC-based estimate when using a scrambled van der Corput sequence as input. We show that the estimate has an asymptotic normal distribution for functions in $C^1([0,1]^d)$, excluding the trivial case of constant functions. The asymptotic normality also holds for discontinuous functions under mild conditions. It was previously known only that scrambled $(0,m,d)$-net quadratures enjoy asymptotic normality for smooth enough functions whose mixed partial gradients satisfy a Hölder condition. As a by-product, we find lower bounds for the variance of the HSFC-based estimate. In particular, for nontrivial functions in $C^1([0,1]^d)$, the lower bound is of order $n^{-1-2/d}$, which matches the rate of the upper bound established in He and Owen (2016).
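    A sketch of the HSFC-based estimator under simplifying assumptions: a base-2 van der Corput sequence with a random digital (XOR) shift stands in for Owen's nested scramble, points are snapped to Hilbert cell centres, and the test integrand is illustrative.

```python
# HSFC-based integration sketch: scramble a base-2 van der Corput sequence in [0,1),
# map each point through a 2-D Hilbert curve, and average the integrand.
import numpy as np

def hilbert_d2xy(order, d):
    """Distance d along a 2-D Hilbert curve over 4**order cells -> integer (x, y)."""
    x = y = 0
    t, s = d, 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def shifted_vdc(n, n_bits, rng):
    """Base-2 van der Corput points with a random digital (XOR) shift (simplified scramble)."""
    shift = rng.integers(0, 2, size=n_bits)
    pts = np.zeros(n)
    for i in range(n):
        for k in range(n_bits):
            pts[i] += (((i >> k) & 1) ^ shift[k]) / 2.0 ** (k + 1)
    return pts

def hsfc_estimate(f, m, seed=0):
    """Estimate the integral of f over [0,1]^2 with n = 4**m points fed through the HSFC."""
    rng = np.random.default_rng(seed)
    n = 4 ** m
    u = shifted_vdc(n, 2 * m, rng)
    cells = (u * n).astype(int)
    xy = np.array([hilbert_d2xy(m, d) for d in cells], dtype=float)
    xy = (xy + 0.5) / 2 ** m              # Hilbert cell centres in [0,1]^2
    return f(xy[:, 0], xy[:, 1]).mean()

print(hsfc_estimate(lambda x, y: np.exp(x + y), m=5))   # exact value (e-1)^2 ~ 2.95
```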

    On Integration Methods Based on Scrambled Nets of Arbitrary Size

    We consider the problem of evaluating $I(\varphi):=\int_{[0,1)^s}\varphi(x)\,dx$ for a function $\varphi \in L^2[0,1)^{s}$. In situations where $I(\varphi)$ can be approximated by an estimate of the form $N^{-1}\sum_{n=0}^{N-1}\varphi(x^n)$, with $\{x^n\}_{n=0}^{N-1}$ a point set in $[0,1)^s$, it is now well known that the $O_P(N^{-1/2})$ Monte Carlo convergence rate can be improved by taking for $\{x^n\}_{n=0}^{N-1}$ the first $N=\lambda b^m$ points, $\lambda\in\{1,\dots,b-1\}$, of a scrambled $(t,s)$-sequence in base $b\geq 2$. In this paper we derive a bound for the variance of scrambled net quadrature rules which is of order $o(N^{-1})$ without any restriction on $N$. As a corollary, this bound allows us to provide simple conditions to get, for any pattern of $N$, an integration error of size $o_P(N^{-1/2})$ for functions that depend on the quadrature size $N$. Notably, we establish that sequential quasi-Monte Carlo (M. Gerber and N. Chopin, 2015, J. R. Statist. Soc. B, to appear) reaches the $o_P(N^{-1/2})$ convergence rate for any value of $N$. In a numerical study, we show that for scrambled net quadrature rules we can relax the constraint on $N$ without any loss of efficiency when the integrand $\varphi$ is a discontinuous function, while for sequential quasi-Monte Carlo, taking $N=\lambda b^m$ may only provide moderate gains. Comment: 27 pages, 2 figures (final version, to appear in The Journal of Complexity)
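    A sketch of replicated scrambled-net integration with an arbitrary point count $N$, assuming scipy's scrambled Sobol' generator; the spread across independent scramblings serves as an error estimate, as in standard RQMC practice.

```python
# Replicated scrambled-net estimate with N not restricted to lambda * b^m.
import numpy as np
from scipy.stats import qmc

def replicated_scrambled_estimate(f, d, N, R=20, seed=0):
    """Return (estimate, standard error) from R independently scrambled Sobol' sets of size N."""
    reps = []
    for r in range(R):
        # scipy warns when N is not a power of 2, but still returns the first N points.
        pts = qmc.Sobol(d=d, scramble=True, seed=seed + r).random(N)
        reps.append(f(pts).mean())
    reps = np.asarray(reps)
    return reps.mean(), reps.std(ddof=1) / np.sqrt(R)

# Example: a discontinuous integrand, a case where the restriction on N can be relaxed.
f = lambda x: (x.sum(axis=1) < x.shape[1] / 2).astype(float)
print(replicated_scrambled_estimate(f, d=4, N=3000))
```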

    High Performance Financial Simulation Using Randomized Quasi-Monte Carlo Methods

    GPU computing has become popular in computational finance, and many financial institutions are moving their CPU-based applications to the GPU platform. Since most Monte Carlo algorithms are embarrassingly parallel, they benefit greatly from parallel implementations, and consequently Monte Carlo has become a focal point in GPU computing. GPU speed-up examples reported in the literature often involve Monte Carlo algorithms, and there are software tools commercially available that help migrate Monte Carlo financial pricing models to the GPU. We present a survey of Monte Carlo and randomized quasi-Monte Carlo methods, and discuss existing (quasi) Monte Carlo sequences in GPU libraries. We discuss specific features of GPU architecture relevant for developing efficient (quasi) Monte Carlo methods. We introduce a recent randomized quasi-Monte Carlo method, and compare it with some of the existing implementations on the GPU when they are used in pricing caplets in the LIBOR market model and mortgage-backed securities.
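    A hedged sketch of the MC versus RQMC comparison described above, on a CPU and for a single caplet under the one-factor Black model rather than the full LIBOR market model; all parameter values are illustrative.

```python
# Price a single caplet under the Black model with plain MC and with scrambled Sobol'.
import numpy as np
from scipy.stats import norm, qmc

L0, K, sigma, T, tau, disc = 0.03, 0.03, 0.2, 1.0, 0.5, 0.97   # illustrative inputs

def caplet_payoff(z):
    """Discounted caplet payoff for standard normal draws z."""
    LT = L0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * z)
    return disc * tau * np.maximum(LT - K, 0.0)

n = 2 ** 14
rng = np.random.default_rng(0)
mc = caplet_payoff(rng.standard_normal(n)).mean()

u = qmc.Sobol(d=1, scramble=True, seed=0).random_base2(14).ravel()
u = np.clip(u, 1e-12, 1 - 1e-12)                 # guard the inverse CDF at the endpoints
rqmc = caplet_payoff(norm.ppf(u)).mean()
print(mc, rqmc)                                   # both approximate the Black caplet price
```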

    Construction of interlaced scrambled polynomial lattice rules of arbitrary high order

    Higher order scrambled digital nets are randomized quasi-Monte Carlo rules which have recently been introduced in [J. Dick, Ann. Statist., 39 (2011), 1372--1398] and shown to achieve the optimal rate of convergence of the root mean square error for numerical integration of smooth functions defined on the $s$-dimensional unit cube. The key ingredient there is a digit interlacing function applied to the components of a randomly scrambled digital net whose number of components is $ds$, where the integer $d$ is the so-called interlacing factor. In this paper, we replace the randomly scrambled digital nets by randomly scrambled polynomial lattice point sets, which allows us to obtain a better dependence on the dimension while still achieving the optimal rate of convergence. Our results apply to Owen's full scrambling scheme as well as the simplifications studied by Hickernell, Matoušek and Owen. We consider weighted function spaces with general weights, whose elements have square-integrable partial mixed derivatives of order up to $\alpha\ge 1$, and derive an upper bound on the variance of the estimator for higher order scrambled polynomial lattice rules. Employing our obtained bound as a quality criterion, we prove that the component-by-component construction can be used to obtain explicit constructions of good polynomial lattice point sets. By first constructing classical polynomial lattice point sets in base $b$ and dimension $ds$, to which we then apply the interlacing scheme of order $d$, we obtain a construction cost of the algorithm of order $\mathcal{O}(dsmb^m)$ operations using $\mathcal{O}(b^m)$ memory in the case of product weights, where $b^m$ is the number of points in the polynomial lattice point set.
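    A sketch of the digit interlacing function mentioned above, assuming base $b$ and a fixed digit truncation chosen for illustration: it interleaves the base-$b$ digits of $d$ coordinates into a single coordinate, so a $ds$-dimensional point set collapses to an $s$-dimensional one.

```python
# Digit interlacing of d coordinates in [0,1) into one coordinate in [0,1).
import numpy as np

def interlace(coords, b=2, n_digits=30):
    """Interleave the first n_digits base-b digits of each value in coords."""
    rem = list(coords)
    digits = []
    for k in range(n_digits):             # k-th digit of every component, in turn
        for j in range(len(coords)):
            rem[j] *= b
            dig = int(rem[j])
            rem[j] -= dig
            digits.append(dig)
    return sum(dig * float(b) ** -(i + 1) for i, dig in enumerate(digits))

# Interlacing factor d = 2: two coordinates of one point become a single coordinate.
print(interlace([0.25, 0.75], b=2))       # 0.01_2 and 0.11_2 interlace to 0.0111_2 = 0.4375
```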

    Monte Carlo Methods and Path-Generation techniques for Pricing Multi-asset Path-dependent Options

    We consider the problem of pricing path-dependent options on a basket of underlying assets using simulations. As an example we develop our studies using Asian options. Asian options are derivative contracts in which the underlying variable is the average price of given assets sampled over a period of time. Due to this structure, Asian options display a lower volatility and are therefore cheaper than their standard European counterparts. This paper is a survey of some recent enhancements to improve efficiency when pricing Asian options by Monte Carlo simulation in the Black-Scholes model. We analyze the dynamics with constant and time-dependent volatilities of the underlying asset returns. We present a comparison between the precision of the standard Monte Carlo method (MC) and stratified Latin Hypercube Sampling (LHS). In particular, we discuss the use of low-discrepancy sequences, also known as the Quasi-Monte Carlo method (QMC), and a randomized version of these sequences, known as Randomized Quasi-Monte Carlo (RQMC). The latter has proven to be a useful variance reduction technique both for problems of up to 20 dimensions and for very high-dimensional problems. Moreover, we present and test a new path generation approach based on a Kronecker product approximation (KPA) in the case of time-dependent volatilities. KPA proves to be a fast generation technique and reduces the computational cost of the simulation procedure. Comment: 34 pages, 4 figures, 15 tables
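    A sketch comparing plain MC, Latin Hypercube Sampling, and scrambled-Sobol' RQMC for an arithmetic Asian call under constant volatility; the standard sequential path construction is used, the paper's KPA generation is not reproduced, and parameter values are illustrative.

```python
# Arithmetic Asian call under Black-Scholes: MC vs LHS vs scrambled-Sobol' RQMC.
import numpy as np
from scipy.stats import norm, qmc

S0, K, r, sigma, T, steps = 100.0, 100.0, 0.05, 0.2, 1.0, 16
dt = T / steps

def asian_call(z):                        # z: (n, steps) standard normal increments
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    avg = S0 * np.exp(log_paths).mean(axis=1)
    return np.exp(-r * T) * np.maximum(avg - K, 0.0)

def to_normal(u):                         # uniforms -> normals, guarding the endpoints
    return norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))

n = 2 ** 14
rng = np.random.default_rng(1)
mc  = asian_call(rng.standard_normal((n, steps))).mean()
lhs = asian_call(to_normal(qmc.LatinHypercube(d=steps, seed=1).random(n))).mean()
sob = asian_call(to_normal(qmc.Sobol(d=steps, scramble=True, seed=1).random_base2(14))).mean()
print(mc, lhs, sob)
```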

    Application of Sequential Quasi-Monte Carlo to Autonomous Positioning

    Sequential Monte Carlo algorithms (also known as particle filters) are popular methods to approximate filtering (and related) distributions of state-space models. However, they converge at the slow $1/\sqrt{N}$ rate, which may be an issue in real-time data-intensive scenarios. We give a brief outline of SQMC (Sequential Quasi-Monte Carlo), a variant of SMC based on low-discrepancy point sets proposed by Gerber and Chopin (2015), which converges at a faster rate, and we illustrate the improved performance of SQMC on autonomous positioning problems. Comment: 5 pages, 4 figures
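    For contrast with SQMC, a minimal bootstrap particle filter on a one-dimensional linear-Gaussian model, i.e. the plain SMC baseline with the $1/\sqrt{N}$ rate; the low-discrepancy propagation and Hilbert-sort resampling of Gerber and Chopin (2015) are not shown, and the model and parameters are illustrative.

```python
# Bootstrap particle filter for x_t = rho x_{t-1} + eps, y_t = x_t + eta (both Gaussian).
import numpy as np

rng = np.random.default_rng(0)
rho, sig_x, sig_y, T, N = 0.9, 1.0, 0.5, 50, 1000

# Simulate synthetic data from the model.
x = np.zeros(T)
x[0] = rng.standard_normal()
for t in range(1, T):
    x[t] = rho * x[t - 1] + sig_x * rng.standard_normal()
y = x + sig_y * rng.standard_normal(T)

particles = rng.standard_normal(N)                  # matches the x_0 ~ N(0, 1) prior
filter_means = []
for t in range(T):
    if t > 0:
        particles = rho * particles + sig_x * rng.standard_normal(N)   # propagate
    logw = -0.5 * ((y[t] - particles) / sig_y) ** 2                    # Gaussian likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filter_means.append(np.sum(w * particles))                         # filtering mean estimate
    particles = rng.choice(particles, size=N, p=w)                     # multinomial resampling
print(filter_means[-1])
```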

    On the dependence structure and quality of scrambled $(t,m,s)$-nets

    In this paper we develop a framework to study the dependence structure of scrambled $(t,m,s)$-nets. It relies on values denoted by $C_b(\mathbf{k};P_n)$, which are related to how many distinct pairs of points from $P_n$ lie in the same elementary $\mathbf{k}$-interval in base $b$. These values quantify the equidistribution properties of $P_n$ in a more informative way than the parameter $t$. They also play a key role in determining whether a scrambled set $\tilde{P}_n$ is negative lower orthant dependent (NLOD). Indeed, this property holds if and only if $C_b(\mathbf{k};P_n) \le 1$ for all $\mathbf{k} \in \mathbb{N}^s$, which in turn implies that a scrambled digital $(t,m,s)$-net in base $b$ is NLOD if and only if $t=0$. Through numerical examples we demonstrate that these $C_b(\mathbf{k};P_n)$ values are a powerful tool for comparing the quality of different $(t,m,s)$-nets and for enhancing our understanding of how scrambling can improve the quality of deterministic point sets. Comment: 28 pages
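    A sketch of the pair counts underlying the $C_b(\mathbf{k};P_n)$ values: for a multi-index $\mathbf{k}$, count the distinct pairs of points falling in the same elementary $\mathbf{k}$-interval in base $b$. The paper's exact normalization of these counts is not reproduced here.

```python
# Count pairs of points sharing an elementary k-interval in base b.
import numpy as np
from itertools import combinations
from scipy.stats import qmc

def same_interval_pairs(points, k, b=2):
    """Number of unordered pairs of points lying in the same elementary k-interval."""
    points = np.asarray(points)
    # A point's elementary k-interval is identified by floor(b^{k_j} * x_j), coordinatewise.
    labels = [tuple(row) for row in np.floor(points * float(b) ** np.asarray(k)).astype(int)]
    return sum(1 for p, q in combinations(labels, 2) if p == q)

# Example: the first 16 points of an unscrambled 2-D Sobol' net, a (0,4,2)-net in base 2.
P = qmc.Sobol(d=2, scramble=False).random_base2(4)
print(same_interval_pairs(P, k=(1, 1)), same_interval_pairs(P, k=(2, 2)))   # 24 and 0
```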

    Spatial low-discrepancy sequences, spherical cone discrepancy, and applications in financial modeling

    In this paper we introduce a reproducing kernel Hilbert space defined on $\mathbb{R}^{d+1}$ as the tensor product of a reproducing kernel defined on the unit sphere $\mathbb{S}^{d}$ in $\mathbb{R}^{d+1}$ and a reproducing kernel defined on $[0,\infty)$. We extend Stolarsky's invariance principle to this case and prove upper and lower bounds for numerical integration in the corresponding reproducing kernel Hilbert space. The idea of separating the direction from the distance from the origin can also be applied to the construction of quadrature methods. An extension of the area-preserving Lambert transform is used to generate points on $\mathbb{S}^{d-1}$ by lifting Sobol' points in $[0,1)^{d}$ to the sphere. The $d$-th component of each Sobol' point, suitably transformed, provides the distance information, so that the resulting point set is normally distributed in $\mathbb{R}^{d}$. Numerical tests provide evidence of the usefulness of constructing quasi-Monte Carlo type methods for integration in such spaces. We also test this method on examples from financial applications (option pricing problems) and compare the results with traditional methods for numerical integration in $\mathbb{R}^{d}$. Comment: 37 pages, 6 tables
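    A sketch of the direction/distance separation described above, specialised to $d=3$ and assuming scipy: two Sobol' coordinates give a direction on $\mathbb{S}^{2}$ via the area-preserving Lambert map and the third gives the radius via the chi(3) inverse CDF, so the points are approximately standard normal in $\mathbb{R}^{3}$. The general $\mathbb{S}^{d-1}$ lifting of the paper is not reproduced.

```python
# Lift scrambled Sobol' points to normally distributed points in R^3.
import numpy as np
from scipy.stats import chi, qmc

u = qmc.Sobol(d=3, scramble=True, seed=0).random_base2(12)
u = np.clip(u, 1e-12, 1 - 1e-12)                    # guard the inverse CDFs at the endpoints

z   = 2.0 * u[:, 0] - 1.0                           # Lambert map: equal-area in z
phi = 2.0 * np.pi * u[:, 1]
dirs = np.column_stack([np.sqrt(1 - z**2) * np.cos(phi),
                        np.sqrt(1 - z**2) * np.sin(phi),
                        z])                          # directions on S^2

r = chi.ppf(u[:, 2], df=3)                          # radius of a standard normal in R^3
pts = r[:, None] * dirs                             # approximately N(0, I_3) distributed

print(pts.mean(axis=0), np.cov(pts.T).round(2))     # near zero mean, near identity covariance
```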

    Local antithetic sampling with scrambled nets

    We consider the problem of computing an approximation to the integral $I=\int_{[0,1]^d}f(x)\,dx$. Monte Carlo (MC) sampling typically attains a root mean squared error (RMSE) of $O(n^{-1/2})$ from $n$ independent random function evaluations. By contrast, quasi-Monte Carlo (QMC) sampling using carefully equispaced evaluation points can attain the rate $O(n^{-1+\varepsilon})$ for any $\varepsilon>0$, and randomized QMC (RQMC) can attain the RMSE $O(n^{-3/2+\varepsilon})$, both under mild conditions on $f$. Classical variance reduction methods for MC can be adapted to QMC. Published results combining QMC with importance sampling and with control variates have found worthwhile improvements, but no change in the error rate. This paper extends the classical variance reduction method of antithetic sampling and combines it with RQMC. One such method is shown to bring a modest improvement in the RMSE rate, attaining $O(n^{-3/2-1/d+\varepsilon})$ for any $\varepsilon>0$, for smooth enough $f$. Comment: Published at http://dx.doi.org/10.1214/07-AOS548 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
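    A sketch combining antithetic sampling with scrambled-net RQMC. For simplicity it uses the global reflection $x \mapsto 1-x$; the paper's local scheme instead reflects each point within a small subcube around it, which is what achieves the improved rate.

```python
# Global antithetic pairing combined with a scrambled Sobol' point set.
import numpy as np
from scipy.stats import qmc

def antithetic_rqmc(f, d, m, seed=0):
    """Average f over a scrambled Sobol' set and its coordinatewise reflection 1 - x."""
    x = qmc.Sobol(d=d, scramble=True, seed=seed).random_base2(m)
    return 0.5 * (f(x).mean() + f(1.0 - x).mean())

# Smooth test integrand on [0,1]^3 with known integral (e - 1)^3 ~ 5.07.
f = lambda x: np.exp(x).prod(axis=1)
print(antithetic_rqmc(f, d=3, m=12))
```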