104 research outputs found

    Integral equations, quasi-Monte Carlo methods and risk modelling

    We survey a QMC approach to integral equations and develop some new applications to risk modelling. In particular, a rigorous error bound derived from Koksma–Hlawka-type inequalities is obtained for certain expectations related to the probability of ruin in Markovian models. The method is based on a new concept of isotropic discrepancy and its applications to numerical integration. The theoretical results are complemented by numerical examples and computations.
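
    The following is a minimal sketch of the kind of computation the abstract refers to: estimating a finite-horizon ruin probability by averaging over a low-discrepancy point set instead of pseudo-random samples. The discrete-time surplus model, the Halton construction and all parameters are illustrative assumptions, not the Markovian models or the isotropic-discrepancy machinery of the paper.

    ```python
    import math

    def van_der_corput(n, base=2):
        """Radical inverse of n in the given base (one coordinate of a Halton point)."""
        q, denom = 0.0, 1.0
        while n > 0:
            n, r = divmod(n, base)
            denom *= base
            q += r / denom
        return q

    def halton_point(n, bases):
        """n-th Halton point in len(bases) dimensions."""
        return [van_der_corput(n, b) for b in bases]

    def ruin_probability_qmc(u0=10.0, premium=1.5, claim_mean=1.0, horizon=4, n_points=2**12):
        """Estimate P(surplus drops below 0 within `horizon` periods) by a QMC average.

        Each period the surplus gains `premium` and pays an Exp(claim_mean) claim;
        claims are obtained by inverse-CDF from one Halton coordinate per period.
        (Supports horizon up to 8 with the primes listed below.)
        """
        primes = [2, 3, 5, 7, 11, 13, 17, 19][:horizon]
        ruined = 0
        for i in range(1, n_points + 1):
            u = u0
            for v in halton_point(i, primes):
                # inverse CDF of the exponential distribution; clamp v away from 1 for safety
                claim = -claim_mean * math.log(1.0 - min(v, 1.0 - 1e-12))
                u += premium - claim
                if u < 0:
                    ruined += 1
                    break
        return ruined / n_points

    print(ruin_probability_qmc())
    ```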

    On functions of bounded variation

    The recently introduced concept of $\mathcal{D}$-variation unifies previous concepts of variation of multivariate functions. In this paper, we give an affirmative answer to the open question from Pausinger & Svane (J. Complexity, 2014) of whether every function of bounded Hardy–Krause variation is Borel measurable and has bounded $\mathcal{D}$-variation. Moreover, we show that the space of functions of bounded $\mathcal{D}$-variation can be turned into a commutative Banach algebra.
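
    For context, one common convention for the Hardy–Krause variation (anchored at 1), which the $\mathcal{D}$-variation generalizes, is sketched below; here $f(x_u; \mathbf{1})$ fixes the coordinates outside $u$ to 1, and $V^{(|u|)}$ denotes the Vitali variation on the corresponding face.

    ```latex
    % Hardy--Krause variation of f on [0,1]^d, anchored at 1 (one common convention):
    % the sum of the Vitali variations of f restricted to the upper faces of the cube.
    V_{\mathrm{HK}}(f) \;=\; \sum_{\emptyset \neq u \subseteq \{1,\dots,d\}}
        V^{(|u|)}\bigl( f(x_u; \mathbf{1}) \bigr), \qquad x_u \in [0,1]^{|u|}.
    ```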

    Some highlights of Harald Niederreiter's work

    In this paper we give a short biography of Harald Niederreiter and we spotlight some cornerstones from his wide-ranging work. We focus on his results on uniform distribution, algebraic curves, polynomials and quasi-Monte Carlo methods. In the spirit of Harald's work we also mention some applications, including numerical integration, coding theory and cryptography.

    Low-discrepancy point sets for non-uniform measures

    In the present paper we prove several results concerning the existence of low-discrepancy point sets with respect to an arbitrary non-uniform measure $\mu$ on the $d$-dimensional unit cube. We improve a theorem of Beck by showing that for any $d \geq 1$, $N \geq 1$, and any non-negative, normalized Borel measure $\mu$ on $[0,1]^d$ there exists a point set $x_1, \dots, x_N \in [0,1]^d$ whose star-discrepancy with respect to $\mu$ is of order $D_N^*(x_1, \dots, x_N; \mu) \ll \frac{(\log N)^{(3d+1)/2}}{N}$. For the proof we use a theorem of Banaszczyk concerning the balancing of vectors, which implies an upper bound for the linear discrepancy of hypergraphs. Furthermore, the theory of large deviation bounds for empirical processes indexed by sets is discussed, and we prove a numerically explicit upper bound for the inverse of the discrepancy for Vapnik–Červonenkis classes. Finally, using a recent version of the Koksma–Hlawka inequality due to Brandolini, Colzani, Gigante and Travaglini, we show that our results imply the existence of cubature rules yielding fast convergence rates for the numerical integration of functions having discontinuities of a certain form. Comment: 24 pages.
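
    For reference, the star-discrepancy with respect to a measure $\mu$ appearing in the bound above is the usual supremum, over anchored boxes, of the difference between the empirical measure of the point set and $\mu$:

    ```latex
    % Star-discrepancy of x_1,...,x_N with respect to a normalized Borel measure \mu on [0,1]^d,
    % where [0,t) = [0,t_1) x ... x [0,t_d) runs over all anchored boxes.
    D_N^*(x_1, \dots, x_N; \mu)
      \;=\; \sup_{t \in [0,1]^d}
      \left| \frac{1}{N} \sum_{n=1}^{N} \mathbf{1}_{[0,t)}(x_n) \;-\; \mu\bigl([0,t)\bigr) \right|.
    ```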

    Tusnády's problem, the transference principle, and non-uniform QMC sampling

    It is well known that for every $N \geq 1$ and $d \geq 1$ there exist point sets $x_1, \dots, x_N \in [0,1]^d$ whose discrepancy with respect to the Lebesgue measure is of order at most $(\log N)^{d-1} N^{-1}$. In a more general setting, the first author proved together with Josef Dick that for any normalized measure $\mu$ on $[0,1]^d$ there exist points $x_1, \dots, x_N$ whose discrepancy with respect to $\mu$ is of order at most $(\log N)^{(3d+1)/2} N^{-1}$. The proof used methods from combinatorial mathematics, and in particular a result of Banaszczyk on balancings of vectors. In the present note we use a version of the so-called transference principle together with recent results on the discrepancy of red-blue colorings to show that for any $\mu$ there even exist points having discrepancy of order at most $(\log N)^{d-\frac{1}{2}} N^{-1}$, which is almost as good as the discrepancy bound in the case of the Lebesgue measure. Comment: 11 pages.

    Proof Techniques in Quasi-Monte Carlo Theory

    In this survey paper we discuss some tools and methods which are of use in quasi-Monte Carlo (QMC) theory. We group them into chapters on Numerical Analysis, Harmonic Analysis, Algebra and Number Theory, and Probability Theory. We do not provide a comprehensive survey of all tools, but focus on a few of them, including reproducing and covariance kernels, Littlewood–Paley theory, Riesz products, Minkowski's fundamental theorem, exponential sums, Diophantine approximation, Hoeffding's inequality and empirical processes, as well as other tools. We illustrate the use of these methods in QMC using examples. Comment: Revised version.
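
    As one example of the probabilistic tools named above, Hoeffding's inequality applied to plain Monte Carlo integration of a bounded integrand reads as follows; this is the standard statement, not a result specific to the survey.

    ```latex
    % Hoeffding's inequality for Monte Carlo integration: X_1,...,X_N i.i.d. uniform on [0,1]^d,
    % and f takes values in [a,b].
    \Pr\left( \left| \frac{1}{N}\sum_{n=1}^{N} f(X_n) - \int_{[0,1]^d} f(x)\,\mathrm{d}x \right| \ge t \right)
      \;\le\; 2\exp\left( -\frac{2 N t^2}{(b-a)^2} \right).
    ```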

    Walsh Figure of Merit for Digital Nets: An Easy Measure for Higher Order Convergent QMC

    Fix an integer $s$. Let $f:[0,1)^s \to \mathbb{R}$ be an integrable function. Let $P \subset [0,1]^s$ be a finite point set. Quasi-Monte Carlo integration of $f$ by $P$ is the average value of $f$ over $P$, which approximates the integral of $f$ over the $s$-dimensional cube. The Koksma–Hlawka inequality tells us that, by a smart choice of $P$, one may expect the error to decrease roughly as $O(N^{-1}(\log N)^s)$. For any $\alpha \geq 1$, J. Dick gave a construction of point sets such that for $\alpha$-smooth $f$ a convergence rate of $O(N^{-\alpha}(\log N)^{s\alpha})$ is assured. As a coarse version of his theory, M.-Saito-Matoba introduced the Walsh figure of merit (WAFOM), which gives the convergence rate $O(N^{-C\log N/s})$. WAFOM is efficiently computable. By a brute-force search for low-WAFOM point sets, we observe a convergence rate of order $N^{-\alpha}$ with $\alpha > 1$ for several test integrands for $s = 4$ and $8$. Comment: 17 pages, 4 figures. Submitted to: Monte Carlo and Quasi-Monte Carlo Methods 201
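
    The sketch below illustrates the basic QMC recipe described above (the average of $f$ over a point set), using a Hammersley point set and a smooth test integrand. It only illustrates the plain $O(N^{-1}(\log N)^s)$ regime, not Dick's higher-order nets or WAFOM; the point set, integrand and sizes are assumptions made here for illustration.

    ```python
    import math, random

    def radical_inverse(n, base):
        """Van der Corput radical inverse of n in the given base."""
        q, denom = 0.0, 1.0
        while n > 0:
            n, r = divmod(n, base)
            denom *= base
            q += r / denom
        return q

    def hammersley(N, s):
        """s-dimensional Hammersley point set with N points (first coordinate n/N)."""
        primes = [2, 3, 5, 7, 11, 13, 17][: s - 1]
        return [[n / N] + [radical_inverse(n, p) for p in primes] for n in range(N)]

    def qmc_average(f, points):
        """Quasi-Monte Carlo approximation: the average value of f over the point set."""
        return sum(f(x) for x in points) / len(points)

    # Smooth test integrand on [0,1)^s; each factor integrates to 1, so the exact integral is 1.
    f = lambda x: math.prod(1.0 + 0.5 * (xj - 0.5) for xj in x)

    s, N = 4, 2**12
    qmc_err = abs(qmc_average(f, hammersley(N, s)) - 1.0)
    mc_err = abs(qmc_average(f, [[random.random() for _ in range(s)] for _ in range(N)]) - 1.0)
    print(f"QMC error: {qmc_err:.2e}   MC error: {mc_err:.2e}")
    ```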

    A Discrepancy-Based Design for A/B Testing Experiments

    The aim of this paper is to introduce a new design-of-experiments method for A/B tests in order to balance the covariate information in all treatment groups. A/B tests (or "A/B/n tests") refer to the experiments and the corresponding inference on the treatment effect(s) of a two-level or multi-level controllable experimental factor. The common practice is to use a randomized design and perform hypothesis tests on the estimates. However, such estimation and inference are not always accurate when covariate imbalance exists among the treatment groups. To overcome this issue, we propose a discrepancy-based criterion and show that the design minimizing this criterion significantly improves the accuracy of the treatment effect(s) estimates. The discrepancy-based criterion is model-free and thus makes the estimation of the treatment effect(s) robust to the model assumptions. More importantly, the proposed design is applicable to both continuous and categorical response measurements. We develop two efficient algorithms to construct the designs by optimizing the criterion for both offline and online A/B tests. Through simulation studies and a real example, we show that the proposed design approach achieves good covariate balance and accurate estimation. Comment: 42 pages, 10 figures.
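
    A minimal sketch of the idea of a balance-driven (rather than purely randomized) assignment is given below. The imbalance criterion used here is just the squared distance between each group's covariate mean and the overall mean, a simple stand-in for the paper's discrepancy-based criterion; the greedy allocation, the data and all parameters are illustrative assumptions.

    ```python
    import numpy as np

    def imbalance(X, groups, k):
        """Covariate imbalance: squared distance between each group's covariate mean
        and the overall mean, summed over groups (a simple stand-in criterion)."""
        overall = X.mean(axis=0)
        total = 0.0
        for g in range(k):
            members = X[groups == g]
            if len(members):
                total += float(((members.mean(axis=0) - overall) ** 2).sum())
        return total

    def greedy_design(X, k=2):
        """Assign units to k groups one at a time, choosing for each unit the group
        that keeps the imbalance criterion smallest, subject to near-equal group sizes."""
        n = len(X)
        groups = np.full(n, -1)
        cap = [n // k + (g < n % k) for g in range(k)]                    # group capacities
        order = np.argsort(-np.linalg.norm(X - X.mean(axis=0), axis=1))   # extreme units first
        for i in order:
            best_g, best_val = None, None
            for g in range(k):
                if (groups == g).sum() >= cap[g]:
                    continue
                groups[i] = g
                val = imbalance(X, groups, k)
                if best_val is None or val < best_val:
                    best_g, best_val = g, val
            groups[i] = best_g
        return groups

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                 # 100 experimental units, 3 covariates
    balanced = greedy_design(X, k=2)
    randomized = rng.integers(0, 2, size=100)
    print("balanced design imbalance:  ", imbalance(X, balanced, 2))
    print("randomized design imbalance:", imbalance(X, randomized, 2))
    ```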

    Metric number theory, lacunary series and systems of dilated functions

    By a classical result of Weyl, for any increasing sequence $(n_k)_{k \geq 1}$ of integers the sequence of fractional parts $(\{n_k x\})_{k \geq 1}$ is uniformly distributed modulo 1 for almost all $x \in [0,1]$. Except for a few special cases, e.g. when $n_k = k$, $k \geq 1$, the exceptional set cannot be described explicitly. The exact asymptotic order of the discrepancy of $(\{n_k x\})_{k \geq 1}$ is only known in a few special cases, for example when $(n_k)_{k \geq 1}$ is a (Hadamard) lacunary sequence, that is, when $n_{k+1}/n_k \geq q > 1$, $k \geq 1$. In this case of quickly increasing $(n_k)_{k \geq 1}$ the system $(\{n_k x\})_{k \geq 1}$ (or, more generally, $(f(n_k x))_{k \geq 1}$ for a 1-periodic function $f$) shows many asymptotic properties which are typical for the behavior of systems of independent random variables. Precise results depend on a fascinating interplay between analytic, probabilistic and number-theoretic phenomena. Without any growth conditions on $(n_k)_{k \geq 1}$ the situation becomes much more complicated, and the system $(f(n_k x))_{k \geq 1}$ will typically fail to satisfy probabilistic limit theorems. An important problem which remains is to study the almost everywhere convergence of series $\sum_{k=1}^\infty c_k f(k x)$, which is closely related to finding upper bounds for maximal $L^2$-norms of the form $\int_0^1 \big( \max_{1 \leq M \leq N} \big| \sum_{k=1}^M c_k f(kx) \big| \big)^2 \, dx$. The most striking example of this connection is the equivalence of the Carleson convergence theorem and the Carleson–Hunt inequality for maximal partial sums of Fourier series. For general functions $f$ this is a very difficult problem, which is related to finding upper bounds for certain sums involving greatest common divisors. Comment: Survey paper for the RICAM workshop on "Uniform Distribution and Quasi-Monte Carlo Methods", held October 14–18, 2013, in Linz, Austria. This article will appear in the proceedings volume for this workshop, published as part of the "Radon Series on Computational and Applied Mathematics" by De Gruyter.
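
    As a small numerical companion to Weyl's theorem in its simplest case $n_k = k$, the sketch below computes the one-dimensional star discrepancy of the fractional parts $\{k x\}$ for an irrational $x$ and compares it with i.i.d. uniform points. The choice of $x$, the sample size and the comparison are illustrative assumptions.

    ```python
    import math, random

    def star_discrepancy_1d(points):
        """Exact one-dimensional star discrepancy of a finite point set in [0,1)."""
        xs = sorted(points)
        N = len(xs)
        return max(max((i + 1) / N - p, p - i / N) for i, p in enumerate(xs))

    N = 5000
    x = math.sqrt(2) - 1                                      # a badly approximable irrational x
    kronecker = [(k * x) % 1.0 for k in range(1, N + 1)]      # fractional parts {k x}: the case n_k = k
    iid = [random.random() for _ in range(N)]                 # independent uniform points for comparison

    # For this x the discrepancy of {k x} is of order (log N)/N, while i.i.d. points
    # are typically only of order N^(-1/2).
    print("D_N* of {k x}:        ", star_discrepancy_1d(kronecker))
    print("D_N* of i.i.d. points:", star_discrepancy_1d(iid))
    ```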

    On quasi-Monte Carlo simulation of stochastic differential equations

    In a number of problems of mathematical physics and other fields, stochastic differential equations are used to model certain phenomena. Often the solution of those problems can be obtained as a functional of the solution of some specific stochastic differential equation, and we may then use the idea of weak approximation to carry out numerical simulation. We analyze some complexity issues for a class of linear stochastic differential equations (of Langevin type), which can be given by $dX_t = -\alpha X_t\,dt + \beta(t)\,dW_t$, $X_0 := 0$, where $\alpha > 0$ and $\beta: [0, T] \to \mathbb{R}$. It turns out that for a class of input data which are no more than Lipschitz continuous, the explicit Euler scheme gives rise to an optimal (by order) numerical method. We then study numerical phenomena which occur when switching from (real) Monte Carlo simulation to quasi-Monte Carlo simulation, which is the case when we carry out the simulation on computers. It is easily seen that completely uniformly distributed sequences yield good substitutes for random variates, while not all uniformly distributed (mod 1) sequences are suited. In fact we provide necessary conditions on a sequence in order for it to serve for quasi-Monte Carlo purposes. This condition is expressed in terms of the measure of well-distribution. Numerical examples complement the theoretical analysis.
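
    Below is a minimal sketch of the explicit Euler scheme for this Langevin-type equation, written so that the uniform numbers driving the Brownian increments are supplied externally (as they would be when replacing pseudo-random numbers by some other uniform sequence). The weak-approximation test, the choice $\beta \equiv 1$ and all parameters are illustrative assumptions; a pseudo-random driver is used here rather than the completely uniformly distributed constructions discussed in the work.

    ```python
    from statistics import NormalDist
    import math, random

    def euler_langevin(alpha, beta, T, n_steps, uniforms):
        """Explicit Euler scheme for dX_t = -alpha * X_t dt + beta(t) dW_t, X_0 = 0.

        Brownian increments are produced from the supplied uniforms via the inverse
        normal CDF, so the same routine works with pseudo-random or other uniform
        (mod 1) drivers; not every such driver is admissible, which is the point
        the abstract makes about completely uniformly distributed sequences.
        """
        dt = T / n_steps
        inv = NormalDist().inv_cdf
        x, t = 0.0, 0.0
        for u in uniforms[:n_steps]:
            dw = math.sqrt(dt) * inv(min(max(u, 1e-12), 1 - 1e-12))
            x += -alpha * x * dt + beta(t) * dw
            t += dt
        return x

    # Weak-approximation check: estimate E[X_T^2] for beta(t) = 1 and compare with the
    # exact value (1 - exp(-2*alpha*T)) / (2*alpha) for the Ornstein-Uhlenbeck process.
    alpha, T, n_steps, n_paths = 0.5, 1.0, 64, 10000
    est = sum(
        euler_langevin(alpha, lambda t: 1.0, T, n_steps,
                       [random.random() for _ in range(n_steps)]) ** 2
        for _ in range(n_paths)
    ) / n_paths
    exact = (1 - math.exp(-2 * alpha * T)) / (2 * alpha)
    print(f"estimate {est:.4f}  vs exact {exact:.4f}")
    ```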