
On the Discrepancy of Jittered Sampling

Abstract

We study the discrepancy of jittered sampling sets: such a set $\mathcal{P} \subset [0,1]^d$ is generated for fixed $m \in \mathbb{N}$ by partitioning $[0,1]^d$ into $m^d$ axis-aligned cubes of equal measure and placing a random point inside each of the $N = m^d$ cubes. We prove that, for $N$ sufficiently large,
$$\frac{1}{10}\frac{d}{N^{\frac{1}{2} + \frac{1}{2d}}} \leq \mathbb{E}\, D_N^*(\mathcal{P}) \leq \frac{\sqrt{d}\,(\log N)^{\frac{1}{2}}}{N^{\frac{1}{2} + \frac{1}{2d}}},$$
where the upper bound, with an unspecified constant $C_d$, was proven earlier by Beck. Our proof makes crucial use of the sharp Dvoretzky–Kiefer–Wolfowitz inequality and a suitably tailored Bernstein inequality; we have reason to believe that the upper bound has the sharp scaling in $N$. Additional heuristics suggest that jittered sampling should be able to improve known bounds on the inverse of the star-discrepancy in the regime $N \gtrsim d^d$. We also prove a partition principle showing that every partition of $[0,1]^d$, combined with a jittered sampling construction, gives rise to a set whose expected squared $L^2$-discrepancy is smaller than that of purely random points.
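The jittered sampling construction described above (partition $[0,1]^d$ into $m^d$ congruent subcubes of side $1/m$ and draw one uniform point in each) can be sketched as follows; this is an illustrative implementation, not code from the paper, and the function name `jittered_sample` is our own choice.

```python
import itertools
import random

def jittered_sample(m, d, rng=None):
    """Return the N = m^d points of a jittered sampling set in [0,1]^d:
    one uniform random point in each axis-aligned subcube of side 1/m."""
    rng = rng or random.Random()
    points = []
    for corner in itertools.product(range(m), repeat=d):
        # corner/m is the lower-left vertex of a subcube; jitter inside it
        points.append(tuple((c + rng.random()) / m for c in corner))
    return points

pts = jittered_sample(m=4, d=2)
assert len(pts) == 16  # N = m^d = 4^2 points
```

Each point is stratified: its $i$-th coordinate lies in $[c_i/m, (c_i+1)/m)$ for its assigned subcube corner $(c_1, \dots, c_d)$, which is the structural difference from $N$ purely i.i.d. uniform points.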
