Construction of Good Rank-1 Lattice Rules Based on the Weighted Star Discrepancy
The ‘goodness’ of a set of quadrature points in [0,1]^d may be measured by the weighted star discrepancy. If the weights for the weighted star discrepancy are summable, then we show that for n prime there exist n-point rank-1 lattice rules whose weighted star discrepancy is O(n^{−1+δ}) for any δ > 0, where the implied constant depends on δ and the weights, but is independent of d and n. Further, we show that the generating vector z for such lattice rules may be obtained using a component-by-component construction. The results given here for the weighted star discrepancy are used to derive corresponding results for a weighted L_p discrepancy.
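The n-point rank-1 lattice rules discussed in these abstracts use the point set x_k = {k z / n}, k = 0, …, n−1, where the braces denote the componentwise fractional part. A minimal sketch of generating such a point set (the helper name is hypothetical; NumPy is used for brevity):

```python
import numpy as np

def rank1_lattice(n, z):
    """Points of the n-point rank-1 lattice rule with generating vector z:
    x_k = {k z / n}, k = 0, ..., n-1 (braces = fractional part)."""
    z = np.asarray(z)
    k = np.arange(n).reshape(-1, 1)   # column of indices 0..n-1
    return (k * z % n) / n            # fractional parts, all in [0, 1)
```

For n = 5 and z = (1, 2) this returns the five points (0, 0), (0.2, 0.4), (0.4, 0.8), (0.6, 0.2), (0.8, 0.6).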
Component-by-component construction of good intermediate-rank lattice rules
It is known that the generating vector of a rank-1 lattice rule can be constructed component-by-component to achieve strong tractability error bounds in both weighted Korobov spaces and weighted Sobolev spaces. Since the weights for these spaces are nonincreasing, the first few variables are in a sense more important than the rest. We thus propose to copy the points of a rank-1 lattice rule a number of times in the first few dimensions to yield an intermediate-rank lattice rule. We show that the generating vector (and in weighted Sobolev spaces, the shift also) of an intermediate-rank lattice rule can also be constructed component-by-component to achieve strong tractability error bounds. In certain circumstances, these bounds are better than the corresponding bounds for rank-1 lattice rules.
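The copying construction can be sketched as follows: translate the n points of a rank-1 rule by every shift (j_1/ℓ, …, j_r/ℓ, 0, …, 0) with j_i ∈ {0, …, ℓ−1}, giving ℓ^r · n points. The function name and the generic copy factor ℓ are illustrative, not taken from the paper:

```python
import itertools
import numpy as np

def intermediate_rank_lattice(n, z, r, ell):
    """Copy an n-point rank-1 lattice rule ell times in each of the first
    r coordinates, yielding an intermediate-rank rule with ell**r * n points
    (a sketch of the copying idea described above)."""
    z = np.asarray(z)
    d = len(z)
    base = (np.arange(n).reshape(-1, 1) * z % n) / n   # rank-1 points
    shifts = np.zeros((ell**r, d))
    for i, j in enumerate(itertools.product(range(ell), repeat=r)):
        shifts[i, :r] = np.array(j) / ell              # shift first r coords
    pts = (base[None, :, :] + shifts[:, None, :]) % 1.0
    return pts.reshape(-1, d)
```

For r = 1 and ℓ = 2 this doubles the point count by adding the half-shifted copy of the rule in the first coordinate.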
Good lattice rules with a composite number of points based on the product weighted star discrepancy
Rank-1 lattice rules based on a weighted star discrepancy with weights of a product form have been previously constructed under the assumption that the number of points is prime. Here, we extend these results to the non-prime case. We show that if the weights are summable, there exist lattice rules whose weighted star discrepancy is O(n^{−1+δ}), for any δ > 0, with the implied constant independent of the dimension and the number of lattice points, but dependent on δ and the weights. Then we show that the generating vector of such a rule can be constructed using a component-by-component (CBC) technique. The cost of the CBC construction is analysed in the final part of the paper.
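A naive O(d·n²) version of a CBC construction can be sketched as follows. As a computable criterion we use the worst-case error in a weighted Korobov space with smoothness α = 2 and product weights; this is a standard illustrative stand-in, not the weighted star discrepancy bound analysed in the paper. The gcd check keeps only admissible components, so the sketch also covers composite n:

```python
import math
import numpy as np

def cbc(n, d, gamma):
    """Greedy CBC search: fix the components of z one at a time, each time
    picking the candidate that minimizes the current squared worst-case
    error in a weighted Korobov space (alpha = 2, product weights gamma).
    Illustrative criterion only; naive cost O(d * n^2)."""
    k = np.arange(n)
    x = k / n
    omega = 2 * math.pi**2 * (x**2 - x + 1.0 / 6.0)  # 2*pi^2 * B_2({x})
    prod = np.ones(n)   # running product over the dimensions fixed so far
    z = []
    for s in range(d):
        best_c, best_e = None, math.inf
        for c in range(1, n):
            if math.gcd(c, n) != 1:   # admissible components (any n)
                continue
            e2 = -1 + (prod * (1 + gamma[s] * omega[k * c % n])).mean()
            if e2 < best_e:
                best_c, best_e = c, e2
        z.append(best_c)
        prod = prod * (1 + gamma[s] * omega[k * best_c % n])
    return z
```

The fast CBC algorithm of Nuyens and Cools reduces the inner minimization from O(n²) to O(n log n) per dimension by exploiting the circulant structure of the candidate evaluations; the loop above is the plain quadratic version.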
Successive Coordinate Search and Component-by-Component Construction of Rank-1 Lattice Rules
The (fast) component-by-component (CBC) algorithm is an efficient tool for the construction of generating vectors for quasi-Monte Carlo rank-1 lattice rules in weighted reproducing kernel Hilbert spaces. We consider product weights, which assign a weight to each dimension; these weights encode the influence of a given variable (or, via the product of the individual weights, of a group of variables), with smaller weights indicating less importance. Kuo (2003) proved that the CBC algorithm achieves the optimal rate of convergence in the respective function spaces, but this does not imply that the algorithm finds the generating vector with the smallest worst-case error; in fact it does not. We investigate a generalization of the component-by-component construction that allows for a general successive coordinate search (SCS), based on an initial generating vector, with the aim of getting closer to the smallest worst-case error. The proposed method admits the same type of worst-case error bounds as the CBC algorithm, independent of the choice of the initial vector. Under the same summability conditions on the weights as in [Kuo, 2003], the error bound of the algorithm can be made independent of the dimension, and we achieve the same optimal order of convergence for the function spaces from [Kuo, 2003]. Moreover, a fast version of our method, based on the fast CBC algorithm by Nuyens and Cools, is available, reducing the computational cost of the algorithm to O(d n log n) operations, where n denotes the number of function evaluations. Numerical experiments seeded by a Korobov-type generating vector show that the new SCS algorithm finds better choices than the CBC algorithm, and the effect is stronger when the weights decay more slowly.
Comment: 13 pages, 1 figure, MCQMC2016 conference (Stanford)