Some Results on the Complexity of Numerical Integration
This is a survey (21 pages, 124 references) written for the MCQMC 2014
conference in Leuven, April 2014. We start with the seminal paper of Bakhvalov
(1959) and end with new results on the curse of dimension and on the complexity
of oscillatory integrals. Some small errors of earlier versions are corrected
Pipage Rounding, Pessimistic Estimators and Matrix Concentration
Pipage rounding is a dependent random sampling technique that has several interesting properties and diverse applications. One property that has been particularly useful is negative correlation of the resulting vector. Unfortunately negative correlation has its limitations, and there are some further desirable properties that do not seem to follow from existing techniques. In particular, recent concentration results for sums of independent random matrices are not known to extend to a negatively dependent setting.
We introduce a simple but useful technique called concavity of pessimistic estimators. This technique allows us to show concentration of submodular functions and conc…
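For readers unfamiliar with the technique, here is a minimal sketch of pipage-style dependent rounding under a single cardinality constraint (a simplified illustration, not the paper's general setting or its pessimistic-estimator machinery): two fractional coordinates are repeatedly paired and mass is shifted between them, with the direction chosen randomly so that each coordinate's expectation is preserved.

```python
import random

def pipage_round(x, eps=1e-9):
    """Round a fractional vector x with integral sum to a 0/1 vector,
    preserving the sum exactly and each coordinate in expectation
    (E[X_i] = x_i)."""
    x = list(x)
    while True:
        frac = [i for i, v in enumerate(x) if eps < v < 1 - eps]
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        # Largest moves in each direction that keep both coords in [0, 1].
        d_up = min(1 - x[i], x[j])   # raise x_i, lower x_j
        d_dn = min(x[i], 1 - x[j])   # lower x_i, raise x_j
        # Direction probabilities chosen so E[x_i] and E[x_j] are unchanged:
        # p * d_up - (1 - p) * d_dn = 0  =>  p = d_dn / (d_up + d_dn).
        if random.random() < d_dn / (d_up + d_dn):
            x[i] += d_up
            x[j] -= d_up
        else:
            x[i] -= d_dn
            x[j] += d_dn
    return [round(v) for v in x]
```

Each step makes at least one coordinate integral, so the loop terminates; the output always has the same (integral) sum as the input.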
Constructing Low Star Discrepancy Point Sets with Genetic Algorithms
Geometric discrepancies are standard measures to quantify the irregularity of
distributions. They are an important notion in numerical integration. One of
the most important discrepancy notions is the so-called \emph{star
discrepancy}. Roughly speaking, a point set of low star discrepancy value
allows for a small approximation error in quasi-Monte Carlo integration. It is
thus the most studied discrepancy notion.
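As a concrete illustration (not part of the abstract itself): in one dimension the star discrepancy of a sorted point set has a simple closed form, whereas in higher dimensions exact evaluation is hard, which is what motivates heuristic optimization approaches like the one below.

```python
def star_discrepancy_1d(points):
    """Exact star discrepancy of a point set in [0, 1).

    Uses the classical 1D closed form over the sorted points; higher
    dimensions require search or optimization instead.
    """
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))
```

For example, the centered regular grid {1/(2n), 3/(2n), ...} attains the optimal 1D value 1/(2n), consistent with the intuition that lower star discrepancy yields smaller quasi-Monte Carlo integration error.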
In this work we present a new algorithm to compute point sets of low star
discrepancy. The two components of the algorithm (for the optimization and the
evaluation, respectively) are based on evolutionary principles. Our algorithm
clearly outperforms existing approaches. To the best of our knowledge, it is
also the first algorithm which can be adapted easily to optimize inverse star
discrepancies.
Comment: Extended abstract appeared at GECCO 2013. v2: corrected 3 numbers in table.
Structure-Aware Sampling: Flexible and Accurate Summarization
In processing large quantities of data, a fundamental problem is to obtain a
summary which supports approximate query answering. Random sampling yields
flexible summaries which naturally support subset-sum queries with unbiased
estimators and well-understood confidence bounds.
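A minimal sketch of such a sample-based summary with unbiased subset-sum estimators (plain Horvitz-Thompson weighting under uniform sampling without replacement; an illustration of the baseline, not the paper's structure-aware scheme):

```python
import random

def uniform_summary(data, k, rng=random):
    """Sample k of the n keys uniformly without replacement.

    Each key is included with probability k/n, so weighting each sampled
    value by n/k (Horvitz-Thompson) makes subset-sum estimates unbiased.
    """
    n = len(data)
    sample = rng.sample(list(data.items()), k)
    return {key: val * n / k for key, val in sample}

def subset_sum_estimate(summary, predicate):
    """Unbiased estimate of sum(val for key, val in data if predicate(key))."""
    return sum(w for key, w in summary.items() if predicate(key))
```

Any subset-sum query can then be answered from the same summary, which is the flexibility the abstract refers to; variance (and hence the confidence bounds) depends on how the sample was drawn.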
Classic sample-based summaries, however, are designed for arbitrary subset
queries and are oblivious to the structure in the set of keys. The particular
structure, such as hierarchy, order, or product space (multi-dimensional),
makes range queries much more relevant for most analysis of the data.
Dedicated summarization algorithms for range-sum queries have also been
extensively studied. They can outperform existing sampling schemes in terms of
accuracy on range queries per summary size. Their accuracy, however, rapidly
degrades when, as is often the case, the query spans multiple ranges. They are
also less flexible - being targeted for range sum queries alone - and are often
quite costly to build and use.
In this paper we propose and evaluate variance optimal sampling schemes that
are structure-aware. These summaries improve over the accuracy of existing
structure-oblivious sampling schemes on range queries while retaining the
benefits of sample-based summaries: flexible summaries, with high accuracy on
both range queries and arbitrary subset queries.