
    The Space Complexity of 2-Dimensional Approximate Range Counting

    We study the problem of 2-dimensional orthogonal range counting with additive error. Given a set P of n points drawn from an n × n grid and an error parameter ε, the goal is to build a data structure such that for any orthogonal range R, it can return the number of points in P ∩ R with additive error εn. A well-known solution for this problem is the ε-approximation, which is a subset A ⊆ P that can estimate the number of points in P ∩ R with the number of points in A ∩ R. It is known that an ε-approximation of size O(1/ε log^{2.5} 1/ε) exists for any P with respect to orthogonal ranges, and the best lower bound is Ω(1/ε log 1/ε). The ε-approximation is a rather restricted data structure, as we are not allowed to store any information other than the coordinates of the points in P. In this paper, we explore what can be achieved without any restriction on the data structure. We first describe a simple data structure that uses O(1/ε (log^2 1/ε + log n)) bits and answers queries with error εn. We then prove a lower bound: any data structure that answers queries with error εn must use Ω(1/ε (log^2 1/ε + log n)) bits. Our lower bound is information-theoretic: we show that there is a collection of 2^{Ω(n log n)} point sets with large union combinatorial discrepancy, and thus they are hard to distinguish unless we use Ω(n log n) bits.
    Comment: 19 pages, 5 figures
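
    A minimal sketch (not the paper's construction) of the ε-approximation idea described above: the number of points of P in an orthogonal range R is estimated from a subset A ⊆ P by counting A ∩ R and rescaling by |P|/|A|. The uniform random sample and all names below are assumptions made purely for illustration; the paper's ε-approximations and data structures are far more refined.

import random

def build_approximation(points, sample_size):
    # A stand-in for an eps-approximation: here just a uniform random sample of P.
    return random.sample(points, min(sample_size, len(points)))

def range_count(points, rect):
    # Exact count of points inside the axis-aligned rectangle [x1, x2] x [y1, y2].
    x1, y1, x2, y2 = rect
    return sum(1 for (x, y) in points if x1 <= x <= x2 and y1 <= y <= y2)

def approx_range_count(P, A, rect):
    # Estimate |P ∩ R| from |A ∩ R|, rescaled by |P| / |A|.
    return range_count(A, rect) * len(P) / len(A)

if __name__ == "__main__":
    n = 1000
    P = [(random.randrange(n), random.randrange(n)) for _ in range(n)]
    A = build_approximation(P, 100)
    R = (100, 100, 800, 700)
    print("exact:", range_count(P, R), "approx:", approx_range_count(P, A, R))
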

    The space complexity of 2-dimensional approximate range counting

    We study the problem of 2-dimensional orthogonal range counting with additive error. Given a set P of n points drawn from an n × n grid and an error parameter ε, the goal is to build a data structure such that for any orthogonal range R, the data structure can return the number of points in P ∩ R with additive error εn. A well-known solution for this problem is the ε-approximation. Informally speaking, an ε-approximation of P is a subset A ⊆ P that allows us to estimate the number of points in P ∩ R by counting the number of points in A ∩ R. It is known that an ε-approximation of size O(1/ε log^{2.5} 1/ε) exists for any P with respect to orthogonal ranges, and the best lower bound is Ω(1/ε log 1/ε). The ε-approximation is a rather restricted data structure, as we are not allowed to store any information other than the coordinates of a subset of points in P. In this paper, we explore what can be achieved without any restriction on the data structure. We first describe a data structure that uses O(1/ε log 1/ε log log 1/ε log n) bits and answers queries with error εn. We then prove that any data structure that answers queries with error O(log n) must use Ω(n log n) bits. This lower bound has two consequences: 1) answering queries with error O(log n) is as hard as answering the queries exactly; and 2) our upper bound cannot be improved in general by more than an O(log log 1/ε) factor. Copyright © SIAM