
### A New Lower Bound for Semigroup Orthogonal Range Searching

We report the first improvement in the space-time trade-off of lower bounds
for the orthogonal range searching problem in the semigroup model, since
Chazelle's result from 1990. This is one of the very fundamental problems in
range searching with a long history. Previously, Andrew Yao's influential
result had shown that the problem is already non-trivial in one
dimension~\cite{Yao-1Dlb}: using $m$ units of space, the query time $Q(n)$ must
be $\Omega( \alpha(m,n) + \frac{n}{m-n+1})$ where $\alpha(\cdot,\cdot)$ is the
inverse Ackermann function, a very slowly growing function.
In $d$ dimensions, Bernard Chazelle~\cite{Chazelle.LB.II} proved that the
query time must be $Q(n) = \Omega( (\log_\beta n)^{d-1})$ where $\beta = 2m/n$.
Chazelle's lower bound is known to be tight when the space consumption is
`high', i.e., $m = \Omega(n \log^{d+\varepsilon}n)$. We have two main results.
The first is a lower bound showing that Chazelle's bound is not tight in the
`low space' regime: we prove that we must have $m\,Q(n) = \Omega(n (\log n \log\log
n)^{d-1})$. Our lower bound does not close the gap to the existing data
structures; however, our second result shows that our analysis is tight. Thus, we
believe the gap is in fact natural, since the lower bounds are proven for idempotent
semigroups while the data structures are built for general semigroups and thus
they cannot assume (and exploit) the properties of an idempotent semigroup. As a
result, we believe that to close the gap one must either prove lower bounds for
non-idempotent semigroups or build data structures for idempotent
semigroups. We develop significantly new ideas for both of our results that
could be useful in pursuing either of these directions.
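To make the semigroup model concrete, here is a minimal sketch (standard textbook material, not the paper's lower-bound construction) of 1D semigroup range searching: a segment tree stores semigroup sums of canonical intervals using $O(n)$ space and answers a query by combining $O(\log n)$ stored values using only the semigroup operation. The operation `max` used below happens to be idempotent, which is the case the known lower bounds are proven for.

```python
# A segment tree for 1D semigroup range searching: m = O(n) precomputed
# semigroup sums of canonical intervals; any query [l, r] is answered by
# combining O(log n) of them using only the semigroup operation `op`.
class SegmentTree:
    def __init__(self, values, op):
        self.n = len(values)
        self.op = op
        self.tree = [None] * (2 * self.n)
        self.tree[self.n:] = values
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = op(self.tree[2 * i], self.tree[2 * i + 1])

    def query(self, l, r):
        """Semigroup sum of values[l..r], inclusive, in O(log n) time."""
        res = None
        l += self.n
        r += self.n + 1
        while l < r:
            if l & 1:
                res = self.tree[l] if res is None else self.op(res, self.tree[l])
                l += 1
            if r & 1:
                r -= 1
                res = self.tree[r] if res is None else self.op(res, self.tree[r])
            l >>= 1
            r >>= 1
        return res

# `max` is an idempotent semigroup operation (x op x = x).
st = SegmentTree([3, 1, 4, 1, 5, 9, 2, 6], max)
```

The lower-bound question is how much this $O(\log n)$-per-dimension behaviour can be improved as more space is allowed, which is exactly the trade-off the result above tightens for low space.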

### On the complexity of range searching among curves

Modern tracking technology has made the collection of large numbers of
densely sampled trajectories of moving objects widely available. We consider a
fundamental problem encountered when analysing such data: Given $n$ polygonal
curves $S$ in $\mathbb{R}^d$, preprocess $S$ into a data structure that, given
a query curve $q$ and a radius $\rho$, reports the curves of $S$ that
have \Frechet distance at most $\rho$ to $q$.
We initiate a comprehensive analysis of the space/query-time trade-off for
this data structuring problem. Our lower bounds imply that any data structure
in the pointer machine model that achieves $Q(n) + O(k)$ query time, where $k$ is
the output size, has to use roughly $\Omega\left((n/Q(n))^2\right)$ space in
the worst case, even if queries are mere points (for the discrete \Frechet
distance) or line segments (for the continuous \Frechet distance). More
importantly, we show that more complex queries and input curves lead to
additional logarithmic factors in the lower bound. Roughly speaking, the number
of logarithmic factors added is linear in the number of edges added to the
query and input curve complexity. This means that the space/query-time
trade-off worsens by a factor exponential in the input and query complexity. This
behaviour addresses an open question in the range searching literature: whether
it is possible to avoid the additional logarithmic factors in the space and
query time of a multilevel partition tree. We answer this question negatively.
On the positive side, we show we can build data structures for the \Frechet
distance by using semialgebraic range searching. Our solution for the discrete
\Frechet distance is in line with the lower bound, as the number of levels in
the data structure is $O(t)$, where $t$ denotes the maximal number of vertices
of a curve. For the continuous \Frechet distance, the number of levels
increases to $O(t^2)$.
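Since the queries are defined via the \Frechet distance, the following sketch shows the classic Eiter–Mannila dynamic program for the *discrete* \Frechet distance between two point sequences (standard background, not the paper's data structure); a naive range query would simply scan all input curves with it, which is precisely the linear-time behaviour the data structures above are designed to beat.

```python
from functools import lru_cache
from math import dist  # Euclidean distance, Python 3.8+

def discrete_frechet(P, Q):
    """Discrete Frechet distance between point sequences P and Q,
    via the O(|P| * |Q|) dynamic program of Eiter and Mannila (1994)."""
    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j) = cost of the best coupling of P[0..i] with Q[0..j].
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)

def naive_range_query(curves, q, rho):
    """Brute-force baseline: report all curves within distance rho of q."""
    return [S for S in curves if discrete_frechet(q, S) <= rho]
```

Here `naive_range_query` is a hypothetical name for the baseline scan; the abstract's lower bounds say how much space any structure must pay to avoid it.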


### Data Structure Lower Bounds for Document Indexing Problems

We study data structure problems related to document indexing and pattern
matching queries and our main contribution is to show that the pointer machine
model of computation can be extremely useful in proving high and unconditional
lower bounds that cannot be obtained in any other known model of computation
with the current techniques. Often our lower bounds match the known space-query
time trade-off curve and in fact for all the problems considered, there is a
very good and reasonable match between our lower bounds and the known upper
bounds, at least for some choice of input parameters. The problems that we
consider are set intersection queries (both the reporting variant and the
semi-group counting variant), indexing a set of documents for two-pattern
queries, forbidden-pattern queries, or queries with wild-cards, and
indexing an input set of gapped-patterns (or two-patterns) to find those
matching a document given at the query time.

Comment: Full version of the conference version that appeared at ICALP 2016,
25 pages.

### Lower Bounds for Semialgebraic Range Searching and Stabbing Problems

In the semialgebraic range searching problem, we are to preprocess $n$ points
in $\mathbb{R}^d$ s.t. for any query range from a family of constant complexity
semialgebraic sets, all the points intersecting the range can be reported or
counted efficiently. When the ranges are composed of simplices, the problem can
be solved using $S(n)$ space and with $Q(n)$ query time with $S(n)Q^d(n) =
\tilde{O}(n^d)$, and this trade-off is almost tight. Consequently, there exist
low-space structures that use $\tilde{O}(n)$ space with $O(n^{1-1/d})$ query
time and fast-query structures that use $O(n^d)$ space with $O(\log^{d} n)$
query time. However, for general semialgebraic ranges, only low-space
solutions are known, and the best of them match the same trade-off curve as
the simplex queries. It has been conjectured that the same could be done for
the fast-query case, but this open problem has remained unresolved.
Here, we disprove this conjecture. We give the first nontrivial lower bounds
for semialgebraic range searching and related problems. We show that any data
structure for reporting the points between two concentric circles with $Q(n)$
query time must use $S(n)=\Omega(n^{3-o(1)}/Q(n)^5)$ space, meaning, for
$Q(n)=O(\log^{O(1)}n)$, $\Omega(n^{3-o(1)})$ space must be used. We also study
the problem of reporting the points between two polynomials of the form
$Y=\sum_{i=0}^\Delta a_i X^i$ where $a_0, \cdots, a_\Delta$ are given at the
query time. We show $S(n)=\Omega(n^{\Delta+1-o(1)}/Q(n)^{\Delta^2+\Delta})$. So
for $Q(n)=O(\log^{O(1)}n)$, we must use $\Omega(n^{\Delta+1-o(1)})$ space. For
the dual semialgebraic stabbing problems, we show that in linear space, any
data structure that solves 2D ring stabbing must use $\Omega(n^{2/3})$ query
time. This almost matches the linearization upper bound. For general
semialgebraic slab stabbing problems, again, we show almost tight lower
bounds.

Comment: Submitted to SoCG'21; this version: readjusted the table and other
minor changes.
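To illustrate the concentric-circle (annulus) ranges and the linearization mentioned above, here is a sketch (the standard lifting-map device, not the paper's lower-bound construction): lifting a point $(x, y)$ to $(x, y, x^2 + y^2)$ turns the annulus test $r_1^2 \le (x-a)^2 + (y-b)^2 \le r_2^2$ into two linear inequalities in the lifted coordinates, so annulus reporting reduces to slab/halfspace range searching in $\mathbb{R}^3$.

```python
def lift(p):
    """Lift (x, y) to the paraboloid point (x, y, x^2 + y^2)."""
    x, y = p
    return (x, y, x * x + y * y)

def in_annulus_lifted(p, center, r1, r2):
    """Annulus membership via two linear inequalities in lifted space:
    (x-a)^2 + (y-b)^2 = Z - 2aX - 2bY + a^2 + b^2 for (X, Y, Z) = lift(p)."""
    a, b = center
    X, Y, Z = lift(p)
    s = Z - 2 * a * X - 2 * b * Y + a * a + b * b
    return r1 * r1 <= s <= r2 * r2

def annulus_report(points, center, r1, r2):
    # Brute-force baseline; the result above shows that any structure with
    # polylogarithmic query time must use roughly n^3 space for this problem.
    return [p for p in points if in_annulus_lifted(p, center, r1, r2)]
```

The quadratic blow-up of this lifting is what the "linearization upper bound" for ring stabbing refers to, and the lower bounds show such blow-ups cannot in general be avoided.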
