882,205 research outputs found
On Range Searching with Semialgebraic Sets II
Let $P$ be a set of $n$ points in $\mathbb{R}^d$. We present a linear-size data
structure for answering range queries on $P$ with constant-complexity
semialgebraic sets as ranges, in time close to $O(n^{1-1/d})$. It essentially
matches the performance of similar structures for simplex range searching, and,
for $d \ge 5$, significantly improves earlier solutions by the first two authors
obtained in 1994. This almost settles a long-standing open problem in range
searching.
The data structure is based on the polynomial-partitioning technique of Guth
and Katz [arXiv:1011.4105], which shows that for a parameter $r$, $1 < r \le n$, there exists a $d$-variate polynomial $f$ of degree $O(r^{1/d})$ such that
each connected component of $\mathbb{R}^d \setminus Z(f)$ contains at most $n/r$ points
of $P$, where $Z(f)$ is the zero set of $f$. We present an efficient randomized
algorithm for computing such a polynomial partition, which is of independent
interest and is likely to have additional applications.
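The query type handled by this structure can be made concrete with a brute-force sketch (the names `semialgebraic_query`, `disk`, and `upper` are illustrative, not from the paper; its contribution is answering such queries in roughly $O(n^{1-1/d})$ time rather than by the linear scan shown here):

```python
# A constant-complexity semialgebraic range is the set of points satisfying a
# fixed boolean combination of polynomial inequalities; here we model a range
# as a list of polynomial predicates that must all be non-negative.

def semialgebraic_query(points, predicates):
    """Report all points satisfying every polynomial inequality pred(x) >= 0."""
    return [pt for pt in points if all(pred(pt) >= 0 for pred in predicates)]

# Example range: the part of the unit disk on or above the x-axis.
points = [(0.1, 0.2), (0.9, 0.9), (0.5, -0.3), (-0.2, 0.1)]
disk = lambda p: 1.0 - (p[0] ** 2 + p[1] ** 2)   # x^2 + y^2 <= 1
upper = lambda p: p[1]                           # y >= 0
print(semialgebraic_query(points, [disk, upper]))  # → [(0.1, 0.2), (-0.2, 0.1)]
```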
On the complexity of range searching among curves
Modern tracking technology has made the collection of large numbers of
densely sampled trajectories of moving objects widely available. We consider a
fundamental problem encountered when analysing such data: Given $n$ polygonal
curves $S$ in $\mathbb{R}^d$, preprocess $S$ into a data structure that answers
queries with a query curve $q$ and radius $\rho$ for the curves of $S$ that
have Fréchet distance at most $\rho$ to $q$.
We initiate a comprehensive analysis of the space/query-time trade-off for
this data structuring problem. Our lower bounds imply that any data structure
in the pointer model that achieves $Q(n) + O(k)$ query time, where $k$ is
the output size, has to use roughly $\Omega\bigl((n/Q(n))^2\bigr)$ space in
the worst case, even if queries are mere points (for the discrete Fréchet
distance) or line segments (for the continuous Fréchet distance). More
importantly, we show that more complex queries and input curves lead to
additional logarithmic factors in the lower bound. Roughly speaking, the number
of logarithmic factors added is linear in the number of edges added to the
query and input curve complexity. This means that the space/query-time
trade-off worsens by an exponential factor of input and query complexity. This
behaviour addresses an open question in the range searching literature: whether
it is possible to avoid the additional logarithmic factors in the space and
query time of a multilevel partition tree. We answer this question negatively.
On the positive side, we show we can build data structures for the Fréchet
distance by using semialgebraic range searching. Our solution for the discrete
Fréchet distance is in line with the lower bound, as the number of levels in
the data structure is $O(t)$, where $t$ denotes the maximal number of vertices
of a curve. For the continuous Fréchet distance, the number of levels
increases to $O(t^2)$.
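The distance underlying the query (in the discrete case) is the classic discrete Fréchet distance, computable by a standard quadratic dynamic program; a minimal sketch, with a naive linear-scan range query on top (the function names are illustrative, and the paper's point is precisely to beat this scan):

```python
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between vertex sequences P and Q
    (standard O(|P||Q|) dynamic program, Euclidean vertex distances)."""
    def d(i, j):
        return ((P[i][0] - Q[j][0]) ** 2 + (P[i][1] - Q[j][1]) ** 2) ** 0.5

    @lru_cache(maxsize=None)
    def c(i, j):  # cost of the best coupling of prefixes P[0..i], Q[0..j]
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        return max(min(c(i - 1, j), c(i, j - 1), c(i - 1, j - 1)), d(i, j))

    return c(len(P) - 1, len(Q) - 1)

def frechet_range_query(curves, q, rho):
    """Report every input curve within discrete Fréchet distance rho of q."""
    return [C for C in curves if discrete_frechet(C, q) <= rho]
```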
A Multi-variate Discrimination Technique Based on Range-Searching
We present a fast and transparent multi-variate event classification
technique, called PDE-RS, which is based on sampling the signal and background
densities in a multi-dimensional phase space using range-searching. The
employed algorithm is presented in detail and its behaviour is studied with
simple toy examples representing basic patterns of problems often encountered
in High Energy Physics data analyses. In addition, an example relevant for the
search for instanton-induced processes in deep-inelastic scattering at HERA is
discussed. For all studied examples, the newly presented method performs as
well as artificial Neural Networks and has the further advantage of requiring
less computation time. This allows one to carefully select the best combination
of observables which optimally separate the signal and background and for which
the simulations describe the data best. Moreover, the systematic and
statistical uncertainties can be easily evaluated. The method is therefore a
powerful tool to find a small number of signal events in the large data samples
expected at future particle colliders.
Comment: Submitted to NIM, 18 pages, 8 figures
Output-Sensitive Tools for Range Searching in Higher Dimensions
Let $P$ be a set of $n$ points in $\mathbb{R}^d$. A point $p \in P$ is
\emph{$k$-shallow} if it lies in a halfspace which contains at most $k$ points
of $P$ (including $p$). We show that if all points of $P$ are $k$-shallow, then
$P$ can be partitioned into $O(n/k)$ subsets, so that any hyperplane
crosses only a small number of subsets. Given such
a partition, we can apply the standard construction of a spanning tree with
small crossing number within each subset, to obtain a spanning tree for the
point set $P$ with small crossing number. This allows us to extend the construction of Har-Peled
and Sharir \cite{hs11} to three and higher dimensions, to obtain, for any set
$P$ of $n$ points in $\mathbb{R}^d$ (without the shallowness assumption), a
spanning tree $T$ with {\em small relative crossing number}. That is, any
hyperplane which contains $k$ points of $P$ on one side crosses only a
correspondingly small number of edges of $T$. Using a
similar mechanism, we also obtain a data structure for halfspace range
counting, which uses near-linear space (and somewhat higher
preprocessing cost), and answers a query in output-sensitive time, where $k$ is the output size.
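The central object above is a spanning tree with small crossing number: a hyperplane crosses a tree edge when the edge's endpoints lie strictly on opposite sides. A minimal brute-force checker (the names and the (normal, offset) hyperplane encoding are assumptions for illustration):

```python
def side(hyperplane, p):
    """Sign of <a, p> - b for hyperplane (a, b): +1, -1, or 0 (on the plane)."""
    a, b = hyperplane
    s = sum(ai * pi for ai, pi in zip(a, p)) - b
    return (s > 0) - (s < 0)

def crossing_number(edges, hyperplane):
    """Count edges (p, q) whose endpoints lie strictly on opposite sides."""
    return sum(1 for p, q in edges
               if side(hyperplane, p) * side(hyperplane, q) < 0)
```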
A New Lower Bound for Semigroup Orthogonal Range Searching
We report the first improvement in the space-time trade-off of lower bounds
for the orthogonal range searching problem in the semigroup model since
Chazelle's result from 1990. This is one of the very fundamental problems in
range searching, with a long history. Previously, Andrew Yao's influential
result had shown that the problem is already non-trivial in one
dimension~\cite{Yao-1Dlb}: using $m$ units of space, the query time must
be $\Omega(\alpha(m,n) + \frac{n}{m-n+1})$, where $\alpha(m,n)$ is the
inverse Ackermann function, a very slowly growing function.
In $d$ dimensions, Bernard Chazelle~\cite{Chazelle.LB.II} proved that the
query time must be $\Omega((\log_\beta n)^{d-1})$, where $\beta = 2m/n$.
Chazelle's lower bound is known to be tight for $d=2$ when space consumption is
`high', i.e., $m = \Omega(n\log^{d+\varepsilon} n)$. We have two main results.
The first is a lower bound that shows Chazelle's lower bound was not tight for
`low space': we prove that we must have $m\,Q(n) = \Omega(n(\log n \log\log n)^{d-1})$, where $Q(n)$ is the query time. Our lower bound does not close the gap to the existing data
structures; however, our second result is that our analysis is tight. Thus, we
believe the gap is in fact natural, since lower bounds are proven for idempotent
semigroups while the data structures are built for general semigroups and thus
they cannot assume (and use) the properties of an idempotent semigroup. As a
result, we believe that to close the gap one must either study lower bounds for
non-idempotent semigroups or build data structures for idempotent
semigroups. We develop significantly new ideas for both of our results that
could be useful in pursuing either of these directions.
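The idempotent/general distinction above has a concrete algorithmic face. In the semigroup model only the associative operation is available (no subtraction, so prefix sums are out), but for an idempotent semigroup such as $(\mathbb{Z}, \min)$ the classic sparse table answers a 1-D range query with a single operation after $O(n \log n)$ preprocessing, precisely because two overlapping precomputed intervals can be combined without double-counting; for a non-idempotent semigroup like $(\mathbb{Z}, +)$ that overlap trick is invalid. A standard sketch:

```python
def build_sparse_table(a):
    """table[j][i] = min of a[i .. i + 2^j - 1]; O(n log n) preprocessing."""
    n = len(a)
    table = [list(a)]
    j = 1
    while (1 << j) <= n:
        prev = table[-1]
        table.append([min(prev[i], prev[i + (1 << (j - 1))])
                      for i in range(n - (1 << j) + 1)])
        j += 1
    return table

def range_min(table, l, r):
    """Min of a[l..r] (inclusive) by combining two overlapping dyadic blocks.
    Correct only because min is idempotent: the overlap is counted twice."""
    j = (r - l + 1).bit_length() - 1
    return min(table[j][l], table[j][r - (1 << j) + 1])
```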
A technique for adding range restrictions to generalized searching problems
In a generalized searching problem, a set $S$ of $n$ colored geometric objects has to be stored in a data structure, such that for any given query object $q$, the distinct colors of the objects of $S$ intersected by $q$ can be reported efficiently. In this paper, a general technique is presented for adding a range restriction to such a problem. The technique is applied to the problem of querying a set of colored points (resp.\ fat triangles) with a fat triangle (resp.\ point). For both problems, a data structure is obtained having size $O(n^{1+\varepsilon})$ and query time $O(\log n + C)$. Here, $C$ denotes the number of colors reported by the query, and $\varepsilon$ is an arbitrarily small positive constant.
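A toy sketch of the query semantics under assumed simplifications (the class and parameter names are illustrative, and this is a brute-force filter, not the paper's multilevel construction): a base generalized query, a predicate on one coordinate, is combined with a range restriction on another coordinate, here resolved by binary search over the sorted restricted coordinate.

```python
import bisect

class RangeRestricted:
    """Distinct-color queries with an added one-dimensional range restriction."""

    def __init__(self, points):          # points: iterable of (x, y, color)
        self.points = sorted(points)     # sorted by the restricted coordinate x
        self.xs = [p[0] for p in self.points]

    def query(self, x_lo, x_hi, base_pred):
        """Distinct colors of points with x in [x_lo, x_hi] and base_pred(y)."""
        lo = bisect.bisect_left(self.xs, x_lo)
        hi = bisect.bisect_right(self.xs, x_hi)
        return {c for (_, y, c) in self.points[lo:hi] if base_pred(y)}
```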
Dynamic Colored Orthogonal Range Searching
In the colored orthogonal range reporting problem, we want a data structure for storing n colored points so that given a query axis-aligned rectangle, we can report the distinct colors among the points inside the rectangle. This natural problem has been studied in a series of papers, but most prior work focused on the static case. In this paper, we give a dynamic data structure in the 2D case which can answer queries in O(log^{1+o(1)} n + k log^{1/2+o(1)} n) time, where k denotes the output size (the number of distinct colors in the query range), and which can support insertions and deletions in O(log^{2+o(1)} n) time (amortized) in the standard RAM model. This is the first fully dynamic structure with polylogarithmic update time whose query cost per color reported is sublogarithmic (near √(log n)). We also give an alternative data structure with O(log^{1+o(1)} n + k log^{3/4+o(1)} n) query time and O(log^{3/2+o(1)} n) update time (amortized). We also mention extensions to higher constant dimensions.
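The problem's interface can be pinned down with a naive baseline (linear-time query and constant-time updates; the paper's structure achieves the polylogarithmic bounds quoted above — the class name is illustrative):

```python
class NaiveColoredRanges:
    """Dynamic colored orthogonal range reporting, brute-force baseline."""

    def __init__(self):
        self.points = {}                  # (x, y) -> color

    def insert(self, x, y, color):
        self.points[(x, y)] = color

    def delete(self, x, y):
        self.points.pop((x, y), None)

    def query(self, x1, x2, y1, y2):
        """Distinct colors among points in [x1, x2] x [y1, y2]."""
        return {c for (x, y), c in self.points.items()
                if x1 <= x <= x2 and y1 <= y <= y2}
```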