    A New Lower Bound for Semigroup Orthogonal Range Searching

    We report the first improvement in the space-time trade-off of lower bounds for the orthogonal range searching problem in the semigroup model since Chazelle's result from 1990. This is one of the most fundamental problems in range searching, with a long history. Previously, Andrew Yao's influential result [Yao, 1982] had shown that the problem is already non-trivial in one dimension: using $m$ units of space, the query time $Q(n)$ must be $\Omega(\alpha(m,n) + \frac{n}{m-n+1})$, where $\alpha(\cdot,\cdot)$ is the inverse Ackermann function, a very slowly growing function. In $d$ dimensions, Bernard Chazelle [Chazelle, 1990] proved that the query time must be $Q(n) = \Omega((\log_\beta n)^{d-1})$, where $\beta = 2m/n$. Chazelle's lower bound is known to be tight when the space consumption is "high", i.e., $m = \Omega(n \log^{d+\varepsilon} n)$. We have two main results. The first is a lower bound showing that Chazelle's lower bound was not tight for "low space": we prove that we must have $m\,Q(n) = \Omega(n (\log n \log\log n)^{d-1})$. Our lower bound does not close the gap to the existing data structures; however, our second result is that our analysis is tight. Thus, we believe the gap is in fact natural, since lower bounds are proven for idempotent semigroups while the data structures are built for general semigroups and thus cannot assume (and use) the properties of an idempotent semigroup. As a result, we believe that to close the gap one must either study lower bounds for non-idempotent semigroups or build data structures for idempotent semigroups. We develop significantly new ideas for both of our results that could be useful in pursuing either of these directions.
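    To make the idempotency point above concrete, here is a minimal Python sketch (my own illustration, not from the paper) of a classic 1-D range searching structure for an idempotent semigroup: the sparse table for range minimum. Its O(1) query combines two overlapping canonical intervals, which is only sound because min is idempotent; for a general semigroup such as integer addition, the overlap would be double-counted, which is exactly why data structures for general semigroups cannot exploit this trick.

```python
class IdempotentRangeQuery:
    """Sparse table over an idempotent semigroup (here: min).

    Uses O(n log n) space and answers queries in O(1) by combining two
    *overlapping* dyadic blocks. The overlap is harmless only because the
    operation is idempotent (min(x, x) == x); a non-idempotent semigroup
    such as (Z, +) would double-count the overlapping elements.
    """

    def __init__(self, values, op=min):
        self.op = op
        n = len(values)
        levels = max(1, n.bit_length())
        self.table = [list(values)]
        for j in range(1, levels):
            half = 1 << (j - 1)
            prev = self.table[j - 1]
            row = [op(prev[i], prev[i + half]) for i in range(n - (1 << j) + 1)]
            self.table.append(row)

    def query(self, lo, hi):
        """Semigroup sum of values[lo:hi] (hi exclusive), assuming lo < hi."""
        j = (hi - lo).bit_length() - 1
        row = self.table[j]
        return self.op(row[lo], row[hi - (1 << j)])


if __name__ == "__main__":
    rq = IdempotentRangeQuery([5, 2, 8, 3, 9, 1, 7])
    print(rq.query(1, 5))  # min of [2, 8, 3, 9] -> 2
```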

    On the complexity of range searching among curves

    Modern tracking technology has made the collection of large numbers of densely sampled trajectories of moving objects widely available. We consider a fundamental problem encountered when analysing such data: given $n$ polygonal curves $S$ in $\mathbb{R}^d$, preprocess $S$ into a data structure that answers queries with a query curve $q$ and radius $\rho$ for the curves of $S$ that have Fréchet distance at most $\rho$ to $q$. We initiate a comprehensive analysis of the space/query-time trade-off for this data structuring problem. Our lower bounds imply that any data structure in the pointer model that achieves $Q(n) + O(k)$ query time, where $k$ is the output size, has to use roughly $\Omega((n/Q(n))^2)$ space in the worst case, even if queries are mere points (for the discrete Fréchet distance) or line segments (for the continuous Fréchet distance). More importantly, we show that more complex queries and input curves lead to additional logarithmic factors in the lower bound. Roughly speaking, the number of logarithmic factors added is linear in the number of edges added to the query and input curve complexity. This means that the space/query-time trade-off worsens by a factor exponential in the input and query complexity. This behaviour addresses an open question in the range searching literature: whether it is possible to avoid the additional logarithmic factors in the space and query time of a multilevel partition tree. We answer this question negatively. On the positive side, we show that we can build data structures for the Fréchet distance by using semialgebraic range searching. Our solution for the discrete Fréchet distance is in line with the lower bound, as the number of levels in the data structure is $O(t)$, where $t$ denotes the maximal number of vertices of a curve. For the continuous Fréchet distance, the number of levels increases to $O(t^2)$.
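    For reference, the query predicate underlying this problem can be stated as a short dynamic program for the discrete Fréchet distance. The sketch below (assuming curves given as lists of point tuples and Euclidean distance; the names are mine, not the paper's) computes the distance and uses it in a linear-scan baseline query, which is exactly the behaviour the data structures above are designed to beat.

```python
from math import dist  # Euclidean distance, Python 3.8+


def discrete_frechet(p, q):
    """Discrete Fréchet distance between polygonal curves p and q.

    Classic O(|p| * |q|) dynamic program; d[i][j] is the distance for the
    prefixes p[:i+1] and q[:j+1].
    """
    n, m = len(p), len(q)
    d = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            best_prev = 0.0
            if i > 0 and j > 0:
                best_prev = min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
            elif i > 0:
                best_prev = d[i - 1][j]
            elif j > 0:
                best_prev = d[i][j - 1]
            d[i][j] = max(best_prev, dist(p[i], q[j]))
    return d[n - 1][m - 1]


def in_range(query_curve, curves, rho):
    """Naive range query: report curves within discrete Fréchet distance rho.

    This is the linear-scan baseline; the point of the data structures in
    the paper is to beat it with sublinear query time at the cost of space.
    """
    return [c for c in curves if discrete_frechet(query_curve, c) <= rho]
```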

    Data Structure Lower Bounds for Document Indexing Problems

    We study data structure problems related to document indexing and pattern matching queries, and our main contribution is to show that the pointer machine model of computation can be extremely useful in proving high and unconditional lower bounds that cannot be obtained in any other known model of computation with the current techniques. Often our lower bounds match the known space/query-time trade-off curve, and in fact for all the problems considered there is a very good and reasonable match between our lower bounds and the known upper bounds, at least for some choice of input parameters. The problems that we consider are set intersection queries (both the reporting variant and the semigroup counting variant), indexing a set of documents for two-pattern queries, forbidden-pattern queries, or queries with wild-cards, and indexing an input set of gapped patterns (or two-patterns) to find those matching a document given at query time. Comment: Full version of the conference version that appeared at ICALP 2016, 25 pages.
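    As a toy illustration of the space/query-time trade-off for the set intersection reporting problem mentioned above (this is my own sketch, not a construction from the paper), the structure below supports either linear space with merge-on-query time, or precomputed pairwise intersections that answer in output-sensitive time at the cost of quadratic space.

```python
from itertools import combinations


class SetIntersectionIndex:
    """Toy two-extremes trade-off for set intersection reporting:
    'given indices i and j, report S_i ∩ S_j'. Not from the paper.
    """

    def __init__(self, sets, precompute=False):
        # Linear-space representation: each set stored once, sorted.
        self.sets = [sorted(s) for s in sets]
        # Quadratic-space representation: every pairwise intersection stored,
        # giving output-sensitive query time at a heavy space cost.
        self.cache = None
        if precompute:
            self.cache = {
                (i, j): sorted(set(a) & set(b))
                for (i, a), (j, b) in combinations(enumerate(sets), 2)
            }

    def query(self, i, j):
        if i == j:
            return list(self.sets[i])
        if i > j:
            i, j = j, i
        if self.cache is not None:
            return self.cache[(i, j)]
        # Merge-style intersection of two sorted lists: O(|S_i| + |S_j|) time.
        a, b = self.sets[i], self.sets[j]
        out, x, y = [], 0, 0
        while x < len(a) and y < len(b):
            if a[x] == b[y]:
                out.append(a[x]); x += 1; y += 1
            elif a[x] < b[y]:
                x += 1
            else:
                y += 1
        return out
```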

    Compressed Representations of Conjunctive Query Results

    Relational queries, and in particular join queries, often generate large output results when executed over a huge dataset. In such cases, it is often infeasible to store the whole materialized output if we plan to reuse it further down a data processing pipeline. Motivated by this problem, we study the construction of space-efficient compressed representations of the output of conjunctive queries, with the goal of supporting efficient access to the intermediate compressed result for a given access pattern. In particular, we initiate the study of an important tradeoff: minimizing the space necessary to store the compressed result versus minimizing the answer time and delay for an access request over the result. Our main contribution is a novel parameterized data structure, which can be tuned to trade off space for answer time. The tradeoff allows us to control the space requirement of the data structure precisely, and depends both on the structure of the query and on the access pattern. We show how we can use the data structure in conjunction with query decomposition techniques in order to efficiently represent the outputs for several classes of conjunctive queries. Comment: To appear in PODS'18; 35 pages; comments welcome.
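    A minimal sketch of the underlying idea, under my own simplifications and not the paper's parameterized structure: for the two-atom conjunctive query Q(x, y, z) :- R(x, y), S(y, z) with the access pattern "given x, enumerate all (y, z)", grouping the relations by their join attributes already yields a linear-space compressed representation from which answers are enumerated on demand instead of materializing the full join.

```python
from collections import defaultdict


def build_compressed_join(r, s):
    """Sketch: compressed representation of Q(x, y, z) :- R(x, y), S(y, z),
    keyed for the access pattern 'given x, enumerate all (y, z)'.

    Instead of materializing the (possibly quadratic-size) output, we store
    R grouped by x and S grouped by y; total space is linear in |R| + |S|.
    """
    r_by_x = defaultdict(list)   # x -> list of y with R(x, y)
    s_by_y = defaultdict(list)   # y -> list of z with S(y, z)
    for x, y in r:
        r_by_x[x].append(y)
    for y, z in s:
        s_by_y[y].append(z)
    return r_by_x, s_by_y


def enumerate_answers(rep, x):
    """Enumerate all (y, z) for the given x, one answer at a time.

    The delay between consecutive answers is bounded by the number of
    dangling y values (those with no S-partner); a real constant-delay
    structure would filter those out during preprocessing.
    """
    r_by_x, s_by_y = rep
    for y in r_by_x.get(x, []):
        for z in s_by_y.get(y, []):
            yield (y, z)


if __name__ == "__main__":
    R = [(1, "a"), (1, "b"), (2, "a")]
    S = [("a", 10), ("a", 20), ("b", 30)]
    rep = build_compressed_join(R, S)
    print(list(enumerate_answers(rep, 1)))  # [('a', 10), ('a', 20), ('b', 30)]
```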

    Laser Vaporization Methods for the Synthesis of Metal and Semiconductor Nanoparticles; Graphene, Doped Graphene and Nanoparticles Supported on Graphene

    The major objective of the research described in this dissertation is the development of new laser vaporization methods for the synthesis of metal and semiconductor nanoparticles, graphene, B- and N-doped graphene, and metal and semiconductor nanoparticles supported on graphene. These methods include the Laser Vaporization Controlled Condensation (LVCC) approach, which has been used in this work for the synthesis of: (1) gold nanoparticles supported on ceria and zirconia nanoparticles for the low-temperature oxidation of carbon monoxide, and (2) graphene, boron- and nitrogen-doped graphene, hydrogen-terminated graphene (HTG), metal nanoparticles supported on graphene, and graphene quantum dots. The gold nanoparticles supported on ceria prepared by the LVCC method exhibit high activity for CO oxidation, with a 100% conversion of CO to CO2 at about 60 °C. The first application of the LVCC method for the synthesis of these graphene and graphene-based nanomaterials is reported in this dissertation. Complete characterizations of the graphene-based nanomaterials were carried out using a variety of spectroscopic, X-ray diffraction, mass spectrometric, and microscopic techniques, including Raman, FTIR, UV-Vis, PL, XRD, XPS, TOF-MS, and TEM. The application of B- and N-doped graphene as catalysts for the oxygen reduction reaction in fuel cell applications is reported, as is the application of Pd nanoparticles supported on graphene for the Suzuki carbon-carbon cross-coupling reaction. A new method is described for the synthesis of graphene quantum dots based on the combination of the LVCC method with oxidation/reduction sequences in solution. The N-doped graphene quantum dots emit strong blue luminescence, which can be tuned to produce different emission colors that could be used in biomedical imaging and other optoelectronic applications. The second method used in the research described in this dissertation is based on the Laser Vaporization Solvent Capturing (LVSC) approach, which has been introduced and developed, for the first time, for the synthesis of solvent-capped semiconductor and metal oxide nanoparticles. The method has been demonstrated for the synthesis of V, Mo, and W oxide nanoparticles capped by different solvent molecules such as acetonitrile and methanol. The LVSC method has also been applied to the synthesis of Si nanocrystals capped by acetonitrile clusters. The acetonitrile-capped Si nanocrystals exhibit strong emissions, which depend on the excitation wavelength and indicate the presence of Si quantum dots with different sizes. The Si and metal oxide nanoparticles prepared by the LVSC method have been incorporated into graphene in order to synthesize graphene nanosheets with properties that are tunable through graphene-nanoparticle interactions.