
    Efficient computation of partition of unity interpolants through a block-based searching technique

    In this paper we propose a new efficient interpolation tool that is particularly well suited to large scattered data sets. The partition of unity method is used and performed by blending Radial Basis Functions (RBFs) as local approximants and using locally supported weight functions. In particular, we present a new space-partitioning data structure based on a partition of the underlying generic domain into blocks. This approach allows us to examine only a reduced number of blocks when searching for the nearest neighbour points, leading to an optimized searching routine. Complexity analysis and numerical experiments in two- and three-dimensional interpolation support our findings. Some applications to geometric modelling are also considered. Moreover, the associated software package, written in Matlab, is discussed here and made available to the scientific community.
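    To make the block-based search concrete, here is a minimal Python sketch of the idea (illustrative only, not the authors' Matlab package; all names are ours): the domain is cut into blocks whose side equals the search radius, so every point within that radius of a query lies in the query's block or one of its eight neighbours in 2D, and the search never scans the full data set.

```python
# Minimal sketch of block-based nearest-neighbour searching (2D).
# Assumption (ours): blocks have side equal to the search radius, so a
# radius query only needs the 3x3 block neighbourhood of the query.
import numpy as np
from collections import defaultdict

def build_blocks(points, radius):
    """Hash each point into the integer index of its block."""
    blocks = defaultdict(list)
    for i, p in enumerate(points):
        blocks[tuple(np.floor(p / radius).astype(int))].append(i)
    return blocks

def points_near(query, points, blocks, radius):
    """Indices of all points within `radius` of `query`, found by
    examining only the query's block and its 8 neighbours."""
    bx, by = np.floor(query / radius).astype(int)
    candidates = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            candidates.extend(blocks.get((bx + dx, by + dy), []))
    dists = np.linalg.norm(points[candidates] - query, axis=1)
    return [i for i, d in zip(candidates, dists) if d <= radius]

rng = np.random.default_rng(0)
pts = rng.random((10_000, 2))
blocks = build_blocks(pts, radius=0.05)
print(len(points_near(np.array([0.5, 0.5]), pts, blocks, radius=0.05)))
```

    In a partition of unity interpolant these local neighbour sets feed the RBF fits on each subdomain, which are then blended with the compactly supported weights; the block structure keeps the per-query search cost independent of the global number of points.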

    On Range Searching with Semialgebraic Sets II

    Let $P$ be a set of $n$ points in $\mathbb{R}^d$. We present a linear-size data structure for answering range queries on $P$ with constant-complexity semialgebraic sets as ranges, in time close to $O(n^{1-1/d})$. It essentially matches the performance of similar structures for simplex range searching and, for $d \ge 5$, significantly improves earlier solutions by the first two authors obtained in 1994. This almost settles a long-standing open problem in range searching. The data structure is based on the polynomial-partitioning technique of Guth and Katz [arXiv:1011.4105], which shows that for a parameter $r$, $1 < r \le n$, there exists a $d$-variate polynomial $f$ of degree $O(r^{1/d})$ such that each connected component of $\mathbb{R}^d \setminus Z(f)$ contains at most $n/r$ points of $P$, where $Z(f)$ is the zero set of $f$. We present an efficient randomized algorithm for computing such a polynomial partition, which is of independent interest and is likely to have additional applications.
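    The degree bound in the partition theorem is, at heart, a dimension count: a $d$-variate polynomial of degree $D$ has $\binom{D+d}{d}-1$ degrees of freedom, so degree $D = O(r^{1/d})$ already supplies the roughly $r$ constraints that the iterated ham-sandwich argument consumes. The toy computation below (ours, purely illustrative, not from the paper) checks this growth numerically.

```python
# Illustrative only: the least degree D whose coefficient count
# binom(D + d, d) - 1 reaches r grows like r**(1/d), matching the
# O(r^{1/d}) degree bound on the partitioning polynomial.
from math import comb

def partition_degree(d, r):
    D = 1
    while comb(D + d, d) - 1 < r:  # degrees of freedom of a degree-D polynomial
        D += 1
    return D

for d in (2, 3, 5):
    print([(r, partition_degree(d, r), round(r ** (1 / d), 1))
           for r in (10, 100, 1000)])
```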

    Massively Parallel Sort-Merge Joins in Main Memory Multi-Core Database Systems

    Two emerging hardware trends will dominate database system technology in the near future: increasing main memory capacities of several TB per server and massively parallel multi-core processing. Many algorithmic and control techniques in current database technology were devised for disk-based systems where I/O dominated the performance. In this work we take a new look at the well-known sort-merge join which, so far, has not been a focus of research in scalable massively parallel multi-core data processing, as it was deemed inferior to hash joins. We devise a suite of new massively parallel sort-merge (MPSM) join algorithms that are based on partial partition-based sorting. Contrary to classical sort-merge joins, our MPSM algorithms do not rely on a hard-to-parallelize final merge step to create one complete sort order. Rather, they work on the independently created runs in parallel. This way our MPSM algorithms are NUMA-affine, as all the sorting is carried out on local memory partitions. An extensive experimental evaluation on a modern 32-core machine with one TB of main memory demonstrates the competitive performance of MPSM on large main memory databases with billions of objects. It scales (almost) linearly in the number of employed cores and clearly outperforms competing hash join proposals; in particular, it outperforms the "cutting-edge" Vectorwise parallel query engine by a factor of four.
    Comment: VLDB201
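    The shape of the MPSM algorithm can be sketched in a few lines. The toy Python simulation below is ours and makes strong simplifying assumptions (threads instead of NUMA-pinned workers, uniform integer keys in a known range, tiny inputs); it shows only the control flow: each worker sorts a private run of R with no global merge, S is range-partitioned and sorted locally, and every worker merge-joins its S partition against all R runs.

```python
# Toy simulation of the MPSM control flow (assumptions ours: threads,
# uniform keys in [0, 1000), no NUMA placement).
from concurrent.futures import ThreadPoolExecutor
import random

WORKERS = 4
R = [(random.randrange(1000), f"r{i}") for i in range(8000)]
S = [(random.randrange(1000), f"s{i}") for i in range(8000)]

# Phase 1: each worker sorts its private chunk of R into an independent
# run; there is no hard-to-parallelize global merge step.
runs = [sorted(R[w::WORKERS]) for w in range(WORKERS)]

# Phase 2: range-partition S by key so each worker owns a disjoint key
# range, then sort each partition locally.
parts = [sorted(t for t in S if t[0] * WORKERS // 1000 == w)
         for w in range(WORKERS)]

def merge_join(run, s_part):
    """Merge-join two key-sorted lists of (key, payload) pairs."""
    out, i = [], 0
    for sk, sv in s_part:
        while i < len(run) and run[i][0] < sk:
            i += 1
        j = i
        while j < len(run) and run[j][0] == sk:
            out.append((sk, run[j][1], sv))
            j += 1
    return out

# Phase 3: every worker joins its own S partition against all R runs.
def worker(w):
    return [t for run in runs for t in merge_join(run, parts[w])]

with ThreadPoolExecutor(WORKERS) as pool:
    result = [t for chunk in pool.map(worker, range(WORKERS)) for t in chunk]
print(len(result))
```

    Because each run and each S partition is sorted independently, the heavy work stays on worker-local data; in the real algorithm this is exactly what makes the join NUMA-affine.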