
    Empirical Evaluation of the Parallel Distribution Sweeping Framework on Multicore Architectures

    In this paper, we perform an empirical evaluation of the Parallel External Memory (PEM) model in the context of geometric problems. In particular, we implement the parallel distribution sweeping framework of Ajwani, Sitchinava and Zeh to solve the batched 1-dimensional stabbing-max problem. While modern processors feature sophisticated memory systems (multiple levels of caches, set associativity, TLB, prefetching), we empirically show that algorithms designed in simple models that focus on minimizing the I/O transfers between shared memory and a single level of cache can lead to efficient software on current multicore architectures. Our implementation exhibits significantly fewer accesses to slow DRAM and, therefore, outperforms traditional approaches based on plane sweep and two-way divide and conquer. Comment: longer version of an ESA'13 paper
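    A minimal sequential sketch of the batched 1-dimensional stabbing-max problem the paper targets: given weighted intervals and a batch of query points, report for each query the maximum weight over all intervals containing it. This is only the textbook single-core sweep baseline, not the paper's PEM distribution-sweeping algorithm; all names and types below are illustrative.

    #include <algorithm>
    #include <cstddef>
    #include <limits>
    #include <set>
    #include <vector>

    // Sequential sweep baseline for batched 1-d stabbing-max: sort interval
    // endpoints and queries, sweep left to right, and keep the weights of the
    // currently open intervals in a multiset.
    struct Interval { double lo, hi, weight; };

    std::vector<double> stabbing_max(const std::vector<Interval>& ivals,
                                     const std::vector<double>& queries) {
      // Tie-breaking order at equal x: start < query < end, so closed
      // intervals that touch a query point are counted as stabbing it.
      struct Event { double x; int kind; std::size_t idx; };  // kind: 0=start, 1=query, 2=end
      std::vector<Event> ev;
      for (std::size_t i = 0; i < ivals.size(); ++i) {
        ev.push_back({ivals[i].lo, 0, i});
        ev.push_back({ivals[i].hi, 2, i});
      }
      for (std::size_t q = 0; q < queries.size(); ++q) ev.push_back({queries[q], 1, q});
      std::sort(ev.begin(), ev.end(), [](const Event& a, const Event& b) {
        return a.x < b.x || (a.x == b.x && a.kind < b.kind);
      });

      std::multiset<double> active;  // weights of intervals currently open
      std::vector<double> ans(queries.size(), -std::numeric_limits<double>::infinity());
      for (const Event& e : ev) {
        if (e.kind == 0)      active.insert(ivals[e.idx].weight);
        else if (e.kind == 2) active.erase(active.find(ivals[e.idx].weight));
        else if (!active.empty()) ans[e.idx] = *active.rbegin();  // current maximum
      }
      return ans;  // -infinity marks queries stabbed by no interval
    }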

    OpenACC Based GPU Parallelization of Plane Sweep Algorithm for Geometric Intersection

    Line segment intersection is one of the elementary operations in computational geometry. Complex problems in Geographic Information Systems (GIS), such as computing map overlays or spatial joins over polygonal data, require solving segment intersection. The plane-sweep paradigm is used to find geometric intersections efficiently. However, it is difficult to parallelize due to its in-order processing of spatial events. We present a new fine-grained parallel algorithm for geometric intersection and its CPU and GPU implementations using OpenMP and OpenACC. To the best of our knowledge, this is the first work demonstrating an effective parallelization of plane sweep on GPUs. We chose a compiler-directive-based approach for the implementation because of the simplicity of parallelizing sequential code. Using an Nvidia Tesla P100 GPU, our implementation achieves around 40X speedup for the line segment intersection problem on 40K and 80K data sets compared to the sequential CGAL library
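    To illustrate the compiler-directive style the abstract refers to, the following sketch parallelizes a brute-force all-pairs segment intersection count with a single OpenMP pragma (an OpenACC variant would use an analogous directive). It is deliberately not the paper's fine-grained parallel plane sweep, and the predicates and names are assumptions made for the sketch.

    #include <cstddef>
    #include <vector>

    // Illustrative only: O(n^2) intersection counting parallelized with a
    // directive. The paper parallelizes a plane sweep, which is considerably
    // more involved; this shows the directive-based programming style.
    struct Point { double x, y; };
    struct Segment { Point a, b; };

    static int orient(Point p, Point q, Point r) {
      double d = (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x);
      return (d > 0) - (d < 0);  // +1 left turn, -1 right turn, 0 collinear
    }

    static bool properly_intersect(const Segment& s, const Segment& t) {
      // Proper crossing test via orientation predicates
      // (collinear-overlap cases omitted for brevity).
      return orient(s.a, s.b, t.a) * orient(s.a, s.b, t.b) < 0 &&
             orient(t.a, t.b, s.a) * orient(t.a, t.b, s.b) < 0;
    }

    long long count_intersections(const std::vector<Segment>& segs) {
      long long total = 0;
      const std::ptrdiff_t n = static_cast<std::ptrdiff_t>(segs.size());
      #pragma omp parallel for reduction(+ : total) schedule(dynamic)
      for (std::ptrdiff_t i = 0; i < n; ++i)
        for (std::ptrdiff_t j = i + 1; j < n; ++j)
          if (properly_intersect(segs[i], segs[j])) ++total;
      return total;
    }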

    History navigation in location-based mobile systems

    The aim of this paper is to provide an overview and comparison of concepts that have been proposed to guide users through interaction histories (e.g. in web browsers). The goal is to gain insights into history design that can inform an interaction history for the location-based Tourist Information Provider (TIP) system [8]. The TIP system consists of several services that interact on a mobile device.

    Space-Efficient DFS and Applications: Simpler, Leaner, Faster

    The problem of space-efficient depth-first search (DFS) is reconsidered. A particularly simple and fast algorithm is presented that, on a directed or undirected input graph $G=(V,E)$ with $n$ vertices and $m$ edges, carries out a DFS in $O(n+m)$ time with $n+\sum_{v\in V_{\ge 3}}\lceil\log_2(d_v-1)\rceil+O(\log n)\le n+m+O(\log n)$ bits of working memory, where $d_v$ is the (total) degree of $v$, for each $v\in V$, and $V_{\ge 3}=\{v\in V\mid d_v\ge 3\}$. A slightly more complicated variant of the algorithm works in the same time with at most $n+(4/5)m+O(\log n)$ bits. It is also shown that a DFS can be carried out in a graph with $n$ vertices and $m$ edges in $O(n+m\log^* n)$ time with $O(n)$ bits, or in $O(n+m)$ time with either $O(n\log\log(4+m/n))$ bits or, for arbitrary integer $k\ge 1$, $O(n\log^{(k)} n)$ bits. These results among them subsume or improve most earlier results on space-efficient DFS. Some of the new time and space bounds are shown to extend to applications of DFS such as the computation of cut vertices, bridges, biconnected components and 2-edge-connected components in undirected graphs.
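    As a point of reference for the space bounds quoted above, here is a conventional iterative DFS over a graph in CSR form: the explicit stack and per-vertex bookkeeping cost Theta(n log n) bits in the worst case, which is the working memory that the space-efficient algorithms push down towards roughly n + m bits. This is a baseline sketch under assumed types, not the paper's algorithm.

    #include <cstdint>
    #include <utility>
    #include <vector>

    // Baseline DFS: per-vertex visited flags (n bits) plus a stack of
    // (vertex, adjacency cursor) pairs, which can hold Theta(n) entries of
    // Theta(log n) bits each.
    struct Graph {
      std::vector<std::uint32_t> head;  // size n+1: adjacency offsets
      std::vector<std::uint32_t> adj;   // neighbor lists, concatenated
    };

    std::vector<std::uint32_t> dfs_preorder(const Graph& g, std::uint32_t root) {
      const std::uint32_t n = static_cast<std::uint32_t>(g.head.size()) - 1;
      std::vector<bool> visited(n, false);
      std::vector<std::pair<std::uint32_t, std::uint32_t>> stack;  // (vertex, next edge)
      std::vector<std::uint32_t> order;

      visited[root] = true;
      order.push_back(root);
      stack.push_back({root, g.head[root]});
      while (!stack.empty()) {
        auto& [v, it] = stack.back();
        if (it == g.head[v + 1]) { stack.pop_back(); continue; }  // v fully explored
        std::uint32_t w = g.adj[it++];
        if (!visited[w]) {
          visited[w] = true;
          order.push_back(w);
          stack.push_back({w, g.head[w]});
        }
      }
      return order;  // vertices reachable from root, in preorder
    }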

    Optimal randomized incremental construction for guaranteed logarithmic planar point location

    Given a planar map of $n$ segments in which we wish to efficiently locate points, we present the first randomized incremental construction of the well-known trapezoidal-map search structure that requires only expected $O(n\log n)$ preprocessing time while deterministically guaranteeing worst-case linear storage space and worst-case logarithmic query time. This settles a long-standing open problem; the best previously known construction time of such a structure, which is based on a directed acyclic graph, the so-called history DAG, with the above worst-case space and query-time guarantees, was expected $O(n\log^2 n)$. The result is based on a deeper understanding of the structure of the history DAG, its depth in relation to the length of its longest search path, as well as its correspondence to the trapezoidal search tree. Our results immediately extend to planar maps induced by finite collections of pairwise interior-disjoint well-behaved curves. Comment: The article significantly extends the theoretical aspects of the work presented in http://arxiv.org/abs/1205.543
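    To make the structure under discussion concrete, the following is a hedged sketch of a query in a trapezoidal-map search DAG: internal x-nodes compare the query against a segment endpoint, internal y-nodes test whether the query lies above or below a segment, and leaves are trapezoids. The construction, which is the paper's actual contribution and what bounds the DAG's depth, is omitted; all types are illustrative.

    #include <memory>

    // Query side of a trapezoidal-map search DAG (construction omitted).
    struct Point { double x, y; };
    struct Segment { Point left, right; };  // assumes left.x < right.x

    struct Node {
      enum Kind { X_NODE, Y_NODE, LEAF } kind;
      Point   point;               // used by X_NODE
      Segment segment;             // used by Y_NODE
      int     trapezoid_id = -1;   // used by LEAF
      std::shared_ptr<Node> left, right;  // shared: the structure is a DAG, not a tree
    };

    static bool above(const Segment& s, const Point& q) {
      // true iff q lies above the supporting line of s
      double d = (s.right.x - s.left.x) * (q.y - s.left.y) -
                 (s.right.y - s.left.y) * (q.x - s.left.x);
      return d > 0;
    }

    // Locate the trapezoid containing q; the cost is the length of the
    // root-to-leaf path, which the construction keeps logarithmic.
    int locate(const std::shared_ptr<Node>& root, const Point& q) {
      const Node* n = root.get();
      while (n->kind != Node::LEAF) {
        if (n->kind == Node::X_NODE)
          n = (q.x < n->point.x) ? n->left.get() : n->right.get();
        else
          n = above(n->segment, q) ? n->left.get() : n->right.get();
      }
      return n->trapezoid_id;
    }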

    Improved Implementation of Point Location in General Two-Dimensional Subdivisions

    We present a major revamp of the point-location data structure for general two-dimensional subdivisions via randomized incremental construction, implemented in CGAL, the Computational Geometry Algorithms Library. We can now guarantee that the constructed directed acyclic graph G is of linear size and provides logarithmic query time. Via the construction of the Voronoi diagram for a given point set S of size n, this also enables nearest-neighbor queries in guaranteed O(log n) time. Another major innovation is the support of general unbounded subdivisions as well as subdivisions of two-dimensional parametric surfaces such as spheres, tori, and cylinders. The implementation is exact, complete, and general, i.e., it can also handle non-linear subdivisions. Like the previous version, the data structure supports modifications of the subdivision, such as insertions and deletions of edges, after the initial preprocessing. A major challenge is to retain the expected O(n log n) preprocessing time while providing the above (deterministic) space and query-time guarantees. We describe an efficient preprocessing algorithm, which explicitly verifies the length L of the longest query path in O(n log n) time. However, instead of using L, our implementation is based on the depth D of G. Although we prove that the worst-case ratio of D and L is Theta(n/log n), we conjecture, based on our experimental results, that this solution achieves expected O(n log n) preprocessing time. Comment: 21 pages
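    As a usage-level illustration only, the sketch below queries the trapezoidal RIC point-location structure in CGAL's 2D Arrangements package. Exact header names, the insertion helper, and the variant type returned by locate() differ between CGAL releases, so the details are assumptions to be adapted to the installed version.

    // Hedged sketch: querying CGAL's RIC (trapezoidal-map) point location.
    // Headers and the exact locate() result type vary across CGAL releases.
    #include <CGAL/Exact_predicates_exact_constructions_kernel.h>
    #include <CGAL/Arr_segment_traits_2.h>
    #include <CGAL/Arrangement_2.h>
    #include <CGAL/Arr_trapezoid_ric_point_location.h>

    typedef CGAL::Exact_predicates_exact_constructions_kernel Kernel;
    typedef CGAL::Arr_segment_traits_2<Kernel>                Traits;
    typedef Traits::Point_2                                   Point;
    typedef Traits::X_monotone_curve_2                        Segment;
    typedef CGAL::Arrangement_2<Traits>                       Arrangement;

    int main() {
      Arrangement arr;
      // Build a tiny subdivision; CGAL::insert() handles intersecting input.
      CGAL::insert(arr, Segment(Point(0, 0), Point(4, 0)));
      CGAL::insert(arr, Segment(Point(0, 0), Point(0, 4)));
      CGAL::insert(arr, Segment(Point(0, 4), Point(4, 0)));

      // Attach the RIC point-location structure; this is the search structure
      // whose guaranteed size and query-time bounds the paper discusses.
      CGAL::Arr_trapezoid_ric_point_location<Arrangement> pl(arr);

      // locate() returns a discriminated union over vertex / halfedge / face
      // handles (boost::variant or std::variant depending on the version).
      auto result = pl.locate(Point(1, 1));
      (void)result;
      return 0;
    }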