gScan: Accelerating Graham Scan on the GPU
This paper presents a fast implementation of the Graham scan on the GPU. The proposed algorithm is composed of two stages: (1) two rounds of preprocessing performed on the GPU and (2) the finalization of finding the convex hull on the CPU. We first discard the interior points that lie inside a quadrilateral formed by four extreme points, sort the remaining points by angle, and then divide them into the left and the right regions. For each region, we perform a second round of filtering using the proposed preprocessing approach to further discard interior points. We finally obtain the expected convex hull by calculating the convex hull of the remaining points on the CPU. We directly employ the parallel sorting, reduction, and partitioning provided by the Thrust library for better efficiency and simplicity. Experimental results show that our implementation achieves 6x to 7x speedups over the Qhull implementation for 20M points.
Comment: arXiv admin note: text overlap with arXiv:1508.0548
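To make the first preprocessing round concrete: a point strictly inside the quadrilateral spanned by the four extreme points (minimum and maximum x and y) can never appear on the convex hull, so it can be dropped before sorting. Below is a minimal host-side C++ sketch of that filter; all names are illustrative, and the paper itself performs the equivalent step on the GPU with Thrust primitives rather than the standard library.

```cpp
// Hypothetical sketch of the extreme-point interior filter. Points
// strictly inside the convex quadrilateral spanned by the four extreme
// points cannot lie on the convex hull and are discarded.
#include <algorithm>
#include <vector>

struct Pt { double x, y; };

// Cross product (b - a) x (c - a); > 0 means c lies left of ray a->b.
static double cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if p lies strictly inside the convex quadrilateral q[0..3]
// (vertices given in counter-clockwise order).
static bool strictlyInside(const Pt q[4], const Pt& p) {
    for (int i = 0; i < 4; ++i)
        if (cross(q[i], q[(i + 1) % 4], p) <= 0.0) return false;
    return true;
}

// Removes interior points in place; the survivors are the only
// remaining candidates for the convex hull.
void discardInterior(std::vector<Pt>& pts, const Pt quad[4]) {
    pts.erase(std::remove_if(pts.begin(), pts.end(),
                             [&](const Pt& p) { return strictlyInside(quad, p); }),
              pts.end());
}
```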
CudaChain: A Practical GPU-accelerated 2D Convex Hull Algorithm
This paper presents a practical GPU-accelerated convex hull algorithm and a
novel Sorting-based Preprocessing Approach (SPA) for planar point sets. The
proposed algorithm consists of two stages: (1) two rounds of preprocessing
performed on the GPU and (2) the finalization of calculating the expected
convex hull on the CPU. We first discard the interior points that lie inside a quadrilateral formed by four extreme points, and then distribute the remaining points into several (typically four) sub-regions. For each subset of points, we first sort them in parallel, then perform the second round of discarding using SPA, and finally form a simple chain from the remaining points. A simple polygon can then be generated by directly connecting the chains of all sub-regions. We finally obtain the expected convex hull of the input points by calculating the convex hull of this simple polygon. We use the Thrust library to realize the parallel sorting, reduction, and partitioning for better efficiency and simplicity. Experimental results show that our algorithm achieves 5x to 6x speedups over the Qhull implementation for 20M points. The algorithm is thus competitive in practical applications for its simplicity and satisfactory efficiency.
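The distribution step lends itself to a similar sketch: every point that survives the interior test lies on or outside at least one edge of the extreme-point quadrilateral, so it can be assigned to the sub-region of the first such edge. The C++ illustration below uses names of our own choosing; the paper performs the distribution on the GPU (e.g. via Thrust partitioning).

```cpp
// Hypothetical sketch of distributing surviving points into four
// sub-regions, one per edge of the extreme-point quadrilateral.
#include <array>
#include <vector>

struct Pt { double x, y; };

static double cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// quad holds the four extreme points in counter-clockwise order.
// A survivor lies on or outside at least one edge (cross <= 0) and is
// assigned to the sub-region of the first such edge.
std::array<std::vector<Pt>, 4>
distribute(const std::vector<Pt>& survivors, const Pt quad[4]) {
    std::array<std::vector<Pt>, 4> regions;
    for (const Pt& p : survivors) {
        for (int e = 0; e < 4; ++e) {
            if (cross(quad[e], quad[(e + 1) % 4], p) <= 0.0) {
                regions[e].push_back(p);
                break;
            }
        }
    }
    return regions;
}
```

Sorting each sub-region and reducing it to a chain then allows the four chains to be concatenated into the simple polygon described above.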
A Novel Implementation of QuickHull Algorithm on the GPU
We present a novel GPU-accelerated implementation of the QuickHull algorithm for calculating convex hulls of planar point sets. We also describe a practical solution demonstrating how to efficiently implement a typical divide-and-conquer algorithm on the GPU. We make heavy use of the parallel primitives provided by the Thrust library, such as the parallel segmented scan, for better efficiency and simplicity. To evaluate the performance of our implementation, we carry out four groups of experimental tests using two groups of point sets in two modes on an NVIDIA Tesla K20c GPU. Experimental results indicate that our implementation achieves speedups of up to 10.98x over the state-of-the-art CPU-based convex hull implementation Qhull [16]. In addition, our implementation can find the convex hull of 20M points in about 0.2 seconds.
Comment: 10 pages, 5 figures
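For readers unfamiliar with the underlying method, a compact sequential QuickHull conveys the divide-and-conquer structure that the paper maps onto segmented scans. The sketch below is ours, not the paper's implementation: it recursively keeps the points strictly left of a directed segment and splits the problem at the farthest one.

```cpp
// Minimal sequential QuickHull sketch (one side of a segment).
#include <vector>

struct Pt { double x, y; };

static double cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Appends the hull vertices strictly left of segment a->b to 'out',
// in order, excluding a and b themselves.
static void quickHull(const std::vector<Pt>& pts, Pt a, Pt b,
                      std::vector<Pt>& out) {
    int farthest = -1;
    double best = 0.0;
    std::vector<Pt> left;
    for (const Pt& p : pts) {
        double d = cross(a, b, p);
        if (d > 0.0) {                  // p is strictly left of a->b
            left.push_back(p);
            if (d > best) { best = d; farthest = (int)left.size() - 1; }
        }
    }
    if (farthest < 0) return;           // nothing outside: a->b is a hull edge
    Pt f = left[farthest];              // farthest point splits the problem
    quickHull(left, a, f, out);         // recurse on points left of a->f
    out.push_back(f);
    quickHull(left, f, b, out);         // recurse on points left of f->b
}
```

Calling quickHull once with the leftmost and rightmost points and once with the two arguments swapped, then concatenating the results with the two extremes, yields the full hull.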
Plane-Sweep Incremental Algorithm: Computing Delaunay Tessellations of Large Datasets
We present the plane-sweep incremental algorithm, a hybrid approach for
computing Delaunay tessellations of large point sets whose size exceeds the
computer's main memory. This approach unites the simplicity of the incremental
algorithms with the comparatively low memory requirements of plane-sweep
approaches. The procedure is to first sort the point set along the first
principal component and then to sequentially insert the points into the
tessellation, essentially simulating a sweeping plane. The part of the
tessellation that has been passed by the sweeping plane can be evicted from
memory and written to disk, limiting the memory requirement of the program to
the "thickness" of the data set along its first principal component. We
implemented the algorithm and used it to compute the Delaunay tessellation and
Voronoi partition of the Sloan Digital Sky Survey magnitude space consisting of
287 million points.
Comment: Technical Report from 200
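The sweep order itself is easy to reproduce. Below is a hedged sketch of that first step, with a fixed dimension of three and all names our own (the survey data may have a different dimension): estimate the dominant eigenvector of the covariance matrix by power iteration, then sort the points by their projection onto it.

```cpp
// Sketch: order points along the first principal component, as a
// preparation for the sweep. Dimension fixed at 3 for illustration.
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

// Dominant eigenvector of the covariance matrix via power iteration.
static Vec3 firstPrincipalComponent(const std::vector<Vec3>& pts) {
    Vec3 mean{0, 0, 0};
    for (const Vec3& p : pts)
        for (int i = 0; i < 3; ++i) mean[i] += p[i] / pts.size();
    double cov[3][3] = {};
    for (const Vec3& p : pts)
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                cov[i][j] += (p[i] - mean[i]) * (p[j] - mean[j]);
    Vec3 v{1, 1, 1};
    for (int it = 0; it < 100; ++it) {
        Vec3 w{0, 0, 0};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) w[i] += cov[i][j] * v[j];
        double n = std::sqrt(w[0]*w[0] + w[1]*w[1] + w[2]*w[2]);
        if (n == 0.0) break;            // degenerate data: keep current v
        for (int i = 0; i < 3; ++i) v[i] = w[i] / n;
    }
    return v;
}

// Sorting by projection onto the principal axis simulates the sweep
// order; points are then inserted one by one and finalized parts of
// the tessellation behind the sweep plane can be evicted to disk.
void sortForSweep(std::vector<Vec3>& pts) {
    Vec3 axis = firstPrincipalComponent(pts);
    auto proj = [&](const Vec3& p) {
        return p[0]*axis[0] + p[1]*axis[1] + p[2]*axis[2];
    };
    std::sort(pts.begin(), pts.end(), [&](const Vec3& a, const Vec3& b) {
        return proj(a) < proj(b);
    });
}
```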
Beneath-and-Beyond revisited
It is shown how the Beneath-and-Beyond algorithm can be used to yield another
proof of the equivalence of V- and H-representations of convex polytopes. In
this sense, the paper serves as a sketch of an introduction to polytope theory with a focus on algorithmic aspects. Moreover, computational results are presented comparing Beneath-and-Beyond to other convex hull implementations.
Comment: 21 pages, 2 figures; v2: added the bibliography, which was erroneously omitted in v1
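In the plane, the beneath-and-beyond idea is easy to visualize: when points are inserted in lexicographic order, each new point lies "beyond" a contiguous run of hull edges, which are deleted and replaced by two edges through the new point. This specializes to Andrew's monotone-chain construction, sketched below (our code; the paper treats arbitrary dimension):

```cpp
// Planar beneath-and-beyond via Andrew's monotone chain. Each new
// point pops the edges it is beyond, then attaches itself.
#include <algorithm>
#include <vector>

struct Pt { double x, y; };

static double cross(const Pt& o, const Pt& a, const Pt& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Returns the hull vertices in counter-clockwise order.
std::vector<Pt> convexHull(std::vector<Pt> p) {
    if (p.size() < 3) return p;
    std::sort(p.begin(), p.end(), [](const Pt& a, const Pt& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    std::vector<Pt> h(2 * p.size());
    size_t k = 0;
    for (size_t i = 0; i < p.size(); ++i) {              // lower chain
        // Pop edges the new point lies beyond (right turn or collinear).
        while (k >= 2 && cross(h[k - 2], h[k - 1], p[i]) <= 0) --k;
        h[k++] = p[i];
    }
    for (size_t i = p.size() - 1, t = k + 1; i-- > 0;) { // upper chain
        while (k >= t && cross(h[k - 2], h[k - 1], p[i]) <= 0) --k;
        h[k++] = p[i];
    }
    h.resize(k - 1);                                     // last vertex == first
    return h;
}
```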
A Note on Experiments and Software For Multidimensional Order Statistics
In this note we describe experiments on an implementation of two methods
proposed in the literature for computing regions that correspond to a notion of
order statistics for multidimensional data. Our implementation, which works for any dimension greater than one, is the only one we know of to be publicly available. Experiments run using the software confirm that half-space peeling generally gives better results than directly peeling convex hulls, but at a computational cost.
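For orientation, the convex-hull peeling baseline that the note compares against can be sketched directly: repeatedly compute the hull of the remaining points and strip its vertices, so the k-th iteration yields the k-th depth layer. The code below is our illustration only; the half-space peeling variant that the note finds superior replaces the hull step with repeated half-plane cuts.

```cpp
// Convex-hull ("onion") peeling: strip hull layers one at a time.
#include <algorithm>
#include <vector>

struct Pt { double x, y; };

static double cross(const Pt& o, const Pt& a, const Pt& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Andrew's monotone chain, CCW hull vertices.
static std::vector<Pt> hull(std::vector<Pt> p) {
    if (p.size() < 3) return p;
    std::sort(p.begin(), p.end(), [](const Pt& a, const Pt& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    std::vector<Pt> h(2 * p.size());
    size_t k = 0;
    for (size_t i = 0; i < p.size(); ++i) {
        while (k >= 2 && cross(h[k - 2], h[k - 1], p[i]) <= 0) --k;
        h[k++] = p[i];
    }
    for (size_t i = p.size() - 1, t = k + 1; i-- > 0;) {
        while (k >= t && cross(h[k - 2], h[k - 1], p[i]) <= 0) --k;
        h[k++] = p[i];
    }
    h.resize(k - 1);
    return h;
}

// Peels the set into convex layers, outermost first.
std::vector<std::vector<Pt>> convexLayers(std::vector<Pt> pts) {
    std::vector<std::vector<Pt>> layers;
    while (!pts.empty()) {
        std::vector<Pt> hv = hull(pts);
        layers.push_back(hv);
        std::vector<Pt> rest;                // drop this layer's vertices
        for (const Pt& p : pts) {
            bool onHull = false;
            for (const Pt& q : hv)
                if (p.x == q.x && p.y == q.y) { onHull = true; break; }
            if (!onHull) rest.push_back(p);
        }
        if (rest.size() == pts.size()) break; // guard against stalling
        pts.swap(rest);
    }
    return layers;
}
```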
A geometric method of sector decomposition
We propose a new geometric method of IR factorization in sector
decomposition. The problem is converted into a set of problems in convex
geometry. The latter problems are solved using algorithms in combinatorial
geometry. This method provides a deterministic algorithm and never falls into
an infinite loop. The number of resulting sectors depends on the triangulation algorithm used. Our test implementation yields fewer sectors than other existing iterative methods.
Comment: 17 pages, 2 eps figures
Optimal Compression of a Polyline with Segments and Arcs
This paper describes an efficient approach to constructing a resultant
polyline with a minimum number of segments and arcs. While fitting an arc can
be done with complexity O(1) (see [1] and [2]), the main complexity is in
checking that the resultant arc is within the specified tolerance. There are
additional tests to check for the ends and for changes in direction (see [3,
section 3] and [4, sections II.C and II.D]). However, the most important part
in reducing complexity is the ability to subdivide the polyline in order to
limit the number of arc fittings [2]. The approach described in this paper finds a compressed polyline with a minimum number of segments and arcs.
Comment: 40 pages, 34 figures, 3 tables
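The tolerance check at the heart of the complexity argument can be illustrated with a deliberately simplified radial-error test: fit a circle through the first, middle, and last vertex of a sub-polyline and verify that every vertex lies within the tolerance of that circle. All names below are ours, and the paper's actual fitting and end/direction tests are more involved:

```cpp
// Simplified radial-error test for arc fitting.
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Circumcenter and radius of triangle (a, b, c);
// returns false if the points are (nearly) collinear.
static bool circumcenter(Pt a, Pt b, Pt c, Pt& o, double& r) {
    double d = 2.0 * (a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y));
    if (std::fabs(d) < 1e-12) return false;
    double a2 = a.x*a.x + a.y*a.y, b2 = b.x*b.x + b.y*b.y, c2 = c.x*c.x + c.y*c.y;
    o.x = (a2 * (b.y - c.y) + b2 * (c.y - a.y) + c2 * (a.y - b.y)) / d;
    o.y = (a2 * (c.x - b.x) + b2 * (a.x - c.x) + c2 * (b.x - a.x)) / d;
    r = std::hypot(a.x - o.x, a.y - o.y);
    return true;
}

// True if all vertices lie within tol of the circle through the first,
// middle, and last vertex (radial error only; the angular-range and
// direction checks cited above are omitted).
bool fitsArc(const std::vector<Pt>& poly, double tol) {
    if (poly.size() < 3) return true;
    Pt o; double r;
    if (!circumcenter(poly.front(), poly[poly.size() / 2], poly.back(), o, r))
        return false;
    for (const Pt& p : poly)
        if (std::fabs(std::hypot(p.x - o.x, p.y - o.y) - r) > tol)
            return false;
    return true;
}
```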
Computing convex hulls and counting integer points with polymake
The main purpose of this paper is to report on the state of the art of
computing integer hulls and their facets as well as counting lattice points in
convex polytopes. Using the polymake system we explore various algorithms and
implementations. Our experience in this area is summarized in ten "rules of
thumb".Comment: major revision: experiments repeated with new software versions; new
experiments and additional software tested; new title; 38 pages including
appendix, 10 figures, 9 table
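As a point of reference for the counting task, the naive baseline is to enumerate the integer points of a bounding box and test each against the inequality description of the polytope; polymake and the tools it interfaces (such as LattE, which implements Barvinok's algorithm) do far better. This 2D sketch, with names of our own choosing, only pins down what is being counted:

```cpp
// Brute-force lattice point count for a 2D polytope given as A*x <= b.
#include <vector>

struct HalfPlane { long a0, a1, b; };   // a0*x + a1*y <= b

long countLatticePoints(const std::vector<HalfPlane>& H,
                        long xlo, long xhi, long ylo, long yhi) {
    long count = 0;
    for (long x = xlo; x <= xhi; ++x)
        for (long y = ylo; y <= yhi; ++y) {
            bool inside = true;
            for (const HalfPlane& h : H)
                if (h.a0 * x + h.a1 * y > h.b) { inside = false; break; }
            if (inside) ++count;
        }
    return count;
}
```

For example, describing the unit square by x >= 0, y >= 0, x <= 1, y <= 1 and scanning the box [0,1] x [0,1] yields the expected count of 4.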
Experience Report: Formal Methods in Material Science
Increased demands in the field of scientific computation require that
algorithms be more efficiently implemented. Maintaining correctness in addition
to efficiency is a challenge that software engineers in the field have to face.
In this report we share our first impressions and experiences on the
applicability of formal methods to such design challenges arising in the
development of scientific computation software in the field of material
science. We investigated two different algorithms, one for load distribution
and one for the computation of convex hulls, and demonstrate how formal methods have been used to discover counterexamples to the correctness of the existing implementations as well as to prove the correctness of a revised algorithm. The techniques employed for this include SMT solvers and automatic and interactive verification tools.
Comment: experience report
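As a toy illustration of such a counterexample search (the report does not say which solver or encoding was used, so the choice of Z3's C++ API here is our assumption): ask the solver for inputs on which the planar orientation predicate violates its cyclic symmetry orient(a, b, c) = orient(b, c, a). Over the reals the query is expected to be unsatisfiable, i.e. no counterexample exists; the same property can fail in floating-point arithmetic, which is precisely the kind of defect such searches uncover in hull implementations.

```cpp
// Toy SMT-based counterexample search with Z3's C++ API (assumed
// toolchain, not the report's): does orient(a,b,c) ever differ from
// orient(b,c,a) over the reals? Expected answer: unsat.
#include <iostream>
#include <z3++.h>

int main() {
    z3::context ctx;
    z3::expr ax = ctx.real_const("ax"), ay = ctx.real_const("ay");
    z3::expr bx = ctx.real_const("bx"), by = ctx.real_const("by");
    z3::expr cx = ctx.real_const("cx"), cy = ctx.real_const("cy");

    // orient(p,q,r) = (qx-px)*(ry-py) - (qy-py)*(rx-px)
    z3::expr oabc = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    z3::expr obca = (cx - bx) * (ay - by) - (cy - by) * (ax - bx);

    z3::solver s(ctx);
    s.add(oabc != obca);                 // search for a counterexample

    if (s.check() == z3::unsat)
        std::cout << "property holds: no counterexample\n";
    else
        std::cout << "counterexample:\n" << s.get_model() << "\n";
    return 0;
}
```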