117 research outputs found

    Finding Convex Hulls Using Quickhull on the GPU

    We present a convex hull algorithm that is accelerated on commodity graphics hardware. We analyze and identify the hurdles of writing a recursive divide-and-conquer algorithm on the GPU and devise a framework for representing this class of problems. Our framework transforms the recursive splitting step into a permutation step that is well-suited for graphics hardware. Our convex hull algorithm of choice is Quickhull. Our parallel Quickhull implementation (for both the 2D and 3D cases) achieves an order-of-magnitude speedup over standard computational geometry libraries.
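The recursive split this abstract describes can be sketched sequentially. The following pure-Python 2D Quickhull is an illustration only, not the authors' GPU code; the comments note where their framework would substitute a data-parallel permutation step for the recursion:

```python
# Minimal sequential 2D Quickhull sketch (illustrative; the paper's GPU
# framework replaces the recursion below with data-parallel permutation
# passes that compact each sub-problem's points into contiguous memory).

def cross(o, a, b):
    # Twice the signed area of triangle (o, a, b); > 0 means b lies to
    # the left of the directed line o -> a.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _hull_side(pts, a, b):
    # Points strictly left of segment a -> b contribute hull vertices
    # between a and b.
    left = [p for p in pts if cross(a, b, p) > 0]
    if not left:
        return []
    far = max(left, key=lambda p: cross(a, b, p))  # farthest from line a-b
    # Recursive split: on the GPU this becomes a single permutation step.
    return _hull_side(left, a, far) + [far] + _hull_side(left, far, b)

def quickhull(points):
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]  # extreme points are always hull vertices
    return [a] + _hull_side(pts, a, b) + [b] + _hull_side(pts, b, a)
```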

    ParGeo: A Library for Parallel Computational Geometry


    Convex Hulls: Surface Mapping onto a Sphere

    Writing an uncomplicated, robust, and scalable three-dimensional convex hull algorithm is challenging: implementations must contend with coplanar and collinear configurations, numerical accuracy, and performance and complexity trade-offs. A number of methods find the convex hull through geometric calculations, such as distances between points, but they do not address the technical challenges of implementing a usable solution (e.g., numerical issues and degenerate point clouds). We explain some common algorithmic pitfalls and the engineering modifications that overcome them. We present a novel iterative method using support mapping and surface projection to create an uncomplicated and robust 2D and 3D convex hull algorithm.
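The support-mapping primitive central to this abstract can be sketched as follows. The direction-sampling hull builder is a simplified 2D stand-in (the paper works in 3D with projection onto a sphere), not the authors' algorithm:

```python
import math

# Illustrative sketch of the support-mapping primitive: for a direction d,
# the support point is the cloud point farthest along d. Every support
# point is a convex-hull vertex, which is one reason support mapping
# sidesteps many coplanar/collinear predicate issues.

def support(points, d):
    return max(points, key=lambda p: p[0] * d[0] + p[1] * d[1])

def hull_by_support_sampling(points, n_dirs=360):
    # Query the support map over directions sampled on the unit circle.
    # This 2D stand-in may miss hull vertices that are extreme only for
    # directions between samples; it is not the paper's exact method.
    verts = []
    for i in range(n_dirs):
        t = 2 * math.pi * i / n_dirs
        v = support(points, (math.cos(t), math.sin(t)))
        if v not in verts:
            verts.append(v)
    return verts
```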

    Occlusion Modeling for Coherent Echo Data Simulation: A Comparison Between Ray-Tracing and Convex-Hull Methods

    The ability to simulate realistic coherent datasets for synthetic aperture imaging systems is crucial for the design, development and evaluation of the sensors and their signal processing pipelines, machine learning algorithms and autonomy systems. In the case of synthetic aperture sonar (SAS), collecting experimental data is expensive and it is rarely possible to obtain ground truth of the sensor's path, the speed of sound in the medium, and the geometry of the imaged scene. Simulating sonar echo data allows signal processing algorithms to be tested with known ground truth, enabling rapid and inexpensive development and evaluation of signal processing algorithms. The de facto standard for simulating conventional high-frequency (i.e., > 100 kHz) SAS echo data from an arbitrary sensor, path and scene is to use a point-based or facet-based diffraction model. A crucial part of this process is acoustic occlusion modeling. This article describes a SAS simulation pipeline and compares implementations of two occlusion methods: ray-tracing and a newer approximate method based on finding the convex hull of a transformed point cloud. The full capability of the simulation pipeline is demonstrated using an example scene based on a high-resolution 3D model of the SS Thistlegorm shipwreck which was obtained using photogrammetry. The 3D model spans a volume of 220 × 130 × 25 m and is comprised of over 30 million facets that are decomposed into a cloud of almost 1 billion points.
The convex-hull occlusion model was found to result in simulated SAS imagery that is qualitatively indistinguishable from the ray-tracing approach and quantitatively very similar, demonstrating that this alternative method has the potential to improve speed while retaining high simulation fidelity. The convex-hull approach was found to be up to 4 times faster in a fair speed comparison of serial and parallel CPU implementations of both methods, with the largest performance increase for wide-beam systems. Over the majority of scene scales tested, the fastest occlusion modeling algorithm was GPU-accelerated ray-tracing, which was up to 2 times faster than the parallel CPU convex-hull implementation. Although GPU implementations of convex hull algorithms are not currently readily available, future development of GPU-accelerated convex-hull finding could make the new approach much more viable. In the meantime, however, ray-tracing is still preferable, since it has higher accuracy and can leverage existing implementations for high-performance computing architectures.
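One well-known way to realize occlusion testing via the convex hull of a transformed cloud is a spherical-flip ("hidden point removal") transform. The sketch below is a 2D illustration of that general idea only; the exact transform used in the article's pipeline is not specified here, so treat every detail as an assumption:

```python
import math

# Sketch of occlusion culling via the convex hull of a transformed cloud,
# in the spirit of spherical-flip hidden-point removal. 2D, pure Python,
# and an assumption about the transform, not the article's implementation.
# Assumes no input point coincides with the sensor position.

def monotone_chain(points):
    # Andrew's monotone chain convex hull; returns hull vertices.
    pts = sorted(set(points))
    if len(pts) < 3:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])

def visible_points(points, sensor, radius_scale=10.0):
    # Spherical flip: reflect each point about a circle centred on the
    # sensor, so near points are pushed far out and far points less so.
    shifted = [(p[0] - sensor[0], p[1] - sensor[1]) for p in points]
    R = radius_scale * max(math.hypot(x, y) for x, y in shifted)
    flipped = []
    for (x, y), p in zip(shifted, points):
        r = math.hypot(x, y)
        s = (2 * R - r) / r
        flipped.append(((x * s, y * s), p))
    # A point is deemed visible if its flipped image lies on the convex
    # hull of the flipped cloud plus the sensor (origin after shifting).
    hull = set(monotone_chain([f for f, _ in flipped] + [(0.0, 0.0)]))
    return [p for f, p in flipped if f in hull]
```

With a sensor at the origin, a point directly behind a nearer point along the same bearing is culled, while points on other bearings remain visible.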

    Load-Balancing for Parallel Delaunay Triangulations

    Get PDF
    Computing the Delaunay triangulation (DT) of a given point set in $\mathbb{R}^D$ is one of the fundamental operations in computational geometry. Recently, Funke and Sanders (2017) presented a divide-and-conquer DT algorithm that merges two partial triangulations by re-triangulating a small subset of their vertices - the border vertices - and combining the three triangulations efficiently via parallel hash table lookups. The input point division should therefore yield roughly equal-sized partitions for good load-balancing and also result in a small number of border vertices for fast merging. In this paper, we present a novel divide step based on partitioning the triangulation of a small sample of the input points. In experiments on synthetic and real-world data sets, we achieve nearly perfectly balanced partitions and small border triangulations. This almost cuts running time in half compared to non-data-sensitive division schemes on inputs exhibiting an exploitable underlying structure. Comment: Short version submitted to EuroPar 201
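The data-sensitive division can be illustrated with a simplified stand-in. The paper partitions a Delaunay triangulation of a small sample; the sketch below instead assigns each input point to its nearest sample point, which already adapts partition shapes to the data (unlike fixed grid or axis cuts) but omits the triangulation machinery:

```python
import math
import random

# Simplified stand-in for a data-sensitive divide step: draw a small
# sample of the input and assign every point to its nearest sample point.
# This is only an illustration of data-adaptive partitioning, not the
# paper's sample-triangulation divide step.

def sample_partition(points, k, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # small random sample
    parts = {i: [] for i in range(k)}
    for p in points:
        # Nearest sample point decides the partition (Voronoi-like cells).
        i = min(range(k), key=lambda j: math.dist(p, centers[j]))
        parts[i].append(p)
    return centers, parts
```

On clustered inputs the induced cells follow the clusters, which is the property the divide step needs for balanced partitions and small border sets.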