
    Efficient Parallel and Distributed Algorithms for GIS Polygon Overlay Processing

    Polygon clipping is one of the complex operations in computational geometry. It is used in Geographic Information Systems (GIS), Computer Graphics, and VLSI CAD. For two polygons with n and m vertices, the number of intersections can be O(nm). In this dissertation, we present the first output-sensitive CREW PRAM algorithm, which can perform polygon clipping in O(log n) time using O(n + k + k') processors, where n is the number of vertices, k is the number of intersections, and k' is the number of additional temporary vertices introduced by the partitioning of the polygons. The current best algorithm, by Karinthi, Srinivas, and Almasi, does not handle self-intersecting polygons, is not output-sensitive, and must employ O(n^2) processors to achieve O(log n) time. The second parallel algorithm is an output-sensitive PRAM algorithm based on the Greiner-Hormann algorithm with O(log n) time complexity using O(n + k) processors. This is cost-optimal when compared to the time complexity of the best-known sequential plane-sweep-based algorithm for polygon clipping. For self-intersecting polygons, the time complexity is O(((n + k) log n log log n)/p) using p processors. In addition to these parallel algorithms, the other main contributions of this dissertation are 1) multi-core and many-core implementations for clipping a pair of polygons and 2) MPI-GIS and Hadoop Topology Suite for distributed polygon overlay using a cluster of nodes. An Nvidia GPU and CUDA are used for the many-core implementation. The MPI-based system achieves a 44X speedup while processing about 600K polygons from two real-world GIS shapefiles, 1) USA Detailed Water Bodies and 2) USA Block Group Boundaries, within 20 seconds on a 32-node (8 cores each) IBM iDataPlex cluster interconnected by InfiniBand.
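
    As a rough illustration of why output sensitivity matters here (this is a minimal brute-force sketch, not the dissertation's CREW PRAM or Greiner-Hormann-based algorithm, and the function names are illustrative only), the snippet below enumerates the k edge-edge intersection points of two polygons by testing all n*m edge pairs; the parallel algorithms above aim to spend work proportional to n + k rather than to nm.

# Minimal brute-force sketch: enumerate the k intersection points between the
# edges of two simple polygons, each given as a list of (x, y) vertices.
# The nested loop inspects all n*m edge pairs, which is exactly what an
# output-sensitive O(n + k) approach avoids when k << n*m.

def segment_intersection(p, q, r, s):
    """Return the intersection point of segments pq and rs, or None."""
    d1 = (q[0] - p[0], q[1] - p[1])
    d2 = (s[0] - r[0], s[1] - r[1])
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if denom == 0:                      # parallel or collinear: ignored here
        return None
    t = ((r[0] - p[0]) * d2[1] - (r[1] - p[1]) * d2[0]) / denom
    u = ((r[0] - p[0]) * d1[1] - (r[1] - p[1]) * d1[0]) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (p[0] + t * d1[0], p[1] + t * d1[1])
    return None

def clip_candidate_points(subject, clip):
    """All edge-edge intersection points between two polygons (O(nm) pairs)."""
    points = []
    for i in range(len(subject)):
        p, q = subject[i], subject[(i + 1) % len(subject)]
        for j in range(len(clip)):
            r, s = clip[j], clip[(j + 1) % len(clip)]
            x = segment_intersection(p, q, r, s)
            if x is not None:
                points.append(x)
    return points

if __name__ == "__main__":
    square = [(0, 0), (4, 0), (4, 4), (0, 4)]
    triangle = [(2, -1), (6, 2), (2, 5)]
    print(clip_candidate_points(square, triangle))   # the k intersection vertices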

    Spectral Mapping Reconstruction of Extended Sources

    Three-dimensional spectroscopy of extended sources is typically performed with dedicated integral field spectrographs. We describe a method of reconstructing full spectral cubes, with two spatial and one spectral dimension, from rastered spectral mapping observations employing a single slit in a traditional slit spectrograph. When the background and image characteristics are stable, as is often achieved in space, the use of traditional long slits for integral field spectroscopy can substantially reduce instrument complexity over dedicated integral field designs, without loss of mapping efficiency -- particularly compelling when a long-slit mode for follow-up of single unresolved sources is separately required. We detail a custom flux-conserving cube reconstruction algorithm, discuss issues of extended-source flux calibration, and describe CUBISM, a tool which implements these methods for spectral maps obtained with the Spitzer Space Telescope's Infrared Spectrograph. Comment: 11 pages, 8 figures, accepted by PAS
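
    The sketch below is not CUBISM's cube-building algorithm; it is only a minimal one-dimensional illustration, under simplifying assumptions, of what "flux-conserving" means in this context: flux from each input pixel is redistributed to output bins in proportion to fractional overlap, so the summed flux is unchanged by the resampling.

# Illustrative 1D flux-conserving rebinning (not CUBISM itself): each input
# pixel's flux is split among output bins in proportion to its fractional
# overlap with each bin, so the total flux is preserved.
import numpy as np

def rebin_flux_conserving(in_edges, in_flux, out_edges):
    """Redistribute fluxes defined on in_edges onto the bins of out_edges."""
    out_flux = np.zeros(len(out_edges) - 1)
    for i, f in enumerate(in_flux):
        lo, hi = in_edges[i], in_edges[i + 1]
        for j in range(len(out_edges) - 1):
            overlap = min(hi, out_edges[j + 1]) - max(lo, out_edges[j])
            if overlap > 0:
                out_flux[j] += f * overlap / (hi - lo)
    return out_flux

if __name__ == "__main__":
    in_edges = np.linspace(0.0, 10.0, 11)      # 10 input pixels
    in_flux = np.ones(10)
    out_edges = np.linspace(0.0, 10.0, 7)      # 6 coarser output bins
    out = rebin_flux_conserving(in_edges, in_flux, out_edges)
    print(out, out.sum())                      # total flux (10.0) is conserved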

    OPTIMIZATION APPROACHES TO MPI AND AREA MERGING-BASED PARALLEL BUFFER ALGORITHM

    In buffer zone construction, the rasterization-based dilation method inevitably introduces errors, and the double-sided parallel line method involves a series of complex operations. In this paper, we propose a parallel buffer algorithm based on area merging and MPI (Message Passing Interface) to improve the performance of buffer analyses on large datasets. Experimental results reveal three major performance bottlenecks that significantly impact serial and parallel buffer construction efficiency: the area merging strategy, the task load balancing method, and the MPI inter-process results merging strategy. Corresponding optimization approaches, involving a tree-like area merging strategy, a vertex-number-oriented parallel task partition method, and an inter-process results merging strategy, were suggested to overcome these bottlenecks. Experiments were carried out to examine the performance of the optimized parallel algorithm. The results suggested that the optimization approaches could provide high performance and processing capability for buffer construction in a cluster parallel environment. Our method could provide insights into the parallelization of spatial analysis algorithms.
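
    A minimal sketch of the tree-like (pairwise) merging idea is shown below, assuming the Shapely library for the buffer and union primitives; it is not the paper's MPI implementation, only the per-process merging strategy. In the MPI setting described above, each process would build such a partial buffer for its share of features, and the per-process results would then be merged across ranks.

# Tree-like (pairwise) merging of buffered features, assuming Shapely.
# Buffers are merged in rounds, like the leaves of a balanced tree, instead
# of unioning one small geometry into an ever-growing result.
from shapely.geometry import Point
from shapely.ops import unary_union

def buffer_features(points, distance):
    """Buffer each input point by the given distance."""
    return [Point(x, y).buffer(distance) for x, y in points]

def tree_merge(geoms):
    """Merge geometries pairwise in rounds until one buffer zone remains."""
    while len(geoms) > 1:
        merged = []
        for i in range(0, len(geoms) - 1, 2):
            merged.append(unary_union([geoms[i], geoms[i + 1]]))
        if len(geoms) % 2 == 1:            # carry the odd geometry forward
            merged.append(geoms[-1])
        geoms = merged
    return geoms[0]

if __name__ == "__main__":
    pts = [(i * 0.5, 0.0) for i in range(100)]
    zone = tree_merge(buffer_features(pts, 1.0))
    print(zone.area)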

    OpenACC Based GPU Parallelization of Plane Sweep Algorithm for Geometric Intersection

    Line segment intersection is one of the elementary operations in computational geometry. Complex problems in Geographic Information Systems (GIS), such as finding map overlays or spatial joins over polygonal data, require solving segment intersections. The plane sweep paradigm is used for finding geometric intersections efficiently; however, it is difficult to parallelize due to its in-order processing of spatial events. We present a new fine-grained parallel algorithm for geometric intersection and its CPU and GPU implementations using OpenMP and OpenACC. To the best of our knowledge, this is the first work demonstrating an effective parallelization of plane sweep on GPUs. We chose a compiler-directive-based approach for the implementation because of the simplicity it offers in parallelizing sequential code. Using an Nvidia Tesla P100 GPU, our implementation achieves around 40X speedup for the line segment intersection problem on 40K and 80K datasets compared to the sequential CGAL library.
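
    For orientation, a heavily simplified sequential sweep is sketched below. It is not the paper's OpenACC code and not full Bentley-Ottmann: endpoints are processed in x order, an "active" set holds segments whose x-interval is open, each newly opened segment is tested only against the active ones, and only proper (strict) crossings are reported.

# Simplified sweep sketch for segment intersection (sequential, educational).
def intersects(a, b):
    """Strict crossing test for segments a and b via orientation signs."""
    def orient(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    p1, p2 = a
    p3, p4 = b
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def sweep_intersections(segments):
    """Return index pairs of properly crossing segments."""
    events = []
    for i, (p, q) in enumerate(segments):
        lo, hi = (p, q) if p[0] <= q[0] else (q, p)
        events.append((lo[0], 0, i))   # 0 = segment opens
        events.append((hi[0], 1, i))   # 1 = segment closes
    events.sort()
    active, result = set(), []
    for _, kind, i in events:
        if kind == 1:
            active.discard(i)
        else:
            for j in active:           # test only against overlapping x-ranges
                if intersects(segments[i], segments[j]):
                    result.append((min(i, j), max(i, j)))
            active.add(i)
    return result

if __name__ == "__main__":
    segs = [((0, 0), (4, 4)), ((0, 4), (4, 0)), ((5, 0), (6, 1))]
    print(sweep_intersections(segs))   # [(0, 1)]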

    System integration report

    Several areas arising from the system integration issue were examined. Intersystem analysis is discussed as it relates to software development, shared databases and interfaces between TEMPUS and PLAID, shaded graphics rendering systems, object design (BUILD), the TEMPUS animation system, anthropometric lab integration, ongoing TEMPUS support and maintenance, and the impact of UNIX and local workstations on the OSDS environment.

    Generalized kernels of polygons under rotation

    Given a set $\mathcal{O}$ of $k$ orientations in the plane, two points inside a simple polygon $P$ $\mathcal{O}$-see each other if there is an $\mathcal{O}$-staircase contained in $P$ that connects them. The $\mathcal{O}$-kernel of $P$ is the subset of points which $\mathcal{O}$-see all the other points in $P$. This work initiates the study of the computation and maintenance of the $\mathcal{O}$-${\rm Kernel}$ of a polygon $P$ as we rotate the set $\mathcal{O}$ by an angle $\theta$, denoted $\mathcal{O}$-${\rm Kernel}_{\theta}(P)$. In particular, we design efficient algorithms for (i) computing and maintaining the $\{0^{o}\}$-${\rm Kernel}_{\theta}(P)$ while $\theta$ varies in $[-\frac{\pi}{2},\frac{\pi}{2})$, obtaining the angular intervals where the $\{0^{o}\}$-${\rm Kernel}_{\theta}(P)$ is not empty, and (ii) for orthogonal polygons $P$, computing the orientation $\theta\in[0,\frac{\pi}{2})$ such that the area and/or the perimeter of the $\{0^{o},90^{o}\}$-${\rm Kernel}_{\theta}(P)$ are maximum or minimum. These results extend previous works by Gewali, Palios, Rawlins, Schuierer, and Wood. Comment: 12 pages, 4 figures; a version omitting some proofs appeared at the 34th European Workshop on Computational Geometry (EuroCG 2018).

    Multiple dataset visualization (MDV) framework for scalar volume data

    Many applications require comparative analysis of multiple datasets representing different samples, conditions, time instants, or views in order to develop a better understanding of the scientific problem/system under consideration. One effective approach for such analysis is visualization of the data. In this PhD thesis, we propose an innovative multiple dataset visualization (MDV) approach in which two or more datasets of a given type are rendered concurrently in the same visualization. MDV is an important concept for cases where it is not possible to make an inference based on one dataset, and comparisons between many datasets are required to reveal cross-correlations among them. The proposed MDV framework, which deals with several fundamental issues that arise when multiple datasets are visualized together, follows a multithreaded architecture consisting of three core components: data preparation/loading, visualization, and rendering. The visualization module, the major focus of this study, currently deals with isosurface extraction and texture-based rendering techniques. For isosurface extraction, our all-in-memory approach keeps the datasets under consideration and the corresponding geometric data in memory, whereas the only-polygons-in-memory (or only-points-in-memory) approach keeps only the geometric data in memory. To address the associated storage and computation issues, we develop adaptive data coherency and multiresolution schemes. The inter-dataset coherency scheme exploits the similarities among datasets to approximate portions of the isosurfaces of datasets using the isosurface of one or more reference datasets, whereas the intra/inter-dataset multiresolution scheme processes selected portions of each data volume at varying levels of resolution. The graphics hardware-accelerated approaches adopted for MDV include volume clipping, isosurface extraction and volume rendering, which use 3D textures and advanced per-fragment operations. With appropriate user-defined threshold criteria, we find that the various MDV techniques maintain a linear time-N relationship, improve geometry generation and rendering time, and increase the maximum N that can be handled (N: number of datasets). Finally, we demonstrate the effectiveness and usefulness of the proposed MDV by visualizing 3D scalar data (representing electron density distributions in magnesium oxide and magnesium silicate) from parallel quantum mechanical simulations.
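
    The following is a minimal sketch of the basic MDV idea of generating geometry for N scalar volumes concurrently, assuming scikit-image for marching-cubes isosurface extraction and Python threads in place of the thesis' multithreaded pipeline; the coherency, multiresolution, and texture-based rendering schemes are not reproduced here, and the synthetic volumes are stand-ins for loaded datasets.

# Concurrent isosurface extraction for multiple scalar volumes, assuming
# scikit-image (skimage) is available. Each dataset is kept in memory and its
# iso-level geometry is generated by a worker thread.
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from skimage import measure

def make_volume(seed, shape=(64, 64, 64)):
    rng = np.random.default_rng(seed)          # stand-in for a loaded dataset
    return rng.random(shape)

def extract_isosurface(volume, level=0.5):
    """Return (vertices, faces) of the iso-level surface of one scalar volume."""
    verts, faces, _, _ = measure.marching_cubes(volume, level=level)
    return verts, faces

if __name__ == "__main__":
    volumes = [make_volume(s) for s in range(4)]   # N datasets kept in memory
    with ThreadPoolExecutor() as pool:             # geometry generated per dataset
        surfaces = list(pool.map(extract_isosurface, volumes))
    for i, (v, f) in enumerate(surfaces):
        print(f"dataset {i}: {len(v)} vertices, {len(f)} triangles")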

    Hierarchical and Adaptive Filter and Refinement Algorithms for Geometric Intersection Computations on GPU

    Geometric intersection algorithms are fundamental to spatial analysis in Geographic Information Systems (GIS). This dissertation explores high-performance computing solutions for geometric intersection over large amounts of spatial data using the Graphics Processing Unit (GPU). We have developed a hierarchical filter and refinement system for parallel geometric intersection operations involving large polygons and polylines by extending the classical filter-and-refine algorithm with efficient filters that leverage GPU computing. The inputs are two layers of large polygonal datasets, and the computations are spatial intersections on pairs of cross-layer polygons. These intersections are the compute-intensive spatial data analytic kernels in spatial join and map overlay operations in spatial databases and GIS. Efficient filters, such as the PolySketch, PolySketch++, and point-in-polygon filters, have been developed to reduce the refinement workload on GPUs. We also show the application of such filters in speeding up line segment intersections and point-in-polygon tests. Programming models such as CUDA and OpenACC have been used to implement the different versions of the Hierarchical Filter and Refine (HiFiRe) system. Experimental results show good performance of our filter and refinement algorithms. Compared to the standard R-tree filter, on average, our filter technique can still discard 76% of polygon pairs that do not have segment intersection points. The PolySketch filter reduces on average 99.77% of the workload of finding line segment intersections. Compared to the existing Common Minimum Bounding Rectangle (CMBR) filter, which is applied to each cross-layer candidate pair, the workload after using the PolySketch-based CMBR filter is on average 98% smaller. The execution time of our HiFiRe system on two shapefiles, namely USA Water Bodies (464K polygons) and USA Block Group Boundaries (220K polygons), is about 3.38 seconds using an Nvidia Titan V GPU.
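
    The sketch below gives a rough, CPU-only picture of the tile-based filtering idea that PolySketch-style filters build on; it is not the dissertation's GPU implementation, and the function names (sketch, candidate_tile_pairs) are illustrative rather than the system's API. Each polygon's edge list is split into fixed-size groups of consecutive edges ("tiles"), every tile gets its own small MBR, and only cross-layer tile pairs whose MBRs overlap are forwarded to the expensive refinement phase.

# Tile-based filtering sketch: coarse MBR filter first, then tile-level
# (small-MBR) filtering, so refinement only sees overlapping edge groups.

def mbr(points):
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def mbr_overlap(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def sketch(polygon, tile_size=4):
    """List of (edge_range, tile_mbr) covering consecutive edges of the polygon."""
    closed = polygon + [polygon[0]]
    tiles = []
    for start in range(0, len(polygon), tile_size):
        pts = closed[start:start + tile_size + 1]
        tiles.append(((start, min(start + tile_size, len(polygon))), mbr(pts)))
    return tiles

def candidate_tile_pairs(poly_a, poly_b, tile_size=4):
    if not mbr_overlap(mbr(poly_a), mbr(poly_b)):      # coarse MBR filter
        return []
    pairs = []
    for ra, ta in sketch(poly_a, tile_size):
        for rb, tb in sketch(poly_b, tile_size):
            if mbr_overlap(ta, tb):                    # tile-level filter
                pairs.append((ra, rb))
    return pairs

if __name__ == "__main__":
    a = [(0, 0), (4, 0), (4, 4), (0, 4)]
    b = [(3, 3), (6, 3), (6, 6), (3, 6)]
    print(candidate_tile_pairs(a, b, tile_size=2))     # surviving edge-range pairs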

    MPI-Vector-IO: Parallel I/O and Partitioning for Geospatial Vector Data

    In recent times, geospatial datasets have been growing in size, complexity, and heterogeneity. High-performance systems are needed to analyze such data and produce actionable insights efficiently. For polygonal (a.k.a. vector) datasets, operations such as I/O, data partitioning, communication, and load balancing become challenging in a cluster environment. In this work, we present MPI-Vector-IO, a parallel I/O library that we have designed using MPI-IO specifically for partitioning and reading irregular vector data formats such as Well Known Text. It makes MPI aware of spatial data and spatial primitives, and provides support for spatial data types embedded within collective computation and communication using the MPI message-passing library. These abstractions, along with parallel I/O support, are useful for parallel Geographic Information System (GIS) application development on HPC platforms.
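
    A minimal sketch of byte-range partitioned reading of a line-oriented WKT file follows, assuming the mpi4py binding; it is not the MPI-Vector-IO library itself, only the basic idea of giving each rank a contiguous chunk and snapping chunk boundaries to record (line) boundaries so that no WKT geometry is split across ranks. The input file name polygons.wkt is hypothetical.

# Byte-range partitioned reading of a WKT file with mpi4py (run under mpirun).
import os
from mpi4py import MPI

def _next_record_start(f, pos):
    """First byte offset at or after pos that begins a new line."""
    if pos == 0:
        return 0
    f.seek(pos - 1)
    f.readline()                    # consume the rest of the line containing pos-1
    return f.tell()

def read_partition(path, comm):
    rank, size = comm.Get_rank(), comm.Get_size()
    total = os.path.getsize(path)
    chunk = total // size
    with open(path, "rb") as f:
        start = _next_record_start(f, rank * chunk)
        end = total if rank == size - 1 else _next_record_start(f, (rank + 1) * chunk)
        f.seek(start)
        data = f.read(end - start)
    return data.decode().splitlines()   # one WKT geometry per line

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    wkt_lines = read_partition("polygons.wkt", comm)   # hypothetical input file
    counts = comm.gather(len(wkt_lines), root=0)
    if comm.Get_rank() == 0:
        print("records per rank:", counts)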

    Hierarchical Filter and Refinement System Over Large Polygonal Datasets on CPU-GPU

    In this paper, we introduce the hierarchical filter and refinement technique that we have developed for parallel geometric intersection operations involving large polygons and polylines. The inputs are two layers of large polygonal datasets, and the computations are spatial intersections on pairs of cross-layer polygons. These intersections are the compute-intensive spatial data analytic kernels in spatial join and map overlay computations. We have extended the classical filter-and-refine algorithm using a PolySketch filter to improve the performance of geospatial computations. In addition to filtering polygons by their Minimum Bounding Rectangle (MBR), our hierarchical approach applies further filtering using tiles (smaller MBRs) to increase the effectiveness of filtering and decrease the computational workload in the refinement phase. We have implemented this filter and refine system on CPU and GPU using OpenMP and OpenACC. After using an R-tree, on average, our filter technique can still discard 69% of polygon pairs that do not have segment intersection points. The PolySketch filter reduces on average 99.77% of the workload of finding line segment intersections. The point-in-polygon (PNP)-based task reduction and striping algorithms filter out on average 95.84% of the workload of point-in-polygon tests. Our CPU-GPU system performs a spatial join on two shapefiles, namely USA Water Bodies and USA Block Group Boundaries, with 683K polygons in about 10 seconds using Nvidia Titan V and Titan Xp GPUs.
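
    For reference, the sketch below shows the basic ray-crossing point-in-polygon (PNP) primitive that filters like the PNP-based task reduction mentioned above build on; the striping algorithm and GPU-side batching from the paper are not shown, and this is only an illustrative single-point version.

# Ray-crossing (even-odd rule) point-in-polygon test: count crossings of a
# horizontal ray from the query point toward +infinity.
def point_in_polygon(pt, polygon):
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                         # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

if __name__ == "__main__":
    poly = [(0, 0), (5, 0), (5, 5), (0, 5)]
    print(point_in_polygon((1, 1), poly), point_in_polygon((6, 1), poly))  # True False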