
    Distance-Sensitive Planar Point Location

    Let $\mathcal{S}$ be a connected planar polygonal subdivision with $n$ edges that we want to preprocess for point-location queries, and where we are given the probability $\gamma_i$ that the query point lies in a polygon $P_i$ of $\mathcal{S}$. We show how to preprocess $\mathcal{S}$ such that the query time for a point $p \in P_i$ depends on $\gamma_i$ and, in addition, on the distance from $p$ to the boundary of $P_i$: the further away from the boundary, the faster the query. More precisely, we show that a point-location query can be answered in time $O\left(\min\left(\log n,\; 1 + \log \frac{\mathrm{area}(P_i)}{\gamma_i \Delta_p^2}\right)\right)$, where $\Delta_p$ is the shortest Euclidean distance of the query point $p$ to the boundary of $P_i$. Our structure uses $O(n)$ space and $O(n \log n)$ preprocessing time. It is based on a decomposition of the regions of $\mathcal{S}$ into convex quadrilaterals and triangles with the following property: for any point $p \in P_i$, the quadrilateral or triangle containing $p$ has area $\Omega(\Delta_p^2)$. For the special case where $\mathcal{S}$ is a subdivision of the unit square and $\gamma_i = \mathrm{area}(P_i)$, we present a simpler solution that achieves a query time of $O\left(\min\left(\log n,\; \log \frac{1}{\Delta_p^2}\right)\right)$. The latter solution can be extended to convex subdivisions in three dimensions.
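
    The bound is easiest to digest numerically. The sketch below simply evaluates the stated query-time expression for concrete inputs, treating the constants hidden by the O-notation as 1 and using base-2 logarithms; the function name and sample values are illustrative assumptions, not taken from the paper.

```python
import math

def query_time_bound(n, area_pi, gamma_i, delta_p):
    """Evaluate min(log n, 1 + log(area(P_i) / (gamma_i * delta_p^2))),
    the distance-sensitive query-time bound stated above, with all
    O-notation constants taken as 1 (an illustrative assumption)."""
    distance_term = 1 + math.log2(area_pi / (gamma_i * delta_p ** 2))
    return min(math.log2(n), distance_term)

# A query point deep inside a likely polygon is located quickly...
print(query_time_bound(n=10**6, area_pi=0.25, gamma_i=0.25, delta_p=0.4))   # ~3.6
# ...while a point hugging the boundary falls back to the log n term.
print(query_time_bound(n=10**6, area_pi=0.25, gamma_i=0.25, delta_p=1e-6))  # ~19.9
```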

    Fractal Image Compression on MIMD Architectures II: Classification Based Speed-up Methods

    Since fractal image compression is computationally very expensive, speed-up techniques are required in addition to parallel processing in order to compress large images in reasonable time. In this paper we discuss parallel fractal image compression algorithms suited for MIMD architectures that employ block classification as a speed-up method.
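
    As a concrete illustration of classification-based speed-up, the sketch below buckets image blocks by the brightness ordering of their four quadrants, one classical classification scheme for fractal coding; each range block is then matched only against domain blocks in the same bucket. The abstract does not specify which classifier the authors employ, so this is a stand-in, and all names are illustrative.

```python
import numpy as np
from collections import defaultdict

def brightness_class(block):
    """Class label for a square image block: the permutation that sorts
    the mean brightness of its four quadrants. Range blocks are matched
    only against domain blocks of the same class, pruning the search."""
    h, w = block.shape
    quads = [block[:h//2, :w//2].mean(), block[:h//2, w//2:].mean(),
             block[h//2:, :w//2].mean(), block[h//2:, w//2:].mean()]
    return tuple(np.argsort(quads))

# Bucket the domain pool once, up front; each range block then searches
# only pool[brightness_class(range_block)] instead of the whole pool.
pool = defaultdict(list)
for domain_block in (np.random.rand(8, 8) for _ in range(100)):
    pool[brightness_class(domain_block)].append(domain_block)
```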

    A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

    Fractal compression is a lossy compression technique for gray/color images and video. It offers a high compression ratio and good image quality with fast decoding, but reducing encoding time remains a challenge. This review analyzes the most significant existing approaches to fractal-based gray/color image and video compression: block-matching motion-estimation approaches for finding motion vectors in a frame, based on inter-frame coding and intra-frame coding (i.e., coding of individual frames), and automata-theory-based coding approaches for representing an image or a sequence of images. Though other reviews of fractal coding exist, this paper differs in several respects. Building on it, one can develop new shape patterns for motion estimation and combine existing block-matching motion estimation with automata-based coding, with a specific focus on reducing encoding time while achieving better image/video reconstruction quality. The paper is aimed at beginners in the domain of video compression.
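
    To make the block-matching baseline concrete, here is a minimal exhaustive-search motion estimator using the sum-of-absolute-differences (SAD) criterion; faster patterns such as three-step or diamond search visit only a subset of these candidate vectors. The block size, search range, and all names are illustrative assumptions, not taken from the reviewed papers.

```python
import numpy as np

def full_search(ref, cur, by, bx, block=8, search=7):
    """Exhaustive block-matching motion estimation: return the motion
    vector (dy, dx) minimizing the sum of absolute differences (SAD)
    between the current block and candidate blocks in the reference
    frame within a +/- `search` pixel window."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate block would fall outside the frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(target - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```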

    Yellow Tree: A Distributed Main-memory Spatial Index Structure for Moving Objects

    Mobile devices equipped with wireless communication and positioning systems to locate objects of interest are commonplace today, providing the impetus to develop location-aware applications. At the heart of location-aware applications are moving objects, i.e., objects that continuously change location over time, such as cars in transportation networks, pedestrians, or postal packages. Location-aware applications tend to support the tracking of very large numbers of such moving objects, as well as many users interested in the locations of other moving objects. Such applications rely on database management systems to model, store, and query moving-object data. The management of moving-object data exposes the limitations of traditional (spatial) database management systems and of the index structures they use to keep track of objects' locations. Spatial index structures designed for geographic objects in the past primarily assume that data are static (e.g., land parcels, road networks, or airport locations), thus requiring a limited amount of index updates and reorganization over time. When handling moving objects, however, there is a need for continuous reorganization of spatial index structures so that they remain up to date with constantly and rapidly changing object locations. This research addresses some of the key issues surrounding the efficient database management of moving objects whose location update rate to the database system varies from 1 to 30 minutes. Furthermore, we address the design of a highly scalable and efficient spatial index structure to support location tracking and querying of large numbers of moving objects. We explore the architectural and data-structure changes required to handle large numbers of moving objects, focusing specifically on the index structures needed to process spatial range queries and object-based queries on constantly changing moving-object data. We argue for main-memory spatial index structures that dynamically adapt to continuously changing moving-object data while concurrently answering spatial range queries efficiently. A proof-of-concept implementation called the yellow tree, a distributed main-memory index structure, and a simulated environment to generate moving objects are demonstrated. Using experiments conducted on simulated moving-object data, we conclude that a distributed main-memory spatial index structure is required to handle dynamic location updates and efficiently answer spatial range queries on moving objects. Future work on enhancing the query-processing performance of the yellow tree is also discussed.
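
    The yellow tree itself is a distributed structure whose internals are not given in the abstract, but the two operations the dissertation centers on, frequent location updates and spatial range queries, can be sketched with a single-node uniform-grid index. This is a minimal stand-in showing the access pattern only; the class and all names are ours, not the dissertation's.

```python
from collections import defaultdict

class GridIndex:
    """Single-node uniform-grid index over moving objects, supporting
    frequent location updates and spatial range queries."""

    def __init__(self, cell=100.0):
        self.cell = cell
        self.cells = defaultdict(set)  # (cx, cy) -> set of object ids
        self.pos = {}                  # object id -> (x, y)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def update(self, oid, x, y):
        # Move the object: drop it from its old cell, insert into the new one.
        if oid in self.pos:
            self.cells[self._key(*self.pos[oid])].discard(oid)
        self.pos[oid] = (x, y)
        self.cells[self._key(x, y)].add(oid)

    def range_query(self, x1, y1, x2, y2):
        # Visit only the grid cells overlapping the query rectangle.
        (cx1, cy1), (cx2, cy2) = self._key(x1, y1), self._key(x2, y2)
        hits = []
        for cx in range(cx1, cx2 + 1):
            for cy in range(cy1, cy2 + 1):
                for oid in self.cells[(cx, cy)]:
                    x, y = self.pos[oid]
                    if x1 <= x <= x2 and y1 <= y <= y2:
                        hits.append(oid)
        return hits

idx = GridIndex(cell=50.0)
idx.update("car-1", 10.0, 20.0)
idx.update("car-1", 60.0, 80.0)          # a later position report
print(idx.range_query(0, 0, 100, 100))   # ['car-1']
```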

    Load Balancing Algorithms for Parallel Spatial Join on HPC Platforms

    Geospatial datasets are growing in volume, complexity, and heterogeneity. Efficient execution of geospatial computations and analytics on large-scale datasets requires parallel processing, but exploiting fine-grained parallelism on large compute clusters demands load-balanced partitioning of skewed datasets, which is challenging. The workload in spatial join is data-dependent and highly irregular, and wide variation in the size and density of geometries from one region of the map to another further exacerbates the load imbalance. This dissertation focuses on the spatial join operation used in Geographic Information Systems (GIS) and spatial databases, where the inputs are two layers of geospatial data and the output is a combination of the two layers according to a join predicate. The dissertation introduces a novel spatial data partitioning algorithm geared towards load-balancing parallel spatial join processing. Unlike existing partitioning techniques, the proposed algorithm divides the spatial join workload itself, instead of partitioning the individual datasets separately, to provide better load balancing. This workload partitioning algorithm has been evaluated on a high-performance computing system using real-world datasets. An intermediate output-sensitive duplication-avoidance technique is proposed that decreases the external-memory space required for storing spatial join candidates across the partitions. GPU acceleration is used to further reduce the spatial partitioning runtime. For dynamic load balancing in spatial join, a novel framework for fine-grained work stealing is presented; the framework is efficient and NUMA-aware. Performance improvements are demonstrated on shared- and distributed-memory architectures using threads and message passing, and experimental results show effective mitigation of data skew. The framework supports a variety of spatial join predicates and spatial overlay using partitioned and un-partitioned datasets.
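
    A minimal sketch of the core idea, partitioning the join workload rather than each dataset separately: bucket both layers' bounding boxes onto a grid, estimate each cell's work as the number of candidate pairs |R_cell| * |S_cell|, and assign cells to workers greedily, largest job first. The dissertation's partitioner (and its GPU and work-stealing machinery) is far more elaborate; all names here are illustrative assumptions.

```python
import heapq
from collections import defaultdict

def partition_join_workload(r_boxes, s_boxes, cell, workers):
    """Bucket both layers' bounding boxes (x1, y1, x2, y2) onto a
    uniform grid, estimate each cell's join work as the number of
    candidate pairs |R_cell| * |S_cell|, and assign cells to workers
    greedily, largest job first (LPT scheduling)."""
    def cells_of(box):
        x1, y1, x2, y2 = box
        for cx in range(int(x1 // cell), int(x2 // cell) + 1):
            for cy in range(int(y1 // cell), int(y2 // cell) + 1):
                yield (cx, cy)

    r_cnt, s_cnt = defaultdict(int), defaultdict(int)
    for box in r_boxes:
        for c in cells_of(box):
            r_cnt[c] += 1
    for box in s_boxes:
        for c in cells_of(box):
            s_cnt[c] += 1

    # Only cells where both layers are present produce join work.
    work = {c: r_cnt[c] * s_cnt[c] for c in r_cnt if s_cnt[c] > 0}

    heap = [(0, wid) for wid in range(workers)]  # (current load, worker id)
    heapq.heapify(heap)
    assignment = defaultdict(list)
    for c, cost in sorted(work.items(), key=lambda kv: -kv[1]):
        load, wid = heapq.heappop(heap)
        assignment[wid].append(c)
        heapq.heappush(heap, (load + cost, wid))
    return assignment
```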