
    Shared memory with hidden latency on a family of mesh-like networks


    High-Quality Shared-Memory Graph Partitioning

    Partitioning graphs into blocks of roughly equal size such that few edges run between blocks is a frequently needed operation when processing graphs. Recently, the size, variety, and structural complexity of these networks have grown dramatically. Unfortunately, previous approaches to parallel graph partitioning have problems in this context since they often show a negative trade-off between speed and quality. We present an approach to multi-level shared-memory parallel graph partitioning that guarantees balanced solutions, shows high speed-ups for a variety of large graphs, and yields very good quality independently of the number of cores used. For example, on 31 cores, our algorithm partitions our largest test instance into 16 blocks while cutting less than half as many edges as our main competitor when both algorithms are given the same amount of time. Important ingredients include parallel label propagation for both coarsening and improvement, parallel initial partitioning, a simple yet effective approach to parallel localized local search, and fast locality-preserving hash tables.
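
    As a rough illustration of the label propagation step mentioned above, the sketch below clusters a small weighted graph by repeatedly moving each vertex to its most strongly connected neighbouring cluster, subject to a size bound. The graph representation, size bound, and round limit are illustrative assumptions; the paper's version runs this in parallel as part of multi-level coarsening and refinement.

```python
# Minimal sequential sketch of size-constrained label propagation, the kind of
# clustering step used for multi-level coarsening. The adjacency list, weights,
# and size bound are hypothetical; the paper runs this in parallel.
import random

def label_propagation(adj, node_weight, max_cluster_weight, rounds=10):
    labels = {v: v for v in adj}            # every vertex starts in its own cluster
    cluster_weight = dict(node_weight)      # current weight of each cluster
    for _ in range(rounds):
        order = list(adj)
        random.shuffle(order)               # visit vertices in random order
        moved = False
        for v in order:
            # gauge how strongly v is connected to each neighbouring cluster
            gain = {}
            for u, w in adj[v]:
                gain[labels[u]] = gain.get(labels[u], 0) + w
            old = labels[v]
            # move v to the heaviest adjacent cluster that still has room
            for lbl, _ in sorted(gain.items(), key=lambda kv: -kv[1]):
                if lbl == old:
                    break
                if cluster_weight[lbl] + node_weight[v] <= max_cluster_weight:
                    cluster_weight[old] -= node_weight[v]
                    cluster_weight[lbl] += node_weight[v]
                    labels[v] = lbl
                    moved = True
                    break
        if not moved:
            break
    return labels

# toy usage: a 4-cycle with unit vertex and edge weights
adj = {0: [(1, 1), (3, 1)], 1: [(0, 1), (2, 1)],
       2: [(1, 1), (3, 1)], 3: [(2, 1), (0, 1)]}
print(label_propagation(adj, {v: 1 for v in adj}, max_cluster_weight=2))
```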

    Working With Incremental Spatial Data During Parallel (GPU) Computation

    Central to many complex systems, spatial actors require an awareness of their local environment to enable behaviours such as communication and navigation. Complex system simulations represent this behaviour with Fixed Radius Near Neighbours (FRNN) search. This algorithm allows actors to store data at spatial locations and then query the data structure to find all data stored within a fixed radius of the search origin. The work within this thesis answers the question: what techniques can be used to improve the performance of FRNN searches during complex system simulations on Graphics Processing Units (GPUs)? It is generally agreed that Uniform Spatial Partitioning (USP) is the most suitable data structure for providing FRNN search on GPUs. However, due to the architectural complexities of GPUs, performance is constrained such that FRNN search remains one of the most expensive stages common to complex system models. Existing innovations to USP highlight the need to take advantage of recent GPU advances, with reduced divergence and fewer redundant memory accesses as viable routes to improve the performance of FRNN search. This thesis addresses these challenges with three separate optimisations that can be used simultaneously. Experiments have assessed the impact of the optimisations on the general case of FRNN search found within complex system simulations and demonstrated their impact in practice when applied to full complex system models. The results presented show that the performance of the construction and query stages of FRNN search can be improved by over 2x and 1.3x, respectively. These improvements allow complex system simulations to execute faster, enabling increases in scale and model complexity.
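
    The uniform spatial partitioning idea underlying FRNN search can be illustrated with a minimal CPU-side sketch: points are binned into grid cells whose edge length equals the search radius, so a query only inspects the 27 surrounding cells. The point format and function names are assumptions for illustration; the GPU implementations studied in the thesis additionally address divergence and memory-access patterns.

```python
# Minimal CPU sketch of Fixed Radius Near Neighbours (FRNN) search over a
# uniform spatial partition: points are binned into grid cells whose edge
# length equals the search radius, so a query only scans the 27 surrounding
# cells. Names and the 3D point format are illustrative, not the thesis code.
from collections import defaultdict
from itertools import product
from math import dist, floor

def build_grid(points, radius):
    grid = defaultdict(list)
    for idx, p in enumerate(points):
        cell = tuple(floor(c / radius) for c in p)
        grid[cell].append(idx)
    return grid

def frnn_query(points, grid, radius, origin):
    cell = tuple(floor(c / radius) for c in origin)
    hits = []
    # scan the query cell and its 26 neighbours
    for offset in product((-1, 0, 1), repeat=3):
        neighbour = tuple(c + o for c, o in zip(cell, offset))
        for idx in grid.get(neighbour, ()):
            if dist(points[idx], origin) <= radius:
                hits.append(idx)
    return hits

points = [(0.0, 0.0, 0.0), (0.5, 0.2, 0.1), (3.0, 3.0, 3.0)]
grid = build_grid(points, radius=1.0)
print(frnn_query(points, grid, radius=1.0, origin=(0.4, 0.0, 0.0)))  # -> [0, 1]
```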

    A Multi-Core Numerical Framework for Characterizing Flow in Oil Reservoirs

    Presented at the SCS Spring Simulation Multi-Conference (SpringSim 2011), April 4-7, 2011, Boston, USA. Awarded Best Paper in the 19th High Performance Computing Symposium and Best Overall Paper at SpringSim 2011. This paper presents a numerical framework that enables scalable, parallel execution of engineering simulations on multi-core, shared-memory architectures. Distribution of the simulations is done by selective hash-tabling of the model domain, which spatially decomposes it into a number of orthogonal computational tasks. These tasks, whose size is critical to optimal cache blocking and consequently to performance, are then distributed for execution to multiple threads using the previously presented task management algorithm, H-Dispatch. Two numerical methods, smoothed particle hydrodynamics (SPH) and the lattice Boltzmann method (LBM), are discussed in the present work, although the framework is general enough to be used with any explicit time integration scheme. The implementation of both SPH and the LBM within the parallel framework is outlined, and the performance of each is presented in terms of speed-up and efficiency. On the 24-core server used in this research, near-linear scalability was achieved for both numerical methods, with utilization efficiencies up to 95%. To close, the framework is employed to simulate fluid flow in a porous rock specimen, which is of broad geophysical significance, particularly in enhanced oil recovery.
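
    The spatial decomposition described above can be sketched roughly as follows: particles are hash-tabled by grid cell, each cell's particle list becomes an independent task, and the tasks are dispatched to worker threads. The cell size, the toy update kernel, and the Python thread pool are stand-ins for illustration; they are not the paper's H-Dispatch scheduler or its SPH/LBM kernels.

```python
# Illustrative sketch of the spatial decomposition idea: particles are
# hash-tabled by grid cell, each cell becomes an independent task, and the
# tasks are handed to a pool of worker threads. The cell size, the toy update
# kernel, and the thread pool stand in for the paper's H-Dispatch task manager
# and its SPH/LBM kernels.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def decompose(particles, cell_size):
    tasks = defaultdict(list)
    for p in particles:
        key = (int(p["x"] // cell_size), int(p["y"] // cell_size))
        tasks[key].append(p)
    return list(tasks.values())

def update_task(task, dt):
    # placeholder explicit time-integration step for one spatial block
    for p in task:
        p["x"] += p["vx"] * dt
        p["y"] += p["vy"] * dt
    return len(task)

particles = [{"x": 0.1, "y": 0.2, "vx": 1.0, "vy": 0.0},
             {"x": 2.5, "y": 0.4, "vx": 0.0, "vy": 1.0}]
tasks = decompose(particles, cell_size=1.0)
with ThreadPoolExecutor(max_workers=4) as pool:
    sizes = list(pool.map(lambda t: update_task(t, dt=0.01), tasks))
print(sizes)  # number of particles processed per spatial task
```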

    Change Management Systems for Seamless Evolution in Data Centers

    Revenue for data centers today is highly dependent on the satisfaction of their enterprise customers. These customers often require various features to migrate their businesses and operations to the cloud. Thus, clouds today introduce new features at a swift pace to onboard new customers and to meet the needs of existing ones. This pace of innovation continues to grow superlinearly; for example, Amazon deployed 1400 new features in 2017. However, such a rapid pace of evolution adds complexity both for users and for the cloud. Clouds struggle to keep up with the deployment speed, and users struggle to learn which features they need and how to use them. The pace of these evolutions has brought us to a tipping point: we can no longer use rules of thumb to deploy new features, and customers need help to identify which features they need. We have built two systems, Janus and Cherrypick, to address these problems. Janus helps data center operators roll out new changes to the data center network. It automatically adapts to the data center topology, routing, traffic, and failure settings. The system reduces the risk of new deployments for network operators, as they can now pick deployment strategies that are less likely to impact users' performance. Cherrypick finds near-optimal cloud configurations for big data analytics. It allows users to search through the new machine types that clouds are constantly introducing and find ones with near-optimal performance that meet their budget. Cherrypick can adapt to new big-data frameworks and applications as well as to new machine types. As the pace of cloud innovation increases, it is critical to have tools that allow operators to deploy new changes as well as tools that enable users to adapt to achieve good performance at low cost. The tools and algorithms discussed in this thesis help accomplish these goals.
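
    As a deliberately naive illustration of the configuration-search problem Cherrypick addresses, the sketch below exhaustively scans candidate machine configurations, discards those over an hourly price budget, and keeps the one with the lowest estimated job cost. The candidate list, pricing fields, and runtime model are hypothetical, and Cherrypick itself uses a far more sample-efficient search than this brute-force loop.

```python
# Deliberately naive sketch of the configuration-search problem: choose, among
# candidate cloud configurations within an hourly price budget, the one with
# the lowest estimated cost for the analytics job. The candidates, prices, and
# exhaustive loop are illustrative only and do not reproduce Cherrypick.

def pick_config(candidates, measure_runtime, budget_per_hour):
    best = None
    for cfg in candidates:
        if cfg["price_per_hour"] > budget_per_hour:
            continue                       # over the hourly budget, skip
        runtime_h = measure_runtime(cfg)   # run (or estimate) the analytics job
        cost = runtime_h * cfg["price_per_hour"]
        if best is None or cost < best[1]:
            best = (cfg, cost)
    return best

candidates = [
    {"name": "small", "vcpus": 4,  "price_per_hour": 0.2},
    {"name": "large", "vcpus": 16, "price_per_hour": 0.8},
]
# toy runtime model: fixed amount of work, runtime shrinks with vCPU count
print(pick_config(candidates, lambda c: 8.0 / c["vcpus"], budget_per_hour=1.0))
```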

    Clustering in the Big Data Era: methods for efficient approximation, distribution, and parallelization

    Data clustering is an unsupervised machine learning task whose objective is to group together similar items. As a versatile data mining tool, data clustering has numerous applications, such as object detection and localization using data from 3D laser-based sensors, finding popular routes using geolocation data, and finding similar patterns of electricity consumption using smart meters.

    The datasets in modern IoT-based applications are becoming more and more challenging for conventional clustering schemes. Big Data is a term used to loosely describe hard-to-manage datasets. In particular, large numbers of data points, high rates of data production, large numbers of dimensions, high skewness, and distributed data sources are aspects that challenge classical data processing schemes, including clustering methods. This thesis contributes to efficient big data clustering for distributed and parallel computing architectures, representative of the processing environments in the edge-cloud computing continuum. The thesis also proposes approximation techniques to cope with certain challenging aspects of big data.

    Regarding distributed clustering, the thesis proposes MAD-C, abbreviating Multi-stage Approximate Distributed Cluster-Combining. MAD-C leverages an approximation-based data synopsis that drastically lowers the required communication bandwidth among the distributed nodes and achieves multiplicative savings in computation time, compared to a baseline that centrally gathers and clusters the data. The thesis shows that MAD-C can be used to detect and localize objects with high accuracy using data from distributed 3D laser-based sensors. Furthermore, the work in the thesis shows how to utilize MAD-C to efficiently detect objects within a restricted area for geofencing purposes.

    Regarding parallel clustering, the thesis proposes a family of algorithms called PARMA-CC, abbreviating Parallel Multistage Approximate Cluster Combining. Using an approximation-based data synopsis, PARMA-CC algorithms achieve scalability on multi-core systems by facilitating parallel execution of threads with limited dependencies, which are resolved using fine-grained synchronization techniques. To further enhance efficiency, PARMA-CC algorithms can be configured with respect to different data properties. Analytical and empirical evaluations show that PARMA-CC algorithms achieve significantly higher scalability than the state-of-the-art methods while preserving high accuracy.

    On parallel high-dimensional clustering, the thesis proposes IP.LSH.DBSCAN, abbreviating Integrated Parallel Density-Based Clustering through Locality-Sensitive Hashing (LSH). IP.LSH.DBSCAN fuses the process of creating an LSH index into the process of data clustering, and it takes advantage of data parallelization and fine-grained synchronization. Analytical and empirical evaluations show that IP.LSH.DBSCAN facilitates parallel density-based clustering of massive datasets using desired distance measures, resulting in several orders of magnitude lower latency than the state of the art for high-dimensional data.

    In essence, the thesis proposes methods and algorithmic implementations targeting the problem of big data clustering and its applications using distributed and parallel processing. The proposed methods (available as open source software) are extensible and can be used in combination with other methods.
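
    A minimal sketch of the synopsis idea behind distributed cluster combining is shown below: each node clusters its local data, forwards only compact summaries (centroid and count) rather than raw points, and a combiner greedily merges summaries whose centroids lie within a merge radius. The summary format and the greedy merge are simplifications for illustration, not the MAD-C or PARMA-CC algorithms themselves.

```python
# Illustrative sketch of the synopsis idea behind distributed cluster
# combining: nodes send compact (centroid, count) summaries of their local
# clusters, and the combiner merges summaries whose centroids fall within a
# merge radius. A simplification for illustration, not the MAD-C algorithm.
from math import dist

def summarize(local_clusters):
    # one (centroid, count) pair per local cluster
    summaries = []
    for pts in local_clusters:
        n = len(pts)
        centroid = tuple(sum(c) / n for c in zip(*pts))
        summaries.append((centroid, n))
    return summaries

def combine(all_summaries, merge_radius):
    merged = []
    for centroid, n in all_summaries:
        for i, (c, m) in enumerate(merged):
            if dist(c, centroid) <= merge_radius:
                total = m + n       # weighted merge of the two summaries
                merged[i] = (tuple((ci * m + di * n) / total
                                   for ci, di in zip(c, centroid)), total)
                break
        else:
            merged.append((centroid, n))
    return merged

node_a = summarize([[(0.0, 0.0), (0.2, 0.1)], [(5.0, 5.0)]])
node_b = summarize([[(0.1, 0.1)], [(5.1, 4.9), (4.9, 5.1)]])
print(combine(node_a + node_b, merge_radius=1.0))
```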

    Geometric Algorithms and Data Structures for Simulating Diffusion Limited Reactions

    Radiation therapy is one of the most effective means of treating cancer. An important calculation in radiation therapy is the estimation of the dose distribution in the treated patient, which is key to determining the treatment outcome and potential side effects of the therapy. Biological dose, the level of biological damage (e.g., cell killing ratio or DNA damage) inflicted by the radiation, is the best measure of treatment quality, but it is very difficult to calculate. Therefore, most clinics today use physical dose, the energy deposited by incident radiation per unit body mass, for planning radiation therapy; it can be calculated accurately using kinetic Monte Carlo simulations. Studies have found that physical dose correlates with biological dose but exhibits a very complex relationship that is not yet well understood. Generally speaking, the calculation of biological dose involves four steps: (1) the calculation of the physical dose distribution, (2) the generation of radiochemicals based on the physical dose distribution, (3) the simulation of interactions between radiochemicals and bio-matter in the body, and (4) the estimation of biological damage based on the distribution of radiochemicals. This dissertation focuses on the development of a more efficient and effective simulation algorithm to speed up step (3). The main contribution of this research is an efficient and effective kinetic Monte Carlo (KMC) algorithm for simulating diffusion-limited chemical reactions in the context of radiation therapy. The central problem studied is: given n particles distributed among a small number of particle species, all allowed to diffuse and chemically react according to a small number of chemical reaction equations, predict the radiochemical yield over time. The algorithm presented makes use of a sparse grid structure, with one grid per species per radiochemical reactant, used to group particles in a way that makes nearest neighbor search efficient; particles are stored only once, yet are represented in grids of all appropriate reaction radii. A kinetic data structure is used as the time stepping mechanism, providing spatially local updates to the simulation at a frequency that captures all events, thereby retaining accuracy. A serial version and three parallel versions of the algorithm have been developed. The parallel versions implement the kinetic data structure using both a standard priority queue and a treap data structure in order to investigate the algorithm's scalability. The treap provides a way for each thread of execution to do more work in a particular region of space. A comparison with a spatial discretization variant of the algorithm is also provided.
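
    The priority-queue-driven time stepping can be illustrated with a minimal event-based kinetic Monte Carlo sketch: tentative pairwise reaction times are kept in a heap, the earliest event whose reactants are still alive is executed, and stale events are discarded lazily. The exponential waiting-time model and the all-pairs event generation are illustrative simplifications; the dissertation's algorithm uses sparse grids and a kinetic data structure precisely to avoid this kind of brute force.

```python
# Minimal sketch of priority-queue-driven event stepping in kinetic Monte
# Carlo: tentative pairwise reaction times live in a heap, the earliest valid
# event is executed, and events whose reactants were already consumed are
# discarded lazily. The exponential waiting-time model and all-pairs event
# generation are illustrative simplifications of the dissertation's
# grid-accelerated algorithm.
import heapq
import random

def simulate(particles, rate, t_end):
    alive = set(particles)
    events = []
    for i in particles:
        for j in particles:
            if i < j:
                # sample a tentative reaction time for the pair (i, j)
                t = random.expovariate(rate)
                heapq.heappush(events, (t, i, j))
    reactions = []
    while events:
        t_now, i, j = heapq.heappop(events)
        if t_now > t_end:
            break
        if i in alive and j in alive:      # skip stale events lazily
            alive.discard(i)
            alive.discard(j)
            reactions.append((t_now, i, j))
    return reactions

print(simulate(particles=list(range(6)), rate=1.0, t_end=10.0))
```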