73 research outputs found

    Partial aggregation for collective communication in distributed memory machines

    Get PDF
    High Performance Computing (HPC) systems interconnect a large number of Processing Elements (PEs) in high-bandwidth networks to simulate complex scientific problems. The increasing scale of HPC systems poses great challenges for algorithm designers. As the average distance between PEs increases, data movement across hierarchical memory subsystems introduces high latency. Minimizing latency is particularly challenging in collective communications, where many PEs may interact in complex communication patterns. Although collective communications can be optimized for network-level parallelism, occasional synchronization delays due to dependencies in the communication pattern degrade application performance. To reduce the performance impact of communication and synchronization costs, parallel algorithms are designed with sophisticated latency hiding techniques. The principle is to interleave computation with asynchronous communication, which increases the overall occupancy of compute cores. However, collective communication primitives abstract away parallelism, which limits the integration of latency hiding techniques. Approaches to work around these limitations either modify the algorithmic structure of application codes or replace collective primitives with verbose low-level communication calls. While these approaches give fine-grained control for latency hiding, implementing collective communication algorithms is challenging and requires expert knowledge of HPC network topologies. A collective communication pattern is commonly described as a Directed Acyclic Graph (DAG) in which a set of PEs, represented as vertices, resolve data dependencies through communication along the edges. Our approach improves latency hiding in collective communication through partial aggregation. Based on the mathematical properties of binary operations and homomorphisms, we expose data parallelism in such a DAG to overlap computation with communication. The proposed concepts are implemented and evaluated with a subset of collective primitives in the Message Passing Interface (MPI), an established communication standard in scientific computing. An experimental analysis with communication-bound microbenchmarks shows considerable performance benefits for the evaluated collective primitives. A detailed case study with a large-scale distributed sort algorithm demonstrates how partial aggregation significantly improves performance in data-intensive scenarios. Besides better latency hiding capabilities with collective communication primitives, our approach enables further optimizations of their implementations within MPI libraries. The many asynchronous programming models actively studied in the HPC community can benefit from partial aggregation in collective communication patterns. Future work can utilize partial aggregation to improve the interaction of MPI collectives with accelerator architectures and to design more efficient communication algorithms.
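    To make the latency-hiding principle concrete, the sketch below overlaps independent local computation with a nonblocking MPI collective (standard MPI 3.x). It illustrates only the compute/communication overlap the abstract describes, not the thesis's partial-aggregation scheme itself.

        /* Minimal latency-hiding sketch: start a nonblocking reduction,
         * do independent work while it progresses, synchronize only when
         * the reduced data is actually needed. Standard MPI 3.x, not the
         * thesis's partial-aggregation implementation. */
        #include <mpi.h>
        #include <stdio.h>

        #define N 1024

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);

            double local[N], global[N], other[N];
            for (int i = 0; i < N; i++) { local[i] = 1.0; other[i] = 2.0; }

            /* Start the reduction asynchronously ... */
            MPI_Request req;
            MPI_Iallreduce(local, global, N, MPI_DOUBLE, MPI_SUM,
                           MPI_COMM_WORLD, &req);

            /* ... and overlap it with computation that does not depend
             * on the reduced result. */
            double acc = 0.0;
            for (int i = 0; i < N; i++) acc += other[i] * other[i];

            /* Block only at the point where global[] is consumed. */
            MPI_Wait(&req, MPI_STATUS_IGNORE);

            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            if (rank == 0) printf("global[0] = %f, acc = %f\n", global[0], acc);

            MPI_Finalize();
            return 0;
        }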

    Scalable system software for high performance large-scale applications

    Get PDF
    In the last decades, high-performance large-scale systems have been a fundamental tool for scientific discovery and engineering advances. The sustained growth of supercomputing performance and the concurrent reduction in cost have made this technology available to a large number of scientists and engineers working on many different problems. The design of next-generation supercomputers will include traditional HPC requirements as well as new requirements to handle data-intensive computations. Data-intensive applications will hence play an important role in a variety of fields, and are the current focus of several research trends in HPC. Due to the challenges of scalability and power efficiency, the next generation of supercomputers requires a redesign of the whole software stack. Being at the bottom of the software stack, system software is expected to change drastically to support the upcoming hardware and to meet new application requirements. This PhD thesis addresses the scalability of system software. The thesis starts at the Operating System level, first studying general-purpose OSs (e.g., Linux) and then lightweight kernels (e.g., CNK). It then focuses on the runtime system: we implement a runtime system for distributed memory systems that includes many of the system services required by next-generation applications. Finally, we focus on hardware features that can be exploited at user level to improve application performance, and potentially be included in our advanced runtime system. The thesis contributions are the following:
    Operating System Scalability: We provide an accurate study of the scalability problems of modern Operating Systems for HPC. We design and implement a methodology whereby detailed quantitative information may be obtained for each OS noise event. We validate our approach by comparing it to other well-known standard techniques for analyzing OS noise, such as FTQ (Fixed Time Quantum).
    Evaluation of address translation management for a lightweight kernel: We provide a performance evaluation of different TLB management approaches (dynamic memory mapping, static memory mapping with replaceable TLB entries, and static memory mapping with fixed TLB entries, i.e., no TLB misses) on an IBM BlueGene/P system.
    Runtime System Scalability: We show that a runtime system can efficiently incorporate system services and improve scalability for a specific class of applications. We design and implement a full-featured runtime system and programming model to execute irregular applications on a commodity cluster. The runtime library, called the Global Memory and Threading library (GMT), integrates a locality-aware Partitioned Global Address Space communication model with a fork/join program structure. It supports massive lightweight multi-threading, overlapping of communication and computation, and small-message aggregation to tolerate network latencies. We compare GMT to other PGAS models, hand-optimized MPI code, and custom architectures (Cray XMT) on a set of large-scale irregular applications: breadth-first search, random walk, and concurrent hash map access. Our runtime system shows performance orders of magnitude higher than the other solutions on commodity clusters, and competitive with custom architectures.
    User-level Scalability Exploiting Hardware Features: We show the high complexity of low-level hardware optimizations for single applications, as a motivation to incorporate this logic into an adaptive runtime system. We evaluate the effects of a controllable hardware-thread priority mechanism, which regulates the rate at which each hardware thread decodes instructions, on IBM POWER5 and POWER6 processors. Finally, we show how to effectively exploit cache locality and the network-on-chip on the Tilera many-core architecture to improve intra-core scalability.
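    The FTQ baseline mentioned above admits a compact illustration. The sketch below is a simplified, single-threaded take on the Fixed Time Quantum idea (not the thesis's per-event methodology): count how much trivial work fits into each fixed quantum, and read dips in the counts as OS noise stealing cycles.

        /* Simplified FTQ-style noise probe: fixed 1 ms quanta, spin on
         * trivial work, record how much work each quantum admitted.
         * Quanta with markedly lower counts indicate OS preemption. */
        #include <stdio.h>
        #include <time.h>

        static double now_sec(void) {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec + ts.tv_nsec * 1e-9;
        }

        int main(void) {
            const double quantum = 0.001;   /* 1 ms quantum */
            const int samples = 1000;
            long counts[1000];

            for (int s = 0; s < samples; s++) {
                double end = now_sec() + quantum;
                long work = 0;
                /* Spin on trivial work until the quantum expires. */
                while (now_sec() < end) work++;
                counts[s] = work;
            }

            for (int s = 0; s < samples; s++)
                printf("%d %ld\n", s, counts[s]);
            return 0;
        }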

    Proceedings of the 7th International Conference on PGAS Programming Models

    Get PDF

    High-performance evolutionary computation for scalable spatial optimization

    Get PDF
    Spatial optimization (SO) is an important and prolific field of interdisciplinary research. Spatial optimization methods seek an optimal allocation or arrangement of spatial units under spatial constraints such as distance, adjacency, contiguity, and partition. As spatial granularity becomes finer and problem formulations incorporate increasingly complex compositions of spatial information, the performance of spatial optimization solvers becomes more imperative. My research focuses on scalable spatial optimization methods within the evolutionary algorithm (EA) framework. The computational scalability challenge in EA is addressed by developing a parallel EA library that eliminates costly global synchronization in massively parallel computing environments and scales to 131,072 processors. Classic EA operators are based on linear recombination and experience serious problems in traversing decision spaces with non-linear spatial configurations. I propose a spatially explicit EA framework that couples graph representations of spatial constraints with intelligent guided search heuristics, such as path relinking and ejection chains, to effectively explore the SO decision space. As a result, novel spatial recombination operators are developed that handle strong spatial constraints effectively and are generic enough to incorporate problem-specific spatial characteristics. This framework is employed to solve large political redistricting problems: voting district-level redistricting problems are solved and sampled to create billions of feasible districting plans that adhere to Supreme Court mandates, suitable for statistical analyses of redistricting phenomena such as gerrymandering.
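    As a small illustration of the graph view of spatial constraints described above, the C sketch below (hypothetical data layout, not the dissertation's library) checks the contiguity constraint for one district by BFS over the adjacency graph of spatial units; spatial recombination operators must preserve exactly this kind of feasibility.

        /* Contiguity check for a districting plan: a district is feasible
         * iff BFS over the unit-adjacency graph, restricted to the
         * district's members, reaches every member. Data layout is a
         * hypothetical simplification for illustration. */
        #include <stdio.h>

        #define MAX_UNITS 64

        /* adj[u][v] != 0 iff spatial units u and v share a border. */
        static int district_is_contiguous(int n, const int adj[][MAX_UNITS],
                                          const int district[], int d) {
            int queue[MAX_UNITS], visited[MAX_UNITS] = {0};
            int head = 0, tail = 0, members = 0, start = -1;

            for (int u = 0; u < n; u++)
                if (district[u] == d) { members++; if (start < 0) start = u; }
            if (members == 0) return 1;    /* empty district: trivially ok */

            queue[tail++] = start; visited[start] = 1;
            int reached = 0;
            while (head < tail) {
                int u = queue[head++]; reached++;
                for (int v = 0; v < n; v++)
                    if (adj[u][v] && district[v] == d && !visited[v]) {
                        visited[v] = 1; queue[tail++] = v;
                    }
            }
            return reached == members; /* contiguous iff all members reached */
        }

        int main(void) {
            /* Toy 4-unit map: units 0-1-2 form a path, unit 3 is isolated. */
            int adj[MAX_UNITS][MAX_UNITS] = {0};
            adj[0][1] = adj[1][0] = 1;
            adj[1][2] = adj[2][1] = 1;
            int district[MAX_UNITS] = {0, 0, 0, 0}; /* unit 3 also in district 0 */
            printf("%s\n", district_is_contiguous(4, adj, district, 0)
                           ? "contiguous" : "violates contiguity");
            return 0;
        }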

    A task-based message passing framework

    Get PDF
    Over the past decade, it has become clear that parallel and distributed programming will occupy an increasingly large proportion of a developer's work. While numerous programming languages and libraries have been built to facilitate working with concurrency, such work remains difficult and error-prone. In this thesis, we propose a task-based message passing framework. The proposed framework combines the actor model with message passing functionality to offer a useful and efficient way to implement parallel and distributed algorithms. The framework is intended to be part of a novel C compiler that will offer built-in task and message features. Perhaps most importantly, the new framework aims to be intuitive and efficient. We have used the framework to implement a parallel sample sort and a client-server application. Our results demonstrate both strong performance for the parallel sorting algorithm and scalability that extends to thousands of concurrent messages. In addition, the client-server application emphasizes the intuitive nature of the development cycle for the new model. We conclude that the proposed message passing framework is well suited to concurrent development environments and offers a simple and efficient way to build applications for the new wave of multi-core hardware platforms.
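    The sample-sort case study rests on a simple partitioning idea that is easy to sketch. The plain C sketch below shows only the splitter/bucket logic (in the proposed framework each bucket would be shipped to a worker task as a message; the compiler's task and message syntax is not public, so it is not shown here).

        /* Sample-sort partitioning: pick splitters from a sorted sample,
         * then route every element to the bucket its value falls into.
         * Sequential sketch of the logic the framework parallelizes. */
        #include <stdio.h>
        #include <stdlib.h>

        static int cmp_int(const void *a, const void *b) {
            return (*(const int *)a > *(const int *)b) -
                   (*(const int *)a < *(const int *)b);
        }

        int main(void) {
            int data[] = {42, 7, 19, 88, 3, 64, 25, 51, 13, 70, 9, 36};
            int n = sizeof data / sizeof *data;
            const int buckets = 3;

            /* 1. Sort a small sample and pick buckets-1 splitters. */
            int sample[] = {7, 88, 25, 51};
            qsort(sample, 4, sizeof(int), cmp_int);
            int splitters[2] = {sample[1], sample[2]};  /* 25 and 51 */

            /* 2. Route each element; in the parallel version this would
             *    be a message to worker task b. */
            for (int i = 0; i < n; i++) {
                int b = 0;
                while (b < buckets - 1 && data[i] > splitters[b]) b++;
                printf("element %2d -> bucket %d\n", data[i], b);
            }
            /* 3. Each worker sorts its bucket locally; concatenating the
             *    sorted buckets yields the globally sorted sequence. */
            return 0;
        }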

    Parallel Sparse Tensor Decomposition in Chapel

    Full text link
    In big-data analytics, using tensor decomposition to extract patterns from large, sparse multivariate data is a popular technique. Many challenges exist for designing parallel, high-performance tensor decomposition algorithms due to irregular data accesses and the growing size of the tensors that are processed. There have been many efforts at implementing shared-memory algorithms for tensor decomposition, most of which have focused on the traditional C/C++ with OpenMP framework. However, Chapel is becoming an increasingly popular programming language due to its expressiveness and simplicity for writing scalable parallel programs. In this work, we port a state-of-the-art C/OpenMP parallel sparse tensor decomposition tool, SPLATT, to Chapel. We present a performance study that investigates bottlenecks in our Chapel code and discusses approaches for improving its performance. We also discuss features in Chapel that would have been beneficial to our porting effort. We demonstrate that our Chapel code is competitive with the C/OpenMP code in both runtime and scalability, achieving 83-96% of the original code's performance and near-linear scalability up to 32 cores.
    Comment: 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 5th Annual Chapel Implementers and Users Workshop (CHIUW 2018)
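    For readers unfamiliar with the kernel being ported: the computational core of SPLATT-style CP decomposition is the matricized tensor times Khatri-Rao product (MTTKRP). The C sketch below shows a mode-0 MTTKRP over a coordinate-format tensor; SPLATT itself uses a compressed sparse fiber layout and OpenMP, so this is a didactic simplification, not the ported code.

        /* Mode-0 MTTKRP over a COO tensor:
         *   M(i,:) += X(i,j,k) * ( B(j,:) .* C(k,:) )  for each nonzero. */
        #include <stdio.h>

        #define NNZ  3
        #define RANK 2
        #define DIM  4

        int main(void) {
            /* Sparse tensor X in coordinate form: (i, j, k, value). */
            int    I[NNZ] = {0, 1, 3};
            int    J[NNZ] = {2, 0, 1};
            int    K[NNZ] = {1, 3, 2};
            double V[NNZ] = {1.5, -2.0, 0.5};

            /* Dense factor matrices for modes 1 and 2, DIM x RANK. */
            double B[DIM][RANK] = {{1,2},{3,4},{5,6},{7,8}};
            double C[DIM][RANK] = {{1,1},{2,2},{3,3},{4,4}};
            double M[DIM][RANK] = {{0}};

            /* One fused pass over the nonzeros; the irregular accesses
             * to B and C are what make parallelization challenging. */
            for (int x = 0; x < NNZ; x++)
                for (int r = 0; r < RANK; r++)
                    M[I[x]][r] += V[x] * B[J[x]][r] * C[K[x]][r];

            for (int i = 0; i < DIM; i++)
                printf("M[%d] = (%g, %g)\n", i, M[i][0], M[i][1]);
            return 0;
        }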

    Parallel computing 2011, ParCo 2011: book of abstracts

    Get PDF
    This book contains the abstracts of the presentations at the conference Parallel Computing 2011, 30 August - 2 September 2011, Ghent, Belgium.

    Parallel Breadth-First Search on Distributed Memory Systems

    Full text link