103 research outputs found

    Efficient Communication and Synchronization on Manycore Processors

    The increased number of cores integrated on a chip has brought about a number of challenges. Concerns about the scalability of cache coherence protocols have urged both researchers and practitioners to explore alternative programming models, where cache coherence is not a given. Message passing, traditionally used in distributed systems, has surfaced as an appealing alternative to shared memory, commonly used in multiprocessor systems. In this thesis, we study how basic communication and synchronization primitives on manycore processors can be improved, with an emphasis on taking advantage of message passing. We do this in two different contexts: (i) message passing is the only means of communication and (ii) it coexists with traditional cache-coherent shared memory. In the first part of the thesis, we analytically and experimentally study collective communication on a message-passing manycore processor. First, we devise broadcast algorithms for the Intel SCC, an experimental manycore platform without coherent caches. Our ideas are captured by OC-Bcast (on-chip broadcast), a tree-based broadcast algorithm. Two versions of OC-Bcast are presented: one for synchronous communication, suitable for use in high-performance libraries implementing the Message Passing Interface (MPI), and another for asynchronous communication, for use in distributed algorithms and general-purpose software. Both OC-Bcast flavors are based on one-sided communication and significantly outperform (by up to 3x) state-of-the-art two-sided algorithms. Next, we develop an analytical communication model for the SCC. By expressing the latency and throughput of different broadcast algorithms through this model, we reveal that the advantage of OC-Bcast comes from greatly reducing the number of off-chip memory accesses on the critical path. The second part of the thesis focuses on lock-based synchronization. We start by introducing the concept of hybrid mutual exclusion algorithms, which rely on both cache-coherent shared memory and message passing. The hybrid algorithms we present, HybLock and HybComb, significantly outperform (by up to 4x) their shared-memory-only counterparts when used to implement concurrent counters, stacks and queues on a hybrid Tilera TILE-Gx processor. The advantage of our hybrid algorithms comes from the fact that their most critical parts rely on message passing, thereby avoiding the overhead of the cache coherence protocol. Still, we take advantage of shared memory, as shared state makes the implementation of certain mechanisms much more straightforward. Finally, we apply these insights to processors without hardware support for message passing: on two mainstream x86 processors from Intel and AMD, we develop cache-aware optimizations that improve the performance of executing contended critical sections by as much as 6x.
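    To make the tree-based broadcast idea concrete, here is a minimal sketch of a conventional binomial-tree broadcast written against MPI's two-sided API. OC-Bcast itself is built on the SCC's one-sided on-chip primitives, which this sketch does not model; the function name, payload type and tag are illustrative assumptions.

```cpp
// Binomial-tree broadcast sketch over two-sided MPI (illustration only;
// OC-Bcast uses the SCC's one-sided on-chip primitives instead).
#include <mpi.h>

void tree_bcast(int* buf, int count, int root, MPI_Comm comm) {
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    int rel = (rank - root + size) % size;   // rotate so the root is relative rank 0

    // Walk up the bits: the lowest set bit of `rel` identifies our parent.
    int mask = 1;
    while (mask < size) {
        if (rel & mask) {
            int parent = (rel - mask + root) % size;
            MPI_Recv(buf, count, MPI_INT, parent, 0, comm, MPI_STATUS_IGNORE);
            break;
        }
        mask <<= 1;
    }

    // Forward the payload down our subtree: one child per remaining lower bit.
    mask >>= 1;
    while (mask > 0) {
        if (rel + mask < size) {
            int child = (rel + mask + root) % size;
            MPI_Send(buf, count, MPI_INT, child, 0, comm);
        }
        mask >>= 1;
    }
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int data[4] = {1, 2, 3, 4};              // payload is only meaningful at the root
    tree_bcast(data, 4, /*root=*/0, MPI_COMM_WORLD);
    MPI_Finalize();
}
```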

    Architectural and compiler support for strongly atomic transactional memory

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 199-212). Transactions are gaining ground as a programmer-friendly means of expressing concurrency, as microarchitecture trends make it clear that parallel systems are in our future. This thesis presents the design and implementation of four efficient and powerful transaction systems: ApeX, an object-oriented software-only system; UTM and LTM, two scalable systems using custom processor extensions; and HyApeX, a hybrid of the software and hardware systems, obtaining the benefits of both. The software transaction system implements strong atomicity, which ensures that transactions are protected from the influence of nontransactional code. Previous software systems use weaker atomicity guarantees because strong atomicity is presumed to be too expensive. In this thesis strong atomicity is obtained with minimal slowdown for nontransactional code. Compiler analyses can further improve the efficiency of the mechanism, which has been formally verified with the Spin model-checker. The low overhead of ApeX allows it to be profitably combined with a hardware transaction system to provide fast execution of short and small transactions, while allowing fallback to software for large or complicated transactions. I present UTM, a hardware transactional memory system allowing unbounded virtualizable transactions, and show how a hybrid system can be obtained. By C. Scott Ananian. Ph.D.
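    As a rough illustration of what strong atomicity rules out, the following sketch (not ApeX itself; the account example and the mutex standing in for the TM runtime are assumptions) shows how a plain, uninstrumented read can observe the intermediate state of a transaction under weak atomicity:

```cpp
// Why strong atomicity matters: a "transaction" moves money between two
// accounts, and a nontransactional reader that bypasses the TM runtime can
// observe the in-flight state.  Strong atomicity instruments (or otherwise
// guards) such plain accesses so this state is never visible.
#include <atomic>
#include <mutex>
#include <thread>
#include <cstdio>

static std::atomic<int> from{100}, to{0};
static std::mutex tm_runtime;                   // stands in for the TM commit machinery

void transfer(int amount) {                     // "transactional" code
    std::lock_guard<std::mutex> tx(tm_runtime);
    from.fetch_sub(amount);                     // intermediate state: money in flight
    to.fetch_add(amount);
}

int unprotected_sum() {                         // nontransactional code: under weak
    return from.load() + to.load();             // atomicity these reads are unmediated
}

int main() {
    std::thread updater([] { for (int i = 0; i < 100000; ++i) transfer(1); });
    std::thread observer([] {
        for (int i = 0; i < 100000; ++i)
            if (unprotected_sum() != 100) {     // invariant briefly broken mid-transaction
                std::puts("observed intermediate state");
                break;
            }
    });
    updater.join();
    observer.join();
}
```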

    Partial aggregation for collective communication in distributed memory machines

    High Performance Computing (HPC) systems interconnect a large number of Processing Elements (PEs) in high-bandwidth networks to simulate complex scientific problems. The increasing scale of HPC systems poses great challenges for algorithm designers. As the average distance between PEs increases, data movement across hierarchical memory subsystems introduces high latency. Minimizing latency is particularly challenging in collective communications, where many PEs may interact in complex communication patterns. Although collective communications can be optimized for network-level parallelism, occasional synchronization delays due to dependencies in the communication pattern degrade application performance. To reduce the performance impact of communication and synchronization costs, parallel algorithms are designed with sophisticated latency hiding techniques. The principle is to interleave computation with asynchronous communication, which increases the overall occupancy of compute cores. However, collective communication primitives abstract away parallelism, which limits the integration of latency hiding techniques. Approaches to work around these limitations either modify the algorithmic structure of application codes or replace collective primitives with verbose low-level communication calls. While these approaches give fine-grained control for latency hiding, implementing collective communication algorithms is challenging and requires expert knowledge about HPC network topologies. A collective communication pattern is commonly described as a Directed Acyclic Graph (DAG) in which a set of PEs, represented as vertices, resolve data dependencies through communication along the edges. Our approach improves latency hiding in collective communication through partial aggregation. Based on mathematical rules of binary operations and homomorphisms, we expose data parallelism in the respective DAG to overlap computation with communication. The proposed concepts are implemented and evaluated with a subset of collective primitives in the Message Passing Interface (MPI), an established communication standard in scientific computing. An experimental analysis with communication-bound microbenchmarks shows considerable performance benefits for the evaluated collective primitives. A detailed case study with a large-scale distributed sort algorithm demonstrates how partial aggregation significantly improves performance in data-intensive scenarios. Besides better latency hiding with collective communication primitives, our approach enables further optimizations of their implementations within MPI libraries. The many asynchronous programming models actively studied in the HPC community can also benefit from partial aggregation in collective communication patterns. Future work can utilize partial aggregation to improve the interaction of MPI collectives with accelerator architectures and to design more efficient communication algorithms.
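    A minimal sketch of the underlying latency-hiding idea, assuming a chunked reduction built on MPI's standard nonblocking collectives; the chunk size and the stand-in compute kernel are illustrative, and the thesis approach to partial aggregation is more general than this pipeline:

```cpp
// Reduce the data in chunks so that producing chunk i+1 overlaps with the
// nonblocking reduction of chunk i (latency hiding via partial aggregation).
#include <mpi.h>
#include <vector>

void compute_chunk(double* chunk, int n) {      // stand-in for the local workload
    for (int i = 0; i < n; ++i) chunk[i] = static_cast<double>(i);
}

void chunked_reduce(double* local, double* result, int total, int chunk,
                    int root, MPI_Comm comm) {
    std::vector<MPI_Request> reqs;
    for (int off = 0; off < total; off += chunk) {
        int n = (off + chunk <= total) ? chunk : total - off;
        compute_chunk(local + off, n);          // produce this chunk ...
        MPI_Request r;
        MPI_Ireduce(local + off, result + off, n, MPI_DOUBLE, MPI_SUM,
                    root, comm, &r);            // ... and reduce it asynchronously
        reqs.push_back(r);                      // while the next chunk is computed
    }
    MPI_Waitall(static_cast<int>(reqs.size()), reqs.data(), MPI_STATUSES_IGNORE);
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    std::vector<double> local(1 << 20), result(1 << 20);
    chunked_reduce(local.data(), result.data(), 1 << 20, 1 << 16, 0, MPI_COMM_WORLD);
    MPI_Finalize();
}
```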

    Concurrency control overhead or closer look at blocking vs. nonblocking concurrency control mechanisms

    This report has been published in the Proceedings of the Fifth Berkeley Conference on Distributed Data Management and Computer Networks. In this paper we divide concurrency control (CC) mechanisms for distributed DBMSs (DDBMSs) into three classes. One class consists of blocking CC mechanisms and two classes contain nonblocking CC mechanisms. We define CC overhead and derive it for conflicting and nonconflicting transactions for each class of CC mechanisms. Since CC overhead depends only on the CC mechanism, it can be used as a metric for comparing CC mechanisms and as a measure of CC load on DDBMS resources. We also describe two new nonblocking distributed concurrency control mechanisms which use the concept of multiple data object versions. One is based on timestamp ordering of transaction execution and the other is based on detecting nonserializable execution and recovering to a serializable execution. We compare both with distributed two-phase locking. Supported by the Foundation Research Program of the Naval Postgraduate School with funds provided by the Chief of Naval Research. http://archive.org/details/concurrencycontr00badaN
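    For illustration only, here is a single-site sketch of multiversion timestamp ordering in the spirit of the version-based mechanisms described above; the distributed protocol, version garbage collection and recovery are omitted, and all names are assumptions:

```cpp
// Multiversion timestamp ordering (MVTO) sketch: reads never block, and a
// write is rejected when it would invalidate a read by a younger transaction.
#include <map>
#include <iterator>

struct Version {
    long wts;       // timestamp of the writing transaction
    long max_rts;   // largest timestamp that has read this version
    int  value;
};

class MVTOObject {
    std::map<long, Version> versions_;   // keyed by write timestamp
public:
    explicit MVTOObject(int initial) { versions_[0] = {0, 0, initial}; }

    // Return the newest version written at or before `ts`
    // (transaction timestamps are assumed to be positive).
    int read(long ts) {
        auto it = std::prev(versions_.upper_bound(ts));
        if (ts > it->second.max_rts) it->second.max_rts = ts;
        return it->second.value;
    }

    // Rejected writes signal the caller to restart the transaction
    // with a fresh timestamp.
    bool write(long ts, int value) {
        auto it = std::prev(versions_.upper_bound(ts));
        if (it->second.max_rts > ts) return false;   // would be nonserializable
        versions_[ts] = {ts, 0, value};
        return true;
    }
};

int main() {
    MVTOObject x(0);
    x.write(10, 42);            // T10 writes a new version
    int v = x.read(15);         // T15 reads T10's version (42)
    bool ok = x.write(12, 7);   // rejected: T15 already read past timestamp 12
    return (v == 42 && !ok) ? 0 : 1;
}
```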

    Software Transactional Memory Building Blocks

    Exploiting thread-level parallelism has become a part of mainstream programming in recent years. Many approaches to parallelization require threads executing in parallel to also synchronize occasionally (i.e., coordinate concurrent accesses to shared state). Transactional Memory (TM) is a programming abstraction that provides the concept of database transactions in the context of programming languages such as C/C++. It allows programmers to simply declare which pieces of a program must synchronize, without requiring them to actually implement the synchronization and tune its performance, which typically makes TM easier to use than other abstractions such as locks. I have investigated and implemented the building blocks that are required for a high-performance, practical, and realistic TM. These building blocks comprise several novel algorithms and optimizations for TM implementations, both for current hardware and for future hardware extensions for TM, and they are being used in, or have influenced, commercial TM implementations such as the TM support in GCC.
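    As a small usage example of the GCC TM language support mentioned above (compiled with -fgnu-tm, which links libitm), the counter below is updated inside __transaction_atomic blocks; the surrounding program is an illustrative assumption:

```cpp
// Compile with: g++ -std=c++17 -fgnu-tm tm_counter.cpp -o tm_counter
// The atomic block is executed as a transaction by the libitm runtime.
#include <thread>
#include <vector>
#include <cstdio>

static long counter = 0;

void add(long n) {
    for (long i = 0; i < n; ++i) {
        __transaction_atomic {          // reads/writes inside are transactional
            counter += 1;
        }
    }
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) workers.emplace_back(add, 100000);
    for (auto& t : workers) t.join();
    std::printf("counter = %ld\n", counter);   // 400000 when correctly synchronized
}
```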

    Heuristic cooperative search using software agents

    Parallel algorithms extend the notion of sequential algorithms by permitting the simultaneous execution of independent computational steps. When the independence constraint is lifted and executions can freely interact and intertwine, parallel algorithms become concurrent and may behave in a nondeterministic way. Parallelism has over the years slowly risen to be a standard feature of high-performance computing, but concurrency, being even harder to reason about, is still considered somewhat notorious and undesirable. As such, the implicit randomness available in concurrency is rarely made use of in algorithms. This thesis explores concurrency as a means to facilitate algorithmic cooperation in a heuristic search setting. We use agents, cooperating software entities, to build a single-source shortest path (SSSP) search algorithm based on parallelized A∗, dubbed A!. We show how asynchronous information sharing gives rise to implicit randomness, which cooperating agents use in A! to maintain a collective secondary ranking heuristic and focus search space exploration. We experimentally show that A! consistently outperforms both vanilla A∗ and a noncooperative, explicitly randomized A∗ variant in the standard n-puzzle sliding tile problem context. The results indicate that A! performance increases with the addition of more agents, but that the returns are diminishing. A! is observed to be sensitive to heuristic improvement, but also constrained by search overhead from limited path diversity. A hybrid approach combining both implicit and explicit randomness is also evaluated and found to not be an improvement over A! alone. The studied A! implementation based on vanilla A∗ is not as such competitive against state-of-the-art parallel A∗ algorithms, but rather a first step in applying concurrency to speed up heuristic SSSP search. The empirical results imply that concurrency and nondeterministic cooperation can successfully be harnessed in algorithm design, inviting further inquiry into algorithms of this kind.
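    A minimal sketch of the open-list ordering idea behind A!: nodes are ranked primarily by the usual f = g + h, and ties are broken by a secondary score that other agents update asynchronously, modeled here with a shared atomic array. This is not the thesis implementation; the state encoding, scoring and sharing mechanism are illustrative assumptions.

```cpp
// Open-list ordering with a collectively maintained secondary ranking.
#include <atomic>
#include <queue>
#include <vector>

constexpr int kStates = 1 << 16;             // toy state space (assumed)
std::atomic<int> secondary[kStates];         // shared secondary ranking, updated by agents

struct Node { int f; int state; };

struct ByPrimaryThenSecondary {
    bool operator()(const Node& a, const Node& b) const {
        if (a.f != b.f) return a.f > b.f;    // smaller f = higher priority
        // Ties break on the shared score, read without synchronization:
        // agents may observe different orders, which is the implicit randomness.
        return secondary[a.state].load(std::memory_order_relaxed) <
               secondary[b.state].load(std::memory_order_relaxed);
    }
};

using OpenList = std::priority_queue<Node, std::vector<Node>, ByPrimaryThenSecondary>;

// Called when an agent finds a state promising; others observe the update at
// some later, nondeterministic point.  A real implementation must also cope
// with scores changing underneath an already-built heap.
void report_progress(int state, int score) {
    secondary[state].fetch_add(score, std::memory_order_relaxed);
}

int main() {
    report_progress(7, 5);                   // some agent marks state 7 as promising
    OpenList open;
    open.push({10, 3});
    open.push({10, 7});
    return open.top().state == 7 ? 0 : 1;    // the f-tie is broken toward state 7
}
```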

    Semantics and logics for signals

    In operating systems such as Unix, processes can interact via signals. Signal handling resembles both exception handling and concurrent interleaving of processes. Handlers can be installed dynamically by the main program, but signals arrive nondeterministically; therefore, a handler may interrupt a program at any point. However, the interleaving of actions is not symmetric, in that the handler interrupts the main program but not conversely. This thesis presents an operational semantics and a program logic for an idealized form of signal handling. To make signal handling logically tractable, we define handling to be block-structured. To reason about the interleaving of signal handlers, we adopt the idea of binary relations on states from rely-guarantee logics, imposing rely conditions on handlers. Given the one-way interleaving of signal handlers, the logic is less symmetric than rely-guarantee. We combine signal and exception handlers in the same language to investigate their interactions, specifically whether a handler can run more than once or is used linearly. We prove soundness of the program logic relative to a big-step operational semantics for signal handling. Then, we introduce and discuss reentrancy in various domains. Finally, we present our work towards a logic with a Reentrancy Linear Type System.
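    To fix intuitions, here is a minimal sketch of block-structured signal handling in the sense used above: a handler is installed only for the extent of a lexical block and the previous disposition is restored on exit, so handler scope nests like exception handlers. POSIX sigaction is real; the RAII wrapper and the flag protocol are illustrative assumptions.

```cpp
// Block-structured signal handling sketch: install on entry, restore on exit.
#include <csignal>
#include <cstdio>
#include <unistd.h>

static volatile sig_atomic_t interrupted = 0;

extern "C" void on_sigint(int) { interrupted = 1; }   // async-signal-safe body only

class ScopedHandler {
    int signo_;
    struct sigaction previous_;
public:
    ScopedHandler(int signo, void (*handler)(int)) : signo_(signo) {
        struct sigaction sa {};
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sigaction(signo_, &sa, &previous_);            // remember the old disposition
    }
    ~ScopedHandler() { sigaction(signo_, &previous_, nullptr); }
};

int main() {
    {   // the handler is active only inside this block
        ScopedHandler guard(SIGINT, on_sigint);
        while (!interrupted) pause();                  // main program; may be interrupted
        std::puts("handled SIGINT inside the block");
    }   // previous SIGINT disposition restored here
    return 0;
}
```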