    Reducing synchronization in distributed parallel programs

    Developers of scalable libraries and applications for distributed-memory parallel systems face many challenges to attaining high performance. These challenges include communication latency, critical-path delay, suboptimal scheduling, load imbalance, and system noise. These challenges are often defined and measured relative to points of broad synchronization in the program’s execution. Given the way in which many algorithms are defined and systems are implemented, gauging the above challenges at synchronization points is not unreasonable. In this thesis, I attempt to demonstrate that in many cases, those synchronization points are themselves the core issue behind these challenges. In some cases, the synchronizing operations cause a program to incur the costs from these challenges. In other cases, the presence of synchronization potentially exacerbates these problems. Through a simple performance model, I demonstrate that making synchronization less frequent can greatly mitigate performance issues. My work and several results in the literature show that many motifs and whole applications can be successfully redesigned to operate with asymptotically less synchronization than their naïve starting points. In exploring these issues, I have identified recurrent patterns across many applications and multiple environments that can guide future efforts more directly toward synchronization-avoiding designs. Thus, I attempt to offer developers the beginnings of a high-level playbook to follow, rather than having to rediscover application-specific instances of the patterns.
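
    The abstract does not reproduce the thesis's performance model, but a minimal BSP-style sketch conveys the argument; the symbols below (S synchronization points, per-phase work w_{p,k} on process p, per-synchronization cost \alpha) are illustrative, not the thesis's notation. With barriers, every phase charges the program for its slowest process plus \alpha; without them, slow phases on one process can overlap fast phases on another:

        T_{\text{sync}} = \sum_{k=1}^{S} \Bigl( \max_{p} w_{p,k} + \alpha \Bigr),
        \qquad
        T_{\text{async}} \approx \max_{p} \sum_{k=1}^{S} w_{p,k}

    Since a maximum of sums never exceeds a sum of maxima, reducing S can only shrink the load-imbalance term, and it removes S\alpha worth of latency and noise amplification outright.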

    Reliable Synchronization Primitives for Java

    Java is an architecture-independent, object-oriented language designed to facilitate code sharing across the Internet in general and the Web in particular. Java is multithreaded, providing thread-creation and synchronization constructs based on generalized monitors. Although these primitives are appropriate for many windowing applications, they are not necessarily well suited to the larger class of multithreaded programs that occur as part of distributed systems. We demonstrate how the Java primitives, in conjunction with the object-oriented aspects of the language, can be used to implement a collection of other traditional synchronization paradigms. These paradigms are formally specified, their implementations are rigorously verified, and their use is illustrated with several examples.
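
    As a concrete illustration of building a traditional paradigm from Java's monitor primitives (a sketch in the spirit of the paper, not its actual classes), a counting semaphore reduces to synchronized methods plus wait/notifyAll:

        // Counting semaphore built from Java monitors; the class and
        // method names here are illustrative, not taken from the paper.
        public class CountingSemaphore {
            private int permits;

            public CountingSemaphore(int initialPermits) {
                if (initialPermits < 0) throw new IllegalArgumentException();
                permits = initialPermits;
            }

            // P / acquire: block until a permit is available.
            public synchronized void acquire() throws InterruptedException {
                while (permits == 0) {   // loop guards against spurious wakeups
                    wait();
                }
                permits--;
            }

            // V / release: return a permit and wake waiting threads.
            public synchronized void release() {
                permits++;
                notifyAll();
            }
        }

    The while loop (rather than an if) is the detail that formal verification of such implementations typically hinges on: a woken thread must re-check the condition, because another thread may claim the permit first.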

    Data-driven Polytopic Output Synchronization of Heterogeneous Multi-agent Systems from Noisy Data

    This paper proposes a novel approach to the output synchronization problem in unknown heterogeneous multi-agent systems (MASs) using noisy data. Unlike existing studies that assume noiseless data, we introduce a distributed data-driven controller that enables all heterogeneous followers to synchronize with a leader's trajectory. To handle the noise in the state-input-output data, we develop a data-based polytopic representation for the MAS. Noise can render the set of output regulator equations infeasible, so we seek approximate solutions via constrained fitting-error minimization. This method utilizes measured data and a noise-matrix polytope to ensure near-optimal output synchronization. Stability conditions in the form of data-dependent semidefinite programs are derived, providing stabilizing controller gains for each follower. The proposed distributed data-driven control protocol achieves near-optimal output synchronization by ensuring convergence of the tracking error to a bounded polytope, whose size is positively correlated with the noise bound. Numerical tests validate the practical merits of the proposed data-driven design and theory.
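
    The abstract does not restate the regulator equations; in the standard formulation of heterogeneous output synchronization (shown here only for orientation, with a leader \dot{x}_0 = S x_0, y_0 = R x_0 and followers \dot{x}_i = A_i x_i + B_i u_i, y_i = C_i x_i), each follower i needs matrices (\Pi_i, \Gamma_i) satisfying

        \Pi_i S = A_i \Pi_i + B_i \Gamma_i, \qquad C_i \Pi_i = R.

    When (A_i, B_i) is known only up to a noise-matrix polytope \mathcal{P}_i, no single (\Pi_i, \Gamma_i) may satisfy these exactly at every vertex, and the constrained fitting-error minimization the abstract alludes to takes a form such as

        \min_{\Pi_i, \Gamma_i} \; \max_{(A_i, B_i) \in \mathcal{P}_i} \bigl\| \Pi_i S - A_i \Pi_i - B_i \Gamma_i \bigr\|_F \quad \text{s.t.} \quad C_i \Pi_i = R,

    whose residual is what leaves the tracking error bounded rather than zero.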

    Motivating Time as a First Class Entity

    In hard real-time applications, programs must not only be functionally correct but must also meet timing constraints. Unfortunately, little work has been done to allow a high-level incorporation of timing constraints into distributed real-time programs. Instead, the programmer is required to ensure system timing through a complicated synchronization process or through low-level programming, making it difficult to create and modify programs. In this report, we describe six features that must be integrated into a high-level language and underlying support system in order to promote time to a first-class position in distributed real-time programming systems: expressibility of time, real-time communication, enforcement of timing constraints, fault tolerance to violations of constraints, ensuring distributed-system state consistency in the time domain, and static timing verification. For each feature we describe what is required, what related work has been performed, and why that work does not provide sufficient capabilities for distributed real-time programming. We then briefly outline an integrated approach that provides these six features using a high-level distributed programming language and system tools such as compilers, operating systems, and timing analyzers to enforce and verify timing constraints.
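
    Plain Java can mimic two of these six features, enforcement of a timing constraint and fault tolerance to its violation, albeit through exactly the kind of low-level mechanism the report argues against; this hedged sketch (not the report's proposed language) makes the contrast concrete:

        import java.util.concurrent.*;

        public class DeadlineDemo {
            public static void main(String[] args) throws Exception {
                ExecutorService exec = Executors.newSingleThreadExecutor();
                Future<String> result = exec.submit(() -> {
                    Thread.sleep(50);              // stand-in for real work
                    return "sensor reading";
                });
                try {
                    // Timing constraint: the task must finish within 100 ms.
                    System.out.println(result.get(100, TimeUnit.MILLISECONDS));
                } catch (TimeoutException violation) {
                    // Fault tolerance: the constraint was violated, so cancel
                    // the task and fall back to a degraded-mode value.
                    result.cancel(true);
                    System.out.println("deadline missed; using fallback value");
                } finally {
                    exec.shutdown();
                }
            }
        }

    In a language with time as a first-class entity, the 100 ms bound would be a declared, statically checkable property of the task rather than an argument buried in a blocking call.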

    Garbage Collection for General Graphs

    Garbage collection is moving from being a utility to being a requirement of every modern programming language. With multi-core and distributed systems, most programs written today are heavily multi-threaded and distributed; such distributed and multi-threaded programs are called concurrent programs. Manual memory management in concurrent programs is cumbersome and difficult: concurrent programming is characterized by multiple independent processes or threads, communication between them, and uncertainty in the order of concurrent operations, and that uncertainty is what makes manual memory management hard. A popular alternative to garbage collection in concurrent programs is smart pointers, but smart pointers can collect all garbage only if the developer identifies cycles as they are created in the reference graph; their use does not guarantee protection from memory leaks unless cycles can be detected as processes or threads create them. General garbage collectors, on the other hand, can avoid memory leaks, dangling pointers, and double-deletion problems in any programming environment without help from the programmer. Concurrent programming is used in both shared-memory and distributed-memory systems. State-of-the-art shared-memory systems use a single concurrent garbage-collector thread that processes the reference graph. Distributed-memory systems have very few complete garbage collection algorithms, and those that exist use global barriers, are centralized, and do not scale well. This thesis focuses on designing garbage collection algorithms for shared-memory and distributed-memory systems that satisfy the following properties: concurrent, parallel, scalable, localized (decentralized), low pause time, high promptness, no global synchronization, safe, complete, and operating in linear time.
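
    The cycle problem is easy to reproduce; this small Java example (illustrative only, not from the thesis) builds garbage that a pure reference counter such as a naive smart pointer would never reclaim, yet a tracing collector handles it without programmer involvement:

        public class CycleDemo {
            static class Node {
                Node next;
                byte[] payload = new byte[1 << 20];  // 1 MiB, to make leaks visible
            }

            public static void main(String[] args) {
                for (int i = 0; i < 10_000; i++) {
                    Node a = new Node();
                    Node b = new Node();
                    a.next = b;
                    b.next = a;   // reference cycle: counts would never reach zero
                }
                // After each iteration both nodes are unreachable. Java's tracing
                // collector reclaims them, so ~20 GB of cyclic garbage passes
                // through a small heap without an OutOfMemoryError.
                System.out.println("cyclic garbage was collected");
            }
        }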

    Flexible language constructs for large parallel programs

    The goal of the research described is to develop flexible language constructs for writing large data-parallel numerical programs for distributed-memory (MIMD) multiprocessors. Several models have previously been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data-distribution statements. The two primary models for communication are implicit communication based on shared memory and explicit communication based on messages. None of these models by itself seems sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview is given of a new language that combines many of these programming models in a clean, modular fashion, such that different models can be combined to support large programs; within a module, the selection of a model depends on the algorithm and its efficiency requirements. Some of the critical implementation details are also discussed.
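
    Of the global-synchronization models listed, SPMD is the easiest to sketch: every thread runs the same program on its own slice of the data, with a barrier separating phases. The following Java fragment is a generic illustration of that model, not an example of the paper's language:

        import java.util.concurrent.CyclicBarrier;

        public class SpmdDemo {
            static final int P = 4;
            static final double[] data = new double[1024];
            static final CyclicBarrier phaseEnd = new CyclicBarrier(P);

            public static void main(String[] args) {
                for (int p = 0; p < P; p++) {
                    final int me = p;
                    new Thread(() -> {
                        int chunk = data.length / P;
                        int lo = me * chunk, hi = lo + chunk;
                        for (int i = lo; i < hi; i++) data[i] = i;  // local phase
                        try { phaseEnd.await(); } catch (Exception e) { return; }
                        // The barrier orders all writes before all reads, so every
                        // thread may now safely read any slice.
                        double sum = 0;
                        for (double v : data) sum += v;
                        System.out.println("thread " + me + " sees sum " + sum);
                    }).start();
                }
            }
        }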

    Deterministic Consistency: A Programming Model for Shared Memory Parallelism

    The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "deterministic consistency" (DC), a parallel programming model as easy to understand as the "parallel assignment" construct in sequential languages such as Perl and JavaScript, where concurrent threads always read their inputs before writing shared outputs. DC supports common data- and task-parallel synchronization abstractions such as fork/join and barriers, as well as non-hierarchical structures such as producer/consumer pipelines and futures. A preliminary prototype suggests that software-only implementations of DC can run applications written for popular parallel environments such as OpenMP with low (<10%) overhead for some applications.
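
    The read-inputs-before-writing-outputs discipline can be approximated by hand with double buffering; this Java sketch (an illustration of the idea, not the authors' runtime) shows why every schedule yields the same result, just as a parallel assignment would:

        import java.util.Arrays;

        public class DcStyleStep {
            public static void main(String[] args) throws InterruptedException {
                double[] in  = {1, 2, 3, 4};
                double[] out = new double[in.length];

                Thread[] workers = new Thread[in.length];
                for (int i = 0; i < in.length; i++) {
                    final int k = i;
                    // Each worker may read any element of `in` (frozen for the
                    // step) but writes only its own slot of `out`.
                    workers[k] = new Thread(() ->
                            out[k] = in[k] + in[(k + 1) % in.length]);
                    workers[k].start();
                }
                for (Thread t : workers) t.join();      // deterministic join point

                System.out.println(Arrays.toString(out)); // always [3.0, 5.0, 7.0, 5.0]
                System.arraycopy(out, 0, in, 0, in.length); // outputs feed next step
            }
        }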