
    Lock-free Concurrent Data Structures

    Concurrent data structures are the data-sharing side of parallel programming. Data structures give a program the means to store data, and also provide operations for accessing and manipulating those data. These operations are implemented through algorithms that have to be efficient. In the sequential setting, data structures are crucially important for the performance of the respective computation. In the parallel programming setting, their importance becomes even greater because of the increased use of data and resource sharing to exploit parallelism. The first and main goal of this chapter is to provide sufficient background and intuition to help the interested reader navigate the complex research area of lock-free data structures. The second goal is to give the programmer enough familiarity with the subject to use truly concurrent methods.
    Comment: To appear in "Programming Multi-core and Many-core Computing Systems", eds. S. Pllana and F. Xhafa, Wiley Series on Parallel and Distributed Computing
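    A minimal sketch of this style of data structure, assuming C++11 atomics, is the classic Treiber stack below. The class name is illustrative, and the sketch deliberately ignores the ABA problem and safe memory reclamation, which practical lock-free designs must address (e.g., with hazard pointers); it shows only the retry-on-CAS pattern from which lock-free operations are built.

        // Treiber stack: push/pop retry a compare-and-swap on the head pointer
        // until it succeeds, so no thread ever blocks another.
        #include <atomic>

        template <typename T>
        class LockFreeStack {
            struct Node {
                T value;
                Node* next;
            };
            std::atomic<Node*> head{nullptr};

        public:
            void push(T value) {
                Node* node = new Node{value, head.load(std::memory_order_relaxed)};
                // On failure the CAS reloads the current head into node->next; retry.
                while (!head.compare_exchange_weak(node->next, node,
                                                   std::memory_order_release,
                                                   std::memory_order_relaxed)) {}
            }

            bool pop(T& out) {
                Node* node = head.load(std::memory_order_acquire);
                // Retry until we unlink the observed head, or the stack is empty.
                while (node && !head.compare_exchange_weak(node, node->next,
                                                           std::memory_order_acquire,
                                                           std::memory_order_relaxed)) {}
                if (!node) return false;
                out = node->value;
                delete node;  // unsafe under concurrent pops; see reclamation caveat above
                return true;
            }
        };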

    Deterministic Consistency: A Programming Model for Shared Memory Parallelism

    The difficulty of developing reliable parallel software is generating interest in deterministic environments, where a given program and input can yield only one possible result. Languages or type systems can enforce determinism in new code, and runtime systems can impose synthetic schedules on legacy parallel code. To parallelize existing serial code, however, we would like a programming model that is naturally deterministic without language restrictions or artificial scheduling. We propose "deterministic consistency" (DC), a parallel programming model as easy to understand as the "parallel assignment" construct in sequential languages such as Perl and JavaScript, where concurrent threads always read their inputs before writing shared outputs. DC supports common data- and task-parallel synchronization abstractions such as fork/join and barriers, as well as non-hierarchical structures such as producer/consumer pipelines and futures. A preliminary prototype suggests that software-only implementations of DC can run applications written for popular parallel environments such as OpenMP with low (<10%) overhead for some applications.
    Comment: 7 pages, 3 figures
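    The core rule, "read inputs before writing shared outputs", can be sketched with an ordinary fork/join pattern (the names below are illustrative, not the paper's API): every worker reads only the immutable input snapshot and writes only its own private output slot, so the result is the same under any thread schedule.

        #include <cstddef>
        #include <thread>
        #include <vector>

        // Deterministic by construction: workers never read each other's writes.
        std::vector<int> parallel_increment(const std::vector<int>& input) {
            std::vector<int> output(input.size());       // one private slot per task
            std::vector<std::thread> workers;
            for (std::size_t i = 0; i < input.size(); ++i) {
                workers.emplace_back([&input, &output, i] {
                    output[i] = input[i] + 1;            // read input, write own slot only
                });
            }
            for (auto& w : workers) w.join();            // join: outputs become visible
            return output;
        }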

    The Lock-free k-LSM Relaxed Priority Queue

    Priority queues are data structures which store keys in an ordered fashion to allow efficient access to the minimal (maximal) key. Priority queues are essential for many applications, e.g., Dijkstra's single-source shortest path algorithm, branch-and-bound algorithms, and prioritized schedulers. Efficient multiprocessor computing requires implementations of basic data structures that can be used concurrently and scale to large numbers of threads and cores. Lock-free data structures promise superior scalability by avoiding blocking synchronization primitives, but the delete-min operation is an inherent scalability bottleneck in concurrent priority queues. Recent work has focused on alleviating this obstacle either by batching operations, or by relaxing the requirements on the delete-min operation. We present a new, lock-free priority queue that relaxes the delete-min operation so that it is allowed to delete any of the ρ+1 smallest keys, where ρ is a runtime-configurable parameter. Additionally, the behavior is identical to a non-relaxed priority queue for items added and removed by the same thread. The priority queue is built from a logarithmic number of sorted arrays, in a way similar to log-structured merge-trees. We experimentally compare our priority queue to recent state-of-the-art lock-free priority queues, both with relaxed and non-relaxed semantics, showing high performance and good scalability of our approach.
    Comment: Short version appeared as an ACM PPoPP'15 poster
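    The relaxed semantics (not the lock-free k-LSM implementation itself) can be stated as a short sequential sketch: delete-min may remove and return any of the ρ+1 smallest keys currently present. The class below is a hypothetical stand-in that makes that specification executable.

        #include <algorithm>
        #include <cstdlib>
        #include <iterator>
        #include <optional>
        #include <set>

        class RelaxedPQ {
            std::multiset<int> keys;     // stand-in for the sorted merge levels
            std::size_t rho;
        public:
            explicit RelaxedPQ(std::size_t rho) : rho(rho) {}
            void insert(int k) { keys.insert(k); }

            std::optional<int> delete_min() {
                if (keys.empty()) return std::nullopt;
                // A relaxed queue may return any of the rho+1 smallest keys
                // (or fewer, if the queue holds fewer items).
                std::size_t window = std::min(rho + 1, keys.size());
                auto it = keys.begin();
                std::advance(it, std::rand() % window);
                int k = *it;
                keys.erase(it);
                return k;
            }
        };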

    Creating a Distributed Programming System Using the DSS: A Case Study of OzDSS

    This technical report describes the integration of the Distribution Subsystem (DSS) into the programming system Mozart. The result, OzDSS, is described in detail. Essential when coupling a programming system to the DSS is how the internal model of threads and language entities is mapped to the abstract entities of the DSS. The model of threads and language entities of Mozart is described in detail to explain the design choices made when developing the code that couples the DSS to Mozart. To show the challenges associated with different thread implementations, the C++DSS system is introduced. C++DSS is a C++ library which uses the DSS to implement different types of distributed language entities in the form of C++ classes. Mozart emulates threads, so there is no risk of multiple threads accessing the DSS simultaneously. C++DSS, on the other hand, makes use of POSIX threads, so simultaneous access to the DSS from multiple POSIX threads can happen. The fundamental differences between how threads are treated in a system that emulates threads (Mozart) and in a system that makes use of native threads (C++DSS) are discussed. The report concludes with a performance comparison between the OzDSS system and other distributed programming systems; we see that OzDSS outperforms "industry grade" Java-RMI and Java-CORBA implementations.
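    The thread-model difference can be sketched as follows (the class names are hypothetical, not the DSS API): under emulated threads a single OS thread enters the middleware, so calls are naturally serialized, whereas with native POSIX threads every entry point must be made thread-safe explicitly, for instance with a mutex.

        #include <mutex>

        class MiddlewareCore {           // stand-in for the DSS coordination layer
            int pendingMessages = 0;     // non-thread-safe internal state
        public:
            void send(int /*entityId*/) { ++pendingMessages; }
        };

        // What a C++DSS-style coupling must add on top of the core:
        class NativeThreadFacade {
            MiddlewareCore core;
            std::mutex lock;             // serializes concurrent POSIX-thread callers
        public:
            void send(int entityId) {
                std::lock_guard<std::mutex> guard(lock);
                core.send(entityId);     // at most one native thread inside the core
            }
        };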

    A Type-Safe Model of Adaptive Object Groups

    Services are autonomous, self-describing, technology-neutral software units that can be described, published, discovered, and composed into software applications at runtime. Designing software services and composing services in order to form applications or composite services requires abstractions beyond those found in typical object-oriented programming languages. This paper explores service-oriented abstractions such as service adaptation, discovery, and querying in an object-oriented setting. We develop a formal model of adaptive object-oriented groups which offer services to their environment. These groups fit directly into the object-oriented paradigm in the sense that they can be dynamically created, they have an identity, and they can receive method calls. In contrast to objects, groups are not used for structuring code. A group exports its services through interfaces and relies on objects to implement these services. Objects may join or leave different groups. Groups may dynamically export new interfaces, they support service discovery, and they can be queried at runtime for the interfaces they support. We define an operational semantics and a static type system for this model of adaptive object groups, and show that well-typed programs do not cause method-not-understood errors at runtime.
    Comment: In Proceedings FOCLASA 2012, arXiv:1208.432
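    The paper's model is a formal calculus rather than a library, but its group abstractions can be approximated in plain C++ as a sketch (all names are illustrative): a group has its own identity, objects join or leave it, it dynamically exports interfaces, and it can be queried at runtime for the interfaces it currently supports.

        #include <memory>
        #include <string>
        #include <unordered_map>

        struct Service { virtual ~Service() = default; };   // base of exported interfaces
        struct Printer : Service { virtual void print(const std::string&) = 0; };

        class Group {
            // interface name -> object currently implementing it for the group
            std::unordered_map<std::string, std::shared_ptr<Service>> exports;
        public:
            void join(const std::string& iface, std::shared_ptr<Service> obj) {
                exports[iface] = std::move(obj);   // object joins; group exports iface
            }
            void leave(const std::string& iface) { exports.erase(iface); }

            bool supports(const std::string& iface) const {  // runtime querying
                return exports.count(iface) != 0;
            }
            std::shared_ptr<Service> lookup(const std::string& iface) const {  // discovery
                auto it = exports.find(iface);
                return it == exports.end() ? nullptr : it->second;
            }
        };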

    Internal Design of the DSS

    This technical report describes the implementation of the DSS middleware, with focus on the design of the abstract entity interface and the coordination layer. Key concepts are highlighted and described at the level of C++ classes.

    Compositional Reasoning for Explicit Resource Management in Channel-Based Concurrency

    We define a pi-calculus variant with a costed semantics where channels are treated as resources that must explicitly be allocated before they are used and can be deallocated when no longer required. We use a substructural type system tracking permission transfer to construct coinductive proof techniques for comparing behaviour and resource usage efficiency of concurrent processes. We establish full abstraction results between our coinductive definitions and a contextual behavioural preorder describing a notion of process efficiency w.r.t. its management of resources. We also justify these definitions and respective proof techniques through numerous examples and a case study comparing two concurrent implementations of an extensible buffer.
    Comment: 51 pages, 7 figures
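    The resource discipline being formalized can be sketched with a hypothetical runtime helper (the names below are not from the paper): channels must be allocated before use, may be deallocated when no longer needed, and a running cost tallies allocations so that two implementations of the same behaviour can be compared on resource usage.

        #include <cassert>
        #include <unordered_set>

        class ChannelManager {
            std::unordered_set<int> live;   // currently allocated channels
            int nextId = 0;
            int cost = 0;                   // abstract cost: one unit per allocation
        public:
            int allocate() {
                ++cost;
                live.insert(nextId);
                return nextId++;
            }
            void deallocate(int c) {
                assert(live.count(c) && "double free of a channel");
                live.erase(c);
            }
            void send(int c, int /*value*/) {
                // The substructural type system rules this failure out statically.
                assert(live.count(c) && "use of an unallocated channel");
            }
            int total_cost() const { return cost; }
        };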

    Toward Linearizability Testing for Multi-Word Persistent Synchronization Primitives

    Persistent memory makes it possible to recover in-memory data structures following a failure instead of rebuilding them from state saved in slow secondary storage. Implementing such recoverable data structures correctly is challenging as their underlying algorithms must deal with both parallelism and failures, which makes them especially susceptible to programming errors. Traditional proofs of correctness should therefore be combined with other methods, such as model checking or software testing, to minimize the likelihood of uncaught defects. This research focuses specifically on the algorithmic principles of software testing, particularly linearizability analysis, for multi-word persistent synchronization primitives such as conditional swap operations. We describe an efficient decision procedure for linearizability in this context, and discuss its practical applications in detecting previously-unknown bugs in implementations of multi-word persistent primitives.
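    The flavour of such a decision procedure can be shown with a brute-force checker in the style of Wing and Gong, here for a plain read/write register rather than the multi-word persistent primitives studied in the paper: search for a total order of the observed operations that respects real-time precedence and matches the sequential semantics.

        #include <cstddef>
        #include <vector>

        struct Op {
            bool isWrite;
            int value;      // value written, or value returned by a read
            int inv, res;   // invocation and response timestamps
        };

        // a precedes b in real time if a responded before b was invoked.
        static bool precedes(const Op& a, const Op& b) { return a.res < b.inv; }

        static bool search(const std::vector<Op>& h, std::vector<bool>& done,
                           int state, std::size_t remaining) {
            if (remaining == 0) return true;
            for (std::size_t i = 0; i < h.size(); ++i) {
                if (done[i]) continue;
                bool minimal = true;    // no other pending op may precede h[i]
                for (std::size_t j = 0; j < h.size(); ++j)
                    if (!done[j] && j != i && precedes(h[j], h[i])) { minimal = false; break; }
                if (!minimal) continue;
                if (!h[i].isWrite && h[i].value != state) continue;  // read must see state
                done[i] = true;
                if (search(h, done, h[i].isWrite ? h[i].value : state, remaining - 1))
                    return true;
                done[i] = false;        // backtrack and try another linearization
            }
            return false;
        }

        bool linearizable(const std::vector<Op>& history, int initialValue) {
            std::vector<bool> done(history.size(), false);
            return search(history, done, initialValue, history.size());
        }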

    The C Object System: Using C as a High-Level Object-Oriented Language

    The C Object System (COS) is a small C library which implements high-level concepts available in CLOS, Objective-C and other object-oriented programming languages: uniform object model (class, meta-class and property-metaclass), generic functions, multi-methods, delegation, properties, exceptions, contracts and closures. COS relies on the programmable capabilities of the C programming language to extend its syntax and to implement the aforementioned concepts as first-class objects. COS aims at satisfying several general principles like simplicity, extensibility, reusability, efficiency and portability which are rarely met in a single programming language. Its design is tuned to provide an efficient and portable implementation of message multi-dispatch and message multi-forwarding, which are at the heart of code extensibility and reusability. With COS features in hand, software should become as flexible and extensible as with scripting languages, and as efficient and portable as expected with C programming. Likewise, COS concepts should significantly simplify adaptive and aspect-oriented programming as well as distributed and service-oriented computing.
    Comment: 18 pages
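    The multi-method idea at the heart of COS can be sketched in C++ (COS itself achieves it in plain C through its object model and macros; the names below are illustrative): the method to run is selected at runtime from the dynamic types of both arguments, not just a single receiver, with a hook for message-not-understood.

        #include <functional>
        #include <iostream>
        #include <map>
        #include <typeindex>
        #include <utility>

        struct Shape { virtual ~Shape() = default; };
        struct Circle : Shape {};
        struct Square : Shape {};

        using Method = std::function<void(const Shape&, const Shape&)>;
        std::map<std::pair<std::type_index, std::type_index>, Method> table;

        void addMethod(std::type_index a, std::type_index b, Method m) {
            table[{a, b}] = std::move(m);
        }

        void collide(const Shape& a, const Shape& b) {     // the generic function
            auto it = table.find({typeid(a), typeid(b)});  // dispatch on both types
            if (it != table.end()) it->second(a, b);
            else std::cout << "no applicable method\n";    // message-not-understood hook
        }

        int main() {
            addMethod(typeid(Circle), typeid(Square),
                      [](const Shape&, const Shape&) { std::cout << "circle vs square\n"; });
            collide(Circle{}, Square{});   // prints: circle vs square
            collide(Square{}, Circle{});   // prints: no applicable method
        }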