
    Prompt Application-Transparent Transaction Revalidation in Software Transactional Memory

    Software Transactional Memory (STM) allows encapsulating shared-data accesses within transactions, executed with atomicity and isolation guarantees. The STM layer assesses the consistency of a running transaction at specific points of its execution, such as when a read or write access to a shared object occurs, or upon a commit attempt. However, performance and energy-efficiency issues may arise when a thread running a transaction performs no shared-data read/write operation for a while. In this scenario, the STM layer may not regain control for a considerable amount of time and thus cannot detect early whether the transaction has become inconsistent in the meantime. To tackle this problem, we present an STM architecture that, thanks to lightweight operating system support, performs fine-grained periodic (hence prompt) revalidation of running transactions. Our proposal targets Linux and x86 systems and has been integrated with the open-source TinySTM package. Experimental results with a port of the TPC-C benchmark to STM environments show the effectiveness of our solution.
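    A minimal sketch of the idea in C: a POSIX interval timer periodically flags the running transaction for revalidation, so inconsistency can be caught even during long computation phases that perform no shared accesses. All names (txn_t, validate_read_set) are hypothetical illustrations; the paper's actual design relies on lightweight kernel support integrated with TinySTM rather than a user-space timer.

```c
/* Sketch of timer-driven ("prompt") transaction revalidation.
 * Hypothetical names, not the TinySTM API. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

static volatile sig_atomic_t revalidate_flag; /* set by the timer */

static void on_tick(int sig) {
    (void)sig;
    revalidate_flag = 1; /* only flag here; validate in txn context */
}

typedef struct { int dummy; } txn_t; /* stand-in transaction descriptor */

/* Hypothetical read-set validation: 0 means still consistent. */
static int validate_read_set(txn_t *txn) {
    (void)txn;
    return 0; /* would compare recorded vs. current object versions */
}

/* Long computation phase with no shared reads/writes: without the
 * timer, the STM layer regains control only at the next access. */
static void txn_body(txn_t *txn) {
    for (long i = 0; i < 100000000L; i++) {
        if (revalidate_flag) {
            revalidate_flag = 0;
            if (validate_read_set(txn) != 0) {
                fprintf(stderr, "inconsistent: abort and retry\n");
                abort(); /* a real STM rolls back and restarts */
            }
        }
        /* ... pure computation ... */
    }
}

int main(void) {
    struct itimerval tv = { .it_interval = {0, 1000},
                            .it_value    = {0, 1000} }; /* 1 ms period */
    signal(SIGALRM, on_tick);
    setitimer(ITIMER_REAL, &tv, NULL);
    txn_body(&(txn_t){0});
    return 0;
}
```

    Keeping the handler limited to setting a flag, with validation performed in the transaction's own context, avoids doing unsafe work inside signal context.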

    Extending and Implementing the Self-adaptive Virtual Processor for Distributed Memory Architectures

    Many-core architectures of the future are likely to have distributed memory organizations and need fine-grained concurrency management to be used effectively. The Self-adaptive Virtual Processor (SVP) is an abstract concurrent programming model that can provide this, but the model and its current implementations assume a shared memory with a single address space. We investigate and extend SVP to handle distributed environments, and discuss a prototype SVP implementation that transparently supports execution on heterogeneous distributed-memory clusters over TCP/IP connections while retaining the original SVP programming model.
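    As an illustration of what delegating an SVP-style thread-family creation over TCP/IP might involve, the sketch below marshals a creation request and ships it to a remote node. The message layout and function names (svp_create_msg, delegate_create) are invented for illustration and are not the paper's actual protocol.

```c
/* Sketch: delegating a thread-family "create" to a remote place. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical wire format for a remote family-creation request. */
typedef struct {
    uint32_t family_id;          /* identifier of the thread family */
    uint32_t start, limit, step; /* index range of the family */
    char     entry[32];          /* name of the thread function */
} svp_create_msg;

/* Send a create request to a remote node; returns 0 on success.
 * For brevity the integer fields go out in host byte order; a real
 * implementation would marshal them (htonl) and handle partial writes. */
static int delegate_create(const char *host, uint16_t port,
                           const svp_create_msg *msg) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        write(fd, msg, sizeof *msg) != (ssize_t)sizeof *msg) {
        close(fd);
        return -1;
    }
    close(fd);
    return 0;
}

int main(void) {
    svp_create_msg m = { 7, 0, 100, 1, "work_item" };
    return delegate_create("192.0.2.10", 9000, &m) ? 1 : 0;
}
```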

    Dynamic Parameter Allocation in Parameter Servers

    To keep up with increasing dataset sizes and model complexity, distributed training has become a necessity for large machine learning tasks. Parameter servers ease the implementation of distributed parameter management, a key concern in distributed training, but can induce severe communication overhead. To reduce this overhead, distributed machine learning algorithms use techniques that increase parameter access locality (PAL), achieving up to linear speed-ups. However, we found that existing parameter servers provide only limited support for PAL techniques and therefore prevent efficient training. In this paper, we explore whether and to what extent PAL techniques can be supported, and whether such support is beneficial. We propose to integrate dynamic parameter allocation into parameter servers, describe an efficient implementation of such a parameter server called Lapse, and experimentally compare its performance to existing parameter servers across a number of machine learning tasks. We found that Lapse provides near-linear scaling and can be orders of magnitude faster than existing parameter servers.
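    The sketch below illustrates the core idea of dynamic parameter allocation: a routing table records which node owns each parameter, and ownership can migrate to the worker that accesses a key, turning its subsequent pulls and pushes into cheap local operations. The names (localize, pull, push) are illustrative and are not the Lapse API.

```c
/* Sketch of dynamic parameter allocation in a parameter server. */
#include <stdio.h>

#define NUM_KEYS   8
#define LOCAL_NODE 0

static int    owner[NUM_KEYS]; /* owner[k] = node holding parameter k */
static double value[NUM_KEYS]; /* local shard of parameter values */

/* Relocate key k to this node ahead of a series of local accesses. */
static void localize(int k) {
    if (owner[k] != LOCAL_NODE) {
        /* in a real system: fetch value and ownership from owner[k]
         * over the network, then update all routing tables */
        owner[k] = LOCAL_NODE;
    }
}

/* Pull: cheap local read if we own the key, remote read otherwise. */
static double pull(int k) {
    if (owner[k] == LOCAL_NODE) return value[k];
    return 0.0; /* placeholder for a remote fetch */
}

/* Push: apply a gradient update to the (now local) parameter. */
static void push(int k, double grad, double lr) {
    localize(k); /* dynamic allocation: move the key to the accessor */
    value[k] -= lr * grad;
}

int main(void) {
    for (int k = 0; k < NUM_KEYS; k++) owner[k] = k % 4; /* 4 nodes */
    push(5, 0.3, 0.01);            /* key 5 migrates to node 0 */
    printf("key 5: owner=%d value=%f\n", owner[5], pull(5));
    return 0;
}
```

    This is how PAL techniques pay off: once a key has been localized, a tight training loop over that key touches no network at all.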

    Programming with process groups: Group and multicast semantics

    Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.
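    A standard way to realize the causal multicast semantics that Isis-style systems provide is vector-clock-based delivery: a message is held back until everything it causally depends on has already been delivered. The sketch below illustrates that delivery rule in simplified form; it is not the Isis implementation.

```c
/* Sketch of causal delivery for process-group multicast using
 * vector clocks (Birman-style), simplified to a fixed group. */
#include <stdbool.h>
#include <stdio.h>

#define N 3 /* group members */

typedef struct {
    int sender;
    int vc[N];           /* sender's vector clock at send time */
    const char *payload;
} msg_t;

static int local_vc[N];  /* receiver's vector clock */

/* Deliverable when it is the next message from its sender and we
 * have already delivered everything it causally depends on. */
static bool deliverable(const msg_t *m) {
    if (m->vc[m->sender] != local_vc[m->sender] + 1) return false;
    for (int i = 0; i < N; i++)
        if (i != m->sender && m->vc[i] > local_vc[i]) return false;
    return true;
}

static void deliver(const msg_t *m) {
    local_vc[m->sender]++;
    printf("delivered from %d: %s\n", m->sender, m->payload);
}

int main(void) {
    msg_t m1 = { 0, {1, 0, 0}, "hello" };
    msg_t m2 = { 1, {1, 1, 0}, "reply" }; /* causally after m1 */
    /* m2 arrives first: held back until m1 has been delivered */
    printf("m2 deliverable? %d\n", deliverable(&m2)); /* prints 0 */
    if (deliverable(&m1)) deliver(&m1);
    if (deliverable(&m2)) deliver(&m2);
    return 0;
}
```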