10 research outputs found

    Unified concurrent write barrier

    In a programming language with support for garbage collection, a write barrier is a code snippet that maintains the key invariants of the garbage collector. The write barrier is typically executed after a write operation; it is computationally expensive and can therefore impact program performance. This is true to an even greater extent for languages whose garbage collectors must maintain multiple sets of invariants. For example, languages that employ garbage collection schemes with two collectors may maintain their invariants using several different write barriers. The techniques of this disclosure address the problem of maintaining multiple invariants by unifying the write barriers and by executing the computationally expensive parts of the write barrier in a concurrent thread.

    Pervasive Monitoring - An Intelligent Sensor Pod Approach for Standardised Measurement Infrastructures

    Geo-sensor networks have traditionally been built up as closed monolithic systems, thus limiting trans-domain usage of real-time measurements. This paper presents the technical infrastructure of a standardised embedded sensing device, which has been developed in the course of the Live Geography approach. The sensor pod implements data provision standards of the Sensor Web Enablement initiative, including an event-based alerting mechanism and location-aware Complex Event Processing functionality for the detection of threshold transgressions and for quality assurance. The goal of this research is that the resultant highly flexible sensing architecture will bring sensor network applications one step further towards realising the vision of a “digital skin for planet earth”. The developed infrastructure can potentially have far-reaching impacts on sensor-based monitoring systems through the deployment of ubiquitous and fine-grained sensor networks. This in turn allows for the straightforward use of live sensor data in existing spatial decision support systems to enable better-informed decision-making.
    Funding: Seventh Framework Programme (European Commission) (FP7 project GENESIS no. 223996); Austria, Federal Ministry of Transport, Innovation and Technology; ERA-STAR Regions Project (G2real); Austria, Federal Ministry of Science and Research

    Local Linearizability for Concurrent Container-Type Data Structures (LIPIcs)

    The semantics of concurrent data structures is usually given by a sequential specification and a consistency condition. Linearizability is the most popular consistency condition due to its simplicity and general applicability. Nevertheless, for applications that do not require all guarantees offered by linearizability, recent research has focused on improving performance and scalability of concurrent data structures by relaxing their semantics. In this paper, we present local linearizability, a relaxed consistency condition that is applicable to container-type concurrent data structures like pools, queues, and stacks. While linearizability requires that the effect of each operation is observed by all threads at the same time, local linearizability only requires that for each thread T, the effects of its local insertion operations and the effects of those removal operations that remove values inserted by T are observed by all threads at the same time. We investigate theoretical and practical properties of local linearizability and its relationship to many existing consistency conditions. We present a generic implementation method for locally linearizable data structures that uses existing linearizable data structures as building blocks. Our implementations show performance and scalability improvements over the original building blocks and outperform the fastest existing container-type implementations.

    Garbage Collection as a Joint Venture


    How FIFO is your concurrent FIFO queue?

    Abstract: Designing and implementing high-performance concurrent data structures whose access performance scales on multicore hardware is difficult. Concurrent implementations of FIFO queues, for example, seem to require algorithms that efficiently increase the potential for parallel access by implementing semantically relaxed rather than strict FIFO queues, where elements may be returned in some out-of-order fashion. However, we show experimentally that the on-average shorter execution time of enqueue and dequeue operations of fast but relaxed implementations may offset the effect of the semantic relaxation, making them appear to behave more FIFO than strict but slow implementations. Our key assumption is that ideal concurrent data structure operations should execute in zero time. We define two metrics, element-fairness and operation-fairness, to measure the degree of element and operation reordering, respectively, assuming operations take zero time. Element-fairness quantifies the deviation from FIFO queue semantics had all operations executed in zero time; by this metric, even strict implementations of FIFO queues are not FIFO. Operation-fairness helps explain element-fairness by quantifying operation reordering when considering the actual time operations took effect relative to their invocation time. In our experiments, the effect of poor operation-fairness of strict but slow implementations on element-fairness may outweigh the effect of the semantic relaxation of fast but relaxed implementations.
    Categories and Subject Descriptors: D.1.3 [Concurrent Programming]: Parallel Programming. General Terms: Measurement. Keywords: Zero-Time Linearization, Element-Fairness, Operation-Fairness. This work has been supported by the National Research Network RiSE on Rigorous Systems Engineering (Austrian Science Fund S11404-N23).

    Distributed queues in shared memory: Multicore performance and scalability through quantitative relaxation

    A prominent remedy to multicore scalability issues in concurrent data structure implementations is to relax the sequential specification of the data structure. We present distributed queues (DQ), a new family of relaxed concurrent queue implementations. DQs implement relaxed queues with a linearizable emptiness check and either configurable or bounded out-of-order behavior, or pool behavior. Our experiments show that DQs outperform and outscale, in micro- and macrobenchmarks, all strict and relaxed queue as well as pool implementations that we considered.