    The weakest failure detector for wait-free dining under eventual weak exclusion

    Dining philosophers is a classic scheduling problem for local mutual exclusion on arbitrary conflict graphs. We establish necessary conditions to solve wait-free dining under eventual weak exclusion in message-passing systems with crash faults. Wait-free dining ensures that every correct hungry process eventually eats. Eventual weak exclusion permits finitely many scheduling mistakes, but eventually no live neighbors eat simultaneously; this exclusion criterion models scenarios where scheduling mistakes are recoverable or only affect performance. Previous work showed that the eventually perfect failure detector (◇P) is sufficient to solve wait-free dining under eventual weak exclusion; we prove that ◇P is also necessary, and thus ◇P is the weakest oracle to solve this problem. Our reduction also establishes that any such dining solution can be made eventually fair. Finally, the reduction itself may be of more general interest; when applied to wait-free perpetual weak exclusion, our reduction produces an alternative proof that the more powerful trusting oracle (T) is necessary (but not sufficient) to solve the problem of Fault-Tolerant Mutual Exclusion (FTME).
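
    To give a rough feel for the setting, the single-threaded C sketch below is illustrative only, not the paper's reduction or algorithm. It shows the shape of a dining step guided by an eventually perfect failure detector: a hungry diner waits for a neighbour's fork only while the fork's holder is unsuspected, and reclaims the fork of a suspected holder. The names fd_suspects and acquire, and all of the fork bookkeeping, are hypothetical.

        /* Illustrative sketch only: a hungry diner waits for a neighbour's fork
         * only while the fork's holder is unsuspected by an eventually perfect
         * failure detector; a suspected holder's fork is reclaimed.  Names such
         * as fd_suspects() and acquire() are hypothetical.                     */
        #include <stdbool.h>
        #include <stdio.h>

        #define N 4                              /* diners arranged on a ring   */

        static bool crashed[N] = { false, false, true, false };
        static int  fork_holder[N];              /* fork i is held by this diner */

        /* Eventually perfect failure detector: may err finitely often; modelled
         * here as already being accurate about crashes.                        */
        static bool fd_suspects(int target) { return crashed[target]; }

        /* Diner id needs forks id and (id+1)%N.  It waits for a fork only while
         * the holder is unsuspected; a suspected holder's fork is reclaimed,
         * which is where the finitely many mistakes tolerated by eventual weak
         * exclusion can occur.                                                 */
        static bool acquire(int id, int f) {
            while (fork_holder[f] != id) {
                if (fd_suspects(fork_holder[f]))
                    fork_holder[f] = id;         /* reclaim from presumed crash  */
                else
                    return false;                /* would block: defer this round */
            }
            return true;
        }

        int main(void) {
            for (int i = 0; i < N; i++) fork_holder[i] = i;   /* fork i starts at diner i */
            for (int id = 0; id < N; id++) {
                if (crashed[id]) continue;                    /* crashed diners never eat */
                if (acquire(id, id) && acquire(id, (id + 1) % N))
                    printf("diner %d eats\n", id);
                else
                    printf("diner %d defers\n", id);
            }
            return 0;
        }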

    Automated Synthesis of Distributed Self-Stabilizing Protocols

    In this paper, we introduce an SMT-based method that automatically synthesizes a distributed self-stabilizing protocol from a given high-level specification and network topology. Unlike existing approaches, where synthesis algorithms require an explicit description of the set of legitimate states, our technique only needs the temporal behavior of the protocol. We extend our approach to synthesize ideal-stabilizing protocols, where every state is legitimate. We also extend our technique to synthesize monotonic-stabilizing protocols, where during recovery each process can execute at most one action. Our proposed methods are fully implemented, and we report successful synthesis of well-known protocols such as Dijkstra's token ring, a self-stabilizing version of Raymond's mutual exclusion algorithm, ideal-stabilizing leader election and local mutual exclusion, as well as monotonic-stabilizing maximal independent set and distributed Grundy coloring.
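
    To make one of the named synthesis targets concrete, here is a compilable C simulation of Dijkstra's K-state self-stabilizing token ring. The simulation itself (the arbitrary initial state and the central-daemon scan) is an illustrative assumption, not output of the synthesis method described in the abstract.

        /* Dijkstra's K-state self-stabilizing token ring.  Starting from an
         * arbitrary state, the ring converges to configurations with exactly
         * one privileged (token-holding) process.                              */
        #include <stdio.h>

        #define N 5                    /* ring size               */
        #define K 5                    /* counter modulus, K >= N */

        static int S[N] = { 3, 1, 4, 1, 2 };   /* arbitrary initial state */

        static int privileged(int i) {         /* does process i hold a token? */
            return i == 0 ? S[0] == S[N - 1] : S[i] != S[i - 1];
        }

        static void move(int i) {              /* process i's guarded action   */
            if (i == 0) S[0] = (S[0] + 1) % K;
            else        S[i] = S[i - 1];
        }

        int main(void) {
            for (int step = 0; step < 30; step++) {
                int tokens = 0;
                for (int i = 0; i < N; i++) tokens += privileged(i);
                printf("step %2d: tokens = %d\n", step, tokens);
                for (int i = 0; i < N; i++)    /* central daemon: fire one enabled process */
                    if (privileged(i)) { move(i); break; }
            }
            return 0;
        }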

    Analysing Mutual Exclusion using Process Algebra with Signals

    In contrast to common belief, the Calculus of Communicating Systems (CCS) and similar process algebras lack the expressive power to accurately capture mutual exclusion protocols without enriching the language with fairness assumptions. Adding a fairness assumption to implement a mutual exclusion protocol seems counter-intuitive. We employ a signalling operator, which can be combined with CCS, or other process calculi, and show that this minimal extension is expressive enough to model mutual exclusion: we confirm the correctness of Peterson's mutual exclusion algorithm for two processes, as well as Lamport's bakery algorithm, under reasonable assumptions on the underlying memory model. The correctness of Peterson's algorithm for more than two processes requires stronger, less realistic assumptions on the underlying memory model. Comment: In Proceedings EXPRESS/SOS 2017, arXiv:1709.0004
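
    For reference, the protocol being verified: a C11 rendering of Peterson's two-process algorithm, with sequentially consistent atomics standing in for the memory-model assumptions discussed in the abstract. This sketch is an ordinary shared-memory implementation, not the CCS-with-signals model used in the paper.

        /* Peterson's two-process mutual exclusion with sequentially consistent
         * C11 atomics (the default ordering of atomic_store/atomic_load).      */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        static atomic_bool flag[2];     /* flag[i]: process i wants to enter     */
        static atomic_int  turn;        /* whose turn it is to wait              */
        static int counter;             /* shared data protected by the protocol */

        static void lock(int me) {
            int other = 1 - me;
            atomic_store(&flag[me], true);     /* announce interest             */
            atomic_store(&turn, other);        /* yield priority                */
            while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
                ;                              /* busy-wait until safe to enter */
        }

        static void unlock(int me) { atomic_store(&flag[me], false); }

        static void *worker(void *arg) {
            int me = (int)(long)arg;
            for (int i = 0; i < 100000; i++) {
                lock(me);
                counter++;                     /* critical section              */
                unlock(me);
            }
            return NULL;
        }

        int main(void) {
            pthread_t t0, t1;
            pthread_create(&t0, NULL, worker, (void *)0L);
            pthread_create(&t1, NULL, worker, (void *)1L);
            pthread_join(t0, NULL);
            pthread_join(t1, NULL);
            printf("counter = %d (expected 200000)\n", counter);
            return 0;
        }

    Weakening these operations to release/acquire would allow the store to flag[me] to be reordered after the load of flag[other], which can break mutual exclusion; this mirrors the abstract's point that correctness hinges on the underlying memory model.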

    TAPAs: A Tool for the Analysis of Process Algebras

    Process algebras are formalisms for modelling concurrent systems that permit mathematical reasoning with respect to a set of desired properties. TAPAs is a tool that can be used to support the use of process algebras to specify and analyze concurrent systems. It does not aim at guaranteeing high performance, but has been developed as a support for teaching. Systems are described as process algebra terms that are then mapped to labelled transition systems (LTSs). Properties are verified either by checking equivalence of concrete and abstract system descriptions, or by model checking temporal formulae over the obtained LTS. A key feature of TAPAs, which makes it particularly suitable for teaching, is that it maintains a consistent double representation of each system, both as a term and as a graph. Another useful didactical feature is the exhibition of counterexamples when equivalences are not verified or the proposed formulae are not satisfied.
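
    As a minimal illustration of the intermediate structure such tools work with (this is a generic sketch, not TAPAs code), an LTS is just a set of states plus labelled transitions. The toy LTS below models a one-place buffer and the check reports deadlock states, i.e. states with no outgoing transition.

        /* A labelled transition system in its simplest form: states and
         * labelled edges, with a trivial deadlock check.                       */
        #include <stdio.h>

        typedef struct { int src; const char *label; int dst; } Edge;

        static const Edge edges[] = {
            { 0, "in",  1 },        /* empty --in-->  full  */
            { 1, "out", 0 },        /* full  --out--> empty */
        };
        static const int n_states = 2;
        static const int n_edges  = sizeof edges / sizeof edges[0];

        int main(void) {
            for (int s = 0; s < n_states; s++) {
                int out = 0;
                for (int e = 0; e < n_edges; e++)
                    if (edges[e].src == s) out++;
                printf("state %d: %d outgoing transition(s)%s\n",
                       s, out, out == 0 ? "  <-- deadlock" : "");
            }
            return 0;
        }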

    Progressive Transactional Memory in Time and Space

    Transactional memory (TM) allows concurrent processes to organize sequences of operations on shared data items into atomic transactions. A transaction may commit, in which case it appears to have executed sequentially, or it may abort, in which case no data item is updated. The TM programming paradigm emerged as an alternative to conventional fine-grained locking techniques, offering ease of programming and compositionality. Though typically themselves implemented using locks, TMs hide the inherent issues of lock-based synchronization behind a nice transactional programming interface. In this paper, we explore the inherent time and space complexity of lock-based TMs, with a focus on the most popular class of progressive lock-based TMs. We show that a progressive TM may force a read-only transaction to perform a quadratic (in the number of data items it reads) number of steps and to access a linear number of distinct memory locations, closing the question of the inherent cost of read validation in TMs. We then show that the total number of remote memory references (RMRs) that take place in an execution of a progressive TM in which n concurrent processes perform transactions on a single data item may reach Ω(n log n), which appears to be the first RMR complexity lower bound for transactional memory. Comment: Model of Transactional Memory identical with arXiv:1407.6876, arXiv:1502.0272
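
    The read-validation pattern whose cost the paper analyses is easiest to see in code. The sketch below is hypothetical (not the paper's model and not a real TM): the transaction revalidates its entire read set before every new read, so a read-only transaction over k items performs roughly k*(k-1)/2 validation steps, which is the kind of quadratic step count the lower bound refers to.

        /* Hypothetical single-threaded sketch of incremental read validation:
         * reading k items costs about k*(k-1)/2 validation steps.              */
        #include <stdbool.h>
        #include <stdio.h>

        #define ITEMS 8

        typedef struct { int value; int version; } Item;   /* versioned data item */
        static Item heap[ITEMS];

        typedef struct {
            int  read_idx[ITEMS];   /* which items were read                     */
            int  read_ver[ITEMS];   /* versions observed when they were read     */
            int  nreads;
            long steps;             /* counts validation work                    */
        } Txn;

        static bool validate(Txn *t) {          /* O(|read set|) per call        */
            for (int i = 0; i < t->nreads; i++) {
                t->steps++;
                if (heap[t->read_idx[i]].version != t->read_ver[i])
                    return false;               /* a writer invalidated the read */
            }
            return true;
        }

        static bool txn_read(Txn *t, int idx, int *out) {
            if (!validate(t)) return false;     /* revalidate before each read   */
            t->read_idx[t->nreads] = idx;
            t->read_ver[t->nreads] = heap[idx].version;
            t->nreads++;
            *out = heap[idx].value;
            return true;
        }

        int main(void) {
            Txn t = { .nreads = 0, .steps = 0 };
            int v;
            for (int i = 0; i < ITEMS; i++) txn_read(&t, i, &v);
            printf("read %d items using %ld validation steps\n", t.nreads, t.steps);
            return 0;
        }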

    Hardware synthesis of weakly consistent C concurrency

    Lock-free algorithms, in which threads synchronise not via coarse-grained mutual exclusion but via fine-grained atomic operations ('atomics'), have been shown empirically to be the fastest class of multi-threaded algorithms in the realm of conventional processors. This paper explores how these algorithms can be compiled from C to reconfigurable hardware via high-level synthesis (HLS). We focus on the scheduling problem, in which software instructions are assigned to hardware clock cycles. We first show that typical HLS scheduling constraints are insufficient to implement atomics, because they permit some instruction reorderings that, though sound in a single-threaded context, demonstrably cause erroneous results when synthesising multi-threaded programs. We then show that correct behaviour can be restored by imposing additional intra-thread constraints among the memory operations. We implement our approach in the open-source LegUp HLS framework, and provide both sequentially consistent (SC) and weakly consistent ('weak') atomics. Weak atomics necessitate fewer constraints than SC atomics, but suffice for many concurrent algorithms. We confirm, via automatic model-checking, that we correctly implement the semantics defined by the 2011 revision of the C standard. A case study on a circular buffer suggests that circuits synthesised from programs that use atomics can be 2.5x faster than those that use locks, and that weak atomics can yield a further 1.5x speedup.
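
    The circular-buffer case study pairs naturally with weak atomics. The sketch below is an illustrative single-producer/single-consumer ring buffer written with C11 release/acquire operations; it is not the paper's benchmark code or LegUp output.

        /* Single-producer/single-consumer circular buffer using C11
         * release/acquire atomics on free-running head/tail counters.          */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        #define CAP 16                                 /* power of two          */

        static int buf[CAP];
        static atomic_uint head, tail;                 /* consumer / producer positions */

        static bool push(int v) {                      /* producer only         */
            unsigned t = atomic_load_explicit(&tail, memory_order_relaxed);
            unsigned h = atomic_load_explicit(&head, memory_order_acquire);
            if (t - h == CAP) return false;            /* full                  */
            buf[t % CAP] = v;
            atomic_store_explicit(&tail, t + 1, memory_order_release);
            return true;
        }

        static bool pop(int *v) {                      /* consumer only         */
            unsigned h = atomic_load_explicit(&head, memory_order_relaxed);
            unsigned t = atomic_load_explicit(&tail, memory_order_acquire);
            if (t == h) return false;                  /* empty                 */
            *v = buf[h % CAP];
            atomic_store_explicit(&head, h + 1, memory_order_release);
            return true;
        }

        static void *producer(void *arg) {
            (void)arg;
            for (int i = 0; i < 1000; i++)
                while (!push(i)) ;                     /* spin until space      */
            return NULL;
        }

        int main(void) {
            pthread_t p;
            pthread_create(&p, NULL, producer, NULL);
            long sum = 0;
            for (int got = 0, v; got < 1000; )
                if (pop(&v)) { sum += v; got++; }
            pthread_join(p, NULL);
            printf("sum = %ld (expected %d)\n", sum, 999 * 1000 / 2);
            return 0;
        }

    Only the release/acquire pairs on head and tail order the buffer accesses here; no locks or sequentially consistent fences are needed, which is the sense in which weak atomics suffice for this kind of algorithm.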