
    Laws of order

    Building correct and efficient concurrent algorithms is known to be a difficult problem of fundamental importance. To achieve efficiency, designers try to remove unnecessary and costly synchronization. However, not only is this manual trial-and-error process ad hoc, time-consuming and error-prone, but it often leaves designers pondering the question: is it inherently impossible to eliminate certain synchronization, or is it that I was unable to eliminate it on this attempt and should keep trying? In this paper we respond to this question. We prove that it is impossible to build concurrent implementations of classic and ubiquitous specifications such as sets, queues, stacks, mutual exclusion and read-modify-write operations that completely eliminate the use of expensive synchronization. We prove that one cannot avoid the use of either: i) read-after-write (RAW), where a write to a shared variable A is followed by a read of a different shared variable B without a write to B in between, or ii) atomic write-after-read (AWAR), where an atomic operation reads and then writes to shared locations. Unfortunately, enforcing RAW or AWAR is expensive on all current mainstream processors. To enforce RAW, memory-ordering instructions--also called fences or barriers--must be used. To enforce AWAR, atomic instructions such as compare-and-swap are required. However, these instructions are typically substantially slower than regular instructions. Although algorithm designers frequently struggle to avoid RAW and AWAR, their attempts are often futile. Our result characterizes the cases where avoiding RAW and AWAR is impossible. On the flip side, our result can be used to guide designers towards new algorithms where RAW and AWAR can be eliminated.
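    To make the two patterns concrete, the following C++ sketch (my own illustration, not code from the paper; the names A, B and the function names are assumptions) shows a RAW sequence enforced with sequentially consistent atomics, which compile to a fence on mainstream processors, and an AWAR step implemented with compare-and-swap.

    // Illustrative sketch only: contrasts the RAW and AWAR patterns described above.
    #include <atomic>

    std::atomic<int> A{0};   // shared variable A
    std::atomic<int> B{0};   // shared variable B

    // Read-after-write (RAW): a write to A followed by a read of a different
    // variable B, with no write to B in between. Keeping the write visible before
    // the read executes requires a memory-ordering (fence/barrier) instruction;
    // here that cost is paid by the sequentially consistent store/load pair.
    int raw_pattern() {
        A.store(1, std::memory_order_seq_cst);    // write to shared A
        return B.load(std::memory_order_seq_cst); // read of shared B
    }

    // Atomic write-after-read (AWAR): a single atomic operation that reads and
    // then writes a shared location, e.g. compare-and-swap, which compiles to an
    // atomic instruction (CMPXCHG, LL/SC, ...).
    bool awar_pattern() {
        int expected = 0;
        return A.compare_exchange_strong(expected, 1);
    }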

    Bounds for mutual exclusion with only processor consistency

    Most weak memory consistency models are incapable of supporting a solution to mutual exclusion using only read and write operations to shared variables. Processor Consistency–Goodman's version (PC-G) is an exception. Ahamad et al. [1] showed that Peterson's mutual exclusion algorithm is correct for PC-G, but Lamport's bakery algorithm is not. In this paper, we derive a lower bound on the number and type (single- or multi-writer) of variables that a mutual exclusion algorithm must use in order to be correct for PC-G. We show that any such solution for n processes must use at least one multi-writer and n single-writers. This lower bound is tight when n = 2, and is tight when n > 2 for solutions that do not provide fairness. We show that Burns' algorithm is an unfair solution for mutual exclusion in PC-G that achieves our bound. However, five other known algorithms that use the same number and type of variables do not guarantee mutual exclusion when the memory consistency model is only PC-G, as opposed to the Sequential Consistency model for which they were designed. A corollary of this investigation is that, in contrast to Sequential Consistency, multi-writers cannot be implemented from single-writers in PC-G.
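    As a concrete illustration of the variable pattern in this bound, the C++ sketch below (my own, not code from the paper) gives Peterson's two-process algorithm, which Ahamad et al. showed to be correct for PC-G: it uses exactly two single-writer flags and one multi-writer turn variable, matching the bound for n = 2. The sketch uses sequentially consistent C++ atomics for portability on current hardware, which is stronger than the read/write PC-G setting the paper analyzes.

    // Peterson's algorithm for two processes (illustrative sketch).
    #include <atomic>

    std::atomic<bool> flag[2] = {false, false}; // flag[i] is written only by process i: the single-writers
    std::atomic<int>  turn{0};                  // written by both processes: the one multi-writer

    void lock(int i) {                 // i is 0 or 1; the other process is 1 - i
        flag[i].store(true);           // announce interest
        turn.store(1 - i);             // yield priority to the other process
        while (flag[1 - i].load() && turn.load() == 1 - i) {
            // busy-wait while the other process is interested and has priority
        }
    }

    void unlock(int i) {
        flag[i].store(false);          // leave the critical section
    }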