
    Scalable eventually consistent counters over unreliable networks

    Counters are an important abstraction in distributed computing, and play a central role in large-scale geo-replicated systems, counting events such as web page impressions or social network “likes”. Classic distributed counters, strongly consistent via linearisability or sequential consistency, cannot be made both available and partition-tolerant due to the CAP Theorem, making them unsuitable for large-scale scenarios. This paper defines Eventually Consistent Distributed Counters (ECDCs) and presents an implementation of the concept, Handoff Counters, that is scalable and works over unreliable networks. By giving up the total operation ordering of classic distributed counters, ECDC implementations can be made AP in the CAP design space, while retaining the essence of counting. Handoff Counters are the first Conflict-free Replicated Data Type (CRDT) based mechanism that overcomes the identity explosion problem in naive CRDTs, such as G-Counters (where state size is linear in the number of independent actors that ever incremented the counter), by managing identities so as to avoid global propagation and by garbage-collecting temporary entries. The approach used in Handoff Counters is not restricted to counters, being more generally applicable to other data types with associative and commutative operations.

    This work was partially supported by SMILES within project “TEC4Growth Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER-000020”, financed by the North Portugal Regional Operational Programme (NORTE 2020) under the PORTUGAL 2020 Partnership Agreement and through the European Regional Development Fund (ERDF); the EU FP7 SyncFree project (609551); and the EU H2020 LightKone project (732505).
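
    As a point of reference for the identity explosion mentioned above, here is a minimal G-Counter sketch (the class name GCounter and its methods are hypothetical, not the paper's Handoff Counters): every actor that ever increments gets its own map entry, and merging takes a pointwise maximum, so state size grows with the number of actor identities.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical G-Counter sketch, not the paper's Handoff Counters.
// Each replica increments only its own entry; replicas converge by merging
// with a pointwise maximum. The map keeps one entry per actor id that ever
// incremented -- the "identity explosion" the abstract refers to.
final class GCounter {
    private final String myId;
    private final Map<String, Long> counts = new HashMap<>();

    GCounter(String myId) { this.myId = myId; }

    void increment() {
        counts.merge(myId, 1L, Long::sum);
    }

    long value() {
        return counts.values().stream().mapToLong(Long::longValue).sum();
    }

    // Merge is associative, commutative, and idempotent, so replicas converge
    // regardless of message ordering, delays, or duplication.
    void merge(GCounter other) {
        other.counts.forEach((id, c) -> counts.merge(id, c, Long::max));
    }
}
```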

    Lock-Free Garbage Collection for Multiprocessors

    Garbage collection algorithms for shared-memory multiprocessors typically rely on some form of global synchronization to preserve consistency. Such global synchronization may lead to problems on asynchronous architectures: if one process is halted or delayed, other, non-faulty processes will be unable to progress. By contrast, a storage management algorithm is lock-free if (in the absence of resource exhaustion) a process that is allocating or collecting memory can be delayed at any point without forcing other processes to block. This paper presents the first algorithm for lock-free garbage collection in a realistic model. The algorithm assumes that processes synchronize by applying read, write, and compare&swap operations to shared memory. This algorithm uses no locks, busy-waiting, or barrier synchronization; it does not assume that processes can observe or modify one another’s local variables or registers, and it does not use inter-process interrupts.
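
    The lock-free property described above can be illustrated with the compare&swap primitive assumed by the paper's model. The sketch below is not the paper's collection algorithm; it is a hypothetical Treiber-style free list (class LockFreeFreeList) in which a thread delayed at any point never forces other threads to block, because every update is a single retried CAS rather than a held lock.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical lock-free free list using only reads, writes, and compare&swap.
// A thread that stalls mid-operation leaves no lock behind, so other threads
// simply retry their CAS and make progress.
final class LockFreeFreeList<T> {
    private static final class Node<T> {
        final T item;
        Node<T> next;
        Node(T item) { this.item = item; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    // Return a cell to the free list.
    void release(T item) {
        Node<T> node = new Node<>(item);
        Node<T> old;
        do {
            old = head.get();
            node.next = old;
        } while (!head.compareAndSet(old, node)); // retry if another thread won the race
    }

    // Take a cell from the free list, or null if it is empty.
    T allocate() {
        Node<T> old;
        do {
            old = head.get();
            if (old == null) return null;
        } while (!head.compareAndSet(old, old.next));
        return old.item;
    }
}
```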

    Linearizability: a correctness condition for concurrent objects

    A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object’s operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.
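
    A minimal sketch of what linearizability gives a client, assuming a hypothetical LinearizableCounter built on java.util.concurrent: each operation appears to take effect atomically at a single linearization point between its invocation and its response, so the sequential pre- and post-conditions ("the value grows by one and the new value is returned") remain valid under arbitrary interleavings.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical linearizable counter. Each operation takes effect atomically
// at its linearization point (the atomic hardware update inside the call),
// somewhere between invocation and response, so clients can reason with the
// object's sequential specification.
final class LinearizableCounter {
    private final AtomicLong value = new AtomicLong();

    long incrementAndRead() {
        return value.incrementAndGet(); // linearization point of this operation
    }

    long read() {
        return value.get();             // linearization point of this operation
    }
}
```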