Simplifying proofs of linearisability using layers of abstraction
Linearisability has become the standard correctness criterion for concurrent
data structures, ensuring that every history of invocations and responses of
concurrent operations has a matching sequential history. Existing proofs of
linearisability require one to identify so-called linearisation points within
the operations under consideration, which are atomic statements whose execution
causes the effect of an operation to be felt. However, identification of
linearisation points is a non-trivial task, requiring a high degree of
expertise. For sophisticated algorithms such as Heller et al.'s lazy set, it
is even possible for an operation to be linearised by the concurrent execution
of a statement outside the operation being verified. This paper proposes an
alternative method for verifying linearisability that does not require
identification of linearisation points. Instead, using an interval-based logic,
we show that every behaviour of each concrete operation over any interval is a
possible behaviour of a corresponding abstraction that executes with
coarse-grained atomicity. This approach is applied to Heller et al.'s lazy set
to show that verification of linearisability is possible without having to
consider linearisation points within the program code.
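The definition the abstract relies on, that every concurrent history must have a matching sequential history consistent with real-time order, can be made concrete with a brute-force sketch. This is an illustrative toy (a register object, exhaustive permutation search), not the paper's interval-based method:

```python
from itertools import permutations

# Each operation: (invoke_time, response_time, op, arg, result).
# A history is linearizable if some total order of its operations
# (a) respects real-time order: if A responded before B was invoked,
#     A must come first; and
# (b) is a legal sequential history of the object (here: a register).

def legal_sequential(seq):
    """Replay the operations one at a time on a register; check results."""
    value = None
    for (_, _, op, arg, result) in seq:
        if op == "write":
            value = arg
        elif op == "read" and result != value:
            return False
    return True

def linearizable(history):
    """Brute-force search over all candidate linearisations."""
    for seq in permutations(history):
        # Reject orders where some later op responded before an
        # earlier op was even invoked (real-time violation).
        respects_time = all(not (b[1] < a[0])
                            for i, a in enumerate(seq)
                            for b in seq[i + 1:])
        if respects_time and legal_sequential(seq):
            return True
    return False

# Overlapping write(1) and a read returning 1: linearizable,
# because the read can be ordered after the write.
h = [(0, 3, "write", 1, None), (1, 4, "read", None, 1)]
print(linearizable(h))  # True
```

The exhaustive search makes the quantifier structure of the definition visible; real checkers prune this search aggressively, which is what makes identifying linearisation points (or avoiding them, as this paper does) matter.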
Logic programming in the context of multiparadigm programming: the Oz experience
Oz is a multiparadigm language that supports logic programming as one of its
major paradigms. A multiparadigm language is designed to support different
programming paradigms (logic, functional, constraint, object-oriented,
sequential, concurrent, etc.) with equal ease. This article has two goals: to
give a tutorial of logic programming in Oz and to show how logic programming
fits naturally into the wider context of multiparadigm programming. Our
experience shows that there are two classes of problems, which we call
algorithmic and search problems, for which logic programming can help formulate
practical solutions. Algorithmic problems have known efficient algorithms.
Search problems do not have known efficient algorithms but can be solved with
search. The Oz support for logic programming targets these two problem classes
specifically, using the concepts needed for each. This is in contrast to the
Prolog approach, which targets both classes with one set of concepts, which
results in less than optimal support for each class. To explain the essential
difference between algorithmic and search programs, we define the Oz execution
model. This model subsumes both concurrent logic programming
(committed-choice-style) and search-based logic programming (Prolog-style).
Instead of Horn clause syntax, Oz has a simple, fully compositional,
higher-order syntax that accommodates the abilities of the language. We
conclude with lessons learned from this work, a brief history of Oz, and many
entry points into the Oz literature.
Comment: 48 pages, to appear in the journal "Theory and Practice of Logic Programming".
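The algorithmic/search distinction the article draws can be illustrated outside Oz. As a hedged sketch in Python (not Oz code): computing a gcd is an algorithmic problem with a known efficient algorithm, while n-queens is a search problem solved by exploring choice points with backtracking, the style of execution Oz exposes through its search support:

```python
# Algorithmic problem: a known efficient algorithm exists,
# so no search is needed.
def gcd(a, b):
    """Euclid's algorithm: deterministic, runs straight through."""
    while b:
        a, b = b, a % b
    return a

# Search problem: no known efficient algorithm; explore choice
# points (which column gets the next queen?) and backtrack on
# failure, yielding every consistent placement.
def queens(n, placed=()):
    """Yield all placements of n non-attacking queens, one column per row."""
    row = len(placed)
    if row == n:
        yield placed
        return
    for col in range(n):
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            yield from queens(n, placed + (col,))

print(gcd(48, 18))           # 6
print(len(list(queens(6))))  # 4 solutions for 6 queens
```

The point of the contrast: the first function never makes a tentative choice, while the second is structured entirely around choice and failure, which is why the article argues the two classes deserve separate language support.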
Faster linearizability checking via P-compositionality
Linearizability is a well-established consistency and correctness criterion
for concurrent data types. An important feature of linearizability is Herlihy
and Wing's locality principle, which says that a concurrent system is
linearizable if and only if all of its constituent parts (so-called objects)
are linearizable. This paper presents P-compositionality, which generalizes
the idea behind the locality principle to operations on the same concurrent
data type. We implement P-compositionality in a novel linearizability
checker. Our experiments with over nine implementations of concurrent sets,
including Intel's TBB library, show that our linearizability checker is one
order of magnitude faster and/or more space efficient than the state-of-the-art
algorithm.
Comment: 15 pages, 2 figures.
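The locality principle the abstract builds on can be sketched directly: split a multi-object history into per-object subhistories and check each independently. This is an illustrative sketch of that splitting step only (the dictionary-based history format and the pluggable `check_one` checker are assumptions, not the paper's implementation):

```python
from collections import defaultdict

# Herlihy and Wing's locality principle: a history over several
# objects is linearizable iff each per-object subhistory is.
# A checker can therefore split the input and check much smaller
# histories; P-compositionality pushes the same splitting idea
# into operations on a *single* object.

def split_by_object(history):
    """Group operations by the object they act on."""
    parts = defaultdict(list)
    for op in history:
        parts[op["object"]].append(op)
    return parts

def check_compositionally(history, check_one):
    """Apply a single-object linearizability checker to each part.

    By locality, the whole history is linearizable iff every
    per-object subhistory passes check_one.
    """
    return all(check_one(sub) for sub in split_by_object(history).values())

history = [
    {"object": "set_a", "op": "add", "arg": 1},
    {"object": "set_b", "op": "add", "arg": 2},
    {"object": "set_a", "op": "contains", "arg": 1},
]
print(sorted(split_by_object(history)))  # ['set_a', 'set_b']
```

Because checking cost typically grows much worse than linearly in history length, checking two halves is far cheaper than checking the whole, which is the source of the speedups the paper reports.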
An Innovative Approach to Achieve Compositionality Efficiently using Multi-Version Object Based Transactional Systems
In the modern era of multicore processors, utilizing the cores effectively is
a challenging task. Synchronization and communication among processors involve
high costs. Software transactional memory systems (STMs) address these issues
and provide better concurrency, freeing the programmer from worrying about
consistency issues. Another advantage of STMs is that they facilitate
compositionality of concurrent programs with great ease: different concurrent
operations that must form a single atomic unit are composed by encapsulating
them in a single transaction. In this paper, we introduce a new STM system,
multi-version object-based STM (MVOSTM), which combines both of these ideas to
harness greater concurrency in STMs. As the name suggests, MVOSTM works at a
higher level and maintains multiple versions corresponding
to each key. We have developed MVOSTM with the unlimited number of versions
corresponding to each key. In addition to that, we have developed garbage
collection for MVOSTM (MVOSTM-GC) to delete unwanted versions corresponding to
the keys to reduce traversal overhead. MVOSTM provides greater concurrency
while reducing the number of aborts and it ensures compositionality by making
the transactions atomic. Here, we have used MVOSTM for the list and hash-table
data structures, as list-MVOSTM and HT-MVOSTM. Experimental results show that
list-MVOSTM achieves a speedup of almost two to twenty times over existing
state-of-the-art list-based STMs (Trans-list, Boosting-list, NOrec-list,
list-MVTO, and list-OSTM). HT-MVOSTM shows a significant performance gain of
almost two to nineteen times over existing state-of-the-art hash-table-based
STMs (ESTM, RWSTMs, HT-MVTO, and HT-OSTM). MVOSTM with list and hash-table
shows the fewest aborts among all the existing STM algorithms, and MVOSTM
satisfies opacity as its correctness criterion.
Comment: 35 pages, 23 figures.
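The core multi-versioning idea, keeping several timestamped versions per key so that readers can be served an older version instead of aborting, can be sketched in a few lines. This is a hedged toy (the class, its methods, and the timestamp scheme are illustrative assumptions, not the MVOSTM algorithm, and it handles no concurrency control):

```python
# Toy sketch of multi-versioning per key: each write appends a
# (timestamp, value) version, so a transaction that started at time
# ts can read the newest version not younger than ts rather than
# aborting when a later write exists.

class MultiVersionStore:
    def __init__(self):
        # key -> list of (timestamp, value), in timestamp order
        # (assumes writes arrive with increasing timestamps).
        self.versions = {}

    def write(self, key, value, ts):
        self.versions.setdefault(key, []).append((ts, value))

    def read(self, key, ts):
        """Latest version with timestamp <= ts, else None."""
        candidates = [v for t, v in self.versions.get(key, []) if t <= ts]
        return candidates[-1] if candidates else None

    def gc(self, oldest_active_ts):
        """Garbage-collect versions no active transaction can still read.

        Keeps the newest version at or before oldest_active_ts (some
        reader may still need it) plus everything after it, mirroring
        the role of MVOSTM-GC in the paper.
        """
        for key, vs in self.versions.items():
            readable = [i for i, (t, _) in enumerate(vs)
                        if t <= oldest_active_ts]
            cut = readable[-1] if readable else 0
            self.versions[key] = vs[cut:]

store = MultiVersionStore()
store.write("k", "v1", ts=1)
store.write("k", "v2", ts=5)
print(store.read("k", ts=3))  # 'v1': the ts=5 version is too new
print(store.read("k", ts=7))  # 'v2'
```

The `gc` method shows why unbounded versions need the companion garbage collector the paper describes: without it, every key's version list, and hence read-time traversal cost, grows without limit.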