Multiversion Conflict Notion for Transactional Memory Systems
In recent years, Software Transactional Memory systems (STMs) have garnered significant interest as an elegant alternative for addressing concurrency issues in memory. STM systems take an optimistic approach: multiple transactions are allowed to execute concurrently; on completion, each transaction is validated, and if any inconsistency is observed it is aborted, otherwise it is allowed to commit.
In databases, a class of histories called conflict-serializable (CSR), based on the notion of conflicts, has been identified, whose membership can be verified efficiently. As a result, CSR is the commonly used correctness criterion in databases; in fact, all known single-version schedulers for databases produce a subset of CSR. Similarly, using the notion of conflicts, a correctness criterion, conflict-opacity (co-opacity), which is a sub-class of opacity, can be designed whose membership can be verified in polynomial time. Using this verification mechanism, an efficient STM implementation can be designed that is permissive w.r.t. co-opacity. Further, many STM implementations have been developed that use the notion of conflicts.
By storing multiple versions of each transactional object, multi-version STMs provide more concurrency than single-version STMs. But the main drawback of co-opacity is that it does not admit histories that use multiple versions. This has motivated us to develop a new conflict notion for multi-version STMs. In this paper, we present a new conflict notion, multi-version conflict. Using this conflict notion, we identify a new subclass of opacity, mvc-opacity, that admits multi-versioned histories and whose membership can be verified in polynomial time. We show that co-opacity is a proper subset of this class.
An important requirement that arises while building a multi-version STM system is to decide "on the spot", i.e. to schedule online, which of the various available versions a transaction should read from. Unfortunately, this online scheduling can lead to unnecessary aborts of transactions if not done carefully. To capture the notion of online scheduling that avoids unnecessary aborts in STMs, we identify a new concept, ols-permissiveness, defined w.r.t. a correctness criterion, similar to permissiveness. We show that it is impossible for an STM system that is permissive w.r.t. opacity to avoid such unnecessary aborts, i.e. to satisfy ols-permissiveness w.r.t. opacity. We show that this result holds for mvc-opacity as well.
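The polynomial-time membership test mentioned above is, in the classical database setting, a conflict-graph acyclicity check. A minimal sketch of that check follows; the representation of histories is illustrative, not the paper's:

```python
from collections import defaultdict

def is_conflict_serializable(history):
    """history: list of (txn, op, obj) with op in {'r', 'w'}, in schedule order.
    Builds the conflict graph (edge ti -> tj when an operation of ti precedes
    a conflicting operation of tj on the same object, at least one a write)
    and reports whether the graph is acyclic."""
    edges = defaultdict(set)
    txns = {t for t, _, _ in history}
    for i, (ti, opi, oi) in enumerate(history):
        for tj, opj, oj in history[i + 1:]:
            if ti != tj and oi == oj and 'w' in (opi, opj):
                edges[ti].add(tj)
    # Kahn's algorithm: the graph is acyclic iff every transaction drains.
    indeg = {t: 0 for t in txns}
    for t in edges:
        for u in edges[t]:
            indeg[u] += 1
    ready = [t for t in txns if indeg[t] == 0]
    seen = 0
    while ready:
        t = ready.pop()
        seen += 1
        for u in edges[t]:
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    return seen == len(txns)
```

The same graph-based idea underlies the co-opacity membership check, with the conflict relation adapted to the STM setting.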
Fairness and Approximation in Multi-version Transactional Memory.
Shared memory multi-core systems benefit from transactional memory implementations due to the inherent avoidance of deadlocks and progress guarantees. In this research, we examine how system performance is affected by transaction fairness in scheduling and by the precision of consistency. We first explore the fairness aspect using a Lazy Snapshot (multi-version) Algorithm. Fairness of transaction scheduling aims to balance the load between read-only and update transactions. We implement a fairness mechanism based on machine learning techniques that improves fairness decisions according to the transaction execution history. Experimental analysis shows that the throughput of the Lazy Snapshot Algorithm is improved with machine learning support. We also explore the performance impact of consistency relaxation. In transactional memory, correctness is typically proven with opacity, a precise consistency property that requires a legal serialization of an execution such that transactions do not overlap (atomicity) and read instructions always return the most recent value (legality). In real systems there are situations where system delays do not allow precise consistency, such as in large-scale applications, due to network or other time delays. Thus, we introduce here the notion of approximate consistency in transactional memory. We define K-opacity as a relaxed consistency property where a transaction's read operations may return one of the K most recently written values. In multi-version transactional memory, this allows saving a new object version only once every K object updates, which has two benefits: (i) it reduces space requirements by a factor of K, and (ii) it reduces the number of aborts, since there is a smaller chance of conflicts. In fact, we apply the concept of K-opacity to regular read/write, count, and queue objects, which are common objects used in typical concurrent programs.
We provide formal correctness proofs and we also demonstrate the performance benefits of our approach with experimental analysis. We compare the performance of precisely consistent execution (1-opaque) with different consistency values of K using micro-benchmarks. The results show that increased relaxation of opacity gives higher throughput and decreases the abort rate significantly.
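The "save a version once every K updates" idea can be sketched for a plain read/write register as follows; this is a minimal illustration of the mechanism, not the paper's implementation (class and field names are invented):

```python
class KOpaqueRegister:
    """Illustrative sketch of a K-opaque read/write object. A version is
    snapshotted only once every K writes, so a read returns one of the K
    most recently written values and version storage shrinks by a factor
    of K. With k == 1 this degenerates to precise (1-opaque) behaviour."""

    def __init__(self, k, initial=None):
        self.k = k
        self.pending = 0            # writes since the last saved version
        self.versions = [initial]   # saved versions, oldest first

    def write(self, value):
        self.pending += 1
        if self.pending == self.k:  # save a version every K updates
            self.versions.append(value)
            self.pending = 0

    def read(self):
        # A K-opaque read may return the last *saved* version, which is
        # at most K-1 writes stale.
        return self.versions[-1]
```

For example, with `k=3` a reader keeps seeing the previous snapshot until the third write lands, which is exactly the staleness that K-opacity permits.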
Software lock elision for x86 machine code
More than a decade after becoming a topic of intense research, there is no
transactional memory hardware nor any examples of software transactional memory
use outside the research community. Using software transactional memory in large
pieces of software needs copious source code annotations and often means
that standard compilers and debuggers can no longer be used. At the same time,
overheads associated with software transactional memory fail to motivate
programmers to expend the needed effort to use software transactional
memory. The only way around the overheads in the case of general unmanaged code
is the anticipated availability of hardware support. On the other hand, architects
are unwilling to devote power and area budgets in mainstream microprocessors to
hardware transactional memory, pointing to transactional memory being a
"niche" programming construct. A deadlock has thus ensued that is blocking
transactional memory use and experimentation in the mainstream.
This dissertation covers the design and construction of a software transactional
memory runtime system called SLE_x86 that can potentially break this
deadlock by decoupling transactional memory from programs using it. Unlike most
other STM designs, the core design principle is transparency rather than
performance. SLE_x86 operates at the level of x86 machine code, thereby
becoming immediately applicable to binaries for the popular x86
architecture. The only requirement is that the binary synchronise using known
locking constructs or calls such as those in Pthreads or OpenMP
libraries. SLE_x86 provides speculative lock elision (SLE) entirely in
software, executing critical sections in the binary using transactional
memory. Optionally, the critical sections can also be executed without using
transactions by acquiring the protecting lock.
The dissertation makes a careful analysis of the impact on performance due to
the demands of the x86 memory consistency model and the need to transparently
instrument x86 machine code. It shows that both of these problems can be
overcome to reach a reasonable level of performance, where transparent
software transactional memory can perform better than a lock. SLE_x86 can
ensure that programs are ready for transactional memory in any form, without
being explicitly written for it.
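The speculative behaviour described above can be pictured as a retry loop: attempt the critical section transactionally, and acquire the real lock only if speculation keeps failing. A minimal sketch of that control flow follows; the function names and retry bound are illustrative, and SLE_x86 itself operates on x86 machine code rather than at this level:

```python
import threading

MAX_ATTEMPTS = 3  # illustrative retry bound before giving up on speculation

def elide_lock(lock, critical_section, stm_try):
    """Run critical_section speculatively via stm_try, which is assumed to
    return True on a successful transactional commit and False on a
    conflict-induced abort. Only after MAX_ATTEMPTS failed speculations is
    the protecting lock actually acquired (the pessimistic fallback)."""
    for _ in range(MAX_ATTEMPTS):
        if stm_try(critical_section):
            return 'elided'      # committed transactionally; lock never taken
    with lock:                   # fallback: execute under the real lock
        critical_section()
    return 'locked'
```

The fallback path is what makes the scheme transparent: a binary whose critical sections never speculate successfully still runs correctly under its original lock.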
Starvation Freedom in Multi-Version Transactional Memory Systems
Software Transactional Memory systems (STMs) have garnered significant interest as an elegant alternative for addressing synchronization and concurrency issues with multi-threaded programming in multi-core systems. In order for STMs to be efficient, they must guarantee some progress properties. This work explores the notion of starvation-freedom in Software Transactional Memory systems (STMs). An STM system is said to be starvation-free if, whenever every thread invoking a transaction gets the opportunity to take steps (due to the presence of a fair scheduler), the transaction eventually commits. A few starvation-free algorithms have been proposed in the literature in the context of single-version STM systems. These algorithms work on the basis of priority. But the drawback of this approach is that if a set of high-priority transactions becomes slow, they can cause several other transactions to abort. Multi-version STMs maintain multiple versions of each transactional object, or t-object. By storing multiple versions, these systems can achieve greater concurrency. In this paper, we propose a multi-version starvation-free STM, KSFTM, which, as the name suggests, achieves starvation-freedom while storing K versions of each t-object. Here K is an input parameter fixed by the application programmer depending on the requirement. Our algorithm is dynamic and can support different values of K ranging from 1 to infinity. If K is infinity, then there is no limit on the number of versions, but a separate garbage-collection mechanism is required to collect unwanted versions. On the other hand, when K is 1, it becomes the same as a single-version STM system. We prove the correctness and starvation-freedom properties of the proposed KSFTM algorithm. To the best of our knowledge, this is the first multi-version STM system that is correct and satisfies starvation-freedom as well.
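The K-version bookkeeping described above can be sketched as a per-object list of timestamped versions: a read returns the latest version not newer than the reader's timestamp, and the oldest version is discarded once K is exceeded (so a reader needing an evicted version must abort). This is an illustrative sketch of the data structure only, not the KSFTM algorithm:

```python
import bisect

class KVersionTObject:
    """Sketch of a t-object keeping at most K timestamped versions.
    Names and representation are invented for illustration."""

    def __init__(self, k, initial=0):
        self.k = k
        self.versions = [(0, initial)]   # sorted (commit_timestamp, value)

    def write(self, ts, value):
        bisect.insort(self.versions, (ts, value))
        if len(self.versions) > self.k:  # K exceeded: evict oldest version
            self.versions.pop(0)

    def read(self, ts):
        """Value of the latest version with timestamp <= ts, or None if
        that version was already evicted (the reader must abort)."""
        stamps = [t for t, _ in self.versions]
        i = bisect.bisect_right(stamps, ts) - 1
        return self.versions[i][1] if i >= 0 else None
```

With more versions retained (larger K), older readers find a suitable version more often, which is the source of the extra concurrency the abstract refers to.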
The PCL Theorem: Transactions cannot be Parallel, Consistent, and Live.
We show that it is impossible to design a transactional memory system which ensures parallelism, i.e. transactions do not need to synchronize unless they access the same application objects, while ensuring very little consistency, i.e. a consistency condition, called weak adaptive consistency, introduced here, which is weaker than snapshot isolation, processor consistency, and any other consistency condition stronger than them (such as opacity, serializability, causal serializability, etc.), and very little liveness, i.e. that transactions eventually commit if they run solo.
Software Transactional Memory Building Blocks
Exploiting thread-level parallelism has become a part of mainstream programming in recent years. Many approaches to parallelization require threads executing in parallel to also synchronize occasionally (i.e., coordinate concurrent accesses to shared state). Transactional Memory (TM) is a programming abstraction that provides the concept of database transactions in the context of programming languages such as C/C++. This allows programmers to only declare which pieces of a program synchronize without requiring them to actually implement synchronization and tune its performance, which in turn makes TM typically easier to use than other abstractions such as locks.
I have investigated and implemented the building blocks that are required for a high-performance, practical, and realistic TM. They host several novel algorithms and optimizations for TM implementations, both for current hardware and future hardware extensions for TM, and are being used in or have influenced commercial TM implementations such as the TM support in GCC.
The cartography of cell motion
Cell motility plays an important role throughout biology, the polymerisation of actin being fundamental in producing protrusive force. However, it is increasingly apparent that intracellular pressure, arising from myosin-II contraction, is a co-driver of motility. In its extreme form, pressure manifests itself as hemispherical protrusions, referred to as blebs, where membrane is torn from the underlying cortex. Although many components and signalling pathways have been identified, we lack a complete model of motility, particularly of the regulation and mechanics of blebbing. Advances in microscopy are continually improving the quality of time series image data, but the absence of high-throughput tools for extracting quantitative numbers remains an analysis bottleneck. We develop the next generation of the successful QuimP software designed for automated analysis of motile cells, producing quantitative spatio-temporal maps of protein distributions and changes in cell morphology. Key to QuimP's new functionality, we present the Electrostatic Contour Migration Method (ECMM) that provides high-resolution tracking of local deformation with better uniformity and efficiency than rival methods. Photobleaching experiments are used to give insight into the accuracy and limitations of in silico membrane tracking algorithms. We employ ECMM to build an automated protrusion tracking method (ECMM-APT) sensitive not only to pseudopodia, but also to the complex characteristics of high-speed blebs. QuimP is applied to characterising the protrusive behaviour of Dictyostelium, induced to bleb by imaging under agar. We show blebs are characterised by distinct speed-displacement distributions, can reach speeds of 4.9μm/sec, and preferentially form at the flanks during chemotaxis. Significantly, blebs emerge from flat to concave membrane regions, suggesting curvature is a major determinant of bleb location, size, and speed. We hypothesise that actin-driven pseudopodia at the leading edge induce changes in curvature and therefore membrane tension, positive curvature inhibiting blebbing at the very front, and negative curvature enhancing blebbing at the sides. This possibly provides the necessary space for rear advancement. Furthermore, bleb kymographs reveal a retrograde shift of the cortex at the point of bleb expansion, suggesting inward contractive forces acting on the cortex even at concave regions. Strains deficient in phospholipid signalling show impaired chemotaxis and blebbing. Finally, we present further applications of QuimP; for example, we conclusively show that dishevelled is not polarised during Xenopus gastrulation, contrary to hypotheses in the literature.