
    Concurrency Control for Transactional Drago

    The granularity of concurrency control has a strong impact on the performance of transactional systems. Concurrency control granularity and data granularity (data size) are usually the same. The effect of this coupling is that if a coarse granularity is used, the overhead of data access (number of disk accesses) is reduced, but so is the degree of concurrency. On the other hand, if a fine granularity is chosen to achieve a higher degree of concurrency (there are fewer conflicts), the cost of data access is increased (each data item is accessed independently, which increases the number of disk accesses). There have been some proposals where data can be dynamically clustered/unclustered to favor either concurrency or data access, depending on how the application uses the data. However, concurrency control and data granularity remain tightly coupled. In Transactional Drago, a programming language for building distributed transactional applications, concurrency control has been uncoupled from data granularity, making it possible to increase the degree of concurrency without degrading data access. This paper describes this approach and its implementation in Ada 95.
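
    The core idea can be illustrated with a small sketch (not Transactional Drago's actual mechanism, which is implemented in Ada 95): keep the unit of disk I/O coarse (a page holding many items) while taking locks per item, so two transactions touching different items on the same page do not conflict. All names below (DecoupledStore, PAGE_SIZE) are hypothetical, and Python is used purely for illustration.

        import threading
        from collections import defaultdict

        PAGE_SIZE = 64  # items per page; illustrative value

        class DecoupledStore:
            def __init__(self):
                self.pages = {}                                # page_id -> item list (coarse I/O unit)
                self.item_locks = defaultdict(threading.Lock)  # item_id -> lock (fine-grained control)

            def _page_for(self, item_id):
                # One "disk access" fetches a whole page of PAGE_SIZE items.
                page_id = item_id // PAGE_SIZE
                return self.pages.setdefault(page_id, [None] * PAGE_SIZE)

            def read(self, item_id):
                with self.item_locks[item_id]:    # conflicts are tracked per item, not per page
                    return self._page_for(item_id)[item_id % PAGE_SIZE]

            def write(self, item_id, value):
                with self.item_locks[item_id]:
                    self._page_for(item_id)[item_id % PAGE_SIZE] = value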

    Maintaining consistency in distributed systems

    In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs and is controlled using mutual exclusion constructs such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems, often within the same application. This leads us to propose an integrated approach that permits applications using virtual synchrony to interoperate with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
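
    For the single-object case, a minimal sketch of how mutual exclusion yields linearizability: every operation takes effect atomically at some point between its invocation and its return (here, inside the critical section). The group-oriented virtual-synchrony layer is beyond the scope of this sketch, and LinearizableRegister is an invented name.

        import threading

        class LinearizableRegister:
            def __init__(self, value=0):
                self._lock = threading.Lock()
                self._value = value

            def read(self):
                with self._lock:    # the critical section is the linearization point
                    return self._value

            def compare_and_swap(self, expected, new):
                # Atomic read-modify-write: appears instantaneous to other threads.
                with self._lock:
                    if self._value == expected:
                        self._value = new
                        return True
                    return False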

    Tuning the Level of Concurrency in Software Transactional Memory: An Overview of Recent Analytical, Machine Learning and Mixed Approaches

    Synchronization transparency offered by Software Transactional Memory (STM) must not come at the expense of run-time efficiency, which demands that the STM designer include mechanisms oriented to performance and other quality indexes. In particular, one core issue in STM is exploiting parallelism while avoiding thrashing phenomena due to excessive transaction rollbacks, caused by overly high levels of contention on logical resources, namely concurrently accessed data portions. One means to address run-time efficiency is to dynamically determine the best-suited level of concurrency (number of threads) for running the application (or specific application phases) on top of the STM layer. At too low a level of concurrency, parallelism is hampered. Conversely, over-dimensioning the concurrency level may give rise to the aforementioned thrashing caused by excessive data contention, which also reduces energy efficiency. In this chapter we survey a set of recent techniques for building “application-specific” performance models that can be exploited to dynamically tune the level of concurrency to the best-suited value. Although these techniques share some base concepts in modeling system performance versus the degree of concurrency, they rely on disparate methods, such as machine learning or analytic methods (or combinations of the two), and achieve different tradeoffs between the precision of the performance model and the latency of model instantiation. Implications of the different tradeoffs in real-life scenarios are also discussed.
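
    As a baseline against which to understand these model-based techniques, here is a sketch of the simplest purely reactive tuner: hill climbing on measured throughput. The surveyed approaches essentially replace the measure_throughput probe below with an analytical or machine-learning performance model, so a good thread count can be predicted rather than searched for online; all names and numbers are illustrative.

        import random

        def measure_throughput(num_threads):
            # Stand-in for running an application phase with num_threads
            # and observing committed transactions per second.
            optimum = 8
            return 1000 - 40 * (num_threads - optimum) ** 2 + random.uniform(-20, 20)

        def tune_concurrency(start=1, max_threads=32, steps=20):
            current = start
            best_tp = measure_throughput(current)
            for _ in range(steps):
                candidate = max(1, min(max_threads, current + random.choice([-1, 1])))
                tp = measure_throughput(candidate)
                if tp > best_tp:    # keep the move only if throughput improved
                    current, best_tp = candidate, tp
            return current

        print(tune_concurrency())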

    Shirakami: A Hybrid Concurrency Control Protocol for Tsurugi Relational Database System

    Modern real-world transactional workloads, such as bills of materials or telecommunication billing, need to process both short and long transactions. Recent concurrency control protocols do not cope with such workloads, since they assume only classical workloads (i.e., YCSB and TPC-C) with relatively short transactions. To this end, we propose a new concurrency control protocol, Shirakami. Shirakami has two sub-protocols: the Shirakami-LTX protocol handles long transactions based on multiversion concurrency control, and the Shirakami-OCC protocol handles short transactions based on Silo. Shirakami naturally integrates the two with a write-preservation method and epoch-based synchronization. Shirakami is a module in the Tsurugi system, a production-purpose relational database system.
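
    A sketch of the hybrid structure described above: each transaction is routed to one sub-protocol or the other according to its expected footprint. None of the class or method names below come from the actual Shirakami code base; the threshold and the dispatch rule are assumptions.

        class OCCProtocol:     # stand-in for a Silo-style optimistic protocol
            def run(self, txn):
                return f"OCC executed {txn['name']}"

        class MVCCProtocol:    # stand-in for the multiversion long-transaction protocol
            def run(self, txn):
                return f"MVCC executed {txn['name']}"

        class HybridScheduler:
            LONG_TXN_THRESHOLD = 1000  # expected operations; illustrative cut-off

            def __init__(self):
                self.occ, self.mvcc = OCCProtocol(), MVCCProtocol()

            def execute(self, txn):
                if txn["expected_ops"] >= self.LONG_TXN_THRESHOLD:
                    return self.mvcc.run(txn)   # long transaction: multiversion path
                return self.occ.run(txn)        # short transaction: optimistic path

        scheduler = HybridScheduler()
        print(scheduler.execute({"name": "billing_batch", "expected_ops": 50000}))
        print(scheduler.execute({"name": "point_lookup", "expected_ops": 3}))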

    Model-Based Proactive Read-Validation in Transaction Processing Systems

    Concurrency control protocols based on read-validation schemes allow transactions that are doomed to abort to keep running until a subsequent validation check reveals them as invalid. These late aborts increase wasted computation and can penalize performance. To counteract this problem, we present an analytical model that predicts the abort probability of transactions handled via read-validation schemes. Our goal is to determine the best-suited points along a transaction's lifetime at which to carry out a validation check. This may allow doomed transactions to be aborted early, thus saving CPU time. We show how to exploit the abort probability predictions returned by the model, in combination with a threshold-based scheme, to trigger read-validations. We also show how this approach can markedly improve performance, delivering up to 14% better turnaround, as demonstrated by experiments carried out with a port of the TPC-C benchmark to Software Transactional Memory.
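
    A sketch of the threshold-based trigger under a deliberately crude model (the paper's analytical model is more refined): if each read is invalidated independently with probability p, the predicted abort probability after n reads is 1 - (1 - p)^n, and the read set is validated as soon as that prediction crosses a threshold. The constants and function names are illustrative. With p = 0.01 and a threshold of 0.25, for example, the early validation fires after 29 reads, since 1 - 0.99^29 is roughly 0.253.

        P_CONFLICT = 0.01   # assumed per-read probability of a concurrent overwrite
        THRESHOLD = 0.25    # predicted abort probability that triggers a validation

        def predicted_abort_probability(num_reads, p=P_CONFLICT):
            # The transaction survives n independent reads with probability (1 - p) ** n.
            return 1.0 - (1.0 - p) ** num_reads

        def on_read(read_set, item, version, current_version):
            # Record the version observed at read time, then decide whether
            # to proactively re-validate the whole read set.
            read_set.append((item, version))
            if predicted_abort_probability(len(read_set)) > THRESHOLD:
                if any(current_version(i) != v for i, v in read_set):
                    raise RuntimeError("early abort: read set already invalidated")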