2,885 research outputs found

    Tuning the Level of Concurrency in Software Transactional Memory: An Overview of Recent Analytical, Machine Learning and Mixed Approaches

    Synchronization transparency offered by Software Transactional Memory (STM) must not come at the expense of run-time efficiency, which demands that the STM designer include mechanisms properly oriented to performance and other quality indexes. In particular, one core issue in STM is exploiting parallelism while avoiding thrashing phenomena due to excessive transaction rollbacks, caused by excessively high contention on logical resources, namely concurrently accessed data portions. One means to address run-time efficiency is to dynamically determine the best-suited level of concurrency (number of threads) for running the application (or specific application phases) on top of the STM layer. Too low a concurrency level hampers parallelism; conversely, over-dimensioning the concurrency level can give rise to the aforementioned thrashing phenomena caused by excessive data contention, which also reduces energy efficiency. In this chapter we overview a set of recent techniques aimed at building “application-specific” performance models that can be exploited to dynamically tune the level of concurrency to the best-suited value. Although these techniques share some base concepts in modeling system performance versus degree of concurrency, they rely on disparate methods, such as machine learning or analytic methods (or combinations of the two), and achieve different tradeoffs between the precision of the performance model and the latency of model instantiation. Implications of the different tradeoffs in real-life scenarios are also discussed.
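
    A minimal sketch of the kind of dynamic concurrency tuning described above, assuming a hypothetical measure_throughput(k) hook that runs the next sampling interval with k threads and reports committed transactions per second; the surveyed techniques replace this trial-and-error probing with learned or analytical performance models:

        # Hill-climbing tuner for the STM concurrency level (illustrative only).
        def tune_concurrency(measure_throughput, max_threads, step=1, rounds=20):
            k = 1
            best_k, best_tput = k, measure_throughput(k)
            for _ in range(rounds):
                # Probe a higher level; keep it only if throughput improves,
                # otherwise back off to avoid contention-induced thrashing.
                candidate = min(k + step, max_threads)
                tput = measure_throughput(candidate)
                if tput > best_tput:
                    best_k, best_tput = candidate, tput
                    k = candidate
                else:
                    k = max(1, k - step)
            return best_k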

    Performance models of concurrency control protocols for transaction processing systems

    Transaction processing plays a key role in many IT infrastructures. It is widely used in a variety of contexts, spanning from database management systems to concurrent programming tools. Transaction processing systems rely on concurrency control protocols, which allow them to process transactions concurrently while preserving essential properties such as isolation and atomicity. Performance is a critical aspect of transaction processing systems, and it is unavoidably affected by concurrency control. For this reason, methods and techniques to assess and predict the performance of concurrency control protocols are of interest to many IT players, including application designers, developers and system administrators. Analyzing and properly understanding the impact of these protocols on system performance requires quantitative approaches. Analytical modeling is a practical approach for building cost-effective computer system performance models, enabling us to quantitatively describe the complex dynamics characterizing these systems. In this dissertation we present analytical performance models of concurrency control protocols. We deal with both traditional transaction processing systems, such as database management systems, and emerging ones, such as transactional memories. The analysis focuses on widely used protocols, providing detailed performance models and validation studies. In addition, we propose new modeling approaches, which also broaden the scope of our study towards a more realistic, application-oriented performance analysis.
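
    As a toy illustration of the kind of analytical modeling the dissertation relies on (not its actual model), the following estimates the conflict probability of an optimistic scheme under uniform data access, with N concurrent transactions each touching k of D items; the parameter values are arbitrary:

        # Crude independence-based approximation of per-transaction conflict
        # probability under optimistic concurrency control with uniform access.
        def conflict_probability(N, k, D):
            # Chance that a single accessed item is also touched by at least one
            # of the other N-1 transactions (each touches a given item with
            # probability roughly k/D).
            p_single = 1 - (1 - k / D) ** (N - 1)
            # A transaction conflicts if any of its k accesses collides.
            return 1 - (1 - p_single) ** k

        print(conflict_probability(N=50, k=10, D=10_000))  # roughly 0.39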

    Analytical considerations for transactional cache protocols

    Since the early nineties, transactional cache protocols have been intensively studied in the context of client-server database systems. Research has developed a variety of protocols and compared different aspects of their quality using simulation systems and semi-standardized benchmarks. Unfortunately, none of the related publications substantiated their experimental findings with thorough analytical considerations. We try to close this gap, at least partially, by presenting comprehensive and highly accurate analytical formulas for quality aspects of two important transactional cache protocols. We consider the non-adaptive variants of the "Callback Read Protocol" (CBR) and the "Optimistic Concurrency Control Protocol" (OCC). The paper studies their cache filling size and the number of messages they produce for the so-called UNIFORM workload. In many cases the cache filling size may differ considerably from a given maximum cache size, a phenomenon which has been overlooked by former publications. Moreover, for OCC we also give a highly accurate formula which forecasts the transaction abort rate. All formulas are compared against corresponding simulation results in order to validate their correctness.
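
    The cache-filling-size observation can be illustrated with a standard occupancy calculation (an illustrative approximation, not the paper's formulas): after n uniform references to a database of D pages, the expected number of distinct pages a client has touched can stay well below the configured maximum cache size C:

        # Expected number of distinct items referenced after n uniform accesses.
        def expected_distinct_items(n, D):
            return D * (1 - (1 - 1 / D) ** n)

        def expected_cache_filling(n, D, C):
            # The cache never holds more than C pages, so the filling size is capped.
            return min(expected_distinct_items(n, D), C)

        # ~1393 distinct pages, well below the 2,000-page maximum cache size.
        print(expected_cache_filling(n=1_500, D=10_000, C=2_000))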

    Exploiting method semantics in client cache consistency protocols for object-oriented databases

    PhD Thesis. Data-shipping systems are commonly used in client-server object-oriented databases. They are intended to utilise clients' resources and improve scalability by allowing clients to run transactions locally after fetching the required database items from the database server. A consequence of this is that a database item can be cached at more than one client, which raises issues regarding client cache consistency and concurrency control. A number of client cache consistency protocols have been studied, and some approaches to concurrency control for object-oriented databases have been proposed. Existing client consistency protocols, however, do not consider method semantics in concurrency control. This study proposes a client cache consistency protocol in which method semantics can be exploited in concurrency control. It identifies issues regarding the use of method semantics for the protocol and investigates its performance using simulation. The performance results show that this can yield performance gains over existing protocols. The study also shows the potential benefits of an asynchronous version of the protocol.
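
    A sketch of the general idea behind exploiting method semantics, using commutativity to decide whether two method invocations on the same cached object conflict (the method table and names are illustrative, not the thesis protocol):

        # Two invocations may run concurrently if they commute; unknown pairs
        # conservatively fall back to a conflict, like a plain read/write lock.
        COMMUTES = {
            ("deposit", "deposit"): True,          # final balance is the same either way
            ("deposit", "get_balance"): False,     # the read observes the deposit
            ("get_balance", "get_balance"): True,  # pure reads never conflict
        }

        def conflicts(method_a, method_b):
            key = tuple(sorted((method_a, method_b)))
            return not COMMUTES.get(key, False)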

    When Private Blockchain Meets Deterministic Database

    Private blockchain, as a replicated transactional system, shares many commonalities with distributed databases. However, the intimacy between private blockchains and deterministic databases has never been studied. In essence, private blockchains and deterministic databases both ensure replica consistency through determinism. In this paper, we present a comprehensive analysis to uncover the connections between private blockchains and deterministic databases. While private blockchains have only recently started to pursue deterministic transaction execution, deterministic databases have studied deterministic concurrency control protocols for almost a decade. This motivates us to propose Harmony, a novel deterministic concurrency control protocol designed for blockchain use. We use Harmony to build a new relational blockchain, namely HarmonyBC, which features low abort rates, hotspot resiliency, and inter-block parallelism, all of which are especially important for disk-oriented blockchains. Empirical results on Smallbank, YCSB, and TPC-C show that HarmonyBC offers 2.0x to 3.5x better throughput than state-of-the-art private blockchains.
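
    The determinism both kinds of system share can be pictured with a small replay sketch (illustrative only; Harmony's actual protocol is more involved): every replica applies the same block-ordered batch of transactions, so replicas converge without a commit-time vote, and a deterministic concurrency control protocol parallelizes this while preserving the block-order outcome:

        # Deterministic replay of one block on a replica.
        def apply_block(state, ordered_txns):
            # ordered_txns: (txn_id, apply_fn) pairs, already sequenced by the
            # block order agreed through consensus.
            for txn_id, apply_fn in ordered_txns:
                apply_fn(state)   # same order on every replica => same state
            return state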

    Testing the dependability and performance of group communication based database replication protocols

    Database replication based on group communication systems has recently been proposed as an efficient and resilient solution for large-scale data management. However, its evaluation has been conducted either on simplistic simulation models, which fail to assess concrete implementations, or on complete system implementations, which are costly to test with realistic large-scale scenarios. This paper presents a tool that combines implementations of the replication and communication protocols under study with simulated network, database engine, and traffic generator models. Replication components can therefore be subjected to realistic large-scale loads in a variety of scenarios, including fault injection, while at the same time providing global observation and control. The paper first shows how the model is configured and validated to closely reproduce the behavior of a real system, and then how it is applied, allowing us to derive interesting conclusions both on replication and communication protocols and on their implementations.
    Fundação para a Ciência e a Tecnologia (FCT) - Project STRONGREP (POSI/CHS/41285/2001)
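
    In the spirit of the tool described above (all names here are made up for illustration), real protocol code can be exercised against a simulated network that adds latency and injects message loss, keeping large-scale and fault-injection scenarios cheap to run:

        import random

        class SimulatedNetwork:
            def __init__(self, latency_ms=5, loss_rate=0.01, seed=42):
                self.latency_ms = latency_ms
                self.loss_rate = loss_rate
                self.rng = random.Random(seed)
                self.pending = []  # (deliver_at_ms, destination, message)

            def send(self, now_ms, destination, message):
                if self.rng.random() < self.loss_rate:
                    return  # fault injection: silently drop the message
                self.pending.append((now_ms + self.latency_ms, destination, message))

            def deliver_due(self, now_ms):
                due = [m for m in self.pending if m[0] <= now_ms]
                self.pending = [m for m in self.pending if m[0] > now_ms]
                return due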