Maintaining the correctness of transactional memory programs
Dissertation submitted for the degree of Doutor em Engenharia Informática.

This dissertation addresses the challenge of maintaining the correctness of transactional memory programs while improving their parallelism with small transactions and relaxed isolation levels.
The efficiency of transactional memory systems depends directly on the level of parallelism, which in turn depends on the conflict rate. A high conflict rate between memory transactions can be addressed by reducing the scope of transactions, but this approach may make the application prone to atomicity violations. Another way to address this issue is to ignore some of the conflicts by using a relaxed isolation level, such as snapshot isolation, at the cost of introducing write-skew serialization anomalies that break the consistency guarantees provided by a stronger consistency property, such as opacity.
In order to tackle the correctness issues raised by atomicity violations and write-skew anomalies, we propose two static analysis techniques: one based on a novel static analysis algorithm that works on a dependency graph of program variables and detects atomicity violations; and a second one based on a shape analysis technique supported by separation logic augmented with heap path expressions, a novel representation based on sequences of heap dereferences, which certifies that a transactional memory program executing under snapshot isolation is free from write-skew anomalies.
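The write-skew anomaly mentioned above can be made concrete with a small, self-contained sketch (a toy simulation, not the dissertation's analysis): two transactions each read both variables from their snapshot, check a shared invariant, and then write to different variables, so snapshot isolation sees no write-write conflict and commits both.

```python
# Toy illustration of a write-skew anomaly under snapshot isolation.
# Two transactions each read x and y from their own snapshot, check the
# invariant x + y >= 1, and then each decrements a *different* variable.
# Their write sets are disjoint, so snapshot isolation lets both commit,
# yet the invariant is violated: no serial order produces this outcome.

store = {"x": 1, "y": 1}

def run_under_snapshot_isolation(t1_writes, t2_writes):
    snap1 = dict(store)          # T1 takes its snapshot
    snap2 = dict(store)          # T2 takes its snapshot, concurrently
    # Both transactions see x + y == 2, so both believe a decrement is safe.
    assert snap1["x"] + snap1["y"] >= 1
    assert snap2["x"] + snap2["y"] >= 1
    # Disjoint write sets -> no write-write conflict -> both commits succeed.
    store.update(t1_writes)
    store.update(t2_writes)

run_under_snapshot_isolation({"x": 0}, {"y": 0})
print(store["x"] + store["y"])   # 0: the invariant x + y >= 1 is broken
```

A serializable (or opaque) execution would abort one of the two transactions, since either serial order would see the other's decrement before checking the invariant.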
The evaluation of the runtime execution of a transactional memory algorithm using snapshot isolation requires a framework that allows an efficient implementation of a multi-version algorithm and, at the same time, enables its comparison with other existing transactional memory algorithms. In the Java programming language there was no framework satisfying both these requirements. Hence, we extended an existing software transactional memory framework, which already supported efficient implementations of some transactional memory algorithms, to also support the efficient implementation of multi-version algorithms. The key insight for this extension is the support for storing the transactional metadata adjacent to memory locations. We illustrate the benefits of our approach by analyzing its impact with both single- and multi-version transactional memory algorithms using several transactional workloads.

Fundação para a Ciência e Tecnologia: PhD research grant SFRH/BD/41765/2007, and the research projects Synergy-VM (PTDC/EIA-EIA/113613/2009) and RepComp (PTDC/EIA-EIA/108963/2008).
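The idea of keeping multi-version metadata adjacent to each memory location can be sketched as follows. This is a hypothetical illustration, not the extended framework's actual API: each transactional location carries its own list of timestamped versions, so a reader finds its snapshot-consistent version locally rather than in a global version table.

```python
# Hypothetical sketch: a transactional location that stores its multi-version
# metadata (a list of timestamped versions) adjacent to the data itself.
# A reading transaction returns the newest version no newer than its snapshot.

class VersionedLocation:
    def __init__(self, value, timestamp=0):
        self.versions = [(timestamp, value)]   # (commit timestamp, value), newest last

    def write(self, value, commit_timestamp):
        self.versions.append((commit_timestamp, value))

    def read(self, snapshot_timestamp):
        # Scan backwards for the newest version visible at this snapshot.
        for ts, value in reversed(self.versions):
            if ts <= snapshot_timestamp:
                return value
        raise RuntimeError("no version visible at this snapshot")

loc = VersionedLocation(10)
loc.write(20, commit_timestamp=5)
print(loc.read(snapshot_timestamp=3))   # 10: writes after the snapshot are invisible
print(loc.read(snapshot_timestamp=7))   # 20
```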
Lazy State Determination for SQL databases
Transactional systems have seen various efforts to increase their throughput, mainly
by making use of parallelism and efficient Concurrency Control techniques. Most approaches
optimize the systems' behaviour under high contention.
In this work, we strive towards reducing the system’s overall contention through Lazy
State Determination (LSD). LSD is a new transactional API that leverages futures
to delay the accesses to the Database as much as possible, reducing the amount of time
that transactions require to operate under isolation and, thus, reducing the contention
window.
LSD was shown to be a promising solution for Key-Value Stores. Now, our focus turns
to Relational Database Management Systems, as we attempt to implement and evaluate
LSD in this new setting. This implementation was done through a custom JDBC driver
to minimize required modifications to any external platform.
Results show that the reduction of the contention window effectively improves the
success rate of transactional applications. However, our current implementation exhibits
some performance issues that must be further investigated and addressed.
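The future-based deferral at the heart of LSD can be sketched as follows. The names here are illustrative, not the actual LSD API: reads return futures instead of touching the database, and writes are recorded as deferred functions, so the real accesses only happen at commit time, shrinking the contention window.

```python
# Minimal sketch of the LSD idea: transactional reads return futures and
# writes are deferred, so the actual database accesses are postponed to
# commit time. Class and method names are invented for illustration.

class Future:
    def __init__(self, compute):
        self._compute = compute
        self._resolved = False
        self._value = None

    def get(self):
        if not self._resolved:
            self._value = self._compute()
            self._resolved = True
        return self._value

class LazyTransaction:
    def __init__(self, db):
        self.db = db
        self.deferred = []   # (key, function of current value) pairs

    def read(self, key):
        # Return a future instead of accessing the database now.
        return Future(lambda: self.db[key])

    def write(self, key, fn):
        self.deferred.append((key, fn))

    def commit(self):
        # Only here are the real accesses performed, under isolation.
        for key, fn in self.deferred:
            self.db[key] = fn(self.db[key])

db = {"balance": 100}
tx = LazyTransaction(db)
amount = tx.read("balance")              # no database access yet
tx.write("balance", lambda v: v - 30)    # deferred update
tx.commit()
print(db["balance"])   # 70
```

Because the transaction only holds isolation during `commit`, the window in which it can conflict with concurrent transactions is much smaller than in an eager API.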
Dynamic Partial Order Reduction for Checking Correctness Against Transaction Isolation Levels
Modern applications, such as social networking systems and e-commerce
platforms, are centered around using large-scale databases for storing and
retrieving data. Accesses to the database are typically enclosed in
transactions that allow computations on shared data to be isolated from other
concurrent computations and resilient to failures. Modern databases trade
isolation for performance. The weaker the isolation level is, the more
behaviors a database is allowed to exhibit, and it is up to the developer to
ensure that their application can tolerate those behaviors.
In this work, we propose stateless model checking algorithms for studying
correctness of such applications that rely on dynamic partial order reduction.
These algorithms work for a number of widely-used weak isolation levels,
including Read Committed, Causal Consistency, Snapshot Isolation, and
Serializability. We show that they are complete, sound and optimal, and run
with polynomial memory consumption in all cases. We report on an implementation
of these algorithms in the context of Java Pathfinder applied to a number of
challenging applications drawn from the literature of distributed systems and
databases.

Comment: Submission to PLDI 202
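To see what such a checker decides, here is a brute-force baseline (not the paper's algorithm, which prunes equivalent interleavings via dynamic partial order reduction): enumerate all serial orders of a set of transactions and ask whether an observed outcome is explainable by any of them.

```python
# Illustrative brute-force serializability check: an outcome is serializable
# iff some serial order of the transactions produces it. Stateless model
# checking with dynamic partial order reduction improves on this baseline
# by exploring only inequivalent interleavings.

from itertools import permutations

def serializable_outcomes(transactions, initial):
    outcomes = set()
    for order in permutations(transactions):
        state = dict(initial)
        for tx in order:
            tx(state)
        outcomes.add(tuple(sorted(state.items())))
    return outcomes

def t1(s): s["x"] = s["y"] + 1   # T1 reads y, writes x
def t2(s): s["y"] = s["x"] + 1   # T2 reads x, writes y

allowed = serializable_outcomes([t1, t2], {"x": 0, "y": 0})
# Serializability permits only {x:1, y:2} or {x:2, y:1}. The "both read 0"
# outcome {x:1, y:1} is not serializable, although weaker isolation levels
# such as snapshot isolation do permit it.
print((("x", 1), ("y", 1)) in allowed)   # False
```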
Using Formal Methods to Verify Transactional Abstract Concurrency Control
Concurrent application design and implementation is more important than ever in today's multi-core processor world. Transactional Memory (TM) run-times offer several mechanisms for incorporating non-transactional operations, each with its own particular advantages and disadvantages. However, these techniques each need some extra information to 'glue' the non-transactional operation into a transactional context. At the most general level, non-transactional code must be decorated in such a way that the TM run-time can determine how those non-transactional operations commute with one another, and how to 'undo' the non-transactional operations in case the run-time needs to abort a software transaction. The TM run-time trusts that these programmer-provided annotations are correct. Therefore, if an implementor needs to employ one of these transactional 'escape hatches', it is crucially important that their concurrency control annotations be correct. However, reasoning about the commutativity of data structure operations is often challenging, and increasing the burden on the programmer with a proof requirement does not simplify the task of concurrent programming. There is a way to leverage the structure that these TM extensions require to greatly reduce the burden on the programmer. If the programmer could describe the abstract state of the data structure and then reason about it with as much machine assistance as possible, there would be much less opportunity for error. Abstract state is preferable to more concrete state because it permits the programmer to use different concrete implementations of the same abstract data type. Also, some TM extensions, such as open nesting, can handle concrete state conflicts without programmer intervention, making the abstract state the appropriate state for reasoning about commutativity.
A solution to the problem of specifying and verifying the concurrency properties of abstract data structures is the subject of this thesis. We will describe a new language, ACCLAM, for describing the abstract state of a data structure and reasoning about its concurrency control properties. This thesis also describes a tool that can process ACCLAM descriptions into a machine-verifiable form (they are converted to a SAT problem). We also provide a more detailed overview of transactional memory and its more popular extensions, a detailed semantic description of ACCLAM, a set of example data structure models, and the results of processing those examples with the language processing tool.
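The commutativity reasoning described above can be illustrated with a small executable check (not ACCLAM itself, whose verification goes through a SAT encoding): two operations on an abstract set state commute when applying them in either order yields the same abstract state.

```python
# Illustrative commutativity check over abstract set state: apply two
# operations in both orders and compare the resulting states. This is the
# kind of fact a TM run-time needs in order to let two transactions'
# operations proceed concurrently under extensions such as open nesting.

def commute(op_a, op_b, state):
    s1 = op_b(op_a(set(state)))
    s2 = op_a(op_b(set(state)))
    return s1 == s2

add_1 = lambda s: s | {1}        # abstract add(1)
add_2 = lambda s: s | {2}        # abstract add(2)
remove_1 = lambda s: s - {1}     # abstract remove(1)

print(commute(add_1, add_2, set()))     # True: adds of distinct keys commute
print(commute(add_1, remove_1, set()))  # False: add(1) and remove(1) conflict
```

A full treatment would also compare the operations' return values, not just the final state; checking states alone keeps the sketch short.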
A self-healing framework for general software systems
Modern systems must guarantee high reliability, availability, and efficiency. Their complexity, exacerbated by the dynamic integration with other systems, the use of third-party services and the various different environments where they run, challenges development practices, tools and testing techniques. Testing cannot identify and remove all possible faults, thus faulty conditions may escape verification and validation activities and manifest themselves only after the system deployment. To cope with those failures, researchers have proposed the concept of self-healing systems. Such systems have the ability to examine their failures and to automatically take corrective actions. The idea is to create software systems that can integrate the knowledge that is needed to compensate for the effects of their imperfections. This knowledge is usually codified into the systems in the form of redundancy. Redundancy can be deliberately added into the systems as part of the design and the development process, as it occurs for many fault tolerance techniques. Although this kind of redundancy is widely applied, especially for safety-critical systems, it is, however, generally too expensive for general-purpose software systems. We have some evidence that modern software systems are characterized by a different type of redundancy, which is not deliberately introduced but is naturally present due to the modern modular software design. We call it intrinsic redundancy. This thesis proposes a way to use the intrinsic redundancy of software systems to increase their reliability at a low cost. We first study the nature of the intrinsic redundancy to demonstrate that it actually exists. We then propose a way to express and encode such redundancy and an approach, Java Automatic Workaround, to exploit it automatically and at runtime to avoid system failures.
Fundamentally, the Java Automatic Workaround approach replaces some failing operations with alternative operations that are semantically equivalent in terms of the expected results and the developer's intent, but that may have some syntactic difference that can ultimately overcome the failure. We qualitatively discuss the reasons for the presence of the intrinsic redundancy, and we quantitatively study four large libraries to show that such redundancy is indeed a characteristic of modern software systems. We then develop the approach into a prototype and evaluate it with four open source applications. Our studies show that the approach effectively exploits the intrinsic redundancy in avoiding failures automatically and at runtime.
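The workaround mechanism can be sketched as follows. This is a hypothetical illustration of the idea, not the Java Automatic Workaround implementation: when an operation fails, equivalent alternative sequences, drawn from the library's intrinsic redundancy, are tried in turn.

```python
# Hypothetical sketch of the automatic-workaround idea: on failure, retry
# with a semantically equivalent sequence of calls. The rewrite table and
# operations below are invented for illustration.

def with_workarounds(primary, alternatives):
    def run(*args):
        try:
            return primary(*args)
        except Exception:
            for alt in alternatives:      # try each equivalent sequence
                try:
                    return alt(*args)
                except Exception:
                    continue
            raise                          # no workaround masked the failure
    return run

def buggy_insert_all(target, items):
    raise RuntimeError("simulated fault in a bulk-insert operation")

def insert_one_by_one(target, items):
    # Same developer intent: the target ends up containing all the items.
    for item in items:
        target.append(item)
    return target

safe_insert = with_workarounds(buggy_insert_all, [insert_one_by_one])
result = safe_insert([], [1, 2, 3])
print(result)   # [1, 2, 3]: the failure is masked by the equivalent sequence
```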
Speculative execution by using software transactional memory
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa for the degree of Mestre em Engenharia Informática.

Many programs sequentially execute operations that take a long time to complete. Some of these operations may return a highly predictable result. If this is the case, speculative execution can improve the overall performance of the program.
Speculative execution is the execution of code whose result may not be needed. Generally it is used as a performance optimization. Instead of waiting for the result of a costly operation, speculative execution can be used to speculate on the operation's most probable result and continue executing based on this speculation. If the speculation is later confirmed to be correct, time has been gained. Otherwise, if the speculation is incorrect, the execution based on the speculation must abort and re-execute with the correct result.
In this dissertation we propose the design of an abstract process to add speculative execution to a program by doing source-to-source transformation. This abstract process is used in the definition of a mechanism and methodology that enable programmers to add speculative execution to the source code of programs. The abstract process is also used in the design of an automatic source-to-source transformation process that adds speculative execution to existing programs without user intervention. Finally, we also evaluate the performance impact of introducing speculative execution in database clients.
Existing proposals for mechanisms to add speculative execution sacrifice portability in favor of performance; some were designed to be implemented at the kernel or hardware level. The process and mechanisms we propose in this dissertation can add speculative execution to the source code of a program, independently of the kernel or hardware that is used.
From our experiments we have concluded that database clients can improve their performance by using speculative execution. Nothing in the system we propose limits it to the scope of database clients. Although this was the scope of the case study, we strongly believe that other programs can benefit from the proposed process and mechanisms for introducing speculative execution.
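The speculate-validate-reexecute cycle described above can be sketched sequentially (the dissertation runs the continuation speculatively in parallel with the slow operation; all names here are illustrative):

```python
# Sketch of the speculation pattern: predict the result of a slow operation,
# run the continuation on the prediction, then validate against the actual
# result and re-execute the continuation if the prediction was wrong.
# This is a sequential simulation of a mechanism that would normally
# overlap the continuation with the slow operation.

def speculate(slow_op, predict, continuation):
    predicted = predict()
    speculative_result = continuation(predicted)   # run ahead on the guess
    actual = slow_op()                             # the real, costly result
    if actual == predicted:
        return speculative_result                  # speculation pays off
    return continuation(actual)                    # abort and re-execute

def slow_query():
    return 0   # stands in for a costly database call that usually returns 0

result = speculate(slow_query,
                   predict=lambda: 0,
                   continuation=lambda rows: rows + 1)
print(result)   # 1
```

When the prediction is highly accurate, the continuation's work overlaps the slow operation and the re-execution path is rarely taken, which is exactly the scenario where the dissertation reports gains for database clients.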
Distributed replicated macro-components
Dissertation submitted for the degree of Mestre em Engenharia Informática.

In recent years, several approaches have been proposed for improving application
performance on multi-core machines. However, exploring the power of multi-core processors
remains complex for most programmers. A Macro-component is an abstraction that tries to tackle this problem by making it possible to explore the power of multi-core machines without requiring changes to the programs. A Macro-component encapsulates several diverse implementations of the same specification. This makes it possible to take the best performance from all operations and/or to distribute load among replicas, while keeping contention and synchronization overhead to a minimum.
In real-world applications, relying on only one server to provide a service leads to
limited fault-tolerance and scalability. To address this problem, it is common to replicate
services in multiple machines. This work addresses the problem of supporting such a
replication solution, while exploring the power of multi-core machines.
To this end, we propose to support the replication of Macro-components in a cluster of
machines. In this dissertation we present the design of a middleware solution for achieving
such goal. Using the implemented replication middleware we have successfully deployed
a replicated Macro-component of in-memory databases, which are known to have scalability
problems in multi-core machines. The proposed solution combines multi-master
replication across nodes with primary-secondary replication within a node, where several
instances of the database are running on a single machine. This approach deals with
the lack of scalability of databases on multi-core systems while minimizing communication
costs, which ultimately results in an overall improvement of the services. Results show
that the proposed solution is able to scale as the number of nodes and clients increases.
They also show that the solution is able to take advantage of multi-core architectures.

RepComp project (PTDC/EIA-EIA/108963/2008).
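The Macro-component abstraction described in this abstract can be sketched as follows (a hypothetical illustration, not the dissertation's middleware): several implementations of one specification live behind a single facade, reads are routed to the replica whose implementation is best for that operation, and writes go to all replicas to keep them equivalent.

```python
# Hypothetical sketch of a Macro-component: one specification, several
# diverse implementations. Reads are served by the replica designated as
# fastest for that operation; writes are applied to every replica so all
# implementations remain equivalent.

class MacroComponent:
    def __init__(self, replicas, best_for):
        self.replicas = replicas     # diverse implementations of one spec
        self.best_for = best_for     # operation name -> index of best replica

    def read(self, op, *args):
        replica = self.replicas[self.best_for.get(op, 0)]
        return getattr(replica, op)(*args)

    def write(self, op, *args):
        # Writes go to every replica to keep them consistent.
        results = [getattr(r, op)(*args) for r in self.replicas]
        return results[0]

# Two "implementations" of a set specification (both stdlib sets here, for
# brevity; in practice they would be different data structures).
mc = MacroComponent([set(), set()], best_for={"__contains__": 1})
mc.write("add", 7)
print(mc.read("__contains__", 7))   # True: served by the designated replica
```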
Correctness and Progress Verification of Non-Blocking Programs
The progression of multi-core processors has inspired the development of concurrency libraries that guarantee safety and liveness properties of multiprocessor applications. The difficulty of reasoning about safety and liveness properties in a concurrent environment has led to the development of tools to verify that a concurrent data structure meets a correctness condition or progress guarantee. However, these tools possess shortcomings regarding the ability to verify a composition of data structure operations. Additionally, verification techniques for transactional memory evaluate correctness based on low-level read/write histories, which is not applicable to transactional data structures that use a high-level semantic conflict detection. In my dissertation, I present tools for checking the correctness of multiprocessor programs that overcome the limitations of previous correctness verification techniques. Correctness Condition Specification (CCSpec) is the first tool that automatically checks the correctness of a composition of concurrent multi-container operations performed in a non-atomic manner. Transactional Correctness tool for Abstract Data Types (TxC-ADT) is the first tool that can check the correctness of transactional data structures. TxC-ADT elevates the standard definitions of transactional correctness to be in terms of an abstract data type, an essential aspect for checking correctness of transactions that synchronize only for high-level semantic conflicts. Many practical concurrent data structures, transactional data structures, and algorithms to facilitate non-blocking programming all incorporate helping schemes to ensure that an operation comprising multiple atomic steps is completed according to the progress guarantee. The helping scheme introduces additional interference by the active threads in the system to achieve the designed progress guarantee. 
Previous progress verification techniques do not accommodate loops whose termination depends on complex behaviors of the interfering threads, making those approaches unsuitable. My dissertation presents the first progress verification technique for non-blocking algorithms that depend on descriptor-based helping mechanisms.
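The descriptor-based helping pattern referred to above can be sketched in a simplified, single-threaded form (real implementations publish the descriptor via atomic compare-and-swap so that concurrent threads race to complete its steps):

```python
# Illustrative descriptor-based helping: a multi-step operation publishes a
# descriptor recording its remaining steps, so any thread that encounters
# it can complete the operation instead of waiting. The helping loop's
# termination under interference is what the verification technique must
# reason about. Single-threaded sketch; names are invented.

class Descriptor:
    def __init__(self, steps):
        self.steps = steps       # remaining atomic steps, in order
        self.done = False

    def help(self):
        # Any thread may call this; each call completes outstanding steps.
        while self.steps:
            step = self.steps.pop(0)
            step()
        self.done = True

log = []
desc = Descriptor([lambda: log.append("link node"),
                   lambda: log.append("swing tail pointer")])

# An interfering thread finds the in-progress descriptor and helps it finish,
# preserving the non-blocking progress guarantee.
desc.help()
print(desc.done, log)
```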