492 research outputs found
Speculative Concurrency Control for Real-Time Databases
In this paper, we propose a new class of concurrency control algorithms that is especially suited to real-time database applications. Our approach relies on the use of (potentially) redundant computations to ensure that serializable schedules are found and executed as early as possible, thus increasing the chances of a timely commitment of transactions with strict timing constraints. Owing to this nature, we term our concurrency control algorithms speculative, and we collectively call them Speculative Concurrency Control (SCC) algorithms. SCC algorithms combine the advantages of both Pessimistic and Optimistic Concurrency Control (PCC and OCC) algorithms, while avoiding their disadvantages. On the one hand, SCC resembles PCC in that conflicts are detected as early as possible, thus making alternative schedules available in a timely fashion in case they are needed. On the other hand, SCC resembles OCC in that it allows conflicting transactions to proceed concurrently, thus avoiding unnecessary delays that may jeopardize their timely commitment.
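The two-sided behaviour described above (OCC-style optimistic progress combined with PCC-style early conflict detection) can be sketched in a few lines. Everything below, class names included, is an illustrative toy under assumed semantics, not the paper's actual algorithm:

```python
class Transaction:
    def __init__(self, tid):
        self.tid = tid
        self.read_set = set()
        self.write_set = set()

def conflicts(a, b):
    """Read-write or write-write overlap between two transactions."""
    return bool(a.read_set & b.write_set or
                a.write_set & b.read_set or
                a.write_set & b.write_set)

class SpeculativeScheduler:
    """Transactions run optimistically, but the first sign of a conflict
    forks a standby 'shadow' ordering (PCC-style early detection), so an
    alternative serializable schedule is ready if validation fails."""
    def __init__(self):
        self.active = []
        self.shadows = {}          # tid -> set of tids it must serialize after

    def begin(self, tid):
        txn = Transaction(tid)
        self.active.append(txn)
        return txn

    def access(self, txn, item, write=False):
        (txn.write_set if write else txn.read_set).add(item)
        for other in self.active:  # detect conflicts as early as possible
            if other is not txn and conflicts(txn, other):
                self.shadows.setdefault(txn.tid, set()).add(other.tid)

    def commit(self, txn):
        # OCC-style validation: commit if no live conflict remains; otherwise
        # switch to the pre-forked shadow instead of restarting from scratch.
        clash = any(conflicts(txn, o) for o in self.active if o is not txn)
        self.active.remove(txn)
        if not clash:
            return "committed"
        return "promoted-shadow" if txn.tid in self.shadows else "aborted"

s = SpeculativeScheduler()
t1, t2 = s.begin(1), s.begin(2)
s.access(t1, "x", write=True)
s.access(t2, "x")                  # read-write conflict: shadow forked for t2
outcome_t2 = s.commit(t2)          # still conflicting, so its shadow takes over
outcome_t1 = s.commit(t1)          # no live conflict left: commits normally
```

The point of the sketch is the timing: the shadow exists before validation is attempted, so a failed validation costs a switch rather than a full restart.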
A Survey of Traditional and Practical Concurrency Control in Relational Database Management Systems
Traditionally, database theory has focused on concepts such as atomicity and serializability, asserting that concurrent transaction management must enable correctness above all else. Textbooks and academic journals detail a vision of unbounded rationality, where reduced throughput because of concurrency protocols is not of tremendous concern. This thesis surveys the traditional basis for concurrency in relational database management systems and contrasts it with actual practice. SQL-92, the current standard for concurrency in relational database management systems, has defined isolation, or allowable concurrency levels, and these are examined. Some ways in which DB2, a popular database, interprets these levels and finesses extra concurrency through performance enhancement are detailed. SQL-92 standardizes de facto relational database management system features. Given this, and a superabundance of articles in professional journals detailing steps for fine-tuning transaction concurrency, the future of performance tuning seems bright, even at the expense of serializability.
Are the practical changes wrought by non-academic professionals killing traditional database concurrency ideals? Not really. Reasoned changes for performance gains advocate compromise: using complex concurrency controls when necessary for the job at hand, and relaxing standards otherwise. The idea of relational database management systems is only twenty years old, and standards are still evolving. Is there still an interplay between tradition and practice? Of course. Current practice uses tradition pragmatically, not idealistically. Academic ideas help drive the systems available for use, and perhaps current practice will now help academic ideas define concurrency control concepts for relational database management systems.
Consistency in a Partitioned Network: A Survey
Recently, several strategies for transaction processing in partitioned distributed database systems with replicated data have been proposed. We survey these strategies in light of the competing goals of maintaining correctness and achieving high availability. Extensions and combinations are then discussed, and guidelines for the selection of a strategy for a particular application are presented.
Implementation of Optimistic Concurrency Control in an E-Commerce Application System Based on a Microservices Architecture Using Kubernetes
Microservices can be implemented through many approaches. One of them is to make each service isolated. To satisfy this isolation, communication is carried out asynchronously, with each service communicating through an event bus. Data duplication occurs frequently because services are isolated, meaning a service cannot fetch data from a database that is not its own; the duplicated data must therefore be kept synchronized across services. A problem arises when scaling: a scaled-out service processes events concurrently, so the execution order of the events may not be preserved. This can leave the value of a piece of data inconsistent across the databases of the individual services. Optimistic Concurrency Control is applied as a solution to this data consistency problem. The resulting solution keeps data values synchronized in every service database even while scaling.
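The version-based synchronization this abstract describes can be sketched roughly as follows; `ServiceReplica` and `drain` are hypothetical names, and a real consumer would re-queue through the event bus rather than a local deque:

```python
from collections import deque

class ServiceReplica:
    """Local database of one service; each replicated record carries a version."""
    def __init__(self):
        self.db = {}               # record_id -> (version, payload)

    def apply(self, record_id, version, payload):
        current = self.db.get(record_id, (0, None))[0]
        if version == current + 1:
            self.db[record_id] = (version, payload)
            return True            # applied in order
        return False               # out of order: caller re-queues the event

def drain(replica, events):
    """Cycle the queue until every event is applied in version order
    (assumes the event stream has no gaps, or this would loop forever)."""
    queue = deque(events)
    while queue:
        event = queue.popleft()
        if not replica.apply(*event):
            queue.append(event)    # optimistic retry instead of locking

replica = ServiceReplica()
drain(replica, [("order-1", 2, "paid"),      # arrives early due to scaling
                ("order-1", 1, "created"),
                ("order-1", 3, "shipped")])
```

The optimistic part is that no consumer ever blocks another: an out-of-order event is simply retried once its predecessor has landed.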
Optimistic concurrency control revisited
Several years ago, optimistic concurrency control gained much attention in the database community. However, two-phase locking was already well established, especially in the relational database market. Concerning traditional database systems, most developers felt that pessimistic concurrency control might not be the best solution for concurrency control, but a well-known and accepted one. With the work on new-generation database systems, however, there has been a revival of optimistic concurrency control (at least a partial one). This paper reconsiders optimistic concurrency control. It lays bare the shortcomings of the original approach and presents some major improvements. Moreover, several techniques are presented which especially support read transactions, with the consequence that the number of backups (transaction restarts) can be decreased substantially. Finally, a general solution for the starvation problem is presented. The solution is perfectly consistent with the underlying optimistic approach.
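For contrast with the improvements the paper promises, the original optimistic scheme it revisits can be condensed into a backward-validation sketch (illustrative names; in OCC terminology a transaction whose validation fails is "backed up", i.e. restarted):

```python
class OCCManager:
    """Classical backward validation: read freely, then at commit time
    check the read set against the write sets of every transaction that
    committed after this one started."""
    def __init__(self):
        self.history = []          # (commit_ts, write_set) of committed txns
        self.ts = 0

    def start(self):
        return {"start_ts": self.ts, "reads": set(), "writes": set()}

    def validate_and_commit(self, txn):
        for commit_ts, writes in self.history:
            if commit_ts > txn["start_ts"] and txn["reads"] & writes:
                return False       # stale read: the transaction is backed up
        self.ts += 1
        self.history.append((self.ts, frozenset(txn["writes"])))
        return True

mgr = OCCManager()
t1, t2 = mgr.start(), mgr.start()
t1["writes"].add("x")
t2["reads"].add("x")
ok1 = mgr.validate_and_commit(t1)  # nothing committed since t1 started
ok2 = mgr.validate_and_commit(t2)  # t1's write of "x" invalidates t2's read
```

The shortcoming the paper targets is visible here: `t2` is a pure reader, yet it is backed up anyway, which is why techniques that spare read transactions reduce the number of backups so substantially.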
ORPE -- A Data Semantics Driven Concurrency Control
This paper presents a concurrency control mechanism that does not follow a 'one concurrency control mechanism fits all needs' strategy. With the presented mechanism, a transaction runs under several concurrency control mechanisms, and the appropriate one is chosen based on the accessed data. For this purpose, the data is divided into four classes based on its access type and usage (semantics). Class O (the optimistic class) implements a first-committer-wins strategy, class R (the reconciliation class) implements a first-n-committers-win strategy, class P (the pessimistic class) implements a first-reader-wins strategy, and class E (the escrow class) implements a first-n-readers-win strategy. Accordingly, the model is called ORPE. The selected concurrency control mechanism may be automatically adapted at run-time according to the current load or a known usage profile. This run-time adaptation allows ORPE to balance the commit rate and the response time even under changing conditions. ORPE outperforms Snapshot Isolation concurrency control in terms of response time by a factor of approximately 4.5 under heavy transactional load (4000 concurrent transactions). As a consequence, the degree of concurrency is 3.2 times higher.
Comment: 20 pages, 7 tables, 15 figures
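Two of the four per-class strategies are easy to make concrete. The sketch below illustrates first-committer-wins and escrow-style first-n-readers-win under assumed semantics; the class and method names are invented, not taken from the paper:

```python
class FirstCommitterWins:
    """Optimistic class: a commit succeeds only if the version that was read
    is still current; later committers of the same version lose."""
    def __init__(self, value):
        self.value, self.version = value, 0

    def read(self):
        return self.value, self.version

    def commit(self, new_value, version_read):
        if version_read != self.version:
            return False           # someone else committed first
        self.value, self.version = new_value, self.version + 1
        return True

class EscrowCounter:
    """Escrow class: up to the available quantity, concurrent decrements are
    granted immediately (the first n readers win); the rest are refused."""
    def __init__(self, quantity):
        self.available = quantity

    def reserve(self, n):
        if n <= self.available:
            self.available -= n
            return True
        return False

row = FirstCommitterWins("a")
_, v0 = row.read()
first = row.commit("b", v0)        # first committer wins
second = row.commit("c", v0)       # same version, second committer loses

stock = EscrowCounter(5)
r1, r2, r3 = stock.reserve(3), stock.reserve(2), stock.reserve(1)
```

Escrow is a natural fit for counters such as stock levels: transactions never see each other's exact values, only a guarantee that their reserved quantity is held.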
Improving Key-Value Database Scalability with Lazy State Determination
Applications keep demanding higher and higher throughput and lower response times from database systems. Databases leverage concurrency, by using both multiple computer systems (nodes) and the multiple cores available in each node, to execute multiple requests (transactions) concurrently.
Executing multiple transactions concurrently requires coordination, which is ensured by the database concurrency control (CC) module. However, excessive control/limitation of concurrency by the CC module negatively impacts the overall performance (latency and throughput) of the database system. The performance limitations imposed by the database CC module can be addressed by exploring new hardware, or by leveraging software-based techniques such as futures and lazy evaluation of transactions.
This is where Lazy State Determination (LSD) shines [43, 42]. LSD proposes a new transactional API that decreases the conflicts between concurrent transactions by enabling the use of futures in both SQL and Key-Value database systems. The use of futures allows LSD to better capture the application semantics and to make more informed decisions on what really constitutes a conflict. These two key insights come together to create a system that provides high throughput in high-contention scenarios.
Our work builds on top of a previous LSD prototype. We identified and diagnosed its shortcomings, and devised and implemented a new prototype that addresses them. We validated our new LSD system and evaluated its behaviour and performance by comparing and contrasting it with the original prototype. Our evaluation showed that the throughput of the new LSD prototype is 3.7× to 4.9× higher, in centralized and distributed settings respectively, while also reducing latency by up to 10 times.
With this work, we provide an LSD-based Key-Value Database System that has better vertical and horizontal scalability, and can take advantage of systems with higher core
count or high number of nodes, in centralized and distributed settings, respectively.
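The role of futures in the API this abstract describes can be illustrated with a toy key-value store; the names and structure below are guesses from the abstract, not the actual LSD interface:

```python
class Future:
    """A deferred computation over the database state."""
    def __init__(self, fn):
        self.fn = fn

class LazyKV:
    """Toy lazy key-value store: updates are recorded as futures and the
    resulting state is only determined at commit time."""
    def __init__(self):
        self.store = {}
        self.pending = []          # (key, Future) pairs awaiting commit

    def lazy_update(self, key, fn):
        # No value is read here, so there is no read-set entry, and two
        # concurrent increments never constitute a conflict.
        self.pending.append((key, Future(fn)))

    def commit(self):
        for key, fut in self.pending:
            self.store[key] = fut.fn(self.store.get(key, 0))
        self.pending.clear()

db = LazyKV()
db.lazy_update("counter", lambda v: v + 1)   # transaction A
db.lazy_update("counter", lambda v: v + 1)   # transaction B, concurrent
db.commit()                                  # state determined lazily, once
```

Under eager evaluation, the two increments would form a read-write conflict and one transaction would abort; by deferring the state determination, both commute and both commit.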
A speculative execution approach to provide semantically aware contention management for concurrent systems
PhD Thesis
Most modern platforms offer ample potential for parallel execution of concurrent programs, yet concurrency control is required to exploit parallelism while maintaining program correctness. Pessimistic concurrency control, featuring blocking synchronization and mutual exclusion, has given way to transactional memory, which allows the composition of concurrent code in a manner more intuitive for the application programmer. An important component in any transactional memory technique, however, is the policy for resolving conflicts on shared data, commonly referred to as the contention management policy.
In this thesis, a Universal Construction is described which provides contention management for software transactional memory. The technique differs from existing approaches in that multiple execution paths are explored speculatively and in parallel. In the resolution of conflicts by state space exploration, we demonstrate that both concurrent conflicts and semantic conflicts can be solved, promoting multi-threaded program progression.
We define a model of computation called Many Systems, which defines the execution of concurrent threads as a state space management problem. An implementation is then presented based on concepts from the model, and we extend the implementation to incorporate nested transactions. Results are provided which compare the performance of our approach with an established contention management policy, under varying degrees of concurrent and semantic conflicts. Finally, we provide performance results from a number of search strategies when nested transactions are introduced.
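The idea of resolving a semantic conflict by exploring the state space of interleavings can be reduced to a toy search. The scenario below (a guarded withdrawal that only succeeds after a deposit) is invented purely for illustration; it is not from the thesis:

```python
from itertools import permutations

def withdraw(state):
    # Guarded operation: only proceeds when its semantic precondition holds.
    if state["balance"] >= 12:
        state["balance"] -= 12

def deposit(state):
    state["balance"] += 5

def resolve_by_exploration(state, ops, invariant):
    """Speculatively execute each interleaving of ops on a copy of the
    state; commit the first outcome the semantic invariant accepts."""
    for ordering in permutations(ops):
        candidate = dict(state)
        for op in ordering:
            op(candidate)
        if invariant(candidate):
            return candidate       # a semantically valid schedule was found
    return None                    # no ordering satisfies the invariant

# Only deposit-then-withdraw leaves both operations applied (10+5-12 = 3);
# withdraw-first is a semantic no-op, and that interleaving is rejected.
result = resolve_by_exploration({"balance": 10}, [withdraw, deposit],
                                lambda s: s["balance"] == 3)
```

A lock-based scheme would serialize the two operations in arrival order and simply miss the valid schedule; exploring the interleavings in parallel is what lets semantic conflicts be resolved rather than merely detected.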