Ensuring Serializable Executions with Snapshot Isolation DBMS
Snapshot Isolation (SI) is a multiversion concurrency control mechanism that has been implemented by open-source and commercial database systems such as PostgreSQL and Oracle. The main feature of SI is that a read operation does not block a write operation and vice versa, which allows a higher degree of concurrency than traditional two-phase locking. SI prevents many anomalies that appear in other isolation levels, but it can still result in non-serializable executions, in which database integrity constraints can be violated. Several techniques have been proposed to ensure serializable execution with engines running SI; these techniques are based on modifying the applications by introducing conflicting SQL statements. However, with each of these techniques the DBA has to make a difficult choice among the possible transactions to modify. This thesis helps DBAs choose between these different techniques and choices by showing how each choice affects system performance. It also proposes a novel technique called 'External Lock Manager' (ELM), which introduces conflicts in a separate lock-manager object so that every execution is serializable. We build a prototype system for ELM and run experiments to demonstrate the robustness of the new technique compared to the previous techniques. Experiments show that modifying the application code for some transactions has a high performance impact for some choices, which makes it very hard for DBAs to choose wisely. In contrast, ELM has peak performance similar to that of SI, no matter which transactions are chosen for modification. Thus we say that ELM is a robust technique for ensuring serializable execution.
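The core idea of ELM, as the abstract describes it, is to materialize conflicts in a lock manager that lives outside the DBMS: transactions known to conflict acquire a common lock key before running under SI, so the database never interleaves them unsafely. A minimal sketch of that idea follows; the class and its interface are illustrative assumptions, not the thesis prototype.

```python
import threading

class ExternalLockManager:
    """Hypothetical sketch of an 'External Lock Manager' (ELM):
    transactions that share a conflict key serialize on an external
    lock before executing under Snapshot Isolation."""

    def __init__(self):
        self._guard = threading.Lock()
        self._locks = {}  # conflict key -> per-key lock

    def acquire(self, key):
        # Lazily create the per-key lock under a short critical section.
        with self._guard:
            lock = self._locks.setdefault(key, threading.Lock())
        lock.acquire()

    def release(self, key):
        self._locks[key].release()

elm = ExternalLockManager()

def run_transaction(key, body):
    # Transactions with the same conflict key run one at a time;
    # unrelated transactions proceed concurrently, preserving SI's
    # readers-don't-block-writers behavior for non-conflicting work.
    elm.acquire(key)
    try:
        return body()
    finally:
        elm.release(key)
```

Only the pairs of transactions the DBA identifies as conflicting need to share a key, so the serialization cost is confined to those pairs.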
Serializable Isolation for Snapshot Databases
Many popular database management systems implement a multiversion concurrency control algorithm called snapshot isolation rather than providing full serializability based on locking. There are well-known anomalies permitted by snapshot isolation that can lead to violations of data consistency by interleaving transactions that would maintain consistency if run serially. Until now, the only way to prevent these anomalies was to modify the applications by introducing explicit locking or artificial update conflicts, following careful analysis of conflicts between all pairs of transactions. This thesis describes a modification to the concurrency control algorithm of a database management system that automatically detects and prevents snapshot isolation anomalies at runtime for arbitrary applications, thus providing serializable isolation. The new algorithm preserves the properties that make snapshot isolation attractive, including that readers do not block writers and vice versa. An implementation of the algorithm in a relational database management system is described, along with a benchmark and performance study, showing that the throughput approaches that of snapshot isolation in most cases.
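The best-known anomaly these abstracts refer to is write skew: two transactions each read an invariant from their own snapshot, write disjoint rows, and commit without conflict, yet the invariant ends up violated. A toy sketch of the interleaving (plain dictionaries standing in for a multiversion store; all names are illustrative):

```python
# Invariant the application intends to preserve: x + y >= 0.
db = {"x": 50, "y": 50}

# Under Snapshot Isolation both transactions take their snapshot
# before either commits.
snap_t1 = dict(db)
snap_t2 = dict(db)

# T1 checks the invariant on its snapshot, then withdraws 100 from x.
if snap_t1["x"] + snap_t1["y"] >= 100:
    db["x"] = snap_t1["x"] - 100

# T2 makes the same check on *its* (equally stale) snapshot
# and withdraws 100 from y.
if snap_t2["x"] + snap_t2["y"] >= 100:
    db["y"] = snap_t2["y"] - 100

# The write sets are disjoint, so SI's first-committer-wins rule
# raises no conflict -- yet now x + y == -100.
```

A serializable scheduler would have run one transaction's check after the other's write, causing the second withdrawal to be refused; this is exactly the class of runtime anomaly the thesis's modified concurrency control detects.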
Efficient middleware for database replication
Master's dissertation in Informatics Engineering. Database systems are used to store data for the most varied applications, such as Web applications, enterprise applications, scientific research, or even personal applications. Given the wide use of databases in systems that are fundamental to their users, database systems must be efficient and reliable. Additionally, in order for these systems to serve a large number of users, databases must be scalable, able to process large numbers of transactions. To achieve this, it is necessary to resort to data replication. In a replicated system, all nodes contain a copy of the database. Then, to guarantee that replicas converge, write operations must be executed on all replicas. The way updates are propagated leads to two different replication strategies. The first is known as asynchronous or optimistic replication, where updates are propagated asynchronously after the conclusion of an update transaction. The second is known as synchronous or pessimistic replication, where updates are broadcast synchronously during the transaction.
In pessimistic replication, contrary to optimistic replication, the replicas remain consistent. This approach simplifies the programming of applications, since the replication of the data is transparent to them. However, it presents scalability issues, caused by the number of messages exchanged during synchronization, which delays the termination of the transaction. This leads the user to experience a much higher latency with the pessimistic approach.
This work presents the design and implementation of a database replication system with snapshot isolation semantics, using a synchronous replication approach. The system is composed of a primary replica and a set of secondary replicas that fully replicate the database. The primary replica executes the read-write transactions, while the remaining replicas execute the read-only transactions. After the conclusion of a read-write transaction on the primary replica, the updates are propagated to the remaining replicas. This approach suits a model where the fraction of read operations is considerably higher than that of write operations, allowing the read load to be distributed over the multiple replicas.
To improve the performance of the system, the clients execute some operations speculatively, in order to avoid waiting during the execution of a database operation. Thus, the client may continue its execution while the operation is executed on the database. If the result returned to the client is found to be incorrect, the transaction is aborted, ensuring the correctness of the execution of the transactions.
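The primary/secondary scheme described above can be sketched in a few lines: the primary executes read-write transactions and synchronously propagates its write set, while secondaries serve reads from their local copy. The classes and version counter below are illustrative assumptions, not the dissertation's implementation.

```python
class Replica:
    """A secondary replica: holds a full copy and serves read-only work."""

    def __init__(self):
        self.data = {}
        self.version = 0

    def apply(self, updates, version):
        # Install the write set propagated by the primary.
        self.data.update(updates)
        self.version = version

class Primary(Replica):
    """The primary replica: executes read-write transactions and
    propagates each committed write set to every secondary."""

    def __init__(self, secondaries):
        super().__init__()
        self.secondaries = secondaries

    def commit(self, updates):
        # Commit locally, then propagate synchronously, so all
        # replicas converge on the same versioned state.
        self.version += 1
        self.data.update(updates)
        for s in self.secondaries:
            s.apply(updates, self.version)

secondaries = [Replica(), Replica()]
primary = Primary(secondaries)
primary.commit({"stock": 10})   # every replica now stores stock = 10
```

Because secondaries only ever install complete, versioned write sets, a read-only transaction routed to any of them sees a consistent snapshot, which is what makes distributing the read load safe.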
Towards Transaction as a Service
This paper argues for decoupling transaction processing from existing two-layer cloud-native databases and making transaction processing an independent service. By building a transaction-as-a-service (TaaS) layer, transaction processing can be independently scaled for high resource utilization and independently upgraded for development agility. Accordingly, we architect an execution-transaction-storage three-layer cloud-native database. By connecting to TaaS, 1) AP engines can be empowered with ACID TP capability, 2) multiple standalone TP engine instances can be incorporated to support multi-master distributed TP for horizontal scalability, 3) multiple execution engines with different data models can be integrated to support multi-model transactions, and 4) high-performance TP is achieved through extensive TaaS optimizations and consistent evolution. Cloud-native databases deserve a better architecture: we believe that TaaS provides a path forward to better cloud-native databases.
Practical cross-engine transactions in dual-engine database systems
With the growing DRAM capacity and core count in modern servers, database systems are becoming increasingly multi-engine, featuring a heterogeneous set of engines. In particular, a memory-optimized engine and a conventional storage-centric engine may coexist to satisfy various application needs. However, handling cross-engine transactions that access more than one engine remains challenging in terms of correctness, performance, and programmability. This thesis describes Skeena, an approach to cross-engine transactions with proper isolation guarantees and low overhead. Skeena adapts and integrates past concurrency control theory to provide a complete solution to supporting various isolation levels in dual-engine systems, and proposes a lightweight transaction tracking structure that captures the necessary information to guarantee correctness with low overhead. Evaluation on a 40-core server shows that Skeena incurs only minuscule overhead for cross-engine transactions, without penalizing single-engine transactions.
Middleware-based Database Replication: The Gaps between Theory and Practice
The need for high availability and performance in data management systems has
been fueling a long running interest in database replication from both academia
and industry. However, academic groups often attack replication problems in
isolation, overlooking the need for completeness in their solutions, while
commercial teams take a holistic approach that often misses opportunities for
fundamental innovation. This has created over time a gap between academic
research and industrial practice.
This paper aims to characterize the gap along three axes: performance,
availability, and administration. We build on our own experience developing and
deploying replication systems in commercial and academic settings, as well as
on a large body of prior related work. We sift through representative examples
from the last decade of open-source, academic, and commercial database
replication systems and combine this material with case studies from real
systems deployed at Fortune 500 customers. We propose two agendas, one for
academic research and one for industrial R&D, which we believe can bridge the
gap within 5-10 years. This way, we hope to both motivate and help researchers
in making the theory and practice of middleware-based database replication more
relevant to each other.
Comment: 14 pages. Appears in Proc. ACM SIGMOD International Conference on Management of Data, Vancouver, Canada, June 200