Consistency in a Partitioned Network: A Survey
Recently, several strategies for transaction processing in partitioned distributed database systems with replicated data have been proposed. We survey these strategies in light of the competing goals of maintaining correctness and achieving high availability. Extensions and combinations are then discussed, and guidelines for the selection of a strategy for a particular application are presented.
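One family of strategies the survey covers is quorum voting, which trades availability for correctness by letting only sufficiently large partitions operate. A minimal sketch (ours, not the survey's), assuming Gifford-style read/write quorums of sizes r and w over n replicas:

```python
# Hedged sketch of quorum voting for replicated data in a partitioned
# network. Function and parameter names are illustrative.
def quorums_valid(n, r, w):
    """Quorum sizes are safe when every read quorum intersects every
    write quorum (r + w > n) and any two write quorums intersect
    (2w > n), so no two partitions can both commit writes."""
    return r + w > n and 2 * w > n

def can_operate(reachable, quorum):
    """A partition may perform an operation only if it reaches at
    least `quorum` of the n replicas."""
    return reachable >= quorum

# With 5 replicas, r = 2 and w = 4 keep reads cheap while writes need
# more than a majority.
assert quorums_valid(5, 2, 4)
# A minority partition reaching only 2 replicas may read but not write.
assert can_operate(2, 2) and not can_operate(2, 4)
```

The assertions illustrate the availability cost: under these quorum sizes, a partition cut off from most replicas stays readable but cannot accept updates.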
A Survey of Traditional and Practical Concurrency Control in Relational Database Management Systems
Traditionally, database theory has focused on concepts such as atomicity and serializability, asserting that concurrent transaction management must enable correctness above all else. Textbooks and academic journals detail a vision of unbounded rationality, in which reduced throughput caused by concurrency protocols is not of great concern. This thesis surveys the traditional basis for concurrency in relational database management systems and contrasts it with actual practice. SQL-92, the current standard for concurrency in relational database management systems, defines isolation levels, i.e., allowable degrees of concurrency, and these are examined. Some of the ways in which DB2, a popular database, interprets these levels and extracts extra concurrency through performance enhancements are detailed. SQL-92 standardizes de facto relational database management system features. Given this, and the superabundance of articles in professional journals detailing steps for fine-tuning transaction concurrency, the future of performance tuning seems bright, even at the expense of serializability.
Are the practical changes wrought by non-academic professionals killing traditional database concurrency ideals? Not really. Reasoned changes for performance gains advocate compromise: using complex concurrency controls when the job at hand requires them and relaxing standards otherwise. The idea of relational database management systems is only twenty years old, and standards are still evolving. Is there still an interplay between tradition and practice? Of course. Current practice uses tradition pragmatically, not idealistically. Academic ideas help drive the systems available for use, and perhaps current practice will in turn help academic ideas define concurrency control concepts for relational database management systems.
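The SQL-92 isolation levels the abstract refers to can be summarized by the read phenomena each level permits (dirty reads, non-repeatable reads, phantoms). The table below encodes the standard's definitions; the dict representation itself is only illustrative:

```python
# SQL-92 isolation levels and the read phenomena each permits
# (P1 dirty read, P2 non-repeatable read, P3 phantom).
ALLOWED_PHENOMENA = {
    "READ UNCOMMITTED": {"dirty_read", "nonrepeatable_read", "phantom"},
    "READ COMMITTED":   {"nonrepeatable_read", "phantom"},
    "REPEATABLE READ":  {"phantom"},
    "SERIALIZABLE":     set(),
}

def weaker_than(level_a, level_b):
    """level_a is weaker than level_b if it permits strictly more
    phenomena, i.e., allows more concurrency at the cost of anomalies."""
    return ALLOWED_PHENOMENA[level_a] > ALLOWED_PHENOMENA[level_b]

# SERIALIZABLE forbids every phenomenon; each step down permits more.
assert weaker_than("READ UNCOMMITTED", "READ COMMITTED")
assert weaker_than("REPEATABLE READ", "SERIALIZABLE")
```

This is the trade the thesis describes: practitioners deliberately run at a weaker level (DB2's interpretations included) to gain throughput, accepting the anomalies that level permits.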
Atomic commitment in transactional DHTs
We investigate the problem of atomic commit in transactional database systems built on top of Distributed Hash Tables (DHTs). DHTs provide a decentralized way to store and look up data. To solve the atomic commit problem we propose an adaptation of Paxos commit as a non-blocking algorithm. We exploit the symmetric replication technique of the DKS DHT to determine which nodes are needed to execute the commit algorithm. By doing so we achieve fewer communication rounds and less meta-data than traditional Three-Phase Commit protocols. We also show how the proposed solution copes with the dynamism caused by churn in DHTs. Our solution works correctly while relying only on inaccurate detection of node failures, which is necessary for systems running over the Internet.
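In DKS-style symmetric replication, the replicas of a key sit at symmetric positions on the identifier ring, so any node can compute the replica set locally; Paxos commit then needs only a majority of those replicas to decide. A minimal sketch under that assumption (parameter names are ours, not the paper's):

```python
# Hedged sketch of symmetric replication: identifier space of size N,
# replication degree f (f must divide N); replicas of key k are spaced
# evenly around the ring, so the replica set is computable locally.
def replica_ids(k, N, f):
    assert N % f == 0, "replication degree must divide the id space"
    step = N // f
    return [(k + i * step) % N for i in range(f)]

# Paxos commit with 2F + 1 acceptors tolerates F failures; a decision
# requires acceptance by a majority (F + 1), so the protocol never
# blocks on a single failed coordinator the way 2PC can.
def majority(acceptors):
    return acceptors // 2 + 1

# Key 3 in a 16-id ring with 4 replicas: evenly spaced positions.
assert replica_ids(3, 16, 4) == [3, 7, 11, 15]
assert majority(5) == 3
```

Because every node can derive the replica set from the key alone, no extra membership meta-data needs to be shipped with the commit messages, which is where the claimed savings over Three-Phase Commit come from.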
Robust data storage in a network of computer systems
PhD Thesis. Robustness of data in this thesis is taken to mean reliable storage of data and also high availability of data objects in spite of the occurrence of faults. Algorithms and data structures which can be used to provide such robustness in the presence of various disk, processor and communication network failures are described. Reliable storage of data at individual nodes in a network of computer systems is based on the use of a stable storage mechanism combined with strategies which help ensure crash resistance of file operations in spite of the buffering mechanisms used by operating systems. High availability of data in the network is maintained by replicating data on different computers, and mutual consistency between replicas is ensured in spite of network partitioning.
A stable storage system which provides atomicity for more complex data structures, instead of the usual fixed-size page, has been designed and implemented, and its performance evaluated. A crash-resistant file system has also been implemented and evaluated. Many of the techniques presented here are used in the design of what we call CRES (Crash-resistant, Replicated and Stable) storage. CRES storage provides fault tolerance facilities for various disk and processor faults. It also provides fault tolerance for network partitioning through the provision of an algorithm for the update and merge of a partitioned data storage system.
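The classic stable storage mechanism the abstract builds on keeps two copies of each logical page, each guarded by a checksum, written strictly one after the other so a crash can corrupt at most one copy. A minimal in-memory sketch of that idea (our illustration, not the thesis's implementation):

```python
import zlib

# Hedged sketch of a Lampson-style stable storage write: two copies
# per page, each prefixed with a CRC32 checksum.
def encode(page: bytes) -> bytes:
    return zlib.crc32(page).to_bytes(4, "big") + page

def decode(raw: bytes):
    crc, page = int.from_bytes(raw[:4], "big"), raw[4:]
    return page if zlib.crc32(page) == crc else None  # None = corrupt

store = {}  # block id -> raw bytes; stands in for two disk blocks

def stable_write(page: bytes):
    store["A"] = encode(page)   # copy A completes before B is touched
    store["B"] = encode(page)

def stable_read():
    """Prefer copy A; fall back to copy B if A's checksum fails."""
    a = decode(store["A"])
    return a if a is not None else decode(store["B"])

stable_write(b"record-17")
store["A"] = b"\x00" * 13       # simulate a crash mid-overwrite of A
assert stable_read() == b"record-17"
```

Because the two copies are never updated concurrently, recovery after a crash always finds at least one copy with a valid checksum, which is the invariant the thesis's more general (variable-size) stable storage preserves.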
Assise: Performance and Availability via NVM Colocation in a Distributed File System
The adoption of very low latency persistent memory modules (PMMs) upends the
long-established model of disaggregated file system access. Instead, by
colocating computation and PMM storage, we can provide applications much higher
I/O performance, sub-second application failover, and strong consistency. To
demonstrate this, we built the Assise distributed file system, based on a
persistent, replicated coherence protocol for managing a set of
server-colocated PMMs as a fast, crash-recoverable cache between applications
and slower disaggregated storage, such as SSDs. Unlike disaggregated file
systems, Assise maximizes locality for all file IO by carrying out IO on
colocated PMM whenever possible and minimizes coherence overhead by maintaining
consistency at IO operation granularity, rather than at fixed block sizes.
We compare Assise to Ceph/Bluestore, NFS, and Octopus on a cluster with Intel
Optane DC PMMs and SSDs for common cloud applications and benchmarks, such as
LevelDB, Postfix, and FileBench. We find that Assise improves write latency up
to 22x, throughput up to 56x, fail-over time up to 103x, and scales up to 6x
better than its counterparts, while providing stronger consistency semantics.
Assise promises to beat the MinuteSort world record by 1.5x.
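The coherence-granularity claim can be made concrete with simple arithmetic (ours, not Assise's measurements): operation-granularity coherence ships only the bytes an operation actually wrote, while block-granularity coherence must ship every dirty block whole.

```python
# Illustrative comparison of coherence traffic at block vs. IO-operation
# granularity, assuming a 4 KiB block and no write coalescing.
BLOCK = 4096

def block_traffic(write_sizes):
    """Each write dirties at least one whole block."""
    return sum(-(-s // BLOCK) * BLOCK for s in write_sizes)  # ceil-div

def op_traffic(write_sizes):
    """Operation granularity ships exactly the bytes written."""
    return sum(write_sizes)

writes = [100, 64, 512]          # small, metadata-heavy writes
assert op_traffic(writes) == 676
assert block_traffic(writes) == 3 * 4096
```

For the small writes typical of metadata-heavy cloud workloads, the per-block protocol moves over an order of magnitude more data, which is the overhead Assise's operation-granularity consistency avoids.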