Brief Announcement: On the Correctness of Transaction Processing with External Dependency
We briefly introduce a unified model that characterizes correctness levels stronger than (or equal to) serializability in the presence of application invariants. We propose to classify relations among committed transactions into data-related and application-semantics-related. Our model delivers a condition that can be used to verify the safety of transactional executions in the presence of application invariants.
Robustness against Consistency Models with Atomic Visibility
To achieve scalability, modern Internet services often rely on distributed databases with consistency models for transactions weaker than serializability. At present, application programmers often lack techniques to ensure that the weakness of these consistency models does not violate application correctness. We present criteria to check whether applications that rely on a database providing only weak consistency are robust, i.e., behave as if they used a database providing serializability. When this is the case, the application programmer can reap the scalability benefits of weak consistency while being able to easily check the desired correctness properties. Our results systematically and uniformly handle several recently proposed weak consistency models, as well as a mechanism for strengthening consistency in parts of an application.
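A minimal sketch, not taken from the paper, of the kind of anomaly such robustness criteria rule out: the classic "write skew" under snapshot isolation, where two transactions read the same snapshot, write disjoint keys, and jointly violate an application invariant (here, x + y >= 0) that serializability would have preserved. All names and values are ours for illustration.

```python
db = {"x": 50, "y": 50}          # application invariant: x + y >= 0

def withdraw(snapshot, key, amount):
    """Decide a write using only the snapshot, as snapshot isolation allows."""
    if snapshot["x"] + snapshot["y"] - amount >= 0:
        return {key: snapshot[key] - amount}
    return {}

# Both transactions read the same snapshot before either commits.
snap = dict(db)
w1 = withdraw(snap, "x", 60)     # sees x + y = 100, so the check passes
w2 = withdraw(snap, "y", 60)     # sees the same snapshot; check also passes

# The write sets are disjoint, so first-committer-wins lets both commit.
db.update(w1)
db.update(w2)

print(db, "invariant holds:", db["x"] + db["y"] >= 0)
# -> {'x': -10, 'y': -10} invariant holds: False
```

Under serializability one of the two withdrawals would observe the other's write and abort the overdraft; a robustness check aims to establish, per application, that no such divergence is possible under the weaker model.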
Practical cross-engine transactions in dual-engine database systems
With the growing DRAM capacity and core count in modern servers, database systems increasingly feature a heterogeneous set of engines. In particular, a memory-optimized engine and a conventional storage-centric engine may coexist to satisfy various application needs. However, handling cross-engine transactions that access more than one engine remains challenging in terms of correctness, performance, and programmability. This thesis describes Skeena, an approach to cross-engine transactions with proper isolation guarantees and low overhead. Skeena adapts and integrates past concurrency control theory to provide a complete solution for supporting various isolation levels in dual-engine systems, and proposes a lightweight transaction tracking structure that captures the information necessary to guarantee correctness with low overhead. Evaluation on a 40-core server shows that Skeena incurs only minuscule overhead for cross-engine transactions, without penalizing single-engine transactions.
The feasibility of using standard Z notation in the design of complex systems
Formal design methods are becoming increasingly recognised as useful for specifying complex systems. Incorporating formal methods in the early stages of a design process introduces the possibility of using mathematical techniques, thereby improving the effectiveness of the design process.
The Z notation has been applied mainly to specifying software, although it has also been used for specifying hardware and general systems. The Z notation fulfils two functions in this thesis: first, as a notation for representing specifications of complex systems, and second, as a notation for representing implementations of the same complex systems. The suitability of the Z notation for these functions is investigated in three studies. Both the specifications and the implementations are represented as unified collections of schemas that describe the behaviour in response to each set of input conditions. In each of the studies, both the specification and the implementation of the complex system take place at an early stage in a design process. Throughout this thesis, non-rigorous proof sketches demonstrate that the implementations meet the requirements of the specifications.
NCC: Natural Concurrency Control for Strictly Serializable Datastores by Avoiding the Timestamp-Inversion Pitfall
Strictly serializable datastores greatly simplify the development of correct applications by providing strong consistency guarantees. However, existing techniques pay unnecessary costs for naturally consistent transactions, which arrive at servers in an order that is already strictly serializable. We find these transactions are prevalent in datacenter workloads. We exploit this natural arrival order by executing transaction requests with minimal costs while optimistically assuming they are naturally consistent, and then leverage a timestamp-based technique to efficiently verify whether the execution is indeed consistent. In the process of designing such a timestamp-based technique, we identify a fundamental pitfall in relying on timestamps to provide strict serializability, which we name the timestamp-inversion pitfall. We find that timestamp inversion has affected several existing works.
We present Natural Concurrency Control (NCC), a new concurrency control technique that guarantees strict serializability and ensures minimal costs, i.e., one-round latency, lock-free, and non-blocking execution, in the best (and common) case by leveraging natural consistency. NCC is enabled by three key components: non-blocking execution, decoupled response control, and timestamp-based consistency checking. NCC avoids timestamp inversion with a new technique, response timing control, and proposes two optimizations, asynchrony-aware timestamps and smart retry, to reduce false aborts. Moreover, NCC provides a specialized protocol for read-only transactions, which is the first to achieve optimal best-case performance while ensuring strict serializability without relying on synchronized clocks. Our evaluation shows that NCC outperforms state-of-the-art solutions by an order of magnitude on many workloads.
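A minimal sketch, our illustration rather than NCC's actual protocol, of the execute-first, verify-later pattern the abstract describes: requests run optimistically in arrival order, and a cheap timestamp check afterwards confirms the order was consistent. A timestamp inversion, where a transaction executed earlier carries a later timestamp than one executed after it, is exactly what the check must catch. Function and variable names are assumptions.

```python
def verify(executions):
    """executions: list of (txn_id, timestamp) pairs in the actual order
    of execution on one server. The optimistic execution stands if the
    timestamps are monotone along that order; an inversion means the
    cheap path failed and the transaction must be retried or aborted."""
    return all(t1 <= t2
               for (_, t1), (_, t2) in zip(executions, executions[1:]))

# Natural arrival: timestamps agree with execution order, verified cheaply.
print(verify([("T1", 10), ("T2", 12), ("T3", 15)]))   # True
# Timestamp inversion: T3 executed before T2 but carries a later timestamp.
print(verify([("T1", 10), ("T3", 15), ("T2", 12)]))   # False
```

The real protocol must of course check this across servers and only for conflicting operations; the sketch only shows why the verification step can be a simple monotonicity test in the common case.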
Representation of coherency classes for parallel systems
Some parallel applications do not require a precise imitation of the behaviour of the physically shared memory programming model. Consequently, certain parallel machine architectures have elected to emphasise different required coherency properties because of possible efficiency gains. This has led to various definitions of models of store coherency. These definitions have not been amenable to detailed analysis and, consequently, inconsistencies have resulted.
In this paper a unified framework is proposed in which different models of store coherency are developed systematically by progressively relaxing the constraints that they have to satisfy. A demonstration is given of how formal reasoning can be carried out to compare different models. Some real-life systems are considered, and a definition of a version of weak coherency is found to be incomplete.
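One way to make a constraint of a store-coherency model concrete, as a hedged sketch in our own formulation rather than the paper's framework: a common per-location requirement is that no two processors observe the same pair of writes in opposite orders. Assuming distinct write values per location, this can be checked mechanically on observed histories.

```python
def same_order(views):
    """views: per-processor lists of the writes observed at one location
    (write values assumed distinct). Returns True iff no two views order
    the same pair of writes differently -- a minimal per-location check."""
    pairs = set()
    for view in views:
        for i in range(len(view)):
            for j in range(i + 1, len(view)):
                if (view[j], view[i]) in pairs:   # opposite order seen elsewhere
                    return False
                pairs.add((view[i], view[j]))
    return True

print(same_order([[1, 2, 3], [1, 3]]))    # True: the partial views agree
print(same_order([[1, 2], [2, 1]]))       # False: the processors disagree
```

Progressively relaxing constraints of this kind (dropping the requirement across locations, or requiring agreement only at synchronisation points) is how the weaker models in the paper's framework arise.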