Superdatabases for Composition of Heterogeneous Databases
Superdatabases are designed to compose and extend databases. In particular, superdatabases allow consistent updates across heterogeneous databases. The key idea of the superdatabase is the hierarchical composition of element databases. For global crash recovery, each element database must provide local recovery plus some kind of agreement protocol, such as two-phase commit. For global concurrency control, each element database must have local synchronization with an explicit serial order, such as two-phase locking, timestamps, or optimistic methods. Given element databases satisfying these requirements, the superdatabase can certify the serializability of global transactions through a concatenation of the local serial orders. Combined with previous work on heterogeneous databases, including unified query languages and view integration, we can now build heterogeneous databases that are consistent, adaptable, and extensible by construction.
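Concretely, the certification step can be pictured as a cycle check over the union of the element databases' explicit serial orders. The sketch below is a minimal illustration under that reading of the abstract, not the paper's implementation; `certify_global_order` and its graph encoding are assumptions.

```python
# A minimal sketch of certifying global serializability by concatenating
# the explicit local serial orders reported by each element database.
from collections import defaultdict

def certify_global_order(local_orders):
    """local_orders: one list per element database, giving global
    transaction ids in that database's local serialization order.
    Returns True iff the union of the local precedence relations is
    acyclic, i.e. a single global serial order exists."""
    # Edge t1 -> t2 whenever t1 precedes t2 in some local serial order.
    succ = defaultdict(set)
    nodes = set()
    for order in local_orders:
        nodes.update(order)
        for i, t1 in enumerate(order):
            for t2 in order[i + 1:]:
                succ[t1].add(t2)

    # Cycle check via depth-first search with 3-color marking.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = dict.fromkeys(nodes, WHITE)

    def dfs(t):
        color[t] = GRAY
        for u in succ[t]:
            if color[u] == GRAY:        # back edge: local orders conflict
                return False
            if color[u] == WHITE and not dfs(u):
                return False
        color[t] = BLACK
        return True

    return all(dfs(t) for t in nodes if color[t] == WHITE)

# T1 before T2 at both sites: serializable. Opposite orders: not.
print(certify_global_order([["T1", "T2"], ["T1", "T2"]]))  # True
print(certify_global_order([["T1", "T2"], ["T2", "T1"]]))  # False
```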
A Formal Characterization of Epsilon Serializability
Epsilon Serializability (ESR) is a generalization of classic serializability (SR). ESR allows some limited amount of inconsistency in transaction processing (TP), through an interface called epsilon-transactions (ETs). For example, some query ETs may view inconsistent data due to non-SR interleaving with concurrent updates. In this paper, we restrict our attention to the situation where query-only ETs run concurrently with consistent update transactions that are SR without the ETs. This paper presents a formal characterization of ESR and ETs. Using the ACTA framework, the first part of this characterization formally expresses the inter-transaction conflicts that are recognized by ESR and, through that, defines ESR, analogous to the manner in which conflict-based serializability is defined. The second part of the paper is devoted to deriving expressions for: (1) the inconsistency in the values of data -- arising from ongoing updates, (2) the inconsistency of the results of a query -- arising from the inconsistency of the data read in order to process the query, and (3) the inconsistency exported by an update ET -- arising from ongoing queries reading uncommitted data produced by the update ET. These expressions are used to determine the preconditions that ET operations have to satisfy in order to maintain the limits on the inconsistency in the data read by query ETs, the inconsistency exported by update ETs, and the inconsistency in the results of queries. This determination suggests possible mechanisms that can be used to realize ESR.
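As a rough illustration of the kind of precondition such a mechanism might enforce, the sketch below bounds the inconsistency a query ET imports from ongoing updates. The accounting (`import_limit`, `pending_delta`) is invented for the example and stands in for the paper's formal expressions.

```python
# A hedged sketch of an ESR-style precondition check: a query
# epsilon-transaction (ET) may read data touched by concurrent updates,
# provided the inconsistency it imports stays within its epsilon limit.
class QueryET:
    def __init__(self, import_limit):
        self.import_limit = import_limit   # epsilon: max inconsistency imported
        self.imported = 0.0                # inconsistency accumulated so far

    def try_read(self, value, pending_delta):
        """pending_delta: a bound on how far `value` may differ from a
        consistent (SR) state because of ongoing, uncommitted updates."""
        if self.imported + abs(pending_delta) > self.import_limit:
            raise RuntimeError("read would exceed the ET's epsilon limit")
        self.imported += abs(pending_delta)
        return value

# Example: a balance query tolerating up to 100 units of in-flight updates.
q = QueryET(import_limit=100.0)
total = q.try_read(500.0, pending_delta=40.0)   # ok, imported = 40
total += q.try_read(300.0, pending_delta=50.0)  # ok, imported = 90
# q.try_read(200.0, pending_delta=20.0)         # would raise: 110 > 100
```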
Execution Autonomy in Distributed Transaction Processing
We study the feasibility of execution autonomy in systems with asynchronous transaction processing based on epsilon-serializability (ESR). The abstract correctness criteria defined by ESR are implemented by techniques such as asynchronous divergence control and asynchronous consistency restoration. Concrete application examples in a distributed environment, such as banking, are described to illustrate the advantages of using ESR to support execution autonomy.
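A hedged sketch of how asynchronous divergence control and consistency restoration might look for the banking example: each replica commits locally (execution autonomy) while a bound caps the unpropagated divergence, and restoration later reconverges the copies. The class and its policy are illustrative assumptions, not the paper's design.

```python
# Illustrative execution autonomy for a replicated bank account: each
# site applies deposits/withdrawals locally and propagates them
# asynchronously; divergence control blocks local commits once the
# unpropagated amount could exceed the epsilon bound, and consistency
# restoration drains the log to reconverge.
class AutonomousReplica:
    def __init__(self, balance, epsilon):
        self.balance = balance
        self.epsilon = epsilon      # max divergence from the global state
        self.log = []               # locally committed, not yet propagated

    def apply_local(self, amount):
        pending = sum(abs(a) for a in self.log)
        if pending + abs(amount) > self.epsilon:
            raise RuntimeError("would diverge beyond epsilon; synchronize first")
        self.balance += amount
        self.log.append(amount)

    def restore(self, peer):
        """Asynchronous consistency restoration: push logged updates."""
        for amount in self.log:
            peer.balance += amount
        self.log.clear()

a = AutonomousReplica(1000.0, epsilon=200.0)
b = AutonomousReplica(1000.0, epsilon=200.0)
a.apply_local(-150.0)   # autonomous local commit
a.restore(b)            # later: restore mutual consistency
```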
A Study of Dynamic Optimization Techniques: Lessons and Directions in Kernel Design
The Synthesis kernel [21,22,23,27,28] showed that dynamic code generation, software feedback, and fine-grain modular kernel organization are useful implementation techniques for improving the performance of operating system kernels. In addition, and perhaps more importantly, we discovered that there are strong interactions between the techniques. Hence, a careful and systematic combination of the techniques can be very powerful even though each one by itself may have serious limitations. By identifying these interactions, we illustrate the problems of applying each technique in isolation to existing kernels. We also highlight the important common underpinnings of the Synthesis experience and present our ideas on future operating system design and implementation. Finally, we outline a more uniform approach to dynamic optimizations called incremental partial evaluation.
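As a loose, high-level analogy (not the kernel's actual machine-code synthesis), the sketch below uses Python closures to stand in for dynamic code specialization and shows a simple software feedback rule; `specialize_read` and `feedback_quantum` are invented names for the example.

```python
# Closures stand in for dynamic code generation (specializing a code
# path once invariants are known); a feedback loop adjusts a scheduling
# quantum from measured behavior. Neither is Synthesis's real mechanism.
def specialize_read(fd, block_size, sequential):
    # "Partial evaluation": fold known-constant state into the hot path.
    if sequential:
        def read():
            return ("readahead", fd, block_size)   # specialized fast path
    else:
        def read():
            return ("plain", fd, block_size)
    return read

def feedback_quantum(quantum, observed_rate, target_rate, gain=0.5):
    # Software feedback: nudge the quantum toward the rate goal.
    error = target_rate - observed_rate
    return max(1.0, quantum + gain * error)

read = specialize_read(fd=3, block_size=4096, sequential=True)
print(read())                                                 # ('readahead', 3, 4096)
print(feedback_quantum(10.0, observed_rate=8.0, target_rate=12.0))  # 12.0
```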
A Comparison of Cache Performance in Server-Based and Symmetric Database Architectures
We study cache performance in a symmetric distributed main-memory database. The high-performance networks in many large distributed systems enable a machine to reach the main memory of other nodes more quickly than it can access its local disks. We therefore introduce remote memory as an additional layer in the memory hierarchy, between local memory and disks. To understand the memory and CPU tradeoffs of the symmetric architecture, we compare system performance across alternative architectures. Simulations show that, by exploiting remote memory (in each node's cache), performance improves over a wide range of cache sizes compared to a distributed client/server architecture. We also compare the symmetric model to a centralized-server model and parameterize the performance tradeoffs.
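The hierarchy being simulated can be caricatured in a few lines: a page miss in local memory is served from a peer node's memory before falling back to disk. The costs and the `access` function below are illustrative assumptions, not the paper's simulation parameters.

```python
# A toy model of the three-level hierarchy: local memory, remote memory
# reached over the network, and disk. Costs are illustrative only.
LOCAL_NS, REMOTE_NS, DISK_NS = 100, 10_000, 10_000_000

def access(page, local, remote):
    if page in local:
        return LOCAL_NS
    if page in remote:                 # remote-memory hit: network round trip
        local.add(page)                # cache it locally for next time
        return REMOTE_NS
    local.add(page)                    # disk read, then cache
    return DISK_NS

local, remote = {1, 2}, {3, 4, 5}
workload = [1, 3, 3, 7, 7]
total = sum(access(p, local, remote) for p in workload)
print(f"avg access: {total / len(workload):.0f} ns")
```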
Replication and Nested Transactions in the Eden Distributed System
Hardware redundancy in distributed systems offers the potential for increased availability and performance, but software support is required if the full potential is to be realized. We have designed and implemented two mechanisms for such support. The first provides crash-resistant resources, replicated transparently and consistently to increase the availability of distributed data. To update multiple copies despite down nodes, we have introduced the Regeneration method, used in the implementation of a replicated system directory. Regeneration restores inaccessible copies elsewhere in the network, maintains the availability of resources, and adapts to configuration changes. The second mechanism is a system supporting nested transactions, which can manage the complex failure modes in a distributed system, synchronize concurrent resource access internal to applications, and facilitate safe module composition. In the tree-structured nesting, each transaction has a Transaction Manager (TM), responsible for the concurrency control and crash recovery of its subtransactions. Many concurrency control and recovery techniques can be combined in this TM Tree design framework. We chose locking and versions for the first implementation. Using Eden objects and the replicated directory, our nested transactions provide consistent concurrent access to location-independent, crash-resistant resources. In summary, the principal contributions of this research are the Regeneration method and the TM Tree framework. Regeneration uses the separation of hardware repair from data restoration to increase replicated data availability. The TM Tree framework composes existing techniques to derive many otherwise difficult designs for nested transactions. Both have been proven in the design and implementation of actual systems.
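A compact sketch of the TM Tree idea as the abstract describes it: a subtransaction's commit hands its locks and tentative versions up to its parent TM rather than to durable storage, while only a top-level commit installs them. The class below is an illustrative reconstruction, not Eden's code.

```python
# Each transaction gets a Transaction Manager (TM). A nested commit
# inherits state upward; an abort discards it wholesale.
class TM:
    def __init__(self, parent=None):
        self.parent = parent
        self.locks = set()
        self.versions = {}          # object -> tentative value

    def write(self, obj, value):
        self.locks.add(obj)
        self.versions[obj] = value

    def commit(self, store):
        if self.parent:             # nested: hand locks/versions to parent
            self.parent.locks |= self.locks
            self.parent.versions.update(self.versions)
        else:                       # top level: install durably
            store.update(self.versions)

    def abort(self):
        self.locks.clear()
        self.versions.clear()

store = {"x": 0}
top = TM()
sub = TM(parent=top)
sub.write("x", 42)
sub.commit(None)    # effects move to the parent, not to the store
top.commit(store)   # now durable
print(store)        # {'x': 42}
```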
WebCQ: Detecting and Delivering Information Changes on the Web
This paper presents WebCQ, a prototype of a large-scale Web information monitoring system. WebCQ is designed to discover and detect changes to World Wide Web pages efficiently and to notify users of interesting changes with personalized customization. The system consists of four main components: a change detection robot that discovers and detects changes; a proxy cache service that reduces communication traffic to the original information provider on the remote server; a tool that highlights changes between the page last seen and the new version; and a change notification service that delivers interesting changes and fresh information to the right users at the right time. A salient feature of our change detection robot is its ability to support various types of web page sentinels for finding and displaying interesting changes to web pages. This paper describes the WebCQ system with an emphasis on general issues in designing and engineering a large-scale information monitoring system like WebCQ.
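A minimal sketch of the page-sentinel idea: poll a page, fingerprint its content, and notify when the fingerprint changes. The `fetch` and `notify` callables are assumptions standing in for WebCQ's robot, proxy cache, and notification service.

```python
# One sentinel per watched page: detect a change by comparing content
# digests across polls, then hand the change to a notification callback.
import hashlib

class PageSentinel:
    def __init__(self, url, fetch, notify):
        self.url, self.fetch, self.notify = url, fetch, notify
        self.last_digest = None

    def poll(self):
        content = self.fetch(self.url)
        digest = hashlib.sha256(content.encode()).hexdigest()
        if self.last_digest is not None and digest != self.last_digest:
            self.notify(self.url, content)      # an "interesting change"
        self.last_digest = digest

pages = {"http://example.org": "v1"}            # stands in for the live Web
s = PageSentinel("http://example.org",
                 fetch=pages.get,
                 notify=lambda url, _: print(f"changed: {url}"))
s.poll()                            # baseline fingerprint
pages["http://example.org"] = "v2"
s.poll()                            # prints: changed: http://example.org
```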