    A unified concurrency control algorithm for distributed database systems

    We present a unified concurrency-control algorithm for distributed database systems in which each transaction may choose its own concurrency control protocol. Specifically, we integrate two-phase locking, timestamp ordering, and precedence agreement into one unified concurrency-control scheme. We show the correctness of the scheme and study the problem of selecting the best protocol for each transaction to optimize system performance.
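
    As a concrete illustration, here is a minimal Python sketch of strict two-phase locking, one of the protocols the scheme integrates. The class and method names are invented for this example and do not come from the paper; it uses exclusive locks only, with waiting and deadlock handling left out.

import threading
from collections import defaultdict

class LockManager:
    """Minimal strict two-phase locking (exclusive locks only): a transaction
    acquires locks as it runs (growing phase) and releases them all only at
    commit or abort (shrinking phase), which guarantees serializability."""

    def __init__(self):
        self._guard = threading.Lock()
        self._owner = {}                 # data item -> owning transaction id
        self._held = defaultdict(set)    # transaction id -> items it holds

    def acquire(self, txn, item):
        """Growing phase. Returns False if another transaction holds the
        lock; the caller must then wait or abort (deadlock handling omitted)."""
        with self._guard:
            owner = self._owner.get(item)
            if owner is None:
                self._owner[item] = txn
                self._held[txn].add(item)
                return True
            return owner == txn          # re-entrant for the same transaction

    def release_all(self, txn):
        """Shrinking phase: release every lock at once, at commit or abort."""
        with self._guard:
            for item in self._held.pop(txn, set()):
                del self._owner[item]

lm = LockManager()
assert lm.acquire("T1", "x") and not lm.acquire("T2", "x")
lm.release_all("T1")                     # T1 commits; now T2 may proceed
assert lm.acquire("T2", "x")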

    Consistency in a Partitioned Network: A Survey

    Recently, several strategies for transaction processing in partitioned distributed database systems with replicated data have been proposed. We survey these strategies in light of the competing goals of maintaining correctness and achieving high availability. Extensions and combinations are then discussed, and guidelines for the selection of a strategy for a particular application are presented.
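
    One classic family of strategies in this design space is quorum consensus, where a read quorum R and a write quorum W over N replicas must overlap. The check below is a hypothetical Python sketch of the standard quorum conditions, not a strategy taken from the survey itself.

def quorums_are_safe(n_replicas: int, read_quorum: int, write_quorum: int) -> bool:
    """Quorum consensus conditions: every read quorum overlaps every write
    quorum (reads see the latest write), and any two write quorums overlap
    (disjoint partitions cannot both accept writes)."""
    reads_see_writes = read_quorum + write_quorum > n_replicas
    writes_conflict = 2 * write_quorum > n_replicas
    return reads_see_writes and writes_conflict

# With 5 replicas, R = W = 3 keeps the majority partition fully available,
# while a minority partition can neither read nor write.
assert quorums_are_safe(5, 3, 3)
assert not quorums_are_safe(5, 2, 2)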

    Design and evaluation of a new transaction execution model for multidatabase systems

    In this paper, we present a new transaction execution model that captures the formalism and semantics of various extended transaction models and adapts them to a multidatabase system (MDBS) environment. The proposed model covers nested transactions, various dependency types among transactions, and commit-independent transactions. The formulation of complex MDBS transaction types can be accomplished easily with the extended semantics captured in the model. A detailed performance model of an MDBS is employed in investigating the performance implications of the proposed transaction model.
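
    As a rough illustration of inter-transaction dependency types, here is a hypothetical Python sketch. The dependency names (commit and abort dependencies) follow common extended-transaction-model terminology and are assumptions for this example, not the paper's exact formalism.

from enum import Enum, auto

class Dep(Enum):
    COMMIT = auto()   # the dependent may commit only after the other commits
    ABORT = auto()    # if the other aborts, the dependent must abort too

class Txn:
    def __init__(self, name):
        self.name, self.state, self.deps = name, "active", []

    def depends_on(self, other, kind):
        self.deps.append((other, kind))

    def can_commit(self):
        # Blocked until every commit-dependency target has committed.
        return all(t.state == "committed"
                   for t, k in self.deps if k is Dep.COMMIT)

    def notify_abort(self, other):
        # Cascade aborts along abort dependencies.
        if any(t is other and k is Dep.ABORT for t, k in self.deps):
            self.state = "aborted"

t1, t2 = Txn("t1"), Txn("t2")
t2.depends_on(t1, Dep.COMMIT)
assert not t2.can_commit()    # t1 is still active
t1.state = "committed"
assert t2.can_commit()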

    Designing a commutative replicated data type

    Commuting operations greatly simplify consistency in distributed systems. This paper focuses on designing for commutativity, a topic previously neglected. We show that the replicas of any data type for which concurrent operations commute converge to a correct value, under some simple and standard assumptions. We also show that such a data type supports transactions at very low cost. We identify a number of approaches and techniques to ensure commutativity. We reuse some existing ideas (non-destructive updates coupled with invariant identification), but propose a much more efficient implementation. Furthermore, we propose a new technique, background consensus. We illustrate these ideas with a shared edit buffer data type.
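
    The paper's running example is a shared edit buffer; as a smaller, self-contained illustration of why commuting operations make replicas converge, here is a standard state-based counter CRDT in Python. It is a sketch of the general idea, not the paper's design.

class PNCounter:
    """State-based increment/decrement counter: merges commute, so replicas
    converge to the same value regardless of message order or duplication."""

    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.incs = [0] * n_replicas   # per-replica increment totals
        self.decs = [0] * n_replicas   # per-replica decrement totals

    def increment(self): self.incs[self.id] += 1
    def decrement(self): self.decs[self.id] += 1

    def value(self):
        return sum(self.incs) - sum(self.decs)

    def merge(self, other):
        # Element-wise max is commutative, associative and idempotent,
        # so any anti-entropy exchange schedule yields the same state.
        self.incs = [max(a, b) for a, b in zip(self.incs, other.incs)]
        self.decs = [max(a, b) for a, b in zip(self.decs, other.decs)]

a, b = PNCounter(0, 2), PNCounter(1, 2)
a.increment(); b.increment(); b.decrement()
a.merge(b); b.merge(a)
assert a.value() == b.value() == 1     # both replicas converge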

    Replicated Data and Partition Failures

    In a distributed database system, data is often replicated to improve performance and availability. By storing copies of shared data on processors where it is frequently accessed, the need for expensive remote read accesses is decreased. By storing copies of critical data on processors with independent failure modes, the probability that at least one copy of the data will be accessible increases. In theory, data replication makes it possible to provide arbitrarily high data availability. In practice, realizing the benefits of data replication is difficult, since the correctness of the data must be maintained.

    One important aspect of correctness with replicated data is mutual consistency: all copies of the same logical data-item must agree on exactly one current value for the data-item. Furthermore, this value should make sense in terms of the transactions executed on copies of the data-item. When communication fails between sites containing copies of the same logical data-item, mutual consistency between copies becomes complicated to ensure. The most disruptive of these communication failures are partition failures, which fragment the network into isolated subnetworks called partitions.

    Unless partition failures are detected and recognized by all affected processors, independent and uncoordinated updates may be applied to different copies of the data, thereby compromising correctness. Consider, for example, an airline reservation system implemented by a distributed database which splits into two partitions when the communication network fails. If, at the time of the failure, all the nodes have one seat remaining for PAN AM 537, reservations could be made in both partitions. This would violate correctness: who should get the last seat? There should not be more seats reserved for a flight than physically exist on the plane. (Some airlines do not implement this constraint and allow overbooking.)

    The design of a replicated data management algorithm tolerating partition failures (or partition processing strategy) is a notoriously hard problem. Typically, the cause or extent of a partition failure cannot be discerned by the processors themselves. At best, a processor may be able to identify the other processors in its partition; but, for the processors outside of its partition, it will not be able to distinguish between the case where those processors are simply isolated from it and the case where those processors are down. In addition, slow responses can cause the network to appear partitioned even when it is not, further complicating the design of a fault-tolerant algorithm.
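
    One common safeguard suggested by the airline example is the primary-partition rule: only the partition holding a strict majority of the replicas may apply updates. The sketch below is a hypothetical Python illustration, not an algorithm taken from this text.

def may_update(partition_members: set, all_replicas: set) -> bool:
    """Primary-partition rule: only the partition holding a strict majority
    of the replicas may apply updates, so at most one partition can ever
    sell the last seat."""
    return 2 * len(partition_members) > len(all_replicas)

replicas = {"A", "B", "C", "D", "E"}
# The network splits into {A, B, C} and {D, E}.
assert may_update({"A", "B", "C"}, replicas)   # majority side may book seats
assert not may_update({"D", "E"}, replicas)    # minority side must refuse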

    Design of testbed and emulation tools

    The research summarized here was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software-based set of hierarchical tools was chosen to provide maximum flexibility, to ease moving to new computers as technology improves, and to take advantage of the inherent reliability and availability of commercially available computing systems.

    CASCH: a tool for computer-aided scheduling

    A software tool called CASCH (Computer-Aided Scheduling) for parallel processing on distributed-memory multiprocessors within a complete parallel programming environment is presented. A compiler automatically converts sequential applications into parallel code, performing program parallelization. The parallel code that executes on a target machine is then optimized by CASCH through proper scheduling and mapping.
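
    CASCH revolves around scheduling a task graph onto processors. As a hedged illustration of the general idea only (not CASCH's actual algorithms), here is a minimal list scheduler in Python that places each task on the processor where it can start earliest; the task names, costs, and zero-communication-cost assumption are invented for the example.

def list_schedule(tasks, n_procs):
    """Minimal list scheduling: visit tasks in a precedence-respecting order
    and place each on the processor where it can start earliest.
    `tasks` maps name -> (cost, [predecessor names])."""
    finish = {}                    # task -> finish time
    proc_free = [0] * n_procs      # next free time per processor
    schedule = {}

    # Simple topological order via repeated scanning (fine for small graphs).
    remaining = dict(tasks)
    while remaining:
        name = next(t for t, (_, preds) in remaining.items()
                    if all(p in finish for p in preds))
        cost, preds = remaining.pop(name)
        ready = max((finish[p] for p in preds), default=0)
        proc = min(range(n_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[proc], ready)
        finish[name] = start + cost
        proc_free[proc] = finish[name]
        schedule[name] = (proc, start)
    return schedule

# Fork-join graph: T1 feeds T2 and T3, which both feed T4.
print(list_schedule({"T1": (2, []), "T2": (3, ["T1"]),
                     "T3": (1, ["T1"]), "T4": (2, ["T2", "T3"])}, 2))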