378 research outputs found

    A comparative study of concurrency control algorithms for distributed databases

    The declining cost of computer hardware and the increasing data processing needs of geographically dispersed organizations have led to substantial interest in distributed data management. These trends have prompted a reconsideration of centralized database design, and distributed databases have emerged as a result. A number of advantages follow from keeping duplicate copies of data in a distributed database, among them increased data accessibility, more responsive data access, higher reliability, and load sharing. These and other benefits must be balanced against the additional cost and complexity that replication introduces. This thesis considers the problem of concurrency control for multiple-copy databases. Several synchronization techniques are discussed, and a few algorithms for concurrency control are evaluated and compared.

    A categorization scheme for concurrency control protocols in distributed databases

    The problem of concurrency control in distributed databases is very complex. As a result, a great number of control algorithms have been proposed in recent years. This research is aimed at the development of a viable categorization scheme for these various algorithms. The scheme is based on the theoretical concept of serializability, but is qualitative in nature. An important class of serializable execution sequences, the conflict-preserving-serializable schedules, leads to the identification of fundamental attributes common to all algorithms included in this study. These attributes serve as the underlying philosophy for the categorization scheme. Combined with the two logical approaches of prevention and correction of nonserializability, the result is a flexible and extensive categorization scheme which accounts for all algorithms studied and suggests the possibility of new algorithms.
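    The serializability notion this abstract relies on can be made concrete with the standard precedence-graph test for conflict serializability. The sketch below is illustrative only and is not drawn from the thesis; the schedule representation and function names are assumptions.

```python
# Minimal precedence-graph test for conflict serializability (illustrative sketch).
# A schedule is a list of (transaction, operation, data_item) tuples.

from collections import defaultdict

def conflict_serializable(schedule):
    # Add an edge Ti -> Tj whenever an operation of Ti precedes a conflicting
    # operation of Tj on the same item (conflicting = not both are reads).
    edges = defaultdict(set)
    for i, (ti, op_i, item_i) in enumerate(schedule):
        for tj, op_j, item_j in schedule[i + 1:]:
            if ti != tj and item_i == item_j and (op_i == "w" or op_j == "w"):
                edges[ti].add(tj)

    # The schedule is conflict-serializable iff the precedence graph is acyclic.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def has_cycle(node):
        color[node] = GRAY
        for succ in edges[node]:
            if color[succ] == GRAY:
                return True
            if color[succ] == WHITE and has_cycle(succ):
                return True
        color[node] = BLACK
        return False

    txns = {t for t, _, _ in schedule}
    return not any(has_cycle(t) for t in txns if color[t] == WHITE)

if __name__ == "__main__":
    s = [("T1", "r", "x"), ("T2", "w", "x"), ("T1", "w", "x"), ("T2", "r", "y")]
    print(conflict_serializable(s))  # False: T1 -> T2 and T2 -> T1 form a cycle
```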

    The Integration of Database Systems


    A systems approach to synchronization and naming in a distributed computing environment

    This thesis describes the development of a distributed computer architecture that supports the interconnection of loosely coupled computing resources (sites) by heterogeneous communication networks. The internetwork system, named the intelligent message transport system, provides reliable delivery of intersite messages even though the interconnecting networks may use a probabilistic ("best-effort") delivery scheme. Processors are interfaced to each other by a network gateway processor. The gateway processors are autonomous network controllers that provide a simple and consistent site interface to the internetwork system. The gateway processors also provide high-level functions at the Transport Layer of the system rather than burdening the user with networking details at the Application Layer. These high-level functions include the synchronization of concurrent updates to replicated data objects and the management of the system-wide name space for shared data.
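    As a rough illustration of what a gateway must do to turn best-effort delivery into reliable delivery, the following sketch pairs sequence numbering and retransmission on the sending side with duplicate suppression on the receiving side. It is a hypothetical sketch under simple assumptions, not the thesis design; all class and parameter names are invented for illustration.

```python
import random

class BestEffortNetwork:
    """Models the underlying networks: each send may silently drop the message."""
    def __init__(self, loss_rate=0.3):
        self.loss_rate = loss_rate

    def send(self, msg):
        # True if this particular transmission made it through, False if lost.
        return random.random() >= self.loss_rate

class GatewayReceiver:
    """Receiving gateway: suppresses duplicates so each message is applied once."""
    def __init__(self):
        self.seen = set()
        self.delivered = []

    def deliver(self, seq, payload):
        if seq not in self.seen:          # retransmitted duplicates are dropped
            self.seen.add(seq)
            self.delivered.append(payload)

class GatewaySender:
    """Sending gateway: numbers each message and retransmits until acknowledged."""
    def __init__(self, network, receiver):
        self.network = network
        self.receiver = receiver
        self.next_seq = 0

    def reliable_send(self, payload, max_retries=20):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        for _ in range(max_retries):
            if self.network.send((seq, payload)):     # data reached the receiver
                self.receiver.deliver(seq, payload)
                if self.network.send(("ack", seq)):   # the ack may itself be lost...
                    return seq
                # ...in which case we retransmit and rely on receiver deduplication.
        raise RuntimeError("destination unreachable after retries")

receiver = GatewayReceiver()
sender = GatewaySender(BestEffortNetwork(loss_rate=0.3), receiver)
for msg in ["update x", "update y", "rename /a to /b"]:
    sender.reliable_send(msg)
print(receiver.delivered)   # all three messages, in order, exactly once
```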

    New Protocol for Multidatabase Concurrency Control


    Building on Quicksand

    Reliable systems have always been built out of unreliable components. Early on, the reliable components were small, such as mirrored disks or ECC (Error Correcting Codes) in core memory. These systems were designed such that failures of the small components were transparent to the application. Later, the size of the unreliable components grew larger, and semantic challenges crept into the application when failures occurred. As the granularity of the unreliable component grows, the latency to communicate with a backup becomes unpalatable. This leads to a more relaxed model for fault tolerance: the primary system acknowledges the work request and its actions without waiting to ensure that the backup is notified of the work. This improves the responsiveness of the system. There are two implications of asynchronous state capture: 1) Everything promised by the primary is probabilistic. There is always a chance that an untimely failure shortly after the promise results in a backup proceeding without knowledge of the commitment. Hence, nothing is guaranteed! 2) Applications must ensure eventual consistency. Since work may be stuck in the primary after a failure and reappear later, the processing order for work cannot be guaranteed. Platform designers are struggling to make this easier for their applications. Emerging patterns of eventual consistency and probabilistic execution may soon yield a way for applications to express requirements for a "looser" form of consistency while providing availability in the face of ever larger failures. This paper recounts portions of the evolution of these trends, attempts to show the patterns that span these changes, and talks about future directions as we continue to "build on quicksand".
    Comment: CIDR 2009
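    The asynchronous-replication pattern the abstract describes can be sketched in a few lines: the primary acknowledges work before the backup has seen it, leaving a window in which a failure loses the promise. This is an illustration of the pattern, not code from the paper; the class names and the queue-based replication model are assumptions.

```python
import queue

class Primary:
    """Acknowledges work immediately; replication to the backup happens later."""
    def __init__(self):
        self.log = []
        self.replication_queue = queue.Queue()   # drained asynchronously by the backup

    def submit(self, work):
        self.log.append(work)
        self.replication_queue.put(work)         # enqueued, not yet seen by the backup
        return "ack"                             # the promise is made before the backup knows

class Backup:
    """Applies replicated work whenever it gets around to draining the queue."""
    def __init__(self, primary):
        self.primary = primary
        self.state = []

    def drain(self):
        # Anything still queued when the primary fails may never arrive,
        # which is why the primary's acknowledgement was only probabilistic.
        while not self.primary.replication_queue.empty():
            self.state.append(self.primary.replication_queue.get())

primary = Primary()
backup = Backup(primary)
print(primary.submit("debit account A by $10"))  # "ack" returned immediately
# A crash of the primary at this point would lose the promised debit;
# the application must be written to tolerate that (eventual consistency).
backup.drain()
print(backup.state)                              # ['debit account A by $10'] if no failure
```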

    Controlling Disk Contention for Parallel Query Processing in Shared Disk Database Systems

    Shared Disk database systems offer a high flexibility for parallel transaction and query processing, because each node has access to the entire database and can therefore process any transaction, query, or subquery. Compared to Shared Nothing, this is particularly advantageous for scan queries, for which the degree of intra-query parallelism as well as the scan processors themselves can be chosen dynamically. On the other hand, there is the danger of disk contention between subqueries, in particular for index scans. We present a detailed simulation study to analyze the effectiveness of parallel scan processing in Shared Disk database systems. In particular, we investigate the relationship between the degree of declustering and the degree of scan parallelism for relation scans, clustered index scans, and non-clustered index scans. Furthermore, we study the usefulness of disk caches and prefetching for limiting disk contention. Finally, we show the importance of dynamically choosing the degree of scan parallelism to control disk contention in multi-user mode.
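    The interplay between declustering, scan parallelism, and disk contention can be illustrated with a hypothetical back-of-the-envelope heuristic; this is not the paper's simulation model, and the function, parameters, and scaling rule are assumptions made only for illustration.

```python
# Hypothetical heuristic: pick a degree of scan parallelism for a Shared Disk
# system given the degree of declustering and the current disk utilization, so
# that a new scan does not saturate the shared disks in multi-user mode.

def choose_scan_parallelism(degree_of_declustering: int,
                            disk_utilization: float,
                            max_parallelism: int) -> int:
    """Return the number of scan processes to start.

    degree_of_declustering: number of disks the relation is spread over
    disk_utilization:       current average utilization of those disks, 0.0..1.0
    max_parallelism:        upper bound imposed by available processors
    """
    # With idle disks, one scan process per disk exploits the full I/O bandwidth.
    io_bound = degree_of_declustering
    # As the disks get busier, scale the parallelism down to limit contention.
    headroom = max(0.0, 1.0 - disk_utilization)
    contention_bound = max(1, int(io_bound * headroom))
    return min(contention_bound, max_parallelism)

# Example: relation declustered over 16 disks, disks 50% busy, 32 CPUs available.
print(choose_scan_parallelism(16, 0.5, 32))   # -> 8 scan processes
```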

    Management of concurrency in a reliable object-oriented computing system

    PhD Thesis
    Modern computing systems support concurrency as a means of increasing the performance of the system. However, the potential for increased performance is not without its problems. For example, lost updates and inconsistent retrieval are but two of the possible consequences of unconstrained concurrency. Many concurrency control techniques have been designed to combat these problems; this thesis considers the applicability of some of these techniques in the context of a reliable object-oriented system supporting atomic actions. The object-oriented programming paradigm is one approach to handling the inherent complexity of modern computer programs. By modeling entities from the real world as objects which have well-defined interfaces, the interactions in the system can be carefully controlled. By structuring sequences of such interactions as atomic actions, the consistency of the system is assured. Objects are encapsulated entities, such that their internal representation is not externally visible. This thesis postulates that this encapsulation should also include the capability for an object to be responsible for its own concurrency control. Given this assumption, the thesis explores the means by which the property of type-inheritance possessed by object-oriented languages can be exploited to allow programmers to explicitly control the level of concurrency an object supports. In particular, an object-oriented concurrency controller based upon the technique of two-phase locking is described and implemented using type-inheritance. The thesis also shows how this inheritance-based approach is highly flexible, such that the basic concurrency control capabilities can be adopted unchanged or overridden with more type-specific concurrency control if required.
    UK Science and Engineering Research Council, Serc/Alve
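    A minimal sketch of the idea of inheriting concurrency control is shown below. It is not the thesis's implementation or class hierarchy; the names and the simplified locking model are assumptions. A base class gives each object its own read/write locking, and a subclass either inherits that behaviour unchanged or overrides it with type-specific rules.

```python
import threading

class LockedObject:
    """Base class: each instance controls its own concurrency via read/write locks,
    which are held until the end of the enclosing atomic action."""
    READ, WRITE = "read", "write"

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def set_lock(self, mode):
        """Block until the requested lock is compatible with current holders, then take it."""
        with self._cond:
            if mode == self.READ:
                while self._writer:
                    self._cond.wait()
                self._readers += 1
            else:
                while self._writer or self._readers > 0:
                    self._cond.wait()
                self._writer = True

    def release_lock(self, mode):
        """Release a lock when the atomic action that acquired it completes."""
        with self._cond:
            if mode == self.READ:
                self._readers -= 1
            else:
                self._writer = False
            self._cond.notify_all()

class Counter(LockedObject):
    """Adopts the inherited controller unchanged; a type-specific subclass could
    instead override set_lock/release_lock, e.g. to allow commuting increments."""
    def __init__(self):
        super().__init__()
        self.value = 0

    def increment(self):
        self.set_lock(self.WRITE)          # acquire before use ...
        try:
            self.value += 1
        finally:
            self.release_lock(self.WRITE)  # ... released when this small action ends

c = Counter()
threads = [threading.Thread(target=lambda: [c.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(c.value)   # 4000: the inherited locking prevents lost updates
```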

    Analysis of parallel scan processing in Shared Disk database systems

    Shared Disk database systems offer a high flexibility for parallel transaction and query processing, because each node has access to the entire database and can therefore process any transaction, query, or subquery. Compared to Shared Nothing database systems, this is particularly advantageous for scan queries, for which the degree of intra-query parallelism as well as the scan processors themselves can be chosen dynamically. On the other hand, there is the danger of disk contention between subqueries, in particular for index scans. We present a detailed simulation study to analyze the effectiveness of parallel scan processing in Shared Disk database systems. In particular, we investigate the relationship between the degree of declustering and the degree of scan parallelism for relation scans, clustered index scans, and non-clustered index scans. Furthermore, we study the usefulness of disk caches and prefetching for limiting disk contention. Finally, we show that disk contention in multi-user mode can be limited for Shared Disk database systems by dynamically choosing the degree of scan parallelism.
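    The role of prefetching mentioned in both scan-processing abstracts can be illustrated with simple arithmetic: fetching several sequential pages per disk request means fewer requests overall, and therefore fewer opportunities to interfere with concurrent subqueries. The numbers and the page/prefetch model below are assumptions for illustration, not figures from the paper.

```python
# Illustrative arithmetic: how sequential prefetching reduces the number of
# disk requests a clustered scan issues (and hence its contention footprint).

def disk_requests(pages_to_scan: int, prefetch_size: int) -> int:
    """Requests needed to read `pages_to_scan` sequential pages when each
    request prefetches `prefetch_size` pages into the disk cache."""
    return -(-pages_to_scan // prefetch_size)   # ceiling division

# Scanning 10,000 sequential pages:
for prefetch in (1, 8, 32):
    print(prefetch, disk_requests(10_000, prefetch))
# prefetch=1  -> 10000 requests (every page is a separate disk access)
# prefetch=8  ->  1250 requests
# prefetch=32 ->   313 requests, far fewer chances to collide with other scans
```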