224 research outputs found

    A comparative study of concurrency control algorithms for distributed databases

    The declining cost of computer hardware and the increasing data processing needs of geographically dispersed organizations have led to substantial interest in distributed data management. These characteristics have prompted a reconsideration of the design of centralized databases, and distributed databases have emerged as a result. A number of advantages follow from keeping duplicate copies of data in a distributed database, among them increased data accessibility, more responsive data access, higher reliability, and load sharing. These and other benefits must be balanced against the additional cost and complexity introduced in providing them. This thesis considers the problem of concurrency control for multiple-copy databases. Several synchronization techniques are described, and a few algorithms for concurrency control are evaluated and compared.
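    The abstract does not name the specific algorithms compared. As a purely illustrative sketch of one common family of replica synchronization, the fragment below implements Gifford-style quorum consensus, in which read and write quorums overlap so that every read observes the most recently written version; the class name, quorum sizes and in-memory replica representation are assumptions made for the example, not details from the thesis.

```python
# Illustrative sketch of quorum-consensus replica control: reads and
# writes each gather a quorum, and because r + w > n every read quorum
# intersects every write quorum, so the highest version number seen by
# a read is the latest committed write. Names and sizes are invented.

from dataclasses import dataclass

@dataclass
class Replica:
    version: int = 0
    value: str = ""

class QuorumStore:
    def __init__(self, n=5, r=3, w=3):
        assert r + w > n, "read and write quorums must intersect"
        self.replicas = [Replica() for _ in range(n)]
        self.r, self.w = r, w

    def read(self):
        # Contact any r replicas; return the value with the highest version.
        quorum = self.replicas[: self.r]
        newest = max(quorum, key=lambda rep: rep.version)
        return newest.value, newest.version

    def write(self, value):
        # Learn the current version from a read quorum, then install the
        # new value with a larger version number at w replicas.
        _, version = self.read()
        for rep in self.replicas[: self.w]:
            rep.version = version + 1
            rep.value = value

store = QuorumStore()
store.write("x=1")
print(store.read())   # -> ('x=1', 1)
```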

    A database management capability for Ada

    The data requirements of mission-critical defense systems have been increasing dramatically. Command and control, intelligence, logistics, and even weapons systems are being required to integrate, process, and share ever-increasing volumes of information. To meet this need, systems are now being specified that incorporate database management subsystems for handling the storage and retrieval of information. It is expected that a large number of the next generation of mission-critical systems will contain embedded database management systems. Since the use of Ada has been mandated for most of these systems, it is important to address the issues of providing database management capabilities that can be closely coupled with Ada. A comprehensive distributed database management project has been investigated; its key deliverables are three closely related prototype systems implemented in Ada, which are discussed here.

    Process algebra approach to parallel DBMS performance modelling

    Abstract unavailable; please refer to the PDF.

    Design and implementation of a transaction manager for a relational database

    Multi-user database management systems are in great demand because of the information requirements of our modern industrial society. A clear requirement is that database resources be shared by many users at the same time. Transaction management aims to manage concurrent database access by multiple users while preserving the consistency of the database. In this thesis a single-user relational database management system, REQUIEM, is used as a vehicle to investigate improved methods for achieving this. A module, called the REQUIEM Transaction Manager (RTM), is built on top of the original REQUIEM to produce a multi-user database management system. The design work of this thesis is founded upon various techniques for transaction management proposed in the published literature, which are critically assessed, and upon a mechanism which combines appealing features from existing methodologies. The problems of transaction management considered in this thesis are: 1. concurrency control, 2. granularity control, 3. deadlock control, and 4. recovery control. The RTM is also compared with the transaction management facilities of conventional commercial systems such as DB2, INGRES and ORACLE.
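    The abstract lists the four problems addressed but does not reproduce the RTM's mechanisms. As a hedged sketch of textbook solutions to two of them, concurrency control and deadlock control, the fragment below shows strict two-phase locking with deadlock detection over a wait-for graph; the class and method names are illustrative and are not taken from REQUIEM or the RTM.

```python
# Illustrative lock manager: strict two-phase locking with deadlock
# detection via a wait-for graph (a cycle in the graph means deadlock).
# Generic textbook mechanism, not the RTM implementation.

from collections import defaultdict

class LockManager:
    def __init__(self):
        self.holders = {}                  # item -> transaction holding its lock
        self.waits_for = defaultdict(set)  # txn -> set of txns it waits for

    def acquire(self, txn, item):
        holder = self.holders.get(item)
        if holder is None or holder == txn:
            self.holders[item] = txn
            return True
        # Record the wait edge, then check for a cycle (deadlock).
        self.waits_for[txn].add(holder)
        if self._has_cycle(txn):
            self.waits_for[txn].discard(holder)
            raise RuntimeError(f"deadlock: abort transaction {txn}")
        return False                       # caller must block and retry later

    def release_all(self, txn):
        # Strict 2PL: every lock is released together at commit or abort.
        self.holders = {i: t for i, t in self.holders.items() if t != txn}
        self.waits_for.pop(txn, None)

    def _has_cycle(self, start):
        seen, stack = set(), [start]
        while stack:
            t = stack.pop()
            for nxt in self.waits_for.get(t, ()):
                if nxt == start:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False
```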

    Nonblocking commit protocols


    Robust data storage in a network of computer systems

    PhD Thesis. Robustness of data in this thesis is taken to mean reliable storage of data and also high availability of data objects in spite of the occurrence of faults. Algorithms and data structures which can be used to provide such robustness in the presence of various disk, processor and communication network failures are described. Reliable storage of data at individual nodes in a network of computer systems is based on the use of a stable storage mechanism combined with strategies which are used to help ensure crash resistance of file operations in spite of the use of buffering mechanisms by operating systems. High availability of data in the network is maintained by replicating data on different computers, and mutual consistency between replicas is ensured in spite of network partitioning. A stable storage system which provides atomicity for more complex data structures, instead of the usual fixed-size page, has been designed and implemented and its performance evaluated. A crash-resistant file system has also been implemented and evaluated. Many of the techniques presented here are used in the design of what we call CRES (Crash-resistant, Replicated and Stable) storage. CRES storage provides fault tolerance facilities for various disk and processor faults. It also provides fault tolerance for network partitioning through the provision of an algorithm for the update and merge of a partitioned data storage system.
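    The thesis's own stable storage data structures are not spelled out in the abstract. The following is a minimal sketch, under stated assumptions, of the classic duplexed-write idea that stable storage mechanisms of this kind build on: each record is written to two copies in a fixed order, each protected by a checksum, so a crash during a write damages at most one copy and recovery can keep a verifiable one. The file naming and record format are invented for the example.

```python
# Minimal sketch of stable storage by duplexed writes: every logical
# record is written to two files in a fixed order, each tagged with a
# checksum; after a crash mid-write at most one copy is damaged, and
# recovery keeps a copy that still verifies. Illustrative only.

import hashlib, json, os

def _pack(seq, data):
    payload = json.dumps({"seq": seq, "data": data}).encode()
    return hashlib.sha256(payload).hexdigest().encode() + b"\n" + payload

def _unpack(raw):
    digest, _, payload = raw.partition(b"\n")
    if hashlib.sha256(payload).hexdigest().encode() != digest:
        return None                       # checksum mismatch: damaged copy
    return json.loads(payload)

def stable_write(path, seq, data):
    # Write copy A fully (with fsync) before touching copy B, so a crash
    # can corrupt at most one of the two copies.
    for suffix in (".a", ".b"):
        with open(path + suffix, "wb") as f:
            f.write(_pack(seq, data))
            f.flush()
            os.fsync(f.fileno())

def stable_read(path):
    copies = []
    for suffix in (".a", ".b"):
        try:
            with open(path + suffix, "rb") as f:
                rec = _unpack(f.read())
                if rec is not None:
                    copies.append(rec)
        except FileNotFoundError:
            pass
    # Prefer the verified copy with the highest sequence number.
    return max(copies, key=lambda r: r["seq"], default=None)
```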

    A quorum-based commit and termination protocol for distributed database systems

    A quorum-based commit and termination protocol is designed with the goal of maintaining high data availability in the presence of failures. The proposed protocol is resilient to arbitrary concurrent site failures, lost messages, and network partitioning. The major difference between this protocol and existing ones is that the voting partition-processing strategy is taken into consideration in the design. As a result, the protocol is expected to maintain higher data availability.
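    The paper's exact vote assignments and state rules are not given in the abstract. As a rough sketch of the underlying quorum idea, the fragment below shows a termination decision for a single partition with a commit quorum Vc and an abort quorum Va chosen so that Va + Vc exceeds the total vote weight V, which prevents two disjoint partitions from reaching opposite decisions; the site states, thresholds and decision rule shown are simplified assumptions, not the protocol as published.

```python
# Sketch of a quorum-based termination decision inside one partition.
# With total weight V, commit quorum Vc and abort quorum Va such that
# Va + Vc > V, no two disjoint partitions can decide differently.
# States and thresholds are illustrative, not the published rules.

def termination_decision(states, weights, V, Vc, Va):
    """states: site -> local state ('precommitted', 'prepared' or 'aborted'),
    for the sites reachable in this partition; weights: site -> vote weight."""
    assert Va + Vc > V, "quorums must intersect"
    in_partition = sum(weights[s] for s in states)
    any_precommit = any(st == "precommitted" for st in states.values())

    if any_precommit and in_partition >= Vc:
        return "commit"   # a precommit witness exists and a commit quorum is present
    if not any_precommit and in_partition >= Va:
        return "abort"    # no site has precommitted and an abort quorum is present
    return "block"        # neither quorum can be formed: wait for the partition to heal

# Five sites of weight 1; a three-site partition holding a precommitted
# site can commit, while the remaining two-site partition can only block.
partition = {"s1": "precommitted", "s2": "prepared", "s3": "prepared"}
w = {f"s{i}": 1 for i in range(1, 6)}
print(termination_decision(partition, w, V=5, Vc=3, Va=3))   # -> 'commit'
```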

    An Analytical Model for Evaluating Database Update Schemes

    A methodology is presented for evaluating the performance of database update schemes. The methodology uses the M/Hr/1 queueing model as a basis for the analysis and makes use of the history of how data are used in the database. Parameters are introduced which can be set according to the characteristics of a specific system; these include the update-to-retrieval ratio, average file size, overhead, block size and the expected number of items in the database. The analysis is specifically directed toward the support of derived data within the relational model. Three support methods are analyzed. These are first examined in a centralized database system, and the analysis is then extended to measure performance in a distributed system. Because concurrency is a major problem in a distributed system, the support of derived data is analyzed with respect to three distributed concurrency control techniques: master/slave, distributed and synchronized. In addition to its use as a performance predictor, the development of the methodology demonstrates how queueing theory may be used to investigate other related database problems. This is an important benefit given the lack of fundamental results in the area of using queueing theory to analyze database performance.
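    The abstract names the M/Hr/1 model (Poisson arrivals, hyperexponential service, a single server) but gives no formulas or parameter values. As an illustration of how such a model yields a response-time estimate, the sketch below applies the Pollaczek-Khinchine mean-value formula for the more general M/G/1 queue, of which M/Hr/1 is a special case, using a two-branch hyperexponential service time that mixes fast retrievals with slower updates; the arrival rate, service rates and update mix are invented example numbers, not figures from the paper.

```python
# Illustrative M/G/1 mean response time via the Pollaczek-Khinchine
# formula, with a two-branch hyperexponential (H2) service time mixing
# fast retrievals and slower updates. All numbers are invented examples.

def h2_moments(p_update, mu_update, mu_retrieval):
    """First and second moments of a 2-branch hyperexponential service:
    with probability p_update service ~ Exp(mu_update), else Exp(mu_retrieval)."""
    q = 1.0 - p_update
    m1 = p_update / mu_update + q / mu_retrieval
    m2 = 2.0 * (p_update / mu_update**2 + q / mu_retrieval**2)
    return m1, m2

def mg1_response_time(lam, m1, m2):
    """Mean time in system for M/G/1: waiting time W = lam*E[S^2] / (2*(1-rho)),
    response time T = W + E[S], where rho = lam*E[S]."""
    rho = lam * m1
    assert rho < 1.0, "queue must be stable (utilization below 1)"
    wait = lam * m2 / (2.0 * (1.0 - rho))
    return wait + m1

# Example: 20% updates served at 50/s, 80% retrievals served at 200/s,
# requests arriving at 60 per second.
m1, m2 = h2_moments(p_update=0.2, mu_update=50.0, mu_retrieval=200.0)
print(f"mean response time: {mg1_response_time(60.0, m1, m2) * 1000:.2f} ms")
```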