Performance study of protocols in replicated database.
by Ching-Ting, Ng. Thesis (M.Phil.)--Chinese University of Hong Kong, 1996. Includes bibliographical references (leaves 79-82).
Abstract
Acknowledgement
Chapter 1: Introduction
Chapter 2: Background
  2.1 Protocols tackling site failure
  2.2 Protocols tackling partition failure
    2.2.1 Primary site
    2.2.2 Quorum consensus protocol
    2.2.3 Missing writes
    2.2.4 Virtual partition protocol
  2.3 Protocols to enhance the performance of updating
    2.3.1 Independent updates and incremental agreement in replicated databases
    2.3.2 A transaction replication scheme for a replicated database with node autonomy
Chapter 3: Transaction Replication Scheme
  3.1 A TRS for a replicated database with node autonomy
    3.1.1 Example
    3.1.2 Problem
    3.1.3 Network model
    3.1.4 Transaction and data model
    3.1.5 Histories and one-copy serializability
    3.1.6 Transaction broadcasting scheme
    3.1.7 Local transactions
    3.1.8 Public transactions
    3.1.9 A conservative timestamping algorithm
    3.1.10 Decentralized two-phase commit
    3.1.11 Partition failures
Chapter 4: Simulation Model
  4.1 Simulation model
    4.1.1 Model design
  4.2 Implementation
    4.2.1 Simulation
    4.2.2 Simulation language
Chapter 5: Performance Results and Analysis
  5.1 Simulation results and data analysis
    5.1.1 Experiment 1: Variation of TRS period
    5.1.2 Experiment 2: Variation of clock synchronization
    5.1.3 Experiment 3: Variation of ratio of local to public transactions
    5.1.4 Experiment 4: Variation of number of operations
    5.1.5 Experiment 5: Variation of message transmit delay
    5.1.6 Experiment 6: Variation of the interarrival time of transactions
    5.1.7 Experiment 7: Variation of operation CPU cost
    5.1.8 Experiment 8: Variation of disk I/O time
    5.1.9 Experiment 9: Variation of cache hit ratio
    5.1.10 Experiment 10: Variation of number of data accesses
    5.1.11 Experiment 11: Variation of read operation ratio
    5.1.12 Experiment 12: One site failed
    5.1.13 Experiment 13: Variation of sites available
Chapter 6: Conclusion
Bibliography
Appendix A: Implementation
  A.1 Assumptions of system model
    A.1.1 Program description
    A.1.2 TRS system
    A.1.3 Common functional modules for the majority quorum and tree quorum protocols
    A.1.4 Majority quorum consensus protocol
    A.1.5 Tree quorum protocol
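The appendix above implements both a majority quorum and a tree quorum protocol. As a rough illustration of the majority quorum consensus idea (the class and method names below are invented for this sketch, not taken from the thesis): with n replicas and read and write quorums of size floor(n/2) + 1, every read quorum intersects every write quorum, so a read always observes the most recently committed version.

```python
# Minimal sketch of majority quorum consensus over versioned replicas.
# All names here are illustrative; the thesis's own implementation differs.

class MajorityQuorumStore:
    def __init__(self, n_replicas):
        self.n = n_replicas
        self.replicas = [{"version": 0, "value": None} for _ in range(n_replicas)]
        # r = w = floor(n/2) + 1 guarantees r + w > n, hence quorum overlap.
        self.quorum = n_replicas // 2 + 1

    def write(self, value, available):
        """Write to a quorum of available replicas, tagging a fresh version."""
        if len(available) < self.quorum:
            raise RuntimeError("write quorum not available")
        new_version = max(self.replicas[i]["version"] for i in available) + 1
        for i in available[: self.quorum]:
            self.replicas[i] = {"version": new_version, "value": value}

    def read(self, available):
        """Read a quorum and return the value carrying the highest version."""
        if len(available) < self.quorum:
            raise RuntimeError("read quorum not available")
        newest = max((self.replicas[i] for i in available[: self.quorum]),
                     key=lambda r: r["version"])
        return newest["value"]
```

Because any read quorum shares at least one replica with the last write quorum, the highest-version rule recovers the latest value even when some sites fail. Tree quorum protocols obtain the same intersection property with smaller quorums by arranging replicas in a logical tree.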
A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing
Data Grids have been adopted as the platform for scientific communities that
need to share, access, transport, process and manage large data collections
distributed worldwide. They combine high-end computing technologies with
high-performance networking and wide-area storage management techniques. In
this paper, we discuss the key concepts behind Data Grids and compare them with
other data sharing and distribution paradigms such as content delivery
networks, peer-to-peer networks and distributed databases. We then provide
comprehensive taxonomies that cover various aspects of architecture, data
transportation, data replication and resource allocation and scheduling.
Finally, we map the proposed taxonomy to various Data Grid systems not only to
validate the taxonomy but also to identify areas for future exploration.
Through this taxonomy, we aim to categorise existing systems to better
understand their goals and their methodology. This would help evaluate their
applicability for solving similar problems. This taxonomy also provides a "gap
analysis" of this area through which researchers can potentially identify new
issues for investigation. Finally, we hope that the proposed taxonomy and
mapping also helps to provide an easy way for new practitioners to understand
this complex area of research.
Comment: 46 pages, 16 figures, Technical Report
The design of a distributed database and a replicated data management algorithm
Call number: LD2668 .R4 CMSC 1988 V36. Master of Science, Computing and Information Science.
The Architecture of an Autonomic, Resource-Aware, Workstation-Based Distributed Database System
Distributed software systems that are designed to run over workstation
machines within organisations are termed workstation-based. Workstation-based
systems are characterised by dynamically changing sets of machines that are
used primarily for other, user-centric tasks. They must be able to adapt to and
utilize spare capacity when and where it is available, and ensure that the
non-availability of an individual machine does not affect the availability of
the system. This thesis focuses on the requirements and design of a
workstation-based database system, which is motivated by an analysis of
existing database architectures that are typically run over static, specially
provisioned sets of machines. A typical clustered database system -- one that
is run over a number of specially provisioned machines -- executes queries
interactively, returning a synchronous response to applications, with its data
made durable and resilient to the failure of machines. There are no existing
workstation-based databases. Furthermore, other workstation-based systems do
not attempt to achieve the requirements of interactivity and durability,
because they are typically used to execute asynchronous batch processing jobs
that tolerate data loss -- results can be re-computed. These systems use
external servers to store the final results of computations rather than
workstation machines. This thesis describes the design and implementation of a
workstation-based database system and investigates its viability by evaluating
its performance against existing clustered database systems and testing its
availability during machine failures.
Comment: Ph.D. Thesis
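The abstract above hinges on one autonomic behaviour: when a workstation disappears, the system must restore the replication level of the data it held using the machines that remain. A minimal, hypothetical sketch of that re-replication step (the names `place_replicas` and `replication_factor` are invented here, not taken from the thesis):

```python
# Restore each item's replica count after machine churn, using only live
# machines. Placement maps item -> list of host machines. Illustrative only.

def place_replicas(placement, live_machines, replication_factor=3):
    """Return a new placement in which every item has replication_factor
    replicas drawn from live_machines, keeping surviving copies in place."""
    new_placement = {}
    for item, hosts in placement.items():
        survivors = [m for m in hosts if m in live_machines]
        candidates = [m for m in live_machines if m not in survivors]
        needed = max(0, replication_factor - len(survivors))
        # Copy the item onto enough additional live machines.
        new_placement[item] = survivors + candidates[:needed]
    return new_placement
```

Run periodically against the current membership view, a routine like this keeps the failure of any individual workstation from reducing data availability, which is the durability requirement the thesis evaluates.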
Fault tolerant software technology for distributed computing system
Issued as Monthly reports [nos. 1-23], Interim technical report, Technical guide books [nos. 1-2], and Final report, Project no. G-36-64
Contention management for distributed data replication
PhD Thesis
Optimistic replication schemes provide distributed applications with access
to shared data at lower latencies and greater availability. This is
achieved by allowing clients to replicate shared data and execute actions
locally. A consequence of this scheme is that shared data may become
inconsistent. An action executed by one client may produce shared data
that conflicts with another client's updates and, in turn, with any
subsequent actions caused by the conflicting action. The client must then
roll back to the action that produced the conflicting data and execute
some exception handling. This can be achieved by relying
on the application layer to either ignore or handle shared data inconsistencies
when they are discovered during the reconciliation phase of an
optimistic protocol.
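The reconciliation step described above can be sketched very simply: a client action conflicts when the shared item has changed since the client last read it, and conflicting actions are rolled back for the application layer to handle. This is a generic optimistic-concurrency sketch under assumed version-number bookkeeping, not the thesis's actual protocol:

```python
# Shared store maps item -> (version, value). Each optimistic action is
# (item, base_version, new_value): the client read `item` at base_version,
# computed new_value locally, and now tries to reconcile.

def reconcile(shared, actions):
    """Apply actions whose base version is still current; return the
    conflicting (rolled-back) actions for application-level handling."""
    conflicts = []
    for item, base_version, new_value in actions:
        version, _ = shared.get(item, (0, None))
        if base_version != version:
            conflicts.append((item, base_version, new_value))  # roll back
        else:
            shared[item] = (version + 1, new_value)
    return conflicts
```

The thesis's point is precisely that naive designs push every entry of that conflict list up to the application as exception handling, which motivates reducing rollback in the first place.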
Inconsistency of shared data has an impact on the causality relationship
across client actions. In protocol design, it is desirable to preserve the
property of causality between different actions occurring across a distributed
application. Without application level knowledge, we assume
an action causes all the subsequent actions at the same client. With
application knowledge, we can significantly ease the protocol burden of
provisioning causal ordering, as we can identify which actions do not
cause other actions (even if they precede them). This, in turn, makes it
possible for a client to roll back to past actions and change them without
having to alter subsequent actions. Unfortunately, tracking many
application-level causal relations between actions imposes significant
protocol overhead. Minimizing the rollback associated with conflicting
actions, while preserving causality, is therefore desirable because it
reduces the exception handling left to the application layer.
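The benefit of application-level causal knowledge can be made concrete: if the application declares which actions each action was caused by, rolling back a conflicting action need only undo its transitive dependents, not every later action at the client. A small illustrative sketch (the dependency-map representation is assumed here, not taken from the thesis):

```python
# depends_on maps each action id to the set of action ids that caused it.
# Actions absent from any dependency chain survive a rollback untouched.

def rollback_set(action, depends_on):
    """Return the set of actions that must be undone when `action` is
    rolled back: the action itself plus all transitive dependents."""
    undone = {action}
    changed = True
    while changed:
        changed = False
        for a, causes in depends_on.items():
            if a not in undone and causes & undone:
                undone.add(a)
                changed = True
    return undone
```

Without this knowledge, the conservative assumption that every action causes all subsequent actions at the same client would force every later action into the undo set.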
In this thesis, we present a framework that utilizes causality to create
a scheduler that can inform a contention management scheme to reduce
the rollback associated with the conflicting access of shared data.
Our framework uses a backoff contention management scheme to preserve
causality for optimistic replication systems with high causality
requirements, without the need for application-layer knowledge.
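The core of a backoff contention management scheme is simple: after each conflicting access, a client waits a growing, randomized interval before retrying, which spreads out competing accesses to hot shared data. A hedged sketch of a standard exponential-backoff-with-jitter policy (the parameter values are illustrative, not the thesis's):

```python
import random

def backoff_delay(attempt, base=0.01, cap=1.0):
    """Delay in seconds before retry number `attempt` (0-based):
    exponential growth up to `cap`, with full jitter to avoid clients
    retrying in lockstep."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

Applied after each detected conflict, a policy like this reduces repeated conflicting accesses, and hence the rollback the framework is designed to minimize.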
We present experiments which demonstrate that our framework reduces
clients’ rollback and, more importantly, that the overall throughput of
the system is improved when the contention management is used with
applications that require causality to be preserved across all actions
- …