A distributed file service based on optimistic concurrency control
The design of a layered file service for the Amoeba Distributed System is discussed, on top of which various applications can easily be implemented. The bottom layer is formed by the Amoeba Block Services, responsible for implementing stable storage and replicated, highly available disk blocks. The next layer is formed by the Amoeba File Service, which provides version management and concurrency control for tree-structured files. On top of this layer, the applications, ranging from databases to source code control systems, determine the structure of the file trees and provide an interface to the users.
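A minimal sketch (hypothetical names, not the Amoeba interface) of how optimistic concurrency control over versioned files can work: a client reads a file version, prepares its change privately, and the commit succeeds only if the version it read is still the current one.

```python
import threading

class VersionedFile:
    """A file whose contents are committed optimistically, one version at a time."""

    def __init__(self, data=b""):
        self._lock = threading.Lock()   # protects only the commit step
        self._version = 0
        self._data = data

    def read(self):
        # Reads never block writers: return the current version number and contents.
        with self._lock:
            return self._version, self._data

    def commit(self, base_version, new_data):
        # Validation: the commit succeeds only if no other writer
        # installed a newer version since `base_version` was read.
        with self._lock:
            if self._version != base_version:
                return False            # conflict: caller must retry
            self._version += 1
            self._data = new_data
            return True

# Usage: retry loop around an optimistic update.
f = VersionedFile(b"v0")
while True:
    version, data = f.read()
    if f.commit(version, data + b"+edit"):
        break
```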
Superdatabases for Composition of Heterogeneous Databases
Superdatabases are designed to compose and extend databases. In particular, superdatabases allow consistent update across heterogeneous databases. The key idea of the superdatabase is the hierarchical composition of element databases. For global crash recovery, each element database must provide local recovery plus some kind of agreement protocol, such as two-phase commit. For global concurrency control, each element database must have local synchronization with an explicit serial order, such as two-phase locking, timestamps, or optimistic methods. Given element databases satisfying the above requirements, the superdatabase can certify the serializability of global transactions through a concatenation of local serial orders. Combined with previous work on heterogeneous databases, including unified query languages and view integration, we can now build heterogeneous databases that are consistent, adaptable, and extensible by construction.
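A rough sketch of the certification idea (simplified, with hypothetical structures): each element database reports the local serial order of the global transactions it executed, and the superdatabase accepts the set only if those orders can be concatenated into one global order, i.e. the merged precedence relation has no cycle.

```python
from collections import defaultdict

def certify_global_order(local_orders):
    """local_orders: per-database serial orders, e.g. [["T1", "T2"], ["T1", "T2"]].
    Returns a global serial order if one exists, otherwise None (not serializable)."""
    # Build precedence edges implied by every local serial order.
    edges = defaultdict(set)
    nodes = set()
    for order in local_orders:
        nodes.update(order)
        for earlier, later in zip(order, order[1:]):
            edges[earlier].add(later)

    # Topological sort: succeeds iff the combined precedence graph is acyclic.
    indegree = {t: 0 for t in nodes}
    for src in edges:
        for dst in edges[src]:
            indegree[dst] += 1
    ready = [t for t, d in indegree.items() if d == 0]
    result = []
    while ready:
        t = ready.pop()
        result.append(t)
        for nxt in edges[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    return result if len(result) == len(nodes) else None

# T1 before T2 at both sites: certifiable.  Conflicting local orders return None.
print(certify_global_order([["T1", "T2"], ["T1", "T2"]]))
```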
Improving DBMS performance through diverse redundancy
Database replication is widely used to improve both fault tolerance and DBMS performance. Non-diverse database replication has a significant limitation: it is effective against crash failures only. Diverse redundancy is an effective mechanism for tolerating a wider range of failures, including many non-crash failures. However, it has not been adopted in practice because many see DBMS performance as the main concern. In this paper we show experimental evidence that diverse redundancy (diverse replication) can bring benefits in terms of DBMS performance, too. We report on experimental results with an optimistic architecture built with two diverse DBMSs under a load derived from the TPC-C benchmark, which show that a diverse pair performs faster not only than non-diverse pairs but also than the individual copies of the DBMSs used. This result is important because it shows potential for DBMS performance better than anything achievable with the available off-the-shelf servers.
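A toy illustration (not the architecture from the paper) of why a diverse pair can answer reads faster than either engine alone: the same query is sent to both replicas and the first reply wins, so the pair tracks the per-query minimum of the two response times.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
import random
import time

def query_first_of_pair(replicas, sql):
    """replicas: callables, each wrapping a different (diverse) DBMS engine.
    Fire the same read-only query at all of them and return the earliest reply."""
    pool = ThreadPoolExecutor(max_workers=len(replicas))
    futures = [pool.submit(replica, sql) for replica in replicas]
    done, _ = wait(futures, return_when=FIRST_COMPLETED)
    pool.shutdown(wait=False)          # do not wait for the slower engine
    return next(iter(done)).result()

# Hypothetical wrappers around two diverse engines; here they only simulate latency.
def engine_a(sql): time.sleep(random.uniform(0.01, 0.05)); return ("A", sql)
def engine_b(sql): time.sleep(random.uniform(0.01, 0.05)); return ("B", sql)

print(query_first_of_pair([engine_a, engine_b], "SELECT COUNT(*) FROM orders"))
```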
Maintaining consistency in distributed systems
In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems, often within the same application. This leads us to propose an integrated approach that permits applications to combine virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
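For the first of the styles mentioned above, a small sketch of the classic approach: a shared structure made linearizable by a monitor-style lock, so every operation appears to take effect atomically at some point between its invocation and its response.

```python
import threading

class LinearizableRegister:
    """A read/write register whose operations are made atomic by a single mutex."""

    def __init__(self, value=0):
        self._mutex = threading.Lock()
        self._value = value

    def write(self, value):
        with self._mutex:              # mutual exclusion: one operation at a time
            self._value = value

    def read(self):
        with self._mutex:
            return self._value

    def compare_and_set(self, expected, new):
        # A compound operation stays atomic because it holds the same mutex.
        with self._mutex:
            if self._value != expected:
                return False
            self._value = new
            return True

reg = LinearizableRegister()
reg.write(7)
print(reg.compare_and_set(7, 8), reg.read())   # True 8
```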
Transactions Processing Subsystems for Databases Based On ARIES Write-Ahead Logging for The Client-Server Architecture Approach
This paper proposes a formal framework specification that applies an advanced recovery mechanism in a client-server architecture while addressing atomicity and consistency issues. Recovery is a further palpable issue in such dominant architectures, and this paper addresses it in the context of the client-server architecture using extensions of the original ARIES algorithm and concepts of Software Transactional Memory. This novelty has been successfully implemented and tested for suitability and applicability.
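A highly simplified sketch of the write-ahead rule that ARIES-style recovery builds on (illustrative only, not the paper's framework): every update is appended to a durable log before the in-place data is changed, so a redo pass can replay logged work after a crash.

```python
import json
import os

class WriteAheadLog:
    def __init__(self, path):
        self.path = path

    def append(self, record):
        # Write-ahead rule: the log record reaches stable storage
        # before the corresponding data update is allowed.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
            f.flush()
            os.fsync(f.fileno())

def update(db, wal, txid, key, new_value):
    wal.append({"tx": txid, "key": key,
                "before": db.get(key), "after": new_value})
    db[key] = new_value                 # only now touch the data itself

def redo(wal_path, db):
    # Minimal redo pass: replay the logged "after" images in log order.
    if not os.path.exists(wal_path):
        return db
    with open(wal_path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            db[rec["key"]] = rec["after"]
    return db

db, wal = {}, WriteAheadLog("demo.wal")
update(db, wal, txid=1, key="balance", new_value=100)
print(redo("demo.wal", {}))             # rebuilt state: {'balance': 100}
```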
Optimizing recovery protocols for replicated database systems
Nowadays, information technologies and computing systems have a great influence on daily life. Among the computer systems currently in use, distributed systems are of great relevance because of their ability to scale, to provide support for fault tolerance, to improve application performance and to offer high availability.
Replicated systems are a special case of distributed systems. This thesis is centered on the area of replicated databases because of their current widespread use, which demands properties such as low response times, high processing throughput, load balancing among the replicas, data consistency and integrity, and fault tolerance.
In this context, the development of applications that use replicated databases presents difficulties that can be alleviated by using lower-level support services such as communication and membership services. The services provided by group communication systems make it possible to hide the communication details and ease the design of replication and recovery protocols.
This thesis presents a study of the alternatives and strategies employed by the replication and recovery protocols of replicated databases. Different concepts about group communication systems and virtual synchrony are also reviewed. Different types of replication protocols are characterized and classified with respect to the interaction with, or support for, recovery that they can provide; the focus, however, is on protocols based on group communication systems.
Since current commercial systems allow programmers and database administrators to give up consistency to some degree in order to increase performance, it is important to determine the level of consistency that is actually needed. In the case of replicated databases, consistency is closely related to the isolation level established among transactions.
One of the central proposals of this thesis is a recovery protocol for a certification-based replication protocol. Certification-based database replication protocols provide a good basis for the development of their recovery protocols when the snapshot isolation level is used. At that isolation level, readsets need neither be transferred among the replicas nor checked in the certification phase, and since these protocols maintain a history list of the writesets used to certify transactions, that history provides the information needed to transfer the state missed by the recovering replica. The performance of the basic recovery protocol and of an optimized version that compacts the information to be transferred is studied, and the results obtained when testing the implementation of the recovery protocol in the supporting middleware are presented.
The second proposal is based on applying the principle of compacting the recovery information to a recovery protocol for replication protocols based on weak voting. The goal is to minimize the time needed to transfer and apply the information missed by the recovering replica, thus obtaining a more efficient recovery protocol. The good performance of this algorithm has been verified through simulation, carried out with the Omnet++ simulation environment. The experimental results show that this recovery protocol performs well in multiple scenarios.
Finally, the correctness of both recovery algorithms is verified in Chapter 5.
Nowadays, information technology and computing systems have a great influence on our lives. Among current computer systems, distributed systems are among the most important because of their scalability, fault tolerance, performance improvements and high availability.
Replicated systems are a specific case of distributed systems. This Ph.D. thesis is centered on the replicated database field due to their widespread usage, which requires, among other properties: low response times, high throughput, load balancing among replicas, data consistency, data integrity and fault tolerance.
In this scope, the development of applications that use replicated databases raises some problems that can be reduced by using other fault-tolerant building blocks, such as group communication and membership services. Thus, the usage of the services provided by group communication systems (GCS) hides several communication details, simplifying the design of replication and recovery protocols.
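A sketch of the way a replication protocol typically leans on a GCS (interface names are assumptions, not a specific GCS API): writesets are handed to a total-order broadcast primitive, and every replica applies them in the single delivery order chosen by the GCS, so the protocol itself never deals with message ordering.

```python
class TotalOrderBroadcast:
    """In-process stand-in for a GCS: one global sequencer fixes the delivery order."""

    def __init__(self):
        self.subscribers = []

    def join(self, deliver):
        self.subscribers.append(deliver)

    def broadcast(self, message):
        # The GCS picks one order; every member sees the same sequence.
        for deliver in self.subscribers:
            deliver(message)

class Replica:
    def __init__(self, name, gcs):
        self.name, self.log = name, []
        gcs.join(self.deliver)

    def deliver(self, writeset):
        self.log.append(writeset)        # applied in total order at every replica

gcs = TotalOrderBroadcast()
r1, r2 = Replica("r1", gcs), Replica("r2", gcs)
gcs.broadcast({"x": 1})
gcs.broadcast({"y": 2})
print(r1.log == r2.log)                  # True: identical apply order
```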
This Ph.D. thesis surveys the alternatives and strategies being used in the replication and recovery protocols for database replication systems. It also summarizes different concepts about group communication systems and virtual synchrony. As a result, the thesis provides a classification of database replication protocols according to their support to (and interaction with) recovery protocols, always assuming that both kinds of protocol rely on a GCS.
Since current commercial DBMSs allow programmers and database administrators to sacrifice consistency with the aim of improving performance, it is important to select the appropriate level of consistency. Regarding (replicated) databases, consistency is strongly related to the isolation levels being assigned to transactions.
One of the main proposals of this thesis is a recovery protocol for a replication protocol based on certification. Certification-based database replication protocols provide a good basis for the development of their recovery strategies when a snapshot isolation level is assumed. In that level, readsets are not needed in the validation step; as a result, they do not need to be transmitted to other replicas. Additionally, these protocols hold a writeset list that is used in the certification/validation step. That list maintains the set of writesets needed by the recovery protocol. This thesis evaluates the performance of a recovery protocol based on the writeset list transfer (basic protocol) and of an optimized version that compacts the information to be transferred.
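A compact sketch (hypothetical data structures, not the thesis middleware) of the two roles the writeset history list plays in such a protocol: certifying a transaction under snapshot isolation by checking write/write conflicts against writesets committed after its snapshot, and supplying the writesets a recovering replica missed.

```python
class CertificationReplica:
    """Keeps the history of committed writesets, indexed by commit order."""

    def __init__(self):
        self.history = []                      # [(commit_seq, {item: value})]
        self.seq = 0

    def certify(self, start_seq, writeset):
        # Snapshot isolation: only write/write conflicts matter, so readsets
        # never travel between replicas.  Abort if any item in the incoming
        # writeset was overwritten by a transaction committed after start_seq.
        for commit_seq, committed in self.history:
            if commit_seq > start_seq and committed.keys() & writeset.keys():
                return None                    # conflict: abort
        self.seq += 1
        self.history.append((self.seq, dict(writeset)))
        return self.seq                        # commit position

    def missed_writesets(self, last_applied_seq):
        # Recovery: the same history supplies everything a rejoining
        # replica lost while it was down.
        return [ws for seq, ws in self.history if seq > last_applied_seq]

r = CertificationReplica()
r.certify(0, {"x": 1})                         # commits as seq 1
print(r.certify(0, {"x": 2}))                  # None: conflicts with seq 1
print(r.missed_writesets(last_applied_seq=0))  # [{'x': 1}] for a recovering node
```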
The second proposal applies the compaction principle to a recovery protocol designed for weak-voting replication protocols. Its aim is to minimize the time needed for transferring and applying the writesets lost by the recovering replica, obtaining in this way an efficient recovery. The performance of this recovery algorithm has been checked by implementing a simulator; to this end, the Omnet++ simulation framework has been used. The simulation results confirm that this recovery protocol provides good results in multiple scenarios.
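A small illustration (again with assumed structures) of the compaction idea: before the missed writesets are shipped to the recovering replica, they are folded together so that only the most recent value of each item is transferred and applied once.

```python
def compact_writesets(missed):
    """missed: list of writesets in commit order, e.g. [{'x': 1}, {'x': 5, 'y': 2}].
    Later writes supersede earlier ones, so the recovering replica only needs
    the final value of every item touched while it was down."""
    compacted = {}
    for writeset in missed:                # commit order: later entries win
        compacted.update(writeset)
    return compacted

missed = [{"x": 1}, {"y": 2}, {"x": 5}, {"z": 9}, {"y": 7}]
print(compact_writesets(missed))           # {'x': 5, 'y': 7, 'z': 9}
# Five writesets collapse into a single transfer of three items.
```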
Finally, the correctness of both recovery protocols is also justified and presented in Chapter 5.
García Muñoz, L.H. (2013). Optimizing recovery protocols for replicated database systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31632
Execution Autonomy in Distributed Transaction Processing
We study the feasibility of execution autonomy in systems with asynchronous transaction processing based on epsilon-serializability (ESR). The abstract correctness criteria defined by ESR are implemented by techniques such as asynchronous divergence control and asynchronous consistency restoration. Concrete application examples in a distributed environment, such as banking, are described in order to illustrate the advantages of using ESR to support execution autonomy.
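One way to picture asynchronous divergence control (a simplified illustration, not the paper's mechanism): each query is given an inconsistency budget epsilon, and updates may proceed asynchronously as long as the inconsistency they could import into the running query stays within that budget.

```python
class EpsilonCounter:
    """A counter that concurrent updates may change while a query reads it,
    as long as the query's accumulated imported error stays within epsilon."""

    def __init__(self, value=0):
        self.value = value

    def read_with_bound(self, epsilon):
        return _QueryView(self, epsilon)

class _QueryView:
    def __init__(self, counter, epsilon):
        self._counter = counter
        self._epsilon = epsilon
        self._imported = 0                      # inconsistency seen so far

    def apply_concurrent_update(self, delta):
        # Divergence control: admit the asynchronous update only if the
        # query's total imported inconsistency stays <= epsilon.
        if self._imported + abs(delta) > self._epsilon:
            return False                        # defer or block the update
        self._counter.value += delta
        self._imported += abs(delta)
        return True

    def result(self):
        return self._counter.value

view = EpsilonCounter(100).read_with_bound(epsilon=10)
print(view.apply_concurrent_update(+6))   # True: within budget
print(view.apply_concurrent_update(+7))   # False: would exceed epsilon = 10
print(view.result())                      # 106, off by at most epsilon
```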
A comparative study of concurrency control algorithms for distributed databases
The declining cost of computer hardware and the increasing data processing needs of geographically dispersed organizations have led to substantial interest in distributed data management. These characteristics have prompted a reconsideration of the design of centralized databases, and distributed databases have appeared as a result. A number of advantages result from having duplicate copies of data in a distributed database, among them: increased data accessibility, more responsive data access, higher reliability, and load sharing. These and other benefits must be balanced against the additional cost and complexity introduced in providing them. This thesis considers the problem of concurrency control for multiple-copy databases. Several synchronization techniques are mentioned and a few algorithms for concurrency control are evaluated and compared.
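As one concrete example of the kind of algorithm such a comparison covers, here is a minimal sketch of basic timestamp ordering (illustrative only): each data item remembers the largest read and write timestamps that touched it, and any operation arriving too late forces its transaction to restart.

```python
class Abort(Exception):
    """The transaction with this timestamp must restart with a newer one."""

class TimestampOrderingItem:
    """One copy of a data item under basic timestamp-ordering control."""

    def __init__(self, value=None):
        self.value = value
        self.max_read_ts = 0      # youngest transaction that read the item
        self.max_write_ts = 0     # youngest transaction that wrote the item

    def read(self, ts):
        if ts < self.max_write_ts:
            raise Abort(ts)                      # would read an overwritten version
        self.max_read_ts = max(self.max_read_ts, ts)
        return self.value

    def write(self, ts, value):
        if ts < self.max_read_ts or ts < self.max_write_ts:
            raise Abort(ts)                      # write arrives too late
        self.max_write_ts = ts
        self.value = value

x = TimestampOrderingItem(0)
x.write(ts=1, value=10)
print(x.read(ts=2))          # 10
try:
    x.write(ts=1, value=99)  # older than the read at ts=2: rejected
except Abort:
    print("transaction 1 restarts")
```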