
    Optimizing recovery protocols for replicated database systems

    Nowadays, information technology and computing systems have great relevance in our daily lives. Among current computer systems, distributed systems are among the most important because of their scalability, fault tolerance, performance improvements and high availability. Replicated systems are a specific case of distributed systems. This Ph.D. thesis is centered on the replicated database field due to the widespread use of such databases, which requires, among other properties: low response times, high throughput, load balancing among replicas, data consistency, data integrity and fault tolerance. In this scope, the development of applications that use replicated databases raises problems that can be reduced by using lower-level fault-tolerant building blocks, such as group communication and membership services. The services provided by group communication systems (GCS) hide many communication details, simplifying the design of replication and recovery protocols. This thesis surveys the alternatives and strategies used in replication and recovery protocols for replicated databases, and it summarizes different concepts about group communication systems and virtual synchrony. As a result, the thesis provides a classification of database replication protocols according to their support for (and interaction with) recovery protocols, always assuming that both kinds of protocol rely on a GCS. Since current commercial DBMSs allow programmers and database administrators to sacrifice consistency in order to improve performance, it is important to select the appropriate level of consistency. In (replicated) databases, consistency is strongly related to the isolation levels assigned to transactions. One of the main proposals of this thesis is a recovery protocol for a certification-based replication protocol. Certification-based database replication protocols provide a good basis for the development of their recovery strategies when the snapshot isolation level is assumed: at that level, readsets are not needed in the validation step and therefore do not have to be transmitted to other replicas. Additionally, these protocols maintain a writeset list that is used in the certification/validation step, and that list contains the set of writesets needed by the recovery protocol. The thesis evaluates the performance of a recovery protocol based on transferring the writeset list (basic protocol) and of an optimized version that compacts the information to be transferred. The second proposal applies the same compaction principle to a recovery protocol designed for weak-voting replication protocols. Its aim is to minimize the time needed for transferring and applying the writesets lost by the recovering replica, obtaining in this way a more efficient recovery. The performance of this recovery algorithm has been checked by implementing a simulator on the OMNeT++ simulation framework; the simulation results confirm that this recovery protocol provides good results in multiple scenarios. Finally, the correctness of both recovery protocols is justified and presented in Chapter 5.
García Muñoz, LH. (2013). Optimizing recovery protocols for replicated database systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31632
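The compaction step described in this abstract can be illustrated with a minimal sketch (hypothetical names and types, not the thesis implementation): the writesets missed by a recovering replica are folded into a single per-item map that keeps only the last committed value, so each data item is transferred and applied at most once instead of replaying every missed transaction.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Minimal sketch of recovery-information compaction (illustrative names only). */
class RecoveryCompactor {

    /** One certified writeset: the items written by a committed transaction. */
    record WriteSet(long commitOrder, Map<String, byte[]> writtenItems) {}

    /**
     * Folds the writesets missed by a recovering replica into a single map.
     * Later writes overwrite earlier ones, so each item appears once in the
     * state that is shipped to the recovering replica.
     */
    static Map<String, byte[]> compact(List<WriteSet> missedWritesets) {
        Map<String, byte[]> state = new LinkedHashMap<>();
        missedWritesets.stream()
                .sorted((a, b) -> Long.compare(a.commitOrder(), b.commitOrder()))
                .forEach(ws -> state.putAll(ws.writtenItems()));
        return state; // transferred and applied as one batch
    }
}
```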

    Evaluating certification protocols in the partial database state machine

    Partial replication is an alluring technique to ensure the reliability of very large and geographically distributed databases while, at the same time, offering good performance. By correctly exploiting access locality, most transactions become confined to a small subset of the database replicas, thus reducing the processing, storage access and communication overhead associated with replication. The advantages of partial replication have, however, to be weighed against the added complexity that is required to manage it. In fact, if the chosen replica configuration prevents the local execution of transactions or if the overhead of consistency protocols offsets the savings of locality, potential gains cannot be realized. These issues are heavily dependent on the application used for evaluation and render simplistic benchmarks useless. In this paper, we present a detailed analysis of Partial Database State Machine (PDBSM) replication by comparing alternative partial replication protocols with full replication. This is done using a realistic scenario based on a detailed network simulator and access patterns from an industry-standard database benchmark. The results obtained allow us to identify the best configuration for typical on-line transaction processing applications.
European Union, GORDA Project (FP6-IST/004758).
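As a rough illustration of the kind of certification test such protocols run at commit time (a generic rule, not necessarily the exact PDBSM variant evaluated in the paper), the sketch below accepts a transaction only if no transaction that committed after its snapshot wrote an item the certifying transaction read or wrote.

```java
import java.util.List;
import java.util.Set;

/** Illustrative certification check, not the exact protocol from the paper. */
class Certifier {

    record Committed(long commitVersion, Set<String> writeSet) {}

    /**
     * A transaction that started from snapshot 'startVersion' passes certification
     * only if no transaction committed after that snapshot wrote an item that the
     * certifying transaction read or wrote (read-write / write-write conflict).
     */
    static boolean certify(long startVersion, Set<String> readSet, Set<String> writeSet,
                           List<Committed> log) {
        return log.stream()
                .filter(c -> c.commitVersion() > startVersion)
                .noneMatch(c -> c.writeSet().stream()
                        .anyMatch(item -> readSet.contains(item) || writeSet.contains(item)));
    }
}
```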

    Partial replication with strong consistency

    In response to the increasing expectations of their clients, cloud services exploit geo-replication to provide fault tolerance, availability and low latency when executing requests. However, cloud platforms tend to adopt weak consistency semantics, in which replicas may diverge in state independently. These systems offer good response times but at the cost of allowing potential data inconsistencies that may affect user experience. Some systems adopt solutions with strong consistency, which are not as efficient but simplify the development of correct applications by guaranteeing that all replicas in the system maintain the same database state. It is therefore interesting to explore a system that can offer strong consistency while minimizing its main disadvantage: the impact on performance that results from coordinating every replica in the system. A possible way to reduce the cost of replica coordination is to support partial replication. Partially replicating a database allows each server to be responsible for only a subset of the data - a partition - which means that when updating the database only some of the replicas have to be synchronized, improving response times. In this dissertation, we propose an algorithm that implements a distributed replicated database offering strong consistency with support for partial replication. To achieve strong consistency in a partially replicated scenario, our algorithm is in part based on Clock-SI [10], which presents a multi-versioned database providing strong consistency (snapshot isolation) and performs the Two-Phase Commit protocol when coordinating replicas during updates. The algorithm is supported by an architecture that simplifies distributing partitions among datacenters and efficiently propagating operations across nodes in the same partition, thanks to the ChainPaxos [27] algorithm.
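A minimal sketch of the coordination step described above, assuming hypothetical interfaces: the coordinator runs Two-Phase Commit only over the partitions the transaction actually touched, so replica groups holding unrelated partitions never take part in the agreement.

```java
import java.util.Collection;

/** Hypothetical participant interface for one partition's replica group. */
interface PartitionParticipant {
    boolean prepare(String txId);   // vote: can this partition commit the transaction?
    void commit(String txId);
    void abort(String txId);
}

/** Sketch of Two-Phase Commit restricted to the partitions a transaction touched. */
class PartialCommitCoordinator {

    static boolean commitAcrossPartitions(String txId, Collection<PartitionParticipant> touched) {
        // Phase 1: collect votes only from the touched partitions.
        boolean allYes = touched.stream().allMatch(p -> p.prepare(txId));

        // Phase 2: commit everywhere if all voted yes, otherwise abort everywhere.
        if (allYes) {
            touched.forEach(p -> p.commit(txId));
        } else {
            touched.forEach(p -> p.abort(txId));
        }
        return allYes;
    }
}
```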

    Rigorous Design of Distributed Transactions

    Database replication is traditionally envisaged as a way of increasing fault tolerance and availability. It is advantageous to replicate the data when the transaction workload is predominantly read-only. However, updating replicated data within a transactional framework is a complex affair due to failures and race conditions among conflicting transactions. This thesis investigates various mechanisms for the management of replicas in a large distributed system, formalizing and reasoning about the behavior of such systems using Event-B. We begin by studying current approaches for the management of replicated data and explore the use of broadcast primitives for processing transactions. Subsequently, we outline how a refinement-based approach can be used for the development of a reliable replicated database system that ensures atomic commitment of distributed transactions using ordered broadcasts. Event-B is a formal technique that consists of describing the problem rigorously in an abstract model, introducing solutions or design details in refinement steps to obtain more concrete specifications, and verifying that the proposed solutions are correct. This technique requires the discharge of proof obligations for consistency checking and refinement checking. The B tools provide significant automated proof support for generating the proof obligations and discharging them. The majority of the proof obligations are proved by the automatic prover of the tools; however, some complex proof obligations require interaction with the interactive prover. These proof obligations also help discover new system invariants. The proof obligations and the invariants help us to understand the complexity of the problem and the correctness of the solutions. They also provide a clear insight into the system and enhance our understanding of why a design decision should work. The objective of the research is to demonstrate a technique for the incremental construction of formal models of distributed systems and for reasoning about them, to develop a technique for the discovery of gluing invariants when the prover fails to discharge a proof obligation automatically, and to develop guidelines for the verification of distributed algorithms using the technique of abstraction and refinement.
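The thesis develops this in Event-B rather than in code, but the runtime idea it formalizes can be sketched roughly as follows (illustrative only, not the thesis model): commit requests reach every replica through a total-order broadcast and are handled by the same deterministic rule, so all replicas reach the same commit or abort decision without an additional voting phase.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

/**
 * Illustrative sketch of atomic commitment via ordered broadcast: every replica
 * receives the same sequence of commit requests and applies the same deterministic
 * rule, so all replicas take identical decisions for each transaction.
 */
class ReplicaStateMachine {

    private final Map<String, Long> lastWriterVersion = new HashMap<>(); // item -> version of last committed write
    private long nextVersion = 1;

    /** Invoked in total-order delivery order; the outcome is identical on every replica. */
    synchronized boolean deliver(long snapshotVersion, Set<String> writeSet) {
        boolean conflict = writeSet.stream()
                .anyMatch(item -> lastWriterVersion.getOrDefault(item, 0L) > snapshotVersion);
        if (conflict) {
            return false;                                        // every replica aborts this transaction
        }
        long v = nextVersion++;
        writeSet.forEach(item -> lastWriterVersion.put(item, v)); // every replica commits and applies it
        return true;
    }
}
```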

    Mobile Transaction Supports for DBMS

    In recent years, data management in mobile environments has generated great interest, and several proposals concerning mobile transactions have been made. However, it is very difficult to have an overview of all these approaches. In this paper we analyze and compare several contributions on mobile transactions and introduce our ongoing research: the design and implementation of a Mobile Transaction Service. The focus of our study is on execution models, the manner in which ACID properties are provided, and the way geographical movement of hosts (during transaction execution) is supported.

    Optimistic replication

    Data replication is a key technology in distributed data sharing systems, enabling higher availability and performance. This paper surveys optimistic replication algorithms that allow replica contents to diverge in the short term, in order to support concurrent work practices and to tolerate failures in low-quality communication links. The importance of such techniques is increasing as collaboration through wide-area and mobile networks becomes popular. Optimistic replication techniques are different from traditional "pessimistic" ones. Instead of synchronous replica coordination, an optimistic algorithm propagates changes in the background, discovers conflicts after they happen and reaches agreement on the final contents incrementally. We explore the solution space for optimistic replication algorithms. This paper identifies key challenges facing optimistic replication systems — ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence — and provides a comprehensive survey of techniques developed for addressing these challenges.
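One classic building block for the conflict-detection challenge mentioned above is version-vector comparison; the sketch below shows the generic technique (not a specific system from the survey), flagging two replicas as conflicting when each holds updates the other has not yet seen.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Illustrative version-vector comparison, a common way optimistic replication detects conflicts. */
class VersionVector {

    enum Order { EQUAL, BEFORE, AFTER, CONCURRENT }

    /**
     * Compares two version vectors (replicaId -> update counter).
     * CONCURRENT means neither replica has seen all of the other's updates,
     * i.e. the two copies diverged and a conflict must be resolved.
     */
    static Order compare(Map<String, Long> a, Map<String, Long> b) {
        Set<String> replicas = new HashSet<>(a.keySet());
        replicas.addAll(b.keySet());
        boolean aAhead = false, bAhead = false;
        for (String r : replicas) {
            long va = a.getOrDefault(r, 0L), vb = b.getOrDefault(r, 0L);
            if (va > vb) aAhead = true;
            if (vb > va) bAhead = true;
        }
        if (aAhead && bAhead) return Order.CONCURRENT;
        if (aAhead) return Order.AFTER;
        if (bAhead) return Order.BEFORE;
        return Order.EQUAL;
    }
}
```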

    A modular distributed transactional memory framework

    Dissertation for the Master's degree in Engenharia Informática. The traditional lock-based concurrency control is complex and error-prone due to its low-level nature and composability challenges. Software transactional memory (STM), inherited from the database world, has emerged as an exciting alternative, sparing the programmer from dealing explicitly with such low-level mechanisms. In real-world scenarios, software is often faced with requirements such as high availability and scalability, and the solution usually consists in building a distributed system. Given the benefits of STM over traditional concurrency controls, Distributed Software Transactional Memory (DSTM) is now being investigated as an attractive alternative for distributed concurrency control. Our long-term objective is to transparently enable multithreaded applications to execute over a DSTM setting. In this work we intend to pave the way by defining a modular DSTM framework for the Java programming language. We extend an existing, efficient STM framework with a new software layer to create a DSTM framework. This new layer interacts with the local STM using well-defined interfaces, and allows the implementation of different distributed memory models while providing a non-intrusive, familiar programming model to applications, unlike any other DSTM framework. Using the proposed DSTM framework we have successfully, and easily, implemented a replicated STM which uses a Certification protocol to commit transactions. An evaluation using common STM benchmarks showcases the efficiency of the replicated STM, and its modularity enables us to provide insight into the relevance of different implementations of the Group Communication System required by the Certification scheme, with respect to performance under different workloads.
Fundação para a Ciência e Tecnologia, project PTDC/EIA-EIA/113613/2009.
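The layering described above can be sketched roughly as follows (hypothetical interfaces, not the framework's actual API): the added distribution layer certifies a transaction group-wide through the group communication system and only then commits it through the unchanged local STM.

```java
import java.util.Map;
import java.util.Set;

/** Hypothetical interfaces sketching the layering described in the dissertation. */
interface LocalStm {
    boolean tryCommitLocally(Tx tx);            // validate and apply against the unchanged local STM
}

interface GroupCommunication {
    boolean broadcastForCertification(Tx tx);   // total-order broadcast; returns certification outcome
}

record Tx(Set<String> readSet, Map<String, Object> writeSet) {}

/** The added distribution layer: it only talks to the local STM through well-defined interfaces. */
class ReplicatedStm {
    private final LocalStm local;
    private final GroupCommunication gcs;

    ReplicatedStm(LocalStm local, GroupCommunication gcs) {
        this.local = local;
        this.gcs = gcs;
    }

    /** Commit = certify the transaction group-wide, then apply it through the local STM. */
    boolean commit(Tx tx) {
        return gcs.broadcastForCertification(tx) && local.tryCommitLocally(tx);
    }
}
```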

    Desarrollo de un sistema de replicación de bases de datos en entornos dinámicos: particionado y protocolos de replicación asociados [Development of a database replication system for dynamic environments: partitioning and associated replication protocols]

    We took the Master's thesis work of I. Arrieta-Salinas and M. Louis Rodríguez as a starting point for this project. We deploy a distributed database to be used in a cloud environment as a specific case of Platform-as-a-Service. We assume that data is partitioned and that several replicas store a copy of a given partition. Clients issue transactions by means of a standard library such as JDBC; to do so, they need information about the data placement, which is managed by a Metadata Manager. The Metadata Manager manages the partitioning and the replica placement, building a replication cluster for each partition. Each replication cluster has a few replicas running a replication protocol that provides strong consistency, while the rest receive the propagation of updates in a lazy manner; these replicas are logically arranged as onion layers around the core replicas running a given replication protocol. The previous implementation of this system had several drawbacks that we try to fix in this work. First of all, the clients and the Metadata Manager had to run on the same machine, which leads to a performance penalty in heavily loaded scenarios. The system was optimized for YCSB, whose transactions consist of a single operation, and it only ran two replication protocols, primary copy and active replication, which are known to perform badly in update-intensive scenarios. Moreover, there was no load balancing based on replica performance; requests were merely distributed in a round-robin fashion among the replicas at the core level. We discuss these limitations (described in more detail in Section 2.1) and then go into the system implementation, which is explained in the rest of this work. The main goals of this project concern the different parts of the system. Regarding the Client Module, the client was originally the OLTP-Benchmark, a module that sends specific types of transactions to the system through a JDBC connection. In the current version this module has been modified to allow transactions with more than one operation, and several parameters have been added to transactions so that the system can treat them differently. Regarding the Metadata Manager, one of the main goals of this project is to physically decouple the Client and Metadata Manager modules. The remaining modifications are the creation of a structure that allows the Metadata Manager to know the architecture of the Replicas Cluster, and the development of a new ReplicaChooser function based on CPU load that enables proper load balancing. Finally, new protocols have been implemented in the Replicas Cluster that allow different replication protocols to run simultaneously in different partitions, without the knowledge of the Client and the Metadata Manager.
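A minimal sketch of the load-based replica selection mentioned above (illustrative names; the actual ReplicaChooser belongs to the system's middleware): the next transaction is routed to the core replica currently reporting the lowest CPU load, instead of cycling round-robin.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/** Illustrative load-based replica selection, replacing a plain round-robin policy. */
class ReplicaChooser {

    record Replica(String address, double cpuLoad) {}   // cpuLoad: 0.0 (idle) .. 1.0 (saturated)

    /** Routes the next transaction to the core replica currently reporting the lowest CPU load. */
    static Optional<Replica> choose(List<Replica> coreReplicas) {
        return coreReplicas.stream()
                .min(Comparator.comparingDouble(Replica::cpuLoad));
    }
}
```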