35 research outputs found

    Database replication for enterprise applications

    The MAP-i Doctoral Programme in Informatics, of the Universities of Minho, Aveiro and Porto.
    A common pattern for enterprise applications, particularly in small and medium businesses, is the reliance on an integrated traditional relational database system that provides persistence and whose relational aspect underlies the core logic of the application. While several solutions have been proposed for scaling out such applications, database replication is key if the relational aspect is to be preserved. However, because proposed database replication solutions have been evaluated with simple synthetic benchmarks, their applicability to enterprise applications is not straightforward: the performance of conservative solutions hinges on the ability to conveniently partition applications, while optimistic solutions may experience unacceptable abort rates, compromising fairness, particularly for long-running transactions. In this thesis, we address these challenges. First, we perform a detailed evaluation of the applicability of database replication protocols based on conservative concurrency control to enterprise applications; the results invalidate the common assumption that real-world databases can be easily partitioned. Then, we tackle the issue of unacceptable abort rates in optimistic solutions by proposing a novel transaction scheduler, AJITTS, which uses an adaptive mechanism that, by reaching and maintaining the optimal level of concurrency in the system, minimizes aborts and improves throughput.
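
    The abstract does not spell out how AJITTS adapts its level of concurrency, so the fragment below is only a minimal hill-climbing sketch, in C, of the general idea: measure committed throughput once per epoch and nudge the admission limit up or down depending on whether the last adjustment helped. The names (adaptive_limit, observe_epoch) and the sampled numbers are hypothetical.

/* Illustrative only: a hill-climbing controller that adjusts the number of
 * concurrently admitted transactions based on observed throughput, in the
 * spirit of an adaptive scheduler such as AJITTS (names are hypothetical). */
#include <stdio.h>

typedef struct {
    int    limit;            /* current cap on concurrent transactions */
    int    step;             /* +1 or -1: direction of the last adjustment */
    double last_throughput;  /* committed transactions/s in the previous epoch */
} adaptive_limit;

/* Called once per measurement epoch with the committed-transaction rate.
 * If the last adjustment improved throughput, keep moving in the same
 * direction; otherwise reverse, so the limit settles around the optimum. */
static void observe_epoch(adaptive_limit *a, double throughput) {
    if (throughput < a->last_throughput)
        a->step = -a->step;              /* last move hurt: reverse direction */
    a->limit += a->step;
    if (a->limit < 1) a->limit = 1;      /* always admit at least one transaction */
    a->last_throughput = throughput;
}

int main(void) {
    adaptive_limit a = { .limit = 8, .step = 1, .last_throughput = 0.0 };
    double sampled[] = { 900, 1100, 1250, 1200, 1230 };   /* made-up samples */
    for (int i = 0; i < 5; i++) {
        observe_epoch(&a, sampled[i]);
        printf("epoch %d: limit=%d\n", i, a.limit);
    }
    return 0;
}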

    Partial replication with strong consistency

    In response to the increasing expectations of their clients, cloud services exploit geo-replication to provide fault tolerance, availability, and low latency when executing requests. However, cloud platforms tend to adopt weak consistency semantics, in which replicas may diverge in state independently. These systems offer good response times, but at the cost of potential data inconsistencies that may affect user experience. Some systems instead adopt strong consistency, which is not as efficient but simplifies the development of correct applications by guaranteeing that all replicas in the system maintain the same database state. It is therefore interesting to explore a system that offers strong consistency while minimizing its main disadvantage: the performance impact of coordinating every replica in the system. A possible way to reduce the cost of replica coordination is to support partial replication. Partially replicating a database makes each server responsible for only a subset of the data - a partition - which means that when updating the database only some of the replicas have to be synchronized, improving response times. In this dissertation, we propose an algorithm that implements a distributed replicated database offering strong consistency with support for partial replication. To achieve strong consistency in a partially replicated scenario, our algorithm is partly based on Clock-SI[10], which implements a multi-versioned database with strong consistency (snapshot isolation) and uses the Two-Phase Commit protocol to coordinate replicas during updates. The algorithm is supported by an architecture that simplifies distributing partitions among datacenters and efficiently propagating operations across nodes in the same partition, thanks to the ChainPaxos[27] algorithm.
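
    As a rough illustration of the coordination pattern described above, the following C sketch shows a coordinator running Two-Phase Commit over only the partitions a transaction actually touched, taking the commit timestamp as the maximum of the participants' proposals, which is the rule Clock-SI uses. The partition struct and the in-memory prepare/commit functions are hypothetical stand-ins for real replicas and RPCs.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for a remote partition replica: in a real system prepare/commit
 * would be RPCs; here they only touch local state so the sketch runs. */
typedef struct {
    int      id;
    uint64_t clock;        /* this partition's loosely synchronized clock */
    uint64_t last_commit;  /* last commit timestamp applied */
} partition;

static bool prepare(partition *p, int txn, uint64_t *proposed_ts) {
    (void)txn;
    *proposed_ts = ++p->clock;   /* propose a timestamp from the local clock */
    return true;                 /* assume no conflicts for the sketch */
}

static void commit(partition *p, int txn, uint64_t commit_ts) {
    (void)txn;
    p->last_commit = commit_ts;  /* apply the transaction's writes at commit_ts */
}

/* Coordinator side of 2PC restricted to the partitions written by `txn`:
 * only these replicas are synchronized, and the commit timestamp is the
 * maximum of their proposals. */
static bool commit_partial(partition **touched, int n, int txn) {
    uint64_t commit_ts = 0, ts;
    for (int i = 0; i < n; i++) {                 /* phase 1: prepare */
        if (!prepare(touched[i], txn, &ts)) return false;
        if (ts > commit_ts) commit_ts = ts;
    }
    for (int i = 0; i < n; i++)                   /* phase 2: commit */
        commit(touched[i], txn, commit_ts);
    return true;
}

int main(void) {
    partition a = { 1, 100, 0 }, b = { 2, 97, 0 };
    partition *touched[] = { &a, &b };            /* only 2 of the N partitions */
    commit_partial(touched, 2, 42);
    printf("commit timestamp applied: %llu / %llu\n",
           (unsigned long long)a.last_commit, (unsigned long long)b.last_commit);
    return 0;
}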

    High performance data processing

    Master's dissertation in Informatics Engineering.
    As applications reach a wider audience than ever before, they must process larger and larger amounts of requests. In addition, they often must serve users all over the globe, where network latencies have a significant negative impact on monolithic deployments. Distribution is therefore a well sought-after solution to improve the performance of both the application and database layers. However, distributing data is not an easy task if we want to ensure strong consistency guarantees. This leads many database systems to rely on expensive synchronization protocols such as two-phase commit, distributed consensus, and distributed locking, while other systems rely on weak consistency, which is unfeasible for some use cases. This thesis presents the design, implementation, and evaluation of two solutions aimed at reducing the impact of ensuring strong consistency guarantees in database systems, especially geo-distributed ones. The first is Primary Semi-Primary, a fully replicated distributed database architecture that allows replicas to evolve independently, so that clients do not have to wait for preceding non-conflicting updates to be propagated. Although replicas can process both reads and writes, improving scalability, the system still ensures strong consistency by relaying transaction certification to a central node. Its design is independent of the underlying data model, but its implementation can take advantage of the native concurrency control offered by some systems, as exemplified by an implementation using PostgreSQL and its Snapshot Isolation. The results show several advantages in both throughput and response time, compared to alternative architectures, in both local and geo-distributed environments. The second solution is Multi-Record Values, a technique that dynamically partitions numeric values into multiple records, allowing concurrent writes to execute with low conflict probability, reducing abort rates and/or locking contention. Lower-limit guarantees, required by objects such as balances or stocks, are ensured by this strategy, unlike many other similar alternatives. Its design is also data-model agnostic, and its advantages can be found in both SQL and NoSQL systems, as well as in centralized and distributed databases, as presented in the evaluation section.
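
    The dissertation's exact Multi-Record Values scheme is not given in the abstract, so the C sketch below only illustrates the core idea: a numeric value is split across several records so that concurrent decrements usually touch different records, and refusing to drive any single record negative preserves the global lower bound. The slot count, names, and slot-selection rule are assumptions.

/* Minimal sketch of the Multi-Record Values idea: a numeric value (e.g. a
 * stock level) is split across several records so concurrent decrements hit
 * different records and rarely conflict, while refusing to drive any single
 * record negative preserves the global lower bound. A real implementation
 * would store the slots as separate database rows; this is just the arithmetic. */
#include <stdio.h>
#include <stdbool.h>

#define SLOTS 4

typedef struct { long slot[SLOTS]; } multi_record_value;

/* Spread the initial value evenly over the records. */
static void mrv_init(multi_record_value *v, long total) {
    for (int i = 0; i < SLOTS; i++)
        v->slot[i] = total / SLOTS + (i < total % SLOTS ? 1 : 0);
}

/* Try to subtract `amount`, starting from a caller-chosen slot (e.g. a hash
 * of the client id) so concurrent writers favour different records. Fails
 * without changing anything if no single record can cover the amount, which
 * keeps every record, and hence the sum, at or above zero. */
static bool mrv_sub(multi_record_value *v, int start, long amount) {
    for (int i = 0; i < SLOTS; i++) {
        int s = (start + i) % SLOTS;
        if (v->slot[s] >= amount) { v->slot[s] -= amount; return true; }
    }
    return false;
}

static long mrv_read(const multi_record_value *v) {
    long sum = 0;
    for (int i = 0; i < SLOTS; i++) sum += v->slot[i];
    return sum;
}

int main(void) {
    multi_record_value stock;
    mrv_init(&stock, 10);
    printf("sub 3 -> %s, total now %ld\n",
           mrv_sub(&stock, 2, 3) ? "ok" : "refused", mrv_read(&stock));
    return 0;
}

    In a real deployment each slot would live under the database's own concurrency control and would occasionally be rebalanced, so a decrement is not refused merely because the remaining value happens to be spread across records.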

    Archiving the Relaxed Consistency Web

    The historical, cultural, and intellectual importance of archiving the web has been widely recognized. Today, all countries with high Internet penetration rates have established high-profile archiving initiatives to crawl and archive the fast-disappearing web content for long-term use. As web technologies evolve, established web archiving techniques face challenges. This paper focuses on the potential impact of relaxed consistency web design on crawler-driven web archiving. Websites built on relaxed consistency may disseminate, albeit ephemerally, inaccurate and even contradictory information. If captured and preserved in web archives as historical records, such information will degrade the overall archival quality. To assess the extent of such quality degradation, we build a simplified feed-following application and simulate its operation with synthetic workloads. The results indicate that a non-trivial portion of a relaxed consistency web archive may contain observable inconsistency, and the inconsistency window may extend significantly longer than that observed at the data store. We discuss the nature of such quality degradation and propose a few possible remedies.
    Comment: 10 pages, 6 figures, CIKM 201

    Architectural Principles for Database Systems on Storage-Class Memory

    Database systems have long been optimized to hide the higher latency of storage media, yielding complex persistence mechanisms. With the advent of large DRAM capacities, it became possible to keep a full copy of the data in DRAM. Systems that leverage this possibility, such as main-memory databases, keep two copies of the data in two different formats: one in main memory and the other one in storage. The two copies are kept synchronized using snapshotting and logging. This main-memory-centric architecture yields nearly two orders of magnitude faster analytical processing than traditional, disk-centric ones. The rise of Big Data emphasized the importance of such systems with an ever-increasing need for more main memory. However, DRAM is hitting its scalability limits: It is intrinsically hard to further increase its density. Storage-Class Memory (SCM) is a group of novel memory technologies that promise to alleviate DRAM’s scalability limits. They combine the non-volatility, density, and economic characteristics of storage media with the byte-addressability and a latency close to that of DRAM. Therefore, SCM can serve as persistent main memory, thereby bridging the gap between main memory and storage. In this dissertation, we explore the impact of SCM as persistent main memory on database systems. Assuming a hybrid SCM-DRAM hardware architecture, we propose a novel software architecture for database systems that places primary data in SCM and directly operates on it, eliminating the need for explicit IO. This architecture yields many benefits: First, it obviates the need to reload data from storage to main memory during recovery, as data is discovered and accessed directly in SCM. Second, it allows replacing the traditional logging infrastructure by fine-grained, cheap micro-logging at data-structure level. Third, secondary data can be stored in DRAM and reconstructed during recovery. Fourth, system runtime information can be stored in SCM to improve recovery time. Finally, the system may retain and continue in-flight transactions in case of system failures. However, SCM is no panacea as it raises unprecedented programming challenges. Given its byte-addressability and low latency, processors can access, read, modify, and persist data in SCM using load/store instructions at a CPU cache line granularity. The path from CPU registers to SCM is long and mostly volatile, including store buffers and CPU caches, leaving the programmer with little control over when data is persisted. Therefore, there is a need to enforce the order and durability of SCM writes using persistence primitives, such as cache line flushing instructions. This in turn creates new failure scenarios, such as missing or misplaced persistence primitives. We devise several building blocks to overcome these challenges. First, we identify the programming challenges of SCM and present a sound programming model that solves them. Then, we tackle memory management, as the first required building block to build a database system, by designing a highly scalable SCM allocator, named PAllocator, that fulfills the versatile needs of database systems. Thereafter, we propose the FPTree, a highly scalable hybrid SCM-DRAM persistent B+-Tree that bridges the gap between the performance of transient and persistent B+-Trees. Using these building blocks, we realize our envisioned database architecture in SOFORT, a hybrid SCM-DRAM columnar transactional engine. 
We propose an SCM-optimized MVCC scheme that eliminates write-ahead logging from the critical path of transactions. Since SCM-resident data is near-instantly available upon recovery, the new recovery bottleneck is rebuilding DRAM-based data. To alleviate this bottleneck, we propose a novel recovery technique that achieves nearly instant responsiveness of the database by accepting queries right after recovering SCM-based data, while rebuilding DRAM-based data in the background. Additionally, SCM brings new failure scenarios that existing testing tools cannot detect. Hence, we propose an online testing framework that is able to automatically simulate power failures and detect missing or misplaced persistence primitives. Finally, our proposed building blocks can serve to build more complex systems, paving the way for future database systems on SCM.
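
    For readers unfamiliar with the persistence primitives mentioned above, the C fragment below shows the generic flush-and-fence pattern on x86: after storing to SCM, the affected cache lines are explicitly flushed and a store fence is issued before the data may be considered durable. This is a sketch of the pattern only, not SOFORT's code; the persist/durable_update helpers are hypothetical, and clflushopt or clwb would replace clflush on newer hardware.

/* Illustrative x86 sketch of the persistence primitives the text refers to.
 * Not SOFORT's code: it only shows the flush + fence pattern, using the
 * baseline _mm_clflush/_mm_sfence intrinsics. */
#include <emmintrin.h>   /* _mm_clflush, _mm_sfence */
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64

/* Flush every cache line covering [addr, addr+len) and fence afterwards. */
static void persist(const void *addr, size_t len) {
    uintptr_t p   = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
    uintptr_t end = (uintptr_t)addr + len;
    for (; p < end; p += CACHE_LINE)
        _mm_clflush((const void *)p);   /* evict the line towards SCM */
    _mm_sfence();                       /* order the flushes before later stores */
}

/* Example of a tiny micro-log style update made durable explicitly.
 * Forgetting or misplacing the persist() call is exactly the class of bug
 * the online testing framework described above is meant to detect. */
struct record { uint64_t key; uint64_t value; };

static void durable_update(struct record *r /* assumed to live in SCM */,
                           uint64_t key, uint64_t value) {
    r->key   = key;
    r->value = value;
    persist(r, sizeof *r);
}

int main(void) {
    static struct record r;   /* in a real system this would be SCM-mapped memory */
    durable_update(&r, 1, 42);
    return 0;
}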

    Conceptualização e desenvolvimento de uma framework de clustering

    With the proliferation of all kinds of services based on digital platforms, such as e-commerce, home banking, or even social networks, the concept of distributed systems has gained new breadth, and with it came new needs to achieve high levels of availability for specific software systems. This scenario forces current technological infrastructures to include several replicas of those systems, in order to keep the service available even in the event of a failure in one or more systems. Most current systems include two distinct layers: the application layer, where the business logic runs, and the persistence layer, where data is stored in a non-volatile way. Although it is usually simple to replicate the application layer of those systems, replicating the persistence layer most often proves to be a much more complex challenge. This master's thesis presents a concrete problem of the need to apply data replication to a distributed system currently in a production environment, in order to ensure its availability. From the study of the main concepts of data replication, as well as of some middleware-based replication frameworks, and taking into account the problem at hand, it was possible to conceptualize and develop a new middleware-level clustering framework that can be applied to systems to which clustering capabilities are to be added, regardless of the type of persistence they interact with.

    Transaction Management in Multi-Core Main-Memory Database Systems

    Ph.D., Doctor of Philosophy

    Consistency Models in Distributed Systems with Physical Clocks

    Most existing distributed systems use logical clocks to order events in the implementation of various consistency models. Although logical clocks are straightforward to implement and maintain, they may affect the scalability, availability, and latency of the system when used to totally order events in strong consistency models. They can also incur considerable overhead when used to track and check the causal relationships among events in some weak consistency models. In this thesis we explore how to efficiently implement different consistency models using loosely synchronized physical clocks. Compared with logical clocks, physical clocks move forward at approximately the same speed and can be loosely synchronized with well-known standard protocols. Hence a group of physical clocks located at different servers can be used to order events in a distributed system at very low cost. We first describe Clock-SI, a fully distributed implementation of snapshot isolation for partitioned data stores. It uses the local physical clock at each partition to assign snapshot and commit timestamps to transactions. By avoiding a centralized service for timestamp management, Clock-SI improves the throughput, latency, and availability of the system. We then introduce Clock-RSM, a low-latency state machine replication protocol that provides linearizability. It totally orders state machine commands by assigning them physical timestamps obtained from the local replica. By eliminating the message step for command ordering in existing solutions, Clock-RSM reduces the latency of consistent geo-replication across multiple data centers. Finally, we present Orbe, which provides an efficient and scalable implementation of causal consistency for both partitioned and replicated data stores. Orbe builds an explicit total order, consistent with causality, among all operations using physical timestamps. It reduces the number of dependencies that have to be carried in update replication messages and checked on installation of replicated updates. As a result, Orbe improves the throughput of the system.
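
    A minimal C sketch of the Clock-SI timestamp rule summarized above, assuming the loosely synchronized physical clock is read with clock_gettime: the snapshot timestamp is taken from the local clock of the partition where the transaction starts, and a remote partition that receives a request stamped in its own future simply waits for its clock to catch up before serving the read. Function names and the microsecond granularity are illustrative choices, not the thesis's code.

#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <time.h>

/* Current local physical clock in microseconds. */
static uint64_t now_us(void) {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)(ts.tv_nsec / 1000);
}

/* At the originating partition: the snapshot timestamp is just the local clock. */
static uint64_t begin_transaction(void) {
    return now_us();
}

/* At a remote partition: before serving a read at `snapshot_ts`, wait until
 * the local clock has reached it; clocks are only loosely synchronized, so
 * the timestamp may lie slightly in this replica's future. */
static void wait_for_snapshot(uint64_t snapshot_ts) {
    while (now_us() < snapshot_ts) {
        struct timespec pause = { 0, 100000 };   /* back off for 0.1 ms */
        nanosleep(&pause, NULL);
    }
}

int main(void) {
    uint64_t snap = begin_transaction();
    wait_for_snapshot(snap);   /* returns immediately on the same machine */
    return 0;
}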