
    Consistency in a Partitioned Network: A Survey

    Recently, several strategies for transaction processing in partitioned distributed database systems with replicated data have been proposed. We survey these strategies in light of the competing goals of maintaining correctness and achieving high availability. Extensions and combinations are then discussed, and guidelines for the selection of a strategy for a particular application are presented.

    Efficient middleware for database replication

    Master's dissertation in Informatics Engineering (Dissertação de Mestrado em Engenharia Informática). Database systems are used to store data for the most varied applications, such as Web applications, enterprise applications, scientific research, or even personal applications. Given how widely databases are used in systems that users depend on, database systems must be efficient and reliable. Additionally, for these systems to serve a large number of users, databases must be scalable, able to process large numbers of transactions. To achieve this, it is necessary to resort to data replication. In a replicated system, all nodes contain a copy of the database; to guarantee that replicas converge, write operations must be executed on all replicas. The way updates are propagated leads to two different replication strategies. The first is known as asynchronous or optimistic replication: updates are propagated asynchronously after an update transaction concludes. The second is known as synchronous or pessimistic replication: updates are broadcast synchronously during the transaction. In pessimistic replication, contrary to optimistic replication, the replicas remain consistent. This simplifies application programming, since data replication is transparent to the applications. However, it presents scalability issues, caused by the number of messages exchanged during synchronization, which delays transaction termination, so users experience much higher latency under the pessimistic approach. This work presents the design and implementation of a database replication system with snapshot isolation semantics, using a synchronous replication approach. The system is composed of a primary replica and a set of secondary replicas that fully replicate the database. The primary replica executes the read-write transactions, while the remaining replicas execute the read-only transactions. After a read-write transaction concludes on the primary replica, its updates are propagated to the remaining replicas. This approach suits a model where the fraction of read operations is considerably higher than that of write operations, allowing the read load to be distributed over the multiple replicas. To improve the performance of the system, clients execute some operations speculatively, to avoid waiting during the execution of a database operation: the client may continue its execution while the operation is executed on the database. If the result returned to the client is found to be incorrect, the transaction is aborted, ensuring the correctness of transaction execution.
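
    The primary/secondary split and the post-commit propagation described above can be illustrated with a small routing sketch. This is a hedged, purely illustrative Python sketch under our own naming (Replica, Primary, and route are not from the dissertation): read-write transactions go to the primary, read-only transactions are spread over the secondaries, and the primary pushes each committed write set to the secondaries.

        import itertools

        class Replica:
            def __init__(self, name):
                self.name = name
                self.store = {}                    # full copy of the database

            def apply(self, writeset):
                # Secondaries only apply write sets produced by the primary.
                self.store.update(writeset)

        class Primary(Replica):
            def execute_rw(self, txn, secondaries):
                # Execute the read-write transaction locally, then propagate
                # its write set to every secondary after it commits.
                writeset = txn(self.store)
                self.apply(writeset)
                for s in secondaries:
                    s.apply(writeset)

        primary = Primary("primary")
        secondaries = [Replica("s1"), Replica("s2")]
        readers = itertools.cycle(secondaries)     # round-robin for read-only load

        def route(txn, read_only):
            if read_only:
                return txn(next(readers).store)    # reads served by secondaries
            return primary.execute_rw(txn, secondaries)

        route(lambda db: {"x": 1}, read_only=False)           # read-write txn
        print(route(lambda db: db.get("x"), read_only=True))  # -> 1

    Speculative execution, as described in the abstract, would let the client continue before route returns and abort the transaction if the speculated result turns out to be wrong.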

    High performance deferred update replication

    Replication is a well-known approach to implementing storage systems that can tolerate failures. Replicated storage systems are designed such that the state of the system is kept at several replicas. A replication protocol ensures that the failure of a replica is masked by the rest of the system, in a way that is transparent to its users. Replicated storage systems are among the most important building blocks in the design of large scale applications. Applications at scale are often deployed on top of commodity hardware, store a vast amount of data, and serve a large number of users. The larger the system, the higher its vulnerability to failures. The ability to tolerate failures is not the only desirable feature in a replicated system. Storage systems need to be efficient in order to accommodate requests from a large user base while achieving low response times. In that respect, replication can leverage multiple replicas to parallelize the execution of user requests. This thesis focuses on Deferred Update Replication (DUR), a well-established database replication approach. It provides high availability in that every replica can execute client transactions. In terms of performance, it is better than other replication techniques in that only one replica executes a given transaction while the other replicas only apply state changes. However, DUR suffers from the following drawback: each replica stores a full copy of the database, which has consequences in terms of performance. The first consequence is that DUR cannot take advantage of the aggregated memory available to the replicas. Our first contribution is a distributed caching mechanism that addresses the problem. It makes efficient use of the main memory of an entire cluster of machines, while guaranteeing strong consistency. The second consequence is that DUR cannot scale with the number of replicas. The throughput of a fully replicated system is inherently limited by the number of transactions that a single replica can apply to its local storage. We propose a scalable version of the DUR approach where the system state is partitioned into smaller replica sets. Transactions that access disjoint partitions are parallelized. The last part of the thesis focuses on latency. We show that the scalable DUR-based approach may have detrimental effects on response time, especially when replicas are geographically distributed. The thesis considers different deployments and their implications for latency. We propose optimizations that provide substantial gains in geographically distributed environments.
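
    A rough sketch of the deferred-update pattern the thesis builds on may help: a transaction executes at a single replica, and its read and write sets are then certified at every replica, in the same total order, against concurrently committed transactions. The Python below is our own schematic rendering of that standard scheme, not code from the thesis.

        class DURReplica:
            """One fully replicated node in a deferred update replication scheme."""

            def __init__(self):
                self.store = {}        # committed state: key -> value
                self.committed = []    # (commit_order, keys written) history
                self.clock = 0         # number of transactions applied so far

            def execute(self, txn):
                # Phase 1: run the transaction locally on a snapshot,
                # collecting its read set and write set.
                snapshot = self.clock
                readset, writeset = txn(dict(self.store))
                return snapshot, readset, writeset

            def certify_and_apply(self, snapshot, readset, writeset):
                # Phase 2, run at every replica in the same total order
                # (e.g. after atomic broadcast): abort if a concurrently
                # committed transaction wrote something this one read.
                for order, written in self.committed:
                    if order > snapshot and written & readset:
                        return False                     # conflict -> abort
                self.clock += 1
                self.committed.append((self.clock, set(writeset)))
                self.store.update(writeset)
                return True

    Only the executing replica runs the transaction logic; the others merely certify and apply the write set, which is where the throughput advantage over full re-execution comes from, and also why the partitioned variant in the thesis can parallelize transactions that touch disjoint partitions.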

    Implementation of data synchronization over a challenging network (Toteutus datasynkronisaatiosta haasteellisen verkon ylitse)

    This thesis is related to the trend of the industrial Internet of things. There are many products and services for which a manufacturer needs to harvest usage data; the gathered data can be used, e.g., in product development. In this thesis the product is mining equipment and its maintenance. Sending the data straight from the mining equipment to the manufacturer is problematic, since mines often lack an Internet connection. Some mines have local area networks, but in other cases those are not available, and the only way to move the data may be to carry it on USB flash drives or similar media. The arrangement in which data is moved on a flash drive from the mining equipment to a location with an Internet connection, such as an office building near the mining area, is called the aided mine network. The core problem of the thesis is gathering, moving, and synchronizing the usage data over the aided mine network. The thesis develops a plan to implement this data gathering. The solution is called DATAMiNe, i.e., Data Aggregation Through Aided Mine Network. It consists of three parts: a Manager, an Edge Relay, and a Data Aggregator. DATAMiNe's architecture is designed so that the aided mine network can easily be replaced, for example by a local area network or by an Internet connection integrated into the mining equipment. The communication protocol between the Manager and the Edge Relay is designed to support the special needs of the aided mine network. The development of DATAMiNe starts with an initial plan, based on the mining equipment manufacturer's vision and on use cases for unified data gathering into a single Data Aggregator. DATAMiNe is developed using ordinary software design methods, by programming proof-of-concept test software, and finally by verifying the protocol with the Spin tool, which can formally check the interaction between communicating state automata. All development steps contribute towards the next implementation phase, the production implementation. The verification model forces attention to details that would otherwise be ignored in the design phase, and the test program implementation helps to choose cost-effective approaches in the design.
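
    The aided mine network is essentially store-and-forward synchronization over an unreliable, high-latency carrier (a USB drive). Below is a minimal, purely illustrative Python sketch of such a transfer, with sequence numbers and acknowledgements so that records survive lost or duplicated trips; the class and method names are ours and do not come from DATAMiNe.

        class EdgeRelay:
            """Buffers usage records at the mine until a carrier (e.g. a USB
            drive) can take them towards the Data Aggregator."""

            def __init__(self):
                self.next_seq = 0
                self.outbox = {}               # seq -> record, until acknowledged

            def record(self, data):
                self.outbox[self.next_seq] = data
                self.next_seq += 1

            def export(self):
                # Copy everything still unacknowledged onto the carrier;
                # duplicates are harmless, the aggregator de-duplicates by seq.
                return dict(self.outbox)

            def acknowledge(self, acked_seqs):
                for seq in acked_seqs:
                    self.outbox.pop(seq, None)

        class DataAggregator:
            def __init__(self):
                self.received = {}             # seq -> record

            def ingest(self, batch):
                self.received.update(batch)    # idempotent: re-imports are no-ops
                return set(self.received)      # acknowledgement carried back

        relay, aggregator = EdgeRelay(), DataAggregator()
        relay.record({"machine": "drill-7", "hours": 12})
        acks = aggregator.ingest(relay.export())   # batch travels out on the drive
        relay.acknowledge(acks)                    # acks travel back on the next trip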

    Performance characteristics of semantics-based concurrency control protocols.

    by Keith Hang-kwong Mak. Thesis (M.Phil.), Chinese University of Hong Kong, 1995. Includes bibliographical references (leaves 122-127). Contents: 1. Introduction. 2. Background: read/write model; abstract data type model; overview of semantics-based concurrency control protocols; concurrency hierarchy; control flow of the strict two-phase locking protocol (flow of an operation; response time of a transaction; factors affecting the response time of a transaction). 3. Semantics-based concurrency control protocols: strict two-phase locking; conflict relations (commutativity; forward and right backward commutativity); exploiting context-specific information; relaxing the correctness criterion by allowing bounded inconsistency. 4. Related work: exploiting transaction semantics; exploiting object semantics; sacrificing consistency; other approaches. 5. Performance study (testbed approach): system model (main memory database; system configuration; execution of operations; recovery); parameter settings and performance metrics. 6. Performance results and analysis (testbed approach): read/write model vs. abstract data type model; using context-specific information; role of conflict ratio; relaxing the correctness criterion (overhead and performance gain; range queries using bounded inconsistency). 7. Performance study (simulation approach): simulation model (logical and physical queueing models); parameter settings and performance metrics. 8. Performance results and analysis (simulation approach): relaxing the correctness criterion of serial executions (impact of resource contention, infinite resources, limited resources, multiple resources, transaction type, and concurrency control overhead); exploiting context-specific information (impact of limited resources, infinite and multiple resources, transaction length, buffer size, and concurrency control overhead); summary and discussion. 9. Conclusions. Appendices: commutativity tables for queue objects; specification of a queue object; commutativity tables with bounded inconsistency for queue objects; implementation issues (data structures, conflict checking, deadlock detection); additional simulation results.
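
    The protocols studied in this thesis decide conflicts from the commutativity of abstract data type operations rather than from raw reads and writes. A hedged illustration of that idea for a simple counter object is sketched below in Python; the table is ours and is not copied from the thesis's queue-object commutativity tables.

        # Two operations conflict only if they do not commute. For a counter,
        # increments commute with each other, but a read does not commute with
        # an increment, because the read's result depends on the order.
        COMMUTES = {
            ("inc", "inc"): True,
            ("inc", "read"): False,
            ("read", "inc"): False,
            ("read", "read"): True,
        }

        def conflicts(op_a, op_b):
            return not COMMUTES[(op_a, op_b)]

        # A lock manager built on this relation admits more concurrency than
        # read/write locking: concurrent increments by different transactions
        # are allowed, whereas a write lock would serialize them.
        assert not conflicts("inc", "inc")
        assert conflicts("read", "inc")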

    Bounded version vectors

    Version vectors play a central role in update tracking under optimistic distributed systems, allowing the detection of obsolete or inconsistent versions of replicated data. Version vectors do not have a bounded representation; they are based on integer counters that grow indefinitely as updates occur. Existing approaches to this problem are scarce; the mechanisms proposed are either unbounded or operate only under specific settings. This paper examines version vectors as a mechanism for data causality tracking and clarifies their role with respect to vector clocks. Then, it introduces bounded stamps and proves them to be a correct alternative to integer counters in version vectors. The resulting mechanism, bounded version vectors, represents the first bounded solution to data causality tracking between replicas subject to local updates and pairwise symmetrical synchronization. Funding: FCT project POSI/ICHS/44304/2002; FCT grant BSAB/390/2003.
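
    For context, here is a compact sketch of the classical, unbounded version-vector mechanism that the paper's bounded stamps replace. This is the standard textbook formulation in Python, not the paper's bounded construction.

        def bump(vv, replica):
            """Record a local update at `replica`: one integer counter per replica."""
            vv = dict(vv)
            vv[replica] = vv.get(replica, 0) + 1
            return vv

        def dominates(a, b):
            """a dominates b if a has seen every update that b has seen."""
            return all(a.get(r, 0) >= n for r, n in b.items())

        def compare(a, b):
            if dominates(a, b) and dominates(b, a): return "equal"
            if dominates(a, b):                     return "a is newer"
            if dominates(b, a):                     return "a is obsolete"
            return "concurrent (inconsistent versions)"

        x = bump({}, "r1")                  # update at replica r1
        y = bump(x, "r2")                   # r2 synchronizes with r1, then updates
        print(compare(y, x))                # -> "a is newer"
        print(compare(bump(x, "r1"), y))    # -> "concurrent (inconsistent versions)"

    The counters in this representation grow without bound as updates occur, which is exactly the problem the paper's bounded stamps are introduced to remove.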

    Botnets and how to automatic detect them: exploring new ways of dealing with botnet classification

    Threats such as botnets have become very common on today's Internet; they enable attacks such as distributed denial of service (DoS), which can have a significant impact on the use of technology. One way to mitigate such issues is to focus on intelligent models that attempt to identify the presence of botnets in network traffic early. This work evaluates the current state of the art on botnet-related threats and on how intelligent technology has been used under real-world constraints such as real-time deadlines and growing network traffic. Our findings indicate that real-time botnet detection remains a significant challenge, because computational power has not grown at the same rate as Internet traffic. Other constraints must also be considered, such as privacy legislation and the use of cryptography for all communications. In this context, we discuss next steps for dealing with the identified issues.

    Invariant preservation in geo-replicated data stores

    The Internet has enabled people from all around the globe to communicate with each other in a matter of milliseconds. This possibility has a great impact on the way we work, behave, and communicate, and its full extent is yet to be known. As we become more dependent on Internet services, it becomes more important to ensure that these systems operate correctly, with low latency and high availability, for millions of clients scattered all around the globe. To serve a large number of clients with low access latency in different geographical locations, Internet services typically rely on geo-replicated storage systems. Replication comes with costs that may affect service quality. To propagate updates between replicas, systems either give up consistency in favor of better availability and latency (weak consistency), or maintain consistency at the cost of becoming unavailable during partitions (strong consistency). In practice, many production systems rely on weakly consistent storage systems to enhance user experience, overlooking the fact that applications can become incorrect under weaker consistency assumptions. In this thesis, we study how to exploit application semantics to build correct applications without affecting the availability and latency of operations. We propose a new consistency model that breaks with the traditional assumption that application consistency depends on coordinating the execution of operations across replicas. We show that it is possible to execute most operations with low latency and in a highly available way, while preserving application correctness. Our approach consists in specifying the fundamental properties that define the correctness of an application, i.e., its invariants, and in identifying and preventing concurrent executions that could make the state of the database inconsistent, i.e., that may violate some invariant. We explore two different, complementary approaches to implement this model. The Indigo approach prevents conflicting operations from executing concurrently, by restricting the operations that each replica can execute at each moment so as to maintain application correctness. The IPA approach does not preclude the execution of any operation, ensuring high availability: to maintain application correctness, operations are modified to prevent invariant violations during replica reconciliation, or, if modifying operations yields unsatisfactory semantics, invariant violations are corrected by executing compensations before a client can read an inconsistent state. Evaluation shows that our approaches can ensure both low latency and high availability for most operations in common Internet application workloads, with small execution overhead compared to unmodified weak consistency systems, while enforcing application invariants as in strong consistency systems.
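
    To make the invariant-preservation idea concrete, consider a non-negative stock counter (stock >= 0) maintained under weak consistency. One way to keep concurrent decrements from violating the invariant, in the spirit of the reservation-style technique described above, is to split the right to decrement among replicas up front so that each replica can operate locally while it still holds rights. The Python below is our illustrative sketch and is not Indigo's or IPA's actual code.

        class StockReplica:
            """Each replica holds a share of the rights to decrement the counter.
            As long as it only spends rights it owns, stock >= 0 holds globally
            even though replicas do not coordinate on individual operations."""

            def __init__(self, rights):
                self.rights = rights       # decrements this replica may do locally

            def buy(self, n):
                if n <= self.rights:       # local check, no coordination needed
                    self.rights -= n
                    return True
                # Not enough local rights: either coordinate to borrow rights
                # from another replica (slow path) or reject the operation.
                return False

        stock = 10
        replicas = [StockReplica(5), StockReplica(5)]   # rights sum to the stock
        assert replicas[0].buy(3) and replicas[1].buy(5)
        assert not replicas[1].buy(1)   # would overspend its share: slow path

    Rebalancing rights between replicas is the only step that requires coordination, and it can be kept off the critical path of most operations.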

    Compositional competitiveness for distributed algorithms

    We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al., which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an l-competitive member of that class, gives a combined algorithm that is kl-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction. (Comment: 33 pages, 2 figures; full version of the STOC '96 paper "Modular competitiveness for distributed algorithms.")
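
    The multiplicative composition can be read, informally and modulo the paper's precise definitions of work and throughput, as a two-step loss (this LaTeX paraphrase is ours, not the paper's statement):

        \mathrm{thr}(A \circ S) \;\ge\; \tfrac{1}{l}\,\mathrm{thr}(A \circ S^{\ast})
                                \;\ge\; \tfrac{1}{kl}\,\mathrm{thr}(\mathrm{OPT})

    where S^* is an optimal member of the subroutine class: substituting an l-competitive S for S^* costs at most a factor of l, and A is k-competitive relative to the class even when paired with the best subroutine, so the combined algorithm is kl-competitive overall.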

    A Critique of the CAP Theorem

    The CAP Theorem is a frequently cited impossibility result in distributed systems, especially among NoSQL distributed databases. In this paper we survey some of the confusion about the meaning of CAP, including inconsistencies and ambiguities in its definitions, and we highlight some problems in its formalization. CAP is often interpreted as proof that eventually consistent databases have better availability properties than strongly consistent databases; although there is some truth in this, we show that more careful reasoning is required. These problems cast doubt on the utility of CAP as a tool for reasoning about trade-offs in practical systems. As an alternative to CAP, we propose a "delay-sensitivity" framework, which analyzes the sensitivity of operation latency to network delay, and which may help practitioners reason about the trade-offs between consistency guarantees and tolerance of network faults.