
    PaRiS: Causally Consistent Transactions with Non-blocking Reads and Partial Replication

    Geo-replicated data platforms form the backbone of several large-scale online services. Transactional Causal Consistency (TCC) is an attractive consistency level for building such platforms. TCC avoids many anomalies of eventual consistency, eschews the synchronization costs of strong consistency, and supports interactive read-write transactions. Partial replication is another attractive design choice for building geo-replicated platforms, as it increases the storage capacity and reduces update propagation costs. This paper presents PaRiS, the first TCC system that supports partial replication and implements non-blocking parallel read operations, whose latency is paramount for the performance of read-intensive applications. PaRiS relies on a novel protocol to track dependencies, called Universal Stable Time (UST). By means of a lightweight background gossip process, UST identifies a snapshot of the data that has been installed by every DC in the system. Hence, transactions can consistently read from such a snapshot on any server in any replication site without having to block. Moreover, PaRiS requires only one timestamp to track dependencies and define transactional snapshots, thereby achieving resource efficiency and scalability. We evaluate PaRiS on a large-scale AWS deployment composed of up to 10 replication sites. We show that PaRiS scales well with the number of DCs and partitions, while being able to handle larger datasets than existing solutions that assume full replication. We also demonstrate the performance gain of non-blocking reads over a blocking alternative (up to 1.47x higher throughput with 5.91x lower latency for read-dominated workloads, and up to 1.46x higher throughput with 20.56x lower latency for write-heavy workloads).
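
    To make the UST idea concrete, here is a minimal Python sketch of the aggregation step, assuming each server exposes a single local stable timestamp; the class and function names are illustrative, not PaRiS's actual interface.

    ```python
    # Minimal sketch of Universal Stable Time (UST), assuming each server
    # tracks one local stable timestamp; names are illustrative, not PaRiS's API.

    class Server:
        def __init__(self, name, local_stable_time):
            self.name = name
            # newest update that is durably installed at this server
            self.local_stable_time = local_stable_time

    def universal_stable_time(servers):
        """UST is the oldest local stable time gossiped across all partitions in
        all DCs: every update at or below it is installed system-wide, so a
        transaction can read that snapshot anywhere without blocking."""
        return min(s.local_stable_time for s in servers)

    servers = [Server("dc1-p0", 17), Server("dc1-p1", 19), Server("dc2-p0", 15)]
    snapshot_ts = universal_stable_time(servers)
    assert all(snapshot_ts <= s.local_stable_time for s in servers)
    print(f"non-blocking reads use snapshot {snapshot_ts}")  # snapshot 15
    ```

    Because the snapshot is known to be installed everywhere, a single scalar timestamp suffices to define it, which is what gives the protocol its resource efficiency.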

    Adaptive gossip-based broadcast

    This paper presents a novel adaptation mechanism that allows every node of a gossip-based broadcast algorithm to adjust its rate of message emission (1) to the amount of resources available to the nodes within the same broadcast group and (2) to the global level of congestion in the system. The adaptation mechanism can be applied to all gossip-based broadcast algorithms we know of, and it makes their use more realistic in practical situations where nodes have limited resources whose quantity changes dynamically over time, without decreasing reliability.
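
    One possible reading of the mechanism, as a hedged Python sketch: the emission rate is scaled down by whichever signal is scarcer, local group resources or global congestion headroom. The combination rule and the 0-to-1 congestion scale are assumptions for illustration, not the paper's actual formula.

    ```python
    # Hedged sketch of adaptive gossip emission; the min-based combination rule
    # and the normalized congestion scale are assumed for the example.

    def adapted_rate(base_rate, group_buffer_free, group_buffer_size, congestion):
        """Scale the base emission rate by whichever is scarcer: resource headroom
        within the broadcast group or headroom left by global congestion."""
        resource_factor = group_buffer_free / group_buffer_size  # local resources
        congestion_factor = 1.0 - congestion                     # 0 = idle, 1 = saturated
        return base_rate * min(resource_factor, congestion_factor)

    # A node whose group has 40% of its buffers free, under 25% global congestion,
    # emits at 40 msgs/s instead of its nominal 100 msgs/s.
    print(adapted_rate(100.0, 40, 100, 0.25))  # 40.0
    ```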

    Scaling Private Collaborated Consortium Blockchains Using State Machine Replication Over Random Graphs

    Blockchain technology has redefined the way the software industry's core mechanisms operate. With recent generations of improvement observed in blockchain, the industry is surging ahead towards replacing existing computing paradigms with consortium blockchain-enabled solutions. Much research therefore aims to bring blockchain technology's performance on par with existing systems, most of it focused on optimizing the consensus algorithms that govern the system. One major direction for upcoming iterations of blockchain technology is making individual consortium blockchains collaborate with other consortium blockchains to validate operations on a common set of data shared among the systems. The traditional approach requires all the organizations to run the consensus and validate the change, which is computationally expensive and reduces the modularity of the system. Moreover, optimized consensus algorithms have specific requirements and assumptions which, if extended to all the organizations, lead to a cluttered system with many dependencies. This thesis proposes an architecture that leverages state machine replication, extended to all the nodes of different organizations with seamless updates over a random graph network, without involving all the nodes in the consensus. This also enables organizations to run their respective consensus algorithms depending on their requirements. The approach guarantees the finality of consistent data updates with reduced computation and a high degree of scalability and flexibility.
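
    One way to picture the proposed dissemination, sketched below under assumptions of our own (flooding as the propagation rule, a fixed peer degree): an update committed by one organization's consensus is replicated to every node over a random graph, so the other organizations apply it to their state machines without re-running consensus.

    ```python
    # Illustrative sketch, not the thesis's implementation: a committed update is
    # flooded over a random graph so all replicas apply it exactly once. The graph
    # builder and the flooding rule are assumptions for the example.

    import random

    def random_graph(nodes, degree, seed=0):
        """Connect each node to `degree` random peers; links are bidirectional."""
        rng = random.Random(seed)
        peers = {n: set() for n in nodes}
        for n in nodes:
            for m in rng.sample([x for x in nodes if x != n], degree):
                peers[n].add(m)
                peers[m].add(n)
        return peers

    def replicate(update, origin, peers, state):
        """Flood a committed update; each state machine applies it exactly once."""
        frontier = [origin]
        while frontier:
            node = frontier.pop()
            if update in state[node]:
                continue
            state[node].append(update)  # deterministic apply keeps replicas consistent
            frontier.extend(peers[node])

    nodes = [f"org{i}-n{j}" for i in range(3) for j in range(4)]
    peers = random_graph(nodes, degree=3)
    state = {n: [] for n in nodes}
    replicate("tx: set x = 1", nodes[0], peers, state)
    print(sum(bool(state[n]) for n in nodes), "of", len(nodes), "replicas updated")
    ```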

    Scalable Byzantine Reliable Broadcast

    Byzantine reliable broadcast is a powerful primitive that allows a set of processes to agree on a message from a designated sender, even if some processes (including the sender) are Byzantine. Existing broadcast protocols for this setting scale poorly, as they typically build on quorum systems with strong intersection guarantees, which results in linear per-process communication and computation complexity. We generalize the Byzantine reliable broadcast abstraction to the probabilistic setting, allowing each of its properties to be violated with a fixed, arbitrarily small probability. We leverage these relaxed guarantees in a protocol where we replace quorums with stochastic samples. Compared to quorums, samples are significantly smaller in size, leading to a more scalable design. We obtain the first Byzantine reliable broadcast protocol with logarithmic per-process communication and computation complexity. We conduct a complete and thorough analysis of our protocol, deriving bounds on the probability of each of its properties being compromised. During our analysis, we introduce a novel general technique that we call adversary decorators. Adversary decorators allow us to make claims about the optimal strategy of the Byzantine adversary without imposing any additional assumptions. We also introduce Threshold Contagion, a model of message propagation through a system with Byzantine processes. To the best of our knowledge, this is the first formal analysis of a probabilistic broadcast protocol in the Byzantine fault model. We show numerically that practically negligible failure probabilities can be achieved with realistic security parameters.
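
    The following Python sketch illustrates only the sample-versus-quorum idea; the sample size, the delivery threshold, and the single echo phase are assumptions chosen for the example, not the paper's protocol or its analysis.

    ```python
    # Sketch of polling a logarithmic sample instead of a linear quorum; all
    # parameters here are assumed for illustration.

    import math
    import random

    def deliver_from_sample(n, echoes, msg, seed=0, threshold=0.8):
        """Poll an O(log n) sample of the n processes instead of a quorum;
        deliver `msg` if enough sampled processes echoed it. `echoes` maps a
        process id to the message it echoed (or None)."""
        rng = random.Random(seed)
        sample_size = min(n, 4 * math.ceil(math.log2(n)))  # logarithmic, not linear
        sample = rng.sample(range(n), sample_size)
        votes = sum(1 for p in sample if echoes.get(p) == msg)
        return votes >= threshold * sample_size

    n = 10_000
    echoes = {p: "m" for p in range(n) if p % 10 != 0}  # 90% of processes echoed "m"
    print(deliver_from_sample(n, echoes, "m"))  # True with high probability
    ```

    The design trade-off is exactly the one the abstract states: each property now fails with a small, tunable probability, in exchange for contacting a sample whose size grows logarithmically rather than linearly with the system.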

    On Data Dissemination for Large-Scale Complex Critical Infrastructures

    Middleware plays a key role in achieving the mission of future large-scale complex critical infrastructures, envisioned as federations of several heterogeneous systems over the Internet. However, available approaches for data dissemination remain inadequate, since they are unable to scale and to jointly assure given QoS properties. In addition, the best-effort delivery strategy of the Internet and the occurrence of node failures further hinder the correct and timely delivery of data if the middleware is not equipped with means for tolerating such failures. This paper presents a peer-to-peer approach for resilient and scalable data dissemination over large-scale complex critical infrastructures. The approach is based on the adoption of epidemic dissemination algorithms between peer groups, combined with semi-active replication of group leaders to tolerate failures and assure the resilient delivery of data, despite the increasing scale and heterogeneity of the federated system. The effectiveness of the approach is shown by means of extensive simulation experiments based on Stochastic Activity Networks.
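
    A toy sketch of the combination described above, with hypothetical names: leaders gossip data between peer groups, while a semi-active backup in each group processes the same input stream, so delivery survives a leader failure without state transfer.

    ```python
    # Hedged sketch of semi-active leader replication in group-based epidemic
    # dissemination; the classes and the failover rule are assumed for the example.

    class Group:
        def __init__(self, name):
            self.name = name
            self.leader_alive = True
            self.leader_log = []  # data delivered by the primary leader
            self.backup_log = []  # semi-active replica mirrors every input

        def deliver(self, item):
            self.backup_log.append(item)  # the backup always processes the input
            if self.leader_alive:
                self.leader_log.append(item)

        def log(self):
            # on leader failure the backup's state is already up to date
            return self.leader_log if self.leader_alive else self.backup_log

    def epidemic_round(groups, item):
        for g in groups:  # leaders exchange the item group-to-group
            g.deliver(item)

    groups = [Group("g1"), Group("g2")]
    epidemic_round(groups, "reading-42")
    groups[0].leader_alive = False  # leader crash in g1
    epidemic_round(groups, "reading-43")
    print(groups[0].log())  # ['reading-42', 'reading-43'] despite the crash
    ```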

    NEEM: network-friendly epidemic multicast

    Epidemic, or probabilistic, multicast protocols have emerged as a viable mechanism to circumvent the scalability problems of reliable multicast protocols. However, most existing epidemic approaches use connectionless transport protocols to exchange messages and rely on the intrinsic robustness of the epidemic dissemination to mask network omissions. Unfortunately, such an approach is not network-friendly, since the epidemic protocol makes no effort to reduce the load imposed on the network when the system is congested. In this paper, we propose a novel epidemic protocol whose main characteristic is to be network-friendly. This property is achieved by relying on connection-oriented transport protocols, such as TCP/IP, to support the communication among peers. Since during congestion messages accumulate at the border of the network, the protocol uses an innovative buffer management scheme that combines different selection techniques to discard messages upon overflow. This technique improves the quality of the information delivered to the application during periods of network congestion. The protocol has been implemented, and the benefits of the approach are illustrated using a combination of experimental and simulation results.
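
    As an illustration of such buffer management, here is a hedged sketch; the purge policy shown (evict old, widely forwarded messages first) is one assumed example of combining selection techniques, not NEEM's actual rules.

    ```python
    # Sketch of overflow-driven message purging in a sender buffer; the victim
    # selection rule is assumed for the example.

    class GossipBuffer:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = []  # (msg_id, age_in_rounds, hops_travelled)

        def offer(self, msg_id, hops):
            if len(self.items) >= self.capacity:
                # combined selection: purge messages that have already spread far
                # (many hops), breaking ties toward the oldest entry
                victim = max(self.items, key=lambda m: (m[2], m[1]))
                self.items.remove(victim)
            self.items.append((msg_id, 0, hops))

        def tick(self):
            """Advance one gossip round: every buffered message ages."""
            self.items = [(m, age + 1, h) for (m, age, h) in self.items]

    buf = GossipBuffer(capacity=2)
    buf.offer("a", hops=5); buf.offer("b", hops=1); buf.offer("c", hops=0)
    print([m for (m, _, _) in buf.items])  # ['b', 'c']: 'a' purged (most hops)
    ```

    The point of purging rather than blocking is that, over TCP, congestion would otherwise stall the whole dissemination; dropping the least valuable buffered message keeps the freshest information flowing.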

    Emergent structure in unstructured epidemic multicast

    In epidemic or gossip-based multicast protocols, each node simply relays each message to some random neighbors, such that all destinations receive it at least once with high probability. In sharp contrast, structured multicast protocols explicitly build and use a spanning tree to take advantage of efficient paths, and aim at having each message received exactly once. Unfortunately, when failures occur, the tree must be rebuilt. Gossiping thus provides simplicity and resilience at the expense of performance and resource efficiency. In this paper we propose a novel technique that exploits knowledge about the environment to schedule payload transmission when gossiping. The resulting protocol retains the desirable qualities of gossip, but approximates the performance of structured multicast. In some sense, instead of imposing structure by construction, we let it emerge from the operation of the gossip protocol. Experimental evaluation shows that this approach is effective even when knowledge about the environment is only approximate.
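
    A minimal sketch of the scheduling idea, under our own assumptions (a per-link latency estimate and a fixed threshold): payloads are pushed eagerly on links believed to be fast and merely advertised on slow ones, so the payload distribution paths come to resemble an efficient tree without ever building one explicitly.

    ```python
    # Hedged sketch of environment-aware payload scheduling in gossip; the
    # latency estimates and threshold are assumptions for the example.

    def gossip_round(msg_id, payload, neighbors, link_latency_ms, eager_threshold_ms=20):
        """Return the per-neighbor transmissions for one gossip round."""
        plan = {}
        for n in neighbors:
            if link_latency_ms[n] <= eager_threshold_ms:
                plan[n] = ("PAYLOAD", msg_id, payload)  # eager push: full message
            else:
                plan[n] = ("IHAVE", msg_id)  # lazy push: advertise the id only;
                                             # the neighbor pulls if still missing
        return plan

    latency = {"p1": 5, "p2": 120, "p3": 12}
    print(gossip_round("m7", b"...", ["p1", "p2", "p3"], latency))
    ```

    Because the id is still gossiped everywhere, the protocol keeps gossip's resilience: if a fast link fails, a neighbor that only saw the advertisement can recover the payload over a slower one.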