
    Sequentially consistent versus linearizable counting networks

    We compare the impact of timing conditions on implementing sequentially consistent and linearizable counters using (uniform) counting networks in distributed systems. For counting problems in application domains that do not require linearizability but run correctly if only sequential consistency is provided, the results of our investigation, and their potential payoffs, are threefold:
    • First, we show that sequential consistency and linearizability cannot be distinguished by the timing conditions previously considered in the context of counting networks; thus, in contexts where these constraints apply, it is possible to rely on the stronger semantics of linearizability, which simplifies proofs and enhances compositionality.
    • Second, we identify local timing conditions that support sequential consistency but not linearizability; thus, we suggest weaker, easily implementable timing conditions that are likely to be sufficient in many applications.
    • Third, we show that any kind of synchronization that is too weak to support even sequential consistency may violate it significantly for some counting networks.
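The distinction the abstract draws can be made concrete for counting. A counting history is linearizable exactly when values respect the real-time order of non-overlapping operations; sequential consistency imposes this only within each process. A minimal sketch (not from the paper) of a checker for the linearizability side:

```python
def is_linearizable_counting(ops):
    """Check a counting history for linearizability.

    ops: list of (start, end, value) intervals for getAndIncrement calls.
    Linearizable counting requires that if one operation finishes before
    another starts in real time, the earlier operation obtains the smaller
    value. Sequential consistency would only require this per process.
    """
    for s1, e1, v1 in ops:
        for s2, e2, v2 in ops:
            if e1 < s2 and v1 > v2:
                return False  # real-time order violated by returned values
    return True

# Two non-overlapping calls where the later call gets the smaller value:
# not linearizable (though possibly still sequentially consistent if the
# calls were made by different processes).
print(is_linearizable_counting([(0, 1, 1), (2, 3, 0)]))
```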

    The impact of timing on linearizability in counting networks

    Counting networks form a new class of distributed, low-contention data structures, made up of balancers and wires, which are suitable for solving a variety of multiprocessor synchronization problems that can be expressed as counting problems. A linearizable counting network guarantees that the order of the values it returns respects the real-time order in which they were requested. Linearizability significantly raises the capabilities of the network, but at a possible price in network size or synchronization support. In this work, we further pursue the systematic study of the impact of timing assumptions on linearizability for counting networks, along the line of research recently initiated by Lynch et al. [18]. We consider two basic timing models: the instantaneous balancer model, in which the transition of a token from an input to an output port of a balancer is modeled as an instantaneous event, and the periodic balancer model, in which balancers send out tokens at a fixed rate. In both models, we assume lower and upper bounds on the delays incurred by the wires connecting the balancers. We present necessary and sufficient conditions for linearizability in these models, in the form of precise inequalities that involve not only parameters of the timing models but also certain structural parameters of the counting network, which may be of more general interest. Our results extend and strengthen previous impossibility and possibility results on linearizability in counting networks.
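To make the balancer/wire vocabulary concrete, here is a sketch (not from the paper) of the smallest case: a single balancer routing tokens alternately to two output counters with stride 2, which forms a width-2 counting network satisfying the step property.

```python
import threading


class Balancer:
    """A balancer routes incoming tokens alternately to its two outputs."""

    def __init__(self):
        self._up = True
        self._lock = threading.Lock()  # models the balancer's atomic toggle

    def traverse(self):
        with self._lock:
            wire = 0 if self._up else 1
            self._up = not self._up
            return wire


class CountingNetwork2:
    """Width-2 counting network: one balancer feeding two counters.

    Counter i hands out values i, i + 2, i + 4, ..., so across all tokens
    the network distributes the values 0, 1, 2, ... with no duplicates.
    """

    def __init__(self):
        self.balancer = Balancer()
        self.counters = [0, 1]
        self.locks = [threading.Lock(), threading.Lock()]

    def get_and_increment(self):
        wire = self.balancer.traverse()
        with self.locks[wire]:
            value = self.counters[wire]
            self.counters[wire] += 2
            return value
```

Larger networks (e.g., bitonic counting networks) compose many such balancers so that tokens spread across the output wires with low contention on any single memory location.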

    Quantifiability: Concurrent Correctness from First Principles

    Architectural imperatives due to the slowing of Moore's Law, the broad acceptance of relaxed semantics, and the O(n!) worst-case verification complexity of sequential histories motivate a new approach to concurrent correctness. Desiderata for a new correctness condition are that it be independent of sequential histories, compositional over objects, flexible as to timing, modular as to semantics, and free of inherent locking or waiting. This dissertation proposes quantifiability, a novel correctness condition based on intuitive first principles. Quantifiability is formally defined together with its system model. Useful properties of quantifiability, such as compositionality, measurability, and observational refinement, are demonstrated. Quantifiability models a system in vector space to launch a new mathematical analysis of concurrency. The vector space model is suitable for a wide range of concurrent systems and their associated data structures. Proof of correctness is facilitated with linear algebra, which is better supported and of more efficient time complexity than traditional combinatorial methods. Experimental results are presented showing that quantifiable data structures are highly scalable due to their use of relaxed semantics, an implementation trade-off that is explicitly permitted by quantifiability. The speedups attainable are theoretically analyzed. Because previous work lacked a metric for evaluating such trade-offs, a new measure is proposed here that applies communication theory to the disordered results of concurrent data structures. This entropy measure opens the way to analyzing degrees of concurrent correctness across implementations, to engineer system scalability and evaluate data structure quality under different workloads. With all its innovation, quantifiability is presented in the context of previous work and existing correctness conditions.
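The dissertation's concrete entropy measure is not reproduced here; as a hypothetical illustration of the idea of applying communication theory to disordered results, one could compute the Shannon entropy of the displacement distribution between a relaxed structure's output order and the strict FIFO order.

```python
from collections import Counter
from math import log2


def order_entropy(returned, ideal):
    """Shannon entropy (bits) of the displacement distribution.

    returned: the order in which a relaxed structure actually yielded items.
    ideal:    the order a strict sequential (FIFO) structure would yield.
    Items are assumed distinct. 0 bits means the relaxed structure behaved
    exactly like the sequential one; higher values mean more disorder.
    This is an illustrative measure, not the dissertation's definition.
    """
    displacements = [returned.index(x) - i for i, x in enumerate(ideal)]
    counts = Counter(displacements)
    n = len(displacements)
    return -sum(c / n * log2(c / n) for c in counts.values())
```

Such a measure lets implementations with different relaxation trade-offs be compared quantitatively under the same workload.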

    A Concurrency and Time Centered Framework for Certification of Autonomous Space Systems

    Future space missions, such as the Mars Science Laboratory, call for the engineering of some of the most complex man-rated autonomous software systems. Present process-oriented certification methodologies are becoming prohibitively expensive and do not reach the level of detail needed to provide guidelines for the development and validation of concurrent software. Time and concurrency are the most critical notions in an autonomous space system. In this work we present the design and implementation of the first concurrency- and time-centered framework for product-oriented software certification of autonomous space systems. To achieve fast and reliable concurrent interactions, we define and apply the notion of Semantically Enhanced Containers (SECs). SECs are data structures designed to provide the flexibility and usability of the popular ISO C++ STL containers, while at the same time being hand-crafted to guarantee domain-specific policies, such as conformance to a given concurrency model. The application of nonblocking programming techniques is critical to the implementation of our SEC containers. Lock-free algorithms help avoid the hazards of deadlock, livelock, and priority inversion, and at the same time deliver fast and scalable performance. Practical lock-free algorithms are notoriously difficult to design and implement, and pose a number of hard problems such as ABA avoidance, high complexity, portability, and meeting the linearizability correctness requirements. This dissertation presents the design of the first lock-free dynamically resizable array. Our approach offers a set of practical, portable, lock-free, and linearizable STL vector operations and a fast and space-efficient implementation when compared to the alternative lock- and STM-based techniques.
Currently, the literature does not offer an explicit analysis of the ABA problem, its relation to the most commonly applied nonblocking programming techniques, and the possibilities for its detection and avoidance. Eliminating the hazards of ABA is left to the ingenuity of the software designer. We present a generic and practical solution to the fundamental ABA problem for lock-free descriptor-based designs. To equip our SEC containers with the ability to validate domain-specific invariants, we present Basic Query, our expression-template-based library for statically extracting semantic information from C++ source code. The use of static analysis allows for a far more efficient implementation of our nonblocking containers than would otherwise have been possible with traditional run-time-based techniques. Shared data in a real-time cyber-physical system can often be polymorphic (as is the case with a number of components that are part of the Mission Data System's Data Management Services). The use of dynamic cast is important in the design of autonomous real-time systems, since the operation allows for a direct representation of the management and behavior of polymorphic data. To allow for the application of dynamic cast in mission-critical code, we validate and improve a methodology for constant-time dynamic cast that shifts the complexity of the operation to the compiler's static checker. In a case study that demonstrates the applicability of the programming and validation techniques of our certification framework, we show the process of verification and semantic parallelization of the Mission Data System's (MDS) Goal Networks. MDS provides an experimental platform for testing and development of autonomous real-time flight applications.
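The ABA hazard mentioned above, and the version-stamp defence commonly paired with descriptor-based designs, can be illustrated with a simulated compare-and-swap (a sketch only; real implementations use hardware CAS on a pointer/counter pair, not Python):

```python
class VersionedRef:
    """A CAS target that pairs a value with a monotonic version stamp.

    A plain CAS that compares only the value cannot tell that the value
    went A -> B -> A in the meantime (the ABA problem); the version stamp
    makes that intermediate history visible and the stale CAS fail.
    """

    def __init__(self, value):
        self._value, self._version = value, 0

    def read(self):
        return self._value, self._version

    def cas(self, expected_value, expected_version, new_value):
        # Simulated atomic compare-and-swap on the (value, version) pair.
        if (self._value, self._version) == (expected_value, expected_version):
            self._value, self._version = new_value, self._version + 1
            return True
        return False


ref = VersionedRef("A")
val, ver = ref.read()        # a slow thread reads ("A", 0)
ref.cas("A", 0, "B")         # meanwhile the value goes A -> B ...
ref.cas("B", 1, "A")         # ... and back to A (version is now 2)
ok = ref.cas(val, ver, "C")  # the slow thread's stale CAS fails: ABA detected
```

With a value-only CAS, the final operation would have succeeded and silently corrupted the structure; the version stamp turns the A-B-A history into a detectable change.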

    Security and Fairness of Blockchain Consensus Protocols

    The increasing popularity of blockchain technology has created a need to study and understand consensus protocols, their properties, and security. As users seek alternatives to traditional intermediaries, such as banks, the challenge lies in establishing trust within a robust and secure system. This dissertation explores the landscape beyond cryptocurrencies, including consensus protocols and decentralized finance (DeFi). Cryptocurrencies, like Bitcoin and Ethereum, symbolize the global recognition of blockchain technology. At the core of every cryptocurrency lies a consensus protocol. Utilizing a proof-of-work consensus mechanism, Bitcoin ensures network security through energy-intensive mining. Ethereum, a representative of the proof-of-stake mechanism, enhances scalability and energy efficiency. Ripple, with its native XRP, utilizes a consensus algorithm based on voting for efficient cross-border transactions. The first part of the dissertation dives into Ripple's consensus protocol, analyzing its security. The Ripple network operates on a Byzantine fault-tolerant agreement protocol. Unlike traditional Byzantine protocols, Ripple lacks global knowledge of all participating nodes, relying on each node's trust for voting. This dissertation offers a detailed abstract description of the Ripple consensus protocol derived from the source code. Additionally, it highlights potential safety and liveness violations in the protocol during simple executions and relatively benign network assumptions. The second part of this thesis focuses on decentralized finance, a rapidly growing sector of the blockchain industry. DeFi applications aim to provide financial services without intermediaries, such as banks. However, the lack of regulation leaves space for different kinds of attacks. This dissertation focuses on the so-called front-running attacks. 
Front-running is a transaction-ordering attack in which a malicious party exploits knowledge of pending transactions to gain an advantage. To mitigate this problem, recent efforts introduced order fairness for transactions as a safety property for consensus, enhancing the traditional agreement and liveness properties. Our work addresses limitations in existing formalizations and proposes a new differential order fairness property. The novel quick order-fair atomic broadcast (QOF) protocol ensures transaction delivery in a differentially fair order and is more efficient than current protocols. It works optimally in asynchronous and eventually synchronous networks, tolerating the corruption of up to one third of the parties, an improvement over previous solutions that tolerate fewer faults. This work is further extended by presenting a modular implementation of the QOF protocol. Empirical evaluations compare QOF's performance to a fairness-lacking consensus protocol, revealing a marginal 5% throughput decrease and an approximately 50 ms latency increase. The study contributes to understanding the practical aspects of the QOF protocol, establishing connections with similar fairness-imposing protocols from the literature. The last part of this dissertation provides an overview of existing protocols designed to prevent transaction reordering within DeFi. These defense methods are systematically classified into four categories. The first category employs distributed cryptography to prevent side-information leaks to malicious insiders, ensuring a causal order on the consensus-generated transaction sequence. The second category, receive-order fairness, analyzes how the individual parties participating in the consensus protocol receive transactions, imposing corresponding constraints on the resulting order. The third category, known as randomized order, aims to neutralize the influence of consensus-running parties on the transaction order.
The fourth category, architectural separation, proposes separating out the task of ordering transactions and assigning it to a distinct service.
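As a toy sketch of the receive-order-fairness category (a deliberate simplification; QOF's actual rule is differential order fairness, not this median heuristic), one can order transactions by the median position at which the parties locally received them, so that a single party reporting a transaction late cannot push it to the back of the sequence:

```python
from statistics import median


def median_receive_order(receive_orders):
    """Order transactions by the median of their local receive positions.

    receive_orders: one list per party, each a permutation of the same
    transaction ids. Ties are broken deterministically by transaction id.
    Illustrative only; real protocols must also handle parties that omit
    or equivocate about transactions.
    """
    txs = receive_orders[0]
    pos = {tx: median(seq.index(tx) for seq in receive_orders) for tx in txs}
    return sorted(txs, key=lambda tx: (pos[tx], tx))


# Two of three parties saw "a" first, so "a" is ordered first even though
# one (possibly front-running) party reports it after "b".
orders = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
```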