
    Compact Deterministic Self-Stabilizing Leader Election: The Exponential Advantage of Being Talkative

    This paper focuses on compact deterministic self-stabilizing solutions to the leader election problem. When the protocol is required to be \emph{silent} (i.e., when communication content remains fixed from some point in time during any execution), there exists a lower bound of \Omega(\log n) bits of memory per node participating in the leader election (where n denotes the number of nodes in the system). This lower bound holds even in rings. We present a new deterministic (non-silent) self-stabilizing protocol for n-node rings that uses only O(\log\log n) memory bits per node and stabilizes in O(n\log^2 n) rounds. Our protocol has several attractive features that make it suitable for practical purposes. First, the communication model matches the model used by existing compilers for real networks. Second, the size of the ring (or any upper bound on this size) need not be known by any node. Third, the node identifiers can be of various sizes. Finally, no synchrony assumption, besides a weakly fair scheduler, is made. Therefore, our result shows that, perhaps surprisingly, trading silence for an exponential improvement in memory space does not come at a high cost in stabilization time or minimal assumptions.
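    To make the baseline being improved upon concrete, here is a minimal sketch (an illustration of the classic approach, not the paper's protocol) of \Theta(\log n)-bits-per-node self-stabilizing leader election on a unidirectional ring: each node stores a full (leader_id, hop_distance) pair, and bogus identifiers injected by transient faults die once their hop counter reaches n. Note the sketch assumes the ring size n is known, exactly the assumption the paper's O(\log\log n)-bit protocol avoids.

```python
# Illustrative sketch only: classic max-ID self-stabilizing election on a
# unidirectional ring of KNOWN size n, storing Theta(log n) bits per node.
# The paper's contribution is a non-silent protocol that needs only
# O(log log n) bits and no knowledge of n.
import random

def step(ids, state, i, n):
    """One atomic move of node i, reading its left neighbor's state."""
    lid, d = state[(i - 1) % n]
    candidate = (ids[i], 0)                      # claim leadership myself
    relay = (lid, d + 1) if d + 1 < n else None  # kill over-travelled tokens
    best = candidate
    if relay and (relay[0], -relay[1]) > (best[0], -best[1]):
        best = relay                             # prefer larger ID, fewer hops
    return best

def stabilize(ids, state):
    """Run until silent under a weakly fair (random-order) scheduler."""
    n = len(ids)
    for round_no in range(1, 3 * n + 1):
        moved = False
        for i in random.sample(range(n), n):
            new = step(ids, state, i, n)
            if new != state[i]:
                state[i], moved = new, True
        if not moved:
            return round_no
    return None

ids = [17, 3, 42, 8, 23]
corrupted = [(99, 1), (42, 0), (7, 4), (42, 3), (99, 2)]  # arbitrary faults
stabilize(ids, corrupted)
assert all(lid == max(ids) for lid, _ in corrupted)       # 42 elected everywhere
```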

    Global state predicates in rough real-time

    Distributed systems are characterized by the fact that the constituent processes have neither common memory nor a common system clock. These processes communicate solely via message passing. While providing a number of benefits such as increased reliability, increased computational power, and geographic dispersion, this architecture significantly complicates many of the tasks of software development and verification, including evaluation of the program state. In the case of distributed systems, the program state comprises the local states of the constituent processes, as well as the state of the channels between processes, and is called the global state.

    With no common system clock, many distributed system protocols rely on the global ordering of local process events imposed by the message passing that occurs between processes. This leads to a partial global ordering of local process events, which can then be used to determine which process states could (or could not) have occurred simultaneously.

    Traditional predicate evaluation protocols evaluate predicates on the global state of a distributed computation using consistent global states. This evaluation is complicated by the fact that the event ordering imposed by message passing is only partial. A complete history of the global states that occurred during an execution cannot always be constructed. This introduces inefficiency into predicate detection protocols and prohibits detection of certain predicates.

    This dissertation explores the use of a rough global time base for global state predicate evaluation within distributed systems. By structuring the evaluation on the assumption that a global time base exists, we can develop simple and efficient protocols for both stable and unstable predicate evaluation. Further, we can evaluate certain predicates which are not easily evaluated using consistent global states. We demonstrate these advantages by developing protocols for detection of distributed termination, distributed deadlock detection, and detection of certain unstable predicates as they occur. Because the global time base is rough, we can detect only unstable predicates which remain true for a sufficient duration. We additionally develop several formalizations which assist the protocol developer in dealing with the fact that the global time base is not perfect, and we demonstrate the application of these formalizations within the protocols that we develop.
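    The consistent-global-state machinery the dissertation contrasts with can be illustrated with vector clocks (a standard construction, assumed here for illustration): the message-passing partial order is captured by vector timestamps, and two local states could have occurred simultaneously iff their timestamps are incomparable.

```python
# Minimal sketch of the happened-before partial order via vector clocks.
def happened_before(u, v):
    """True iff the event stamped u causally precedes the event stamped v."""
    return all(a <= b for a, b in zip(u, v)) and u != v

def concurrent(u, v):
    """True iff neither precedes the other: they could have been simultaneous."""
    return not happened_before(u, v) and not happened_before(v, u)

# Two processes; P0 sends a message after its first event, P1 receives it.
e0 = (1, 0)   # P0: local event, then send
e1 = (0, 1)   # P1: local event before the receive
e2 = (1, 2)   # P1: receive, clock merged with the sender's
assert concurrent(e0, e1)          # could have occurred simultaneously
assert happened_before(e0, e2)     # the message ordering forbids e2 first
```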

    On the Runtime of Chemical Reaction Networks Beyond Idealized Conditions

    This paper studies the (discrete) chemical reaction network (CRN) computational model that emerged in the last two decades as an abstraction for molecular programming. The correctness of CRN protocols is typically established under one of two possible schedulers that determine how the execution advances: (1) a stochastic scheduler that obeys the (continuous time) Markov process dictated by the standard model of stochastic chemical kinetics; or (2) an adversarial scheduler whose only commitment is to maintain a certain fairness condition. The latter scheduler is justified by the fact that the former crucially assumes "idealized conditions" that, more often than not, do not hold in real wet-lab experiments. However, when it comes to analyzing the runtime of CRN protocols, the existing literature focuses strictly on the stochastic scheduler, raising the research question that drives this work: is there a meaningful way to quantify the runtime of CRNs without the idealized-conditions assumption? The main conceptual contribution of the current paper is to answer this question in the affirmative, formulating a new runtime measure for CRN protocols that does not rely on idealized conditions. This runtime measure is based on an adapted (weaker) fairness condition, as well as a novel scheme that enables partitioning the execution into short rounds and charging the runtime for each round individually (inspired by definitions for the runtime of asynchronous distributed algorithms). Following that, we investigate various fundamental computational tasks and establish (often tight) bounds on the runtime of the corresponding CRN protocols operating under the adversarial scheduler. This includes an almost complete chart of the runtime complexity landscape of predicate decidability tasks.
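    The two scheduler notions can be contrasted on a toy example (my own, not from the paper): a CRN deciding the predicate "#A >= 1" by the epidemic reaction A + N -> A + A. A stochastic scheduler picks reactant pairs uniformly at random from a well-mixed solution, while an adversarial scheduler may fire any enabled reaction, constrained only by fairness.

```python
# Hedged sketch contrasting the two schedulers on the epidemic A + N -> 2A.
import random

def stochastic_run(counts):
    """Uniform random pair interactions (the idealized well-mixed model)."""
    steps = 0
    while counts["N"] > 0:
        pool = [s for s, c in counts.items() for _ in range(c)]
        x, y = random.sample(range(len(pool)), 2)
        if {pool[x], pool[y]} == {"A", "N"}:   # the reaction fires
            counts["N"] -= 1
            counts["A"] += 1
        steps += 1                             # null interactions also cost time
    return steps

def adversarial_run(counts):
    """Adversary chooses which enabled reaction fires; fairness forces progress."""
    steps = 0
    while counts["N"] > 0:
        counts["N"] -= 1                       # fairness guarantees the enabled
        counts["A"] += 1                       # reaction is eventually fired
        steps += 1
    return steps

print(stochastic_run({"A": 1, "N": 99}))   # roughly Theta(n log n) interactions
print(adversarial_run({"A": 1, "N": 99}))  # exactly n - 1 forced firings
```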

    Necessary and Sufficient Conditions on Partial Orders for Modeling Concurrent Computations

    Partial orders are used extensively for modeling and analyzing concurrent computations. In this paper, we define two properties of partially ordered sets, width-extensibility and interleaving-consistency, and show that a partial order can be a valid state-based model (1) of some synchronous concurrent computation iff it is width-extensible, and (2) of some asynchronous concurrent computation iff it is width-extensible and interleaving-consistent. We also show a duality between the event-based and state-based models of concurrent computations, and give algorithms to convert models between the two domains. When applied to the problem of checkpointing, our theory leads to a better understanding of some existing results and algorithms in the field. It also leads to efficient detection algorithms for predicates whose evaluation requires knowledge of states from all the processes in the system.
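    A state-based model in this sense is a lattice of consistent cuts of the partial order on local states. The following sketch (my own illustration under simplified assumptions, not the paper's algorithms) enumerates the consistent global states of a two-process computation whose cross-process dependencies come from messages.

```python
# Enumerate consistent cuts (i, j): i local states of P0 and j of P1 have
# executed, and the cut is down-closed under cross-process dependencies.
from itertools import product

def consistent_cuts(n0, n1, edges):
    """edges contains pairs ((p, k), (q, m)): the k-th state of process p
    must be inside any cut that contains the m-th state of process q."""
    cuts = []
    for i, j in product(range(n0 + 1), range(n1 + 1)):
        done = lambda p, k: (k < i) if p == 0 else (k < j)
        if all(done(p, k) or not done(q, m) for (p, k), (q, m) in edges):
            cuts.append((i, j))
    return cuts

# P0 and P1 each have 2 local states; a message makes P1's second state
# depend on P0's first: ((0, 0), (1, 1)).
print(consistent_cuts(2, 2, [((0, 0), (1, 1))]))
# every (i, j) except the inconsistent cut (0, 2)
```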

    Survey of Distributed Decision

    We survey the recent distributed computing literature on checking whether a given distributed system configuration satisfies a given boolean predicate, i.e., whether the configuration is legal or illegal w.r.t. that predicate. We consider classical distributed computing environments, including mostly synchronous fault-free network computing (LOCAL and CONGEST models), but also asynchronous crash-prone shared-memory computing (WAIT-FREE model), and mobile computing (FSYNC model).
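    The basic decision pattern is simple to state (the toy example below is mine, not from the survey): in the LOCAL model, every node inspects only its own neighborhood and outputs accept or reject, and the configuration is declared legal iff all nodes accept. Here the predicate is "the coloring is proper".

```python
# Minimal sketch of distributed decision: legal iff every node accepts.
def node_verdict(v, color, adj):
    """Node v accepts iff no neighbor shares its color (radius-1 view)."""
    return all(color[v] != color[u] for u in adj[v])

def legal(color, adj):
    return all(node_verdict(v, color, adj) for v in adj)

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}         # a triangle
assert legal({0: "r", 1: "g", 2: "b"}, adj)      # proper: all nodes accept
assert not legal({0: "r", 1: "r", 2: "b"}, adj)  # one conflict: some node rejects
```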

    Automated Fixing of Programs with Contracts

    This paper describes AutoFix, an automatic debugging technique that can fix faults in general-purpose software. To provide high-quality fix suggestions and to enable automation of the whole debugging process, AutoFix relies on the presence of simple specification elements in the form of contracts (such as pre- and postconditions). Using contracts enhances the precision of dynamic analysis techniques for fault detection and localization, and for validating fixes. The only required user input to the AutoFix supporting tool is a faulty program annotated with contracts; the tool produces a collection of validated fixes for the fault, ranked according to an estimate of their suitability. In an extensive experimental evaluation, we applied AutoFix to over 200 faults in four code bases of different maturity and quality (of implementation and of contracts). AutoFix successfully fixed 42% of the faults, producing, in the majority of cases, corrections of quality comparable to those competent programmers would write; the computational resources used were modest, with an average time per fix below 20 minutes on commodity hardware. These figures compare favorably to the state of the art in automated program fixing, and demonstrate that the AutoFix approach is applicable to reduce the debugging burden in real-world scenarios.
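    The role contracts play as a dynamic-analysis oracle can be sketched as follows (a Python stand-in of my own; AutoFix itself targets Eiffel, whose contracts are native language constructs): pre- and postconditions act as executable checks, so a candidate fix is validated when the routine no longer violates its contract on the inputs that exposed the fault.

```python
# Hedged sketch: contracts as assertions that detect the fault and
# validate a candidate fix.
def clamp(x, lo, hi):
    assert lo <= hi, "precondition: non-empty interval"
    # Faulty body such a tool would flag: returned lo whenever x > hi.
    #   result = lo if x > hi else max(x, lo)
    result = min(max(x, lo), hi)               # candidate fix
    assert lo <= result <= hi, "postcondition: result inside [lo, hi]"
    assert result == x or x < lo or x > hi, "postcondition: identity in range"
    return result

# Contract-based validation: the fix passes the run that exposed the fault.
assert clamp(12, 0, 10) == 10
assert clamp(-3, 0, 10) == 0
assert clamp(5, 0, 10) == 5
```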