Phase Clocks for Transient Fault Repair
Phase clocks are synchronization tools that implement a form of logical time
in distributed systems. For systems tolerating transient faults by self-repair
of damaged data, phase clocks can enable reasoning about the progress of
distributed repair procedures. This paper presents a phase clock algorithm
suited to the model of transient memory faults in asynchronous systems with
read/write registers. The algorithm is self-stabilizing and guarantees accuracy
of phase clocks within O(k) time following an initial state that is k-faulty.
Composition theorems show how the algorithm can be used for the timing of
distributed procedures that repair system outputs.
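The paper's register-based algorithm is more involved, but the basic phase-clock idea can be illustrated with a classic toy self-stabilizing unison rule (a sketch, not the authors' algorithm): each process increments its unbounded phase counter only when it is a local minimum among its neighbors, so from an arbitrarily corrupted initial state the phases drift back to within one unit of each other on every edge and then advance together.

```python
def step(clocks, nbrs):
    """One round-robin round of the toy unison rule: each process reads its
    neighbours' phases and increments its own only if it is a local minimum."""
    for i in range(len(clocks)):
        if all(clocks[i] <= clocks[j] for j in nbrs[i]):
            clocks[i] += 1

# Ring of 4 processes; phases start arbitrarily corrupted (transient faults).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
clocks = [3, 0, 7, 2]

for _ in range(50):
    step(clocks, nbrs)

# Neighbouring phases end up within 1 of each other and keep advancing.
assert all(abs(clocks[u] - clocks[v]) <= 1 for u, v in edges)
assert min(clocks) > 7  # every phase has advanced past all initial values
```

Once stabilized, a process increments only when it is not ahead of any neighbor, so the "neighbors differ by at most 1" invariant is preserved while the clocks make steady progress; this is the kind of guarantee the paper's O(k)-time accuracy bound formalizes for its register model.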
Fast self-stabilizing Byzantine tolerant digital clock synchronization
Consider a distributed network in which up to a third of the nodes may be Byzantine, and in which the non-faulty nodes may be subject to transient faults that alter their memory in an arbitrary fashion. Within this model, we are interested in the digital clock synchronization problem, which consists of agreeing on bounded integer counters and increasing these counters regularly. It has been postulated in the past that synchronization cannot be solved in a Byzantine-tolerant and self-stabilizing manner. The first solution to this problem had an expected exponential convergence time. Later, a deterministic solution was published with linear convergence time, which is optimal for deterministic solutions. In the current paper we achieve an expected constant convergence time. We thus obtain the optimal probabilistic solution, both in terms of convergence time and in terms of resilience to Byzantine adversaries.
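To make the task itself concrete (deliberately ignoring Byzantine resilience, which is the hard part this paper addresses), here is a fault-free synchronous toy of digital clock synchronization: nodes hold arbitrary counters modulo a hypothetical bound M, and in each pulse every node adopts the median of all counters it hears and increments it. With no faulty nodes, a single exchange already yields agreement, after which the counters tick in lockstep.

```python
M = 8  # counters are bounded integers modulo M (illustrative choice)

def pulse(clocks):
    """One synchronous pulse: every node hears all counters, adopts their
    median, and increments modulo M. No Byzantine nodes in this toy."""
    med = sorted(clocks)[len(clocks) // 2]
    return [(med + 1) % M for _ in clocks]

clocks = [5, 0, 3, 7, 2]  # arbitrary initial state (transient corruption)
for _ in range(3):
    clocks = pulse(clocks)

assert len(set(clocks)) == 1  # all nodes agree and tick together
```

The difficulty the paper tackles is precisely that Byzantine nodes can report different counters to different honest nodes, so a naive median rule like this one no longer produces agreement, and achieving it self-stabilizingly in expected constant time is the paper's contribution.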
Minimizing Message Size in Stochastic Communication Patterns: Fast Self-Stabilizing Protocols with 3 bits
This paper considers the basic PULL model of communication, in
which, in each round, each agent extracts information from a few randomly chosen
agents. We seek to identify the smallest amount of information revealed in each
interaction (message size) that nevertheless allows for efficient and robust
computations of fundamental information dissemination tasks. We focus on the
Majority Bit Dissemination problem that considers a population of agents,
with a designated subset of source agents. Each source agent holds an input bit
and each agent holds an output bit. The goal is to let all agents converge
their output bits on the most frequent input bit of the sources (the majority
bit). Note that the particular case of a single source agent corresponds to the
classical problem of Broadcast. We concentrate on the severe fault-tolerant
context of self-stabilization, in which a correct configuration must be reached
eventually, despite all agents starting the execution with arbitrary initial
states.
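For the single-source special case (Broadcast) mentioned above, a naive self-stabilizing pull protocol is easy to sketch; this is a toy illustration of the model and goal, not the paper's 3-bit construction. The source pins its output to its input bit, every other agent copies the output of one uniformly sampled agent per round, and from any initial configuration the system eventually absorbs into the all-correct state (a voter model with one stubborn agent).

```python
import random

random.seed(1)

def broadcast_round(outputs, source, source_bit):
    """One PULL round: each agent samples one random agent and copies its
    output bit; the source always outputs its own input bit."""
    n = len(outputs)
    new = [outputs[random.randrange(n)] for _ in range(n)]
    new[source] = source_bit
    return new

n, source, source_bit = 30, 0, 1
outputs = [random.randrange(2) for _ in range(n)]  # arbitrary initial outputs
for _ in range(10_000):
    outputs = broadcast_round(outputs, source, source_bit)
    if all(b == source_bit for b in outputs):
        break  # all-correct state is absorbing

assert all(b == source_bit for b in outputs)
```

This toy already reveals the two axes the paper optimizes: how much information each interaction carries (here a full output bit plus, implicitly, agents' roles) and how fast the population converges from an adversarial initial state.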
We first design a general compiler which can essentially transform any
self-stabilizing algorithm with a certain property that uses l-bit
messages to one that uses only O(log l)-bit messages, while paying only a
small penalty in the running time. By applying this compiler recursively we
then obtain a self-stabilizing Clock Synchronization protocol, in which agents
synchronize their clocks modulo some given integer T, within Õ(log n log T) rounds w.h.p., and using messages that contain 3 bits only.
We then employ the new Clock Synchronization tool to obtain a
self-stabilizing Majority Bit Dissemination protocol which converges in Õ(log n) time, w.h.p., on every initial configuration, provided that the
ratio of sources supporting the minority opinion is bounded away from half.
Moreover, this protocol also uses only 3 bits per interaction.