
    Silent MST approximation for tiny memory

    In network distributed computing, the minimum spanning tree (MST) is one of the key problems, and silent self-stabilization one of the most demanding fault-tolerance properties. For this problem and this model, a polynomial-time algorithm with O(log^2 n) memory is known for the state model. This is memory optimal for weights in the classic [1, poly(n)] range (where n is the size of the network). In this paper, we go below this O(log^2 n) memory bound, using approximation and parametrized complexity. More specifically, our contributions are two-fold. We introduce a second parameter s, the space needed to encode a weight, and we design a silent polynomial-time self-stabilizing algorithm with space O(log n · s). In turn, this yields an approximation algorithm for the problem, with a trade-off between the approximation ratio of the solution and the space used. For polynomial weights, this trade-off goes smoothly from memory O(log n) for an n-approximation to memory O(log^2 n) for exact solutions, with, for example, memory O(log n log log n) for a 2-approximation.
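    The abstract does not spell out how weights are truncated; purely as a hedged illustration of the 2-approximation point of the trade-off, one standard trick is to round every edge weight up to the nearest power of 2: any exact MST routine run on the rounded weights returns a tree whose original weight is at most twice optimal, and a rounded weight can then be stored by its exponent alone, i.e. in O(log log n) bits for polynomial weights. The minimal Python sketch below (networkx assumed available, and not the paper's distributed algorithm) demonstrates only this rounding argument.

        import math
        import networkx as nx  # assumed available; used only as a reference MST routine

        def round_up_to_power_of_two(w):
            """Smallest power of 2 that is >= w (for w >= 1)."""
            return 1 << math.ceil(math.log2(w))

        def approx_mst_weight(graph):
            """Weight of a spanning tree that is at most 2x the optimum MST weight."""
            rounded = nx.Graph()
            for u, v, data in graph.edges(data=True):
                rounded.add_edge(u, v, weight=round_up_to_power_of_two(data["weight"]))
            tree = nx.minimum_spanning_tree(rounded)
            # Evaluate the chosen edges with their original weights:
            # w(T) <= w_rounded(T) <= w_rounded(MST) <= 2 * w(MST).
            return sum(graph[u][v]["weight"] for u, v in tree.edges())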

    A Self-Stabilizing Memory Efficient Algorithm for the Minimum Diameter Spanning Tree under an Omnipotent Daemon

    Routing protocols are at the core of distributed system performance, especially in the presence of faults. A classical approach to this problem is to build a spanning tree of the distributed system. Numerous spanning tree construction algorithms exist, depending on the metric being optimized (total weight, height, distance with respect to a particular process, ...), both in fault-free and faulty environments. In this paper, we aim at optimizing the diameter of the spanning tree by constructing a minimum diameter spanning tree. We target environments subject to transient faults (i.e., faults of finite duration). Hence, we present a self-stabilizing algorithm for the minimum diameter spanning tree construction problem in the state model. Our protocol has the following attractive features. First, it is the first algorithm for this problem that operates under the unfair and distributed adversary (or daemon); in other words, no restriction is made on the asynchronous behavior of the system. Second, our algorithm needs only O(log n) bits of memory per process (where n is the number of processes), which improves the previous result by a factor of n. These features are not achieved to the detriment of the convergence time, which remains polynomial.
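    The abstract does not list the paper's variables; purely as an illustration of what "O(log n) bits per process" can look like, the hypothetical record below stores only a parent identifier, a root identifier, and a hop count, each ranging over at most n values and hence each encodable in O(log n) bits.

        from dataclasses import dataclass

        @dataclass
        class ProcessState:
            """Hypothetical per-process state for a tree construction; field names are illustrative."""
            parent: int    # identifier of the neighbour adopted as parent: O(log n) bits
            root: int      # identifier of the currently adopted root: O(log n) bits
            distance: int  # hop count to the root, at most n, so O(log n) bits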

    A Self-Stabilizing Memory Efficient Algorithm for the Minimum Diameter Spanning Tree under an Omnipotent Daemon

    The diameter of a network is one of the most fundamental network parameters. Being able to compute the diameter is an important problem in the analysis of large networks, and this parameter has many important practical applications in real networks. As a consequence, it is natural to study this problem in a distributed system, and more specifically in a distributed system tolerant to transient faults. More precisely, we are interested in the problem of identifying one of the centres of the graph. Once this is done, we construct a minimum diameter spanning tree rooted at this centre. Of course, the challenging part is to compute a centre of the graph. We present a uniform self-stabilizing algorithm for the minimum diameter spanning tree construction problem in the state model. Our protocol has several attractive features that make it suitable for practical purposes. It is the first algorithm for this problem that operates under the unfair adversary (also called the unfair daemon); in other words, no restriction is made on the distributed behaviour of the system. Consequently, this is the hardest adversary to deal with. Moreover, our algorithm needs only O(log n) bits of memory per process (where n is the number of processes), which improves the previous result by a factor of n. These improvements are not achieved to the detriment of the convergence time, which remains reasonable at O(n^2) rounds.
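    As a hedged, centralised rendering of the two phases the abstract describes (find a centre, then build a tree rooted at it), the Python sketch below computes BFS eccentricities, picks a vertex of minimum eccentricity, and returns the BFS tree rooted there. It assumes a connected unweighted graph given as an adjacency dict and, unlike the paper's algorithm, is neither distributed nor self-stabilizing.

        from collections import deque

        def bfs_levels(adj, source):
            """Hop distance from source to every vertex (connected graph assumed)."""
            dist = {source: 0}
            queue = deque([source])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            return dist

        def centre_and_bfs_tree(adj):
            """A vertex of minimum eccentricity and a BFS tree (as a parent map) rooted at it."""
            ecc = {v: max(bfs_levels(adj, v).values()) for v in adj}
            centre = min(ecc, key=ecc.get)
            dist = bfs_levels(adj, centre)
            parent = {v: next(u for u in adj[v] if dist[u] == dist[v] - 1)
                      for v in adj if v != centre}
            return centre, parent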