
    Memory lower bounds for deterministic self-stabilization

    In the context of self-stabilization, a \emph{silent} algorithm guarantees that the register of every node does not change once the algorithm has stabilized. At the end of the 90's, Dolev et al. [Acta Inf. '99] showed that, for finding the centers of a graph, for electing a leader, or for constructing a spanning tree, every silent algorithm must use a memory of $\Omega(\log n)$ bits per register in $n$-node networks. Similarly, Korman et al. [Dist. Comp. '07] proved, using the notion of proof-labeling scheme, that, for constructing a minimum-weight spanning tree (MST), every silent algorithm must use a memory of $\Omega(\log^2 n)$ bits per register. It follows that requiring the algorithm to be silent has a cost in terms of memory space, while, in the context of self-stabilization, where every node constantly checks the states of its neighbors, the silence property can be of limited practical interest. In fact, it is known that relaxing this requirement results in algorithms with smaller space complexity. In this paper, we aim at measuring how much can be gained in terms of memory by using arbitrary self-stabilizing algorithms, not necessarily silent ones. To our knowledge, the only known lower bound on the memory requirement for general algorithms, also established at the end of the 90's, is due to Beauquier et al. [PODC '99], who proved that registers of constant size are not sufficient for leader election algorithms. We improve this result by establishing a tight lower bound of $\Theta(\log \Delta + \log\log n)$ bits per register for self-stabilizing algorithms solving $(\Delta+1)$-coloring or constructing a spanning tree in networks of maximum degree $\Delta$. The lower bound of $\Omega(\log\log n)$ bits per register also holds for leader election.
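
    To make the register size concrete, a standard greedy self-stabilizing $(\Delta+1)$-coloring rule stores only a color in $\{0, \dots, \Delta\}$ per node, i.e. about $\lceil \log_2(\Delta+1) \rceil$ bits, and any node whose color clashes with a neighbor recolors itself with the smallest free color. The sketch below simulates this textbook rule under a central daemon; it is only meant to illustrate what a per-node register holds, not the algorithms or lower-bound constructions of the paper, and the function names are ours.

```python
import random

def stabilize_coloring(adj, colors=None, max_steps=10_000):
    """Greedy self-stabilizing (Delta+1)-coloring under a central daemon.

    adj: dict mapping node -> set of neighbors (undirected graph).
    colors: optional initial (possibly corrupted) register contents.
    Each register holds one color in {0, ..., Delta}, i.e. about
    ceil(log2(Delta + 1)) bits per node.
    """
    delta = max(len(nbrs) for nbrs in adj.values())
    if colors is None:
        colors = {v: random.randint(0, delta) for v in adj}
    for _ in range(max_steps):
        # Nodes whose color clashes with some neighbor are "enabled".
        enabled = [v for v in adj if any(colors[v] == colors[u] for u in adj[v])]
        if not enabled:
            return colors  # legitimate (properly colored) configuration reached
        v = random.choice(enabled)          # the daemon activates one enabled node
        used = {colors[u] for u in adj[v]}  # colors currently taken by neighbors
        colors[v] = min(c for c in range(delta + 1) if c not in used)
    raise RuntimeError("did not stabilize within max_steps")

# Example: a 5-cycle started from arbitrary (corrupted) registers.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(stabilize_coloring(cycle))
```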

    Deterministic Capacity of MIMO Relay Networks

    The deterministic capacity of a relay network is the capacity of the network when relays are restricted to transmitting \emph{reliable} information, that is, an (asymptotically) deterministic function of the source message. In this paper it is shown that the deterministic capacity of a number of MIMO relay networks can be found in the low-power regime where $\mathrm{SNR} \to 0$. This is accomplished by deriving single-letter upper bounds and finding the limit of these as $\mathrm{SNR} \to 0$. The advantage of this technique is that it overcomes the difficulty of finding optimum distributions for mutual information. Comment: Submitted to IEEE Transactions on Information Theory.
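
    As a generic illustration of why the low-power regime is more tractable (a standard scalar AWGN example, not the paper's MIMO relay derivation): the capacity $C(\mathrm{SNR}) = \tfrac{1}{2}\log_2(1 + \mathrm{SNR})$ behaves like $\mathrm{SNR}/(2\ln 2)$ as $\mathrm{SNR} \to 0$, so capacity becomes linear in the SNR and the choice of input distribution stops mattering to first order. The short check below verifies this limit numerically.

```python
import math

def awgn_capacity(snr: float) -> float:
    """Capacity in bits per channel use of a scalar AWGN channel."""
    return 0.5 * math.log2(1.0 + snr)

# As SNR -> 0, C(SNR) / SNR should approach 1 / (2 ln 2) ~ 0.7213.
for snr in (1e-1, 1e-3, 1e-5, 1e-7):
    print(f"SNR={snr:.0e}  C/SNR={awgn_capacity(snr) / snr:.6f}")
print("limit 1/(2 ln 2) =", 1.0 / (2.0 * math.log(2.0)))
```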

    Network synchronization: Optimal and Pessimal Scale-Free Topologies

    By employing a recently introduced optimization algorithm we explicitly design optimally synchronizable (unweighted) networks for any given scale-free degree distribution. We explore how the optimization process affects degree-degree correlations and observe a generic tendency towards disassortativity. Still, we show that there is not a one-to-one correspondence between synchronizability and disassortativity. On the other hand, we study the nature of optimally un-synchronizable networks, that is, networks whose topology minimizes the range of stability of the synchronous state. The resulting ``pessimal networks'' turn out to have a highly assortative string-like structure. We also derive a rigorous lower bound for the Laplacian eigenvalue ratio controlling synchronizability, which helps in understanding the impact of degree correlations on network synchronizability. Comment: 11 pages, 4 figures, submitted to J. Phys. A (proceedings of Complex Networks 2007).
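
    In the master stability function picture, the synchronizability of an unweighted network is commonly quantified by the Laplacian eigenvalue ratio $\lambda_N/\lambda_2$: the smaller the ratio, the wider the stability range of the synchronous state. The self-contained sketch below computes this ratio for a small example graph; it only illustrates the quantity being optimized, not the optimization algorithm or the lower bound derived in the paper.

```python
import numpy as np

def laplacian_eigenratio(adjacency: np.ndarray) -> float:
    """Return lambda_N / lambda_2 for a connected undirected graph.

    A smaller ratio means a wider stability range for the synchronous
    state (better synchronizability) in the master stability framework.
    """
    degrees = np.diag(adjacency.sum(axis=1))
    laplacian = degrees - adjacency
    eigenvalues = np.linalg.eigvalsh(laplacian)  # sorted in ascending order
    return eigenvalues[-1] / eigenvalues[1]      # lambda_N / lambda_2

# Example: a ring of 6 nodes versus the same ring with one chord added.
ring = np.zeros((6, 6))
for i in range(6):
    ring[i, (i + 1) % 6] = ring[(i + 1) % 6, i] = 1.0
chordal = ring.copy()
chordal[0, 3] = chordal[3, 0] = 1.0

print("ring      :", laplacian_eigenratio(ring))
print("ring+chord:", laplacian_eigenratio(chordal))
```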

    Robust Leader Election in a Fast-Changing World

    We consider the problem of electing a leader among nodes in a highly dynamic network where the adversary has unbounded capacity to insert and remove nodes (including the leader) from the network and change connectivity at will. We present a randomized Las Vegas algorithm that (re)elects a leader in $O(D \log n)$ rounds with high probability, where $D$ is a bound on the dynamic diameter of the network and $n$ is the maximum number of nodes in the network at any point in time. We assume a model of broadcast-based communication where a node can send only one message of $O(\log n)$ bits per round and is not aware of the receivers in advance. Thus, our results also apply to mobile wireless ad-hoc networks, improving over the optimal (for deterministic algorithms) $O(Dn)$ solution presented at FOMC 2011. We show that our algorithm is optimal by proving that any randomized Las Vegas algorithm takes at least $\Omega(D \log n)$ rounds to elect a leader with high probability, which shows that our algorithm yields the best possible (up to constants) termination time. Comment: In Proceedings FOMC 2013, arXiv:1310.459
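
    A common building block in this broadcast model is to flood the largest random identifier: every node draws a random ID of $O(\log n)$ bits and, in each round, broadcasts the largest ID it has seen; after a number of rounds proportional to the diameter, the node holding the maximum ID is the unique leader with high probability. The sketch below simulates this folklore scheme on a static graph purely to illustrate the communication model; the paper's algorithm additionally has to cope with adversarial topology changes, which this sketch does not.

```python
import random

def flood_max_id(adj, rounds):
    """Folklore leader election: flood the maximum random ID.

    adj: dict node -> set of neighbors; `rounds` should be at least the
    graph diameter. Each node broadcasts one O(log n)-bit value per round.
    """
    n = len(adj)
    ids = {v: random.getrandbits(4 * max(1, n.bit_length())) for v in adj}
    best = dict(ids)  # largest ID seen so far, per node
    for _ in range(rounds):
        incoming = {v: [best[u] for u in adj[v]] for v in adj}  # synchronous broadcasts
        for v in adj:
            best[v] = max([best[v]] + incoming[v])
    # A node considers itself leader iff its own ID equals the flooded maximum.
    return [v for v in adj if ids[v] == best[v] == max(best.values())]

# Example: a path on 6 nodes, flooded for diameter-many rounds.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 6} for i in range(6)}
print(flood_max_id(path, rounds=5))
```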

    A Lower Bound Technique for Communication in BSP

    Communication is a major factor determining the performance of algorithms on current computing systems; it is therefore valuable to provide tight lower bounds on the communication complexity of computations. This paper presents a lower bound technique for the communication complexity in the bulk-synchronous parallel (BSP) model of a given class of DAG computations. The derived bound is expressed in terms of the switching potential of a DAG, that is, the number of permutations that the DAG can realize when viewed as a switching network. The proposed technique yields tight lower bounds for the fast Fourier transform (FFT), and for any sorting and permutation network. A stronger bound is also derived for the periodic balanced sorting network, by applying this technique to suitable subnetworks. Finally, we demonstrate that the switching potential captures communication requirements even in computational models different from BSP, such as the I/O model and the LPRAM.
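
    As a small, self-contained illustration of the switching potential (a toy model of our own, not the paper's lower-bound machinery): view each comparator of a network as a 2x2 switch that either passes or swaps its two wires, and count the distinct input-output routings over all switch settings. For a sorting network this count is $n!$, by the classical argument that a sorting network can route every permutation, which is consistent with the tightness claimed above for sorting and permutation networks.

```python
from itertools import product

def switching_potential(wires, switches):
    """Count the permutations realizable by a network of 2x2 switches.

    `switches` is an ordered list of wire pairs; each switch may either
    pass its two wires through or swap them. The switching potential is
    the number of distinct wire routings over all 2^len(switches) settings.
    """
    realized = set()
    for setting in product((False, True), repeat=len(switches)):
        routing = list(range(wires))  # routing[w] = input currently on wire w
        for swap, (a, b) in zip(setting, switches):
            if swap:
                routing[a], routing[b] = routing[b], routing[a]
        realized.add(tuple(routing))
    return len(realized)

# A 4-wire odd-even transposition sorting network: its switching potential
# should equal 4! = 24, i.e. every permutation is realizable.
net = [(0, 1), (2, 3), (1, 2), (0, 1), (2, 3), (1, 2)]
print(switching_potential(4, net), "out of", 24)
```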

    Communication cost of consensus for nodes with limited memory

    Motivated by applications in blockchains and sensor networks, we consider a model of $n$ nodes trying to reach consensus on their majority bit. Each node $i$ is assigned a bit at time zero, and is a finite automaton with $m$ bits of memory (i.e., $2^m$ states) and a Poisson clock. When the clock of $i$ rings, $i$ can choose to communicate, and is then matched to a uniformly chosen node $j$. The nodes $j$ and $i$ may update their states based on the state of the other node. Previous work has focused on minimizing the time to consensus and the probability of error, while our goal is minimizing the number of communications. We show that when $m > 3 \log\log\log(n)$, consensus can be reached at linear communication cost, but this is impossible if $m < \log\log\log(n)$. We also study a synchronous variant of the model, where our upper and lower bounds on $m$ for achieving linear communication cost are $2 \log\log\log(n)$ and $\log\log\log(n)$, respectively. A key step is to distinguish when nodes can become aware of knowing the majority bit and stop communicating. We show that this is impossible if their memory is too low. Comment: 62 pages, 5 figures.
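
    To ground the interaction model, the sketch below simulates uniformly random pairwise meetings with a deliberately tiny per-node memory, using the classical three-state "undecided" dynamics (two opinion states plus an undecided state) as a stand-in automaton. It shows how nodes with roughly two bits of state drift towards the majority bit; it is not the paper's protocol and does not implement its mechanism for deciding when to stop communicating.

```python
import random
from collections import Counter

def undecided_state_dynamics(bits, max_interactions=100_000):
    """Three-state majority gossip over uniformly random pairwise meetings.

    Each node stores one of {0, 1, 'U'} (about two bits of memory). When two
    nodes holding opposite bits meet, both become undecided; an undecided
    node adopts the bit of a decided partner. Random pairs stand in for
    Poisson clocks combined with uniform matching.
    """
    state = list(bits)
    n = len(state)
    for t in range(max_interactions):
        i, j = random.sample(range(n), 2)
        a, b = state[i], state[j]
        if a != 'U' and b != 'U' and a != b:
            state[i] = state[j] = 'U'
        elif a == 'U' and b != 'U':
            state[i] = b
        elif b == 'U' and a != 'U':
            state[j] = a
        if all(s == state[0] for s in state):
            return state[0], t + 1  # consensus value, interactions used
    return None, max_interactions

# 200 nodes, 60% ones: consensus should land on the majority bit w.h.p.
initial = [1] * 120 + [0] * 80
print(Counter(initial))
print(undecided_state_dynamics(initial))
```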