    Codes for Graph Erasures

    Motivated by systems where the information is represented by a graph, such as neural networks, associative memories, and distributed systems, we present in this work a new class of codes, called codes over graphs. Under this paradigm, the information is stored on the edges of an undirected graph, and a code over graphs is a set of graphs. A node failure is the event where all edges in the neighborhood of the failed node have been erased. We say that a code over graphs can tolerate $\rho$ node failures if it can correct the erased edges of any $\rho$ failed nodes in the graph. While the construction of such codes can be easily accomplished by MDS codes, their field size has to be at least $O(n^2)$, where $n$ is the number of nodes in the graph. In this work we present several constructions of codes over graphs with smaller field size. In particular, we present optimal codes over graphs correcting two node failures over the binary field, when the number of nodes in the graph is a prime number. We also present a construction of codes over graphs correcting $\rho$ node failures for all $\rho$ over a field of size at least $(n+1)/2-1$, and show how to improve this construction for optimal codes when $\rho = 2, 3$. Comment: To appear in IEEE International Symposium on Information Theory.
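
    To make the failure model concrete, here is a small Python sketch (our own illustration, not the paper's construction; the function name and parameters are hypothetical) that enumerates which edge symbols are erased when a set of nodes fails in a complete graph:

        import itertools

        def node_failure_erasures(n, failed):
            # Edges of the complete undirected graph on n nodes, as unordered pairs.
            edges = set(itertools.combinations(range(n), 2))
            # A failed node erases every edge in its neighborhood.
            return {e for e in edges if e[0] in failed or e[1] in failed}

        # Two node failures erase 2(n-1) - 1 edges (their shared edge counts once),
        # out of the n(n-1)/2 edge symbols a codeword carries.
        print(len(node_failure_erasures(7, failed={2, 5})))  # 11

    These erasure patterns are highly structured (every lost symbol touches a failed endpoint), which is why treating them as generic erasures with an MDS code over a large field is wasteful and why tailored constructions can do better.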

    Tree Codes Improve Convergence Rate of Consensus Over Erasure Channels

    We study the problem of achieving average consensus between a group of agents over a network with erasure links. In the context of consensus problems, the unreliability of communication links between nodes has been traditionally modeled by allowing the underlying graph to vary with time. In other words, depending on the realization of the link erasures, the underlying graph at each time instant is assumed to be a subgraph of the original graph. Implicit in this model is the assumption that the erasures are symmetric: if at time t the packet from node i to node j is dropped, the same is true for the packet transmitted from node j to node i. However, in practical wireless communication systems this assumption is unreasonable and, due to the lack of symmetry, standard averaging protocols cannot guarantee that the network will reach consensus to the true average. In this paper we explore the use of channel coding to improve the performance of consensus algorithms. For symmetric erasures, we show that, for certain ranges of the system parameters, repetition codes can speed up the convergence rate. For asymmetric erasures we show that tree codes (which have recently been designed for erasure channels) can be used to simulate the performance of the original "unerased" graph. Thus, unlike conventional consensus methods, we can guarantee convergence to the average in the asymmetric case. The price is a slowdown in the convergence rate, relative to the unerased network, which is still often faster than the convergence rate of conventional consensus algorithms over noisy links.
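
    The baseline protocol being sped up is plain synchronous averaging over the surviving links. A minimal sketch, assuming i.i.d. symmetric link erasures (our toy setup, not the paper's tree-code scheme): dropping a link in both directions at once keeps the sum of the states invariant, so the network still converges to the true average.

        import random

        def consensus_step(x, edges, eps, p_erase):
            # One synchronous averaging step; each undirected link is erased
            # with probability p_erase in BOTH directions (symmetric erasures).
            delta = [0.0] * len(x)
            for i, j in edges:
                if random.random() < p_erase:
                    continue  # link down: neither endpoint hears the other
                delta[i] += eps * (x[j] - x[i])
                delta[j] += eps * (x[i] - x[j])  # mirror update preserves sum(x)
            return [xi + di for xi, di in zip(x, delta)]

        x = [1.0, 4.0, 2.0, 7.0]                  # true average is 3.5
        ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
        for _ in range(300):
            x = consensus_step(x, ring, eps=0.3, p_erase=0.2)
        print(x)  # all entries near 3.5

    If a packet can be lost in only one direction, the mirrored update above breaks, the sum of the states drifts, and the protocol settles on the wrong value; that is precisely the failure mode the tree-code simulation of the unerased graph repairs.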

    Coding with Encoding Uncertainty

    We study the channel coding problem when errors and uncertainty occur in the encoding process. For simplicity we assume the channel between the encoder and the decoder is perfect. Focusing on linear block codes, we model the encoding uncertainty as erasures on the edges in the factor graph of the encoder generator matrix. We first take a worst-case approach and find the maximum tolerable number of erasures for perfect error correction. Next, we take a probabilistic approach and derive a sufficient condition on the rate of a set of codes such that the decoding error probability vanishes as the blocklength tends to infinity. In both scenarios, due to the inherent asymmetry of the problem, we derive the results from first principles; this indicates that robustness to encoding errors requires properties of codes different from the classical ones. Comment: 12 pages; a shorter version of this work will appear in the proceedings of ISIT 201
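
    As a toy version of this erasure model (our illustration; the function and variable names are assumptions), erasing the edge (i, j) of the generator matrix's factor graph means message bit i never reaches the adder computing codeword bit j. Note that the decoder then receives a plausible but wrong symbol rather than a marked erasure, which is the asymmetry the abstract refers to.

        import numpy as np

        def encode_with_edge_erasures(u, G, erased_edges):
            # GF(2) encoding c = uG, except each erased edge (i, j) drops the
            # contribution of message bit i to codeword bit j.
            G = G.copy()
            for i, j in erased_edges:
                G[i, j] = 0
            return (u @ G) % 2

        G = np.array([[1, 0, 0, 1, 1],
                      [0, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1]])
        u = np.array([1, 1, 0])
        print(encode_with_edge_erasures(u, G, []))        # [1 1 0 0 1], correct
        print(encode_with_edge_erasures(u, G, [(0, 3)]))  # [1 1 0 1 1], silent error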

    Cooperative Local Repair in Distributed Storage

    Erasure-correcting codes that support local repair of codeword symbols have attracted substantial attention recently for their application in distributed storage systems. This paper investigates a generalization of the usual locally repairable codes. In particular, this paper studies a class of codes with the following property: any small set of codeword symbols can be reconstructed (repaired) from a small number of other symbols. This is referred to as cooperative local repair. The main contributions of this paper are bounds on the trade-off between the minimum distance and the dimension of such codes, as well as explicit constructions of families of codes that enable cooperative local repair. Some other results regarding cooperative local repair are also presented, including an analysis for the well-known Hadamard/Simplex codes. Comment: Fixed some minor issues in Theorem 1. EURASIP Journal on Advances in Signal Processing, December 201
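
    The Hadamard/Simplex analysis has a particularly clean instance. In the binary Simplex code, coordinates are indexed by the nonzero vectors $v \in \mathbb{F}_2^k$ and carry the inner product $\langle m, v \rangle$, so $c_v = c_u \oplus c_w$ whenever $u \oplus w = v$: every symbol has many disjoint two-symbol repair groups, which is what lets several erased symbols be rebuilt cooperatively from a few survivors. A minimal sketch (names are ours, not the paper's notation):

        import itertools

        k = 3
        coords = [v for v in itertools.product([0, 1], repeat=k) if any(v)]

        def simplex_codeword(m):
            # One symbol per nonzero v: the GF(2) inner product <m, v>.
            return {v: sum(a * b for a, b in zip(m, v)) % 2 for v in coords}

        c = simplex_codeword((1, 0, 1))

        # Local repair: if u XOR w == v then c[v] = c[u] XOR c[w], so an erased
        # symbol is rebuilt from just two survivors; distinct erased symbols can
        # use disjoint pairs, which is the cooperative-repair property in action.
        v, u, w = (1, 1, 0), (1, 0, 0), (0, 1, 0)
        assert c[v] == c[u] ^ c[w]
        print("repaired c[%s] = %d" % (str(v), c[u] ^ c[w]))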

    Linear-time list recovery of high-rate expander codes

    We show that expander codes, when properly instantiated, are high-rate list recoverable codes with linear-time list recovery algorithms. List recoverable codes have been useful recently in constructing efficiently list-decodable codes, as well as explicit constructions of matrices for compressive sensing and group testing. Previous list recoverable codes with linear-time decoding algorithms have all had rate at most 1/2; in contrast, our codes can have rate $1 - \epsilon$ for any $\epsilon > 0$. We can plug our high-rate codes into a construction of Meir (2014) to obtain linear-time list recoverable codes of arbitrary rates, which approach the optimal trade-off between the number of non-trivial lists provided and the rate of the code. While list recovery is interesting on its own, our primary motivation is applications to list decoding. A slight strengthening of our result would imply linear-time and optimally list-decodable codes for all rates, and our work is a step in the direction of solving this important problem.
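
    For readers unfamiliar with the primitive: in list recovery the decoder receives a small list of candidate symbols at every position and must output all codewords consistent with (almost) all of the lists. A brute-force sketch over a toy parity-check code (our example; the paper's contribution is doing this in linear time for high-rate expander codes):

        import itertools

        def list_recover(codewords, lists, alpha=0.0):
            # Return every codeword whose symbol falls outside the candidate
            # list in at most an alpha fraction of the positions.
            n = len(lists)
            return [c for c in codewords
                    if sum(c[i] not in lists[i] for i in range(n)) <= alpha * n]

        # Toy code: the [4, 3] binary even-weight (single parity check) code.
        code = [c for c in itertools.product([0, 1], repeat=4) if sum(c) % 2 == 0]
        lists = [{0}, {0, 1}, {1}, {0, 1}]
        print(list_recover(code, lists))  # [(0, 0, 1, 1), (0, 1, 1, 0)]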