6 research outputs found

    Universal and Robust Distributed Network Codes

    Full text link
    Random linear network codes can be designed and implemented in a distributed manner, with low computational complexity. However, these codes are classically implemented over finite fields whose size depends on global network parameters (e.g., the size of the network and the number of sinks) that may not be known prior to code design. Also, if new nodes join, the entire network code may have to be redesigned. In this work, we present the first universal and robust distributed linear network coding schemes. Our schemes are universal since they are independent of all network parameters. They are robust since, if nodes join or leave, the remaining nodes do not need to change their coding operations and the receivers can still decode. They are distributed since nodes need only have topological information about the part of the network upstream of them, which can be naturally streamed as part of the communication protocol. We present both probabilistic and deterministic schemes that are all asymptotically rate-optimal in the coding block-length and have guarantees of correctness. Our probabilistic designs are computationally efficient, with order-optimal complexity. Our deterministic designs guarantee zero-error decoding, albeit via codes with high computational complexity in general. Our coding schemes are based on network codes over "scalable fields". Instead of choosing coding coefficients from one field at every node, each node uses linear coding operations over an "effective field-size" that depends on the node's distance from the source node. The analysis of our schemes requires technical tools that may be of independent interest. In particular, we generalize the Schwartz-Zippel lemma by proving a non-uniform version, wherein variables are chosen from sets of possibly different sizes. We also provide a novel robust distributed algorithm to assign unique IDs to network nodes. Comment: 12 pages, 7 figures, 1 table, under submission to INFOCOM 201
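    The coding model the abstract refers to, random linear combinations whose global coefficient vectors travel in packet headers and are inverted at the sink by Gaussian elimination, can be illustrated with a small toy example. The sketch below is not the paper's scalable-field construction: it uses a single small prime field GF(257), and the names random_combination and decode, as well as the parameters p, k, and L, are hypothetical choices for illustration only.

    ```python
    # Toy random linear network coding over GF(p) for a small prime p.
    # Not the paper's construction; illustration only.
    import random

    p = 257          # assumed prime field size for this sketch
    k = 4            # number of source packets (generation size)
    L = 8            # symbols per packet

    source = [[random.randrange(p) for _ in range(L)] for _ in range(k)]

    def random_combination(packets, coeffs_list):
        """Mix packets with fresh random local coefficients over GF(p);
        return (global coefficient vector, coded payload)."""
        local = [random.randrange(p) for _ in packets]
        g = [sum(l * c for l, c in zip(local, col)) % p
             for col in zip(*coeffs_list)]
        payload = [sum(l * pkt[i] for l, pkt in zip(local, packets)) % p
                   for i in range(L)]
        return g, payload

    # The source emits coded packets; its inputs carry unit coefficient vectors.
    unit = lambda i: [1 if j == i else 0 for j in range(k)]
    outgoing = [random_combination(source, [unit(i) for i in range(k)])
                for _ in range(k + 1)]

    # An intermediate node re-mixes whatever it has received, without knowing
    # the topology; the global coefficients in the headers keep track of the mix.
    relayed = [random_combination([x for _, x in outgoing[:3]],
                                  [g for g, _ in outgoing[:3]])
               for _ in range(3)]

    def decode(received, k):
        """Recover the k source packets by Gaussian elimination over GF(p),
        once the received coefficient vectors have rank k (else return None)."""
        A = [list(g) + list(x) for g, x in received]
        n, r = len(A[0]), 0
        for col in range(k):
            piv = next((i for i in range(r, len(A)) if A[i][col]), None)
            if piv is None:
                return None                      # not yet full rank
            A[r], A[piv] = A[piv], A[r]
            inv = pow(A[r][col], p - 2, p)       # modular inverse (Fermat)
            A[r] = [(v * inv) % p for v in A[r]]
            for i in range(len(A)):
                if i != r and A[i][col]:
                    f = A[i][col]
                    A[i] = [(A[i][j] - f * A[r][j]) % p for j in range(n)]
            r += 1
        return [row[k:] for row in A[:k]]

    decoded = decode(outgoing + relayed, k)
    print("sink decodes the sources:", decoded == source)
    ```

    The example uses one fixed field everywhere; the paper's point is precisely that each node could instead operate over an effective field-size that grows with its distance from the source.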

    Resource Tuned Optimal Random Network Coding for Single Hop Multicast future 5G Networks

    Get PDF
    Optimal random network coding reduces the complexity of computing coding coefficients and encoded packets; the coefficients are chosen so that minimal transmission bandwidth suffices to deliver them to the destinations, and decoding can begin as soon as encoded packets start arriving, with lower computational complexity. In traditional random network coding, by contrast, decoding is possible only after all encoded packets have been received. Optimal random network coding also reduces the cost of computation. In this work, the size of the coding coefficient matrix is determined by the layer size, which defines the number of symbols or packets involved in the coding process. The matrix elements are chosen to minimize the additions and multiplications performed during encoding and decoding: sparseness is introduced into the coding coefficients, and a systematic, lower-triangular coefficient matrix makes partial decoding possible. For optimal use of computational resources, the coefficient matrix size is set by a windowing size tuned to the unoccupied resource budget, such as available memory.
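    The lower-triangular, sparse coefficient structure described above can be illustrated with a small toy example. The sketch below is not the paper's exact design: the field GF(257), the window size, and the sparsity pattern are assumptions made for illustration. It shows why a lower-triangular matrix with a nonzero diagonal lets the receiver decode packet i by forward substitution as soon as the first i coded packets arrive.

    ```python
    # Toy sparse lower-triangular coding matrix with progressive decoding.
    # Illustration only; parameters and sparsity pattern are assumed.
    import random

    p = 257                    # assumed prime field size
    k = 6                      # window size: packets in one coding "layer"
    L = 4                      # symbols per packet
    random.seed(0)

    source = [[random.randrange(p) for _ in range(L)] for _ in range(k)]

    # Sparse lower-triangular coefficients: nonzero diagonal, at most two
    # extra nonzeros per row below the diagonal.
    C = [[0] * k for _ in range(k)]
    for i in range(k):
        C[i][i] = random.randrange(1, p)
        for j in random.sample(range(i), min(2, i)):
            C[i][j] = random.randrange(1, p)

    # Encoded packet i mixes only source packets 0..i.
    encoded = [[sum(C[i][j] * source[j][s] for j in range(k)) % p
                for s in range(L)]
               for i in range(k)]

    # Progressive (partial) decoding by forward substitution: packet i is
    # recovered from encoded[i] plus the packets already decoded, so the
    # receiver never waits for the whole generation.
    decoded = []
    for i in range(k):
        rhs = [(encoded[i][s] - sum(C[i][j] * decoded[j][s] for j in range(i))) % p
               for s in range(L)]
        inv = pow(C[i][i], p - 2, p)           # modular inverse of the diagonal
        decoded.append([(v * inv) % p for v in rhs])

    print("partial decoding recovers the sources:", decoded == source)
    ```

    The sparsity keeps each row to a handful of multiplications, and the triangular structure is what allows decoding to overlap with reception rather than waiting for all k coded packets.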

    Network Coding for Error Correction

    Get PDF
    In this thesis, network error correction is considered from both theoretical and practical viewpoints. Theoretical parameters such as network structure and type of connection (multicast vs. nonmulticast) have a profound effect on network error correction capability. This work is also dictated by the practical network issues that arise in wireless ad-hoc networks, networks with limited computational power (e.g., sensor networks) and real-time data streaming systems (e.g., video/audio conferencing or media streaming). Firstly, multicast network scenarios with probabilistic error and erasure occurrence are considered. In particular, it is shown that in networks with both random packet erasures and errors, increasing the relative occurrence of erasures compared to errors favors network coding over forwarding at network nodes, and vice versa. Also, fountain-like error-correcting codes, for which redundancy is incrementally added until decoding succeeds, are constructed. These codes are appropriate for use in scenarios where the upper bound on the number of errors is unknown a priori. Secondly, network error correction in multisource multicast and nonmulticast network scenarios is discussed. Capacity regions for multisource multicast network error correction with both known and unknown topologies (coherent and noncoherent network coding) are derived. Several approaches to lower- and upper-bounding error-correction capacity regions of general nonmulticast networks are given. For 3-layer two-sink and nested-demand nonmulticast network topologies some of the given lower and upper bounds match. For these network topologies, code constructions that employ only intrasession coding are designed. These designs can be applied to streaming erasure correction code constructions.
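    The "fountain-like" idea mentioned above, adding redundancy incrementally until decoding succeeds, can be illustrated for the simpler erasure-only case. The sketch below is not the thesis's error-correcting construction (it handles lost packets, not corrupted ones), and the field size, generation size, and erasure probability are assumptions: the sender keeps emitting random linear combinations over a small prime field until the receiver's coefficient matrix reaches full rank.

    ```python
    # Rateless/fountain principle for packet erasures only; illustration, not
    # the thesis's error-correcting code.
    import random

    p, k = 257, 5              # assumed field size and generation size

    def rank_mod_p(rows, k):
        """Rank of the received coefficient vectors over GF(p)."""
        A = [r[:] for r in rows]
        rank = 0
        for col in range(k):
            piv = next((i for i in range(rank, len(A)) if A[i][col]), None)
            if piv is None:
                continue
            A[rank], A[piv] = A[piv], A[rank]
            inv = pow(A[rank][col], p - 2, p)
            A[rank] = [(v * inv) % p for v in A[rank]]
            for i in range(len(A)):
                if i != rank and A[i][col]:
                    f = A[i][col]
                    A[i] = [(A[i][j] - f * A[rank][j]) % p for j in range(k)]
            rank += 1
        return rank

    received, sent = [], 0
    while rank_mod_p(received, k) < k:         # redundancy added until decodable
        sent += 1
        coeffs = [random.randrange(p) for _ in range(k)]   # one coded packet
        if random.random() < 0.3:                          # 30% erasure channel
            continue                                       # packet lost in transit
        received.append(coeffs)

    print(f"decodable after {sent} transmissions ({len(received)} received)")
    ```

    The sender needs no a priori bound on how many packets will be lost, which mirrors the motivation given in the abstract for not fixing the amount of redundancy in advance.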