
    Expander Chunked Codes

    Chunked codes are efficient random linear network coding (RLNC) schemes with low computational cost, where the input packets are encoded into small chunks (i.e., subsets of the coded packets). During network transmission, RLNC is performed within each chunk. In this paper, we first introduce a simple transfer matrix model to characterize the transmission of chunks, and derive some basic properties of the model to facilitate the performance analysis. We then focus on the design of overlapped chunked codes, a class of chunked codes whose chunks are non-disjoint subsets of the input packets, which are of special interest since they can be encoded with negligible computational cost and in a causal fashion. We propose expander chunked (EC) codes, the first class of overlapped chunked codes with analyzable performance, where the construction of the chunks makes use of regular graphs. Numerical and simulation results show that in some practical settings, EC codes can achieve rates within 91 to 97 percent of the optimum and significantly outperform the state-of-the-art overlapped chunked codes. Comment: 26 pages, 3 figures, submitted for journal publication
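    As a concrete illustration of the chunk-level RLNC described above, the Python sketch below encodes coded packets as random linear combinations within overlapped (non-disjoint) chunks. The prime field GF(257) and the hand-picked overlap pattern are assumptions chosen for readability; they are not the regular-graph construction used by EC codes.

    ```python
    # Minimal sketch: RLNC within overlapped chunks (illustrative only).
    import random

    Q = 257  # prime field size (assumption; deployed RLNC systems often use GF(2^8))

    def encode_chunk(chunk_packets):
        """Emit one coded packet for a chunk: a random linear combination of the
        chunk's packets, returned together with its coding coefficients."""
        coeffs = [random.randrange(Q) for _ in chunk_packets]
        coded = [0] * len(chunk_packets[0])
        for c, pkt in zip(coeffs, chunk_packets):
            for i, sym in enumerate(pkt):
                coded[i] = (coded[i] + c * sym) % Q
        return coeffs, coded

    # Overlapped chunks are non-disjoint subsets of the input packets; this overlap
    # pattern is a toy example, not the regular-graph construction of EC codes.
    packets = [[random.randrange(Q) for _ in range(8)] for _ in range(6)]
    chunks = [packets[0:3], packets[2:5], packets[4:6] + packets[0:1]]
    for chunk in chunks:
        coeffs, coded = encode_chunk(chunk)
        print(coeffs, coded)
    ```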

    Batched Sparse Codes

    Network coding can significantly improve the transmission rate of communication networks with packet loss compared with routing. However, using network coding usually incurs high computational and storage costs in the network devices and terminals. For example, some network coding schemes require the computational and/or storage capacities of an intermediate network node to increase linearly with the number of packets for transmission, making such schemes difficult to implement in a router-like device that has only constant computational and storage capacities. In this paper, we introduce BATched Sparse code (BATS code), which enables a digital fountain approach to resolve the above issue. BATS code is a coding scheme that consists of an outer code and an inner code. The outer code is a matrix generalization of a fountain code. It works with the inner code that comprises random linear coding at the intermediate network nodes. BATS codes preserve such desirable properties of fountain codes as ratelessness and low encoding/decoding complexity. The computational and storage capacities of the intermediate network nodes required for applying BATS codes are independent of the number of packets for transmission. Almost capacity-achieving BATS code schemes are devised for unicast networks, two-way relay networks, tree networks, a class of three-layer networks, and the butterfly network. For general networks, under different optimization criteria, guaranteed decoding rates for the receiving nodes can be obtained. Comment: 51 pages, 12 figures, submitted to IEEE Transactions on Information Theory
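    The Python sketch below illustrates, under assumed parameters, the two-layer structure the abstract describes: an outer code that forms batches from a few input packets via a random generator matrix, and an inner code that recodes within a batch at an intermediate node with storage independent of the total number of packets. The field GF(257), the batch size M, and the degree choices are placeholders, not an optimized BATS design.

    ```python
    # Minimal sketch: outer (batch generation) and inner (recoding) layers of a
    # BATS-style scheme, with placeholder parameters.
    import random

    Q = 257          # prime field (assumption; implementations typically use GF(2^8))
    M = 4            # batch size (assumed)
    PKT_LEN = 8      # symbols per packet (assumed)

    def outer_encode_batch(input_packets, degree):
        """Outer code: pick `degree` input packets and combine them through a random
        degree-by-M generator matrix to form one batch of M coded packets."""
        chosen = random.sample(input_packets, degree)
        G = [[random.randrange(Q) for _ in range(M)] for _ in range(degree)]
        batch = []
        for j in range(M):
            pkt = [0] * PKT_LEN
            for i, src in enumerate(chosen):
                for k, sym in enumerate(src):
                    pkt[k] = (pkt[k] + G[i][j] * sym) % Q
            batch.append(pkt)
        return batch

    def inner_recode(batch, n_out):
        """Inner code: an intermediate node emits random linear combinations of the
        packets it has received from the same batch (storage is O(M), not O(K))."""
        out = []
        for _ in range(n_out):
            coeffs = [random.randrange(Q) for _ in batch]
            pkt = [0] * PKT_LEN
            for c, p in zip(coeffs, batch):
                for k, sym in enumerate(p):
                    pkt[k] = (pkt[k] + c * sym) % Q
            out.append(pkt)
        return out

    input_packets = [[random.randrange(Q) for _ in range(PKT_LEN)] for _ in range(32)]
    batch = outer_encode_batch(input_packets, degree=random.choice([2, 3, 4]))
    recoded = inner_recode(batch, n_out=M)
    ```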

    A Markov chain model for the decoding probability of sparse network coding

    Random linear network coding has been shown to offer an efficient communication scheme, providing remarkable robustness against packet losses. However, it suffers from high computational complexity, and some novel approaches following the same idea have recently been proposed. One such solution is sparse network coding (SNC), where only a few packets are combined in each transmission. The number of data packets to be combined can be set by a density parameter/distribution, which can eventually be adapted. In this paper, we present a semi-analytical model that captures the performance of SNC in an accurate way. We exploit an absorbing Markov process, where the states are defined by the number of useful packets received by the decoder, i.e., the decoding matrix rank, and the number of non-zero columns in that matrix. The model is validated by means of a thorough simulation campaign, and the difference between model and simulation is negligible. We also include in the comparison some more general bounds that have recently been used, showing that their accuracy is rather poor. The proposed model would enable a more precise assessment of the behavior of SNC techniques. This work has been supported by the Spanish Government (Ministerio de Economía y Competitividad, Fondo Europeo de Desarrollo Regional, FEDER) by means of the projects COSAIF, “Connectivity as a Service: Access for the Internet of the Future” (TEC2012-38754-C02-01), and ADVICE (TEC2015-71329-C2-1-R). This work was also financed in part by the TuneSCode project (No. DFF 1335-00125) granted by the Danish Council for Independent Research.
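    A minimal Python sketch of the absorbing-Markov-chain machinery the model builds on is shown below: from the transition matrix restricted to the transient states, the fundamental matrix yields the expected number of transmissions until absorption (i.e., full decoding). The toy transition matrix is illustrative only; in the paper the states are pairs of decoding-matrix rank and number of non-zero columns, with transition probabilities determined by the sparsity.

    ```python
    # Minimal sketch: expected time to absorption in an absorbing Markov chain
    # via the fundamental matrix N = (I - Q)^-1. Toy numbers, not the SNC model.
    import numpy as np

    # Transition probabilities among the transient states only (3 toy states);
    # each row may sum to less than 1, the remainder moving toward absorption.
    Q = np.array([
        [0.2, 0.6, 0.0],
        [0.0, 0.3, 0.5],
        [0.0, 0.0, 0.4],
    ])

    N = np.linalg.inv(np.eye(Q.shape[0]) - Q)   # fundamental matrix
    expected_steps = N @ np.ones(Q.shape[0])    # expected transmissions from each state
    print(expected_steps)
    ```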

    On Achievable Rates of Line Networks with Generalized Batched Network Coding

    To better understand wireless network design with a large number of hops, we investigate a line network formed by general discrete memoryless channels (DMCs), which may not be identical. Our focus lies on Generalized Batched Network Coding (GBNC), which encompasses most existing schemes as special cases and achieves the min-cut upper bound as the batch size and inner blocklength tend to infinity. The inner blocklength of GBNC provides upper bounds on the required latency and buffer size at intermediate network nodes. By employing a bottleneck status technique, we derive new upper bounds on the achievable rates of GBNCs. These bounds are tighter than the min-cut bound for large network lengths when the inner blocklength and batch size are small. For line networks of canonical channels, certain upper bounds hold even with relaxed inner blocklength constraints. Additionally, we employ a channel reduction technique to generalize the existing achievability results for line networks with identical DMCs to networks with non-identical DMCs. For line networks with packet erasure channels, we refine both the upper bound and the coding scheme, and showcase their proximity through numerical evaluations. Comment: This paper was presented in part at ISIT 2019 and 2020, and is accepted by a JSAC special issue
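    For reference, the min-cut benchmark that GBNC approaches can be evaluated directly for a line network of packet erasure channels: each hop contributes a capacity of (1 - erasure probability) packets per use, and the cut bound is the minimum over the hops. The erasure probabilities in the sketch below are assumed values, not taken from the paper's evaluations.

    ```python
    # Minimal sketch: min-cut upper bound for a line network of packet erasure
    # channels with (assumed) per-hop erasure probabilities.
    erasure_probs = [0.1, 0.2, 0.15, 0.3]            # one per hop (assumed values)
    min_cut = min(1.0 - e for e in erasure_probs)    # packets per network use
    print(f"min-cut upper bound: {min_cut:.2f} packets/use")
    ```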