
    Low Complexity Encoding for Network Codes

    In this paper we consider the per-node run-time complexity of network multicast codes. We show that the randomized algebraic network code design algorithms described extensively in the literature result in codes that on average require a number of operations that scales quadratically with the blocklength m of the codes. We then propose an alternative type of linear network code whose complexity scales linearly in m and still enjoys the attractive properties of random algebraic network codes. We also show that these codes are optimal in the sense that any rate-optimal linear network code must have at least a linear scaling in run-time complexity.
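    The per-node cost the abstract refers to is the work a relay does when it forms outgoing packets as random linear combinations of incoming ones. A minimal sketch over GF(2), where each combination costs O(k·m) bit operations for k input packets of blocklength m (an illustrative toy, not the paper's construction):

```python
import random

def random_linear_combine(packets, rng=random.Random(0)):
    """Combine incoming packets with random GF(2) coefficients.

    Each packet is a list of m bits; XOR-ing the selected packets costs
    O(k*m) bit operations for k inputs -- linear in the blocklength m.
    """
    m = len(packets[0])
    out = [0] * m
    coeffs = [rng.randint(0, 1) for _ in packets]
    for c, p in zip(coeffs, packets):
        if c:
            out = [a ^ b for a, b in zip(out, p)]
    return coeffs, out

coeffs, combined = random_linear_combine([[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]])
```

Codes over larger fields pay an extra factor per symbol multiplication, which is where a superlinear dependence on m can enter.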

    Fast Encoding and Decoding of Gabidulin Codes

    Gabidulin codes are the rank-metric analogs of Reed-Solomon codes and have a major role in practical error control for network coding. This paper presents new encoding and decoding algorithms for Gabidulin codes based on low-complexity normal bases. In addition, a new decoding algorithm is proposed based on a transform-domain approach. Together, these represent the fastest known algorithms for encoding and decoding Gabidulin codes. Comment: 5 pages, 1 figure, to be published at ISIT 200
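    The advantage of a normal basis rests on a standard fact: writing a field element in the basis {β, β^q, β^(q²), …, β^(q^(m-1))}, the Frobenius map x ↦ x^q is just a cyclic shift of the coordinate vector, so the repeated q-th powers that dominate linearized-polynomial evaluation in Gabidulin encoding need no multiplications. A toy illustration of the coordinate-level operation:

```python
def frobenius_normal_basis(coords):
    """In a normal basis {b, b^q, ..., b^(q^(m-1))}, the coordinates of
    x^q are a cyclic right shift of the coordinates of x, because
    b^(q^m) = b wraps the last basis element back to the first."""
    return coords[-1:] + coords[:-1]

# Applying Frobenius m times returns the original element (x^(q^m) = x).
x = [1, 0, 1, 1, 0]
y = x
for _ in range(len(x)):
    y = frobenius_normal_basis(y)
assert y == x
```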

    Good approximate quantum LDPC codes from spacetime circuit Hamiltonians

    We study approximate quantum low-density parity-check (QLDPC) codes, which are approximate quantum error-correcting codes specified as the ground space of a frustration-free local Hamiltonian, whose terms do not necessarily commute. Such codes generalize stabilizer QLDPC codes, which are exact quantum error-correcting codes with sparse, low-weight stabilizer generators (i.e. each stabilizer generator acts on a few qubits, and each qubit participates in a few stabilizer generators). Our investigation is motivated by an important question in Hamiltonian complexity and quantum coding theory: do stabilizer QLDPC codes with constant rate, linear distance, and constant-weight stabilizers exist? We show that obtaining such optimal scaling of parameters (modulo polylogarithmic corrections) is possible if we go beyond stabilizer codes: we prove the existence of a family of [[N,k,d,ε]] approximate QLDPC codes that encode k = Ω̃(N) logical qubits into N physical qubits with distance d = Ω̃(N) and approximation infidelity ε = O(1/polylog(N)). The code space is stabilized by a set of 10-local noncommuting projectors, with each physical qubit only participating in O(polylog N) projectors. We prove the existence of an efficient encoding map, and we show that arbitrary Pauli errors can be locally detected by circuits of polylogarithmic depth. Finally, we show that the spectral gap of the code Hamiltonian is Ω̃(N^(-3.09)) by analyzing a spacetime circuit-to-Hamiltonian construction for a bitonic sorting network architecture that is spatially local in polylog(N) dimensions. Comment: 51 pages, 13 figures
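    The "LDPC" conditions in the abstract (each projector is 10-local, and each qubit participates in polylog(N) projectors) are row- and column-weight bounds on the code's check structure. A generic sketch of such a sparsity check on a binary check matrix (the function name and bounds are illustrative, not from the paper):

```python
def is_ldpc(H, max_row_weight, max_col_weight):
    """Check that every row (check) and every column (qubit/bit) of the
    binary matrix H stays within the given weight bounds."""
    rows_ok = all(sum(row) <= max_row_weight for row in H)
    cols_ok = all(sum(col) <= max_col_weight for col in zip(*H))
    return rows_ok and cols_ok

# A tiny sparse check matrix: row weight <= 2, column weight <= 2.
H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
assert is_ldpc(H, 2, 2)
assert not is_ldpc(H, 1, 2)
```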

    A detailed study on LDPC encoding techniques

    This survey deals with LDPC encoding techniques. Different types of error detection and correction codes have been studied: BCH codes, turbo codes, LDPC codes, and Hamming codes are some of the vast classes of codes. Low decoding complexity and efficient throughput are achieved by using LDPC codes. Robert G. Gallager introduced this code, so LDPC codes are also known as Gallager codes. MacKay and Neal then rediscovered LDPC codes in 1995 because of their bit-error performance. The parity-check matrix consists of a low density of ones, and because of this sparsity decoding is simple. The major setback of LDPC codes is encoding complexity. WLAN (IEEE 802.11n) and MIMO-OFDM are some of the applications of the code. This code is a class of forward error correction (FEC) technique whose capacity approaches Shannon's limit. LDPC codes are well known for their capacity-approaching performance. LDPC codes have been selected as forward error correction in applications including digital video broadcasting (DVB-S2), 10 Gigabit Ethernet (10GBASE-T), broadband wireless access (WiMAX), wireless local area networks, and deep-space communications.
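    The low decoding complexity mentioned above comes directly from the sparsity of the parity-check matrix: computing a syndrome only touches the positions where H has a one, so the cost is proportional to the number of ones rather than to the full matrix size. A minimal sketch over GF(2), using a (7,4) Hamming-style check set written sparsely (the check positions are illustrative):

```python
def syndrome(H_sparse, word):
    """Syndrome of `word` for a parity-check matrix in sparse form:
    H_sparse[i] lists the column indices of the ones in check i.
    Cost is proportional to the total number of ones in H."""
    return [sum(word[j] for j in check) % 2 for check in H_sparse]

# Three parity checks over 7 bits; an all-zero syndrome means the word
# satisfies every check, i.e. it is a valid codeword.
H_sparse = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 1, 3, 6]]
codeword = [1, 0, 1, 0, 0, 1, 1]
assert syndrome(H_sparse, codeword) == [0, 0, 0]
```

Encoding, by contrast, generally needs the (dense) generator matrix, which is why the abstract singles out encoding complexity as the major setback.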

    Construction of Short-length High-rates LDPC Codes Using Difference Families

    Low-density parity-check (LDPC) codes are linear block error-correcting codes defined by a sparse parity-check matrix. They are decoded using the message-passing algorithm and are, in many cases, capable of outperforming turbo codes. This paper presents a class of LDPC codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code rate, can be encoded with low complexity due to its quasi-cyclic structure, and performs well when it is iteratively decoded with the sum-product algorithm. These properties make the LDPC code quite suitable for applications in future wireless local area networks.
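    Quasi-cyclic structure means the parity-check matrix is assembled from circulant blocks, each a cyclic shift of the identity, with the shift amounts derived from the difference family; this is what allows shift-register-style low-complexity encoding. A hedged sketch of the assembly step (block size and shift values here are illustrative, not the paper's construction):

```python
def circulant(size, shift):
    """size x size identity matrix cyclically right-shifted by `shift`."""
    return [[1 if (j - i) % size == shift else 0 for j in range(size)]
            for i in range(size)]

def qc_parity_check(shifts, block_size):
    """Assemble a quasi-cyclic H from a grid of circulant shift values."""
    H = []
    for shift_row in shifts:
        blocks = [circulant(block_size, s) for s in shift_row]
        for i in range(block_size):
            H.append([b[i][j] for b in blocks for j in range(block_size)])
    return H

# One block-row of three 3x3 circulants with shifts 0, 1, 2.
H = qc_parity_check([[0, 1, 2]], 3)
```

Every row of the resulting H has the same weight (one per circulant block), so the matrix is sparse by construction.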

    Degree Distribution Optimization in Raptor Network Coding

    We consider a multi-source delivery system, where Raptor coding at the sources and linear network coding in overlay nodes work in concert for efficient data delivery in networks with diversity. Such a combination permits an increase in throughput and loss resiliency in multicast scenarios with possibly multiple sources. The network coding operations, however, change the degree distribution in the set of packets that reach the receivers, so that the low complexity decoding benefits of Raptor codes are unfortunately diminished. We propose in this paper to change the degree distribution at the encoder, in such a way that the degree distribution after network coding operations recovers a form that leads to low complexity decoding. We first analyze how the degree distribution of the encoded symbols is altered by network coding operations and losses in a regular network. Then we formulate a geometric optimization problem in order to compute the best degree distribution for encoding at the sources, such that the decoding complexity is low and close to Raptor decoders' performance. Simulations show that it is possible to maintain the low complexity decoding performance of Raptor codes even after linear network coding operations, as long as the coding at the sources is adapted to the network characteristics.
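    At each source, fountain-style encoding draws a degree d from the distribution and XORs d randomly chosen source symbols; the optimization above reshapes that distribution so the receiver-side distribution still looks Raptor-friendly after network-coding mixing. A minimal LT-style encoder sketch (the ideal soliton distribution used here is the textbook choice, an assumption rather than the paper's optimized distribution):

```python
import random

def ideal_soliton(k):
    """Ideal soliton distribution over degrees 1..k (sums to 1)."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode_symbol(source, rng):
    """Draw a degree, pick that many distinct source symbols, XOR them."""
    k = len(source)
    degree = rng.choices(range(1, k + 1), weights=ideal_soliton(k))[0]
    picks = rng.sample(range(k), degree)
    value = 0
    for i in picks:
        value ^= source[i]
    return sorted(picks), value

rng = random.Random(42)
picks, value = lt_encode_symbol([0b1010, 0b0110, 0b1111, 0b0001], rng)
```

Linear network coding downstream XORs such symbols together, which is exactly the operation that perturbs the degree distribution seen by receivers.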