4,817 research outputs found

    Convolutional Codes in Rank Metric with Application to Random Network Coding

    Full text link
    Random network coding has recently attracted attention as a technique for disseminating information in a network. This paper considers a non-coherent multi-shot network, where the unknown and time-variant network is used several times. In order to create dependencies between the different shots, particular convolutional codes in rank metric are used. These codes are so-called (partial) unit memory ((P)UM) codes, i.e., convolutional codes with memory one. First, distance measures for convolutional codes in rank metric are shown, and two constructions of (P)UM codes in rank metric based on the generator matrices of maximum rank distance codes are presented. Second, an efficient error-erasure decoding algorithm for these codes is presented. Its guaranteed decoding radius is derived and its complexity is bounded. Finally, it is shown how to apply these codes for error correction in random linear and affine network coding. Comment: presented in part at Netcod 2012, submitted to IEEE Transactions on Information Theory
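
    The rank metric that underlies these codes measures the distance between two codewords, viewed as matrices, by the rank of their difference. Below is a minimal sketch over GF(2) with made-up matrices; actual rank-metric codes such as Gabidulin codes live over extension fields GF(q^m), which this toy example glosses over.

```python
# Minimal sketch: the rank metric on binary matrices, the distance measure
# underlying rank-metric codes.  Codewords are viewed as m x n matrices over
# GF(2); the rank distance between two codewords is the GF(2)-rank of their
# difference (XOR).  All matrices here are illustrative only.

import numpy as np

def gf2_rank(A):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    A = np.array(A, dtype=np.uint8) % 2
    rank = 0
    rows, cols = A.shape
    for col in range(cols):
        # find a pivot row at or below `rank` with a 1 in this column
        pivot = next((r for r in range(rank, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]          # swap pivot row into place
        for r in range(rows):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                      # clear the rest of the column
        rank += 1
    return rank

def rank_distance(X, Y):
    """Rank distance d_R(X, Y) = rank(X - Y); over GF(2), X - Y is X XOR Y."""
    return gf2_rank(np.bitwise_xor(X, Y))

# Two 3x4 "codewords" differing by a rank-1 matrix => rank distance 1.
X = np.array([[1, 0, 0, 1],
              [0, 1, 0, 0],
              [0, 0, 1, 1]], dtype=np.uint8)
E = np.outer([1, 1, 0], [0, 1, 1]).astype(np.uint8)  # rank-1 error pattern
print(rank_distance(X, X ^ E))                        # -> 1
```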

    Block Network Error Control Codes and Syndrome-based Complete Maximum Likelihood Decoding

    Full text link
    In this paper, network error control coding is studied for robust and efficient multicast in a directed acyclic network with imperfect links. The block network error control coding (BNEC) framework is presented, and the capability of the scheme to correct a mixture of symbol errors and packet erasures and to detect symbol errors is studied. The idea of syndrome-based decoding and error detection is introduced for BNEC, which removes the effect of the input data and hence decreases the complexity. Next, an efficient three-stage syndrome-based BNEC decoding scheme for network error correction is proposed, in which, prior to finding the error values, the positions of the edge errors are identified based on the error spaces at the receivers. In addition to bounded-distance decoding schemes for error correction up to the refined Singleton bound, a complete decoding scheme for BNEC is also introduced. Specifically, it is shown that using the proposed syndrome-based complete decoding, a network error-correcting code with redundancy order d for receiver t can correct d-1 random additive errors with a probability sufficiently close to 1 if the field size is sufficiently large. Also, a complete maximum likelihood decoding scheme for BNEC is proposed. As the probability of error on different network edges is in general not equal, and given the equivalence of certain edge errors within the network at a particular receiver, the number of edge errors, as assessed in the refined Singleton bound, is not a sufficient statistic for ML decoding.
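
    The key property behind syndrome-based decoding, that the syndrome depends only on the error pattern and not on the transmitted data, can be illustrated with an ordinary binary block code. This is only a classical single-link analogue, not the network-coding construction of the paper; the [7,4] Hamming matrices below are standard textbook choices.

```python
# Minimal sketch: for a linear code with parity-check matrix H, the syndrome
# of a received word y = c + e depends only on the error e, since H c^T = 0
# for every codeword c.  This is what lets syndrome-based decoding "remove
# the effect of the input data".

import numpy as np

# [7,4] Hamming code: generator G and parity-check H with G H^T = 0 (mod 2).
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]], dtype=np.uint8)
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]], dtype=np.uint8)

def syndrome(y):
    return H @ y % 2

u = np.array([1,0,1,1], dtype=np.uint8)    # message
c = u @ G % 2                              # codeword
e = np.zeros(7, dtype=np.uint8); e[2] = 1  # single-bit error
y = (c + e) % 2                            # received word

print(syndrome(y))   # equals H e^T ...
print(syndrome(e))   # ... i.e. independent of the transmitted message
```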

    On The Performance of Random Block Codes over Finite-State Fading Channels

    Full text link
    As the mobile application landscape expands, wireless networks are tasked with supporting various connection profiles, including real-time communications and delay-sensitive traffic. Among many ensuing engineering challenges is the need to better understand the fundamental limits of forward error correction in non-asymptotic regimes. This article seeks to characterize the performance of block codes over finite-state channels with memory. In particular, classical results from information theory are revisited in the context of channels with rate transitions, and bounds on the probabilities of decoding failure are derived for random codes. This study offers new insights into the potential impact of channel correlation over time on overall performance.
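
    As a concrete instance of a finite-state channel with memory, one can simulate a two-state Gilbert-Elliott model. This is a standard example of the channel class rather than the specific model analysed in the article, and all parameters below are made up.

```python
# Minimal sketch of a finite-state channel with memory: a two-state
# Gilbert-Elliott model with a "good" and a "bad" state, each with its own
# bit error rate.  Errors arrive in bursts because the state persists over
# time.  Parameters are assumptions chosen only for illustration.

import numpy as np

rng = np.random.default_rng(0)

p_gb, p_bg = 0.02, 0.2            # transition probabilities: good->bad, bad->good
eps = {"good": 1e-3, "bad": 0.1}  # per-state bit error rates

def transmit(bits):
    state, out = "good", []
    for b in bits:
        out.append(int(b) ^ (rng.random() < eps[state]))   # flip with the state's BER
        if state == "good":
            state = "bad" if rng.random() < p_gb else "good"
        else:
            state = "good" if rng.random() < p_bg else "bad"
    return np.array(out, dtype=np.uint8)

bits = rng.integers(0, 2, 10_000, dtype=np.uint8)
recv = transmit(bits)
print("empirical BER:", np.mean(bits != recv))
```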

    Short, unit-memory, Byte-oriented, binary convolutional codes having maximal free distance

    Get PDF
    It is shown that $(n_0, k_0)$ convolutional codes with unit memory always achieve the largest free distance among all codes of the same rate $k_0/n_0$ and the same number $2^{Mk_0}$ of encoder states, where $M$ is the encoder memory. A unit-memory code with maximal free distance is given at each place where this free distance exceeds that of the best code with $k_0$ and $n_0$ relatively prime, for all $Mk_0 \le 6$ and for $R = 1/2, 1/3, 1/4, 2/3$. It is shown that the unit-memory codes are byte-oriented in such a way as to be attractive for use in concatenated coding systems
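
    A unit-memory encoder can be sketched directly from the definition: each length-$n_0$ output block is a GF(2) combination of the current and the previous length-$k_0$ input block, so the encoder state is just the last input block and there are $2^{Mk_0} = 2^{k_0}$ states for $M = 1$. The tap matrices below are illustrative only, not one of the maximal-free-distance codes tabulated in the paper.

```python
# Minimal sketch of a unit-memory (M = 1) convolutional encoder:
#   v_t = u_t G0 + u_{t-1} G1   over GF(2),
# where u_t is the current k0-bit input block and v_t the n0-bit output block.
# With k0 = 2 the encoder has 2**(M*k0) = 4 states.

import numpy as np

k0, n0 = 2, 4
G0 = np.array([[1, 0, 1, 1],
               [0, 1, 1, 0]], dtype=np.uint8)   # taps on the current block (assumed)
G1 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 1]], dtype=np.uint8)   # taps on the previous block (assumed)

def encode(blocks):
    """blocks: list of length-k0 bit arrays; returns list of length-n0 blocks."""
    prev = np.zeros(k0, dtype=np.uint8)          # encoder starts in the all-zero state
    out = []
    for u in blocks:
        out.append((u @ G0 + prev @ G1) % 2)
        prev = u                                 # the state is simply the last input block
    return out

msg = [np.array(b, dtype=np.uint8) for b in ([1, 0], [1, 1], [0, 1])]
for v in encode(msg):
    print(v)
```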

    Renormalization group decoder for a four-dimensional toric code

    Full text link
    We describe a computationally efficient heuristic algorithm based on a renormalization-group procedure which aims at solving the problem of finding a minimal surface given its boundary (a curve) in any hypercubic lattice of dimension $D>2$. We use this algorithm to correct errors occurring in a four-dimensional variant of the toric code, having open as opposed to periodic boundaries. For a phenomenological error model which includes measurement errors we use a five-dimensional version of our algorithm, achieving a threshold of $4.35\pm0.1\%$. For this error model, this is the highest known threshold of any topological code. Without measurement errors, a four-dimensional version of our algorithm can be used and we find a threshold of $7.3\pm0.1\%$. For the gate-based depolarizing error model we find a threshold of $0.31\pm0.01\%$, which is below the threshold found for the two-dimensional toric code. Comment: 18 pages, 12 figures, 3 tables. Comments are welcome

    Concatenation of convolutional and block codes Final report

    Get PDF
    Comparison of concatenated and sequential decoding systems and convolutional code structural properties.

    Inter-Block Permuted Turbo Codes

    Full text link
    The structure and size of the interleaver used in a turbo code critically affect the distance spectrum and the covariance property of a component decoder's information input and soft output. This paper introduces a new class of interleavers, the inter-block permutation (IBP) interleavers, that can be built on any existing "good" block-wise interleaver by simply adding an IBP stage. The IBP interleavers reduce the above-mentioned correlation and increase the effective interleaving size. The increased effective interleaving size improves the distance spectrum, while the reduced covariance enhances the iterative decoder's performance. Moreover, the structure of the IBP(-interleaved) turbo codes (IBPTC) is naturally suited to high-rate applications that necessitate parallel decoding. We present some useful bounds and constraints associated with the IBPTC that can be used as design guidelines. The corresponding codeword weight upper bounds for weight-2 and weight-4 input sequences are derived. Based on some of the design guidelines, we propose a simple IBP algorithm and show that the associated IBPTC yields a 0.3 to 1.2 dB performance gain or, equivalently, renders the same performance with a much reduced interleaving delay. The EXIT and covariance behaviors provide further numerical evidence of the superiority of the proposed IBPTC. Comment: 44 pages, 17 figures
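
    The two-stage idea, a conventional intra-block permutation followed by an inter-block spreading stage, can be sketched as follows. The base permutation and the spreading rule (position i of block b goes to block (b + i) mod B) are purely illustrative and are not the IBP construction or the design constraints given in the paper.

```python
# Minimal sketch of a two-stage interleaver: take an assumed base (intra-block)
# permutation pi of length L and add an inter-block stage that spreads the
# symbols of each block across neighbouring blocks.  The mapping
# (block b, position i) -> (block (b + i) mod B, position pi[i]) is a bijection,
# so every output slot is filled exactly once.

L, B = 8, 4                      # block length and number of blocks
pi = [3, 7, 0, 4, 1, 6, 2, 5]    # an assumed base block-wise permutation

def ibp_interleave(blocks):
    """blocks: B lists of L symbols -> B lists of L interleaved symbols."""
    out = [[None] * L for _ in range(B)]
    for b in range(B):
        for i in range(L):
            out[(b + i) % B][pi[i]] = blocks[b][i]   # inter-block + intra-block stage
    return out

blocks = [[f"b{b}s{i}" for i in range(L)] for b in range(B)]
for row in ibp_interleave(blocks):
    print(row)
```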

    Iterative Algebraic Soft-Decision List Decoding of Reed-Solomon Codes

    Get PDF
    In this paper, we present an iterative soft-decision decoding algorithm for Reed-Solomon codes offering both complexity and performance advantages over previously known decoding algorithms. Our algorithm is a list decoding algorithm which combines two powerful soft-decision decoding techniques that were previously regarded in the literature as competitive, namely, the Koetter-Vardy algebraic soft-decision decoding algorithm and belief propagation based on adaptive parity-check matrices, recently proposed by Jiang and Narayanan. Building on the Jiang-Narayanan algorithm, we present a belief-propagation-based algorithm with a significant reduction in computational complexity. We introduce the concept of using a belief-propagation-based decoder to enhance the soft-input information prior to decoding with an algebraic soft-decision decoder. Our algorithm can also be viewed as an interpolation multiplicity assignment scheme for algebraic soft-decision decoding of Reed-Solomon codes. Comment: Submitted to IEEE for publication in Jan 200
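
    The adaptation step inherited from Jiang and Narayanan can be sketched in isolation: before each belief-propagation round, the binary parity-check matrix is Gaussian-eliminated so that the columns at the least reliable bit positions become unit vectors. The matrix and soft inputs below are made up for illustration, and the sketch omits the BP iterations and the Koetter-Vardy stage.

```python
# Minimal sketch of the parity-check matrix adaptation step: reduce H over
# GF(2) so that unit columns sit at the least reliable positions (smallest
# |LLR|), which keeps the most fragile bits isolated in the check equations.

import numpy as np

def adapt_parity_check(H, llr):
    """Return a row-equivalent H with unit columns at the least reliable positions."""
    H = H.copy() % 2
    m, n = H.shape
    order = np.argsort(np.abs(llr))        # least reliable positions first
    row = 0
    for col in order:
        if row == m:
            break
        pivot = next((r for r in range(row, m) if H[r, col]), None)
        if pivot is None:
            continue                       # column is dependent on earlier pivots; skip
        H[[row, pivot]] = H[[pivot, row]]  # bring pivot row into place
        for r in range(m):
            if r != row and H[r, col]:
                H[r] ^= H[row]             # clear the rest of the column
        row += 1
    return H

H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]], dtype=np.uint8)
llr = np.array([2.1, -0.2, 0.4, 3.0, -1.5, 0.1, 2.7])   # assumed soft inputs
print(adapt_parity_check(H, llr))
```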

    On Scaling Rules for Energy of VLSI Polar Encoders and Decoders

    Full text link
    It is shown that all polar encoding schemes of rate $R>\frac{1}{2}$ and block length $N$ implemented according to the Thompson VLSI model must take energy $E\ge\Omega\left(N^{3/2}\right)$. This lower bound is achievable up to polylogarithmic factors using a mesh network topology defined by Thompson and the encoding algorithm defined by Arikan. A general class of circuits that compute successive cancellation decoding adapted from Arikan's butterfly network algorithm is defined. It is shown that such decoders implemented on a rectangular grid for codes of rate $R>2/3$ must take energy $E\ge\Omega(N^{3/2})$, and this can also be reached up to polylogarithmic factors using a mesh network. Capacity-approaching sequences of energy-optimal polar encoders and decoders, as a function of the reciprocal gap to capacity $\chi = (1-R/C)^{-1}$, have energy that scales as $\Omega\left(\chi^{5.325}\right)\le E \le O\left(\chi^{7.05}\log^{4}\left(\chi\right)\right)$.
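
    To see what this gap-to-capacity scaling means numerically, one can evaluate the two bounds for a few values of the reciprocal gap $\chi$. The leading constants are not given by the asymptotic result, so they are set to 1 below and only the growth orders are meaningful.

```python
# Minimal sketch: evaluate the stated energy scaling bounds in chi with all
# unknown leading constants set to 1, so only relative growth is meaningful.

import math

def energy_bounds(chi):
    lower = chi ** 5.325                          # Omega(chi^5.325), constant assumed 1
    upper = chi ** 7.05 * math.log(chi) ** 4      # O(chi^7.05 log^4 chi), constant assumed 1
    return lower, upper

for chi in (10, 100, 1000):
    lo, hi = energy_bounds(chi)
    print(f"chi = {chi:>5}:  lower ~ {lo:.3e}   upper ~ {hi:.3e}")
```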

    Computing coset leaders and leader codewords of binary codes

    Full text link
    In this paper we use the Gröbner representation of a binary linear code $\mathcal{C}$ to give efficient algorithms for computing the whole set of coset leaders, denoted by $\mathrm{CL}(\mathcal{C})$, and the set of leader codewords, denoted by $\mathrm{L}(\mathcal{C})$. The first algorithm can be adapted to provide not only the Newton and the covering radius of $\mathcal{C}$ but also the coset leader weight distribution. Moreover, the set of leader codewords provides a test set for decoding by a gradient-like decoding algorithm. Another contribution of this article is the relation established between zero neighbours and leader codewords.
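
    For a very small code, the objects in question can be computed by brute force with a syndrome table: a coset leader is a minimum-weight vector in its coset, and the covering radius is the largest coset-leader weight. The sketch below is only this naive baseline with an arbitrarily chosen [5,2] code; the point of the paper is to avoid the exhaustive search via the Gröbner representation.

```python
# Minimal sketch: compute the coset leaders of a tiny binary code by exhaustive
# search over all vectors of length n, grouping them by syndrome.  This is the
# naive baseline that defines the objects, not the paper's efficient algorithm.

import itertools
import numpy as np

# Parity-check matrix of a small [5,2] binary code (chosen only for illustration).
H = np.array([[1,0,1,1,0],
              [0,1,1,0,1],
              [1,1,0,0,1]], dtype=np.uint8)
n = H.shape[1]

coset_leaders = {}                     # syndrome (as tuple) -> minimum-weight vector
for bits in itertools.product((0, 1), repeat=n):
    v = np.array(bits, dtype=np.uint8)
    s = tuple(int(x) for x in H @ v % 2)
    best = coset_leaders.get(s)
    if best is None or v.sum() < best.sum():
        coset_leaders[s] = v

covering_radius = max(v.sum() for v in coset_leaders.values())
print(f"{len(coset_leaders)} cosets, covering radius {covering_radius}")
for s, v in sorted(coset_leaders.items()):
    print(s, v)
```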