
    Order Statistics Based List Decoding Techniques for Linear Binary Block Codes

    Order statistics based list decoding techniques for linear binary block codes of small to medium block length are investigated. The construction of the list of test error patterns is considered. The original order statistics decoding is generalized by assuming segmentation of the most reliable independent positions of the received bits. The segmentation is shown to overcome several drawbacks of the original order statistics decoding. The complexity of the order statistics based decoding is further reduced by assuming a partial ordering of the received bits in order to avoid the complex Gaussian elimination. The probability of the test error patterns in the decoding list is derived. The trade-off between bit error rate performance and decoding complexity of the proposed decoding algorithms is studied by computer simulations. Numerical examples show that, in some cases, the proposed decoding schemes are superior to the original order statistics decoding in terms of both bit error rate performance and decoding complexity.
    Comment: 17 pages, 2 tables, 6 figures, submitted to IEEE Transactions on Information Theory
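    For reference, a minimal Python sketch of plain order-w ordered statistics decoding is given below: reliability ordering, Gaussian elimination onto the most reliable independent positions, and re-encoding of test error patterns. It does not include the segmentation or partial-ordering variants proposed in the paper; the function name, the order-w test pattern list, and the 0/1 integer matrix convention are illustrative assumptions.

        import itertools
        import numpy as np

        def osd_decode(G, llr, order=1):
            """G: k x n binary (0/1 integer) generator matrix; llr: channel LLRs."""
            k, n = G.shape
            hard = (llr < 0).astype(int)            # hard decisions from the LLRs
            perm = np.argsort(-np.abs(llr))         # positions sorted by reliability
            Gp = G[:, perm].copy()

            # Gaussian elimination: make the k most reliable independent columns systematic.
            basis, row = [], 0
            for col in range(n):
                pivot = next((r for r in range(row, k) if Gp[r, col]), None)
                if pivot is None:
                    continue                        # dependent column, skip to the next one
                Gp[[row, pivot]] = Gp[[pivot, row]]
                for r in range(k):
                    if r != row and Gp[r, col]:
                        Gp[r] ^= Gp[row]
                basis.append(col)
                row += 1
                if row == k:
                    break

            mrb = hard[perm][basis]                 # hard decisions on the most reliable basis
            best, best_metric = None, np.inf
            for t in range(order + 1):              # flip up to `order` of the k basis bits
                for flips in itertools.combinations(range(k), t):
                    info = mrb.copy()
                    info[list(flips)] ^= 1
                    cw_perm = info @ Gp % 2         # re-encode the test error pattern
                    cw = np.empty(n, dtype=int)
                    cw[perm] = cw_perm              # undo the reliability permutation
                    metric = np.abs(llr)[cw != hard].sum()  # correlation discrepancy
                    if metric < best_metric:
                        best, best_metric = cw, metric
            return best

    The segmentation and partial-ordering ideas of the paper modify the basis construction and the test pattern list in this skeleton; the re-encoding and metric steps stay the same.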

    Low-Complexity Joint Channel Estimation and List Decoding of Short Codes

    A pilot-assisted transmission (PAT) scheme is proposed for short blocklengths, where the pilots are used only to derive an initial channel estimate for the list construction step. The final decision on the message is obtained by applying a non-coherent decoding metric to the codewords composing the list. This allows one to use very few pilots, thus reducing the channel estimation overhead. The method is applied to an ordered statistics decoder for communication over a Rayleigh block-fading channel. Gains of up to 1.2 dB as compared to traditional PAT schemes are demonstrated for short codes with QPSK signaling. The approach can be generalized to other list decoders, e.g., to list decoding of polar codes.
    Comment: Accepted at the 12th International ITG Conference on Systems, Communications and Coding (SCC 2019), Rostock, Germany
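    The two ingredients can be sketched in Python under simple assumptions: least-squares pilot estimation, a single complex fading coefficient per block, and a GLRT-style non-coherent metric in which the unknown gain is maximized out. The list construction itself (e.g. via OSD seeded with the pilot estimate) is assumed to be given, and the function names are illustrative.

        import numpy as np

        def pilot_estimate(y_pilot, x_pilot):
            """Least-squares channel estimate from the few pilot symbols
            (used only to seed the list construction step)."""
            return np.vdot(x_pilot, y_pilot) / np.vdot(x_pilot, x_pilot)

        def noncoherent_pick(y_data, candidates):
            """Select the modulated candidate maximizing |<x, y>|^2 / ||x||^2,
            a GLRT-style metric with the fading coefficient maximized out."""
            best, best_metric = None, -np.inf
            for x in candidates:                  # codewords of the decoding list
                metric = np.abs(np.vdot(x, y_data)) ** 2 / np.real(np.vdot(x, x))
                if metric > best_metric:
                    best, best_metric = x, metric
            return best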

    A Study on the Impact of Locality in the Decoding of Binary Cyclic Codes

    In this paper, we study the impact of locality on the decoding of binary cyclic codes under two approaches, namely ordered statistics decoding (OSD) and trellis decoding. Given a binary cyclic code having locality or availability, we suitably modify the OSD to obtain gains in terms of signal-to-noise ratio (SNR), for a given reliability and essentially the same level of decoder complexity. With regard to trellis decoding, we show that careful introduction of locality results in the creation of cyclic subcodes having lower maximum state complexity. We also present a simple upper-bounding technique on the state complexity profile, based on the zeros of the code. Finally, it is shown how the decoding speed can be significantly increased in the presence of locality, in the moderate-to-high SNR regime, by making use of a quick-look decoder that often returns the ML codeword.
    Comment: Extended version of a paper submitted to ISIT 201
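    As an illustration of the quick-look idea only (not the paper's specific construction): at moderate-to-high SNR the hard-decision word is frequently already a codeword, so a cheap syndrome check can bypass the full soft-decision decoder most of the time. In the Python sketch below the parity-check matrix H and the fallback decoder are assumed inputs.

        import numpy as np

        def quick_look_decode(H, llr, fallback_decoder):
            """Return the hard-decision word if its syndrome is zero, else fall back."""
            hard = (llr < 0).astype(int)
            if not np.any(H @ hard % 2):      # zero syndrome: hard word is a codeword
                return hard                   # at high SNR this is usually the ML codeword
            return fallback_decoder(H, llr)   # otherwise run the full (OSD/trellis) decoder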

    Iterative Soft Input Soft Output Decoding of Reed-Solomon Codes by Adapting the Parity Check Matrix

    An iterative algorithm is presented for soft-input soft-output (SISO) decoding of Reed-Solomon (RS) codes. The proposed iterative algorithm uses the sum-product algorithm (SPA) in conjunction with a binary parity-check matrix of the RS code. The novelty lies in reducing the submatrix of the binary parity-check matrix that corresponds to the less reliable bits to a sparse form before the SPA is applied at each iteration. The proposed algorithm can be interpreted geometrically as a two-stage gradient descent with an adaptive potential function. This adaptive procedure is crucial to the convergence behavior of the gradient descent algorithm and, therefore, significantly improves the performance. Simulation results show that the proposed decoding algorithm and its variations provide significant gain over hard-decision decoding (HDD) and compare favorably with other popular soft-decision decoding methods.
    Comment: 10 pages, 10 figures, final version accepted by IEEE Transactions on Information Theory
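    A rough Python sketch of the adaptation step is given below, assuming a binary 0/1 parity-check matrix and using a single damped min-sum round in place of the full sum-product update; the function names, the damping factor, and the min-sum substitution are simplifications of the algorithm described above, not its exact formulation.

        import numpy as np

        def adapt_H(H, llr):
            """Gauss-reduce H so the columns of the least reliable bits become sparse."""
            H = H.copy() % 2
            m, n = H.shape
            order = np.argsort(np.abs(llr))        # least reliable positions first
            row = 0
            for col in order:
                pivot = next((r for r in range(row, m) if H[r, col]), None)
                if pivot is None:
                    continue
                H[[row, pivot]] = H[[pivot, row]]
                for r in range(m):
                    if r != row and H[r, col]:
                        H[r] ^= H[row]
                row += 1
                if row == m:
                    break
            return H

        def abp_iteration(H, llr, damping=0.1):
            """One damped min-sum extrinsic update on the adapted parity-check matrix."""
            Ha = adapt_H(H, llr)
            ext = np.zeros_like(llr)
            for check in Ha:
                idx = np.flatnonzero(check)
                for i in idx:
                    others = idx[idx != i]
                    if others.size == 0:
                        continue
                    sign = np.prod(np.sign(llr[others]))
                    ext[i] += sign * np.min(np.abs(llr[others]))
            return llr + damping * ext             # damped, gradient-descent-like update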

    A New Chase-type Soft-decision Decoding Algorithm for Reed-Solomon Codes

    This paper addresses three relevant issues arising in designing Chase-type algorithms for Reed-Solomon codes: 1) how to choose the set of testing patterns; 2) given the set of testing patterns, what is the optimal testing order in the sense that the most likely codeword is expected to appear earlier; and 3) how to identify the most likely codeword. A new Chase-type soft-decision decoding algorithm is proposed, referred to as the tree-based Chase-type algorithm. The proposed algorithm takes the set of all vectors as the set of testing patterns, and hence is guaranteed to deliver the most likely codeword provided that sufficient computational resources are available. All the testing patterns are arranged in an ordered rooted tree according to the likelihood bounds of the possibly generated codewords. While performing the algorithm, the ordered rooted tree is constructed progressively by adding at most two leaves at each trial. The ordered tree naturally induces a sufficient condition for the most likely codeword: whenever the proposed algorithm exits before a preset maximum number of trials is reached, the output codeword must be the most likely one. When the proposed algorithm is combined with the Guruswami-Sudan (GS) algorithm, each trial can be implemented in an extremely simple way by removing one old point and interpolating one new point. Simulation results show that the proposed algorithm performs better than the recently proposed Chase-type algorithm by Bellorado et al. with fewer trials, given that the maximum number of trials is the same. Also proposed are simulation-based performance bounds on the MLD algorithm, which are utilized to illustrate the near-optimality of the proposed algorithm in the high-SNR region. In addition, the proposed algorithm admits decoding with a likelihood threshold, which searches for the most likely codeword within a Euclidean sphere rather than a Hamming sphere.
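    A heavily simplified, best-first Python sketch of the ordering-and-early-exit idea follows: a priority queue of test patterns keyed by a heuristic likelihood bound, with the search stopping once the best codeword found so far beats every unexplored bound. The actual ordered rooted tree, the two-leaves-per-trial construction, and the GS interpolation step are not reproduced; the per-pattern hard-decision decoder hdd is an assumed callback.

        import heapq
        import numpy as np

        def tree_chase(llr, hdd, max_trials=64):
            """Best-first search over test patterns that flip the least reliable bits."""
            hard = (llr < 0).astype(int)
            rel = np.abs(llr)
            order = np.argsort(rel)                # least reliable positions first
            best_cw, best_metric = None, np.inf
            heap = [(0.0, 0)]                      # (bound on discrepancy, #flipped bits)
            trials = 0
            while heap and trials < max_trials:
                bound, depth = heapq.heappop(heap)
                if bound >= best_metric:
                    break                          # early exit: no better codeword ahead
                pattern = hard.copy()
                pattern[order[:depth]] ^= 1        # flip the `depth` least reliable bits
                cw = hdd(pattern)                  # one hard-decision decoding trial
                trials += 1
                if cw is not None:
                    metric = rel[cw != hard].sum() # correlation discrepancy of the codeword
                    if metric < best_metric:
                        best_cw, best_metric = cw, metric
                if depth < len(order):
                    heapq.heappush(heap, (bound + rel[order[depth]], depth + 1))
            return best_cw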

    On joint detection and decoding of linear block codes on Gaussian vector channels

    Optimal receivers recovering signals transmitted across noisy communication channels employ a maximum-likelihood (ML) criterion to minimize the probability of error. The problem of finding the most likely transmitted symbol is often equivalent to finding the closest lattice point to a given point and is known to be NP-hard. In systems that employ error-correcting coding for data protection, the symbol space forms a sparse lattice, where the sparsity structure is determined by the code. In such systems, ML data recovery may be geometrically interpreted as a search for the closest point in the sparse lattice. In this paper, motivated by the idea of the "sphere decoding" algorithm of Fincke and Pohst, we propose an algorithm that finds the closest point in the sparse lattice to the given vector. This given vector is not arbitrary, but rather is an unknown sparse lattice point that has been perturbed by an additive noise vector whose statistical properties are known. The complexity of the proposed algorithm is thus a random variable. We study its expected value, averaged over the noise and over the lattice. For binary linear block codes, we find the expected complexity in closed form. Simulation results indicate significant performance gains over systems employing separate detection and decoding, obtained at a complexity that is practically feasible over a wide range of system parameters.
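    To make the geometric picture concrete, a toy Fincke-Pohst-style depth-first sphere search over BPSK symbols is sketched below in Python: the accumulated squared distance is pruned against the current sphere radius level by level. Unlike the proposed algorithm, this toy version enforces membership in the sparse, code-induced lattice only through a final callback check rather than exploiting the code structure during the search; the square full-rank channel matrix, the +/-1 alphabet, and the is_codeword callback are assumptions.

        import numpy as np

        def sphere_decode(Hch, y, is_codeword, radius=np.inf):
            """Hch: n x n full-rank real channel matrix; y: received vector;
            is_codeword: callback checking whether a +/-1 vector maps to a codeword."""
            n = Hch.shape[1]
            Q, R = np.linalg.qr(Hch)               # ||Hch x - y||^2 = ||R x - Q^T y||^2
            z = Q.T @ y
            best, best_d = None, radius

            def search(level, partial, dist):
                nonlocal best, best_d
                if dist >= best_d:
                    return                         # prune: outside the current sphere
                if level < 0:
                    x = np.array(partial[::-1])
                    if is_codeword(x):             # keep only points of the sparse lattice
                        best, best_d = x, dist
                    return
                for s in (+1.0, -1.0):             # BPSK symbol hypotheses
                    r = z[level] - R[level, level] * s - sum(
                        R[level, j] * partial[n - 1 - j] for j in range(level + 1, n))
                    search(level - 1, partial + [s], dist + r * r)

            search(n - 1, [], 0.0)
            return best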