515 research outputs found

    An Iteratively Decodable Tensor Product Code with Application to Data Storage

    The error pattern correcting code (EPCC) can be constructed to provide a syndrome decoding table targeting the dominant error events of an inter-symbol interference channel at the output of the Viterbi detector. For the syndrome table to remain manageable and the list of possible error events to stay reasonably small, the EPCC codeword length must be short. However, the rate of such a short code is too low for hard-drive applications. To accommodate the required large redundancy, it is possible to record only a highly compressed function of the parity bits of EPCC's tensor product with a symbol-correcting code. In this paper, we show that the proposed tensor error-pattern correcting code (T-EPCC) is linear-time encodable, and we also devise a low-complexity soft iterative decoding algorithm for EPCC's tensor product with q-ary LDPC (T-EPCC-qLDPC). Simulation results show that T-EPCC-qLDPC achieves performance comparable to single-level qLDPC with a 1/2 KB sector at a 50% reduction in decoding complexity. Moreover, 1 KB T-EPCC-qLDPC surpasses the performance of 1/2 KB single-level qLDPC at the same decoder complexity.
    Comment: Hakim Alhussien, Jaekyun Moon, "An Iteratively Decodable Tensor Product Code with Application to Data Storage"
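    The syndrome-table idea at the heart of EPCC can be pictured with ordinary syndrome decoding. The sketch below is not the paper's T-EPCC construction: it uses a placeholder (7,4) Hamming parity-check matrix and treats single-bit errors as the "dominant" patterns, purely to show how a table from syndromes to targeted error events is built and queried.

```python
import numpy as np

# Placeholder (7,4) Hamming parity-check matrix; the paper's EPCC and its
# channel-specific dominant error events are not reproduced here.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)

def syndrome(v):
    """Syndrome of a hard-decision word v, as a hashable tuple of ints."""
    return tuple(int(x) for x in (H @ v) % 2)

# "Dominant" error patterns targeted by the table (here: all single-bit errors).
dominant_errors = [np.eye(7, dtype=np.uint8)[i] for i in range(7)]

# Precompute the syndrome decoding table: syndrome -> targeted error pattern.
syndrome_table = {syndrome(e): e for e in dominant_errors}

def decode(r):
    """Correct r by table lookup; pass it through unchanged if its syndrome
    does not match any targeted error event."""
    e = syndrome_table.get(syndrome(r), np.zeros_like(r))
    return r ^ e

# Usage: flip one bit of the all-zero codeword and recover it.
r = np.zeros(7, dtype=np.uint8)
r[3] ^= 1
print(decode(r))  # -> all zeros
```

    In the paper's setting the table would instead be keyed by the syndromes of the dominant ISI error events at the Viterbi detector output, which is exactly why the EPCC codeword must stay short enough for the table to remain practical.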

    Order Statistics Based List Decoding Techniques for Linear Binary Block Codes

    The order statistics based list decoding techniques for linear binary block codes of small to medium block length are investigated. The construction of the list of test error patterns is considered. The original order statistics decoding is generalized by assuming segmentation of the most reliable independent positions of the received bits. The segmentation is shown to overcome several drawbacks of the original order statistics decoding. The complexity of order statistics based decoding is further reduced by assuming a partial ordering of the received bits in order to avoid the costly Gaussian elimination. The probability of the test error patterns in the decoding list is derived. The trade-off between the bit error rate performance and the decoding complexity of the proposed algorithms is studied by computer simulations. Numerical examples show that, in some cases, the proposed decoding schemes are superior to the original order statistics decoding in terms of both bit error rate performance and decoding complexity.
    Comment: 17 pages, 2 tables, 6 figures, submitted to IEEE Transactions on Information Theory
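    As context for the generalizations above, the following is a minimal sketch of plain order-l order statistics decoding (the baseline the paper builds on), assuming a BPSK/AWGN soft-input model. The generator matrix, reliability sort, and Gaussian elimination step are generic; the paper's segmentation and partial-ordering refinements are not shown.

```python
import itertools
import numpy as np

def gf2_systematize(G, perm):
    """Gaussian elimination over GF(2) on the columns of G ordered by perm.
    Produces an identity on the k most reliable *independent* positions and
    returns the (possibly adjusted) permutation."""
    G = G[:, perm].copy() % 2
    perm = list(perm)
    k, n = G.shape
    for r in range(k):
        c = r
        while True:                                  # find a usable pivot column
            rows = np.nonzero(G[r:, c])[0]
            if rows.size:
                break
            c += 1                                   # dependent column: skip it
        G[:, [r, c]] = G[:, [c, r]]                  # move pivot column into place
        perm[r], perm[c] = perm[c], perm[r]
        piv = rows[0] + r
        G[[r, piv]] = G[[piv, r]]                    # move pivot row into place
        for i in range(k):                           # eliminate the pivot column
            if i != r and G[i, r]:
                G[i] ^= G[r]
    return G, np.array(perm)

def osd_decode(G, y, order=1):
    """Order-l OSD: hard-decide the k most reliable independent bits,
    re-encode every test error pattern of weight <= order, and keep the
    candidate closest (in Euclidean distance) to the received vector y."""
    k, n = G.shape
    perm = np.argsort(-np.abs(y))                    # most reliable bits first
    Gs, perm = gf2_systematize(G, perm)
    y_p = y[perm]
    hard = (y_p[:k] < 0).astype(np.uint8)            # BPSK: bit 0 -> +1, bit 1 -> -1
    best, best_metric = None, np.inf
    for w in range(order + 1):
        for pos in itertools.combinations(range(k), w):
            info = hard.copy()
            info[list(pos)] ^= 1                     # apply test error pattern
            cand = info @ Gs % 2                     # re-encode
            metric = np.sum((1 - 2.0 * cand - y_p) ** 2)
            if metric < best_metric:
                best, best_metric = cand, metric
    out = np.empty(n, dtype=np.uint8)
    out[perm] = best                                 # undo the reliability permutation
    return out
```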

    Rate-Flexible Fast Polar Decoders

    Polar codes have gained extensive attention during the past few years, and they have recently been selected for the next generation of wireless communications standards (5G). Successive-cancellation-based (SC-based) decoders, such as SC list (SCL) and SC flip (SCF), provide reasonable error performance for polar codes at the cost of low decoding speed. Fast SC-based decoders, such as Fast-SSC, Fast-SSCL, and Fast-SSCF, identify the special constituent codes in a polar code graph off-line, produce a list of operations, store the list in memory, and feed the list to the decoder to decode the constituent codes efficiently in order, thus increasing the decoding speed. However, the list of operations depends on the code rate, and as the rate changes a new list must be produced, making fast SC-based decoders not rate-flexible. In this paper, we propose a completely rate-flexible fast SC-based decoder that creates the list of operations directly in hardware, with low implementation complexity. We further propose a hardware architecture implementing the proposed method and show that the area occupation of the rate-flexible fast SC-based decoder in this paper is only 38% of the total area of the memory-based baseline decoder when 5G code rates are supported.
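    The off-line step the abstract refers to can be pictured as a recursive classification of the frozen-bit pattern into special constituent nodes, emitting a schedule of operations. The sketch below uses the standard Fast-SSC node types (Rate-0, Rate-1, REP, SPC) and a hypothetical min_leaf threshold; it illustrates the general technique only, not the paper's method, which generates this list directly in hardware.

```python
def build_operation_list(frozen, min_leaf=4):
    """Recursively classify subtrees of the frozen-bit pattern and emit the
    decode schedule a Fast-SSC-style decoder would follow.
    frozen: list of booleans, True where the bit is frozen (length = 2^m).
    Returns a list of (operation, start_index, length) tuples."""
    ops = []

    def classify(lo, hi):
        seg = frozen[lo:hi]
        n = hi - lo
        if all(seg):
            return ('RATE0', lo, n)              # all-frozen subtree
        if not any(seg):
            return ('RATE1', lo, n)              # all-information subtree
        if all(seg[:-1]) and not seg[-1]:
            return ('REP', lo, n)                # repetition node
        if seg[0] and not any(seg[1:]):
            return ('SPC', lo, n)                # single-parity-check node
        return None

    def visit(lo, hi):
        node = classify(lo, hi)
        if node is not None and (hi - lo) >= min_leaf:
            ops.append(node)                     # decode this subtree directly
            return
        if hi - lo == 1:
            ops.append(('RATE0' if frozen[lo] else 'RATE1', lo, 1))
            return
        mid = (lo + hi) // 2
        ops.append(('F', lo, hi - lo))           # left-branch LLR combine
        visit(lo, mid)
        ops.append(('G', lo, hi - lo))           # right-branch LLR combine
        visit(mid, hi)
        ops.append(('COMBINE', lo, hi - lo))     # partial-sum combine

    visit(0, len(frozen))
    return ops

# Usage: an 8-bit toy pattern; a real 5G pattern would come from the
# reliability/rate-matching sequence for the chosen code rate.
print(build_operation_list([True, True, True, False, True, False, False, False]))
```

    Because the schedule depends entirely on the frozen-bit pattern, changing the code rate changes the list, which is the rate-flexibility problem the paper addresses by building the list on the fly in hardware.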

    Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard, if not impossible, to implement. In this case, we may wish to trade error performance for a reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. The decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate codewords, one at a time, for testing; (2) a sufficient condition for testing a candidate codeword for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) codeword.
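    To see what the sub-trellis approach avoids, the following is a minimal sketch of full-trellis ML (Viterbi) decoding over the syndrome (BCJR-Wolf) trellis of a block code, with a (7,4) Hamming code and a BPSK/AWGN received vector as placeholder inputs. The state space grows as 2^(n-k), which is precisely why full-trellis decoding is impractical for long codes and a low-weight sub-trellis search is attractive.

```python
import numpy as np

def viterbi_block_decode(H, y):
    """ML soft-decision decoding of a binary linear block code over its full
    syndrome (BCJR-Wolf) trellis. States are partial syndromes; only paths
    ending in the all-zero syndrome correspond to codewords.
    H: (n-k) x n parity-check matrix; y: received BPSK values (bit 0 -> +1)."""
    m, n = H.shape
    # survivor per state: (accumulated squared-distance metric, bits decided so far)
    survivors = {(0,) * m: (0.0, [])}
    for i in range(n):
        col = [int(c) for c in H[:, i]]
        nxt = {}
        for state, (metric, bits) in survivors.items():
            for b in (0, 1):
                new_state = tuple(s ^ (b & c) for s, c in zip(state, col))
                new_metric = metric + (y[i] - (1 - 2 * b)) ** 2
                if new_state not in nxt or new_metric < nxt[new_state][0]:
                    nxt[new_state] = (new_metric, bits + [b])
        survivors = nxt
    return np.array(survivors[(0,) * m][1], dtype=np.uint8)

# Usage with the (7,4) Hamming code and a noisy all-zero codeword.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)
y = np.array([0.9, 1.1, -0.2, 0.8, 1.0, 0.7, 1.2])  # one unreliable position
print(viterbi_block_decode(H, y))                    # -> the all-zero codeword
```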

    Novel Methods in the Improvement of Turbo Codes and their Decoding

    The performance of turbo codes can often be improved by improving their weight spectra. Methods of producing the weight spectra of turbo codes have been investigated, and many improvements were made to refine the techniques. A much faster method of weight spectrum evaluation has been developed that allows calculation of weight spectra within a few minutes on a typical desktop PC. Simulation results show that new high-performance turbo codes are produced by the optimisation methods presented. Two further important areas of concern are the code itself and its decoding. Improvements to the code are accomplished through optimisation of the interleaver and the choice of constituent coders. Optimisation of interleavers can also be accomplished automatically using the algorithms described in this work. The addition of a CRC as an outer code proved to offer a vast improvement in the overall code performance. This was achieved without any loss of code rate, as the turbo code is punctured to make way for the CRC remainder. The results show a gain of 0.4 dB compared to the non-CRC (1014,676) turbo code. Another improvement to the decoding performance was achieved through a combination of MAP decoding and ordered reliability decoding. The simulations show performance within just 0.2 dB of the Shannon limit. The same code without ordered reliability decoding has a performance curve that is 0.6 dB from the Shannon limit. In situations where the MAP decoder fails to converge, ordered reliability decoding succeeds in producing a codeword much closer to the received vector, often the correct codeword. The ordered reliability decoding adds to the computational complexity but lends itself to FPGA implementation.
    Engineering and Physical Sciences Research Council (EPSRC)
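    The rate-neutral CRC attachment described above can be sketched as follows: compute the CRC remainder of the information bits and carry it in parity positions that the turbo code punctures anyway, so the frame length, and hence the rate, is unchanged. The CRC-8 polynomial, frame sizes, and puncturing positions below are illustrative assumptions only; the thesis's actual CRC length, puncturing pattern, and (1014,676) code are not reproduced here.

```python
def crc_remainder(bits, poly=0x107, width=8):
    """Bitwise CRC over GF(2): long-divide the message (followed by `width`
    zeros) by the generator polynomial and return the remainder bits.
    poly=0x107 (x^8 + x^2 + x + 1) is just an illustrative choice."""
    reg = 0
    for b in bits + [0] * width:
        reg = (reg << 1) | b
        if reg >> width:                      # top bit set: subtract (XOR) poly
            reg ^= poly
    return [(reg >> i) & 1 for i in range(width - 1, -1, -1)]

def attach_crc_by_puncturing(info_bits, parity_bits, puncture_positions):
    """Illustration of rate-neutral CRC attachment: compute the CRC of the
    information bits and write it into parity positions that the turbo code
    punctures anyway, so the transmitted frame length is unchanged."""
    crc = crc_remainder(list(info_bits))
    assert len(puncture_positions) == len(crc)
    out = list(parity_bits)
    for pos, bit in zip(puncture_positions, crc):
        out[pos] = bit                        # punctured parity -> CRC bit
    return list(info_bits) + out

# Usage with toy sizes: 16 info bits, 16 parity bits, 8 punctured positions.
info = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
parity = [0] * 16
frame = attach_crc_by_puncturing(info, parity, puncture_positions=list(range(8)))
print(len(frame), crc_remainder(info))
```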