
    PAC Code Rate-Profile Design Using Search-Constrained Optimization Algorithms

    In this paper, we introduce a novel rate-profile design based on search-constrained optimization techniques to assess the performance of polarization-adjusted convolutional (PAC) codes under Fano (sequential) decoding. The results demonstrate that the resulting PAC codes offer significantly reduced computational complexity compared with a construction based on a conventional genetic algorithm, with no loss in error-correction performance. As the fitness function of our algorithm, we propose an adaptive successive cancellation list decoding algorithm to determine the weight distribution of the rate profiles. The simulation results indicate that, for a PAC(256, 128) code, only 8% of the population requires its fitness function to be evaluated with a large list size, an improvement of almost 92% over a conventional evolutionary algorithm. For a PAC(64, 32) code, this improvement is about 99%. We also plot the performance of the high-rate PAC(128, 105) and PAC(64, 51) codes, and the results show that they outperform codes constructed by other algorithms.
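
    The adaptive-fitness idea, evaluating most candidates cheaply and reserving the expensive large-list evaluation for a small, promising fraction of the population, can be illustrated with a toy evolutionary loop. The Python sketch below is only schematic: the two fitness functions are hypothetical stand-ins for small-list and large-list SCL-based weight-distribution estimates, and the mutation and selection rules are generic rather than those of the paper.

        import random

        N, K = 64, 32          # toy PAC(64, 32) dimensions
        POP, GENS = 20, 10     # small population / generation count for illustration

        def random_profile():
            """A rate profile is modeled as the set of K information indices out of N."""
            return frozenset(random.sample(range(N), K))

        # Hypothetical stand-ins for the real fitness evaluation (an SCL-based
        # estimate of the low-weight distribution): cheap_fitness mimics a noisy
        # small-list run, expensive_fitness a reliable large-list run.
        def cheap_fitness(profile):
            return sum(bin(i).count("1") for i in profile) + random.gauss(0, 2.0)

        def expensive_fitness(profile):
            return sum(bin(i).count("1") for i in profile)

        def evaluate(population, keep_fraction=0.1):
            # Score everyone cheaply, then re-score only the top fraction with the
            # expensive evaluation -- the adaptive-evaluation idea in miniature.
            scored = sorted(population, key=cheap_fitness, reverse=True)
            promising = scored[:max(1, int(keep_fraction * len(scored)))]
            return sorted(promising, key=expensive_fitness, reverse=True)

        def mutate(profile):
            """Swap one information index for one currently-frozen index."""
            prof = set(profile)
            out_bit = random.choice(sorted(prof))
            in_bit = random.choice(sorted(set(range(N)) - prof))
            prof.remove(out_bit)
            prof.add(in_bit)
            return frozenset(prof)

        population = [random_profile() for _ in range(POP)]
        for _ in range(GENS):
            elites = evaluate(population)
            population = elites + [mutate(random.choice(elites)) for _ in range(POP - len(elites))]

        print("best toy profile:", sorted(evaluate(population)[0])[:8], "...")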

    Polarization-Adjusted Convolutional (PAC) Codes as a Concatenation of Inner Cyclic and Outer Polar- and Reed-Muller-like Codes

    Polarization-adjusted convolutional (PAC) codes are a new family of linear block codes that can perform close to the theoretical bounds in the short block-length regime. These codes combine polar coding and convolutional coding. In this study, we show that PAC codes are equivalent to a new class of codes consisting of inner cyclic codes and outer polar- and Reed-Muller-like codes. We leverage the properties of cyclic codes to establish that PAC codes outperform polar- and Reed-Muller-like codes in terms of minimum distance.
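
    For context, PAC encoding itself is a rate-1 convolutional pre-transform followed by the polar transform $\mathbf{G}_N = \mathbf{F}^{\otimes n}$ with kernel $\mathbf{F} = [[1,0],[1,1]]$. The Python sketch below shows this standard pipeline on a toy PAC(8, 4) example; the rate profile is an arbitrary illustrative choice and the generator polynomial is the commonly cited octal 133, neither taken from this particular paper.

        import numpy as np

        def polar_transform(u):
            """Apply the polar transform (Kronecker powers of [[1,0],[1,1]]) via the butterfly recursion."""
            x = u.copy()
            step = 1
            while step < len(x):
                for i in range(0, len(x), 2 * step):
                    x[i:i + step] ^= x[i + step:i + 2 * step]
                step *= 2
            return x

        def conv_precode(v, c):
            """Rate-1 convolutional pre-transform: u_i = sum_j c_j * v_{i-j} (mod 2)."""
            u = np.zeros_like(v)
            for i in range(len(v)):
                for j, cj in enumerate(c):
                    if cj and i - j >= 0:
                        u[i] ^= v[i - j]
            return u

        def pac_encode(data, rate_profile, c):
            """Place data bits on the profiled positions, precode, then polar-transform."""
            v = np.zeros(len(rate_profile), dtype=np.uint8)
            v[np.flatnonzero(rate_profile)] = data
            return polar_transform(conv_precode(v, c))

        # Toy PAC(8, 4): the profile is an arbitrary illustrative choice; the
        # polynomial corresponds to octal 133, a common choice in the PAC literature.
        profile = np.array([0, 0, 0, 1, 0, 1, 1, 1], dtype=np.uint8)
        c = [1, 0, 1, 1, 0, 1, 1]
        data = np.array([1, 0, 1, 1], dtype=np.uint8)
        print("codeword:", pac_encode(data, profile, c))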

    On the Weight Spectrum Improvement of Pre-transformed Reed-Muller Codes and Polar Codes

    Pre-transformation with an upper-triangular matrix (including cyclic redundancy check (CRC), parity-check (PC), and polarization-adjusted convolutional (PAC) codes) significantly improves the weight spectrum of Reed-Muller (RM) codes and polar codes. However, a theoretical analysis quantifying the improvement has been missing. In this paper, we provide an asymptotic analysis of the number of low-weight codewords of the original and pre-transformed RM codes, respectively, and prove that pre-transformation significantly reduces the number of low-weight codewords, even in the order sense. For polar codes, we prove that the average number of minimum-weight codewords does not increase after pre-transformation. Both results confirm the advantages of pre-transformation.
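
    The effect described above can be reproduced numerically at toy scale. The Python sketch below brute-forces the minimum weight and its multiplicity for a length-16 code, first for rows of $\mathbf{G}_N$ restricted to an information set and then after applying a random upper-triangular pre-transform with unit diagonal; the information set is a hypothetical choice and the full enumeration is exponential in the dimension, so this is only feasible at very short lengths.

        import itertools
        import numpy as np

        def polar_transform_matrix(n):
            """G_N as the n-fold Kronecker power of [[1, 0], [1, 1]] over GF(2)."""
            F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
            G = np.array([[1]], dtype=np.uint8)
            for _ in range(n):
                G = np.kron(G, F)
            return G

        def min_weight_count(rows):
            """Enumerate the span of the given rows; return (w_min, A_wmin)."""
            K = rows.shape[0]
            best, count = rows.shape[1] + 1, 0
            for msg in itertools.product([0, 1], repeat=K):
                if not any(msg):
                    continue
                w = int(((np.array(msg, dtype=np.uint8) @ rows) % 2).sum())
                if w < best:
                    best, count = w, 1
                elif w == best:
                    count += 1
            return best, count

        N, n = 16, 4
        G = polar_transform_matrix(n)
        info = [6, 7, 10, 11, 12, 13, 14, 15]      # hypothetical information set

        rng = np.random.default_rng(0)
        T = np.triu(rng.integers(0, 2, size=(N, N), dtype=np.uint8), k=1)
        np.fill_diagonal(T, 1)                     # upper triangular, unit diagonal

        print("plain rows of G_N     (w_min, A_wmin):", min_weight_count(G[info, :]))
        print("pre-transformed T*G_N (w_min, A_wmin):", min_weight_count(((T @ G) % 2)[info, :]))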

    On the Weight Distribution of Weights Less than $2w_{\min}$ in Polar Codes

    The number of low-weight codewords is critical to the performance of error-correcting codes. In 1970, Kasami and Tokura characterized the codewords of Reed-Muller (RM) codes whose weights are less than $2w_{\min}$, where $w_{\min}$ denotes the minimum weight. In this paper, we extend their results to decreasing polar codes. We present closed-form expressions for the number of codewords in decreasing polar codes with weights less than $2w_{\min}$. Moreover, the proposed enumeration algorithm runs in polynomial time with respect to the code length.
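
    The target quantity, the number of codewords with weight below $2w_{\min}$, can be checked by brute force at toy lengths. The Python sketch below does exactly that for a length-16 code with a hypothetical information set, as a naive exponential-time baseline for the closed-form and polynomial-time enumeration claimed above.

        from collections import Counter
        import itertools
        import numpy as np

        # Build G_16 as the 4-fold Kronecker power of [[1, 0], [1, 1]].
        F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
        G = np.array([[1]], dtype=np.uint8)
        for _ in range(4):
            G = np.kron(G, F)

        info = [6, 7, 10, 11, 12, 13, 14, 15]      # hypothetical information set
        rows = G[info, :]

        # Exhaustive weight enumerator over the 2^K codewords (toy sizes only).
        spectrum = Counter()
        for msg in itertools.product([0, 1], repeat=len(info)):
            cw = (np.array(msg, dtype=np.uint8) @ rows) % 2
            spectrum[int(cw.sum())] += 1

        w_min = min(w for w in spectrum if w > 0)
        below = {w: a for w, a in spectrum.items() if 0 < w < 2 * w_min}
        print("w_min =", w_min, "; codewords with weight < 2*w_min:", below)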

    Error Coefficient-reduced Polar/PAC Codes

    Polar codes are normally designed based on the reliability of the sub-channels in the polarized vector channel. There are various methods, of diverse complexity and accuracy, for evaluating the reliability of the sub-channels. However, designing polar codes solely based on sub-channel reliability may result in poor Hamming distance properties. In this work, we propose a different approach to designing the information set for polar codes and PAC codes, where the objective is to reduce the number of minimum-weight codewords (a.k.a. the error coefficient) of a code designed for maximum reliability. This approach is based on a coset-wise characterization of the rows of the polar transform $\mathbf{G}_N$ involved in the formation of the minimum-weight codewords. Our analysis capitalizes on properties of the polar transform expressed in terms of its row and column indices. The numerical results show that the designed codes outperform PAC codes and CRC-Polar codes at practical block error rates of $10^{-2}$ to $10^{-3}$. Furthermore, a by-product of the combinatorial properties analyzed in this paper is an alternative enumeration method for the minimum-weight codewords.
    Comment: 19 pages, 10 figures, 4 tables, 2 listings
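
    To see why the error coefficient is the natural objective, recall the standard truncated union bound for BPSK signaling over an AWGN channel (a textbook estimate, not a result of this paper):

        P_e \;\lesssim\; \sum_{d \ge d_{\min}} A_d \, Q\!\left(\sqrt{2 d R \, E_b/N_0}\right) \;\approx\; A_{d_{\min}} \, Q\!\left(\sqrt{2 d_{\min} R \, E_b/N_0}\right),

    so at moderate-to-high SNR the block error rate is roughly proportional to the error coefficient $A_{d_{\min}}$; reducing $A_{d_{\min}}$ at a fixed $d_{\min}$ and rate $R$ therefore translates directly into gains in the targeted $10^{-2}$ to $10^{-3}$ BLER range.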

    PAC Codes: Sequential Decoding vs List Decoding

    In the Shannon lecture at the 2019 International Symposium on Information Theory (ISIT), Arıkan proposed to employ a one-to-one convolutional transform as a pre-coding step before the polar transform. The codes resulting from this concatenation are called polarization-adjusted convolutional (PAC) codes. In this scheme, a polar mapper and demapper are deployed as pre- and post-processing devices around a memoryless channel, providing polarized information to an outer decoder and thereby improving the error-correction performance of the outer code. In this paper, list decoding and sequential decoding (including Fano decoding and stack decoding) are first adapted to decode PAC codes. Then, to reduce the complexity of sequential decoding of PAC/polar codes, we propose (i) an adaptive heuristic metric, (ii) tree-search constraints for backtracking that avoid exploration of unlikely sub-paths, and (iii) tree-search strategies consistent with the pattern of error occurrence in polar codes. These reduce the average decoding time complexity by 50% to 80%, at the cost of 0.05 to 0.3 dB degradation in error-correction performance in the FER = $10^{-3}$ range, respectively, relative to not applying the corresponding search strategies. Additionally, as an important ingredient in Fano decoding of PAC/polar codes, an efficient computation method for the intermediate LLRs and partial sums is provided; this method is effective in backtracking and avoids storing intermediate information or restarting the decoding process. Finally, all three decoding algorithms are compared in terms of performance, complexity, and resource requirements.
    Comment: 14 pages, 12 figures, 1 table, 6 algorithms
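
    As a structural illustration of stack (best-first) sequential decoding, the Python sketch below searches a length-N binary decision tree with a priority queue, a stand-in Fano-style branch metric, and a per-level visit cap playing the role of a search constraint. The metric, the constraint, and the toy channel model are generic placeholders, not the adaptive heuristic metric or the specific backtracking rules proposed in the paper.

        import heapq
        import math
        import random

        def toy_bit_metric(llr, bit):
            """Hypothetical branch metric: reward agreement with the channel LLR."""
            p1 = 1.0 / (1.0 + math.exp(llr))              # P(bit = 1), with llr = log P(y|0)/P(y|1)
            p = p1 if bit == 1 else 1.0 - p1
            return math.log(max(p, 1e-12)) + math.log(2.0)  # Fano-style bias term

        def stack_decode(llrs, max_visits_per_level=64):
            N = len(llrs)
            visits = [0] * N
            heap = [(0.0, ())]                             # entries: (-path_metric, partial path)
            while heap:
                neg_metric, path = heapq.heappop(heap)     # best partial path so far
                level = len(path)
                if level == N:
                    return list(path)                      # first full-length path wins
                if visits[level] >= max_visits_per_level:
                    continue                               # search constraint: cap work per level
                visits[level] += 1
                for bit in (0, 1):                         # extend the path by one decision bit
                    m = -neg_metric + toy_bit_metric(llrs[level], bit)
                    heapq.heappush(heap, (-m, path + (bit,)))
            return None

        random.seed(1)
        sent = (0, 1, 1, 0, 1, 0, 0, 1)
        llrs = [random.gauss(2.0 if b == 0 else -2.0, 1.0) for b in sent]
        print("decoded bits:", stack_decode(llrs))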