14,224 research outputs found

    Fast-SSC-Flip Decoding of Polar Codes

    Polar codes are widely considered one of the most exciting recent discoveries in channel coding. For short to moderate block lengths, their error-correction performance under list decoding can outperform that of other modern error-correcting codes. However, high-speed list-based decoders with moderate complexity are challenging to implement. Successive-cancellation (SC)-flip decoding was shown to achieve error-correction performance competitive with that of list decoding with a small list size, at a fraction of the complexity, but it suffers from a variable execution time and a higher worst-case latency. In this work, we show how to modify the state-of-the-art high-speed SC decoding algorithm to incorporate the SC-flip ideas. We present the algorithmic improvements along with average execution-time results tailored to a hardware implementation. The results show that the proposed fast-SSC-flip algorithm achieves a decoding speed close to an order of magnitude higher than that of previous works while retaining comparable error-correction performance.

    Comment: 5 pages, 3 figures; appeared at IEEE Wireless Commun. and Netw. Conf. (WCNC) 2018
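
    The SC-flip idea this abstract builds on is easy to summarize in code. The sketch below, assuming hypothetical stand-ins `sc_decode` and `crc_check` rather than a real polar-code implementation, shows the control loop that causes the variable execution time: one SC pass when the CRC passes, and up to `max_flips` extra passes, each flipping one of the least reliable hard decisions, when it fails. It is a sketch of plain SC-flip, not of the paper's fast-SSC-flip variant.

```python
# Hedged sketch of the SC-flip control loop; the decoder and CRC below
# are illustrative stand-ins, not an actual polar-code implementation.
import numpy as np

def crc_check(bits: np.ndarray) -> bool:
    """Stand-in CRC: even parity over the estimated bits."""
    return int(bits.sum()) % 2 == 0

def sc_decode(llrs: np.ndarray, flip_index=None):
    """Stand-in for successive-cancellation decoding.

    Returns hard decisions plus |LLR| as a per-bit reliability metric;
    `flip_index` forces the opposite decision at one position, which is
    exactly the hook the SC-flip loop needs.
    """
    bits = (llrs < 0).astype(int)           # hard decision per bit
    if flip_index is not None:
        bits[flip_index] ^= 1               # flip the selected decision
    return bits, np.abs(llrs)

def sc_flip_decode(llrs: np.ndarray, max_flips: int = 4) -> np.ndarray:
    bits, reliability = sc_decode(llrs)     # initial SC pass
    if crc_check(bits):
        return bits                         # common fast path: one pass
    # CRC failed: retry, flipping the least reliable decisions in turn.
    for idx in np.argsort(reliability)[:max_flips]:
        retry, _ = sc_decode(llrs, flip_index=int(idx))
        if crc_check(retry):
            return retry
    return bits                             # give up; keep the first attempt

# The second LLR is weak, the initial parity check fails, and one flip fixes it.
print(sc_flip_decode(np.array([2.1, -0.3, 1.7])))
```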

    List Decoding Barnes-Wall Lattices

    The question of list decoding error-correcting codes over finite fields (under the Hamming metric) has been widely studied in recent years. Motivated by the similar discrete linear structure of linear codes and point lattices in ℝ^N, and their many shared applications across complexity theory, cryptography, and coding theory, we initiate the study of list decoding for lattices. Namely: for a lattice L ⊆ ℝ^N, given a target vector r ∈ ℝ^N and a distance parameter d, output the set of all lattice points w ∈ L that are within distance d of r. In this work we focus on combinatorial and algorithmic questions related to list decoding for the well-studied family of Barnes-Wall lattices. Our main contributions are twofold:
    1. We give tight (up to polynomials) combinatorial bounds on the worst-case list size, showing it to be polynomial in the lattice dimension for any error radius bounded away from the lattice's minimum distance (in the Euclidean norm).
    2. Building on the unique decoding algorithm of Micciancio and Nicolosi (ISIT '08), we give a list-decoding algorithm that runs in time polynomial in the lattice dimension and worst-case list size, for any error radius. Moreover, our algorithm is highly parallelizable, and with sufficiently many processors can run in parallel time only poly-logarithmic in the lattice dimension.
    In particular, our results imply a polynomial-time list-decoding algorithm for any error radius bounded away from the minimum distance, thus beating a typical barrier for natural error-correcting codes posed by the Johnson radius.
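
    To make the problem statement concrete, here is a brute-force sketch of the list-decoding task itself, written against an explicit basis matrix. The names `list_decode_bruteforce` and `coeff_bound` are illustrative; the search over integer coefficients is exponential in the dimension, and the paper's contribution is precisely that Barnes-Wall lattices admit a polynomial-time algorithm instead.

```python
# Hedged brute-force sketch of lattice list decoding: given a basis B of
# a lattice L in R^N, a target r, and a radius d, return all lattice
# points within Euclidean distance d of r. Illustrates the problem only.
import itertools
import numpy as np

def list_decode_bruteforce(B: np.ndarray, r: np.ndarray, d: float,
                           coeff_bound: int = 3) -> list:
    """Return all points w = B @ z (integer z) with ||w - r|| <= d.

    `coeff_bound` limits the search box for the coefficients z; it must
    be chosen large enough that the box covers the ball of radius d.
    """
    n = B.shape[1]
    out = []
    for z in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        w = B @ np.array(z)
        if np.linalg.norm(w - r) <= d:      # keep points inside the ball
            out.append(w)
    return out

# Example: the integer lattice Z^2, target near the origin.
hits = list_decode_bruteforce(np.eye(2), np.array([0.4, 0.1]), d=1.0)
print(len(hits), "lattice points in the list")   # finds 3 points
```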

    It'll probably work out: improved list-decoding through random operations

    In this work, we introduce a framework to study the effect of random operations on the combinatorial list-decodability of a code. The operations we consider correspond to row and column operations on the matrix obtained from the code by stacking the codewords together as columns. This captures many natural transformations on codes, such as puncturing, folding, and taking subcodes; we show that many such operations can improve the list-decoding properties of a code. There are two main points to this. First, our goal is to advance our (combinatorial) understanding of list-decodability, by understanding what structure (or lack thereof) is necessary to obtain it. Second, we use our more general results to obtain a few interesting corollaries for list decoding:
    (1) We show the existence of binary codes that are combinatorially list-decodable from a 1/2 − ϵ fraction of errors with optimal rate Ω(ϵ²) that can be encoded in linear time.
    (2) We show that any code with Ω(1) relative distance, when randomly folded, is combinatorially list-decodable from a 1 − ϵ fraction of errors with high probability. This formalizes the intuition for why the folding operation has been successful in obtaining codes with optimal list-decoding parameters; previously, all arguments used algebraic methods and worked only with specific codes.
    (3) We show that any code which is list-decodable with suboptimal list sizes has many subcodes with near-optimal list sizes, while retaining the error-correcting capabilities of the original code. This generalizes recent results where subspace evasive sets have been used to reduce the list sizes of codes that achieve list-decoding capacity.
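
    The row operations the abstract refers to are easy to state concretely. The sketch below, with illustrative helpers `random_puncture` and `random_fold`, acts on the matrix whose columns are the codewords: puncturing keeps a random subset of coordinates (rows), and folding randomly permutes the coordinates and groups them into s-tuples, so each codeword becomes a shorter word over a larger alphabet. This only illustrates the operations themselves; the paper's results about when they improve list-decodability are combinatorial.

```python
# Hedged sketch of random row operations on the codeword matrix
# (codewords stacked as columns), as described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def random_puncture(M: np.ndarray, keep: int) -> np.ndarray:
    """Keep `keep` uniformly random coordinates (rows) of every codeword."""
    rows = rng.choice(M.shape[0], size=keep, replace=False)
    return M[np.sort(rows)]

def random_fold(M: np.ndarray, s: int) -> np.ndarray:
    """Randomly permute coordinates, then group rows into s-tuples.

    The result has shape (n // s, s, num_codewords): each codeword
    becomes a length-(n // s) word whose symbols are s-tuples.
    """
    n = M.shape[0]
    perm = rng.permutation(n)[: (n // s) * s]   # drop a remainder, if any
    return M[perm].reshape(n // s, s, M.shape[1])

# Example: the four codewords {0000, 0101, 1010, 1111}, stacked as columns.
M = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 0, 1, 1],
              [0, 1, 0, 1]])
print(random_puncture(M, keep=2))           # 2 surviving coordinates
print(random_fold(M, s=2).shape)            # (2, 2, 4): length-2 folded words
```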