
    On the structure of optimal error-correcting codes

    Kabatyanskii and Panchenko asked whether two sets of size 10 consisting of binary 7-tuples exist such that all 100 sums with one element from each set are distinct. This question is answered here in the negative by showing that the existence of such sets would imply that the (unique) binary single-error-correcting code of length 9 and size 40 has a certain property, which that code does not have.
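
    The distinct-sums condition is simple to test by brute force. Here is a minimal Python sketch, assuming binary 7-tuples are encoded as 7-bit integers (coordinatewise mod-2 sums become XORs); the candidate sets below are toy placeholders, not the size-10 sets the paper rules out.

```python
from itertools import product

def all_sums_distinct(A, B):
    """Check whether all |A|*|B| coordinatewise mod-2 sums (XORs of
    the 7-bit integer encodings) are pairwise distinct."""
    sums = {a ^ b for a, b in product(A, B)}
    return len(sums) == len(A) * len(B)

# Toy sets of binary 7-tuples -- illustrative only; the paper shows
# that no such pair of sets of size 10 exists.
A = {0b0000000, 0b0000001, 0b0000010}
B = {0b0000100, 0b0001000, 0b0010000}
print(all_sums_distinct(A, B))  # True for this small example
```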

    Error-Correcting Data Structures

    We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This new model is the common generalization of (static) data structures and locally decodable error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable codes for s-bit strings. Comment: 15 pages, LaTeX; an abridged version will appear in the Proceedings of the STACS 2009 conference.
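
    The space/probe tradeoff can be illustrated with a deliberately naive construction: store the characteristic vector of the set under a repetition code and answer a membership query with a majority vote over a few probes. This is only a toy sketch of the model, not the paper's construction (nontrivial parameters require locally decodable codes); all names here are made up.

```python
def encode_membership(n, subset, r=5):
    """Toy error-correcting data structure for Membership: the
    characteristic vector of `subset`, every bit repeated r times."""
    return [1 if i in subset else 0 for i in range(n) for _ in range(r)]

def query(structure, i, r=5):
    """Answer 'is i in the set?' using r probes and a majority vote;
    tolerates up to (r - 1) // 2 corrupted probes per block."""
    probes = structure[i * r:(i + 1) * r]
    return 2 * sum(probes) > r

data = encode_membership(8, {1, 5})
data[1 * 5] ^= 1                       # adversary flips one stored bit
print(query(data, 1), query(data, 2))  # True False -- still correct
```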

    It'll probably work out: improved list-decoding through random operations

    In this work, we introduce a framework to study the effect of random operations on the combinatorial list-decodability of a code. The operations we consider correspond to row and column operations on the matrix obtained from the code by stacking the codewords together as columns. This captures many natural transformations on codes, such as puncturing, folding, and taking subcodes; we show that many such operations can improve the list-decoding properties of a code. There are two main points to this. First, our goal is to advance our (combinatorial) understanding of list-decodability, by understanding what structure (or lack thereof) is necessary to obtain it. Second, we use our more general results to obtain a few interesting corollaries for list decoding: (1) We show the existence of binary codes that are combinatorially list-decodable from a 1/2 − ϵ fraction of errors with optimal rate Ω(ϵ²) that can be encoded in linear time. (2) We show that any code with Ω(1) relative distance, when randomly folded, is combinatorially list-decodable from a 1 − ϵ fraction of errors with high probability. This formalizes the intuition for why the folding operation has been successful in obtaining codes with optimal list-decoding parameters; previously, all arguments used algebraic methods and worked only with specific codes. (3) We show that any code which is list-decodable with suboptimal list sizes has many subcodes which have near-optimal list sizes, while retaining the error-correcting capabilities of the original code. This generalizes recent results where subspace evasive sets have been used to reduce list sizes of codes that achieve list-decoding capacity.
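
    The matrix viewpoint makes these operations concrete: stack the codewords as columns, then puncturing deletes rows and folding groups rows into blocks. A small numpy sketch under that framing (toy random code, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.integers(0, 2, size=(12, 8))  # 8 codewords of length 12, as columns

def puncture(C, keep):
    """Puncturing: keep only a subset of coordinates (rows)."""
    return C[keep, :]

def random_fold(C, m):
    """Random folding: permute the coordinates, then group them into
    blocks of m, so each column becomes a word over {0,1}^m."""
    perm = rng.permutation(C.shape[0])
    return C[perm, :].reshape(C.shape[0] // m, m, C.shape[1])

print(puncture(C, [0, 2, 5, 7, 9, 11]).shape)  # (6, 8): length 12 -> 6
print(random_fold(C, 3).shape)                 # (4, 3, 8): 4 folded symbols
```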

    Improve the Usability of Polar Codes: Code Construction, Performance Enhancement and Configurable Hardware

    Error-correcting codes (ECC) have been widely used for forward error correction (FEC) in modern communication systems to dramatically reduce the signal-to-noise ratio (SNR) needed to achieve a given bit error rate (BER). Newly invented polar codes have attracted much interest because of their capacity-achieving potential, efficient encoder and decoder implementation, and flexible architecture design space. This dissertation aims to improve the usability of polar codes by providing a practical code design method, new approaches to improve the performance of polar codes, and a configurable hardware design that adapts to various specifications.

    State-of-the-art polar codes are used to achieve extremely low error rates. In this work, high-performance FPGAs are used to prototype polar decoders in order to catch rare-case errors for error-correcting performance verification and error analysis. To discover the polarization characteristics and error patterns of polar codes, an FPGA emulation platform for belief-propagation (BP) decoding is built by a semi-automated construction flow. The FPGA-based emulation achieves significant speedup in large-scale experiments involving trillions of data frames. The platform is a key enabler of this work.

    The frozen set selection of polar codes, known as bit selection, is critical to their error-correcting performance. A simulation-based in-order bit selection method is developed to evaluate the error rate of each bit using Monte Carlo simulations; the frozen set is then selected based on the bit reliability ranking (see the sketch after this abstract). The resulting code construction exhibits up to 1 dB of coding gain with respect to conventional bit selection.

    To further improve the coding gain of the BP decoder for low-error-rate applications, the decoding error mechanisms are studied and analyzed, and the errors are classified based on their distinct signatures. Error detection is enabled by low-cost CRC concatenation, and post-processing algorithms targeting each type of error are designed to mitigate the vast majority of the decoding errors. The post-processor incurs only a small implementation overhead, but it provides more than an order of magnitude improvement in error-correcting performance.

    The regularity of the BP decoder structure offers many hardware architecture choices. Silicon area, power consumption, throughput, and latency can be traded off to reach the optimal design points for practical use cases. A comprehensive design space exploration reveals several practical architectures at different design points, and the scalability of each architecture is evaluated based on the implementation candidates. For dynamic communication channels, such as wireless channels in the upcoming 5G applications, multiple codes of different lengths and code rates are needed to fit varying channel conditions. To minimize implementation cost, a universal decoder architecture is proposed to support multiple codes through hardware reuse. A 40nm length- and rate-configurable polar decoder ASIC is demonstrated to fit various communication environments and service requirements.

    Ph.D. dissertation, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/140817/1/shuangsh_1.pd
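
    The bit selection step reduces to ranking the synthetic channels by their simulated error rates and freezing the least reliable positions. A hedged sketch of that final selection step (the error counts would come from the Monte Carlo / FPGA runs described above; the numbers here are placeholders):

```python
import numpy as np

def select_frozen_set(error_counts, num_trials, k):
    """Rank bit positions by estimated error rate; the k most reliable
    positions carry information, the rest form the frozen set."""
    rates = np.asarray(error_counts) / num_trials
    order = np.argsort(rates)                 # most reliable first
    return set(order[:k].tolist()), set(order[k:].tolist())

# Placeholder counts for a toy N=8 polar code with k=4 information bits.
counts = [950, 400, 500, 60, 620, 90, 120, 5]
info, frozen = select_frozen_set(counts, num_trials=1000, k=4)
print(sorted(info), sorted(frozen))           # [3, 5, 6, 7] [0, 1, 2, 4]
```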

    Quantum Coding with Entanglement

    Quantum error-correcting codes will be the ultimate enabler of a future quantum computing or quantum communication device. This theory forms the cornerstone of practical quantum information theory. We provide several contributions to the theory of quantum error correction--mainly to the theory of "entanglement-assisted" quantum error correction, where the sender and receiver share entanglement in the form of entangled bits (ebits) before quantum communication begins. Our first contribution is an algorithm for encoding and decoding an entanglement-assisted quantum block code. We then give several formulas that determine the optimal number of ebits for an entanglement-assisted code. The major contribution of this thesis is the development of the theory of entanglement-assisted quantum convolutional coding. A convolutional code is one that has memory and acts on an incoming stream of qubits. We explicitly show how to encode and decode a stream of information qubits with the help of ancilla qubits and ebits. Our entanglement-assisted convolutional codes include those with a Calderbank-Shor-Steane structure and those with a more general structure. We then formulate convolutional protocols that correct errors in noisy entanglement. Our final contribution is a unification of the theory of quantum error correction--these unified convolutional codes exploit all of the known resources for quantum redundancy. Comment: Ph.D. Thesis, University of Southern California, 2008; 193 pages, 2 tables, 12 figures, 9 limericks. Available at http://digitallibrary.usc.edu/search/controller/view/usctheses-m1491.htm
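
    In the CSS case, one ebit formula from this line of work reads c = rank over GF(2) of H_X H_Z^T, where H_X and H_Z are the binary parity-check matrices of the two classical codes used in the construction. A minimal sketch of that computation, with toy matrices as placeholders:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]               # clear the rest of the column
        rank += 1
    return rank

# Toy CSS pair: ebits needed c = rank_GF(2)(Hx @ Hz^T).
Hx = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=np.uint8)
Hz = np.array([[1, 0, 1, 0], [0, 1, 0, 1]], dtype=np.uint8)
print("ebits:", gf2_rank(Hx @ Hz.T))  # 1 for this toy pair
```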

    Spin squeezed GKP codes for quantum error correction in atomic ensembles

    GKP codes encode a qubit in displaced phase space combs of a continuous-variable (CV) quantum system and are useful for correcting a variety of high-weight photonic errors. Here we propose atomic ensemble analogues of the single-mode CV GKP code by using the quantum central limit theorem to pull back the phase space structure of a CV system to the compact phase space of a quantum spin system. We study the optimal recovery performance of these codes under error channels described by stochastic relaxation and isotropic ballistic dephasing processes, using the diversity combining approach for calculating channel fidelity. We find that the spin GKP codes outperform other spin system codes such as cat codes or binomial codes. Our spin GKP codes based on the two-axis countertwisting interaction and superpositions of SU(2) coherent states are direct spin analogues of the finite-energy CV GKP codes, whereas our codes based on one-axis twisting do not yet have well-studied CV analogues. An implementation of the spin GKP codes is proposed which uses the linear combination of unitaries method, applicable to both the CV and spin GKP settings. Finally, we discuss a fault-tolerant approximate gate set for quantum computing with spin GKP-encoded qubits, obtained by translating gates from the CV GKP setting using the quantum central limit theorem. Comment: More details added to the previous versions, with more figures.
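
    The one-axis twisting interaction mentioned above can be explored numerically for a single collective spin: evolve a spin coherent state under exp(-i μ Jz²) and check that a quadrature variance drops below the coherent-state level. A minimal sketch with arbitrary toy parameters; it illustrates the squeezing resource only, not the full spin GKP encoding.

```python
import numpy as np
from scipy.linalg import expm

def collective_spin_ops(N):
    """Jx, Jy, Jz for a collective spin J = N/2 (dimension N + 1)."""
    J = N / 2
    m = np.arange(J, -J - 1, -1)                     # m = J, J-1, ..., -J
    cp = np.sqrt(J * (J + 1) - m[1:] * (m[1:] + 1))  # J+ matrix elements
    Jp = np.diag(cp, k=1)
    return (Jp + Jp.T) / 2, (Jp - Jp.T) / 2j, np.diag(m)

N, mu = 20, 0.1
Jx, Jy, Jz = collective_spin_ops(N)
psi = np.zeros(N + 1, dtype=complex)
psi[0] = 1.0                              # |J, J>: spin up along z
psi = expm(-1j * (np.pi / 2) * Jy) @ psi  # rotate to coherent state along +x
psi = expm(-1j * mu * Jz @ Jz) @ psi      # one-axis twisting exp(-i mu Jz^2)

def variance(op, s):
    return np.real(s.conj() @ op @ op @ s - (s.conj() @ op @ s) ** 2)

v = min(variance(np.cos(t) * Jy + np.sin(t) * Jz, psi)
        for t in np.linspace(0, np.pi, 200))
print(f"min quadrature variance: {v:.3f} (coherent level: {N / 4:.3f})")
```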