
    Iterative channel equalization, channel decoding and source decoding

    The performance of soft source decoding is evaluated over dispersive AWGN channels. By employing source codes having error-correcting capabilities, such as Reversible Variable-Length Codes (RVLCs) and Variable-Length Error-Correcting (VLEC) codes, the soft-in/soft-out (SISO) source decoder benefits from exchanging information with the MAP equalizer and effectively eliminates the inter-symbol interference (ISI) after a few iterations. It was also found that the soft source decoder is capable of significantly improving the attainable performance of the turbo receiver, provided that channel equalization, channel decoding and source decoding are carried out jointly and iteratively. At SER = 10⁻⁴, this three-component turbo receiver performs about 2 dB better than the benchmark scheme that carries out channel equalization and channel decoding jointly but source decoding separately. At the same SER, the proposed scheme is about 1 dB away from the performance of a ½-rate convolutionally coded scheme operating over a non-dispersive AWGN channel.
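    The information exchange in such a receiver follows the standard turbo principle: each SISO component subtracts the a-priori information it was fed from its a-posteriori output, so that only extrinsic information is passed on and no evidence is counted twice across iterations. In conventional LLR notation (mine, not the abstract's), the quantity forwarded at each activation is

    ```latex
    L_{e}(u_k) \;=\; L_{\mathrm{post}}(u_k) \;-\; L_{a}(u_k)
    ```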

    Optimal Single-Shot Decoding of Quantum Codes

    We discuss single-shot decoding of quantum Calderbank-Shor-Steane codes with faulty syndrome measurements. We state the problem as a joint source-channel coding problem. By adding redundant rows to the code's parity-check matrix we obtain an additional syndrome error correcting code which addresses faulty syndrome measurements. The redundant rows are chosen to obtain good syndrome error correcting capabilities while keeping the stabilizer weights low. Optimal joint decoding rules are derived which, though too complex for general codes, can be evaluated for short quantum codes.
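    The redundant-row construction can be illustrated classically. The toy sketch below is my own (it uses the [7,4] Hamming check matrix as a stand-in; the paper works with quantum CSS stabilizers): adding a redundant row that is the GF(2) sum of the existing rows induces a "metacheck" on the syndrome, which flags faulty syndrome measurements.

    ```python
    import numpy as np

    # Parity-check matrix of the [7,4] Hamming code (a classical stand-in).
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

    # Add one redundant row: the GF(2) sum of all existing rows. H_aug
    # checks the same code, but its valid syndromes are now constrained.
    redundant = H.sum(axis=0) % 2
    H_aug = np.vstack([H, redundant])

    # Metacheck matrix M: every noiseless syndrome s = H_aug @ e satisfies
    # M s = 0 (mod 2), because row 4 of H_aug is the sum of rows 1-3.
    M = np.array([[1, 1, 1, 1]], dtype=np.uint8)

    e = np.zeros(7, dtype=np.uint8)
    e[2] = 1                            # a single data error
    s = (H_aug @ e) % 2                 # ideal (noiseless) syndrome
    assert (M @ s % 2 == 0).all()       # metacheck passes

    s_faulty = s.copy()
    s_faulty[0] ^= 1                    # one faulty syndrome measurement
    print(M @ s_faulty % 2)             # nonzero: measurement fault detected
    ```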

    Towards practical minimum-entropy universal decoding

    Minimum-entropy decoding is a universal decoding algorithm used in decoding block compression of discrete memoryless sources as well as block transmission of information across discrete memoryless channels. Extensions can also be applied to multiterminal decoding problems, such as the Slepian-Wolf source coding problem. The 'method of types' has been used to show that there exist linear codes for which minimum-entropy decoders achieve the same error exponent as maximum-likelihood decoders. Since minimum-entropy decoding is NP-hard in general, minimum-entropy decoders have existed primarily in the theory literature. We introduce practical approximation algorithms for minimum-entropy decoding. Our approach, which relies on ideas from linear programming, exploits two key observations. First, the 'method of types' shows that the number of distinct types grows polynomially in n. Second, recent results in the optimization literature have exhibited polytope projection algorithms whose complexity is a function of the number of vertices of the projected polytope. Combining these two ideas, we leverage recent results on linear programming relaxations for error correcting codes to construct polynomial-complexity algorithms for this setting. In the binary case, we explicitly demonstrate linear code constructions that admit provably good performance.
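    The intractability the paper addresses is easy to make concrete. The sketch below is a toy of mine (not the paper's LP relaxation): it performs exhaustive minimum-entropy decoding of a Slepian-Wolf-style syndrome, returning the coset member whose type has the lowest empirical entropy, at a cost exponential in the blocklength.

    ```python
    import itertools
    import math
    import numpy as np

    def empirical_entropy(x):
        """Entropy (in bits) of the type, i.e. empirical distribution, of x."""
        p = sum(x) / len(x)
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def min_entropy_decode(H, syndrome):
        """Among all binary x with H x = syndrome (mod 2), return one whose
        type has minimum entropy. Exponential in n -- exactly the cost the
        paper's LP-based approximation is designed to avoid."""
        n = H.shape[1]
        best, best_h = None, float("inf")
        for x in itertools.product((0, 1), repeat=n):
            if (((H @ np.array(x)) % 2) == syndrome).all():
                h = empirical_entropy(x)
                if h < best_h:
                    best, best_h = x, h
        return best

    H = np.array([[1, 0, 0, 1, 1],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1]])
    source = np.array([0, 0, 1, 0, 0])   # a low-entropy source block
    s = (H @ source) % 2                 # its Slepian-Wolf syndrome
    print(min_entropy_decode(H, s))      # -> (0, 0, 1, 0, 0)
    ```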

    Adaptive iterative detection for expediting the convergence of a serially concatenated unary error correction decoder, turbo decoder and an iterative demodulator

    Unary Error Correction (UEC) codes constitute a recently proposed Joint Source and Channel Code (JSCC) family, conceived for alphabets having an infinite cardinality, whilst outperforming previously used Separate Source and Channel Codes (SSCCs). UEC-based schemes rely on an iterative decoding process, which involves three decoding blocks when concatenated with a turbo code. Owing to this, after one of the three blocks has been activated, the next block to activate must be chosen from the other two. Furthermore, the UEC decoder offers a number of decoding options, allowing its complexity and error correction capability to be dynamically adjusted. It has been shown that iterative decoding convergence can be expedited by activating the specific decoding option that offers the highest ratio of Mutual Information (MI) improvement to computational complexity. This paper introduces an iterative demodulator, which is shown to improve the associated error correction performance, while reducing the overall iterative decoding complexity. The challenge is that the iterative demodulator has to forward its soft information to the other two iterative decoding blocks, and hence the corresponding MI improvements cannot be compared on a like-for-like basis. We also propose a method of eliminating the logarithmic calculations from the adaptive iterative decoding algorithm, further reducing its implementation complexity without impacting its error correction performance.
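    The activation rule itself is compact. The following sketch is hypothetical (component names and numbers are illustrative, not from the paper) and shows the greedy selection of the decoding option with the highest predicted MI improvement per unit of complexity:

    ```python
    # Hypothetical sketch of the adaptive activation rule: activate the
    # decoding option promising the highest Mutual Information gain per
    # unit of computational complexity. Names and numbers are illustrative.

    def next_activation(options):
        """options: iterable of (name, predicted_MI_gain, complexity)."""
        return max(options, key=lambda o: o[1] / o[2])

    options = [
        ("UEC decoder (reduced trellis)", 0.08, 1.0),   # ratio 0.080
        ("UEC decoder (full trellis)",    0.15, 2.5),   # ratio 0.060
        ("turbo decoder",                 0.12, 2.0),   # ratio 0.060
        ("iterative demodulator",         0.05, 0.4),   # ratio 0.125
    ]
    print(next_activation(options)[0])   # -> "iterative demodulator"
    ```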

    Comparison of memory thresholds for planar qudit geometries

    We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software package for simulating and visualizing topological quantum error-correcting codes.
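    The quoted gain of up to 112% is consistent with the two thresholds reported: moving from the 5.6% qubit clustering threshold to the large-dimension plateau of about 11.9% gives

    ```latex
    \frac{11.9\% - 5.6\%}{5.6\%} \;\approx\; 1.125 \;\approx\; 112\%.
    ```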

    Design Of Fountain Codes With Error Control

    This thesis is focused on providing Unequal Error Protection (UEP) to two disjoint sources communicating with a common destination via a common relay using distributed LT codes over a Binary Erasure Channel (BEC), and on designing fountain codes with an error control property by integrating LT codes with turbo codes over a Binary-Input Additive White Gaussian Noise (BI-AWGN) channel. A simple yet efficient technique for decomposing the Robust Soliton Distribution (RSD) into two entirely different degree distributions is developed and presented in this thesis. These two distributions are used to encode data symbols at the sources, and the encoded symbols from the sources are selectively XORed at the relay based on a suitable relay operation before the combined codeword is transmitted to the destination. By doing so, it is shown that UEP can be provided to these sources. The performance of LT codes over the AWGN channel is studied and presented in this thesis, indicating that these codes have weak error correction ability over that channel. However, errors introduced into individual symbols during transmission over noisy channels need correction by error correcting codes. Since LT codes alone are weak at correcting those errors, they are integrated with turbo codes, which are good error correcting codes. The source symbols are therefore first turbo encoded, then LT encoded and transmitted over the AWGN channel. When the corrupted encoded symbols are received, LT decoding is conducted followed by turbo decoding. The overall performance of the integrated system is studied and presented in this thesis, suggesting that the errors left after LT decoding can be corrected to some extent by the turbo decoder.
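    For reference, the encoding side of an LT code is short enough to sketch. The code below is a generic illustration of mine (it generates a standard RSD and produces one encoded symbol; the thesis's specific decomposition of the RSD into two distributions and its relay XOR rule are not reproduced here):

    ```python
    import math
    import random

    def robust_soliton(k, c=0.1, delta=0.5):
        """Robust Soliton Distribution (RSD) over degrees 1..k -- the
        distribution the thesis decomposes into two source distributions."""
        S = c * math.log(k / delta) * math.sqrt(k)
        rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
        tau = [0.0] * k
        pivot = min(int(round(k / S)), k)
        for d in range(1, pivot):
            tau[d - 1] = S / (d * k)
        tau[pivot - 1] = S * math.log(S / delta) / k
        weights = [r + t for r, t in zip(rho, tau)]
        beta = sum(weights)                    # normalization constant
        return [w / beta for w in weights]

    def lt_encode_symbol(source, dist):
        """One LT-encoded symbol: draw a degree from dist, then XOR that
        many randomly chosen source symbols (integers, XORed bitwise)."""
        degree = random.choices(range(1, len(source) + 1), weights=dist)[0]
        chosen = random.sample(range(len(source)), degree)
        out = 0
        for i in chosen:
            out ^= source[i]
        return chosen, out

    random.seed(1)
    k = 16
    source = [random.randrange(256) for _ in range(k)]
    print(lt_encode_symbol(source, robust_soliton(k)))
    ```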

    Statistical mechanics of image restoration and error-correcting codes

    We develop a statistical-mechanical formulation for image restoration and error-correcting codes. These problems are shown to be equivalent to the Ising spin glass with ferromagnetic bias under random external fields. We prove that the quality of restoration/decoding is maximized at a specific set of parameter values determined by the source and channel properties. For image restoration in the mean-field system, a line of optimal performance is shown to exist in the parameter space. These results are illustrated by solving exactly the infinite-range model. The solutions enable us to determine how precisely one should estimate unknown parameters. Monte Carlo simulations are carried out to see how far the conclusions from the infinite-range model are applicable to the more realistic two-dimensional case in image restoration.
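    At finite temperature, restoration amounts to sampling an Ising posterior whose couplings encode smoothness and whose fields couple each pixel to its noisy observation. The following minimal Metropolis sketch is my own illustration of that setup (parameter values are arbitrary; the paper's point is precisely that the optimal values are fixed by the source and channel properties):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def restore(noisy, beta_r=0.9, h=0.6, sweeps=50):
        """Metropolis sampling of the restoration posterior: a ferromagnetic
        smoothness term (beta_r) plus a data term (h) tying each spin to its
        noisy observation. Spins live in {-1, +1} on a periodic grid."""
        s = noisy.copy()
        L = s.shape[0]
        for _ in range(sweeps):
            for i in range(L):
                for j in range(L):
                    nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
                          s[i, (j + 1) % L] + s[i, (j - 1) % L])
                    dE = 2 * s[i, j] * (beta_r * nb + h * noisy[i, j])
                    if dE <= 0 or rng.random() < np.exp(-dE):
                        s[i, j] = -s[i, j]
        return s

    L = 32
    original = np.ones((L, L), dtype=int)
    original[:, L // 2:] = -1                 # a two-phase test image
    flips = rng.random((L, L)) < 0.1          # flip 10% of the pixels
    noisy = np.where(flips, -original, original)
    restored = restore(noisy)
    print("noisy errors:   ", int((noisy != original).sum()))
    print("restored errors:", int((restored != original).sum()))
    ```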

    Joint Source-Channel Coding with Real Number BCH and Reed-Solomon Codes: Their Properties and Performance in the Presence of Additive Noise

    This thesis investigates the joint source-channel coding properties of real number BCH and Reed-Solomon codes in the presence of additive noise. From previous results, it was known that additive noise can cause the error correction ability of a real number code to degrade, and that this degradation results in decoding failures. Knowing this, there are two main objectives of this research. The first objective is to determine under what conditions a given real number code is reliable. More specifically, for a given real number BCH or Reed-Solomon code, I sought to determine the highest additive noise level for which the real number code could still be accurately decoded within a specified probability of failure. Using these results, the second objective is to determine whether a real number code can obtain better joint source-channel performance than a comparable finite field code. During the investigation process, I formalized the source coding properties that had been mentioned in previous research. The first objective was met by deriving an upper bound on the probability of a decoding failure as a function of the signal-to-noise ratio, the transmission error magnitudes and the code parameters. These bounds assume that a full search decoding method is implemented. Since the full search method is impractical and the traditional decoding method performed poorly in the presence of additive noise, an alternate decoding algorithm was developed. This algorithm attempts to combine the directness of the traditional BCH decoding algorithm with the robustness of the full search decoder. The second objective was met with mixed success, since deriving an accurate average channel coding performance for multiple-error-correcting codes proved elusive. However, simulated results for a four-error-correcting code are examined.
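    The "full search" decoding idea referenced above can be sketched generically. The code below is my own hedged illustration for a real-number code defined as the null space of a real check matrix (not necessarily the thesis's BCH/Reed-Solomon construction): every candidate error support is least-squares-fitted to the real-valued syndrome, and the support with the smallest residual wins.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    def full_search_decode(H, r, t):
        """Try every error support of size <= t, least-squares-fit the error
        values to the syndrome s = H r, keep the smallest residual.
        Exponential in the blocklength, hence impractical -- which is why
        the thesis develops an alternate algorithm."""
        n = H.shape[1]
        s = H @ r
        best_resid, best_e = np.linalg.norm(s), np.zeros(n)
        for size in range(1, t + 1):
            for supp in itertools.combinations(range(n), size):
                cols = list(supp)
                vals, *_ = np.linalg.lstsq(H[:, cols], s, rcond=None)
                resid = np.linalg.norm(s - H[:, cols] @ vals)
                if resid < best_resid:
                    e = np.zeros(n)
                    e[cols] = vals
                    best_resid, best_e = resid, e
        return r - best_e

    # Toy real-number code: the null space of a random 4x8 check matrix.
    H = rng.standard_normal((4, 8))
    basis = np.linalg.svd(H)[2][4:].T     # null-space basis, so H @ c ~ 0
    c = basis @ rng.standard_normal(4)    # a random codeword
    e = np.zeros(8)
    e[3] = 5.0                            # one large transmission error
    print(np.allclose(full_search_decode(H, c + e, t=2), c, atol=1e-6))
    ```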

    Message passing algorithms for non-linear nodes and data compression

    The use of parity-check gates in information theory has proved to be very efficient. In particular, error correcting codes based on parity checks over low-density graphs show excellent performance. Another basic issue of information theory, namely data compression, can be addressed in a similar way by a kind of dual approach. The theoretical performance of such a Parity Source Coder can attain the optimal limit predicted by the general rate-distortion theory. However, in order to turn this approach into an efficient compression code (with fast encoding/decoding algorithms) one must depart from parity checks and use more general random gates. By taking advantage of analytical approaches from the statistical physics of disordered systems and SP-like message passing algorithms, we construct a compressor based on low-density non-linear gates with very good theoretical and practical performance.
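    The "optimal limit predicted by the general rate-distortion theory" is, for an unbiased binary source under Hamming distortion, the standard expression

    ```latex
    R(D) \;=\; 1 - H_2(D), \qquad
    H_2(D) = -D \log_2 D - (1 - D) \log_2 (1 - D), \quad 0 \le D \le \tfrac{1}{2},
    ```

    and this is the benchmark against which the Parity Source Coder and its non-linear generalization are measured.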