
    A Study on Burst Erasure Correction Methods Using Low-Density Parity-Check Codes

    Degree system: new; report number: Otsu No. 2707; degree type: Doctor of Engineering; date conferred: 2008/12/11; Waseda University degree number: Shin 489

    Error-correction on non-standard communication channels

    Many communication systems are poorly modelled by the standard channels assumed in the information theory literature, such as the binary symmetric channel or the additive white Gaussian noise channel. Real systems suffer from additional problems including time-varying noise, crosstalk, synchronization errors and latency constraints. In this thesis, low-density parity-check codes and codes related to them are applied to non-standard channels. First, we look at time-varying noise modelled by a Markov channel. A low-density parity-check code decoder is modified to give an improvement of over 1 dB. Secondly, novel codes based on low-density parity-check codes are introduced which produce transmissions with Pr(bit = 1) ≠ Pr(bit = 0). These non-linear codes are shown to be good candidates for multi-user channels with crosstalk, such as optical channels. Thirdly, a channel with synchronization errors is modelled by random uncorrelated insertion or deletion events at unknown positions. Marker codes formed from low-density parity-check codewords with regular markers inserted within them are studied. It is shown that a marker code with iterative decoding has performance close to the bounds on the channel capacity, significantly outperforming other known codes. Finally, coding for a system with latency constraints is studied. For example, if a telemetry system involves a slow channel, some error correction is often needed quickly whilst the code should be able to correct remaining errors later. A new code is formed from the intersection of a convolutional code with a high-rate low-density parity-check code. The convolutional code has good early decoding performance and the high-rate low-density parity-check code efficiently cleans up remaining errors after receiving the entire block. Simulations of the block code show a gain of 1.5 dB over a standard NASA code.
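    As a concrete illustration of the first topic, the sketch below simulates a two-state Markov (Gilbert-Elliott style) noise channel of the kind used to model time-varying noise. This is a minimal sketch for intuition only; the transition and error probabilities are assumed placeholder values, not parameters from the thesis.

```python
# Toy simulation of a two-state Markov (Gilbert-Elliott) noise channel.
# All parameter values here are illustrative assumptions, not from the thesis.
import random

def gilbert_elliott(bits, p_g2b=0.01, p_b2g=0.1, err_good=0.001, err_bad=0.1, seed=0):
    """Flip each bit with an error rate that depends on a hidden good/bad state."""
    rng = random.Random(seed)
    state_bad = False
    out = []
    for b in bits:
        err = err_bad if state_bad else err_good
        out.append(b ^ (rng.random() < err))
        # Markov state transition for the next bit
        if state_bad:
            state_bad = rng.random() >= p_b2g
        else:
            state_bad = rng.random() < p_g2b
    return out

tx = [0] * 10000
rx = gilbert_elliott(tx)
print("observed bit error rate:", sum(rx) / len(rx))
```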

    Dynamic information and constraints in source and channel coding

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 237-251). By Emin Martinian.

    This thesis explores dynamics in source coding and channel coding. We begin by introducing the idea of distortion side information, which does not directly depend on the source but instead affects the distortion measure. Such distortion side information is not only useful at the encoder: under certain conditions, knowing it at the encoder is optimal and knowing it at the decoder is useless. Thus distortion side information is a natural complement to Wyner-Ziv side information and may be useful in exploiting properties of the human perceptual system as well as in sensor or control applications. In addition to developing the theoretical limits of source coding with distortion side information, we also construct practical quantizers based on lattices and codes on graphs. Our use of codes on graphs is also of independent interest, since it highlights some issues in translating the success of turbo and LDPC codes into the realm of source coding. Finally, to explore the dynamics of side information correlated with the source, we consider fixed-lag side information at the decoder. We focus on the special case of perfect side information with unit lag, corresponding to source coding with feedforward (the dual of channel coding with feedback). Using duality, we develop a linear-complexity algorithm which exploits the feedforward information to achieve the rate-distortion bound.

    The second part of the thesis focuses on channel dynamics in communication by introducing a new system model to study delay in streaming applications. We first consider an adversarial channel model where at any time the channel may suffer a burst of degraded performance (e.g., due to signal fading, interference, or congestion) and prove a coding theorem for the minimum decoding delay required to recover from such a burst. Our coding theorem illustrates the relationship between the structure of a code, the dynamics of the channel, and the resulting decoding delay. We also consider more general channel dynamics. Specifically, we prove a coding theorem establishing that, for certain collections of channel ensembles, delay-universal codes exist that simultaneously achieve the best delay for any channel in the collection. Practical constructions with low encoding and decoding complexity are described for both cases.

    Finally, we also consider architectures consisting of both source and channel coding which deal with channel dynamics by spreading information over space, frequency, multiple antennas, or alternate transmission paths in a network to avoid coding delays. Specifically, we explore whether the inherent diversity in such parallel channels should be exploited at the application layer via multiple description source coding, at the physical layer via parallel channel coding, or through some combination of joint source-channel coding. For on-off channel models, application-layer diversity architectures achieve better performance, while for channels with a continuous range of reception quality (e.g., additive Gaussian noise channels with Rayleigh fading), the reverse is true. Joint source-channel coding achieves the best of both by performing as well as application-layer diversity for on-off channels and as well as physical-layer diversity for continuous channels.
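    The trade-off described in the final paragraph can be made concrete with a toy expected-distortion comparison for two independent on-off paths. This is only a rough numerical illustration; every probability and distortion value below is a made-up placeholder, and the model is a crude stand-in for the architectures the thesis actually analyzes.

```python
# Toy expected-distortion comparison for two independent on-off paths,
# illustrating (not reproducing) the claim that application-layer diversity
# can beat physical-layer diversity on on-off channels.
# All numbers below are assumed placeholders.

p = 0.1            # probability a path is "off" (assumed)
d_full = 0.05      # distortion when a full-rate single description arrives (assumed)
d_both = 0.08      # MDC distortion when both descriptions arrive (assumed, > d_full)
d_one  = 0.30      # MDC distortion when exactly one description arrives (assumed)
d_none = 1.00      # distortion when nothing arrives

# Application-layer diversity: one description per path, graceful degradation.
mdc = (1 - p)**2 * d_both + 2 * p * (1 - p) * d_one + p**2 * d_none

# Physical-layer diversity: one code striped across both paths,
# decodable (in this crude model) only when both paths are on.
plc = (1 - p)**2 * d_full + (1 - (1 - p)**2) * d_none

print(f"application-layer diversity: {mdc:.3f}")   # ~0.129
print(f"physical-layer diversity:   {plc:.3f}")    # ~0.231
```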

    On Cyclic Polar Codes and the Burst Erasure Performance of Spatially-Coupled LDPC Codes

    In this thesis, we present our work on two of the state-of-the-art techniques in modern coding theory: polar codes and spatially-coupled LDPC codes. Polar codes were introduced in 2009 and proven to achieve the symmetric capacity of any binary-input discrete memoryless channel under low-complexity successive cancellation decoding. Since then, finite-length (non-asymptotic) performance has been the primary concern with respect to polar codes. In this work, we construct cyclic polar codes based on a mixed-radix Cooley-Tukey decomposition of the Galois field Fourier transform. The main results are: we can, for the first time, construct, encode and decode polar codes that are cyclic, with their blocklength being arbitrary; for a given target block erasure rate, we can achieve significantly higher code rates on the erasure channel than the original polar codes, at comparable blocklengths; on the symmetric channel with only errors, we can perform much better than equivalent-rate Reed-Solomon codes with the same blocklength, by using soft-decision decoding; and, since the codes are subcodes of higher-rate RS codes, an RS decoder can be used if suboptimal performance suffices for the application, as a trade-off for higher decoding speed. The programs developed for this work can be accessed at https://github.com/nrenga/cyclic_polar. In 2010, it was shown that spatially-coupled low-density parity-check (LDPC) codes approach the capacity of binary memoryless channels, asymptotically, with belief-propagation (BP) decoding. In our work, we are interested in the finite-length average performance of randomly coupled LDPC ensembles on binary erasure channels with memory. The significant contributions of this work are: tight lower bounds for the block erasure probability (PB) under various scenarios for the burst pattern; bounds focused on practical scenarios where a burst affects exactly one of the coupled codes; expected error floor for the bit erasure probability (Pb) on the binary erasure channel; and, characterization of the performance of random regular ensembles, on erasure channels, with a single vector describing distinct types of size-2 stopping sets. All these results are verified using Monte-Carlo simulations. Further, we show that increasing variable node degree combined with expurgation can improve PB by several orders of magnitude in the number of bits per coupled code.
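    For readers unfamiliar with polar codes, the sketch below implements the standard butterfly recursion for Arikan's polar generator G_N = B_N F^(kron n) (bit-reversed Kronecker powers of [[1,0],[1,1]]) over GF(2). Note that this is the classical power-of-two transform, not the cyclic, mixed-radix construction of the thesis; for that, see the linked repository.

```python
# Minimal sketch of the classical (power-of-two) polar transform over GF(2).
# The cyclic, arbitrary-blocklength construction in the thesis is different;
# see https://github.com/nrenga/cyclic_polar for the actual codes.

def polar_transform(u):
    """Apply the bit-reversed Kronecker-power transform of [[1,0],[1,1]] to u."""
    n = len(u)
    if n == 1:
        return u[:]
    # Combine adjacent pairs as (u1 XOR u2, u2), then recurse on each half;
    # this realizes the bit-reversed Kronecker recursion.
    evens = [u[i] ^ u[i + 1] for i in range(0, n, 2)]
    odds = [u[i + 1] for i in range(0, n, 2)]
    return polar_transform(evens) + polar_transform(odds)

# Example: 8 input bits; the frozen-bit pattern is chosen for illustration only.
u = [0, 0, 0, 1, 0, 1, 1, 1]
print(polar_transform(u))
```

    After encoding, frozen positions carry known zeros and the remaining positions carry data; successive cancellation decoding inverts this transform one bit at a time.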

    LDPC Codes for 2D Arrays

    Binary codes over 2D arrays are very useful in data storage, where each array column represents a storage device or unit that may suffer failure. In this paper, we propose a new framework for probabilistic construction of codes on 2D arrays. Instead of a pure combinatorial erasure model used in traditional array codes, we propose a mixed combinatorial-probabilistic model of limiting the number of column failures, and assuming a binary erasure channel in each failing column. For this model, we give code constructions and detailed analysis that allow sustaining a large number of column failures with graceful degradation in the fraction of erasures correctable in failing columns. Another advantage of the new framework is that it uses low-complexity iterative decoding. The key component in the analysis of the new codes is to analyze the decoding graphs induced by the failed columns, and infer the decoding performance as a function of the code design parameters, as well as the array size and failure parameters. A particularly interesting class of codes, called probabilistically maximum distance separable (MDS) array codes, gives fault-tolerance that is equivalent to traditional MDS array codes. The results also include a proof that the 2D codes outperform standard 1D low-density parity-check codes.
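    On erasure channels, the low-complexity iterative decoding mentioned above is the classical peeling decoder: repeatedly find a parity check with exactly one erased bit and solve for it. The sketch below shows the idea on a toy parity-check matrix; the checks and received word are assumed examples, not one of the paper's constructions.

```python
# Minimal peeling (iterative) erasure decoder over GF(2).
# The toy parity checks below are illustrative assumptions.

def peel(H, y):
    """H: list of checks (lists of bit indices); y: list of 0/1/None (None = erased)."""
    y = y[:]
    progress = True
    while progress:
        progress = False
        for check in H:
            erased = [i for i in check if y[i] is None]
            if len(erased) == 1:
                # The erased bit must make the check sum to 0 mod 2.
                i = erased[0]
                y[i] = sum(y[j] for j in check if j != i) % 2
                progress = True
    return y

H = [[0, 1, 2], [2, 3, 4], [0, 3, 5]]      # toy parity checks
y = [1, None, 0, None, 1, 0]               # received word with two erasures
print(peel(H, y))                          # [1, 1, 0, 1, 1, 0]
```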

    Introduction to Forward-Error-Correcting Coding

    This reference publication introduces forward-error-correcting (FEC) coding and stresses definitions and basic calculations for use by engineers. The seven chapters include 41 example problems, worked in detail to illustrate points. A glossary of terms is included, as well as an appendix on the Q function. Block and convolutional codes are covered.
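    The Q function covered in the appendix is the tail probability of a standard normal random variable, conveniently computed from the complementary error function as Q(x) = 0.5 * erfc(x / sqrt(2)):

```python
# The Gaussian Q function via the complementary error function.
import math

def Q(x):
    """Tail probability of the standard normal distribution, P(X > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

print(Q(0.0))   # 0.5
print(Q(3.0))   # ~1.35e-3
```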

    Efficient soft decoding techniques for Reed-Solomon codes

    The main focus of this thesis is on finding efficient decoding methods for Reed-Solomon (RS) codes, i.e., algorithms with acceptable performance and affordable complexity. Three classes of decoders are considered, including sphere decoding, belief propagation decoding and interpolation-based decoding. Originally proposed for finding the exact solution of least-squares problems, sphere decoding (SD) is used along with the most reliable basis (MRB) to design an efficient soft decoding algorithm for RS codes. For an (N, K) RS code, given the received vector and the lattice of all possible transmitted vectors, we propose to look for only those lattice points that fall within a sphere centered at the received vector and also are valid codewords. To achieve this goal, we use the fact that RS codes are maximum distance separable (MDS). Therefore, we use sphere decoding to find tentative solutions consisting of the K most reliable code symbols that fall inside the sphere. The acceptable values for each of these symbols are selected from an ordered set of most probable transmitted symbols. Based on the MDS property, the K code symbols of each tentative solution can be used to find the rest of the codeword symbols. If the resulting codeword is within the search radius, it is saved as a candidate transmitted codeword. Since we first find the most reliable code symbols and for each of them we use an ordered set of most probable transmitted symbols, candidate codewords are found quickly, resulting in reduced complexity. Considerable coding gains are achieved over the traditional hard decision decoders with a moderate increase in complexity.

    Due to their simplicity and good performance when used for decoding low density parity check (LDPC) codes, iterative decoders based on belief propagation (BP) have also been considered for RS codes. However, the parity check matrix of RS codes is very dense, resulting in many short cycles in the factor graph and consequently preventing the reliability updates (using BP) from converging to a codeword. In this thesis, we propose two BP based decoding methods. In both of them, a low density extended parity check matrix is used because of its lower number of short cycles. In the first method, the cyclic structure of RS codes is taken into account and the BP algorithm is applied on different cyclically shifted versions of the received reliabilities, each capable of detecting different error patterns. This way, some deterministic errors can be avoided. The second method is based on information correction in BP decoding, where all possible values are tested for selected bits with low reliabilities. This way, the chance that the BP iterations converge to a codeword is improved significantly. Compared to the existing iterative methods for RS codes, our proposed methods provide a very good trade-off between performance and complexity.

    We also consider interpolation-based decoding of RS codes. We specifically focus on the Guruswami-Sudan (GS) interpolation decoding algorithm. Using the algebraic structure of RS codes and bivariate interpolation, the GS method has shown improved error correction capability compared to the traditional hard decision decoders. Based on the GS method, a multivariate interpolation decoding method is proposed for decoding interleaved RS (IRS) codes. Using this method, all the RS codewords of the interleaved scheme are decoded simultaneously. In the presence of burst errors, the proposed method has improved correction capability compared to the GS method. This method is applied to decoding IRS codes when they are used as outer codes in concatenated codes.
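    The MDS property at the heart of the sphere decoder, namely that any K symbols of an (N, K) Reed-Solomon codeword determine the remaining N - K, can be demonstrated with Lagrange interpolation over a toy prime field. The field GF(7), the code parameters and the chosen symbol positions below are illustrative assumptions, not values from the thesis.

```python
# Minimal illustration of the MDS property of an (N, K) RS code: any K
# received symbols determine the whole codeword. Toy field GF(7) assumed.
P = 7                      # prime field size (assumed toy value)
N, K = 6, 3                # (N, K) RS code over GF(7)

def rs_encode(msg, xs):
    """Evaluate the degree-(K-1) message polynomial at the code locators xs."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P for x in xs]

def recover(points):
    """Lagrange-interpolate K (x, y) pairs and re-extend to all N positions."""
    def poly_at(x):
        total = 0
        for xi, yi in points:
            num, den = 1, 1
            for xj, _ in points:
                if xj != xi:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            # pow(den, P-2, P) is the modular inverse of den (Fermat).
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total
    return [poly_at(x) for x in range(1, N + 1)]

xs = list(range(1, N + 1))
code = rs_encode([2, 5, 1], xs)                 # toy message
known = [(xs[i], code[i]) for i in (0, 2, 5)]   # any K=3 "most reliable" symbols
print(code, recover(known), sep="\n")           # the two lines match
```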

    An erasure-resilient and compute-efficient coding scheme for storage applications

    Driven by rapid technological advancements, the amount of data that is created, captured, communicated, and stored worldwide has grown exponentially over the past decades. Along with this development, it has become critical for many disciplines of science and business to be able to gather and analyze large amounts of data. The sheer volume of the data often exceeds the capabilities of classical storage systems, with the result that current large-scale storage systems are highly distributed and are composed of a large number of individual storage components. As with any other electronic device, the reliability of storage hardware is governed by certain probability distributions, which in turn are influenced by the physical processes utilized to store the information. The traditional way to deal with the inherent unreliability of combined storage systems is to replicate the data several times. Another popular approach to achieve failure tolerance is to calculate the block-wise parity in one or more dimensions. With better understanding of the different failure modes of storage components, it has become evident that sophisticated high-level error detection and correction techniques are indispensable for the ever-growing distributed systems. The utilization of powerful cyclic error-correcting codes, however, comes with a high computational penalty, since the required operations over finite fields do not map very well onto current commodity processors. This thesis introduces a versatile coding scheme with fully adjustable fault-tolerance that is tailored specifically to modern processor architectures. To reduce stress on the memory subsystem, the conventional table-based algorithm for multiplication over finite fields has been replaced with a polynomial version. This arithmetically intense algorithm is better suited to the wide SIMD units of the currently available general purpose processors, but also displays significant benefits when used with modern many-core accelerator devices (for instance the popular general purpose graphics processing units). A CPU implementation using SSE and a GPU version using CUDA are presented. The performance of the multiplication depends on the distribution of the polynomial coefficients in the finite field elements. This property has been used to create suitable matrices that generate a linear systematic erasure-correcting code which shows a significantly increased multiplication performance for the relevant matrix elements. Several approaches to obtain the optimized generator matrices are elaborated and their implications are discussed. A Monte-Carlo-based construction method makes it possible to influence the specific shape of the generator matrices and thus to adapt them to special storage and archiving workloads. Extensive benchmarks on CPU and GPU demonstrate the superior performance and the future application scenarios of this novel erasure-resilient coding scheme.
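    The core arithmetic idea, replacing table lookups with polynomial (carry-less) multiplication over GF(2^m), can be sketched in a few lines. The sketch below uses GF(2^8) with the AES reduction polynomial 0x11B purely for illustration; the thesis's field choice, generator matrices, and SIMD/CUDA kernels are not reproduced here.

```python
# Table-free multiplication in GF(2^8): carry-less multiply with interleaved
# reduction modulo an irreducible polynomial. The AES polynomial
# x^8 + x^4 + x^3 + x + 1 (0x11B) is an assumed choice for illustration.

def gf256_mul(a, b, mod=0x11B):
    """Multiply two GF(2^8) elements without lookup tables."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # carry-less add of the current shift of a
        a <<= 1
        if a & 0x100:
            a ^= mod        # reduce as soon as the degree reaches 8
        b >>= 1
    return r

assert gf256_mul(0x53, 0xCA) == 0x01   # known inverse pair in the AES field
print(hex(gf256_mul(0x02, 0x87)))      # 0x15 under the AES polynomial
```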