95 research outputs found

    TTCM-aided rate-adaptive distributed source coding for Rayleigh fading channels

    Adaptive turbo-trellis-coded modulation (TTCM)-aided asymmetric distributed source coding (DSC) is proposed, where two correlated sources are transmitted to a destination node. The first source sequence is TTCM encoded and further compressed before transmission over a Rayleigh fading channel, whereas the second source signal is assumed to be perfectly decoded and hence flawlessly available at the destination, where it is exploited as side information for improving the decoding performance of the first source. The proposed scheme is capable of reliable communications within 0.80 dB of the Slepian-Wolf/Shannon (SW/S) theoretical limit at a bit error rate (BER) of 10⁻⁵. Furthermore, its encoder is capable of accommodating time-variant short-term correlation between the two sources.
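    The asymmetric DSC principle behind this scheme — compressing one source down to parity information and letting the decoder combine it with correlated side information — can be illustrated with a toy syndrome-based binning sketch. The (7,4) Hamming code and the one-bit-flip correlation model below are illustrative assumptions, not the TTCM construction of the paper:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (columns are 1..7 in binary).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(v):
    return tuple(H.dot(v) % 2)

# Coset-leader table: syndrome -> position of a single-bit error.
leaders = {syndrome(np.eye(7, dtype=int)[i]): i for i in range(7)}

def encode(x):
    """Compress a 7-bit block to its 3-bit syndrome (binning)."""
    return syndrome(x)

def decode(s, y):
    """Recover x from its syndrome s and side information y, assuming
    the correlation model 'x and y differ in at most one position'."""
    d = tuple(a ^ b for a, b in zip(s, syndrome(y)))   # = H(x XOR y)
    x_hat = y.copy()
    if any(d):
        x_hat[leaders[d]] ^= 1                          # flip the one bad bit
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1            # correlated side information: one flip
```

Only 3 bits are transmitted per 7-bit block, yet the decoder recovers x exactly because the side information pins down the coset member — the same division of labor the TTCM-aided scheme exploits.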

    The Road From Classical to Quantum Codes: A Hashing Bound Approaching Design Procedure

    Powerful Quantum Error Correction Codes (QECCs) are required for stabilizing and protecting fragile qubits against the undesirable effects of quantum decoherence. Similar to classical codes, hashing-bound-approaching QECCs may be designed by exploiting a concatenated code structure, which invokes iterative decoding. Therefore, in this paper we provide an extensive step-by-step tutorial for designing EXtrinsic Information Transfer (EXIT) chart aided concatenated quantum codes based on the underlying quantum-to-classical isomorphism. These design lessons are then exemplified in the context of our proposed Quantum Irregular Convolutional Code (QIRCC), which constitutes the outer component of a concatenated quantum code. The proposed QIRCC can be dynamically adapted to match any given inner code using EXIT charts, thereby achieving a performance close to the hashing bound. It is demonstrated that our QIRCC-based optimized design is capable of operating within 0.4 dB of the noise limit.
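    EXIT-chart-aided design, classical or quantum, rests on tracking mutual information through the iterative decoder. A standard classical ingredient is the so-called J-function: the mutual information between a bit and its log-likelihood ratio under the Gaussian LLR model. A Monte Carlo sketch (the Gaussian model and sample sizes are generic EXIT-analysis assumptions, not specific to this paper):

```python
import numpy as np

def J(sigma, n=200_000, seed=0):
    """Mutual information between a uniform bit and its LLR under the
    Gaussian LLR model L ~ N(sigma^2/2, sigma^2) used in EXIT analysis,
    estimated by Monte Carlo as I = 1 - E[log2(1 + e^(-L))]."""
    rng = np.random.default_rng(seed)
    L = rng.normal(sigma**2 / 2, sigma, n)   # LLRs conditioned on bit = 0
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-L)))

# J is monotone: no information at sigma = 0, perfect at large sigma.
lo, hi = J(0.5), J(3.0)
```

Plotting the J-function of the inner and outer component against each other yields the EXIT chart; an open tunnel between the two curves predicts decoding convergence.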

    Reliability information in channel decoding: practical aspects and information theoretical bounds

    This thesis addresses the use of reliability information in channel decoding. The considered transmission systems comprise linear binary channel encoders, symmetric memoryless communication channels, and non-iterative or iterative symbol-by-symbol soft-output channel decoders. The notions of accurate and mismatched reliability values are introduced, and the measurement and improvement of the quality of reliability values are discussed. A criterion based on the Kullback-Leibler distance is proposed to assess the difference between accurate and mismatched reliability values. Accurate reliability values may be exploited to estimate transmission quality parameters, such as the bit-error probability or the symbol-wise mutual information between encoder input and decoder output. The proposed estimator is unbiased, requires no knowledge of the transmitted data, and has a smaller estimation variance than the conventional method. Symbol-by-symbol soft-output decoding may be interpreted as processing of mutual information. The behavior of a decoder may be characterized by information transfer functions, such as information processing characteristics (IPCs) or extrinsic information transfer (EXIT) functions. Bounds on information transfer functions are derived using the concept of bounding combined information. The resulting bounds are valid for all binary-input symmetric memoryless channels. Single parity-check codes, repetition codes, and the accumulator are addressed. Based on such bounds, decoding thresholds for low-density parity-check codes are analytically determined.
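    The data-free bit-error-probability estimate from accurate reliability values can be sketched for the simplest case, BPSK over an AWGN channel (channel model and SNR here are illustrative assumptions): given accurate LLRs L, the estimator mean(1/(1+e^|L|)) is unbiased and never looks at the transmitted bits.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
snr = 2.0                                  # Es/N0, linear scale (illustrative)
bits = rng.integers(0, 2, N)
x = 1 - 2 * bits                           # BPSK mapping: 0 -> +1, 1 -> -1
y = x + rng.normal(0.0, np.sqrt(1 / (2 * snr)), N)
L = 4 * snr * y                            # accurate channel LLRs for this model

# Data-free estimate from the reliability values alone:
ber_est = np.mean(1.0 / (1.0 + np.exp(np.abs(L))))
# Conventional estimate, which needs the transmitted bits:
ber_ref = np.mean((L < 0) != (bits == 1))
```

Both estimates agree with the true error probability, but the reliability-based one is usable at the receiver without a reference data sequence and exhibits a smaller variance.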

    Importance Sampling Simulation of the Stack Algorithm with Application to Sequential Decoding

    Importance sampling is a Monte Carlo variance reduction technique which in many applications has significantly reduced the computational cost required to obtain accurate Monte Carlo estimates. The basic idea is to generate the random inputs using a biased simulation distribution, that is, one that differs from the true underlying probability model. Simulation data are then weighted by an appropriate likelihood ratio in order to obtain an unbiased estimate of the desired parameter. This thesis presents new importance sampling techniques for the simulation of systems that employ the stack algorithm. The stack algorithm is primarily used in digital communications to decode convolutional codes, but it has other applications as well; for example, sequential edge linking is a method of finding edges in images that employs the stack algorithm. In brief, the stack algorithm attempts to find the maximum-metric path through a large decision tree. Two quantities characterize its performance. The first is the probability of a branching error. The second is the distribution of computation: the number of tree nodes examined in order to make a specific branching decision is a random variable, and the distribution of computation is the distribution of this random variable. The estimation of this distribution, and of parameters derived from it, is the main goal of this work. We present two new importance sampling schemes (including some variations) for estimating the distribution of computation of the stack algorithm. The first general method, called the reference path method, biases noise inputs using the weight distribution of the associated convolutional code. The second, the partitioning method, uses a stationary biasing of noise inputs that alters the drift of the node metric process in an ensemble-average sense. The biasing is applied only up to a certain point in time: the point where the correct-path node metric minimum occurs. This method is inspired by both information theory and large deviations theory. The thesis also presents two further importance sampling techniques. The first, the error events simulation method, is used to estimate the error probabilities of stack algorithm decoders. The second is a new importance sampling technique for simulating the sequential edge linking algorithm. The main goals here are to develop the basic theory relevant to this simulation problem and to discuss some of the key issues related to the sequential edge linking simulation.
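    The bias-then-reweight step described above can be sketched on the simplest possible example: estimating a Gaussian tail probability with a mean-shifted biasing distribution (the target and bias are illustrative, not the stack-algorithm setting of the thesis).

```python
import numpy as np

rng = np.random.default_rng(0)
t, n = 4.0, 10_000                  # estimate P(X > t) for X ~ N(0, 1)

# Naive Monte Carlo: hardly any samples ever exceed t.
x = rng.normal(0.0, 1.0, n)
naive = np.mean(x > t)

# Importance sampling: simulate from the biased distribution N(t, 1),
# then weight each sample by the likelihood ratio f(x)/g(x).
xb = rng.normal(t, 1.0, n)
w = np.exp(-xb**2 / 2) / np.exp(-(xb - t)**2 / 2)   # normalizers cancel
is_est = np.mean((xb > t) * w)
```

The naive estimate is essentially useless at this sample size (P(X > 4) is about 3×10⁻⁵), while the weighted estimate is accurate to a few percent — the same leverage the thesis seeks for rare stack-algorithm events.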

    Codes on Graphs and More

    Modern communication systems strive to achieve reliable and efficient information transmission and storage with affordable complexity. Hence, efficient low-complexity channel codes with low probabilities of erroneous reception are needed. Interpreting codes as graphs and graphs as codes opens new perspectives for constructing such channel codes. Low-density parity-check (LDPC) codes are one of the most recent examples of codes defined on graphs, providing a better bit error probability than other block codes of the same decoding complexity. After an introduction to coding theory, different graphical representations for channel codes are reviewed. Based on ideas from graph theory, new algorithms are introduced to iteratively search for LDPC block codes with large girth and to determine their minimum distance. In particular, new LDPC block codes of different rates and with girth up to 24 are presented. Woven convolutional codes are introduced as a generalization of graph-based codes, and an asymptotic bound on their free distance, namely the Costello lower bound, is proven. Moreover, promising examples of woven convolutional codes are given, including a rate 5/20 code with overall constraint length 67 and free distance 120. The remaining part of this dissertation focuses on basic properties of convolutional codes. First, a recurrent equation is presented that yields a closed-form expression for the exact decoding bit error probability of convolutional codes. The obtained closed-form expression is evaluated for various encoder realizations, including rate 1/2 and 2/3 encoders with as many as 16 states. Moreover, MacWilliams-type identities are revisited, and a recursion is derived for sequences of spectra of truncated as well as tailbitten convolutional codes and their duals. Finally, the dissertation concludes with exhaustive searches for convolutional codes of various rates with either optimum free distance or optimum distance profile, extending previously published results.
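    The girth search mentioned above amounts to finding shortest cycles in a code's Tanner graph. A minimal BFS-based girth computation on a toy parity-check matrix (the matrix is illustrative; the dissertation's search algorithms are considerably more elaborate):

```python
from collections import deque

def girth(adj):
    """Shortest cycle length of an undirected graph given as adjacency
    lists; None for a forest. BFS from every vertex, O(V*E) overall."""
    best = None
    for root in adj:
        dist, parent = {root: 0}, {root: None}
        q = deque([root])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif parent[u] != v:
                    # A non-tree edge closes a cycle through the BFS tree.
                    cyc = dist[u] + dist[v] + 1
                    best = cyc if best is None else min(best, cyc)
    return best

# Tanner graph of a toy parity-check matrix (rows = checks, cols = variables).
H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [1, 0, 1, 1]]
adj = {f"c{i}": [] for i in range(len(H))}
adj.update({f"v{j}": [] for j in range(len(H[0]))})
for i, row in enumerate(H):
    for j, bit in enumerate(row):
        if bit:
            adj[f"c{i}"].append(f"v{j}")
            adj[f"v{j}"].append(f"c{i}")

g = girth(adj)   # cycles in a bipartite Tanner graph have even length
```

No two rows of this H share two columns, so there is no 4-cycle and the girth is 6; large-girth LDPC design searches for much larger matrices with this property at girth 8, 10, and beyond.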

    Optimization of bit interleaved coded modulation using genetic algorithms

    Modern wireless communication systems must be optimized with respect to both bandwidth efficiency and energy efficiency. A common approach to achieving these goals is to use multi-level modulation, such as quadrature amplitude modulation (QAM), for bandwidth efficiency and an error-control code for energy efficiency. For benign additive white Gaussian noise (AWGN) channels, Ungerboeck proposed trellis-coded modulation (TCM), which combines modulation and coding into a joint operation. In fading channels, however, it is important to maximize diversity. As shown by Zehavi, diversity is maximized by performing coding and modulation separately and interleaving the bits passed from the encoder to the modulator. Such systems are termed BICM, for bit-interleaved coded modulation. Later, Li and Ritcey proposed a method for improving the performance of BICM systems by iteratively passing information between the demodulator and decoder. Such systems are termed BICM-ID, for BICM with iterative decoding. The bit error rate (BER) curve of a typical BICM-ID system is characterized by a steeply sloping waterfall region followed by an error floor with a gradual slope. This thesis focuses on optimizing BICM-ID systems in the error floor region. The problem of minimizing the error bound is formulated as an instance of the quadratic assignment problem (QAP) and solved using a genetic algorithm. First, an optimization is performed by fixing the modulation and varying the bit-to-symbol mapping. This approach provides the lowest possible error floor for a BICM-ID system using standard QAM and phase-shift keying (PSK) modulations. Next, the optimization is performed by varying not only the bit-to-symbol mapping but also the location of the signal points within the two-dimensional constellation. This provides an error floor that is lower than that achieved with the best QAM and PSK systems, although at the cost of a delayed waterfall region.
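    The QAP view can be made concrete with a much simpler stand-in for the genetic algorithm: a cost function approximating the ideal-feedback error floor (a sum of inverse squared Euclidean distances between symbols whose labels differ in one bit) and a pairwise-swap hill climb over 8-PSK bit-to-symbol mappings. Constellation, cost proxy, and search method are all simplified assumptions, not the thesis's actual bound or GA:

```python
import cmath, random

M, m = 8, 3                                  # 8-PSK, 3 bits per symbol
pts = [cmath.exp(2j * cmath.pi * k / M) for k in range(M)]

def cost(mapping):
    """Ideal-feedback error-floor proxy: sum of inverse squared Euclidean
    distances between symbols whose labels differ in exactly one bit.
    mapping[label] = constellation point index; smaller cost = lower floor."""
    c = 0.0
    for lab in range(M):
        for b in range(m):
            d = abs(pts[mapping[lab]] - pts[mapping[lab ^ (1 << b)]])
            c += 1.0 / d**2
    return c

def swap_search(mapping, iters=2000, seed=0):
    """Pairwise-swap hill climbing -- a lightweight stand-in for the
    genetic-algorithm search over the mapping QAP."""
    rng = random.Random(seed)
    best, bc = list(mapping), cost(mapping)
    for _ in range(iters):
        i, j = rng.sample(range(M), 2)
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        cc = cost(cand)
        if cc < bc:
            best, bc = cand, cc
    return best, bc

natural = list(range(M))                     # natural binary labeling
opt, oc = swap_search(natural)
```

Swapping two symbol assignments and re-evaluating the cost is exactly a QAP move; the genetic algorithm of the thesis explores the same permutation space with crossover and mutation instead of greedy swaps.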

    Reduced Receivers for Faster-than-Nyquist Signaling and General Linear Channels

    Fast and reliable data transmission together with high bandwidth efficiency are important design aspects of a modern digital communication system. Many different approaches exist, but in this thesis bandwidth efficiency is obtained by increasing the data transmission rate within the faster-than-Nyquist (FTN) framework while keeping a fixed power spectral density (PSD). In FTN, consecutive information-carrying symbols can overlap in time, thereby introducing a controlled amount of intentional intersymbol interference (ISI). This technique was introduced by Mazo as early as 1975 and has since been extended in many directions. Since the ISI stemming from practical FTN signaling can be of significant duration, optimum detection with traditional methods is often prohibitively complex, and alternative equalization methods with acceptable complexity-performance tradeoffs are needed. The key objective of this thesis is therefore to design reduced-complexity receivers for FTN and general linear channels that achieve optimal or near-optimal performance. Although the performance of a detector can be measured in several ways, this thesis is restricted to bit error rate (BER) and mutual information results. FTN signaling is applied in two ways: as a separate uncoded narrowband communication system, or in a coded scenario consisting of a convolutional encoder, an interleaver, and the inner ISI mechanism in serial concatenation. Turbo equalization, in which soft information in the form of log-likelihood ratios (LLRs) is exchanged between the equalizer and the decoder, is a commonly used decoding technique for coded FTN signals. The first part of the thesis considers receivers, and the stability problems that arise, when working within the white-noise constraint. New M-BCJR algorithms for turbo equalization are proposed and compared to reduced-trellis VA and BCJR benchmarks based on an offset-label idea. By adding a third low-complexity M-BCJR recursion, LLR quality is improved for practical values of M, where M measures the reduced number of BCJR computations per data symbol. An improvement of the minimum-phase conversion that sharpens the focus of the ISI model energy is proposed; when combined with a delayed and slightly mismatched receiver, the decoding allows a smaller M without significant loss in BER. The second part analyzes the effect of the internal metric calculations on the performance of Forney- and Ungerboeck-based reduced-complexity equalizers of the M-algorithm type for both ISI and multiple-input multiple-output (MIMO) channels. Even though the final output of a full-complexity equalizer is identical for both models, the internal metric calculations are in general different, and hence suboptimum methods need not produce the same final output. Additionally, new models working in between the two extremes are proposed and evaluated. Note that the choice of observation model does not affect the detection complexity, as the underlying algorithm is unaltered. The last part of the thesis is devoted to a different complexity-reducing approach: optimal channel-shortening detectors for linear channels are optimized from an information-theoretic perspective. The achievable information rates of the shortened models, as well as closed-form expressions for all components of the optimal detector of the class, are derived. The framework used in this thesis is more general than what has previously been used within the area.
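    The controlled ISI at the heart of FTN can be made concrete with ideal sinc pulses: sampling the pulse at the accelerated symbol spacing tau·T yields the ISI taps that the equalizer must handle (sinc pulses and tau = 0.8 are illustrative choices; practical FTN typically uses root-raised-cosine pulses).

```python
import numpy as np

def ftn_isi_taps(tau, n=5):
    """ISI taps seen by a symbol-spaced receiver when ideal sinc pulses
    are sent every tau*T instead of every T: g[k] = sinc(k*tau).
    At tau = 1 (Nyquist signaling) all off-center taps vanish;
    for tau < 1 (FTN) intentional ISI appears."""
    k = np.arange(-n, n + 1)
    return np.sinc(k * tau)      # np.sinc is the normalized sinc

nyquist = ftn_isi_taps(1.0)
ftn = ftn_isi_taps(0.8)          # Mazo limit for sinc pulses: tau ~ 0.802
```

Mazo's observation was that, for sinc pulses, accelerating down to tau ≈ 0.802 leaves the minimum Euclidean distance — and hence the asymptotic error rate — unchanged, even though the ISI taps above are clearly nonzero; taming that ISI at acceptable complexity is what the M-BCJR and channel-shortening receivers of the thesis are for.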

    A Hardware Implementation of the Soft Output Viterbi Algorithm for Serially Concatenated Convolutional Codes

    This thesis outlines the hardware design of a soft-output Viterbi algorithm decoder for use in a serially concatenated convolutional code system. Convolutional codes and their related structures are described, as well as the algorithms used to decode them. A decoder design intended for a field-programmable gate array is presented. Simulations of the proposed design are compared with simulations of a software reference decoder that is known to be correct. Results of the simulations are shown and interpreted, and suggestions for future improvements are given.
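    A software reference for such a decoder can be sketched with a hard-decision Viterbi decoder for the classic rate-1/2, constraint-length-3 (7,5) convolutional code — a textbook choice; the thesis's SOVA additionally produces soft outputs and may use different code parameters.

```python
# Rate-1/2, constraint-length-3 convolutional code with generators (7, 5)
# in octal. Hard-decision (Hamming-metric) Viterbi decoding.
G = (0b111, 0b101)
K = 3

def conv_encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state          # shift register contents
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(rx, n_bits):
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)   # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(n_bits):
        r = rx[2 * t:2 * t + 2]
        new_m, new_p = [INF] * n_states, [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                ns = reg >> 1
                exp = [bin(reg & g).count("1") & 1 for g in G]
                bm = sum(e != y for e, y in zip(exp, r))  # Hamming distance
                if metric[s] + bm < new_m[ns]:            # keep the survivor
                    new_m[ns], new_p[ns] = metric[s] + bm, paths[s] + [b]
        metric, paths = new_m, new_p
    return paths[min(range(n_states), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
tx = conv_encode(msg + [0, 0])    # two tail bits terminate the trellis
rx = list(tx); rx[5] ^= 1         # inject one channel bit error
decoded = viterbi_decode(rx, len(msg) + 2)[:len(msg)]
```

Because the (7,5) code has free distance 5, the single injected error is corrected; a SOVA decoder would run the same add-compare-select recursion while also tracking metric differences to emit reliability values.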