
    Iterative Equalization and Source Decoding for Vector Quantized Sources

    In this contribution an iterative (turbo) channel equalization and source decoding scheme is considered. In our investigations the source is modelled as a Gauss-Markov source, which is compressed with the aid of vector quantization. The communications channel is modelled as a time-invariant channel contaminated by intersymbol interference (ISI). Since the ISI channel can be viewed as a rate-1 encoder and since the redundancy of the source cannot be perfectly removed by source encoding, a joint channel equalization and source decoding scheme may be employed for enhancing the achievable performance. In our study the channel equalization and the source decoding are operated iteratively on a bit-by-bit basis under the maximum a posteriori (MAP) criterion. The channel equalizer accepts the a priori information provided by the source decoder and also extracts extrinsic information, which in turn acts as a priori information for improving the source decoding performance. Simulation results are presented for characterizing the achievable performance of the iterative channel equalization and source decoding scheme. Our results show that iterative channel equalization and source decoding is capable of achieving an improved performance by efficiently exploiting the residual redundancy of the vector-quantization-assisted source coding.
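
    To make the message flow concrete, the sketch below (not from the paper) shows the extrinsic/a priori bookkeeping of such a turbo loop in Python, assuming bit LLRs with the convention L = log P(b=0)/P(b=1). Both constituent MAP blocks are deliberately left as stubs: a real equalizer would run the BCJR algorithm on the ISI trellis, and a real source decoder would exploit the residual redundancy of the vector-quantized source.

import numpy as np

# Skeleton of the bit-level turbo loop: only the extrinsic / a priori
# bookkeeping is shown; the two constituent MAP blocks are stubs.
rng = np.random.default_rng(0)
channel_llr = rng.normal(2.0, 2.0, size=64)        # toy observation LLRs from the channel

def equalizer_extrinsic(channel_llr, apriori_llr):
    # stub: a real MAP equalizer would run BCJR on the ISI trellis;
    # here the channel contribution is passed through as "extrinsic" output
    return channel_llr

def source_decoder_extrinsic(apriori_llr):
    # stub: a real MAP source decoder would add information from the source model
    return np.zeros_like(apriori_llr)

apriori_eq = np.zeros_like(channel_llr)
for _ in range(5):                                  # turbo iterations
    ext_eq = equalizer_extrinsic(channel_llr, apriori_eq)   # equalizer pass
    ext_dec = source_decoder_extrinsic(ext_eq)              # source-decoder pass, a priori = ext_eq
    apriori_eq = ext_dec                                     # fed back as a priori to the equalizer
bit_decisions = (ext_eq + ext_dec) < 0              # hard decisions (True = bit 1) from final LLRs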

    PACKET-BASED MARKOV MODELING OF REED-SOLOMON BLOCK CODED CORRELATED FADING CHANNELS

    This paper considers the transmission of a Reed-Solomon (RS) code over a binary modulated time-correlated flat Rician fading channel with hard-decision demodulation. We define a binary packet (symbol) error sequence that indicates whether or not an RS symbol is transmitted successfully across the discrete channel whose input enters the modulator and whose output exits the demodulator. We then approximate the discrete channel’s packet error sequence using an Mth-order Markov queue-based channel (QBC). In other words, the QBC is used to model the discrete channel at the packet level. Modeling accuracy is evaluated by comparing the simulated probability of codeword error (PCE) for the discrete channel with the numerically evaluated PCE for the QBC. Modeling results identify accurate low-order QBCs for a wide range of fading conditions and reveal that modeling the discrete channel at the packet level is an efficient tool for non-binary coding performance evaluation over channels with memory.
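
    The modeling procedure can be imitated in a few lines of numpy. The sketch below is illustrative only: it uses an AR(1) approximation of correlated Rayleigh (rather than Rician) fading, groups hard-decision BPSK errors into 8-bit packet error indicators, and fits a first-order Markov chain instead of the Mth-order QBC; all parameter values are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_sym, m, rho, snr_db = 10_000, 8, 0.99, 10.0      # RS symbols, bits/symbol, fading memory, SNR (dB)
n_bits = n_sym * m

# AR(1) approximation of a time-correlated flat fading process (unit average power)
w = (rng.standard_normal(n_bits) + 1j * rng.standard_normal(n_bits)) / np.sqrt(2)
g = np.empty(n_bits, dtype=complex)
g[0] = w[0]
for k in range(1, n_bits):
    g[k] = rho * g[k - 1] + np.sqrt(1 - rho ** 2) * w[k]

sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))          # noise std per real dimension
bit_err = np.abs(g) + sigma * rng.standard_normal(n_bits) < 0   # coherent BPSK, hard decisions
pkt_err = bit_err.reshape(n_sym, m).any(axis=1)     # packet (RS-symbol) error indicator sequence

# Fit a first-order Markov chain to the packet error sequence
prev, curr = pkt_err[:-1], pkt_err[1:]
p01 = np.mean(curr[~prev])                          # P(error | previous symbol correct)
p11 = np.mean(curr[prev])                           # P(error | previous symbol in error)
print(f"marginal packet error rate {pkt_err.mean():.3f}, p01 {p01:.3f}, p11 {p11:.3f}")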

    A Practical Approach to Lossy Joint Source-Channel Coding

    This work is devoted to practical joint source channel coding. Although the proposed approach has more general scope, for the sake of clarity we focus on a specific application example, namely, the transmission of digital images over noisy binary-input output-symmetric channels. The basic building blocks of most state-of-the-art source coders are: 1) a linear transformation; 2) scalar quantization of the transform coefficients; 3) probability modeling of the sequence of quantization indices; 4) an entropy coding stage. We identify the weakness of the conventional separated source-channel coding approach in the catastrophic behavior of the entropy coding stage. Hence, we replace this stage with linear coding, which maps the sequence of redundant quantizer output symbols directly into a channel codeword. We show that this approach does not entail any loss of optimality in the asymptotic regime of large block length. However, in the practical regime of finite block length and low decoding complexity our approach yields very significant improvements. Furthermore, our scheme allows us to retain the transform, quantization and probability modeling of current state-of-the-art source coders, which are carefully matched to the features of specific classes of sources. In our working example, we make use of the "bit-plane" and "context" models defined by the JPEG2000 standard and we re-interpret the underlying probability model as a sequence of conditionally Markov sources. The Markov structure allows us to derive a simple successive coding and decoding scheme, where the latter is based on iterative Belief Propagation. We provide a construction example of the proposed scheme based on punctured Turbo Codes and we demonstrate the gain over a conventional separated scheme by running extensive numerical experiments on test images. Comment: 51 pages, submitted to IEEE Transactions on Information Theory.
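
    The key idea, replacing the entropy coder by a linear map from redundant quantizer symbols to a channel codeword whose decoder exploits the source prior, can be illustrated with a toy example. The sketch below is not the paper's construction: a small random parity matrix and a brute-force MAP decoder stand in for the punctured turbo code and belief-propagation decoding, and the source bias p_source and crossover probability eps are arbitrary assumptions.

import itertools
import numpy as np

rng = np.random.default_rng(1)

# Toy joint source-channel code: a biased (redundant) binary source is mapped
# directly to a systematic linear codeword, with no entropy coding stage.
k, r, p_source, eps = 8, 4, 0.1, 0.05              # info bits, parity bits, source bias, BSC error prob
P = rng.integers(0, 2, size=(k, r))                 # hypothetical parity part of G = [I | P]

def encode(u):
    return np.concatenate([u, (u @ P) % 2])

def map_decode(y):
    # brute-force MAP over all 2^k source words, using the (nonuniform) source prior
    best, best_metric = None, -np.inf
    for bits in itertools.product([0, 1], repeat=k):
        u = np.array(bits)
        flips = np.sum(encode(u) != y)
        metric = (np.sum(u) * np.log(p_source) + np.sum(1 - u) * np.log(1 - p_source)
                  + flips * np.log(eps) + (k + r - flips) * np.log(1 - eps))
        if metric > best_metric:
            best, best_metric = u, metric
    return best

errs = total = 0
for _ in range(200):
    u = (rng.random(k) < p_source).astype(int)      # redundant source word
    y = (encode(u) + (rng.random(k + r) < eps)) % 2  # BSC observation
    errs += np.sum(map_decode(y) != u)
    total += k
print(f"symbol error rate with prior-aware MAP decoding: {errs / total:.4f}")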

    Advances in Detection and Error Correction for Coherent Optical Communications: Regular, Irregular, and Spatially Coupled LDPC Code Designs

    In this chapter, we show how the use of differential coding and the presence of phase slips in the transmission channel affect the total achievable information rates and capacity of a system. By means of the commonly used QPSK modulation, we show that the use of differential coding does not decrease the total amount of reliably conveyable information over the channel. It is a common misconception that the use of differential coding introduces an unavoidable differential loss. This perceived differential loss is rather a consequence of simplified differential detection and decoding at the receiver. Afterwards, we show how capacity-approaching coding schemes based on LDPC and spatially coupled LDPC codes can be constructed by combining iterative demodulation and decoding. For this, we first show how to modify the differential decoder to account for phase slips and then how to use this modified differential decoder to construct good LDPC codes. This construction method can serve as a blueprint to construct good and practical LDPC codes for other applications with iterative detection, such as higher order modulation formats with non-square constellations, multi-dimensional optimized modulation formats, turbo equalization to mitigate ISI (e.g., due to nonlinearities) and many more. Finally, we introduce the class of spatially coupled (SC)-LDPC codes, which are a generalization of LDPC codes with some outstanding properties and which can be decoded with a very simple windowed decoder. We show that the universal behavior of spatially coupled codes makes them an ideal candidate for iterative differential demodulation/detection and decoding. Comment: "Enabling Technologies for High Spectral-efficiency Coherent Optical Communication Networks" edited by X. Zhou and C. Xie, John Wiley & Sons, Inc., April 201
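
    A small, noise-free numerical experiment (with an assumed slip probability) illustrates why differential encoding matters in the presence of phase slips: after differential detection a slip corrupts only the symbol at which it occurs, whereas a naive absolute-phase detector loses everything that follows.

import numpy as np

rng = np.random.default_rng(2)

# Differential QPSK over a noise-free channel with random cumulative phase slips.
# The slip probability is an arbitrary assumption for this sketch.
n, p_slip = 10_000, 1e-3
data = rng.integers(0, 4, n)                        # 2-bit symbols encoded as phase increments
tx_phase = np.cumsum(data) % 4                      # differential encoding (reference phase 0)
slips = rng.integers(1, 4, n) * (rng.random(n) < p_slip)
rx_phase = (tx_phase + np.cumsum(slips)) % 4        # each slip rotates all subsequent symbols

diff_det = np.diff(np.concatenate([[0], rx_phase])) % 4   # differential detection
abs_det = rx_phase                                  # naive absolute-phase detection
print("differential detection symbol errors:", np.sum(diff_det != data))
print("absolute-phase detection symbol errors:", np.sum(abs_det != tx_phase))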

    Telemetering and telecommunications research

    The New Mexico State University (NMSU) Center for Space Telemetering and Telecommunications Systems is engaged in advanced communications systems research. Four areas of study that are being sponsored concern investigations into the use of trellis-coded modulation (TCM). In particular, two areas concentrate on carrier synchronization research in TCM M-ary phase shift keying (MPSK) systems. A third research topic is the study of interference effects on TCM, while the fourth research area is in the field of concatenated TCM systems.

    Turbo space-time coded modulation : principle and performance analysis

    A breakthrough in coding was achieved with the invention of turbo codes. Turbo codes approach Shannon capacity by displaying the properties of long random codes, yet allowing efficient decoding. Coding alone, however, cannot fully address the problem of the multipath fading channel. Recent advances in information theory have revolutionized the traditional view of the multipath channel as an impairment. New results show that high gains in capacity can be achieved through the use of multiple antennas at the transmitter and the receiver. To take advantage of these new results in information theory, it is necessary to devise methods that allow communication systems to operate close to the predicted capacity. One such recently invented method is space-time coding, which provides both coding gain and diversity advantage. In this dissertation, a new class of codes is proposed that extends the concept of turbo coding to include space-time encoders as constituent building blocks of turbo codes. The codes are referred to as turbo space-time coded modulation (turbo-STCM). The motivation behind the turbo-STCM concept is to fuse the important properties of turbo and space-time codes into a unified design framework. A turbo-STCM encoder is proposed, which consists of two space-time codes in recursive systematic form concatenated in parallel. An iterative symbol-by-symbol maximum a posteriori algorithm operating in the log domain is developed for decoding turbo-STCM. The decoder employs two a posteriori probability (APP) computing modules concatenated in parallel, one module for each constituent code. The analysis of turbo-STCM is demonstrated through simulations and theoretical closed-form expressions. Simulation results are provided for 4-PSK and 8-PSK schemes over the Rayleigh block-fading channel. It is shown that the turbo-STCM scheme features full diversity and full coding rate, and that a significant performance gain can be obtained over conventional space-time codes of similar complexity. The analytical union bound on the bit error probability is derived for turbo-STCM over the additive white Gaussian noise (AWGN) and the Rayleigh block-fading channels. The bound makes it possible to express the performance analysis of turbo-STCM in terms of the properties of the constituent space-time codes. The union bound is demonstrated for 4-PSK and 8-PSK turbo-STCM with two transmit antennas and one/two receive antennas. Information-theoretic bounds, such as Shannon capacity, cutoff rate, outage capacity and the Fano bound, are computed for multi-antenna systems over the AWGN and fading channels. These bounds are subsequently used as benchmarks for demonstrating the performance of turbo-STCM.
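
    For context, union bounds of this kind typically combine a weight-enumerator sum with the standard pairwise error probability bound for space-time codes over Rayleigh block fading; the generic form below is standard textbook material rather than the dissertation's exact expression. Here w(c, e) is the number of information-bit errors associated with the error event, n_b the number of information bits per codeword, A = (c - e)(c - e)^H has rank r and nonzero eigenvalues \lambda_i, and n_R is the number of receive antennas:

\[
P_b \le \sum_{e \ne c} \frac{w(c,e)}{n_b}\, P(c \to e),
\qquad
P(c \to e) \le \left(\prod_{i=1}^{r} \lambda_i\right)^{-n_R} \left(\frac{E_s}{4 N_0}\right)^{-r\, n_R}.
\]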

    Channels with block interference

    A new class of channel models with memory is presented in order to study various kinds of interference phenomena. It is shown, among other things, that when all other parameters are held fixed, the channel capacity C is an increasing function of the memory length, while the cutoff rate R0 is generally a decreasing function. Calculations with various explicit coding schemes indicate that C is better than R0 as a performance measure for these channel models. As a partial resolution of this C versus R0 paradox, the conjecture is offered that R0 is more properly a measure of coding delay rather than of coding complexity.
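
    For reference, the two quantities being compared are defined in the standard way for a channel with transition law W(y|x), applied here to the block-interference super-channel:

\[
C = \max_{P_X} I(X;Y),
\qquad
R_0 = \max_{P_X}\left[-\log_2 \sum_{y}\Bigl(\sum_{x} P_X(x)\,\sqrt{W(y \mid x)}\Bigr)^{2}\right].
\]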

    Delay-Sensitive Communication over Fading Channel: Queueing Behavior and Code Parameter Selection

    This article examines the queueing performance of communication systems that transmit encoded data over unreliable channels. A fading formulation suitable for wireless environments is considered where errors are caused by a discrete channel with correlated behavior over time. Random codes and BCH codes are employed as means to study the relationship between code-rate selection and the queueing performance of point-to-point data links. For carefully selected channel models and arrival processes, a tractable Markov structure composed of queue length and channel state is identified. This facilitates the analysis of the stationary behavior of the system, leading to evaluation criteria such as bounds on the probability of the queue exceeding a threshold. Specifically, this article focuses on system models with scalable arrival profiles, which are based on Poisson processes, and finite-state channels with memory. These assumptions permit the rigorous comparison of system performance for codes with arbitrary block lengths and code rates. Based on the resulting characterizations, it is possible to select the best code parameters for delay-sensitive applications over various channels. The methodology introduced herein offers a new perspective on the joint queueing-coding analysis of finite-state channels with memory, and it is supported by numerical simulations.
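
    The joint Markov structure of queue length and channel state lends itself to a very small simulation sketch. The toy example below (not from the article) uses Poisson arrivals, a two-state Gilbert-Elliott-like channel, and a per-state codeword failure probability; all numerical values are assumptions chosen only to make the buffer-overflow estimate visible.

import numpy as np

rng = np.random.default_rng(0)

# Toy joint queue/channel simulation: one codeword is served per slot if decoding succeeds.
T, lam, thresh = 100_000, 0.6, 20                   # slots, mean arrivals/slot, queue threshold
p_gb, p_bg = 0.02, 0.10                             # good->bad and bad->good transition probabilities
p_fail = (0.01, 0.50)                               # codeword failure probability in (good, bad) state

state, q, exceed = 0, 0, 0
for _ in range(T):
    q += rng.poisson(lam)                           # packet arrivals in this slot
    if q > 0 and rng.random() > p_fail[state]:      # serve one codeword if decoding succeeds
        q -= 1
    exceed += q > thresh
    state = int(rng.random() < (p_gb if state == 0 else 1 - p_bg))  # Markov channel transition
print("estimated P(queue length > threshold):", exceed / T)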

    Partially Block Markov Superposition Transmission of Gaussian Source with Nested Lattice Codes

    This paper studies the transmission of Gaussian sources through additive white Gaussian noise (AWGN) channels in the bandwidth-expansion regime, i.e., the channel bandwidth is greater than the source bandwidth. To mitigate the error-propagation phenomenon of conventional digital transmission schemes, we propose in this paper a new capacity-approaching joint source channel coding (JSCC) scheme based on partially block Markov superposition transmission (BMST) of nested lattice codes. In the proposed scheme, first, the Gaussian source sequence is discretized by a lattice-based quantizer, resulting in a sequence of lattice points. Second, these lattice points are encoded by a short systematic group code. Third, the coded sequence is partitioned into blocks of equal length and then transmitted in the BMST manner. The main characteristics of the proposed JSCC scheme are: 1) entropy coding is not used explicitly; 2) only the parity-check sequence is superimposed, hence the term partially BMST (PBMST), which distinguishes it from the original BMST. To show the superior performance of the proposed scheme, we present extensive simulation results which show that the proposed scheme performs within 1.0 dB of the Shannon limits. Hence, the proposed scheme provides an attractive candidate for transmission of Gaussian sources. Comment: 22 pages, 9 figures, submitted to IEEE Transactions on Communications.
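
    The encoder chain described above can be sketched structurally as follows. This is a deliberately simplified stand-in, not the paper's construction: a scalar (one-dimensional lattice) quantizer replaces the nested lattice, a single-parity-check code replaces the short systematic group code, and the superposition has memory 1 and acts on the parity part only.

import numpy as np

rng = np.random.default_rng(4)

# Structural sketch of the PBMST-style encoder chain, with heavy simplifications
# and illustrative parameters.
step, n_blocks, k = 0.5, 4, 16
source = rng.standard_normal(n_blocks * k)          # Gaussian source samples
idx = np.round(source / step).astype(int)           # 1-D lattice quantization indices
bits = idx % 2                                      # a single bit-plane of the indices, as toy symbols
blocks = bits.reshape(n_blocks, k)

prev_parity = np.zeros(1, dtype=int)
tx = []
for b in blocks:
    parity = np.array([b.sum() % 2])                # systematic single-parity-check encoding
    tx.append(np.concatenate([b, (parity + prev_parity) % 2]))  # superimpose the parity part only
    prev_parity = parity
print(np.array(tx))                                 # transmitted blocks: [systematic | superimposed parity]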

    Error correction for asynchronous communication and probabilistic burst deletion channels

    Short-range wireless communication with low-power, small-size sensors has been broadly applied in many areas, such as environmental observation and biomedical and health-care monitoring. However, such applications require a wireless sensor operating in always-on mode, which increases the power consumption of sensors significantly. Asynchronous communication is an emerging low-power approach for these applications because it offers a large potential for power savings when recording sparse continuous-time signals, a smaller hardware footprint, and lower circuit complexity compared to Nyquist-based synchronous signal processing. In this dissertation, classical Nyquist-based synchronous signal sampling is replaced by asynchronous sampling strategies, namely level-crossing (LC) sampling and time encoding. Novel forward error correction schemes for sensor communication based on these sampling strategies are proposed, where the dominant errors consist of pulse deletions and insertions, and where encoding is required to take place in an instantaneous fashion. For LC sampling the presented scheme consists of a combination of an outer systematic convolutional code, an embedded inner marker code, and power-efficient frequency-shift keying modulation at the sensor node. Decoding is first performed by a maximum a posteriori (MAP) decoder for the inner marker code, which achieves synchronization for the insertion and deletion channel, followed by MAP decoding of the outer convolutional code. By iteratively decoding the marker and convolutional codes along with interleaving, a significant reduction in the expected end-to-end distortion between the original and reconstructed signals can be obtained compared to non-iterative processing. Besides investigating the rate trade-off between marker and convolutional codes, it is shown that residual redundancy in the asynchronously sampled source signal can be successfully exploited in combination with redundancy only from a marker code. This provides a new low-complexity alternative for deletion and insertion error correction compared to using explicit redundancy. For time encoding, only the pulse timing is of relevance at the receiver, and the outer channel code is replaced by a quantizer that represents the relative position of the pulse timing. Numerical simulations show that LC sampling outperforms time encoding in the low to moderate signal-to-noise ratio regime by a large margin.
    In the second part of this dissertation, a new burst-deletion correction scheme tailored to low-latency applications, such as high-read/write-speed non-volatile memory, is proposed. An exemplary application is racetrack memory, where each element of information is stored in a cell and data reading is performed by many read ports, or heads. To read the information, the cells shift towards their closest heads in the same direction and at the same speed, so that a block of bits (i.e., a non-binary symbol) is read by multiple heads in parallel during each shift. If the cells shift by more than a single cell location, consecutive (burst) non-binary symbol deletions occur. In practical systems, the maximal length of consecutive non-binary deletions is limited. Existing schemes for this scenario leverage non-binary de Bruijn sequences to perfectly locate deletions. In contrast, this work proposes binary marker patterns in combination with a new soft-decision decoding scheme. In this scheme, deletions are soft-located by assigning a posteriori probabilities to the location of every burst-deletion event, and the affected positions are replaced by erasures. The resulting errors are then corrected by an outer channel code. Such a scheme has the advantage over non-binary de Bruijn sequences that it generally increases the communication rate.
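
    The marker principle for burst deletions can be sketched as follows. This toy example (not the dissertation's decoder) inserts a fixed binary pattern after every block of data bits and hard-locates a single assumed burst deletion by finding the first marker position that no longer lines up, erasing that block for the outer code; the proposed scheme instead computes a posteriori probabilities for the burst location.

import numpy as np

rng = np.random.default_rng(0)

# Marker-based localization of a single burst deletion (hard-decision toy version).
marker, period = np.array([0, 1, 1, 0, 1]), 12       # assumed marker pattern and data-block length
data = rng.integers(0, 2, 5 * period)
tx = np.concatenate([np.concatenate([data[i:i + period], marker])
                     for i in range(0, len(data), period)])
burst_start, burst_len = 20, 4                       # assumed burst deletion position and length
rx = np.delete(tx, np.arange(burst_start, burst_start + burst_len))

frame = period + len(marker)
for j in range(len(data) // period):
    pos = j * frame + period                         # where the j-th marker should start
    if pos + len(marker) > len(rx) or not np.array_equal(rx[pos:pos + len(marker)], marker):
        print(f"burst deletion located near block {j}; erase that block for the outer decoder")
        break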