27 research outputs found

    Multi-stage decoding for multi-level block modulation codes

    Various types of multistage decoding for multilevel block modulation codes are discussed, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum-likelihood or bounded-distance. Error performance is analyzed for a memoryless additive channel under the various types of multistage decoding, and upper bounds on the probability of an incorrect decoding are derived. It is shown that, if the component codes of a multilevel modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It is also shown that the difference in performance between suboptimum multistage soft-decision maximum-likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of $10^{-6}$. Multistage decoding of multilevel modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
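
    As a concrete illustration of the multistage idea, the sketch below decodes a toy two-level code in two stages: a soft-decision stage for the lower level, followed by a hard-decision stage restricted to the subset that stage selects. The 4-PAM natural labeling, the (3,1) repetition component codes, and the noise level are illustrative assumptions, not the codes analyzed in the paper.

        # A minimal sketch of multistage decoding for a 2-level block modulation code.
        # Assumptions (not from the paper): 4-PAM with natural set partitioning, a
        # (3,1) repetition code as the component code at each level, soft decision at
        # stage 1, hard decision at stage 2, AWGN channel.
        import numpy as np

        # 4-PAM points indexed by (b0, b1): b0 picks the subset, b1 the point within it.
        CONST = {(0, 0): -3.0, (0, 1): -1.0, (1, 0): +1.0, (1, 1): +3.0}

        def encode(bits_l0, bits_l1):
            """Repetition-encode each level, then map bit pairs to 4-PAM symbols."""
            c0 = np.repeat(bits_l0, 3)          # level-0 codeword
            c1 = np.repeat(bits_l1, 3)          # level-1 codeword
            return np.array([CONST[(b0, b1)] for b0, b1 in zip(c0, c1)])

        def decode_multistage(r):
            """Stage 1 decodes level 0 from soft metrics; stage 2 re-uses that decision."""
            # Per-symbol soft metric for level 0: distance to the nearest point of each subset.
            m0 = np.array([min((r_i - CONST[(0, 0)])**2, (r_i - CONST[(0, 1)])**2) -
                           min((r_i - CONST[(1, 0)])**2, (r_i - CONST[(1, 1)])**2)
                           for r_i in r])
            b0 = int(np.sum(m0) > 0)            # soft combining across the repetition code
            # Stage 2: hard decisions within the subset chosen by stage 1.
            hard1 = [int(abs(r_i - CONST[(b0, 1)]) < abs(r_i - CONST[(b0, 0)])) for r_i in r]
            b1 = int(np.sum(hard1) >= 2)        # majority vote (hard decision)
            return b0, b1

        rng = np.random.default_rng(0)
        tx = encode(np.array([1]), np.array([0]))
        rx = tx + rng.normal(0, 0.8, size=tx.shape)
        print(decode_multistage(rx))            # expected (1, 0) at moderate noise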

    A Soft-Aided Staircase Decoder Using Three-Level Channel Reliabilities

    The soft-aided bit-marking (SABM) algorithm is based on the idea of marking bits as highly reliable bits (HRBs), highly unreliable bits (HUBs), and uncertain bits to improve the performance of hard-decision (HD) decoders. The HRBs and HUBs are used to assist the HD decoders to prevent miscorrections and to decode originally uncorrectable cases via bit flipping (BF), respectively. In this paper, an improved SABM algorithm (called iSABM) is proposed for staircase codes (SCCs). Similar to the SABM, iSABM marks bits with the help of channel reliabilities, i.e., using the absolute values of the log-likelihood ratios. The improvements offered by iSABM include: (i) HUBs being classified using a reliability threshold, (ii) BF randomly selecting HUBs, and (iii) soft-aided decoding over multiple SCC blocks. The decoding complexity of iSABM is comparable to that of SABM. This is because, on the one hand, no sorting is required (lower complexity) thanks to the use of a threshold for HUBs, while on the other hand multiple SCC blocks use soft information (higher complexity). Additional gains of up to 0.53 dB with respect to SABM and 0.91 dB with respect to standard SCC decoding at a bit error rate of $10^{-6}$ are reported. Furthermore, it is shown that using 1-bit reliability marking, i.e., only having HRBs and HUBs, causes a gain penalty of only up to 0.25 dB with a significantly reduced memory requirement.
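
    A minimal sketch of the bit-marking idea, in the spirit of SABM/iSABM rather than the authors' exact algorithm, is given below: bits are marked from |LLR| against two thresholds, a decoding result that flips an HRB is treated as a miscorrection, and small subsets of HUBs are flipped before retrying. The thresholds and the (7,4) Hamming component code are illustrative assumptions; staircase codes use much stronger component codes.

        # A sketch of soft-aided bit marking (not the authors' exact algorithm).
        # Bits are marked from |LLR|: highly reliable (HRB) above t_hrb, highly
        # unreliable (HUB) below t_hub. If hard-decision decoding flips an HRB
        # (likely miscorrection), small subsets of HUBs are flipped and decoding
        # is retried. The (7,4) Hamming component code is an assumption.
        import itertools
        import numpy as np

        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])     # Hamming(7,4) parity-check matrix

        def hd_decode(hard):
            """Syndrome decoding: corrects one error, returns (word, success)."""
            syn = H @ hard % 2
            if not syn.any():
                return hard.copy(), True
            pos = int("".join(map(str, syn[::-1])), 2) - 1   # column index of the syndrome
            out = hard.copy()
            out[pos] ^= 1
            return out, True

        def sabm_like_decode(llr, t_hrb=4.0, t_hub=1.0, max_flips=2):
            hard = (llr < 0).astype(int)
            hrb = np.abs(llr) >= t_hrb
            hub = np.abs(llr) <= t_hub
            cand, ok = hd_decode(hard)
            if ok and not np.any((cand != hard) & hrb):      # accept unless an HRB was flipped
                return cand
            # Miscorrection suspected: flip small subsets of HUBs and retry.
            hub_idx = np.flatnonzero(hub)
            for k in range(1, max_flips + 1):
                for flips in itertools.combinations(hub_idx, k):
                    trial = hard.copy()
                    trial[list(flips)] ^= 1
                    cand, ok = hd_decode(trial)
                    if ok and not np.any((cand != trial) & hrb):
                        return cand
            return hard                                      # give up: output the hard decisions

        llr = np.array([5.1, -4.8, 0.4, 6.0, -0.2, 3.9, -7.2])
        print(sabm_like_decode(llr))             # decode one noisy 7-bit word from its LLRs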

    Distributed Turbo Product Coding Techniques Over Cooperative Communication Systems

    In this dissertation, we propose a coded cooperative communications framework based on Distributed Turbo Product Codes (DTPC). The system uses linear block Extended Bose-Chaudhuri-Hocquenghem (EBCH) codes as component codes. The source broadcasts the EBCH-coded frames to the destination and nearby relays. Each relay constructs a product code by arranging the corrected bit sequences in rows and re-encoding them vertically with EBCH component codes to obtain Incremental Redundancy (IR) for the source's data. Within this framework, we investigate a number of interesting and important issues. First, to obtain independent vertical parities from each relay in the same code space, we propose circular interleaving of the decoded EBCH rows before re-encoding vertically. We then propose and derive a novel soft-information relay for the DTPC cooperative network based on EBCH component codes. The relay generates Log-Likelihood Ratio (LLR) values for the decoded rows, which are used to construct a product code by re-encoding the matrix along the columns with a novel soft block encoding technique; this yields soft parity bits with different reliabilities that serve as soft IR for the source's data and are forwarded to the destination. To minimize the overall decoding errors, we propose a power allocation method for the distributed encoded system when the channel attenuations of the direct and relay channels are known. We compare the performance of the proposed power allocation method with fixed power assignments for the DTPC system, and develop a power optimization algorithm to validate the proposed power allocation algorithm. The power allocation and power optimization results demonstrate the effectiveness of the proposed power allocation criterion and show the maximum attainable performance of the DTPC cooperative system. Finally, we propose a new joint distributed Space-Time Block Code (STBC)-DTPC scheme in which the vertical parity is generated at the relay and transmitted to the destination using STBC across the source and relay. The proposed system is evaluated in a fast-fading environment on the three channels connecting the three nodes of the cooperative network.
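
    The relay-side product construction can be sketched as follows: decoded rows are circularly interleaved (here row i is shifted by i positions, an assumption standing in for the dissertation's interleaver) and re-encoded column-wise, and only the column parity is forwarded as incremental redundancy. A single parity check per column replaces the EBCH column code purely to keep the example short.

        # A minimal sketch of the relay's product-code construction for incremental
        # redundancy. The shift-by-row-index interleaver and the single-parity-check
        # column code are simplifying assumptions.
        import numpy as np

        def relay_incremental_redundancy(decoded_rows):
            """decoded_rows: (K, N) array of hard-decoded source rows."""
            K, N = decoded_rows.shape
            # Circular interleaving: shift row i cyclically by i before column encoding,
            # so relays using different shifts produce independent vertical parities.
            interleaved = np.stack([np.roll(decoded_rows[i], i) for i in range(K)])
            # Column-wise re-encoding; a single parity bit per column stands in for the
            # systematic EBCH column code used in the dissertation.
            column_parity = interleaved.sum(axis=0) % 2
            return column_parity            # forwarded to the destination as IR

        rows = np.array([[1, 0, 1, 1, 0, 0],
                         [0, 1, 1, 0, 1, 0],
                         [1, 1, 0, 0, 0, 1]])
        print(relay_incremental_redundancy(rows))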

    Forward Error Correcting Codes for 100 Gbit/s Optical Communication Systems


    Advanced channel coding techniques using bit-level soft information

    In this dissertation, advanced channel decoding techniques based on bit-level soft information are studied. Two main approaches are proposed: bit-level probabilistic iterative decoding and bit-level algebraic soft-decision (list) decoding (ASD). In the first part of the dissertation, we study iterative decoding for high-density parity-check (HDPC) codes. An iterative decoding algorithm is proposed that uses the sum-product algorithm (SPA) in conjunction with a binary parity-check matrix adapted in each decoding iteration according to the bit-level reliabilities. In contrast to the common belief that iterative decoding is not suitable for HDPC codes, this bit-level reliability-based adaptation procedure is critical to the convergence behavior of iterative decoding for HDPC codes, and it significantly improves the iterative decoding performance of Reed-Solomon (RS) codes, whose parity-check matrices are in general not sparse. We also present another iterative decoding scheme for cyclic codes based on randomly shifting the bit-level reliability values in each iteration. The random-shift-based adaptation can also prevent iterative decoding from getting stuck, with a significant complexity reduction compared with the reliability-based parity-check matrix adaptation, and it still provides reasonably good performance for short-length cyclic codes. In the second part of the dissertation, we investigate ASD for RS codes using bit-level soft information. In particular, we show that by carefully incorporating bit-level soft information in the multiplicity assignment and the interpolation step, ASD can significantly outperform conventional hard-decision decoding (HDD) for RS codes at a very small additional complexity, even though the kernel of ASD operates at the symbol level. More importantly, the performance of the proposed bit-level ASD can be tightly upper bounded for practical high-rate RS codes, which is in general not possible for other popular ASD schemes. Bit-level soft-decision decoding (SDD) serves as an efficient way to exploit the potential gain of many classical codes, and it also facilitates the corresponding performance analysis. The proposed bit-level SDD schemes are promising and feasible alternatives to conventional symbol-level HDD schemes in many communication systems.
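
    The reliability-based parity-check matrix adaptation that drives the first approach can be sketched as follows. Only the adaptation step is shown (the SPA run on the adapted matrix is omitted), and the toy dense matrix, the LLR values, and the GF(2) elimination order are assumptions for illustration.

        # A minimal sketch of reliability-based parity-check matrix adaptation:
        # before each SPA iteration, Gauss-eliminate H over GF(2) so it has
        # weight-1 columns at the least reliable bit positions. The SPA itself
        # is omitted; H and the LLRs below are toy values.
        import numpy as np

        def adapt_parity_check(H, llr):
            """Return a row-equivalent H with identity-like columns at unreliable bits."""
            H = H.copy() % 2
            m, n = H.shape
            order = np.argsort(np.abs(llr))       # least reliable positions first
            row = 0
            for col in order:
                if row == m:
                    break
                pivots = np.flatnonzero(H[row:, col]) + row
                if pivots.size == 0:
                    continue                      # column dependent on earlier ones; skip
                H[[row, pivots[0]]] = H[[pivots[0], row]]
                # Clear the column elsewhere so this unreliable bit touches only one check.
                for r in np.flatnonzero(H[:, col]):
                    if r != row:
                        H[r] ^= H[row]
                row += 1
            return H

        # Toy example: a dense 3x6 parity-check matrix and arbitrary LLRs.
        H = np.array([[1, 1, 1, 0, 1, 1],
                      [1, 0, 1, 1, 1, 0],
                      [0, 1, 1, 1, 0, 1]])
        llr = np.array([0.2, -3.1, 0.5, 2.4, -0.1, 1.8])
        print(adapt_parity_check(H, llr))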

    Coded cooperative diversity with low complexity encoding and decoding algorithms.

    One of the main concerns in designing wireless communication systems is to provide sufficiently large data rates while accounting for the implementation complexity, which is often constrained by the limited battery power and signal processing capability of the devices. In this thesis, low-complexity encoding and decoding algorithms are investigated for systems with transmission diversity, particularly receiver diversity and cooperative diversity, and design guidelines are given to achieve a good trade-off between implementation complexity and performance. Order-statistics-based list decoding techniques for linear binary block codes of small to medium block length are investigated to reduce the complexity of coded systems. The original order statistics decoding (OSD) is generalized by assuming segmentation of the most reliable independent positions of the received bits. The segmentation is shown to overcome several drawbacks of the original OSD. The complexity of the OSD is further reduced by assuming a partial ordering of the received bits in order to avoid the highly complex Gaussian elimination. The bit error rate performance and the decoding complexity trade-off of the proposed decoding algorithms are studied by computer simulations. Numerical examples show that, in some cases, the proposed decoding schemes are superior to the original OSD in terms of both bit error rate performance and decoding complexity. The complexity of the order-statistics-based list decoding algorithms for linear block codes and binary block turbo codes (BTC) is further reduced by employing highly reliable cyclic redundancy check (CRC) bits; the results show that sending CRC bits for many segments is the most effective technique for reducing the complexity. The coded cooperative diversity is then compared with conventional receiver coded diversity in terms of the pairwise error probability and the overall bit error rate (BER). The expressions for the pairwise error probabilities are obtained analytically and verified by computer simulations. The performance of the cooperative diversity is found to depend strongly on the relay location. Using the analytical as well as extensive numerical results, the geographical areas of relay locations are obtained, for small to medium signal-to-noise ratio (SNR) values, in which the cooperative coded diversity outperforms the receiver coded diversity. However, for sufficiently large SNR values, or if path-loss attenuations are not considered, the receiver coded diversity always outperforms the cooperative coded diversity. The obtained results have important implications for the deployment of next-generation cellular systems supporting cooperative as well as receiver diversity.
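
    For reference, the baseline decoder that the thesis generalizes can be sketched as follows: plain order-1 OSD over a toy code, without the segmentation, partial ordering, or CRC aids proposed in the thesis. The (6,3) code, BPSK mapping, and noise level are assumptions for illustration.

        # A minimal sketch of order-1 ordered statistics decoding (OSD).
        # Assumptions: BPSK over AWGN (0 -> +1, 1 -> -1), a small binary code given
        # by its generator matrix G, and reliability = |received value|.
        import numpy as np

        def gauss_systematic(G, order):
            """Row-reduce G to be systematic on the first k independent positions
            taken in the given (most reliable first) column order."""
            G = G.copy() % 2
            k, n = G.shape
            basis, row = [], 0
            for col in order:
                piv = np.flatnonzero(G[row:, col]) + row
                if piv.size == 0:
                    continue
                G[[row, piv[0]]] = G[[piv[0], row]]
                for r in range(k):
                    if r != row and G[r, col]:
                        G[r] ^= G[row]
                basis.append(col)
                row += 1
                if row == k:
                    break
            return G, np.array(basis)          # basis = most reliable independent positions

        def osd1_decode(G, r):
            k, n = G.shape
            order = np.argsort(-np.abs(r))                 # most reliable first
            Gs, mrip = gauss_systematic(G, order)
            hard = (r < 0).astype(int)
            best, best_metric = None, np.inf
            # Order-1 reprocessing: flip at most one of the k most reliable bits.
            for err in range(k + 1):
                msg = hard[mrip].copy()
                if err < k:
                    msg[err] ^= 1
                cand = (msg @ Gs) % 2
                metric = np.sum(np.abs(r) * (cand != hard))  # correlation discrepancy
                if metric < best_metric:
                    best, best_metric = cand, metric
            return best

        # (6,3) code used purely as a toy example.
        G = np.array([[1, 0, 0, 1, 1, 0],
                      [0, 1, 0, 1, 0, 1],
                      [0, 0, 1, 0, 1, 1]])
        rng = np.random.default_rng(1)
        tx = 1 - 2 * ((np.array([1, 0, 1]) @ G) % 2)       # BPSK-modulated codeword
        rx = tx + rng.normal(0, 0.7, size=tx.shape)
        print(osd1_decode(G, rx))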

    On distributed coding, quantization of channel measurements and faster-than-Nyquist signaling

    This dissertation considers three different aspects of modern digital communication systems and is therefore divided into three parts. The first part is distributed coding. This part deals with source and source-channel code design for digital communication systems with many transmitters and one receiver, or with one transmitter and one receiver but with side information at the receiver that is not available at the transmitter. Such problems have been attracting attention lately, as they constitute a way of extending classical point-to-point communication theory to networks. In this first part of the dissertation, novel source and source-channel codes are designed by converting each of the considered distributed coding problems into an equivalent classical channel coding or classical source-channel coding problem. The proposed schemes come very close to the theoretical limits and thus are able to exhibit some of the gains predicted by network information theory. In the other two parts of the dissertation, classical point-to-point digital communication systems are considered. The second part is quantization of coded channel measurements at the receiver. Quantization is a way to limit the accuracy of continuous-valued measurements so that they can be processed in the digital domain. Depending on the desired type of processing of the quantized data, different quantizer design criteria should be used. In this second part of the dissertation, the quantized received values from the channel are processed by the receiver, which tries to recover the transmitted information. An exhaustive comparison of several quantization criteria for this case is carried out, providing illuminating insight into this quantizer design problem. The third part of the dissertation is faster-than-Nyquist signaling. The Nyquist rate in classical point-to-point bandwidth-limited digital communication systems is considered the maximum transmission or signaling rate and is equal to twice the bandwidth of the channel. In this last part of the dissertation, we question this Nyquist rate limitation by transmitting at higher signaling rates through the same bandwidth. By mitigating the interference incurred due to the faster-than-Nyquist rates, gains over Nyquist-rate systems are obtained.
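
    To make the faster-than-Nyquist part concrete, the sketch below generates noise-free samples of an FTN transmission: BPSK symbols are sent every tau*T with tau < 1 using a pulse that is orthogonal only at spacing T, so the symbol-spaced samples pick up intersymbol interference that the receiver must mitigate. The sinc pulse, tau = 0.8, and the short symbol block are assumptions for illustration.

        # A minimal sketch of faster-than-Nyquist signaling: signaling every tau*T
        # (tau < 1) with a T-orthogonal pulse introduces intersymbol interference.
        # Pulse shape, tau, and the symbol block are illustrative assumptions.
        import numpy as np

        T, tau, num_sym = 1.0, 0.8, 6            # tau < 1 => faster than Nyquist
        symbols = 1 - 2 * np.array([0, 1, 1, 0, 1, 0])   # BPSK

        def pulse(t):
            return np.sinc(t / T)                 # orthogonal at spacing T, not at tau*T

        # Noise-free signal samples at the accelerated instants k*tau*T: each sample
        # picks up non-zero contributions from neighbouring pulses, i.e. ISI.
        samples = []
        for k in range(num_sym):
            t_k = k * tau * T
            samples.append(sum(a * pulse(t_k - n * tau * T) for n, a in enumerate(symbols)))

        print(np.round(samples, 3))               # values deviate from +/-1: the ISI of FTN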

    On Non-Binary Constellations for Channel Encoded Physical Layer Network Coding

    This thesis investigates channel-coded physical-layer network coding, in which the relay directly transforms the noisy superimposed channel-coded packets received from the two end nodes into the network-coded combination of the source packets. This is in contrast to the traditional multiple-access problem, in which the goal is to obtain each message explicitly at the relay. Here, the end nodes $A$ and $B$ choose their symbols, $S_A$ and $S_B$, from a small non-binary field $\mathbb{F}$ and use a non-binary PSK constellation mapper during the transmission phase. The relay then directly decodes the network-coded combination $aS_A + bS_B$ over $\mathbb{F}$ from the noisy superimposed channel-coded packets received from the two end nodes. Trying to obtain $S_A$ and $S_B$ explicitly at the relay is overly ambitious when the relay only needs $aS_A + bS_B$. For the binary case, the only possible network-coded combination, $S_A + S_B$ over the binary field, does not offer the best performance under several channel conditions. The advantage of working over non-binary fields is that it offers the opportunity to decode according to multiple decoding coefficients $(a, b)$. As only one of the network-coded combinations needs to be successfully decoded, a key advantage is a reduction in error probability obtained by attempting to decode against all choices of decoding coefficients. In this thesis, we compare different constellation mappers and prove that not all of them have distinct performance in terms of frame error rate. Moreover, we derive a lower bound on the frame error rate of decoding the network-coded combinations at the relay. Simulation results show that if we adopt concatenated Reed-Solomon and convolutional coding, or low-density parity-check codes, at the two end nodes, our non-binary constellations can significantly outperform the binary case in terms of frame error rate; in particular, the ternary constellation has the best frame error rate performance among all considered cases.
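
    A stripped-down version of the relay operation for the ternary case, without the channel code, could look as follows: both end nodes map GF(3) symbols to 3-PSK, and the relay picks the most likely value of $aS_A + bS_B$ modulo 3 by marginalising over symbol pairs. The unit-gain channels, the AWGN at the relay, and the coefficient choice $(a, b) = (1, 1)$ are assumptions for illustration.

        # A minimal sketch of symbol-wise relay decoding of the network-coded
        # combination over GF(3), without the channel code. Unit-gain channels,
        # AWGN at the relay, and (a, b) = (1, 1) are assumptions.
        import numpy as np

        Q = 3
        psk = np.exp(2j * np.pi * np.arange(Q) / Q)       # ternary PSK constellation

        def relay_decode_combination(y, a=1, b=1, noise_var=0.1):
            """Return argmax_c P(a*S_A + b*S_B = c | y) for one received sample y."""
            scores = np.zeros(Q)
            for sa in range(Q):
                for sb in range(Q):
                    c = (a * sa + b * sb) % Q
                    # Likelihood of the superimposed constellation point psk[sa] + psk[sb].
                    scores[c] += np.exp(-abs(y - (psk[sa] + psk[sb]))**2 / noise_var)
            return int(np.argmax(scores))

        rng = np.random.default_rng(2)
        sA, sB = 2, 1                                      # symbols chosen by the end nodes
        y = psk[sA] + psk[sB] + (rng.normal(0, 0.1) + 1j * rng.normal(0, 0.1))
        print(relay_decode_combination(y), (sA + sB) % Q)  # should agree at this noise level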