
    Bit flipping decoding for binary product codes

    Error control coding has been used to mitigate the impact of noise on the wireless channel. Today, wireless communication systems include Forward Error Correction (FEC) techniques in their design to help reduce the amount of retransmitted data. When designing a coding scheme, three challenges must be addressed: the error-correcting capability of the code, the decoding complexity of the code, and the delay introduced by the coding scheme. While it is easy to design coding schemes with a large error-correcting capability, finding practical decoding algorithms for such schemes is a challenge. Generally, increasing the length of a block code increases both its error-correcting capability and its decoding complexity. Product codes have been identified as a means to increase the block length of simpler codes while keeping their decoding complexity low. Bit flipping decoding has been identified as a simple-to-implement decoding algorithm, although research has generally focused on improving bit flipping decoding for Low-Density Parity-Check (LDPC) codes. In this study we develop a new decoding algorithm for binary product codes, based on syndrome checking and bit flipping, to address the major challenge of coding systems: developing codes with a large error-correcting capability yet a low decoding complexity. Simulation results show that the proposed decoding algorithm outperforms the conventional decoding algorithm proposed by P. Elias in BER and, more significantly, in WER performance, while offering comparable complexity to the conventional algorithm on the Rayleigh fading channel.
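As an illustration of the syndrome-checking-plus-bit-flipping idea (a hypothetical minimal sketch, not the algorithm developed in the thesis), consider the simplest product code, whose row and column component codes are single parity checks; a bit is flipped only where a failing row syndrome meets a failing column syndrome:

```python
def bit_flip_decode(rows, max_iters=10):
    """Syndrome-guided bit flipping for a product code whose component
    codes are single parity checks: flip the bit at the intersection of
    a failing row check and a failing column check."""
    grid = [list(r) for r in rows]
    n, m = len(grid), len(grid[0])
    for _ in range(max_iters):
        row_syn = [sum(r) % 2 for r in grid]              # failing row checks
        col_syn = [sum(grid[i][j] for i in range(n)) % 2  # failing column checks
                   for j in range(m)]
        if not any(row_syn) and not any(col_syn):
            break                                         # all checks satisfied
        flipped = False
        for i in range(n):
            for j in range(m):
                if row_syn[i] and col_syn[j]:
                    grid[i][j] ^= 1                       # flip suspect bit
                    flipped = True
        if not flipped:
            break                                         # undecodable pattern
    return grid

# All-zero 4x4 single-parity-check product codeword hit by one bit error.
received = [[0] * 4 for _ in range(4)]
received[1][2] = 1
decoded = bit_flip_decode(received)
```

With a single bit error, exactly one row check and one column check fail, and their intersection pinpoints the flipped bit; denser error patterns may require several iterations or defeat this simple rule.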

    Applications of iterative decoding to magnetic recording channels.

    Magnetic recording channels (MRCs) are subject to noise contamination, and error-correcting codes (ECCs) are used to preserve the integrity of the data. Conventionally, hard decoding of the ECCs is performed. In this dissertation, systems using soft iterative decoding techniques are presented and their improved performance is established. Three coding schemes are investigated for magnetic recording systems. Firstly, block turbo codes, including product codes and parallel block turbo codes, are considered on MRCs. Product codes with other types of component codes are briefly discussed. Secondly, binary low-density parity-check (LDPC) codes are proposed for MRCs. Random binary LDPC codes, finite-geometry LDPC codes, and irregular LDPC codes are considered. With belief propagation decoding, LDPC systems are shown to have superior performance over current Reed-Solomon (RS) systems in the range accessible to computer simulation. The issue of RS-LDPC concatenation is also addressed. Finally, Q-ary LDPC (Q-LDPC) codes are considered for MRCs. Belief propagation decoding for binary LDPC codes is extended to Q-LDPC codes, and a reduced-complexity decoding algorithm for Q-LDPC codes is developed. Q-LDPC coded systems perform very well with random noise as well as with burst erasures. Simulations show that Q-LDPC systems outperform RS systems.
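On the binary erasure channel, belief propagation for a binary LDPC code reduces to the well-known peeling decoder; the sketch below uses a made-up parity-check set, not the codes studied in the dissertation:

```python
def peel_decode(checks, received):
    """Peeling (erasure) decoder: repeatedly find a parity check with
    exactly one erased bit and solve for it via the check's XOR.
    `checks` lists the variable indices of each check; erasures are None."""
    bits = list(received)
    progress = True
    while progress:
        progress = False
        for check in checks:
            erased = [v for v in check if bits[v] is None]
            if len(erased) == 1:
                xor = 0
                for v in check:
                    if bits[v] is not None:
                        xor ^= bits[v]
                bits[erased[0]] = xor     # the check must sum to zero
                progress = True
    return bits

# A made-up 3-check binary code; the all-zero codeword with two erasures.
checks = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 2, 3, 6]]
word = [0, None, 0, 0, None, 0, 0]
recovered = peel_decode(checks, word)
```

Extending this message-passing view to Q-ary symbols is what drives the complexity growth that the dissertation's reduced-complexity Q-LDPC decoder targets.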

    Improving the Bandwidth Efficiency of Multiple Access Channels using Network Coding and Successive Decoding

    In this thesis, different approaches for improving the bandwidth efficiency of Multiple Access Channels (MAC) have been proposed. Such improvements can be achieved with methods that use network coding, or with methods that implement successive decoding. Both methods are discussed here. Under the first method, two novel schemes for using network coding in cooperative networks are proposed. In the first scheme, network coding generates some redundancy in addition to the redundancy generated by the channel code; these redundancies are used in an iterative decoding system at the destination. In the second scheme, the output of the channel encoder in each source node is shortened and transmitted. The relay, by use of the network code, sends a compressed version of the parts missing from the original transmission. This facilitates the decoding procedure at the destination. Simulation-based optimizations have been developed. The results indicate that in the case of sources with non-identical power levels, both scenarios outperform the non-relay case. The second method involves a scheme to increase the channel capacity of an existing channel. This increase is made possible by the introduction of a new Raptor-coded interfering channel to an existing channel. Through successive decoding at the destination, the data of both main and interfering sources is decoded. We will demonstrate that when some power difference exists, there is a tradeoff between achieved rate and power efficiency. We will also find the optimum power allocation scenario for this tradeoff. Ultimately we propose a power adaptation scheme that allocates the optimal power to the interfering channel based on an estimation of the main channel's condition. Finally, we generalize our work to allow the possibility of decoding either the secondary source data or the main source data first. We will investigate the performance and delay for each decoding scheme.
Since the channels are non-orthogonal, it is possible that for some power allocation scenarios, constellation points get erased. To address this problem we use constellation rotation. The constellation map of the secondary source is rotated to increase the average distance between the points in the composite constellation (resulting from the superposition of the main and interfering sources' constellations). We will also determine the optimum constellation rotation angle for the interfering source analytically and confirm it with simulations.
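A toy model of the rotation idea (illustrative constellations and power values only, not the thesis's analytical setup): superpose two QPSK sources and measure the minimum distance of the resulting composite constellation as a function of the interferer's rotation angle:

```python
import cmath
import itertools
import math

def min_superposed_distance(theta, p_main=1.0, p_int=0.25):
    """Minimum pairwise distance of the composite constellation formed by
    a QPSK main source plus a QPSK interferer rotated by `theta` radians
    (powers p_main and p_int are illustrative values)."""
    qpsk = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]
    rot = cmath.exp(1j * theta)
    pts = [math.sqrt(p_main) * a + math.sqrt(p_int) * rot * b
           for a in qpsk for b in qpsk]
    return min(abs(x - y) for x, y in itertools.combinations(pts, 2))

# Equal powers and no rotation: superposed points collide (distance ~0),
# i.e., constellation points are "erased".
collapsed = min_superposed_distance(0.0, p_main=1.0, p_int=1.0)
# A modest rotation separates the superposed points again.
separated = min_superposed_distance(math.pi / 8, p_main=1.0, p_int=1.0)
```

Sweeping `theta` and picking the maximizer of this distance is a numerical analogue of the optimal-rotation-angle search described in the abstract.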

    Spatially Coupled Turbo-Like Codes

    The focus of this thesis is on proposing and analyzing a powerful class of codes on graphs---with trellis constraints---that can simultaneously approach capacity and achieve a very low error floor. In particular, we propose the concept of spatial coupling for turbo-like code (SC-TC) ensembles and investigate the impact of coupling on the performance of these codes. The main elements of this study can be summarized by the following four major topics. First, we considered the spatial coupling of parallel concatenated codes (PCCs), serially concatenated codes (SCCs), and hybrid concatenated codes (HCCs). We also proposed two extensions of braided convolutional codes (BCCs) to higher coupling memories. Second, we investigated the impact of coupling on the asymptotic behavior of the proposed ensembles in terms of decoding thresholds. For that, we derived the exact density evolution (DE) equations of the proposed SC-TC ensembles over the binary erasure channel. Using the DE equations, we found the thresholds of the coupled and uncoupled ensembles under belief propagation (BP) decoding for a wide range of rates. We also computed the maximum a-posteriori (MAP) thresholds of the underlying uncoupled ensembles. Our numerical results confirm that TCs have excellent MAP thresholds, and for a large enough coupling memory, the BP threshold of an SC-TC ensemble improves to the MAP threshold of the underlying TC ensemble. This phenomenon is called threshold saturation, and we proved its occurrence for SC-TCs using a proof technique based on the potential function of the ensembles. Third, we investigated and discussed the performance of SC-TCs in the finite-length regime. We proved that under certain conditions the minimum distance of an SC-TC ensemble is larger than or equal to that of its underlying uncoupled ensemble.
Based on this fact, we performed a weight enumerator (WE) analysis for the underlying uncoupled ensembles to investigate the error floor performance of the SC-TC ensembles. We computed bounds on the error rate performance and minimum distance of the TC ensembles. These bounds indicate a very low error floor for SCC, HCC, and BCC ensembles, and show that for HCC and BCC ensembles the minimum distance grows linearly with the input block length. The results from the DE and WE analyses demonstrate that the performance of TCs benefits from spatial coupling in both the waterfall and error floor regions. While uncoupled TC ensembles with close-to-capacity performance exhibit a high error floor, our results show that SC-TCs can simultaneously approach capacity and achieve a very low error floor. Fourth, we proposed a unified ensemble of TCs that includes all the considered TC classes. We showed that for each of the original classes of TCs, it is possible to find an equivalent ensemble by proper selection of the design parameters in the unified ensemble. This unified ensemble not only helps us understand the connections and trade-offs between the TC ensembles but can also be considered a bridge between TCs and generalized low-density parity-check codes.
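For intuition on density evolution over the BEC, the sketch below applies the standard recursion for an uncoupled regular LDPC ensemble (a textbook example, not the SC-TC equations derived in the thesis) and locates the BP threshold by bisection on the channel erasure rate:

```python
def bec_bp_threshold(dv, dc, tol=1e-4, max_iters=2000):
    """BP decoding threshold of a regular (dv, dc) LDPC ensemble on the
    binary erasure channel, found by bisection on the erasure rate eps.
    Density evolution: x -> eps * (1 - (1 - x)**(dc - 1))**(dv - 1)."""
    def converges(eps):
        x = eps
        for _ in range(max_iters):
            x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
            if x < 1e-12:
                return True            # erasure fraction died out
        return False                   # stuck at a nonzero fixed point
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return lo

eps_bp = bec_bp_threshold(3, 6)        # known value is about 0.4294
```

Threshold saturation means that coupling lifts such a BP threshold toward the ensemble's MAP threshold (about 0.488 for the (3,6) ensemble); the thesis proves the analogous effect for SC-TCs.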

    Soft-Decoding-Based Strategies for Relay and Interference Channels: Analysis and Achievable Rates Using LDPC Codes

    We provide a rigorous mathematical analysis of two communication strategies: soft decode-and-forward (soft-DF) for relay channels, and soft partial interference cancellation (soft-IC) for interference channels. Both strategies involve soft estimation, which assists the decoding process. We consider LDPC codes, not because of their practical benefits, but because of their analytic tractability, which enables an asymptotic analysis similar to the random coding methods of information theory. Unlike some works on the closely related demodulate-and-forward, we assume non-memoryless, code-structure-aware estimation. With soft-DF, we develop simultaneous density evolution to bound the decoding error probability at the destination. This result applies to erasure relay channels. In one variant of soft-DF, the relay applies Wyner-Ziv coding to enhance its communication with the destination, borrowing from compress-and-forward. To analyze soft-IC, we adapt existing techniques for iterative multiuser detection, and focus on binary-input additive white Gaussian noise (BIAWGN) interference channels. We prove that optimal point-to-point codes are unsuitable for soft-IC, as well as for all strategies that apply partial decoding to improve upon single-user detection (SUD) and multiuser detection (MUD), including Han-Kobayashi (HK). Comment: Accepted to the IEEE Transactions on Information Theory. This is a major revision of a paper originally submitted in August 201

    Hybrid ARQ with parallel and serial concatenated convolutional codes for next generation wireless communications

    This research focuses on evaluating currently used FEC encoding-decoding schemes and improving the performance of error control systems by incorporating these schemes in a hybrid FEC-ARQ environment. Beginning with an overview of wireless communications and the various ARQ protocols, the thesis provides an in-depth explanation of convolutional encoding and Viterbi decoding, and of turbo (PCCC) and serial concatenated convolutional (SCCC) encoding with their respective MAP decoding strategies. A type-II hybrid ARQ scheme with SCCCs is proposed for the first time and is a major contribution of this thesis. A vast improvement is seen in the BER performance of the successive individual FEC schemes discussed above. Also, very high throughputs can be achieved when these schemes are incorporated in an adaptive type-II hybrid ARQ system. Finally, the thesis discusses the equivalence of PCCCs and SCCCs and proposes a technique to generate a hybrid code using both schemes.
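The type-II hybrid ARQ control flow (incremental redundancy on NACK instead of full retransmission) can be sketched with a deliberately toy decoding model, in which decoding succeeds once accumulated redundancy reaches a "noise level" threshold; a real system would run a PCCC/SCCC decoder on the combined transmissions instead:

```python
def type_ii_harq(noise_level, increments, base_strength=1):
    """Toy type-II hybrid ARQ: each NACK triggers one extra parity
    increment rather than a full retransmission, so decoding 'strength'
    accumulates at the receiver.  Returns the number of transmissions
    used, or None if decoding still fails after all increments."""
    strength = base_strength                  # redundancy after first send
    for sent in range(1, len(increments) + 2):
        if strength >= noise_level:           # combined code strong enough
            return sent
        if sent <= len(increments):
            strength += increments[sent - 1]  # send next parity increment
    return None

clean = type_ii_harq(noise_level=1, increments=[1, 1, 1])     # ACK first try
noisy = type_ii_harq(noise_level=3, increments=[1, 1, 1])     # two NACKs
hopeless = type_ii_harq(noise_level=9, increments=[1, 1, 1])  # gives up
```

Because only the missing redundancy is retransmitted, throughput adapts to the channel, which is the mechanism behind the high throughputs reported for the adaptive scheme.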

    From Polar to Reed-Muller Codes: Unified Scaling, Non-standard Channels, and a Proven Conjecture

    The year 2016, in which I am writing these words, marks the centenary of Claude Shannon, the father of information theory. In his landmark 1948 paper "A Mathematical Theory of Communication", Shannon established the largest rate at which reliable communication is possible, and he referred to it as the channel capacity. Since then, researchers have focused on the design of practical coding schemes that could approach such a limit. The road to channel capacity has been almost 70 years long and, after many ideas, occasional detours, and some rediscoveries, it has culminated in the description of low-complexity and provably capacity-achieving coding schemes, namely, polar codes and iterative codes based on sparse graphs. However, next-generation communication systems require an unprecedented performance improvement and the number of transmission settings relevant in applications is rapidly increasing. Hence, although Shannon's limit seems finally close at hand, new challenges are just around the corner. In this thesis, we trace a road that goes from polar to Reed-Muller codes and, by doing so, we investigate three main topics: unified scaling, non-standard channels, and capacity via symmetry. First, we consider unified scaling. A coding scheme is capacity-achieving when, for any rate smaller than capacity, the error probability tends to 0 as the block length becomes increasingly larger. However, the practitioner is often interested in more specific questions such as, "How much do we need to increase the block length in order to halve the gap between rate and capacity?". We focus our analysis on polar codes and develop a unified framework to rigorously analyze the scaling of the main parameters, i.e., block length, rate, error probability, and channel quality. Furthermore, in light of the recent success of a list decoding algorithm for polar codes, we provide scaling results on the performance of list decoders. Next, we deal with non-standard channels. 
When we say that a coding scheme achieves capacity, we typically consider binary memoryless symmetric channels. However, practical transmission scenarios often involve more complicated settings. For example, the downlink of a cellular system is modeled as a broadcast channel, and the communication on fiber links is inherently asymmetric. We propose provably optimal low-complexity solutions for these settings. In particular, we present a polar coding scheme that achieves the best known rate region for the broadcast channel, and we describe three paradigms to achieve the capacity of asymmetric channels. To do so, we develop general coding "primitives", such as the chaining construction that has already proved to be useful in a variety of communication problems. Finally, we show how to achieve capacity via symmetry. In the early days of coding theory, a popular paradigm consisted in exploiting the structure of algebraic codes to devise practical decoding algorithms. However, proving the optimality of such coding schemes remained an elusive goal. In particular, the conjecture that Reed-Muller codes achieve capacity dates back to the 1960s. We solve this open problem by showing that Reed-Muller codes and, in general, codes with sufficient symmetry are capacity-achieving over erasure channels under optimal MAP decoding. As the proof does not rely on the precise structure of the codes, we are able to show that symmetry alone guarantees optimal performance.
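On the erasure channel, the polarization phenomenon behind polar codes reduces to a one-line recursion on erasure rates (a standard identity, shown here purely as illustration):

```python
def polarized_erasure_rates(eps, levels):
    """BEC channel polarization: each step maps a channel with erasure
    rate e to a degraded child (2*e - e*e) and an upgraded child (e*e).
    After `levels` steps, the 2**levels synthetic channels polarize
    toward erasure rates 0 or 1."""
    rates = [eps]
    for _ in range(levels):
        rates = [r for e in rates for r in (2 * e - e * e, e * e)]
    return rates

rates = polarized_erasure_rates(0.5, 10)   # 1024 synthetic channels
```

Information bits are sent on the synthetic channels whose erasure rate has polarized toward 0; the fraction of such channels approaches the capacity 1 - eps as the number of levels grows, while the average erasure rate is conserved at every step.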

    Cellular, Wide-Area, and Non-Terrestrial IoT: A Survey on 5G Advances and the Road Towards 6G

    The next wave of wireless technologies is proliferating in connecting things among themselves as well as to humans. In the era of the Internet of Things (IoT), billions of sensors, machines, vehicles, drones, and robots will be connected, making the world around us smarter. The IoT will encompass devices that must wirelessly communicate a diverse set of data gathered from the environment for myriad new applications. The ultimate goal is to extract insights from this data and develop solutions that improve quality of life and generate new revenue. Providing large-scale, long-lasting, reliable, and near real-time connectivity is the major challenge in enabling a smart connected world. This paper provides a comprehensive survey of existing and emerging communication solutions for serving IoT applications in the context of cellular, wide-area, and non-terrestrial networks. Specifically, wireless technology enhancements for providing IoT access in fifth-generation (5G) and beyond cellular networks, and communication networks over the unlicensed spectrum, are presented. Aligned with the main key performance indicators of 5G and beyond-5G networks, we investigate solutions and standards that enable energy efficiency, reliability, low latency, and scalability (connection density) of current and future IoT networks. The solutions include grant-free access and channel coding for short-packet communications, non-orthogonal multiple access, and on-device intelligence. Further, a vision of new paradigm shifts in communication networks in the 2030s is provided, and the integration of the associated new technologies like artificial intelligence, non-terrestrial networks, and new spectra is elaborated. Finally, future research directions toward beyond-5G IoT networks are pointed out. Comment: Submitted for review to IEEE CS&