
    CRC-Aided Belief Propagation List Decoding of Polar Codes

    Although iterative decoding of polar codes has recently made huge progress based on the idea of permuted factor graphs, it still suffers from a non-negligible performance degradation when compared to state-of-the-art CRC-aided successive cancellation list (CA-SCL) decoding. In this work, we show that iterative decoding of polar codes based on the belief propagation list (BPL) algorithm can approach the error-rate performance of CA-SCL decoding and, thus, can be efficiently used for decoding the standardized 5G polar codes. Rather than only utilizing the cyclic redundancy check (CRC) as a stopping condition (i.e., for error detection), we also aim to benefit from the error-correction capability of the outer CRC code. For this, we develop two distinct soft-decision CRC decoding algorithms: a Bahl-Cocke-Jelinek-Raviv (BCJR)-based approach and a sum-product algorithm (SPA)-based approach. Further, an optimized selection of permuted factor graphs is analyzed and shown to reduce the decoding complexity significantly. Finally, we benchmark the proposed CRC-aided belief propagation list (CA-BPL) decoder against state-of-the-art 5G polar codes under CA-SCL decoding and, thereby, showcase an error-rate performance not just close to that of CA-SCL but also close to the maximum likelihood (ML) bound as estimated by ordered statistic decoding (OSD). Comment: Submitted to IEEE for possible publication.
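
    The SPA-based idea above treats the outer CRC as a set of extra parity checks attached to the polar factor graph. The sketch below (Python; illustrative only, with an arbitrary generator polynomial and code length rather than the 5G CRC) builds the r x n parity-check matrix of a CRC code from its generator polynomial: column j holds the coefficients of x^j mod g(x), so these rows could be added as additional check nodes for sum-product decoding.

        import numpy as np

        def crc_parity_check_matrix(g, n):
            """Binary parity-check matrix of a length-n CRC code.

            g: generator polynomial coefficients, lowest degree first,
               e.g. [1, 1, 0, 1] for g(x) = 1 + x + x^3 (an arbitrary example).
            Column j holds the coefficients of x^j mod g(x), so a word c
            satisfies the CRC iff (H @ c) % 2 == 0.
            """
            r = len(g) - 1                       # CRC degree
            g_low = np.array(g[:r], dtype=np.uint8)
            H = np.zeros((r, n), dtype=np.uint8)
            col = np.zeros(r, dtype=np.uint8)
            col[0] = 1                           # x^0 mod g(x) = 1
            for j in range(n):
                H[:, j] = col
                carry = col[r - 1]               # coefficient pushed up to degree r
                col = np.roll(col, 1)            # multiply by x
                col[0] = 0
                if carry:
                    col ^= g_low                 # reduce modulo g(x) over GF(2)
            return H

        # Example: 8 information bits protected by the 3-bit CRC above (n = 11).
        H_crc = crc_parity_check_matrix([1, 1, 0, 1], n=11)

    In a CA-BPL-style decoder these checks would exchange soft messages with the polar graph instead of serving only as a hard stopping rule; the BCJR-based alternative mentioned in the abstract would instead operate on a trellis representation of the CRC.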

    Advances in Modeling and Signal Processing for Bit-Patterned Magnetic Recording Channels with Written-In Errors

    In the past, perpendicular magnetic recording on continuous media has served as the storage mechanism for the hard-disk drive (HDD) industry, allowing areal densities approaching 0.5 Tb/in2. Under the current system design, further increases are limited by the superparamagnetic effect, where the medium's thermal energy destabilizes the individual bit domains used for storage. To provide for future growth in magnetic recording for disk drives, a number of technology shifts have been proposed and are currently undergoing considerable research. One promising option involves switching to a discrete medium in the form of individual bit islands, termed bit-patterned magnetic recording (BPMR).

    When switching from a continuous to a discrete medium, the problems encountered become substantial for every aspect of hard-disk drive design. In this dissertation, the complications in modeling and signal processing for bit-patterned magnetic recording are investigated, where the write and read processes along with the channel characteristics present considerable challenges. For a target areal density of 4 Tb/in2, the storage process is hindered by media noise, two-dimensional (2D) intersymbol interference (ISI), electronics noise, and written-in errors introduced during the write process. Thus there is a strong possibility that BPMR may prove intractable as a future HDD technology at high areal densities, because the combined negative effects of the many error sources produce an environment where current signal processing techniques cannot accurately recover the stored data. The purpose here is to exploit advanced methods of detection and error correction to show that data can be effectively recovered from a BPMR channel in the presence of multiple error sources at high areal densities.

    First, a practical model for the readback response of an individual island is established that is capable of representing its 2D nature with a Gaussian pulse. Various characteristics of the readback pulse are shown to emerge as it is subjected to the degradation of 2D media noise. The writing of the bits within a track is also investigated, with an emphasis on the write process's ability to inject written-in errors into the data stream, resulting both from a loss of synchronization of the write clock and from the interaction of the local-scale magnetic fields under the influence of the applied write field.

    To facilitate data recovery in the presence of BPMR's major degradations, various detection and error-correction methods are utilized. For single-track equalization of the channel output, noise prediction is incorporated to assist detection with increased levels of media noise. With large detrimental amounts of 2D ISI and media noise present in the channel at high areal densities, a 2D approach known as multi-track detection is investigated, where multiple tracks are sensed by the read heads and then used to extract information on the target track. For BPMR, the output of the detector still possesses the uncorrected written-in errors. Powerful error-correction codes based on finite geometries are employed to help recover the original data stream. Increased error correction is sought by utilizing two-fold EG codes in combination with a form of automorphism decoding known as auto-diversity. Modifications to the parity-check matrices of the error-correction codes are also investigated with the aim of enabling more practical applications of decoding algorithms based on belief propagation. Under the proposed techniques, it is shown that effective data recovery is possible at an areal density of 4 Tb/in2 in the presence of all significant error sources except for insertions and deletions. Data recovery from the BPMR channel with insertions and deletions remains an open problem.
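
    As a concrete illustration of the island readback model described above, the following sketch (Python; the pulse widths, grid spacing, and noise level are arbitrary illustrative values, not the dissertation's fitted parameters, and media noise and written-in errors are omitted for brevity) forms a 2D Gaussian pulse and convolves it with a bit-island pattern to produce a noisy readback signal with 2D ISI.

        import numpy as np
        from scipy.signal import convolve2d

        def gaussian_readback_pulse(span=2, pw_down=1.0, pw_cross=1.2, step=0.5):
            """2D Gaussian approximation of a single island's readback response.
            pw_down / pw_cross control the down-track / cross-track pulse widths."""
            t = np.arange(-span, span + step, step)
            down, cross = np.meshgrid(t, t, indexing="ij")
            return np.exp(-0.5 * ((down / pw_down) ** 2 + (cross / pw_cross) ** 2))

        def noisy_readback(bits, pulse, snr_db=15.0):
            """Convolve a +/-1 island pattern with the pulse and add electronics noise."""
            signal = convolve2d(2.0 * bits - 1.0, pulse, mode="same")
            noise_std = np.sqrt(np.mean(signal ** 2) / 10 ** (snr_db / 10))
            return signal + noise_std * np.random.randn(*signal.shape)

        bits = np.random.randint(0, 2, size=(16, 16))      # one block of bit islands
        readback = noisy_readback(bits, gaussian_readback_pulse())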

    Energy-Efficient Decoders of Near-Capacity Channel Codes.

    Channel coding has become essential in state-of-the-art communication and storage systems for ensuring reliable transmission and storage of information. The goal is to achieve high transmission reliability while keeping the transmit energy consumption low by taking advantage of the coding gain these codes provide. The lowest total system energy is achieved with a decoder that offers both good coding gain and high energy efficiency. This thesis demonstrates the VLSI implementation of near-capacity channel decoders using LDPC, nonbinary LDPC (NB-LDPC), and polar codes, with an emphasis on reducing the decoding energy. The LDPC code is a widely used channel code due to its excellent error-correcting performance. However, memory dominates the power of high-throughput LDPC decoders. Therefore, these memories are replaced with a novel non-refresh embedded DRAM (eDRAM) that takes advantage of the deterministic memory access pattern and short access window of the decoding algorithm to trade off retention time for faster access speed. The resulting LDPC decoder with integrated eDRAMs achieves state-of-the-art area and energy efficiency. The NB-LDPC code achieves better error-correcting performance than the LDPC code at the cost of higher decoding complexity. However, its factor graph is simplified, permitting a fully parallel architecture with low wiring overhead. To reduce the dynamic power of the decoder, a fine-grained dynamic clock gating technique is applied based on node-level convergence. This technique greatly reduces dynamic power, allowing the decoder to achieve high energy efficiency at high throughput. The recently invented polar code has an error-correcting performance similar to that of an LDPC code of comparable block length. However, the easy reconfigurability of code rate as well as block length makes it desirable in numerous applications where LDPC is not competitive. In addition, the regular structure and simple processing enable a highly efficient decoder in terms of area and power. Using the belief propagation algorithm with architectural and memory improvements, a polar decoder is demonstrated that achieves high throughput and high energy and area efficiency. The demonstrated energy-efficient decoders have advanced the state of the art and will allow the continued reduction of decoding energy for the latest communication and storage applications. The developed techniques are widely applicable to designing low-power DSP processors.
    PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/108731/1/parkyoun_1.pd
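
    The node-level convergence idea behind the clock-gating scheme can be illustrated in software, as in the hypothetical sketch below (Python; a binary-LDPC toy analogue in which a variable node is frozen once all of its parity checks are satisfied, which is only one plausible convergence test and is not taken from the thesis).

        import numpy as np

        def gating_mask(H, hard_bits):
            """Return a boolean mask of variable nodes that may be 'clock gated'
            this iteration: nodes all of whose parity checks are currently satisfied.
            H is the binary parity-check matrix, hard_bits the current hard decisions."""
            syndrome = (H @ hard_bits) % 2               # 1 marks an unsatisfied check
            unsatisfied = syndrome.astype(bool)
            touches_bad_check = H[unsatisfied, :].sum(axis=0) > 0
            return ~touches_bad_check                    # True = candidate for gating

        # Toy example: a tiny parity-check matrix and an all-zero (valid) codeword.
        H = np.array([[1, 1, 0, 1, 0, 0],
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
        print(gating_mask(H, np.zeros(6, dtype=np.uint8)))   # all True: fully converged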

    A White Paper on Broadband Connectivity in 6G

    Executive Summary: This white paper explores the road to implementing broadband connectivity in future 6G wireless systems. Different categories of use cases are considered, from extreme capacity with peak data rates up to 1 Tbps, to raising the typical data rates by orders of magnitude, to supporting broadband connectivity at railway speeds up to 1000 km/h. To achieve these goals, not only will the terrestrial networks be evolved, but they will also be integrated with satellite networks, all facilitating autonomous systems and various interconnected structures. We believe that several categories of enablers at the infrastructure, spectrum, and protocol/algorithmic levels are required to realize the intended broadband connectivity goals in 6G. At the infrastructure level, we consider ultra-massive MIMO technology (possibly implemented using holographic radio), intelligent reflecting surfaces, user-centric and scalable cell-free networking, integrated access and backhaul, and integrated space and terrestrial networks. At the spectrum level, the network must seamlessly utilize sub-6 GHz bands for coverage and spatial multiplexing of many devices, while higher bands will be used to push the peak rates of point-to-point links. The latter path will lead to THz communications, complemented by visible light communications in specific scenarios. At the protocol/algorithmic level, the enablers include improved coding, modulation, and waveforms to achieve lower latencies, higher reliability, and reduced complexity. Different options will be needed to optimally support different use cases. The resource efficiency can be further improved by using various combinations of full-duplex radios, interference management based on rate-splitting, machine-learning-based optimization, coded caching, and broadcasting. Finally, the three levels of enablers must be utilized not only to deliver better broadband services in urban areas, but also to provide full-coverage broadband connectivity, which must be one of the key outcomes of 6G.

    Low-Density Parity-Check Coded High-order Modulation Schemes

    In this thesis, we investigate how to support reliable data transmission at high speeds in future communication systems, such as 5G/6G, WiFi, satellite, and optical communications. One of the most fundamental problems in these communication systems is how to reliably transmit information with a limited amount of resources, such as power and spectrum. To obtain high spectral efficiency, we use coded modulation (CM), such as bit-interleaved coded modulation (BICM) and delayed BICM (DBICM). To be specific, BICM is a pragmatic implementation of CM that has been widely adopted in both industry and academia. While BICM approaches CM capacity at high rates, the capacity gap between BICM and CM is still noticeable at lower code rates. To tackle this problem, DBICM, as a variation of BICM, introduces a delay module to create a dependency between multiple codewords, which enables us to exploit extrinsic information from the decoded delayed sub-blocks to improve the detection of the undelayed sub-blocks. Recent work shows that DBICM improves capacity over BICM. In addition, BICM and DBICM schemes protect each bit-channel differently, which is often referred to as the unequal error protection (UEP) property. Therefore, bit mapping designs are important for constructing pragmatic BICM and DBICM. To provide reliable communication, we jointly design the bit mappings of DBICM and irregular low-density parity-check (LDPC) codes. For practical considerations, spatially coupled LDPC (SC-LDPC) codes are considered as well. Specifically, we investigate the joint design of multi-chain SC-LDPC codes and the BICM bit mapper. In addition, the design of SC-LDPC codes with improved decoding threshold performance and reduced rate loss is investigated in this thesis as well. The main body of this thesis consists of three parts. In the first part, considering Gray-labeled square M-ary quadrature amplitude modulation (QAM) constellations, we investigate the optimal delay scheme with the largest spectral efficiency of DBICM for a fixed maximum number of delayed time slots and a given signal-to-noise ratio. Furthermore, we jointly optimize the degree distributions and channel assignments of LDPC codes using protograph-based extrinsic information transfer charts. In addition, we propose a constrained progressive-edge-growth-like algorithm to jointly construct LDPC codes and bit mappings for DBICM, taking the capacity of each bit-channel into account. Simulation results demonstrate that the designed LDPC-coded DBICM systems significantly outperform LDPC-coded BICM systems. In the second part, we propose a windowed decoding algorithm for DBICM, which uses the extrinsic information of both the decoded delayed and undelayed sub-blocks to improve the detection of all sub-blocks. We show that the proposed windowed decoding significantly outperforms the original decoding, demonstrating the effectiveness of the proposed decoding algorithm. In the third part, we apply multi-chain SC-LDPC codes to BICM. We investigate various connections for multi-chain SC-LDPC codes and bit mapping designs and analyze the performance of the multi-chain SC-LDPC codes over the equivalent binary erasure channels via density evolution. Numerical results demonstrate the superiority of the proposed design over existing connected-chain ensembles and over single-chain ensembles with the existing bit mapping design.
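
    The unequal error protection mentioned above stems from the Gray-labeled bit-to-symbol mapping: different bit positions of a QAM label see different effective channels. The sketch below (Python; a generic max-log BICM demapper for Gray 16-QAM, not the thesis's DBICM design, and the labeling shown is just one valid Gray map) computes per-bit LLRs, whose differing magnitudes across bit positions are what the bit-mapping optimization exploits.

        import numpy as np
        from itertools import product

        def gray_16qam():
            """Gray-labeled 16-QAM: two label bits per I/Q axis (4-PAM Gray map)."""
            pam = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
            labels = list(product((0, 1), repeat=4))
            symbols = np.array([pam[b[0], b[1]] + 1j * pam[b[2], b[3]] for b in labels])
            return symbols, np.array(labels)

        def maxlog_bit_llrs(y, noise_var):
            """Max-log LLRs for one received sample y (positive means bit 0 more likely).
            noise_var is the total variance of the complex Gaussian noise."""
            symbols, labels = gray_16qam()
            d2 = np.abs(y - symbols) ** 2
            return np.array([(d2[labels[:, k] == 1].min() - d2[labels[:, k] == 0].min())
                             / noise_var for k in range(4)])

        print(maxlog_bit_llrs(2.9 - 1.1j, noise_var=0.5))   # bit positions differ in reliability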

    Polar-Coded OFDM with Index Modulation

    Polar codes, the first error-correcting codes with an explicit construction, based on channel polarization, that provably achieves the symmetric capacity of memoryless channels, have recently become a primary contender in communication networks for achieving tighter requirements with relatively low complexity. As one of the contributions of this thesis, three modified polar decoding schemes are proposed. These schemes are enhanced versions of successive cancellation flip (SC-F), belief propagation (BP), and sphere decoding (SD). The proposed SC-F utilizes novel criteria for selecting potentially incorrect bits, together with a stack, to improve its error-correction performance. Next, to improve the decoding performance of BP, a permutation and feedback structure is utilized. Then, in order to reduce complexity without compromising performance, an SD scheme is proposed that uses novel decoding strategies based on a modified path metric (PM) and radius extension. Additionally, to address the redundant iterations of BP, a new stopping criterion based on the bit different ratio (BDR) is proposed. According to the simulation results and mathematical proofs, all proposed schemes achieve the corresponding performance improvements or complexity reductions compared with existing works. Besides applying polar coding, to achieve reliable and flexible transmission in a wireless communication system, a modified version of orthogonal frequency division multiplexing (OFDM) based on index modulation, called OFDM-in-phase/quadrature index modulation (OFDM-I/Q-IM), is applied. This modulation scheme can simultaneously improve spectral efficiency and bit-error rate (BER) performance with great flexibility in design and implementation. Hence, OFDM-I/Q-IM is considered a potential candidate for the new generation of cellular networks. As the main contribution of this work, a polar-coded OFDM-I/Q-IM system is proposed. General design guidelines for overcoming the difficulties associated with the application of polar codes in OFDM-I/Q-IM are presented. In the proposed system, at the transmitter, we employ a random frozen-bit appending scheme which not only makes the polar code compatible with OFDM-I/Q-IM but also improves the BER performance of the system. Furthermore, at the receiver, it is shown that the a posteriori information for each index provided by the index detector is essential for the iterative decoding of polar codes by the BP algorithm. Simulation results show that the proposed polar-coded OFDM-I/Q-IM system outperforms its OFDM counterpart in terms of BER performance.
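
    To make the index-modulation part concrete, the following sketch (Python; a generic subcarrier index-modulation mapper with BPSK on the active tones and an arbitrary subblock size, not the exact OFDM-I/Q-IM configuration of the thesis, which applies the idea separately to the in-phase and quadrature components) shows how index bits select which subcarriers of a subblock are active while data bits modulate them.

        import numpy as np
        from itertools import combinations
        from math import comb, floor, log2

        def im_subblock(index_bits, data_bits, n=4, k=2):
            """Map index bits + data bits onto one subblock of n subcarriers,
            of which k are active. Index bits pick the activation pattern,
            data bits are BPSK-modulated onto the active subcarriers."""
            p = 2 ** floor(log2(comb(n, k)))            # number of usable patterns
            patterns = list(combinations(range(n), k))[:p]
            idx = int("".join(map(str, index_bits)), 2)
            x = np.zeros(n, dtype=complex)
            for pos, b in zip(patterns[idx], data_bits):
                x[pos] = 1.0 - 2.0 * b                  # BPSK: 0 -> +1, 1 -> -1
            return x

        # n = 4, k = 2: 2 index bits select one of 4 patterns, 2 data bits ride on them.
        print(im_subblock(index_bits=[1, 0], data_bits=[0, 1]))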

    Polar codes for distributed source coding

    Ankara: The Department of Electrical and Electronics Engineering and The Graduate School of Engineering and Science of Bilkent University, 2014. Thesis (Ph.D.), Bilkent University, 2014. Includes bibliographical references (leaves 164-170).
    Polar codes were invented by Arıkan as the first “capacity achieving” codes for binary-input discrete memoryless symmetric channels with low encoding and decoding complexity. The “polarization phenomenon”, which is the underlying principle of polar codes, can be applied to different source and channel coding problems in both single-user and multi-user settings. In this work, polar coding methods for multi-user distributed source coding problems are investigated. First, a restricted version of the lossless distributed source coding problem, also referred to as the Slepian-Wolf problem, is considered. The restriction is on the distribution of the correlated sources. It is shown that if the sources are “binary symmetric” then single-user polar codes can be used to achieve the full capacity region without time sharing. Then, a two-user polar coding method is considered that solves the Slepian-Wolf problem with arbitrary source distributions. This method is also extended to cover the multiple-access channel problem, which is the dual of the Slepian-Wolf problem. Next, two lossy source coding problems in distributed settings are investigated. The first is distributed lossy source coding, the lossy version of the Slepian-Wolf problem. Although the capacity region of this problem is not known in general, there is a good inner bound called the Berger-Tung inner bound. A polar coding method that can achieve the whole dominant face of the Berger-Tung region is devised. The second problem considered is the multiple description coding problem. The capacity region of this problem is also not known in general; the El Gamal-Cover inner bound is the best known bound. A polar coding method that can achieve any point on the dominant face of the El Gamal-Cover region is devised.
    Önay, Saygun. Ph.D.
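
    Both the channel- and source-coding constructions above are built on the same Arıkan transform. Below is a minimal sketch (Python; one common recursion, shown without the bit-reversal permutation) of the transform x = u F^{⊗n} over GF(2), where F = [[1, 0], [1, 1]].

        import numpy as np

        def polar_transform(u):
            """Apply x = u * F^{(x)n} over GF(2), F = [[1, 0], [1, 1]],
            using the block recursion F^{(x)n} = [[F', 0], [F', F']]."""
            u = np.asarray(u, dtype=np.uint8)
            if u.size == 1:
                return u.copy()
            half = u.size // 2
            upper = polar_transform(u[:half] ^ u[half:])   # (u_a + u_b) F'
            lower = polar_transform(u[half:])              #        u_b  F'
            return np.concatenate([upper, lower])

        print(polar_transform([1, 0, 1, 1]))   # input length must be a power of two

    Roughly speaking, in the Slepian-Wolf setting the same transform is applied to the source sequence and only the positions that remain unpolarized (high conditional entropy given the side information) need to be communicated; the remaining positions can be reconstructed by a successive cancellation decoder.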

    Algorithms for 5G physical layer

    There is great activity in the research community towards investigating the various aspects of 5G at different protocol layers and parts of the network. Among all of these, physical layer design plays a very important role in satisfying the high demands in terms of data rates, latency, reliability, and number of connected devices for 5G deployment. This thesis addresses the latest developments in physical layer algorithms regarding channel coding, signal detection, frame synchronization, and multiple access techniques in the light of 5G use cases. These developments are governed by the requirements of the different use case scenarios that are envisioned to be the driving force in 5G. All chapters from chapter 2 to 5 are developed around the need for physical layer algorithms dedicated to 5G use cases. In brief, this thesis focuses on the design, analysis, simulation, and advancement of physical layer aspects such as:

    1. Reliability-based decoding of short-length Linear Block Codes (LBCs) with very good minimum Hamming distance properties, for applications requiring very low latency. In this context, we enlarge the grid of possible candidates by considering, in particular, short-length LBCs (especially extended BCH codes) with soft-decision decoding;

    2. Efficient synchronization of a preamble/postamble in a short bursty frame using a modified Massey correlator;

    3. Detection of Primary User activity using semi-blind spectrum sensing algorithms and analysis of such algorithms under practical imperfections;

    4. Design of an optimal spreading matrix for a Low Density Spreading (LDS) technique in the context of non-orthogonal multiple access. In such a spreading matrix, only a small number of elements in each spreading sequence are non-zero, allowing each user to spread its data over a small number of chips (tones) and thus simplifying the decoding procedure using the Message Passing Algorithm (MPA); a sketch of such a sparse spreading structure follows below.
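
    A commonly cited regular LDS structure (6 users on 4 chips, each user spreading over only 2 chips) illustrates the sparsity referred to in item 4. The sketch below (Python; an illustrative factor-graph matrix with random spreading values, not the optimized matrix designed in the thesis) builds such a sparse spreading matrix and forms the overloaded received signal that an MPA detector would resolve.

        import numpy as np

        # Factor-graph indicator: rows = chips, columns = users; each user
        # touches 2 chips and each chip carries 3 users (150% overloading).
        F = np.array([[1, 1, 1, 0, 0, 0],
                      [1, 0, 0, 1, 1, 0],
                      [0, 1, 0, 1, 0, 1],
                      [0, 0, 1, 0, 1, 1]])

        rng = np.random.default_rng(0)
        # Place (illustrative) unit-magnitude complex spreading values on the nonzeros.
        S = F * np.exp(2j * np.pi * rng.random(F.shape))

        x = 1.0 - 2.0 * rng.integers(0, 2, size=6)         # one BPSK symbol per user
        noise = 0.1 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
        y = S @ x + noise    # chip-level superposition seen at the receiver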