12 research outputs found

    On a Low-Rate TLDPC Code Ensemble and the Necessary Condition on the Linear Minimum Distance for Sparse-Graph Codes

    This paper addresses the design of low-rate sparse-graph codes with minimum distance growing linearly in the blocklength. First, we define a necessary condition that must be satisfied if a linear minimum distance is to be ensured. The condition is formulated in terms of degree-1 and degree-2 variable nodes and of low-weight codewords of the underlying code, and it generalizes results known for turbo codes [8] and LDPC codes. Then, we present a new ensemble of low-rate codes, itself a subclass of TLDPC codes [4], [5], designed under this necessary condition. The asymptotic analysis of the ensemble shows that its iterative threshold lies close to the Shannon limit. In addition to the linear minimum distance property, the ensemble has a simple structure, a low decoding complexity, and fast convergence. (Comment: submitted to IEEE Transactions on Communications.)
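    The paper's generalized condition is not reproduced in the abstract. For plain LDPC ensembles, a closely related and well-known check from the weight-distribution literature is that λ'(0)ρ'(1) < 1, which limits the fraction of edges attached to degree-2 variable nodes so that low-weight codewords do not proliferate. A minimal sketch of that classical check, in Python, with hypothetical degree distributions (illustrative values, not taken from the paper):

```python
# Classical LDPC-ensemble check lambda'(0) * rho'(1) < 1 (not the paper's
# generalized condition). Degree distributions are edge-perspective:
#   lambda(x) = sum_i lambda_i x^(i-1),  rho(x) = sum_j rho_j x^(j-1).

def lambda_prime_at_zero(lam):
    """lambda'(0) = lambda_2, the fraction of edges on degree-2 variable nodes."""
    return lam.get(2, 0.0)

def rho_prime_at_one(rho):
    """rho'(1) = sum_j (j - 1) * rho_j."""
    return sum((j - 1) * r for j, r in rho.items())

# Hypothetical degree distributions (illustrative only):
lam = {2: 0.15, 3: 0.45, 8: 0.40}   # variable-node edge fractions, sum to 1
rho = {6: 1.00}                     # regular degree-6 check nodes

product = lambda_prime_at_zero(lam) * rho_prime_at_one(rho)
print(f"lambda'(0) * rho'(1) = {product:.2f}")   # 0.75 for these values
print("linear-minimum-distance check passed" if product < 1.0
      else "check failed: too many degree-2 variable nodes")
```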

    Applications of Sparse Codes: Batched Zigzag Fountain Codes and WOM Codes

    Doctoral dissertation, Department of Electrical and Computer Engineering, Graduate School, Seoul National University, February 2017. Advisor: Jong-Seon No.

    This dissertation contains the following two contributions on the applications of sparse codes:

    - Fountain codes: batched zigzag (BZ) fountain codes and two-phase batched zigzag (TBZ) fountain codes
    - Write-once memory (WOM) codes: WOM codes implemented by rate-compatible low-density generator matrix (RC-LDGM) codes

    First, two classes of fountain codes, called batched zigzag fountain codes and two-phase batched zigzag fountain codes, are proposed for the symbol erasure channel. At the cost of slightly lengthened code symbols, the message symbols involved in each batch of the proposed codes can be recovered by a low-complexity zigzag decoding algorithm, so the proposed codes have low buffer occupancy during the decoding process. These features suit receivers with limited hardware resources in a broadcast channel. A method to obtain degree distributions of code symbols for the proposed codes via ripple size evolution is also proposed, taking into account the code symbols released from the batches. It is shown that the proposed codes outperform Luby transform codes and zigzag decodable fountain codes with respect to intermediate recovery rate and coding overhead when the message length is short, the symbol erasure rate is low, and the available buffer size is limited (see the decoder sketch after the table of contents below).

    In the second part of this dissertation, WOM codes constructed from sparse codes are presented. Recently, WOM codes have been adopted in NAND flash-based solid-state drives (SSDs) to extend their lifetime by reducing the number of erasure operations. Here, a new rewriting scheme for the SSD is proposed, implemented by multiple binary erasure quantization (BEQ) codes; the BEQ codes are constructed from RC-LDGM codes. Moreover, by combining RC-LDGM codes with a page selection method, writing efficiency can be improved. It is verified via simulation that an SSD with the proposed rewriting scheme outperforms SSDs both without WOM codes and with conventional WOM codes, for single-level cell (SLC) and multi-level cell (MLC) flash memories.

    Table of contents:
    1 Introduction
      1.1 Background
      1.2 Overview of Dissertation
    2 Sparse Codes
      2.1 Linear Block Codes
      2.2 LDPC Codes
      2.3 Message Passing Decoder
    3 New Fountain Codes with Improved Intermediate Recovery Based on Batched Zigzag Coding
      3.1 Preliminaries
        3.1.1 Definitions and Notation
        3.1.2 LT Codes
        3.1.3 Zigzag Decodable Codes
        3.1.4 Bit-Level Overhead
      3.2 New Fountain Codes Based on Batched Zigzag Coding
        3.2.1 Construction of Shift Matrix
        3.2.2 Encoding and Decoding of the Proposed BZ Fountain Codes
        3.2.3 Storage and Computational Complexity
      3.3 Degree Distribution of BZ Fountain Codes
        3.3.1 Relation Between Ψ(x) and Ω(x)
        3.3.2 Derivation of Ω(x) via Ripple Size Evolution
      3.4 Two-Phase Batched Zigzag Fountain Codes with Additional Memory
        3.4.1 Code Construction
        3.4.2 Bit-Level Overhead
      3.5 Numerical Analysis
    4 Write-Once Memory Codes Using Rate-Compatible LDGM Codes
      4.1 Preliminaries
        4.1.1 NAND Flash Memory
        4.1.2 Rewriting Schemes for Flash Memory
        4.1.3 Construction of Rewriting Codes by BEQ Codes
      4.2 Proposed Rewriting Codes
        4.2.1 System Model
        4.2.2 Multi-rate Rewriting Codes
        4.2.3 Page Selection for Rewriting
      4.3 RC-LDGM Codes
      4.4 Numerical Analysis
    5 Conclusions
    Bibliography
    Abstract (in Korean)
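    The batched zigzag construction itself is not reproduced in this abstract. What the degree-distribution design via ripple size evolution reasons about is the classic peeling decoder for LT-style fountain codes over the erasure channel; a minimal, self-contained sketch of that peeling mechanism follows (toy degree distribution and parameters, not the dissertation's):

```python
import random

def peel(received):
    """Peeling decoder for LT-style fountain codes over the erasure channel.
    received: list of (list_of_message_indices, xor_of_those_message_bits).
    Returns dict index -> recovered bit (partial if the ripple dies out)."""
    recovered = {}
    symbols = [[set(idx), val] for idx, val in received]
    ripple = True
    while ripple:
        ripple = False
        for sym in symbols:
            idx, val = sym
            # substitute message bits that are already known
            for j in list(idx):
                if j in recovered:
                    idx.discard(j)
                    val ^= recovered[j]
            sym[1] = val
            # a degree-1 symbol releases one message bit into the ripple
            if len(idx) == 1:
                recovered[idx.pop()] = val
                ripple = True
    return recovered

random.seed(1)
k = 8
msg = [random.randint(0, 1) for _ in range(k)]
coded = []
for _ in range(14):                      # ~1.75x overhead, toy numbers
    d = random.choice([1, 2, 2, 3, 4])   # toy degree distribution
    idx = random.sample(range(k), d)
    coded.append((idx, sum(msg[j] for j in idx) % 2))
print(f"recovered {len(peel(coded))} of {k} message bits")
```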

    Design of serially-concatenated LDGM codes

    [Abstract] Since Shannon demonstrated in 1948 the feasibility of achieving an arbitrarily low error probability in a communications system provided that the transmission rate is kept below a certain limit, one of the greatest challenges in digital communications and, more specifically, in the channel coding field, has been to find codes that approach this limit as closely as possible with reasonable encoding and decoding complexity. However, it was not until 1993, when Berrou et al. presented the turbo codes, that a coding scheme capable of performing at less than 1 dB from Shannon's limit with an extremely low error probability was found. These codes are based on the iterative decoding of concatenated components that exchange information about the transmitted bits, which is known as the "turbo principle". The generalization of this idea led in 1995 to the rediscovery of LDPC (Low-Density Parity-Check) codes, proposed for the first time by Gallager in the 1960s. LDPC codes are linear block codes with a sparse parity-check matrix that are able to surpass the performance of turbo codes with a smaller decoding complexity. However, because the generator matrix of a general LDPC code is not sparse, the encoding complexity can be excessively high. LDGM (Low-Density Generator Matrix) codes, a particular case of LDPC codes, have a sparse generator matrix, thanks to which they present a lower encoding complexity. However, except for very high rates, LDGM codes are "bad", i.e., they have a non-zero error probability that is independent of the code block length. More recently, IRA (Irregular Repeat-Accumulate) codes, consisting of the serial concatenation of an LDGM code and an accumulator, have been proposed; they approach the performance of LDPC codes with an encoding complexity similar to that of LDGM codes.

    In this thesis we explore an alternative to IRA codes consisting of the serial concatenation of two LDGM codes, a scheme that we denote SCLDGM (Serially-Concatenated Low-Density Generator Matrix). The basic premise of SCLDGM codes is that an inner code of rate close to the desired transmission rate fixes most of the errors, and an outer code of rate close to one corrects the few errors that remain after decoding the inner code. For any of these schemes to perform as close as possible to the capacity limit, the code parameters that best fit the transmission channel must be determined. The two techniques most commonly used in the literature to optimize LDPC codes are density evolution (DE) and EXtrinsic Information Transfer (EXIT) charts, which have been employed to obtain optimized codes that perform within a few tenths of a decibel of the AWGN channel capacity. However, no optimization techniques have been presented for SCLDGM codes, which so far have been designed heuristically, and their performance therefore falls short of that achieved by IRA and LDPC codes.

    Another of the most important advances of recent years is the use of multiple antennas at the transmitter and the receiver, known as MIMO (Multiple-Input Multiple-Output) systems. Telatar showed that the channel capacity of such systems scales linearly with the minimum of the numbers of transmit and receive antennas, enabling spectral efficiencies far greater than those of systems with a single transmit and a single receive antenna (Single-Input Single-Output (SISO) systems). This important advantage has attracted much attention from the research community and has led many recent standards, such as WiMAX 802.16e and WiFi 802.11n, as well as future 4G systems, to be based on MIMO. The main problem of MIMO systems is the high complexity of optimum detection, which grows exponentially with the number of transmit antennas and the number of modulation levels. Several suboptimum algorithms have been proposed to reduce this complexity, most notably the SIC-MMSE (Soft-Interference-Cancellation Minimum Mean Square Error) and sphere detectors. Another major issue is the high complexity of channel estimation, due to the large number of coefficients that determine the channel. Techniques such as Maximum-Likelihood Expectation-Maximization (ML-EM) have been successfully applied to estimate MIMO channels but, as with detection, their complexity becomes very high when the number of transmit antennas or the constellation size increases.

    The main objective of this work is the study and optimization of SCLDGM codes over SISO and MIMO channels. To this end, we propose an optimization method for SCLDGM codes based on EXIT charts that allows these codes to exceed the performance of the IRA codes in the literature and to approach the performance of LDPC codes, with the advantage over the latter of a lower encoding complexity. We also propose SCLDGM codes optimized for both sphere and SIC-MMSE suboptimal MIMO detectors, constituting a system capable of approaching the capacity limits of MIMO channels with low-complexity encoding, detection, and decoding. We analyze the BICM (Bit-Interleaved Coded Modulation) scheme and the concatenation of SCLDGM codes with Space-Time Codes (STC) over ergodic and quasi-static MIMO channels. Furthermore, we explore the combination of these codes with different channel estimation algorithms that exploit the low complexity of the suboptimum detectors to reduce the cost of the estimation process while keeping a small distance to the capacity limit. Finally, we propose coding schemes for low rates based on the serial concatenation of several LDGM codes, reducing the complexity of recently proposed schemes based on Hadamard codes.
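    The encoding advantage that motivates SCLDGM codes is easy to see in code: both component generator matrices are sparse, so the serial concatenation amounts to two sparse GF(2) matrix-vector products. A minimal sketch with random toy matrices (illustrative row weights and rates, not optimized codes from the thesis):

```python
import numpy as np

# SCLDGM encoding premise: outer code of rate close to 1, inner code of rate
# close to the target transmission rate, both with sparse generator matrices.
# Toy random matrices below are stand-ins for optimized codes.

rng = np.random.default_rng(0)

def sparse_generator(k, n, row_weight):
    """Random sparse k x n GF(2) generator matrix with fixed row weight."""
    G = np.zeros((k, n), dtype=np.uint8)
    for i in range(k):
        G[i, rng.choice(n, size=row_weight, replace=False)] = 1
    return G

def encode(G, u):
    # GF(2) matrix-vector product; a real implementation stores adjacency
    # lists, so the cost is proportional to the number of ones in G.
    return (u @ G) % 2

k, n_outer, n_inner = 16, 18, 32          # outer rate ~0.89, inner rate ~0.56
G_outer = sparse_generator(k, n_outer, row_weight=3)
G_inner = sparse_generator(n_outer, n_inner, row_weight=3)

u = rng.integers(0, 2, size=k, dtype=np.uint8)
x = encode(G_inner, encode(G_outer, u))   # serial concatenation
print("codeword:", x)
```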

    Near-capacity fixed-rate and rateless channel code constructions

    Fixed-rate and rateless channel code constructions are designed to satisfy conflicting design tradeoffs, leading to codes that admit practical implementations whilst offering good bit error ratio (BER) and block error ratio (BLER) performance. More explicitly, two novel low-density parity-check (LDPC) code constructions are proposed; the first constitutes a family of quasi-cyclic protograph LDPC codes, which has a Vandermonde-like parity-check matrix (PCM). The second constitutes a specific class of protograph LDPC codes, termed multilevel structured (MLS) LDPC codes. These codes possess a PCM construction that allows the coexistence of pseudo-randomness and of structure requiring reduced memory. More importantly, it is also demonstrated that these benefits accrue without any compromise in the attainable BER/BLER performance. We also present the novel concept of separating multiple users by means of user-specific channel codes, referred to as channel code division multiple access (CCDMA), and provide an example based on MLS LDPC codes. In particular, we circumvent the difficulty of potentially high memory requirements, while ensuring that each user's bits in the CCDMA system are equally protected. With regard to rateless channel coding, we propose a novel family of codes, which we refer to as reconfigurable rateless codes, capable not only of varying their code rate but also of adaptively modifying their encoding/decoding strategy according to the near-instantaneous channel conditions. We demonstrate that the proposed reconfigurable rateless codes are capable of shaping their own degree distribution according to the near-instantaneous requirements imposed by the channel, without any explicit channel knowledge at the transmitter. Additionally, a generalised transmit preprocessing aided closed-loop downlink multiple-input multiple-output (MIMO) system is presented, in which both the channel coding components and the linear transmit precoder exploit knowledge of the channel state information (CSI). More explicitly, we embed a rateless code in a MIMO transmit preprocessing scheme in order to attain near-capacity performance across a wide range of channel signal-to-noise ratios (SNRs), rather than only at a specific SNR. The performance of our scheme is further enhanced with the aid of a technique referred to as pilot symbol assisted rateless (PSAR) coding, whereby a predetermined fraction of pilot bits is appropriately interspersed with the original information bits at the channel coding stage, instead of multiplexing pilots at the modulation stage, as in classic pilot symbol assisted modulation (PSAM). We subsequently demonstrate that the PSAR code-aided transmit preprocessing scheme succeeds in gleaning more information from the inserted pilots than the classic PSAM technique, because the pilot bits are not only useful for sounding the channel at the receiver but also beneficial for significantly reducing the computational complexity of the rateless channel decoder.
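    The PSAR placement idea described above is mechanical enough to sketch: known pilot bits are interspersed with the information bits before channel encoding, rather than multiplexing pilot symbols at the modulation stage as in classic PSAM. A minimal sketch (spacing, pilot value, and framing are illustrative assumptions, not the thesis's parameters):

```python
def insert_pilots(info_bits, period=10, pilot_bit=0):
    """Intersperse a known pilot bit before every `period` information bits,
    prior to channel encoding (PSAR-style), instead of multiplexing pilot
    symbols at the modulation stage (classic PSAM)."""
    framed, pilot_positions = [], []
    for i, b in enumerate(info_bits):
        if i % period == 0:
            pilot_positions.append(len(framed))
            framed.append(pilot_bit)   # known to the receiver
        framed.append(b)
    return framed, pilot_positions

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
framed, pos = insert_pilots(bits, period=4)
print(framed)                     # pilots interleaved with information bits
print("pilot positions:", pos)
# `framed` is what the rateless encoder would see; at the receiver the known
# pilot bits both sound the channel and assist the rateless decoder.
```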

    On Lowering the Error Floor of Short-to-Medium Block Length Irregular Low Density Parity Check Codes

    Gallager proposed and developed low-density parity-check (LDPC) codes in the early 1960s. LDPC codes were rediscovered in the early 1990s and shown to be capacity-approaching over the additive white Gaussian noise (AWGN) channel. Subsequently, density evolution (DE) optimized symbol node degree distributions were used to significantly improve the decoding performance of short-to-medium length irregular LDPC codes. Currently, the short-to-medium length LDPC codes with the lowest error floors are DE-optimized irregular LDPC codes constructed using modifications of the progressive edge growth (PEG) algorithm designed to increase the approximate cycle extrinsic message degree (ACE) in the constructed code graphs. The aim of the present work is to find efficient means of improving on the error floor performance published in the literature for short-to-medium length irregular LDPC codes over AWGN channels. An efficient algorithm for determining the girth and ACE distributions in short-to-medium length LDPC code Tanner graphs is proposed. A cyclic PEG (CPEG) algorithm, which uses an edge connection sequence that yields LDPC codes with improved girth and ACE distributions, is presented. LDPC codes with DE-optimized ('good') degree distributions which have larger minimum distances and stopping distances than previously published for LDPC codes of similar length and rate have been found. It is shown that increasing the minimum distance of LDPC codes lowers their error floor over AWGN channels; however, there are threshold minimum distance values above which the error floor is not lowered further. A minimum local girth, edge-skipping (MLG (ES)) PEG algorithm is presented; the algorithm controls the minimum local girth (global girth) connected in the Tanner graphs of the constructed LDPC codes by forfeiting some edge connections. A technique for constructing optimal low correlated edge density (OED) LDPC codes, based on modified DE-optimized symbol node degree distributions and the MLG (ES) PEG algorithm modification, is presented. OED rate-½ (n, k) = (512, 256) LDPC codes are shown to have a lower error floor over the AWGN channel than previously published for LDPC codes of similar length and rate. Similarly, owing to an improved symbol node degree distribution, rate-½ (n, k) = (1024, 512) LDPC codes are shown to have a lower error floor over the AWGN channel than previously published for LDPC codes of similar length and rate. An improved BP/SPA (IBP/SPA) decoder, obtained by making two simple modifications to the standard BP/SPA decoder, is shown to yield an unprecedented, generalized improvement in the performance of short-to-medium length irregular LDPC codes under iterative message passing decoding. Finally, the superiority of the Slepian-Wolf distributed source coding model over other LDPC-based distributed source coding models is shown.
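    The thesis's efficient girth/ACE algorithm is not given in the abstract. As a baseline, here is the textbook BFS-based girth computation for a Tanner graph, which is the kind of computation such algorithms speed up (toy parity-check matrix, illustrative only; since Tanner graphs are bipartite, the minimum over BFS starts at all variable nodes gives the exact girth):

```python
from collections import deque

def tanner_girth(check_rows, n_vars):
    """Girth of a Tanner graph. check_rows: one set of variable-node
    indices per check node. Returns inf if the graph is acyclic."""
    # bipartite adjacency: variables 0..n_vars-1, check c at index n_vars+c
    adj = [[] for _ in range(n_vars + len(check_rows))]
    for c, row in enumerate(check_rows):
        for v in row:
            adj[v].append(n_vars + c)
            adj[n_vars + c].append(v)
    best = float('inf')
    for s in range(n_vars):              # BFS from every variable node
        dist, parent = {s: 0}, {s: -1}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif w != parent[u]:     # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

# Toy parity-check matrix with an obvious 4-cycle through v0, v1, c0, c1:
H = [{0, 1, 2}, {0, 1, 3}, {2, 3}]
print("girth =", tanner_girth(H, n_vars=4))   # -> 4
```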

    Bit-Wise Decoders for Coded Modulation and Broadcast Coded Slotted ALOHA

    This thesis deals with two aspects of wireless communications. The first aspect is efficient point-to-point data transmission. To achieve high spectral efficiency, coded modulation, a concatenation of higher-order modulation with error correction coding, is used. Bit-interleaved coded modulation (BICM) is a pragmatic approach to coded modulation, where soft information on encoded bits is calculated at the receiver and passed to a bit-wise decoder. Soft information is usually obtained in the form of log-likelihood ratios (also known as L-values), calculated using the max-log approximation. In this thesis, we analyze bit-wise decoders for pulse-amplitude modulation (PAM) constellations over the additive white Gaussian noise (AWGN) channel when the max-log approximation is used for calculating L-values. First, we analyze BICM systems from an information-theoretic perspective. We prove that the max-log approximation causes information loss for all PAM constellations and labelings, with the exception of a symmetric 4-PAM constellation labeled with a Gray code. We then analyze how the max-log approximation affects the generalized mutual information (GMI), which is an achievable rate for a standard BICM decoder. Second, we compare the performance of the standard BICM decoder with that of the maximum-likelihood (ML) decoder. We show that, as the signal-to-noise ratio (SNR) goes to infinity, the loss in terms of pairwise error probability is bounded by 1.25 dB for any two codewords; the analysis further shows that the loss is zero for a wide range of linear codes. The second aspect of wireless communications treated in this thesis is multiple channel access. Our main objective here is to provide reliable message exchange between nodes in a wireless ad hoc network with stringent delay constraints. To that end, we propose an uncoordinated medium access control protocol, termed all-to-all broadcast coded slotted ALOHA (B-CSA), that exploits coding over packets at the transmitter side and successive interference cancellation at the receiver side. The protocol resembles low-density parity-check codes and can be analyzed using the theory of codes on graphs. The packet loss rate of the protocol exhibits a threshold behavior with distinct error floor and waterfall regions. We derive a tight error floor approximation that is used for the optimization of the protocol, and we show how this approximation can be used to design protocols for networks where users have different reliability requirements. We use B-CSA in vehicular networks and show that it outperforms carrier sense multiple access, currently adopted as the MAC protocol for vehicular communications. Finally, we investigate the possibility of establishing a handshake in vehicular networks by means of B-CSA.
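    The L-value setting the thesis analyzes can be made concrete with a few lines of code: exact versus max-log bit LLRs for Gray-labeled 4-PAM over AWGN. Per the abstract, symmetric Gray-labeled 4-PAM is the one case where the max-log L-values cause no information loss. The constellation and labeling below are the usual textbook choices (assumptions, not taken from the thesis):

```python
import math

# Exact vs. max-log L-values for Gray-labeled 4-PAM over AWGN.
POINTS = {-3: (0, 0), -1: (0, 1), +1: (1, 1), +3: (1, 0)}  # amplitude -> (b0, b1)

def l_values(y, noise_var):
    """Return (exact, max_log) LLR lists for the two bit positions."""
    def metric(x):                  # log p(y|x) up to a common constant
        return -(y - x) ** 2 / (2 * noise_var)
    exact, max_log = [], []
    for k in range(2):              # bit position k
        m0 = [x for x, b in POINTS.items() if b[k] == 0]
        m1 = [x for x, b in POINTS.items() if b[k] == 1]
        exact.append(math.log(sum(math.exp(metric(x)) for x in m0))
                     - math.log(sum(math.exp(metric(x)) for x in m1)))
        # max-log: keep only the dominant term of each sum
        max_log.append(max(metric(x) for x in m0) - max(metric(x) for x in m1))
    return exact, max_log

exact, max_log = l_values(y=0.4, noise_var=0.5)
print("exact  :", [f"{v:+.3f}" for v in exact])
print("max-log:", [f"{v:+.3f}" for v in max_log])
```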

    Satellite Communications

    This study is motivated by the need to give the reader a broad view of the developments, key concepts, and technologies related to the evolution of the information society, with a focus on wireless communications and geoinformation technologies and their role in the environment. To give perspective, it aims to assist people active in industry, the public sector, and the Earth sciences by providing a base for their continued work and thinking.

    A complex systems approach to education in Switzerland

    The insights gained from the study of complex systems in biological, social, and engineered settings enable us not only to observe and understand, but also to actively design systems capable of successfully coping with complex and dynamically changing situations. The methods and mindset required for this approach have been applied to educational systems with their diverse levels of scale and complexity. Building on the general case made by Yaneer Bar-Yam, this paper applies the complex systems approach to the educational system in Switzerland. It confirms that the complex systems approach is valid: indeed, many recommendations made for the general case have already been implemented in the Swiss education system. To address existing problems and difficulties, further steps are recommended. This paper contributes to the further establishment of the complex systems approach by shedding light on an area which concerns us all, which is a frequent topic of discussion and dispute among politicians and the public, where billions of dollars have been spent without achieving the desired results, and where it is difficult to directly derive consequences from actions taken. The analysis of the education system's different levels, their complexity, and their scale clarifies how such a dynamic system should be approached and how it can be guided towards the desired performance.