
    Applications of Sparse Codes: Batched Zigzag Fountain Codes and WOM Codes

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2017. Advisor: Jong-Seon No.

    This dissertation contains two contributions on the applications of sparse codes: fountain codes (batched zigzag (BZ) fountain codes and two-phase batched zigzag (TBZ) fountain codes) and write-once memory (WOM) codes implemented by rate-compatible low-density generator matrix (RC-LDGM) codes.

    First, two classes of fountain codes, called batched zigzag fountain codes and two-phase batched zigzag fountain codes, are proposed for the symbol erasure channel. At the cost of slightly lengthened code symbols, the message symbols involved in each batch of the proposed codes can be recovered by a low-complexity zigzag decoding algorithm, so the proposed codes have low buffer occupancy during the decoding process. These features suit receivers with limited hardware resources in the broadcasting channel. A method to obtain the degree distribution of code symbols for the proposed codes via ripple size evolution is also proposed, taking into account the code symbols released from the batches. It is shown that the proposed codes outperform Luby transform codes and zigzag decodable fountain codes with respect to intermediate recovery rate and coding overhead when the message length is short, the symbol erasure rate is low, and the available buffer size is limited.

    In the second part of this dissertation, WOM codes constructed from sparse codes are presented. Recently, WOM codes have been adopted in NAND flash-based solid-state drives (SSDs) to extend the lifetime by reducing the number of erase operations. Here, a new rewriting scheme for the SSD is proposed, implemented by multiple binary erasure quantization (BEQ) codes; the BEQ codes are constructed from RC-LDGM codes. Moreover, by combining the RC-LDGM codes with a page selection method, writing efficiency can be improved. It is verified via simulation that an SSD with the proposed rewriting scheme outperforms an SSD both without WOM codes and with conventional WOM codes, for single-level cell (SLC) and multi-level cell (MLC) flash memories.

    Contents:
    1 Introduction
      1.1 Background
      1.2 Overview of Dissertation
    2 Sparse Codes
      2.1 Linear Block Codes
      2.2 LDPC Codes
      2.3 Message Passing Decoder
    3 New Fountain Codes with Improved Intermediate Recovery Based on Batched Zigzag Coding
      3.1 Preliminaries
        3.1.1 Definitions and Notation
        3.1.2 LT Codes
        3.1.3 Zigzag Decodable Codes
        3.1.4 Bit-Level Overhead
      3.2 New Fountain Codes Based on Batched Zigzag Coding
        3.2.1 Construction of Shift Matrix
        3.2.2 Encoding and Decoding of the Proposed BZ Fountain Codes
        3.2.3 Storage and Computational Complexity
      3.3 Degree Distribution of BZ Fountain Codes
        3.3.1 Relation Between Ψ(x) and Ω(x)
        3.3.2 Derivation of Ω(x) via Ripple Size Evolution
      3.4 Two-Phase Batched Zigzag Fountain Codes with Additional Memory
        3.4.1 Code Construction
        3.4.2 Bit-Level Overhead
      3.5 Numerical Analysis
    4 Write-Once Memory Codes Using Rate-Compatible LDGM Codes
      4.1 Preliminaries
        4.1.1 NAND Flash Memory
        4.1.2 Rewriting Schemes for Flash Memory
        4.1.3 Construction of Rewriting Codes by BEQ Codes
      4.2 Proposed Rewriting Codes
        4.2.1 System Model
        4.2.2 Multi-rate Rewriting Codes
        4.2.3 Page Selection for Rewriting
      4.3 RC-LDGM Codes
      4.4 Numerical Analysis
    5 Conclusions
    Bibliography
    Abstract (in Korean)
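    The batch construction and the degree-distribution design via ripple size evolution are specific to the dissertation, but the peeling mechanism that drives both LT and BZ decoding can be sketched in a few lines (a minimal illustration; symbol values are plain integers, and the intra-batch bit-shift/zigzag step that resolves higher-degree collisions is omitted):

```python
# Generic LT-style peeling over the erasure channel (illustrative only; the
# dissertation's BZ codes add bit-shifts within batches for zigzag decoding).
# Each received code symbol is [set_of_message_indices, xor_of_those_symbols].
# Degree-1 symbols form the "ripple": recovering their unique neighbor peels
# it out of every other symbol, ideally releasing new degree-1 symbols.

def peel_decode(received, k):
    msg = [None] * k
    ripple = [i for i, (nbrs, _) in enumerate(received) if len(nbrs) == 1]
    while ripple:
        nbrs, val = received[ripple.pop()]
        if len(nbrs) != 1:
            continue                    # stale entry, already consumed
        (j,) = nbrs
        msg[j] = val                    # recover message symbol j
        for t, entry in enumerate(received):
            if j in entry[0]:           # peel j from every code symbol
                entry[0].discard(j)
                entry[1] ^= val
                if len(entry[0]) == 1:
                    ripple.append(t)    # symbol joins the ripple
    return msg

# Example: message (m0, m1, m2); received m0, m0^m1, m1^m2.
print(peel_decode([[{0}, 5], [{0, 1}, 12], [{1, 2}, 6]], 3))  # [5, 9, 15]
```

    For the second contribution, the classic two-write WOM code of Rivest and Shamir conveys the basic idea of rewriting without erasing: 2 bits are stored twice in 3 cells that can only flip from 0 to 1. The dissertation replaces such fixed small codes with RC-LDGM-based BEQ codes; the sketch below is only the textbook example, not the proposed scheme.

```python
# Rivest-Shamir (1982) WOM code: two writes of 2 bits into 3 write-once cells.
FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0),
         (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}

def decode(cells):
    if sum(cells) <= 1:                                   # first generation
        return next(d for d, c in FIRST.items() if c == cells)
    return next(d for d, c in FIRST.items()               # second generation
                if tuple(1 - b for b in c) == cells)

def write(cells, data):
    """Return a new cell state storing `data`; cells may only go 0 -> 1."""
    if decode(cells) == data:
        return cells                                      # nothing to change
    fresh = FIRST[data]
    if all(c <= f for c, f in zip(cells, fresh)):
        return fresh                                      # first-generation write
    second = tuple(1 - f for f in fresh)                  # complemented codeword
    assert all(c <= s for c, s in zip(cells, second)), "both writes used up"
    return second

state = write((0, 0, 0), (1, 0))      # -> (0, 1, 0)
state = write(state, (0, 1))          # rewrite without erasing -> (0, 1, 1)
print(state, decode(state))
```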

    On hard-decision forward error correction with application to high-throughput fiber-optic communications

    The advent of the Internet changed not only how we communicate but also how we live. The number of Internet users has grown exponentially in the last decade, exceeding 3.4 billion in 2016. Fiber links serve as the Internet backbone; hence, the fast growth of the Internet and the sheer number of new applications are largely driven by advances in optical communications. The emergence of coherent optical systems has led to a more efficient use of the available spectrum compared to traditional on-off keying transmission and has made it possible to increase the supported data rates. To achieve high spectral efficiencies and improve the transmission reach, coding in combination with higher-order modulation, a scheme known as coded modulation (CM), has become indispensable in fiber-optic communications. In recent years, graph-based codes such as low-density parity-check codes with soft-decision decoding (SDD) have been adopted for long-haul coherent optical systems. SDD yields very high net coding gains, but at the expense of a relatively high decoding complexity, which brings implementation challenges at very high data rates. Hard-decision decoding (HDD) is an appealing alternative that reduces the decoding complexity. This motivates the focus of this thesis on forward error correction (FEC) with HDD for high-throughput, low-power fiber-optic communications.

    In this thesis, we start by studying the performance bounds of HDD. In particular, we derive achievable information rates (AIRs) for CM with HDD for both bit-wise and symbol-wise decoding, and show that bit-wise HDD yields significantly higher AIRs. We also design nonbinary staircase codes using density evolution. Finite-length simulation results of binary and nonbinary staircase codes corroborate the conclusions arising from the AIR analysis, i.e., for HDD binary codes are preferable. Then, we consider probabilistic shaping. In particular, we extend the probabilistic amplitude shaping (PAS) scheme recently introduced by Böcherer et al. to HDD based on staircase codes. Finally, we focus on new decoding algorithms for product-like codes to close the gap between HDD and SDD while keeping the decoding complexity low. In particular, we propose three novel decoding algorithms for product-like codes based on assisting the HDD with some level of soft information. The proposed algorithms provide a clear performance-complexity tradeoff; we show that up to roughly half of the gap between SDD and HDD can be closed with a limited complexity increase with respect to HDD.
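    As a rough numerical illustration of the bit-wise hard-decision viewpoint (my own simplified proxy, not the thesis's AIR derivation): after hard-decision demapping of a Gray-mapped constellation, each bit level can be treated as a binary symmetric channel, its crossover probability estimated by Monte Carlo, and the per-level capacities 1 - h2(p) summed:

```python
# Proxy for the bit-wise HDD rate of Gray-mapped 4-PAM over AWGN
# (illustrative simplification; constellation, sigma, and names are assumed).
import numpy as np

def h2(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

rng = np.random.default_rng(0)
levels = np.array([-3.0, -1.0, 1.0, 3.0])          # 4-PAM points
gray = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])  # Gray labels per point

n, sigma = 200_000, 0.6
idx = rng.integers(0, 4, n)
y = levels[idx] + sigma * rng.standard_normal(n)
hard = np.abs(y[:, None] - levels[None, :]).argmin(axis=1)  # hard demapping

p = (gray[idx] != gray[hard]).mean(axis=0)   # crossover prob per bit level
print("per-level crossover:", p, "rate proxy:", (1 - h2(p)).sum(), "bits/symbol")
```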

    Graph-based techniques for compression and reconstruction of sparse sources

    The main goal of this thesis is to develop lossless compression schemes for analog and binary sources. All the considered compression schemes share the feature that the encoder can be represented by a graph, so they can be studied with tools from modern coding theory. In particular, this thesis focuses on two compression problems: group testing and noiseless compressed sensing. Although the two problems may seem unrelated, this thesis shows that they are closely connected. Furthermore, group testing has the same mathematical formulation as non-linear binary source compression schemes that use the OR operator, and the thesis exploits the similarities between these problems.

    The group testing problem aims to identify the defective subjects of a population with as few tests as possible. Group testing schemes can be divided into two groups: adaptive and non-adaptive. Adaptive schemes generate tests sequentially and exploit partial decoding results to reduce the overall number of tests required to label all members of the population, whereas non-adaptive schemes perform all the tests in parallel and attempt to label as many subjects as possible. Our contributions to the group testing problem are both theoretical and practical. We propose a novel adaptive scheme that performs the testing process efficiently. Furthermore, we develop tools to predict the performance of both adaptive and non-adaptive schemes when the number of subjects to be tested is large; these tools make it possible to characterize adaptive and non-adaptive group testing schemes without simulating them.

    The goal of the noiseless compressed sensing problem is to retrieve a signal from its linear projection onto a lower-dimensional space. This is possible only when the number of zero components of the original signal is large enough. Compressed sensing deals with the design of sampling schemes and reconstruction algorithms that manage to reconstruct the original signal vector from as few samples as possible. In this thesis we pose the compressed sensing problem within a probabilistic framework, as opposed to the classical compressed sensing formulation; recent results in the state of the art show that this approach is more efficient than the classical one. Our contributions to noiseless compressed sensing are both theoretical and practical. We deduce a necessary and sufficient condition on the matrix design to guarantee lossless reconstruction, and we propose two novel reconstruction algorithms based on message passing over the sparse graph representation of the matrix, one of them with very low computational complexity.
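    The thesis's own adaptive scheme and prediction tools are not spelled out in the abstract, but the flavor of non-adaptive, OR-based group testing can be shown with the standard COMP decoder (a minimal sketch; the pooling density, problem sizes, and all names are illustrative assumptions, not the thesis's design):

```python
# Non-adaptive group testing with the textbook COMP decoder.
import numpy as np

rng = np.random.default_rng(1)
n, t, k = 100, 30, 3                       # subjects, tests, defectives

x = np.zeros(n, dtype=bool)
x[rng.choice(n, k, replace=False)] = True  # hidden defective set

A = rng.random((t, n)) < 0.1               # pooling matrix: who enters which test
y = (A & x).any(axis=1)                    # each test ORs its pooled subjects

# COMP: any subject appearing in a negative test is surely non-defective;
# everyone else stays a candidate (a superset of the true defective set).
candidates = ~((A & ~y[:, None]).any(axis=0))
print("true defectives:", np.flatnonzero(x))
print("COMP candidates:", np.flatnonzero(candidates))
```

    Because COMP can only rule subjects out, its output shrinks to the true defective set only when enough tests are used; analysis tools of the kind developed in the thesis predict this behavior for large populations without simulation.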

    MIMO Systems

    In recent years, it has become clear that MIMO communication systems are inevitable in the accelerated evolution of high-data-rate applications, due to their potential to dramatically increase spectral efficiency while simultaneously sending individual information to the corresponding users in wireless systems. This book intends to highlight current research topics in the field of MIMO systems and to offer a snapshot of the recent advances and major issues faced today by researchers in MIMO-related areas. It is written by specialists working in universities and research centers all over the world and covers the fundamental principles and main advanced topics of high-data-rate wireless communication systems over MIMO channels. Moreover, the book has the advantage of providing a collection of applications that are completely independent and self-contained; the interested reader can therefore choose any chapter and skip to another without losing continuity.

    Radio Communications

    In the last decades, the restless evolution of information and communication technologies (ICT) has brought about a deep transformation of our habits. The growth of the Internet and advances in hardware and software implementations have modified the way we communicate and share information. In this book, an overview of the major issues faced today by researchers in the field of radio communications is given through 35 high-quality chapters written by specialists working in universities and research centers all over the world. Various aspects are discussed in depth: channel modeling, beamforming, multiple antennas, cooperative networks, opportunistic scheduling, advanced admission control, handover management, system performance assessment, routing issues under mobility, localization, and web security. Advanced techniques for radio resource management are discussed for both single and multiple radio technologies, in infrastructure, mesh, and ad hoc networks.