483 research outputs found

    A Study on Low-Complexity Decoding Schemes for Low-Density Parity-Check Codes Using an Unreliable Path Search Technique

    Get PDF
    Doctoral dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, August 2017. Advisor: Jong-Seon No. This dissertation makes the following contributions to low-complexity decoding schemes for LDPC codes: a two-stage decoding scheme for LDPC codes, comprising a new stopping criterion and a new decoding scheme with unreliable path search; a parallel unreliable path search algorithm; and a validity and complexity analysis of the two-stage decoding scheme. First, a new two-stage decoding scheme for low-density parity-check (LDPC) codes that lowers the error floor is proposed. The proposed scheme consists of the conventional belief propagation (BP) decoding algorithm as the first stage and re-decodings with manipulated log-likelihood ratios (LLRs) of variable nodes as the second stage. In the first-stage decoding, an early stopping criterion is proposed for early detection of decoding failure, and a candidate set of variable nodes is determined, parts of which may belong to small trapping sets. In the second-stage decoding, the scores of the variable nodes in the candidate set are computed by the proposed unreliable path search algorithm, and the variable nodes are sorted in ascending order of score for the re-decoding trials. Each re-decoding trial runs the BP decoding algorithm with the manipulated LLR of one selected variable node from the candidate set at a time, under the second early stopping criterion. Secondly, the parallel unreliable path search algorithm is proposed to make the unreliable path search practical. To reduce decoding delay and computational complexity, an efficient realization of the search based on the parallel message-passing schedule of LDPC decoding is proposed; it significantly reduces the additional complexity without extra hardware requirements. Finally, the validity and complexity of the proposed unreliable path search algorithm are analyzed. The proposed algorithm finds the variable nodes in small trapping sets much faster than the previous random-selection method, and it is verified that the additional complexity of the parallel unreliable path search algorithm is less than that of one iteration of an iterative decoder. Contents: 1 Introduction; 2 Overview of LDPC Codes (basic concepts, decoding, density evolution, mean evolution, quasi-cyclic LDPC codes, error floor and trapping sets); 3 A New Two-Stage Decoding Scheme with Unreliable Path Search; 4 Parallel Unreliable Path Search Algorithm; 5 Analysis of the Unreliable Path Search Algorithm; 6 Simulation Results; 7 Conclusions.
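
    The control flow of the scheme just described can be sketched compactly. The following Python fragment is a minimal, hypothetical illustration of the two-stage idea: a first BP attempt, then re-decoding trials that each saturate the LLR of one low-score candidate variable node. The min-sum decoder, the magnitude-based stand-in for the unreliable-path scores, and the saturation value are all assumptions for illustration, not the dissertation's actual algorithm.

# Minimal, hypothetical sketch of the two-stage re-decoding idea: after a
# failed first-stage BP attempt, saturate the channel LLR of one low-score
# candidate variable node at a time and retry decoding. The scoring rule
# here (channel |LLR| magnitude) is a stand-in for the dissertation's
# unreliable path search algorithm.
import numpy as np

def min_sum_decode(H, llr, max_iter=50):
    """Plain min-sum BP decoder; returns (hard_decision, success_flag)."""
    m, n = H.shape
    msg = np.zeros((m, n))                      # check-to-variable messages
    for _ in range(max_iter):
        total = llr + msg.sum(axis=0)           # posterior LLR per variable
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):            # all parity checks satisfied
            return hard, True
        for j in range(m):                      # check-node update
            idx = np.flatnonzero(H[j])
            v2c = total[idx] - msg[j, idx]      # extrinsic variable-to-check
            for k, i in enumerate(idx):
                others = np.delete(v2c, k)
                msg[j, i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return hard, False

def two_stage_decode(H, llr, num_trials=10, sat=25.0):
    word, ok = min_sum_decode(H, llr)           # first-stage decoding
    if ok:
        return word, ok
    # Stand-in scores: try the least reliable channel values first (ascending).
    candidates = np.argsort(np.abs(llr))[:num_trials]
    for v in candidates:                        # one manipulated node per trial
        trial_llr = llr.copy()
        trial_llr[v] = sat * np.sign(-llr[v])   # flip and saturate node v's LLR
        word, ok = min_sum_decode(H, trial_llr)
        if ok:
            return word, ok
    return word, False

    In the dissertation the scores come from the unreliable path search over the Tanner graph rather than from raw channel magnitudes, but the control flow is the same: one manipulated variable node per re-decoding trial, stopped early on the first success.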

    Architectures for soft-decision decoding of non-binary codes

    Full text link
    This thesis studies the design of non-binary decoders for error correction in modern high-speed communication systems. The goal is to propose low-complexity solutions for decoding algorithms based on non-binary low-density parity-check (NB-LDPC) codes and on Reed-Solomon codes, in order to implement efficient hardware architectures. The first part of the thesis analyzes the bottlenecks in NB-LDPC decoding algorithms and decoder architectures and proposes low-complexity, high-speed solutions based on symbol flipping. First, flooding-schedule solutions are studied with the aim of reaching the highest possible throughput without regard to coding gain. Two different decoders based on clipping and blocking techniques are proposed; however, their maximum frequency is limited by excessive wiring. For this reason, several methods to reduce the routing problems of NB-LDPC codes are explored. As a solution, a partial-broadcast architecture for symbol-flipping algorithms that mitigates routing congestion is proposed. Since the fastest flooding-schedule solutions are suboptimal in error-correction capability, serial-schedule solutions are then designed, with the aim of achieving higher speed while preserving the coding gain of the original symbol-flipping algorithms. Two serial-schedule algorithms and architectures are presented, reducing area and increasing the maximum achievable speed. Finally, the symbol-flipping algorithms are generalized, and it is shown how some particular cases can achieve a coding gain close to the Min-Sum and Min-Max algorithms with lower complexity. An efficient architecture is also proposed, showing that the area is halved compared with a direct-mapping solution. The second part of the thesis compares soft-decision Reed-Solomon decoding algorithms, concluding that the low-complexity Chase (LCC) algorithm is the most efficient solution when high speed is the main goal. However, LCC schemes are based on interpolation, which introduces hardware limitations due to its complexity. In order to reduce complexity without affecting the error-correction capability, a soft-decision LCC scheme based on hard-decision algorithms is proposed. Finally, an efficient architecture is designed for this new scheme. García Herrero, FM. (2013). Architectures for soft-decision decoding of non-binary codes [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/33753
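
    For background, the symbol-flipping algorithms studied in this thesis generalize classical binary bit-flipping decoding to Galois-field symbols. The sketch below shows the binary special case, a Gallager-style bit-flipping decoder; it is an illustrative stand-in, not one of the thesis's NB-LDPC algorithms.

# Gallager-style bit-flipping decoder (binary special case of the
# symbol-flipping family): each iteration flips the bits involved in the
# largest number of unsatisfied parity checks. Illustrative sketch only.
import numpy as np

def bit_flip_decode(H, y, max_iter=30):
    """H: (m, n) binary parity-check matrix; y: received hard decisions."""
    word = y.copy()
    for _ in range(max_iter):
        syndrome = H @ word % 2                 # unsatisfied checks
        if not syndrome.any():
            return word, True                   # valid codeword found
        # Count, for each bit, how many unsatisfied checks it touches.
        votes = syndrome @ H                    # length-n vote tally
        word[votes == votes.max()] ^= 1         # flip the worst bits
    return word, False

H = np.array([[1, 1, 0, 1, 0],                  # made-up toy matrix
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]])
y = np.array([1, 1, 0, 1, 0])                   # one bit flipped by the channel
print(bit_flip_decode(H, y))                    # recovers [1, 0, 0, 1, 0]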

    On performance analysis and implementation issues of iterative decoding for graph based codes

    Get PDF
    There is no doubt that long random-like codes have the potential to achieve good performance because of their excellent distance spectra. However, such codes remained useless in practical applications due to the lack of decoders offering good performance at an acceptable complexity. The invention of the turbo code marks a milestone in channel coding theory in that it achieves near-Shannon-limit performance by using an elegant iterative decoding algorithm. This great success stimulated intensive research on long compound codes sharing the same decoding mechanism. Among these long codes are the low-density parity-check (LDPC) code and the product code, which deliver excellent performance. In this work, iterative decoding algorithms for LDPC codes and product codes are studied in the context of belief propagation. A large part of this work concerns LDPC codes. First, the concept of iterative decoding capacity is established in the context of density evolution. Two simulation-based methods approximating decoding capacity are applied to LDPC codes and their effectiveness is evaluated. A suboptimal iterative decoder, the Max-Log-MAP algorithm, is also investigated; it has been studied intensively for turbo codes but seems to have been neglected for LDPC codes. The specific density evolution procedure for Max-Log-MAP decoding is developed, and the performance of LDPC codes with infinite block length is well predicted using this procedure. Two implementation issues in the iterative decoding of LDPC codes are studied: the design of a quantized decoder, and the influence of a mismatched signal-to-noise ratio (SNR) level on decoding performance. The theoretical capacities of the quantized LDPC decoder, under the Log-MAP and Max-Log-MAP algorithms, are derived through discretized density evolution. It is shown that the key point in designing a quantized decoder is to pick a proper dynamic range; quantization loss in terms of bit error rate (BER) performance can be kept remarkably low, provided that the dynamic range is chosen wisely. The decoding capacity under a fixed SNR offset is obtained, and the robustness of LDPC codes of practical length is evaluated through simulations. It is found that the amount of SNR offset that can be tolerated depends on the code length. The remaining part of this dissertation deals with the iterative decoding of product codes. Two issues are investigated: improving BER performance by mitigating cycle effects, and a parallel decoding structure, which is conceptually better than serial decoding and yields lower decoding latency.
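
    The density evolution mentioned above takes an especially simple one-dimensional form on the binary erasure channel, which makes for a compact worked example. The sketch below iterates the (dv, dc)-regular recursion x <- eps*(1-(1-x)^(dc-1))^(dv-1) and bisects on the channel erasure probability eps to approximate the decoding threshold; this BEC special case is an illustrative stand-in for the AWGN density evolution procedures studied in the dissertation.

# Density evolution for a (dv, dc)-regular LDPC ensemble on the binary
# erasure channel: iterate x <- eps * (1 - (1 - x)**(dc - 1))**(dv - 1)
# and bisect on eps to find the decoding threshold. BEC special case only.
def converges(eps, dv, dc, iters=5000, tol=1e-12):
    x = eps                                     # erasure prob. after iteration 0
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True                         # erasures die out: decodable
    return False

def bec_threshold(dv, dc, lo=0.0, hi=1.0):
    for _ in range(40):                         # bisection on channel erasure eps
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if converges(mid, dv, dc) else (lo, mid)
    return lo

print(bec_threshold(3, 6))                      # ~0.4294 for the (3,6) ensemble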

    Polar coding for optical wireless communication

    Get PDF

    CONVERGENCE IMPROVEMENT OF ITERATIVE DECODERS

    Get PDF
    Iterative decoding techniques shook up the field of error correction and communications in general. Their remarkable trade-off between complexity and performance offered much more freedom in code design and made highly complex codes, considered undecodable until recently, part of almost any communication system. Nevertheless, iterative decoding is a sub-optimal decoding method and, as such, has attracted huge research interest. The iterative decoder still hides many of its secrets, however, as it has not yet been possible to fully describe its behaviour and its cost function. This work presents the convergence problem of iterative decoding from various angles and explores methods for reducing sub-optimalities in its operation. The decoding algorithms for both LDPC and turbo codes were investigated, and aspects that contribute to convergence problems were identified. A new algorithm was proposed, capable of providing considerable coding gain in any iterative scheme. Moreover, it was shown that for some codes the proposed algorithm is sufficient to eliminate any sub-optimality and perform maximum-likelihood decoding. Its performance and efficiency were compared with those of other convergence-improvement schemes. Various conditions that can be considered critical to the outcome of the iterative decoder were also investigated, and the decoding algorithm of LDPC codes was followed analytically to verify the experimental results.

    Mathematical Programming Decoding of Binary Linear Codes: Theory and Algorithms

    Full text link
    Mathematical programming is a branch of applied mathematics and has recently been used to derive new decoding approaches, challenging established but often heuristic algorithms based on iterative message passing. Concepts from mathematical programming used in the context of decoding include linear, integer, and nonlinear programming, network flows, notions of duality, as well as matroid and polyhedral theory. This survey article reviews and categorizes decoding methods based on mathematical programming approaches for binary linear codes over binary-input memoryless symmetric channels. Comment: 17 pages, submitted to the IEEE Transactions on Information Theory. Published July 201
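
    A concrete representative of the methods surveyed above is linear programming (LP) decoding, which relaxes maximum-likelihood decoding to a linear program over a relaxation of the codeword polytope. The sketch below sets up the standard Feldman-style LP for a toy code with scipy.optimize.linprog; the small parity-check matrix and LLR vector are made-up examples, and the explicit enumeration of odd-sized subsets is only feasible for low check degrees.

# Feldman-style LP decoding sketch: relax ML decoding to
#   minimize sum_i llr[i] * x[i]
# subject to, for every check j and every odd-sized subset S of its
# neighborhood N(j),
#   sum_{i in S} x_i - sum_{i in N(j)\S} x_i <= |S| - 1,   0 <= x_i <= 1.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, llr):
    m, n = H.shape
    A, b = [], []
    for j in range(m):
        nbrs = np.flatnonzero(H[j])
        for size in range(1, len(nbrs) + 1, 2):     # odd-sized subsets S
            for S in combinations(nbrs, size):
                row = np.zeros(n)
                row[nbrs] = -1.0                    # -1 on N(j)\S ...
                row[list(S)] = 1.0                  # ... +1 on S
                A.append(row)
                b.append(len(S) - 1.0)
    res = linprog(c=llr, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x    # integral iff the LP optimum is a codeword (ML certificate)

H = np.array([[1, 1, 0, 1, 0, 0],                   # made-up toy code
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
llr = np.array([2.1, -0.4, 1.3, 0.9, -1.7, 0.8])    # LLRs, log P(y|0)/P(y|1)
print(np.round(lp_decode(H, llr), 3))

    A fractional LP optimum (a pseudocodeword) signals a decoding failure, which is exactly the behaviour the polyhedral analysis in this line of work studies.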

    Design and implementation of decoders for error correction in high-speed communication systems

    Full text link
    This thesis is focused on the design and implementation of binary low-density parity-check (LDPC) code decoders for high-speed modern communication systems. The basics of LDPC codes, and the performance and bottlenecks, in terms of complexity and hardware efficiency, of the main soft-decision and hard-decision decoding algorithms (such as Min-Sum, Optimized 2-bit Min-Sum, and Reliability-based Iterative Majority-Logic), are analyzed. The complexity and performance of those algorithms are improved to allow efficient hardware architectures. A new decoding algorithm called One-Minimum Min-Sum is proposed. It considerably reduces the complexity of the check-node update equations of the Min-Sum algorithm: the second minimum is estimated from the first minimum value by means of a linear approximation that allows dynamic adjustment. The Optimized 2-bit Min-Sum algorithm is modified to initialize it with the complete LLR values and to introduce the extrinsic information into the messages sent from the variable nodes, and its variable-node equation is reformulated to reduce its complexity. Both algorithms were tested for the (2048,1723) RS-based LDPC code and the (16129,15372) LDPC code using an FPGA-based hardware emulator. They exhibit BER performance very close to the Min-Sum algorithm and do not introduce an early error floor. In order to show the hardware advantages of the proposed algorithms, decoders were implemented in a 90 nm CMOS process and in FPGA devices, based on two types of architectures: a full-parallel one and a partial-parallel one with a horizontal layered schedule. The results show that the decoders are more area-time efficient than other published decoders and that the low complexity of the Modified Optimized 2-bit Min-Sum allows the implementation of 10 Gbps decoders in current FPGA devices. Finally, a new hard-decision decoding algorithm, the Historical-Extrinsic Reliability-Based Iterative Decoder, is presented. This algorithm introduces the new idea of treating hard-decision votes as soft decisions to compute the extrinsic information of previous iterations. It is suitable for high-rate codes and improves the BER performance of the previous RBI-MLGD algorithms, with similar complexity.
    Català Pérez, JM. (2017). Design and implementation of decoders for error correction in high-speed communication systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86152
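
    The check-node simplification described in this abstract can be sketched briefly. In a standard Min-Sum check-node update, every outgoing message carries the smallest incoming magnitude, except the message sent back on the edge that supplied that minimum, which carries the second-smallest; a One-Minimum-style variant replaces the second minimum with an estimate derived from the first. The linear coefficients below are illustrative placeholders, since the thesis's dynamically adjusted approximation is not specified in the abstract.

# Sketch of a One-Minimum-style Min-Sum check-node update: only the first
# minimum is searched; the second minimum, needed only for the message sent
# back toward the minimum's own edge, is estimated from the first by a
# linear rule. The coefficients (a, b) are illustrative placeholders, not
# the thesis's dynamically adjusted values.
import numpy as np

def check_node_one_min(v2c, a=1.25, b=0.25):
    """v2c: incoming variable-to-check LLRs at one check node.
    Returns the outgoing check-to-variable messages."""
    mag = np.abs(v2c)
    i_min = int(np.argmin(mag))
    m1 = mag[i_min]                      # true first minimum
    m2_est = a * m1 + b                  # estimated second minimum
    sign_prod = np.prod(np.sign(v2c))
    out = np.empty_like(v2c, dtype=float)
    for k in range(len(v2c)):
        extr_mag = m2_est if k == i_min else m1
        out[k] = np.sign(v2c[k]) * sign_prod * extr_mag
    return out

print(check_node_one_min(np.array([1.8, -0.6, 2.4, -3.1])))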

    Detection and Removal of Cycles in LDPC Codes

    Get PDF
    Information technology has advanced enormously, and every day more users are connected through modern communication systems. With the swift growth of communication networks, there is high demand for efficient and reliable digital transmission and data-storage systems. LDPC codes are among the error-correcting channel codes developed for systems that require high reliability, and low-complexity decoders are useful for low-end devices with limited battery or computational power. For LDPC codes, the presence of short cycles in the parity-check matrix lowers the decoding threshold, making the code less efficient. In this research, a method is developed from an existing algorithm that finds the exact positions of the bits forming length-4 cycles in the parity-check matrix, which might cause decoding failure after transmission over the binary erasure channel. Once the short cycles are detected, they can be removed by puncturing to obtain capacity-achieving codes. The code obtained by this method has a threshold of 0.42 at rate 0.5, asymptotically close to the mother code. Simulations show that, for a small number of iterations and under the same channel erasure probability, a decoder block-error probability close to 10^-6 is achievable. As no additional decoding algorithm is employed, the proposed scheme adds no computational complexity to the decoder. Furthermore, as an extension of the method, a scheme is proposed to generate rate-compatible LDPC codes from a rate-0.5 regular code by puncturing with varying puncturing fractions. With the proposed method of generating codes of different rates, the same amount of information can be sent with less parity.
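
    A length-4 cycle in a binary parity-check matrix H exists exactly when two rows share ones in at least two columns, so candidate positions can be read directly off the overlap matrix H*H^T. The sketch below illustrates this standard detection idea on a made-up matrix; it is not the specific algorithm extended in this thesis.

# Detecting length-4 cycles in a binary parity-check matrix H: a 4-cycle
# exists exactly when two rows share ones in two (or more) columns, i.e.
# when an off-diagonal entry of H @ H.T is >= 2.
import numpy as np
from itertools import combinations

def four_cycles(H):
    """Yield (row1, row2, col1, col2) tuples, one per length-4 cycle."""
    overlap = H @ H.T                    # overlap[j,k] = #columns shared by rows j,k
    m = H.shape[0]
    for j in range(m):
        for k in range(j + 1, m):
            if overlap[j, k] >= 2:       # rows j,k close at least one 4-cycle
                shared = np.flatnonzero(H[j] & H[k])
                for c1, c2 in combinations(shared, 2):
                    yield j, k, c1, c2

H = np.array([[1, 1, 0, 1, 0],           # made-up example: rows 0 and 1
              [1, 1, 0, 0, 1],           # share columns 0 and 1 -> one 4-cycle
              [0, 0, 1, 1, 1]])
print(list(four_cycles(H)))              # [(0, 1, 0, 1)]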