
    Error-correction coding for high-density magnetic recording channels.

    Future high-density magnetic recording channels (MRCs) are subject to increasing noise contamination and intersymbol interference, which makes error-correction codes (ECCs) all the more important. Replacing today's Reed-Solomon (RS)-coded ECC systems with low-density parity-check (LDPC)-coded systems has attracted considerable research attention because of the large decoding gain that LDPC-coded systems offer under random noise. This dissertation instead investigates systems that retain the RS-coded architecture by applying recently proposed soft-decision RS decoding techniques, and presents the resulting performance improvements. The soft-decision RS decoding algorithms and their performance on magnetic recording channels are studied, and algorithm implementation and hardware architecture issues are discussed. Several novel variations of the KV algorithm, such as the soft Chase algorithm, the re-encoded Chase algorithm, and the forward recursive algorithm, are proposed, and the performance of nested codes using RS and LDPC codes as component codes is investigated for bursty-noise magnetic recording channels. Finally, a promising algorithm that combines RS decoding with LDPC decoding is investigated and a reduced-complexity modification is proposed; it not only improves the decoding performance substantially but also maintains good performance at high signal-to-noise ratio (SNR), the region where LDPC codes experience an error floor.
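    As a rough illustration of the Chase-style multi-trial structure that the soft Chase and re-encoded Chase variants build on, the sketch below runs a bounded-distance errors-and-erasures decoder over test patterns drawn from the least reliable symbols. The decoder interface hd_decode() and the pattern choice are assumptions made only for illustration, not the dissertation's algorithms.

```python
# Minimal sketch of a Chase-style soft-decision RS trial loop (illustrative only).
# hd_decode(symbols, erasures) is a hypothetical bounded-distance errors-and-erasures
# decoder that returns a codeword or None.
from itertools import product

def chase_rs_decode(hard_symbols, reliabilities, hd_decode, num_weak=3):
    """Run hard-decision decoding over test patterns that erase subsets of the
    least reliable symbol positions; return the first successful decoding."""
    weak = sorted(range(len(reliabilities)),
                  key=lambda i: reliabilities[i])[:num_weak]
    for pattern in product((0, 1), repeat=num_weak):
        erasures = [pos for pos, flag in zip(weak, pattern) if flag]
        decoded = hd_decode(list(hard_symbols), erasures)  # one decoding trial
        if decoded is not None:
            return decoded
    return None  # all 2**num_weak trials failed
```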

    Architectures for soft-decision decoding of non-binary codes

    This thesis studies the design of non-binary decoders for error correction in modern high-speed communication systems. The goal is to propose low-complexity solutions for decoding algorithms based on non-binary low-density parity-check (NB-LDPC) codes and on Reed-Solomon codes, with the aim of implementing efficient hardware architectures. The first part of the thesis analyzes the bottlenecks in NB-LDPC decoding algorithms and decoder architectures and proposes low-complexity, high-speed solutions based on symbol flipping. First, flooding-schedule solutions are studied with the goal of reaching the highest possible speed without regard for coding gain. Two different decoders based on clipping and blocking techniques are proposed; however, the maximum frequency is limited by excessive wiring. For this reason, several methods to reduce the routing problems of NB-LDPC codes are explored, and a partial-broadcast architecture for symbol-flipping algorithms is proposed as a solution that mitigates routing congestion. Since the fastest flooding-schedule solutions are suboptimal in terms of error-correction capability, we then design serial-schedule solutions, aiming for higher speed while keeping the coding gain of the original symbol-flipping algorithms. Two serial-schedule algorithms and architectures are presented, reducing area and increasing the maximum achievable speed. Finally, the symbol-flipping algorithms are generalized, and it is shown how particular cases can achieve a coding gain close to the Min-sum and Min-max algorithms with lower complexity; an efficient architecture is also proposed, showing that the area is halved compared with a direct-mapping solution. In the second part of the thesis, soft-decision Reed-Solomon decoding algorithms are compared, concluding that the low-complexity Chase (LCC) algorithm is the most efficient solution when high speed is the main goal. However, LCC schemes rely on interpolation, which introduces some hardware limitations due to its complexity. To reduce complexity without changing the error-correction capability, a soft-decision LCC scheme based on hard-decision algorithms is proposed, and finally an efficient architecture is designed for this new scheme. García Herrero, FM. (2013). Architectures for soft-decision decoding of non-binary codes [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/33753
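    The symbol-flipping principle that the first part of the thesis accelerates can be sketched as below; unsatisfied_checks() and candidate_flips() are hypothetical helpers standing in for the thesis's flipping metrics and GF(q) arithmetic, so this is only a minimal serial-schedule illustration.

```python
# Generic symbol-flipping decoding sketch (illustrative, not the thesis algorithms).
def symbol_flipping_decode(symbols, unsatisfied_checks, candidate_flips,
                           max_iters=20):
    """Per iteration, flip the (position, value) pair that removes the most
    unsatisfied checks; stop when a valid codeword is reached or no flip helps."""
    symbols = list(symbols)
    for _ in range(max_iters):
        failed = unsatisfied_checks(symbols)
        if not failed:
            return symbols, True              # valid codeword reached
        best = None                           # (gain, position, new_value)
        for pos, new_val in candidate_flips(symbols, failed):
            trial = list(symbols)
            trial[pos] = new_val
            gain = len(failed) - len(unsatisfied_checks(trial))
            if best is None or gain > best[0]:
                best = (gain, pos, new_val)
        if best is None or best[0] <= 0:
            break                             # no flip improves the syndrome
        symbols[best[1]] = best[2]            # serial (one flip per iteration) update
    return symbols, False
```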

    PARALLEL SUBSPACE SUBCODES OF REED-SOLOMON CODES FOR MAGNETIC RECORDING CHANNELS

    Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code lower the error floor at high signal-to-noise ratio (SNR) at the price of a reduced coding gain and a less sharp waterfall region at lower SNR. This architecture fails to deal with the error floor problem when the number of errors caused by multiple dominant trapping sets is beyond the error-correction capability of the outer RS code. The ultimate goal of a sharper waterfall in the low-SNR region and a lower error floor at high SNR can be approached by introducing a parallel subspace subcode RS (SSRS) code (PSSRS) to replace the conventional RS code. In this new LDPC+PSSRS system, the PSSRS code can help localize and partially destroy the most dominant trapping sets. With the proposed iterative parallel local decoding algorithm, the LDPC decoder can correct the remaining errors by itself. The contributions of this work are: 1) we propose a PSSRS code with a parallel local SSRS structure and a three-level decoding architecture, which enables a trade-off between performance and complexity; 2) we propose a new LDPC+PSSRS system with a new iterative parallel local decoding algorithm that gains more than 0.5 dB over the conventional two-level system, with performance for 4K-byte sectors close to that of multiple LDPC-only architectures for perpendicular magnetic recording channels; 3) we develop a new decoding concept that changes the major role of the RS code from error correcting to a "partial" trapping-set destroyer.
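    A hedged sketch of the outer iteration implied by this abstract, in which the LDPC decoder and the parallel local SSRS decoders take turns; ldpc_decode() and the decoders in local_rs_decoders are hypothetical interfaces, and pinning corrections via saturated LLRs is only one plausible way to feed local corrections back, not the system's actual mechanism.

```python
# Illustrative outer loop for an LDPC + parallel-local-RS concatenation.
def ldpc_pssrs_decode(llrs, ldpc_decode, local_rs_decoders, max_outer=3):
    """Run LDPC decoding; on failure, let each local RS subcode decoder clean up
    its own symbol span (partially destroying trapping sets), then retry."""
    llrs = list(llrs)
    bits, ok = ldpc_decode(llrs)
    for _ in range(max_outer):
        if ok:
            return bits
        for rs in local_rs_decoders:          # independent local decoders (parallelizable)
            fixes = rs.decode(bits)           # corrections within this decoder's span
            if fixes:
                for idx, bit in fixes:        # pin corrected bits via saturated LLRs
                    llrs[idx] = 25.0 if bit == 0 else -25.0
        bits, ok = ldpc_decode(llrs)          # let the LDPC decoder finish the job
    return bits
```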

    Fast syndrome-based Chase decoding of binary BCH codes through Wu list decoding

    We present a new fast Chase decoding algorithm for binary BCH codes. The new algorithm reduces the complexity in comparison to a recent fast Chase decoding algorithm for Reed-Solomon (RS) codes by the authors (IEEE Trans. IT, 2022), by requiring only a single Koetter iteration per edge of the decoding tree. In comparison to the fast Chase algorithms presented by Kamiya (IEEE Trans. IT, 2001) and Wu (IEEE Trans. IT, 2012) for binary BCH codes, the polynomials updated throughout the algorithm of the current paper typically have a much lower degree. To achieve the complexity reduction, we build on a new isomorphism between two solution modules in the binary case, and on a degenerate case of the soft-decision (SD) version of the Wu list decoding algorithm. Roughly speaking, we prove that when the maximum list size is 1 in Wu list decoding of binary BCH codes, assigning a multiplicity of 1 to a coordinate has the same effect as flipping this coordinate in a Chase-decoding trial. The solution-module isomorphism also provides a systematic way to benefit from the binary alphabet for reducing the complexity in bounded-distance hard-decision (HD) decoding. Along the way, we briefly develop the Groebner-basis formulation of the Wu list decoding algorithm for binary BCH codes, which is missing in the literature.
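    The complexity claim can be pictured as a depth-first walk over the Chase decoding tree in which only the flip edges trigger an incremental Koetter-style update; koetter_update() and extract_codeword() below are hypothetical stand-ins for the paper's Groebner-basis machinery, so the sketch shows the traversal pattern rather than the actual algebra.

```python
# Illustrative traversal of a Chase decoding tree with one incremental update per flip edge.
def chase_tree_decode(root_state, flip_positions, koetter_update, extract_codeword):
    """Depth-first walk: reusing the parent solution-module state means each
    additional flipped coordinate costs a single incremental update."""
    stack = [(root_state, 0)]                # (solution-module state, next position index)
    candidates = []
    while stack:
        state, i = stack.pop()
        cw = extract_codeword(state)         # codeword test at this tree node
        if cw is not None:
            candidates.append(cw)
        if i < len(flip_positions):
            stack.append((state, i + 1))     # edge: leave position i unflipped (free)
            child = koetter_update(state, flip_positions[i])  # one update per flip edge
            stack.append((child, i + 1))     # edge: flip position i
    return candidates                        # rank externally, e.g. by a soft metric
```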

    High Performance Decoder Architectures for Error Correction Codes

    Due to the rapid development of the information industry, modern communication and storage systems require much higher data rates and reliability to serve various demanding applications. However, these systems suffer from noise in practical channels. Various error correction codes (ECCs), such as Reed-Solomon (RS) codes, convolutional codes, turbo codes, low-density parity-check (LDPC) codes and so on, have been adopted in many current standards. With ever-increasing data rates, research into more advanced ECCs and the corresponding efficient decoders will never stop. Binary LDPC codes have been adopted in many modern communication and storage applications due to their superior error performance and efficient hardware decoder implementations. Non-binary LDPC (NB-LDPC) codes are an important extension of traditional binary LDPC codes. Compared with their binary counterparts, NB-LDPC codes show better error performance under short to moderate block lengths and higher-order modulations; moreover, they have a lower error floor than binary LDPC codes. In spite of the excellent error performance, it is hard for current communication and storage systems to adopt NB-LDPC codes because of their complex decoding algorithms and decoder architectures: in terms of hardware implementation, current NB-LDPC decoders need much larger area and achieve much lower data throughput. Besides the recently proposed NB-LDPC codes, polar codes, discovered by Arıkan, appear as a very promising candidate for future communication and storage systems. Polar codes are considered a major breakthrough in recent coding theory and are proven to be capacity-achieving over binary-input symmetric memoryless channels. Besides, polar codes can be decoded by the successive cancellation (SC) algorithm with a complexity of O(N log2 N), where N is the block length. The main sticking point of polar codes to date is that their error performance under short to moderate block lengths is inferior to that of LDPC codes or turbo codes. The list decoding technique can be used to improve the error performance of SC algorithms at the cost of higher computational and memory complexity. Besides, the hardware implementations of current SC-based decoders suffer from long decoding latency, which is unsuitable for modern high-speed communications. ECCs also find their applications in improving the reliability of network coding. Random linear network coding is an efficient technique for disseminating information in networks, but it is highly susceptible to errors. Kötter-Kschischang (KK) codes and Mahdavifar-Vardy (MV) codes are two important families of subspace codes that provide error control in noncoherent random linear network coding. List decoding has been used to decode MV codes beyond half the minimum distance. Existing hardware implementations of the rank metric decoder for KK codes suffer from limited throughput, long latency and high area complexity. The interpolation-based list decoding algorithm for MV codes still has high computational complexity, and its feasibility for hardware implementation has not been investigated. In this exam, we present efficient decoding algorithms and hardware decoder architectures for NB-LDPC codes, polar codes, and KK and MV codes. For NB-LDPC codes, an efficient shuffled decoder architecture is presented to reduce the average number of iterations and improve the throughput. Besides, a fully parallel decoder architecture for NB-LDPC codes with short or moderate block lengths is also presented; it achieves much higher throughput and area efficiency compared with state-of-the-art NB-LDPC decoders. For polar codes, a memory-efficient list decoder architecture is first presented. Based on our reduced-latency list decoding algorithm for polar codes, a high-throughput list decoder architecture is also presented. Finally, we present efficient decoder architectures for both KK and MV codes.
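    For reference, the O(N log2 N) successive cancellation recursion mentioned above can be sketched in a few lines using the min-sum approximation in the LLR domain; this is a textbook-style illustration (natural bit order, frozen bits forced to zero), not the list decoder architectures proposed in the work.

```python
# Sketch of successive cancellation (SC) decoding for polar codes (LLR domain,
# min-sum approximation, natural bit ordering); illustrative only.
import math

def polar_encode(u):
    """Recursive polar transform (Arikan kernel), used here for partial sums."""
    if len(u) == 1:
        return u[:]
    half = len(u) // 2
    left, right = polar_encode(u[:half]), polar_encode(u[half:])
    return [l ^ r for l, r in zip(left, right)] + right

def sc_decode(llrs, frozen):
    """Return the decoded u-vector; frozen[i] is True for frozen (zero) positions."""
    n = len(llrs)
    if n == 1:
        return [0 if (frozen[0] or llrs[0] >= 0) else 1]
    half = n // 2
    a, b = llrs[:half], llrs[half:]
    # f-step (min-sum): combine the two channel halves for the upper branch.
    f = [math.copysign(1.0, x) * math.copysign(1.0, y) * min(abs(x), abs(y))
         for x, y in zip(a, b)]
    u_left = sc_decode(f, frozen[:half])
    # g-step: use the partial sums of the already-decoded left half.
    s = polar_encode(u_left)
    g = [y + (1 - 2 * si) * x for x, y, si in zip(a, b, s)]
    u_right = sc_decode(g, frozen[half:])
    return u_left + u_right
```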

    Advanced Coding Techniques with Applications to Storage Systems

    This dissertation considers several coding techniques based on Reed-Solomon (RS) and low-density parity-check (LDPC) codes. These two prominent families of error-correcting codes have attracted a great amount of interest from both theorists and practitioners and have been applied in many communication scenarios. In particular, data storage systems have greatly benefited from these codes in improving the reliability of the storage media. The first part of this dissertation presents a unified framework based on rate-distortion (RD) theory to analyze and optimize multiple decoding trials of RS codes. Finding the best set of candidate decoding patterns is shown to be equivalent to a covering problem, which can be solved asymptotically by RD theory. The proposed approach helps understand the asymptotic performance-versus-complexity trade-off of these multiple-attempt decoding algorithms and can be applied to a wide range of decoders and error models. In the second part, we consider spatially-coupled (SC) codes, or terminated LDPC convolutional codes, over intersymbol-interference (ISI) channels under joint iterative decoding. We empirically observe the phenomenon of threshold saturation, whereby the belief-propagation (BP) threshold of the SC ensemble is improved to the maximum a posteriori (MAP) threshold of the underlying ensemble. More specifically, we derive a generalized extrinsic information transfer (GEXIT) curve for the joint decoder that naturally obeys the area theorem, and we estimate the MAP and BP thresholds. We also conjecture that SC codes, thanks to threshold saturation, can universally approach the symmetric information rate of ISI channels. In the third part, a similar analysis is used to analyze the MAP thresholds of LDPC codes for several multiuser systems, namely a noisy Slepian-Wolf problem and a multiple access channel with erasures. We provide rigorous analysis and derive upper bounds on the MAP thresholds, which are shown to be tight in some cases. This analysis is a first step towards proving threshold saturation for these systems, which would imply that SC codes with joint BP decoding can universally approach the entire capacity region of the corresponding systems.
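    A minimal sketch of the multiple-decoding-trials setting analyzed in the first part: each trial erases a prescribed number of the least reliable symbols and calls an errors-and-erasures RS decoder. The decoder interface ee_decode() and the pattern set are placeholders; the dissertation's contribution is the rate-distortion analysis of how to choose such candidate sets, which is not reproduced here.

```python
# Illustrative multiple-attempt errors-and-erasures decoding loop.
def multi_trial_decode(received, reliabilities, patterns, ee_decode):
    """`patterns` is the candidate set (the 'covering' analyzed via RD theory):
    each entry gives how many of the least reliable positions to erase."""
    order = sorted(range(len(reliabilities)), key=lambda i: reliabilities[i])
    for num_erasures in patterns:
        erasures = order[:num_erasures]
        decoded = ee_decode(received, erasures)   # one errors-and-erasures attempt
        if decoded is not None:
            return decoded
    return None  # every candidate pattern failed
```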

    Advanced channel coding techniques using bit-level soft information

    In this dissertation, advanced channel decoding techniques based on bit-level soft information are studied. Two main approaches are proposed: bit-level probabilistic iterative decoding and bit-level algebraic soft-decision (list) decoding (ASD). In the first part of the dissertation, we first study iterative decoding for high-density parity-check (HDPC) codes. An iterative decoding algorithm is proposed which uses the sum-product algorithm (SPA) in conjunction with a binary parity-check matrix adapted in each decoding iteration according to the bit-level reliabilities. In contrast to the common belief that iterative decoding is not suitable for HDPC codes, this bit-level reliability-based adaptation procedure is critical to the convergence behavior of iterative decoding for HDPC codes, and it significantly improves the iterative decoding performance of Reed-Solomon (RS) codes, whose parity-check matrices are in general not sparse. We also present another iterative decoding scheme for cyclic codes that randomly shifts the bit-level reliability values in each iteration. The random-shift-based adaptation can also prevent iterative decoding from getting stuck, with a significant complexity reduction compared with the reliability-based parity-check matrix adaptation, and still provides reasonably good performance for short-length cyclic codes. In the second part of the dissertation, we investigate ASD for RS codes using bit-level soft information. In particular, we show that by carefully incorporating bit-level soft information in the multiplicity assignment and the interpolation step, ASD can significantly outperform conventional hard-decision decoding (HDD) for RS codes at a very small complexity cost, even though the kernel of ASD operates at the symbol level. More importantly, the performance of the proposed bit-level ASD can be tightly upper bounded for practical high-rate RS codes, which is in general not possible for other popular ASD schemes. Bit-level soft-decision decoding (SDD) serves as an efficient way to exploit the potential gain of many classical codes, and also facilitates the corresponding performance analysis. The proposed bit-level SDD schemes are promising and feasible alternatives to conventional symbol-level HDD schemes in many communication systems.
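    The reliability-based parity-check matrix adaptation can be sketched as a GF(2) reduction that places weight-one columns at the least reliable bit positions before each SPA iteration; the code below is an illustrative version of that step under this assumption, not the dissertation's exact procedure.

```python
# Illustrative reliability-based adaptation of a binary parity-check matrix H.
# H is a list of rows (each a list of 0/1 ints); reliabilities are LLR magnitudes.
def adapt_parity_check(H, reliabilities):
    """Gaussian-eliminate H over GF(2) so that the least reliable bit positions
    carry weight-one columns, then hand the adapted matrix to the SPA."""
    m = len(H)
    H = [row[:] for row in H]                       # work on a copy
    order = sorted(range(len(H[0])), key=lambda j: abs(reliabilities[j]))
    used_rows = set()
    for col in order:                               # least reliable columns first
        if len(used_rows) == m:
            break                                   # already found m pivots
        pivot = next((r for r in range(m)
                      if r not in used_rows and H[r][col] == 1), None)
        if pivot is None:
            continue                                # column depends on earlier pivots
        for r in range(m):                          # clear this column elsewhere
            if r != pivot and H[r][col] == 1:
                H[r] = [a ^ b for a, b in zip(H[r], H[pivot])]
        used_rows.add(pivot)
    return H                                        # sparse in the unreliable positions
```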