
    Iterative Algebraic Soft-Decision List Decoding of Reed-Solomon Codes

    In this paper, we present an iterative soft-decision decoding algorithm for Reed-Solomon codes offering both complexity and performance advantages over previously known decoding algorithms. Our algorithm is a list decoding algorithm that combines two powerful soft-decision decoding techniques previously regarded in the literature as competing, namely, the Koetter-Vardy algebraic soft-decision decoding algorithm and belief propagation based on adaptive parity-check matrices, recently proposed by Jiang and Narayanan. Building on the Jiang-Narayanan algorithm, we present a belief-propagation-based algorithm with a significant reduction in computational complexity. We introduce the concept of using a belief-propagation-based decoder to enhance the soft-input information prior to decoding with an algebraic soft-decision decoder. Our algorithm can also be viewed as an interpolation multiplicity assignment scheme for algebraic soft-decision decoding of Reed-Solomon codes.
    Comment: Submitted to IEEE for publication in Jan 200
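
    As a rough illustration of the multiplicity-assignment view mentioned above, the sketch below implements a greedy Koetter-Vardy-style assignment that converts a reliability matrix into interpolation multiplicities under a cost budget. The function name, the cost model, and the unrefined reliabilities are illustrative assumptions; in the algorithm described here the reliabilities would first be enhanced by the belief-propagation stage.

```python
import numpy as np

def kv_multiplicity_assignment(reliability, total_cost):
    """Greedy Koetter-Vardy-style multiplicity assignment (sketch).

    reliability: (q, n) array; reliability[i, j] ~ P(position j holds field element i).
    total_cost:  budget for the interpolation cost sum_{i,j} m_{i,j}(m_{i,j}+1)/2.
    """
    m = np.zeros_like(reliability, dtype=int)
    cost = 0
    while True:
        # Most "reliability per unit of extra multiplicity" first.
        i, j = np.unravel_index(np.argmax(reliability / (m + 1)), m.shape)
        delta = m[i, j] + 1            # raising m by one adds m+1 to the cost
        if cost + delta > total_cost:
            return m
        m[i, j] += 1
        cost += delta

# Toy usage: a length-7 word over GF(8), reliabilities from some soft demodulator.
rng = np.random.default_rng(0)
pi = rng.random((8, 7))
pi /= pi.sum(axis=0)
print(kv_multiplicity_assignment(pi, total_cost=30))
```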

    A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    It is well known that the Euclidean algorithm, or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
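
    For concreteness, here is a minimal sketch of the extended-Euclidean (Sugiyama) recursion the abstract builds on, written over the prime field GF(929) as a stand-in for GF(2^m) to keep the arithmetic short. Running it on (x^{2t}, S(x)) yields the error locator and evaluator; the errata variant described above is obtained by replacing these initial conditions with the Forney syndrome polynomial and seeding the locator register with the erasure locator polynomial. Names and the field choice are assumptions for illustration only.

```python
P = 929  # prime field modulus, an illustrative stand-in for GF(2^m)

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def deg(a):
    a = trim(a)
    return -1 if a == [0] else len(a) - 1

def sub(a, b):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % P
                 for i in range(n)])

def mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return trim(out)

def divmod_poly(a, b):
    a, b = trim(list(a)), trim(b)
    q = [0] * max(deg(a) - deg(b) + 1, 1)
    inv_lead = pow(b[-1], P - 2, P)              # inverse of b's leading coefficient
    while a != [0] and deg(a) >= deg(b):
        shift = deg(a) - deg(b)
        coef = (a[-1] * inv_lead) % P
        q[shift] = coef
        a = sub(a, [0] * shift + [(coef * c) % P for c in b])
    return trim(q), a

def euclidean_core(syndrome, two_t):
    """Extended Euclid on (x^{2t}, S(x)); stops when deg(remainder) < t.
    Returns (locator, evaluator), each up to a common scalar factor."""
    r_prev, r = [0] * two_t + [1], trim(list(syndrome))   # x^{2t} and S(x), low-order first
    u_prev, u = [0], [1]
    while deg(r) >= two_t // 2:
        q, rem = divmod_poly(r_prev, r)
        r_prev, r = r, rem
        u_prev, u = u, sub(u_prev, mul(q, u))
    return u, r
```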

    Architectures for soft-decision decoding of non-binary codes

    This thesis studies the design of non-binary decoders for error correction in modern high-speed communication systems. The goal is to propose low-complexity solutions for decoding algorithms based on non-binary low-density parity-check (NB-LDPC) codes and on Reed-Solomon codes, in order to implement efficient hardware architectures. The first part of the thesis analyzes the bottlenecks in existing NB-LDPC decoding algorithms and architectures and proposes low-complexity, high-speed solutions based on symbol flipping. First, flooding-schedule solutions are studied with the aim of reaching the highest possible throughput without regard to coding gain. Two different decoders based on clipping and blocking techniques are proposed; however, their maximum frequency is limited by excessive wiring. For this reason, several methods for reducing the routing problems of NB-LDPC codes are explored, and a partial-broadcast architecture for symbol-flipping algorithms is proposed as a solution that mitigates routing congestion. Since the fastest flooding-schedule solutions are suboptimal in terms of error-correction capability, serial-schedule solutions were designed with the goal of achieving higher speed while preserving the coding gain of the original symbol-flipping algorithms. Two serial-schedule algorithms and architectures are presented, reducing area and increasing the maximum achievable speed. Finally, the symbol-flipping algorithms are generalized, and it is shown how particular cases can achieve a coding gain close to the Min-sum and Min-max algorithms with lower complexity; an efficient architecture is also proposed, showing that the area is halved compared with a direct-mapping solution. The second part of the thesis compares soft-decision Reed-Solomon decoding algorithms, concluding that the low-complexity Chase (LCC) algorithm is the most efficient solution when high speed is the main objective. However, LCC schemes rely on interpolation, which introduces hardware limitations due to its complexity. In order to reduce complexity without modifying the error-correction capability, a soft-decision LCC scheme based on hard-decision algorithms is proposed. Finally, an efficient architecture is designed for this new scheme.
    García Herrero, FM. (2013). Architectures for soft-decision decoding of non-binary codes [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/33753
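
    To make the LCC part of the abstract concrete, the sketch below generates the 2^eta Chase test vectors from which an LCC decoder selects a codeword: the eta least reliable positions alternate between the hard-decision symbol and the second most likely one. The function and variable names are hypothetical, and this is only the test-vector step, not the thesis's decoding architecture.

```python
from itertools import product
import numpy as np

def lcc_test_vectors(hard, second_best, reliability, eta):
    """Build the 2**eta test vectors of a low-complexity Chase (LCC) decoder (sketch).

    hard:        hard-decision symbol per position
    second_best: second most likely symbol per position
    reliability: confidence of the hard decision per position (larger = more reliable)
    eta:         number of least-reliable positions to vary
    """
    hard = np.asarray(hard)
    weak = np.argsort(reliability)[:eta]          # eta least-reliable positions
    vectors = []
    for choice in product([0, 1], repeat=eta):    # all 2**eta hard/second-best patterns
        v = hard.copy()
        for pos, flip in zip(weak, choice):
            if flip:
                v[pos] = second_best[pos]
        vectors.append(v)
    return vectors

# Toy usage for a length-7 word with eta = 2 -> 4 test vectors.
hd  = [3, 1, 4, 1, 5, 2, 6]
sb  = [2, 0, 5, 3, 4, 1, 7]
rel = [0.9, 0.2, 0.8, 0.1, 0.95, 0.7, 0.6]
for tv in lcc_test_vectors(hd, sb, rel, eta=2):
    print(tv)
```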

    AONT-LT: a Data Protection Scheme for Cloud and Cooperative Storage Systems

    We propose a variant of the well-known AONT-RS scheme for dispersed storage systems. The novelty consists in replacing the Reed-Solomon code with rateless Luby transform (LT) codes. The resulting system, named AONT-LT, is able to improve performance by dispersing the data over an arbitrarily large number of storage nodes while ensuring limited complexity. The proposed solution is particularly suitable for cooperative storage systems. It is shown that while the AONT-RS scheme requires the adoption of fragmentation to achieve widespread distribution, thus penalizing performance, the new AONT-LT scheme can exploit variable-length codes that achieve very good performance and scalability.
    Comment: 6 pages, 8 figures, to be presented at the 2014 High Performance Computing & Simulation Conference (HPCS 2014) - Workshop on Security, Privacy and Performance in Cloud Computing
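
    The core operation behind the Luby transform replacement is simple to sketch: each encoded symbol is the XOR of a randomly chosen subset of source fragments, with the subset size drawn from a degree distribution. The sketch below uses an illustrative toy distribution rather than the robust soliton distribution a practical fountain code would use; the names and parameters are assumptions, not the AONT-LT construction itself.

```python
import os
import random

def lt_encode_symbol(fragments, rng, degree_dist):
    """Produce one LT-encoded symbol: pick a degree, XOR that many random fragments.

    fragments:   list of equal-length byte strings (e.g. the dispersed AONT output)
    degree_dist: list of (degree, probability) pairs
    Returns (chosen_indices, encoded_bytes); the indices act as the symbol header
    a collector would need for belief-propagation decoding later.
    """
    r, d = rng.random(), degree_dist[-1][0]
    acc = 0.0
    for degree, p in degree_dist:
        acc += p
        if r <= acc:
            d = degree
            break
    idx = rng.sample(range(len(fragments)), d)
    out = bytes(len(fragments[0]))
    for i in idx:
        out = bytes(a ^ b for a, b in zip(out, fragments[i]))
    return idx, out

# Toy usage: 8 source fragments of 16 bytes and an illustrative degree distribution.
rng = random.Random(1)
frags = [os.urandom(16) for _ in range(8)]
dist = [(1, 0.125), (2, 0.5), (3, 0.2), (4, 0.175)]   # not the robust soliton distribution
for _ in range(3):
    idx, sym = lt_encode_symbol(frags, rng, dist)
    print(idx, sym.hex())
```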

    Media-Based MIMO: A New Frontier in Wireless Communications

    The idea of Media-Based Modulation (MBM) is to embed information in the variations of the transmission medium (channel state). This is in contrast to legacy wireless systems, where data is embedded in a Radio Frequency (RF) source prior to the transmit antenna. MBM offers several advantages over legacy systems, including "additivity of information over multiple receive antennas" and "inherent diversity over a static fading channel". MBM is particularly suitable for transmitting high data rates using a single transmit and multiple receive antennas (Single Input-Multiple Output Media-Based Modulation, or SIMO-MBM). However, complexity issues limit the amount of data that can be embedded in the channel state using a single transmit unit. To address this shortcoming, the current article introduces the idea of Layered Multiple Input-Multiple Output Media-Based Modulation (LMIMO-MBM). Relying on a layered structure, LMIMO-MBM can significantly reduce both hardware and algorithmic complexity, as well as the training overhead, compared with SIMO-MBM. Simulation results show excellent performance in terms of Symbol Error Rate (SER) versus Signal-to-Noise Ratio (SNR). For example, a 4x16 LMIMO-MBM system is capable of transmitting 32 bits of information per (complex) channel use, with SER ~ 10^-5 at Eb/N0 ~ -3.5 dB (or SER ~ 10^-4 at Eb/N0 = -4.5 dB). This performance is achieved using a single transmission and without adding any redundancy for Forward Error Correction (FEC). This means that, in addition to its excellent SER versus energy/rate performance, MBM relaxes the need for complex FEC structures and thereby minimizes transmission delay. Overall, LMIMO-MBM provides a promising alternative to MIMO and Massive MIMO for the realization of 5G wireless networks.
    Comment: 26 pages, 11 figures; additional examples are given to further explain the idea of Media-Based Modulation. Capacity figure added
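
    A toy model helps to see how information can live in the channel state itself: if the transmitter can switch the medium among M known realizations, the receiver simply picks the stored realization closest to what it observes. The sketch below is only a conceptual single-shot ML detector under this simplified assumption, not the LMIMO-MBM scheme of the article, and all names and parameters are illustrative.

```python
import numpy as np

def mbm_toy_detect(y, channel_book):
    """Toy ML detector for media-based modulation (sketch).

    channel_book: (M, Nr) complex array, the Nr-antenna channel realization
                  learned during training for each of the M selectable media states.
    y:            received Nr-vector, equal to channel_book[m] plus noise for some m.
    Returns the index of the closest media state (= the detected message).
    """
    return int(np.argmin(np.linalg.norm(channel_book - y, axis=1)))

# Toy usage: M = 16 media states (4 bits per channel use), Nr = 16 receive antennas.
rng = np.random.default_rng(0)
M, Nr = 16, 16
book = (rng.normal(size=(M, Nr)) + 1j * rng.normal(size=(M, Nr))) / np.sqrt(2)
msg = 11
noise = 0.1 * (rng.normal(size=Nr) + 1j * rng.normal(size=Nr))
print(mbm_toy_detect(book[msg] + noise, book))   # -> 11 with high probability
```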

    Architecture for time or transform domain decoding of Reed-Solomon codes

    Two pipelined (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and the transform domain decoder have a modified GCD unit that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and a polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm, for the final decoding steps prior to adding the received RS coded message to produce the decoded output message.
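
    As an illustration of the Chien-search step used by the time domain decoder, the sketch below evaluates an errata locator polynomial at alpha^-i for every codeword position over GF(2^8) (generator polynomial 0x11d, as used by (255, k) RS codes) and reports the positions where it vanishes. It is a software illustration of the search only, not the pipelined VLSI cell of the patent.

```python
def gf_mul(a, b, poly=0x11d):
    """Multiply two GF(2^8) elements (carry-less multiply reduced mod poly)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def chien_search(locator, n=255, alpha=2):
    """Return the positions i in 0..n-1 where locator(alpha^-i) == 0."""
    roots = []
    for i in range(n):
        x = gf_pow(alpha, (255 - i) % 255)   # alpha^-i, since alpha^255 = 1
        acc, xp = 0, 1
        for coef in locator:                 # evaluate sum_k coef_k * x^k
            acc ^= gf_mul(coef, xp)
            xp = gf_mul(xp, x)
        if acc == 0:
            roots.append(i)
    return roots

# Toy usage: tau(x) = (1 + alpha^3 x)(1 + alpha^10 x) locates errata at positions 3 and 10.
a3, a10 = gf_pow(2, 3), gf_pow(2, 10)
tau = [1, a3 ^ a10, gf_mul(a3, a10)]         # coefficients, low-order first
print(chien_search(tau))                     # -> [3, 10]
```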

    EVENODD: An Efficient Scheme for Tolerating Double Disk Failures in RAID Architectures

    We present a novel method, which we call EVENODD, for tolerating up to two disk failures in RAID architectures. EVENODD employs the addition of only two redundant disks and consists of simple exclusive-OR computations. This redundant storage is optimal, in the sense that two failed disks cannot be retrieved with fewer than two redundant disks. A major advantage of EVENODD is that it requires only parity hardware, which is typically present in standard RAID-5 controllers; hence, EVENODD can be implemented on standard RAID-5 controllers without any hardware changes. The most commonly used scheme that employs optimal redundant storage (i.e., two extra disks) is based on Reed-Solomon (RS) error-correcting codes. That scheme requires computation over finite fields and results in a more complex implementation. For example, we show that the complexity of implementing EVENODD in a disk array with 15 disks is about 50% of that required when using the RS scheme. The new scheme is not limited to RAID architectures: it can be used in any system requiring large symbols and relatively short codes, for instance, in multitrack magnetic recording. To this end, we also present a decoding algorithm for one column (track) in error.
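
    Because EVENODD uses only XOR, its encoder fits in a few lines. The sketch below is an illustrative reading of the construction described in the abstract (p prime, a (p-1) x p data array, one row-parity column, and one diagonal-parity column built around a shared "missing diagonal" term S), written from memory of the scheme rather than as a verified reference implementation.

```python
import numpy as np

def evenodd_encode(data, p):
    """EVENODD encoding sketch: data is a (p-1) x p bit array with p prime.
    Returns a (p-1) x (p+2) array with row parity (column p) and
    diagonal parity (column p+1) appended; XOR only, no finite-field math."""
    assert data.shape == (p - 1, p)
    # Imaginary all-zero row p-1 simplifies the diagonal indexing below.
    a = np.vstack([data, np.zeros((1, p), dtype=data.dtype)])
    out = np.zeros((p - 1, p + 2), dtype=data.dtype)
    out[:, :p] = data
    # Column p: plain row parity.
    out[:, p] = np.bitwise_xor.reduce(data, axis=1)
    # S: parity of the "missing" diagonal, shared by every diagonal-parity symbol.
    S = 0
    for j in range(1, p):
        S ^= a[(p - 1 - j) % p, j]
    # Column p+1: diagonal parity.
    for i in range(p - 1):
        d = S
        for j in range(p):
            d ^= a[(i - j) % p, j]
        out[i, p + 1] = d
    return out

# Toy usage with p = 5: a 4 x 5 block of data bits -> 4 x 7 encoded array.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(4, 5), dtype=np.uint8)
print(evenodd_encode(data, p=5))
```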

    A Simplified Min-Sum Decoding Algorithm for Non-Binary LDPC Codes

    Non-binary low-density parity-check (NB-LDPC) codes are robust to various channel impairments. However, with the existing decoding algorithms, decoder implementations are expensive because of their excessive computational complexity and memory usage. Based on combinatorial optimization, we present an approximation method for the check node processing. The simulation results demonstrate that our scheme incurs only a small performance loss over the additive white Gaussian noise channel and the independent Rayleigh fading channel. Furthermore, the proposed reduced-complexity realization provides significant savings in hardware, so it yields a good performance-complexity tradeoff and can be efficiently implemented.
    Comment: Partially presented at ICNC 2012, International Conference on Computing, Networking and Communications. Accepted by IEEE Transactions on Communications
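
    The expense the abstract targets sits in the check node: for a check node of degree d over GF(q), the exact min-sum message requires a search over q^(d-1) configurations. The brute-force sketch below (over GF(4), with illustrative names) computes that exact message and thus shows what a reduced-complexity approximation has to avoid; it is not the paper's simplified algorithm.

```python
from itertools import product

# GF(4): addition is XOR; multiplication via a small table (elements 0, 1, a, a^2 -> 0..3).
GF4_MUL = [[0, 0, 0, 0],
           [0, 1, 2, 3],
           [0, 2, 3, 1],
           [0, 3, 1, 2]]

def check_node_min_sum(incoming, coeffs, k):
    """Exact min-sum check-to-variable message for edge k (brute force).

    incoming: one length-4 cost vector per connected edge, 0 = most likely symbol.
    coeffs:   nonzero GF(4) coefficients h_j of the parity-check row.
    Returns msg with msg[a] = min cost of the other edges subject to
    the parity constraint sum_j h_j * x_j = 0 with x_k = a.
    """
    others = [j for j in range(len(incoming)) if j != k]
    msg = [float("inf")] * 4
    for config in product(range(4), repeat=len(others)):   # q^(d-1) configurations
        acc, cost = 0, 0.0
        for j, x in zip(others, config):
            acc ^= GF4_MUL[coeffs[j]][x]                    # GF(4) addition is XOR
            cost += incoming[j][x]
        for a in range(4):                                  # unique x_k closing the parity
            if GF4_MUL[coeffs[k]][a] == acc:
                msg[a] = min(msg[a], cost)
                break
    return msg

# Toy usage: a degree-3 check node with coefficients (1, a, a^2).
llrs = [[0.0, 1.2, 3.1, 2.4],
        [0.5, 0.0, 2.2, 1.9],
        [1.1, 0.8, 0.0, 2.7]]
print(check_node_min_sum(llrs, coeffs=[1, 2, 3], k=0))
```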