
    Architectures for soft-decision decoding of non-binary codes

    This thesis studies the design of non-binary decoders for error correction in modern high-speed communication systems. The goal is to propose low-complexity solutions for decoding algorithms based on non-binary low-density parity-check (NB-LDPC) codes and on Reed-Solomon codes, with the aim of implementing efficient hardware architectures. The first part of the thesis analyses the bottlenecks of NB-LDPC decoding algorithms and architectures and proposes low-complexity, high-speed solutions based on symbol flipping. First, flooding-schedule solutions are studied with the objective of reaching the highest possible throughput without regard to coding gain. Two different decoders based on clipping and blocking techniques are proposed; however, their maximum clock frequency is limited by excessive wiring. For this reason, several methods for reducing the routing problems of NB-LDPC codes are explored, and a partial-broadcast architecture for symbol-flipping algorithms that mitigates routing congestion is proposed as a solution. Since the fastest flooding-schedule solutions are suboptimal in terms of error-correction capability, serial-schedule solutions are then designed, with the goal of reaching higher throughput while preserving the coding gain of the original symbol-flipping algorithms. Two serial-schedule algorithms and architectures are presented, reducing area and increasing the maximum achievable throughput. Finally, the symbol-flipping algorithms are generalized, and it is shown that some particular cases can achieve a coding gain close to that of the Min-sum and Min-max algorithms with lower complexity. An efficient architecture is also proposed, showing that the area is halved compared with a direct-mapping solution. The second part of the thesis compares soft-decision Reed-Solomon decoding algorithms, concluding that the low-complexity Chase (LCC) algorithm is the most efficient solution when high throughput is the main objective. However, LCC schemes rely on interpolation, which introduces some hardware limitations due to its complexity. In order to reduce complexity without changing the error-correction capability, a soft-decision LCC scheme based on hard-decision algorithms is proposed. Finally, an efficient architecture is designed for this new scheme.
    García Herrero, FM. (2013). Architectures for soft-decision decoding of non-binary codes [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/33753

    Reed-Solomon turbo product codes for optical communications: from code optimization to decoder design

    Turbo product codes (TPCs) are an attractive solution to improve link budgets and reduce system costs by relaxing the requirements on expensive optical devices in high-capacity optical transport systems. In this paper, we investigate the use of Reed-Solomon (RS) turbo product codes for 40 Gbps transmission over optical transport networks and 10 Gbps transmission over passive optical networks. An algorithmic study is first performed in order to design RS TPCs that are compatible with the performance requirements imposed by the two applications. Then, a novel ultrahigh-speed parallel architecture for turbo decoding of product codes is described. A comparison with binary Bose-Chaudhuri-Hocquenghem (BCH) TPCs is performed. The results show that high-rate RS TPCs offer a better complexity/performance tradeoff than BCH TPCs for low-cost Gbps fiber optic communications.
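    As a rough illustration of how the overall rate of a two-dimensional product code follows from its component codes, the short sketch below computes length, dimension and rate; the RS(255, 239) component parameters are only an illustrative assumption, not the codes optimized in the paper.

# Minimal sketch: overall parameters of a two-dimensional product code built
# from component codes (n1, k1) and (n2, k2). The RS(255, 239) components are
# illustrative placeholders, not the codes designed in the paper.
def product_code(n1, k1, n2, k2):
    n = n1 * n2          # overall block length (in symbols)
    k = k1 * k2          # overall dimension
    return n, k, k / n   # length, dimension, code rate

n, k, rate = product_code(255, 239, 255, 239)
print(f"RS(255,239) x RS(255,239) TPC: n = {n}, k = {k}, rate = {rate:.3f}")
# rate is about 0.878, i.e. a high-rate code of the kind targeted here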

    Error-correction coding for high-density magnetic recording channels.

    Future high-density magnetic recording channels (MRCs) are subject to more noise contamination and intersymbol interference, which makes error-correction codes (ECCs) even more important. The replacement of current Reed-Solomon (RS)-coded ECC systems with low-density parity-check (LDPC)-coded ECC systems has attracted a lot of research attention due to the large decoding gain of LDPC-coded systems under random noise. In this dissertation, systems that aim to retain RS coding by using recently proposed soft-decision RS decoding techniques are investigated, and the improved performance is presented. The soft-decision RS decoding algorithms and their performance on magnetic recording channels have been researched, and algorithm implementation and hardware architecture issues have been discussed. Several novel variations of the KV algorithm, such as a soft Chase algorithm, a re-encoded Chase algorithm and a forward recursive algorithm, have been proposed, and the performance of nested codes using RS and LDPC codes as component codes has been investigated for bursty-noise magnetic recording channels. Finally, a promising algorithm that combines an RS decoding algorithm with an LDPC decoding algorithm is investigated, and a reduced-complexity modification is proposed, which not only improves the decoding performance significantly but also guarantees good performance at high signal-to-noise ratio (SNR), the region where LDPC codes experience an error floor.

    Comparison of code rate and transmit diversity in MIMO systems

    A thesis submitted in fulfilment of the requirements for the degree of Master of Science in the Centre of Excellence in Telecommunications and Software, School of Electrical and Information Engineering, March 2016. In order to compare low-rate error-correcting codes to MIMO schemes with transmit diversity, two systems with the same throughput are compared. A VBLAST MIMO system with (15, 5) Reed-Solomon coding is compared to an Alamouti MIMO system with (15, 10) Reed-Solomon coding. The latter is found to perform significantly better, indicating that transmit diversity is a more effective technique for minimising errors than reducing the code rate. The Guruswami-Sudan/Koetter-Vardy soft-decision decoding algorithm was implemented to allow decoding beyond the conventional error-correcting bound of RS codes, and VBLAST was adapted to provide reliability information. Analysis is also performed to find the optimal code rate when using various MIMO systems.
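    The "same throughput" pairing can be checked with a quick calculation. The sketch below assumes a 2x2 antenna configuration (an assumption for illustration; the abstract does not state the antenna counts), in which VBLAST carries two parallel streams and the Alamouti scheme is a rate-one space-time block code carrying one.

# Rough throughput check for the two compared systems, assuming a 2x2 setup.
# The antenna configuration is an assumption; the abstract does not state it.
def symbols_per_channel_use(spatial_rate, n, k):
    """Information symbols per channel use = spatial multiplexing rate x code rate."""
    return spatial_rate * (k / n)

vblast   = symbols_per_channel_use(2, 15, 5)   # two spatial streams, RS(15, 5)
alamouti = symbols_per_channel_use(1, 15, 10)  # rate-1 STBC,          RS(15, 10)
print(vblast, alamouti)  # both 0.666..., i.e. the two systems have equal throughput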

    Parallel subspace subcodes of Reed-Solomon codes for magnetic recording channels

    Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code lower the error floor at high signal-to-noise ratio (SNR) at the price of a reduced coding gain and a less sharp waterfall region at lower SNR. This architecture fails to deal with the error floor problem when the number of errors caused by multiple dominant trapping sets is beyond the error-correction capability of the outer RS code. The ultimate goal of a sharper waterfall in the low-SNR region and a lower error floor at high SNR can be approached by introducing a parallel subspace subcode RS (SSRS) code (PSSRS) to replace the conventional RS code. In this new LDPC+PSSRS system, the PSSRS code can help localize and partially destroy the most dominant trapping sets. With the proposed iterative parallel local decoding algorithm, the LDPC decoder can correct the remaining errors by itself. The contributions of this work are: 1) We propose a PSSRS code with a parallel local SSRS structure and a three-level decoding architecture, which enables a trade-off between performance and complexity; 2) We propose a new LDPC+PSSRS system with a new iterative parallel local decoding algorithm with a 0.5 dB+ gain over the conventional two-level system. Its performance for 4K-byte sectors is close to that of multiple LDPC-only architectures for perpendicular magnetic recording channels; 3) We develop a new decoding concept that changes the major role of the RS code from error correcting to a "partial" trapping set destroyer.

    Advanced Modulation and Coding Technology Conference

    The objectives, approach, and status of all current LeRC-sponsored industry contracts and university grants are presented. The following topics are covered: (1) the LeRC Space Communications Program, and Advanced Modulation and Coding Projects; (2) the status of four contracts for development of proof-of-concept modems; (3) modulation and coding work done under three university grants, two small business innovation research contracts, and two demonstration model hardware development contracts; and (4) technology needs and opportunities for future missions

    The hybrid list decoding and Chase-like algorithm of Reed-Solomon codes.

    Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2005. Reed-Solomon (RS) codes are powerful error-correcting codes that can be found in a wide variety of digital communications and digital data-storage systems. A classical hard-decision decoder for an RS code can correct t = ⌊(d_min - 1)/2⌋ errors, where d_min = n - k + 1 is the minimum distance of the code, n is the codeword length and k is the code dimension. Maximum likelihood decoding (MLD) performs better than classical decoding, and therefore how to approach the performance of MLD with less complexity is a subject that has been researched extensively. Applying the bit reliability obtained from the channel to a conventional decoding algorithm is an efficient technique for approaching the performance of MLD, although an exponential increase in complexity is always concomitant. More performance enhancement can certainly be achieved if the bit reliability is applied to an enhanced algebraic decoding algorithm that is more powerful than a conventional one. In 1997 Madhu Sudan, building on previous work of Welch and Berlekamp, and others, discovered a polynomial-time algorithm for decoding low-rate Reed-Solomon codes beyond the classical error-correcting bound t = ⌊(d_min - 1)/2⌋. Two years later Guruswami and Sudan published a significantly improved version of Sudan's algorithm (GS), but these papers did not focus on devising a practical implementation. Other authors, Koetter, Roth and Ruckenstein, were able to find realizations for the key steps in the GS algorithm, thus making the GS algorithm a practical instrument in transmission systems. The Gross list algorithm, a simplified variant with lower decoding complexity achieved by a re-encoding scheme, is also taken into account in this dissertation. The fundamental idea of the GS algorithm is to take advantage of an interpolation step to obtain an interpolation polynomial produced by the support symbols, the received symbols and their corresponding multiplicities. After that, the GS algorithm implements a factorization step to find the roots of the interpolation polynomial. After comparing the reliabilities of the codewords output by the factorization, the GS algorithm outputs the most likely one. The support set, received set and multiplicity set are created by the Koetter-Vardy (KV) front-end algorithm. In the GS list decoding algorithm, the number of errors that can be corrected increases to t_GS = n - 1 - ⌊√((k - 1)n)⌋. It is easy to show that the GS list decoding algorithm is capable of correcting more errors than a conventional decoding algorithm. In this dissertation, we present two hybrid list decoding and Chase-like algorithms. We apply the Chase algorithms to the KV soft-decision front end, and are consequently able to provide a more reliable input to the KV list algorithm. In the application of the Chase-like algorithm, we take two conditions into consideration, so that an error floor cannot occur and more coding gain is possible. As the number of bits chosen by the Chase algorithm increases, the complexity of the hybrid algorithm grows exponentially. To solve this problem, an adaptive algorithm is applied to the hybrid algorithm, based on the fact that as the signal-to-noise ratio (SNR) increases the received bits become more reliable, and not every received sequence needs the fixed number of test error patterns created by the Chase algorithm.
We set a threshold according to the given SNR and use it to decide which unreliable bits are picked up by the Chase algorithm. However, the performance of the adaptive hybrid algorithm at high SNRs decreases as the complexity decreases, which means that the adaptive algorithm is not a sufficient mechanism for eliminating the redundant test error patterns. The performance of the adaptive hybrid algorithm at high SNRs motivates us to find another way to reduce the complexity without loss of performance. We consider the following two problems before dealing with the problem at hand. One problem is: can we find a termination condition to decide whether a generated candidate codeword is the most likely codeword for the received sequence before all candidates of the received set are tested? The other is: can we eliminate the test error patterns that cannot create codewords more likely than the ones already generated? In our final algorithm, an optimality lemma from the Kaneko algorithm is applied to solve the first problem, and the second problem is solved by a ruling-out scheme for the reduced list decoding algorithm. The Gross list algorithm is also applied in our final hybrid algorithm. After the two problems have been solved, the final hybrid algorithm has performance comparable with that of the hybrid algorithm combining the KV list decoding algorithm and the Chase algorithm, but with much lower complexity at high SNRs.
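    The two decoding radii quoted in this abstract, and the exponential growth of Chase test patterns, are easy to tabulate. The Python sketch below is illustrative only; the (n, k) code parameters are arbitrary examples, not the codes studied in the dissertation.

# Illustrative sketch: classical vs. Guruswami-Sudan decoding radii for an RS(n, k)
# code, and the number of Chase test patterns for eta least-reliable bits.
# The (n, k) values below are arbitrary examples, not taken from the dissertation.
from math import isqrt

def classical_radius(n: int, k: int) -> int:
    """Unique-decoding bound t = floor((d_min - 1) / 2), with d_min = n - k + 1."""
    return (n - k) // 2

def gs_radius(n: int, k: int) -> int:
    """GS list-decoding bound t_GS = n - 1 - floor(sqrt((k - 1) * n))."""
    return n - 1 - isqrt((k - 1) * n)

def chase_patterns(eta: int) -> int:
    """Flipping the eta least-reliable bits generates 2**eta test error patterns."""
    return 2 ** eta

for n, k in [(15, 5), (31, 7), (255, 239)]:
    print(f"RS({n},{k}): classical t = {classical_radius(n, k)}, GS t_GS = {gs_radius(n, k)}")
print("Chase test patterns for eta = 1..6:", [chase_patterns(e) for e in range(1, 7)])

    As the printed values suggest, the list-decoding gain over the classical bound is largest for low-rate codes, which is consistent with the "low-rate" qualifier in the abstract.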

    Algorithm/Architecture Co-Design for Low-Power Neuromorphic Computing

    The development of computing systems based on the conventional von Neumann architecture has slowed down in the past decade as complementary metal-oxide-semiconductor (CMOS) technology scaling becomes more and more difficult. To satisfy the ever-increasing demands in computing power, neuromorphic computing has emerged as an attractive alternative. This dissertation focuses on developing learning algorithms, hardware architectures, circuit components, and design methodologies for low-power neuromorphic computing that can be employed in various energy-constrained applications. A top-down approach is adopted in this research. Starting from algorithm-architecture co-design, a hardware-friendly learning algorithm is developed for spiking neural networks (SNNs). The possibility of estimating gradients from spike timings is explored. The learning algorithm is developed for ease of hardware implementation, as well as for compatibility with many well-established learning techniques developed for classic artificial neural networks (ANNs). An SNN hardware equipped with the proposed on-chip learning algorithm is implemented in CMOS technology. In this design, two unique features of SNNs, event-driven computation and inference with progressive precision, are leveraged to reduce the energy consumption. In addition to low-power SNN hardware, accelerators for ANNs are also presented to accelerate the adaptive dynamic programming algorithm. An efficient and flexible single-instruction-multiple-data architecture is proposed to exploit the inherent data-level parallelism in the inference and learning of ANNs. In addition, the accelerator is augmented with a virtual update technique, which helps improve the throughput and energy efficiency remarkably. Lastly, two techniques at the architecture-circuit level are introduced to mitigate the degraded reliability of the memory system in neuromorphic hardware owing to the aggressively scaled supply voltage and integration density. The first method uses on-chip feedback to compensate for process variation, and the second technique improves the throughput and energy efficiency of a conventional error-correction method.
    PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144149/1/zhengn_1.pd