
    Modeling and Energy Optimization of LDPC Decoder Circuits with Timing Violations

    This paper proposes a "quasi-synchronous" design approach for signal processing circuits, in which timing violations are permitted without the need for a hardware compensation mechanism. The case of a low-density parity-check (LDPC) decoder is studied, and a method for accurately modeling the effect of timing violations at a high level of abstraction is presented. The error-correction performance of code ensembles is then evaluated using density evolution while taking into account the effect of timing faults. Following this, several quasi-synchronous LDPC decoder circuits based on the offset min-sum algorithm are optimized, providing a 23%-40% reduction in energy consumption or energy-delay product while achieving the same performance and occupying the same area as conventional synchronous circuits. (To appear in IEEE Transactions on Communications.)
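    For orientation, the offset min-sum algorithm named above replaces the exact check-node computation of belief propagation with a two-minimum search plus a fixed offset correction. Below is a minimal Python sketch of one check-node update; the offset value and the dense message layout are illustrative assumptions, not the paper's optimized circuit.

        import numpy as np

        def offset_min_sum_check_node(llrs, offset=0.5):
            # One offset min-sum check-node update over the incoming
            # variable-to-check LLR messages (a sketch, not the paper's circuit).
            llrs = np.asarray(llrs, dtype=float)
            signs = np.sign(llrs)
            mags = np.abs(llrs)
            total_sign = np.prod(signs)
            idx_min = np.argmin(mags)
            min1 = mags[idx_min]                     # smallest incoming magnitude
            min2 = np.min(np.delete(mags, idx_min))  # second smallest
            out = np.empty_like(llrs)
            for i in range(len(llrs)):
                # Each outgoing magnitude is the minimum over the *other* edges,
                # reduced by the offset and clipped at zero.
                other_min = min2 if i == idx_min else min1
                out[i] = (total_sign * signs[i]) * max(other_min - offset, 0.0)
            return out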

    Architectures for soft-decision decoding of non-binary codes

    This thesis studies the design of non-binary decoders for error correction in modern high-speed communication systems. The goal is to propose low-complexity solutions for decoding algorithms based on non-binary low-density parity-check (NB-LDPC) codes and on Reed-Solomon codes, in order to implement efficient hardware architectures. The first part of the thesis analyzes the bottlenecks in NB-LDPC decoding algorithms and decoder architectures, and proposes low-complexity, high-speed solutions based on symbol flipping. First, flooding-schedule solutions are studied with the aim of reaching the highest possible speed, without regard to coding gain. Two decoders based on clipping and blocking techniques are proposed; however, their maximum frequency is limited by excessive wiring. For this reason, methods for reducing the routing problems of NB-LDPC codes are explored, and a partial-broadcast architecture for symbol-flipping algorithms is proposed that mitigates routing congestion. Since the fastest flooding-schedule solutions are suboptimal in terms of error-correction capability, serial-schedule solutions are then designed, with the aim of reaching higher speed while preserving the coding gain of the original symbol-flipping algorithms. Two serial-schedule algorithms and their architectures are presented, reducing area and increasing the maximum achievable speed. Finally, symbol-flipping algorithms are generalized, and it is shown that particular cases can achieve coding gain close to that of the Min-Sum and Min-Max algorithms with lower complexity; an efficient architecture is also proposed, halving the area compared with a direct-mapping solution. The second part of the thesis compares soft-decision Reed-Solomon decoding algorithms, concluding that the low-complexity Chase (LCC) algorithm is the most efficient solution when high speed is the main goal. However, LCC schemes are based on interpolation, which introduces hardware limitations due to its complexity. To reduce complexity without degrading the error-correction capability, a soft-decision LCC scheme based on hard-decision algorithms is proposed. Finally, an efficient architecture is designed for this new scheme. García Herrero, FM. (2013). Architectures for soft-decision decoding of non-binary codes [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/33753
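    The LCC scheme mentioned in the second part decodes a small set of test vectors built from the least reliable received symbols. As a rough illustration, the sketch below builds the 2^eta Chase test vectors; the function name, its inputs, and the choice eta = 3 are illustrative assumptions, and each resulting vector would then be handed to a hard-decision Reed-Solomon decoder.

        from itertools import product

        def lcc_test_vectors(hard_symbols, reliabilities, second_best, eta=3):
            # Build the 2**eta Chase test vectors used in LCC decoding (a sketch).
            # hard_symbols: hard-decision symbol per position.
            # reliabilities: reliability per position (larger = more reliable).
            # second_best: second most likely symbol per position.
            weak = sorted(range(len(hard_symbols)),
                          key=lambda i: reliabilities[i])[:eta]
            vectors = []
            for flips in product([False, True], repeat=eta):
                v = list(hard_symbols)
                for pos, flip in zip(weak, flips):
                    if flip:
                        v[pos] = second_best[pos]  # swap in the alternative symbol
                vectors.append(v)
            return vectors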

    Raptor Codes for BIAWGN Channel: SNR Mismatch and the Optimality of the Inner and Outer Rates

    Fountain codes are a class of rateless codes with two interesting properties: first, they can generate a potentially limitless number of encoded symbols from a finite set of source symbols, and second, the source symbols can be recovered from any subset of encoded symbols with cardinality greater than the number of source symbols. Raptor codes are the first implementation of fountain codes with linear complexity and vanishing error floors on noisy channels. Raptor codes are designed as the serial concatenation of an inner Luby transform (LT) code, the first practical realization of fountain codes, and an outer low-density parity-check (LDPC) code. Raptor codes were designed to operate on the binary erasure channel (BEC); however, since their invention they have received considerable attention aimed at improving their performance on noisy channels, especially additive white Gaussian noise (AWGN) channels. This dissertation considers two issues that face Raptor codes on the binary input additive white Gaussian noise (BIAWGN) channel: inaccurate estimation of the signal-to-noise ratio (SNR) and the optimality of the inner and outer rates. First, for codes that use a belief propagation algorithm (BPA) in decoding, such as Raptor codes on the BIAWGN channel, accurate estimation of the channel SNR is crucial to achieving optimal decoder performance. A difference between the estimated SNR and the actual channel SNR is known as signal-to-noise ratio mismatch (SNRM). Using asymptotic analysis and simulation, we show the degrading effects of SNRM on Raptor codes and observe that if the mismatch is large enough, it can cause decoding to fail. Using the discretized density evolution (DDE) algorithm, with the modifications required to simulate the asymptotic performance under SNRM, we determine the decoding threshold of Raptor codes for different values of the SNRM ratio. Determining the threshold under SNRM enables us to quantify its effects, which in turn can be used to reach important conclusions about the effects of SNRM on Raptor codes; it can also be used to compare Raptor codes with different designs in terms of their tolerance to SNRM. Based on the threshold response to SNRM, we observe that SNR underestimation is slightly less detrimental to Raptor codes than SNR overestimation at lower levels of mismatch; however, as the mismatch increases, underestimation becomes more detrimental. Further, the threshold can help estimate the tolerance to SNRM of a Raptor code with certain code parameters transmitted at some SNR value, or, equivalently, help estimate the SNR needed for a given code to achieve a certain level of tolerance to SNRM. Using our observations about the performance of Raptor codes under SNRM, we propose an optimization method for designing output degree distributions of the LT part that yield Raptor codes with more tolerance to high levels of SNRM. Second, we study the effects of choosing different inner and outer code rate pairs on the decoding threshold and performance of Raptor codes on the BIAWGN channel. For concatenated codes such as Raptor codes, given any instance of the overall code rate R, different inner (Ri) and outer (Ro) code rate combinations can be used to share the available redundancy as long as R = Ri Ro. Determining the optimal inner and outer rate pair can improve the threshold and performance of Raptor codes.
Using asymptotic analysis, we show the effect of the rate-pair choice on the threshold of Raptor codes on the BIAWGN channel and how the optimal rate pair is determined. We also show that Raptor codes with different output degree distributions can have different optimal rate pairs; therefore, identifying the optimal rate pair lets us further improve performance and avoid suboptimal use of the code. We observe that as the outer rate of a Raptor code increases, the potential for achieving a better threshold increases, and we explain why the optimal outer rate of Raptor codes cannot occur at lower values. Finally, we present an optimization method that considers the optimality of the inner and outer rates in designing the output degree distribution of the inner LT part of Raptor codes. The designed distributions show improvement in both decoding threshold and performance compared with code designs that do not consider the optimality of the inner and outer rates.
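    To make the SNRM mechanism concrete: on the BIAWGN channel, the mismatch enters the decoder only through the scaling of the initial LLRs fed to belief propagation. A minimal sketch, assuming unit-energy BPSK and an Es/N0-style SNR definition (both assumptions of this illustration, not the dissertation's exact setup):

        import numpy as np

        def channel_llrs(y, snr_est_db):
            # Initial BPSK LLRs for the BIAWGN channel under an *estimated* SNR.
            # For noise variance sigma^2 the true LLR is 2*y/sigma^2, so any gap
            # between estimated and true SNR mis-scales every LLR the BPA sees.
            snr_est = 10.0 ** (snr_est_db / 10.0)
            sigma2_est = 1.0 / (2.0 * snr_est)  # variance implied by the estimate
            return 2.0 * y / sigma2_est

    An overestimated SNR inflates the LLR magnitudes (overconfident messages), while an underestimate deflates them, which gives some intuition for the asymmetric threshold behavior reported above.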

    Design and Analysis of GFDM-Based Wireless Communication Systems

    Generalized frequency division multiplexing (GFDM) is a block-processing-based non-orthogonal multi-carrier modulation scheme and a promising candidate waveform technology for beyond-fifth-generation (5G) wireless systems. The ability of GFDM to flexibly adjust the block size and the type of pulse-shaping filter makes it a suitable scheme for meeting several important requirements, such as low latency, low out-of-band (OOB) radiation, and high data rates. Applying the multiple-input multiple-output (MIMO) technique, the massive MIMO technique, or low-density parity-check (LDPC) codes to GFDM systems can further improve system performance, so the investigation of such combined systems is of great theoretical and practical importance. This thesis investigates GFDM-based wireless communication systems from the following three aspects. First, we derive a union bound on the bit error rate (BER) for MIMO-GFDM systems based on exact pairwise error probabilities (PEPs). The exact PEP is calculated using the moment-generating function (MGF) for maximum likelihood (ML) detectors. Both the spatial correlation between antennas and the channel estimation errors are considered in the investigated channel environment. Second, polynomial-expansion-based low-complexity channel estimators and precoders are proposed for massive MIMO-GFDM systems. Interference-free pilots are used in the minimum mean square error (MMSE) channel estimation to combat the non-orthogonality between subcarriers in GFDM. The cubic computational complexity can be reduced to square order by using the polynomial expansion technique to approximate the matrix inverses in conventional MMSE estimation and precoding. In addition, we derive performance limits in terms of the mean square error (MSE) for the proposed estimators, which can be a useful tool for predicting estimator performance in the high Es/N0 region. A Cramér-Rao lower bound (CRLB) is derived for our system model and acts as a benchmark for the estimators. The computational complexity of the proposed channel estimators and precoders, and the impact of the polynomial degree, are also investigated. Finally, we analyze the error-probability performance of LDPC-coded GFDM systems. We first derive the initial log-likelihood ratio (LLR) expressions used in the sum-product algorithm (SPA) decoder. Then, based on the decoding threshold, we estimate the frame error rate (FER) in the low Eb/N0 region by using the observed BER to model the channel variations. In addition, a lower bound on the FER of the system is proposed based on absorbing sets; this lower bound can act as an estimate of the FER in the high Eb/N0 region if the absorbing set used is dominant and its multiplicity is known. The quantization scheme also has an important impact on FER and BER performance. Randomly constructed and array-based LDPC codes are used to support the performance analyses. For all three aspects, software-based simulations and calculations are carried out to obtain the related numerical results, which verify the proposed methods.
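    The polynomial-expansion idea in the second aspect fits in a few lines: a truncated Neumann series turns an explicit matrix inverse (cubic cost) into a handful of matrix-vector products (square cost each). A minimal sketch, assuming a Hermitian positive-definite MMSE filtering matrix and a simple convergence scaling; the thesis's actual scaling and degree choices are not reproduced here.

        import numpy as np

        def poly_expansion_solve(A, b, degree=4):
            # Approximate x = inv(A) @ b with a truncated Neumann series:
            #   inv(A) = alpha * sum_k (I - alpha*A)^k,  k = 0..degree.
            # Each extra term costs one matrix-vector product, i.e. O(n^2).
            alpha = 1.0 / np.linalg.norm(A, 2)  # 1/lambda_max keeps the series convergent
            residual_op = np.eye(A.shape[0]) - alpha * A
            term = alpha * b                    # k = 0 term
            x = term.copy()
            for _ in range(degree):
                term = residual_op @ term       # next power of (I - alpha*A) applied to alpha*b
                x = x + term
            return x

    In practice the spectral-norm scaling would itself be estimated cheaply; computing it exactly, as here, is only for the sketch.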

    Algorithm Development and VLSI Implementation of Energy Efficient Decoders of Polar Codes

    With their low error floors, polar codes attract significant attention as a potential standard error-correction code (ECC) for future communication and data storage. However, the VLSI implementation complexity of polar code decoders is largely driven by their inherently serial decoding. This dissertation is dedicated to presenting optimal decoder architectures for polar codes. It addresses several structural properties of polar codes and key properties of the decoding algorithms that are not dealt with in prior research. The underlying concept of the proposed architectures is a paradigm that simplifies and schedules the computations so that hardware is simplified, latency is minimized, and bandwidth is maximized. In pursuit of the above, throughput-centric successive cancellation (TCSC) and overlapping-path list successive cancellation (OPLSC) VLSI architectures and express journey BP (XJBP) decoders for polar codes are presented. An arbitrary polar code can be decomposed into a set of shorter polar codes with special characteristics; these shorter polar codes are referred to as constituent polar codes. By exploiting the homogeneity between the decoding processes of different constituent polar codes, TCSC reduces the decoding latency of the SC decoder by 60% for codes of length n = 1024. The error-correction performance of SC decoding is inferior to that of list successive cancellation (LSC) decoding. The LSC decoding algorithm delivers the most reliable decoding results; however, it consumes the most hardware resources and decoding cycles. Instead of using multiple instances of decoding cores as in LSC decoders, a single SC decoder is used in the OPLSC architecture, and the computations of each path in the LSC are arranged to occupy the decoder hardware stages serially in a streamlined fashion. This yields a significant reduction in hardware complexity: the OPLSC decoder achieves about 1.4 times better hardware efficiency than traditional LSC decoders. Hardware-efficient VLSI architectures for the TCSC and OPLSC polar code decoders are also introduced. Decoders based on the SC or LSC algorithms suffer from high latency and limited throughput due to their serial decoding nature. An alternative approach to decoding polar codes is the belief propagation (BP) algorithm, in which a graph, usually referred to as a factor graph, is set up to guide how beliefs are propagated and refined. BP decoding allows parallel decoding and thus much higher throughput. The XJBP decoder facilitates belief propagation by exploiting the specific constituent codes that exist in the conventional factor graph, resulting in an express journey (XJ) decoder. Compared with the conventional BP decoding algorithm for polar codes, the proposed decoder reduces the computational complexity by about 40.6%, enabling an energy-efficient hardware implementation. To further explore the hardware consumption of the proposed XJBP decoder, the scheduling of computations is modeled and analyzed, and optimal scheduling plans are developed for different hardware scenarios. A novel memory-distributed micro-architecture of the XJBP decoder is proposed and analyzed to solve the potential memory-access problems of the proposed scheduling strategy. Register-transfer-level (RTL) models of the XJBP decoder are set up for comparison with other state-of-the-art BP decoders. The results show that the power efficiency of BP decoders is improved by about 3 times.
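    The serial nature of SC decoding that drives these architectures comes from two recursive LLR updates, usually called f and g: the g update cannot run until earlier bits have been decided. A minimal sketch, using the common min-sum approximation of f (an assumption of this illustration):

        import numpy as np

        def f_node(a, b):
            # Check-direction SC update on LLRs (min-sum approximation).
            return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

        def g_node(a, b, u):
            # Variable-direction SC update: u is the already-decoded partial
            # sum (0 or 1), which is why SC decoding is inherently serial.
            return b + (1 - 2 * u) * a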

    New Algorithms for High-Throughput Decoding with Low-Density Parity-Check Codes using Fixed-Point SIMD Processors

    Most digital signal processors contain one or more functional units with a single-instruction, multiple-data architecture that supports saturating fixed-point arithmetic with two or more options for the arithmetic precision. The processors designed for the highest performance contain many such functional units connected through an on-chip network. The selection of the arithmetic precision provides a trade-off between the task-level throughput and the output quality of many signal-processing algorithms, and utilization of the interconnection network during execution introduces a latency that can also limit an algorithm's throughput. In this dissertation, we consider the turbo-decoding message-passing algorithm for iterative decoding of low-density parity-check codes and investigate its performance in parallel execution on a processor of interconnected functional units employing fast, low-precision fixed-point arithmetic. It is shown that the frequent occurrence of saturation when 8-bit signed arithmetic is used severely degrades the performance of the algorithm compared with decoding using higher-precision arithmetic. A technique of limiting the magnitude of certain intermediate variables of the algorithm, the extrinsic values, is proposed and shown to eliminate most occurrences of saturation, resulting in 8-bit decoding performance nearly equal to that achieved with higher-precision decoding. We show that the interconnection latency can have a significant detrimental effect on the throughput of the turbo-decoding message-passing algorithm, which is illustrated for a type of high-performance digital signal processor known as a stream processor. Two alternatives to the standard schedule of message-passing and parity-check operations are proposed for the algorithm. Both alternatives markedly reduce the interconnection latency, and both result in substantially greater throughput than the standard schedule with no increase in the probability of error.
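    The proposed fix is easy to picture in code: saturating 8-bit addition silently pins large sums at the ends of the representable range, so clamping the extrinsic values to a smaller range before they are combined keeps later additions away from saturation. A minimal sketch; the clamp limit of 31 is an illustrative assumption, not the dissertation's tuned constant.

        import numpy as np

        def saturating_add_int8(a, b):
            # 8-bit signed saturating addition, as in fixed-point SIMD units:
            # compute in int16, then pin the result to [-128, 127].
            s = a.astype(np.int16) + b.astype(np.int16)
            return np.clip(s, -128, 127).astype(np.int8)

        def clamp_extrinsic(extrinsic, limit=31):
            # Limit extrinsic magnitudes so subsequent additions rarely saturate.
            return np.clip(extrinsic, -limit, limit)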

    VLSI algorithms and architectures for non-binary-LDPC decoding

    This thesis studies the design of low-complexity soft-decision non-binary low-density parity-check (NB-LDPC) decoding algorithms and their corresponding hardware architectures, suitable for decoding high-rate codes at high throughput (hundreds of Mbps and Gbps). In the first part of the thesis, the main aspects of NB-LDPC codes are analyzed, including a study of the main bottlenecks of conventional soft-decision decoding algorithms (Q-ary Sum of Products (QSPA), Extended Min-Sum (EMS), Min-Max and Trellis-Extended Min-Sum (T-EMS)) and their corresponding hardware architectures. Despite the limitations of the T-EMS algorithm (high complexity in the check node (CN) processor, wiring congestion due to the high number of messages exchanged between processors, and the inability to implement decoders over high-order Galois fields due to the high decoder complexity), it was selected as the starting point for this thesis because of its capability to reach high throughput. Taking into account the identified limitations of the T-EMS algorithm, the second part of the thesis comprises six papers with the results of research carried out to mitigate the disadvantages of T-EMS, offering solutions that reduce area and latency and increase throughput compared with previous proposals from the literature, without sacrificing coding gain. Specifically, five low-complexity decoding algorithms are proposed, introducing simplifications in different parts of the decoding process. In addition, five complete decoder architectures are designed and implemented in a 90nm complementary metal-oxide-semiconductor (CMOS) technology. The results show a throughput higher than 1 Gbps with an area of less than 10 mm2: an increase in throughput of 120% and a reduction in area of 53% compared with previous implementations of T-EMS, for the (837,726) NB-LDPC code over GF(32). The proposed decoders reduce the CN area, the latency, the wiring between the CN and variable node (VN) processors, and the number of storage elements required in the decoder. Since these proposals improve both area and speed, the efficiency parameter (Mbps / million NAND gates) is increased almost fivefold compared with other proposals from the literature. The improvements in area allow us to implement NB-LDPC decoders over high-order fields, which had not been possible until now due to the high complexity of the decoders previously proposed in the literature. We therefore present the first post-place-and-route results for high-rate codes over Galois fields of order higher than GF(32). For example, for the (1536,1344) NB-LDPC code over GF(64), the throughput is 1259 Mbps with an area of 28.90 mm2. In addition, a decoder architecture is implemented on a field-programmable gate array (FPGA) device, achieving 630 Mbps for the high-rate (2304,2048) NB-LDPC code over GF(16). To the best of the author's knowledge, these results are the highest presented in the literature for similar codes implemented on the same technologies. Lacruz Jucht, JO. (2016). VLSI algorithms and architectures for non-binary-LDPC decoding [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/73266
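    For context on why the CN processor dominates the complexity of these decoders, the sketch below performs the Min-Max check-node update by brute force over GF(q). Its q^(d-1) enumeration is exactly the cost that EMS/T-EMS-style algorithms, and the simplifications proposed in this thesis, are designed to avoid; modeling GF addition as XOR assumes a field GF(2^m) with all parity-check coefficients equal to 1, an illustrative simplification.

        import itertools
        import numpy as np

        def min_max_check_node(msgs, excluded, q=4):
            # Brute-force Min-Max check-node update over GF(q) (a sketch).
            # msgs: one length-q cost vector per edge (smaller = more likely).
            # excluded: index of the edge whose outgoing message is computed.
            others = [m for i, m in enumerate(msgs) if i != excluded]
            out = np.full(q, np.inf)
            for combo in itertools.product(range(q), repeat=len(others)):
                syndrome = 0
                for sym in combo:
                    syndrome ^= sym          # GF(2^m) addition modeled as XOR
                cost = max(others[i][sym] for i, sym in enumerate(combo))
                # The excluded edge must take the value that satisfies the check.
                out[syndrome] = min(out[syndrome], cost)
            return out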