
    Ultrafast Codes for Multiple Adjacent Error Correction and Double Error Detection

    (c) 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.

[EN] Reliable computer systems employ error control codes (ECCs) to protect information from errors. For example, memories are frequently protected using single error correction-double error detection (SEC-DED) codes. ECCs are traditionally designed to minimize the number of redundant bits, as these bits are added to every word in the whole memory. Nevertheless, using an ECC introduces encoding and decoding latencies, silicon area usage and power consumption. In other computer units, these parameters should be optimized, while redundancy is less important. For example, protecting registers against errors remains a major concern for deep sub-micron systems due to technology scaling. In this case, an important requirement for register protection is to keep encoding and decoding latencies as short as possible. Ultrafast error control codes achieve very low delays, independent of the word length, at the cost of increased redundancy. This paper summarizes previous work on Ultrafast codes (SEC and SEC-DED), and proposes new codes combining double error detection and adjacent error correction. We have implemented, synthesized and compared different Ultrafast codes with other state-of-the-art fast codes. The results show the validity of the approach, achieving low latencies and a good balance with silicon area and power consumption.

This work was supported in part by the Spanish Government under Project TIN2016-81075-R, and in part by the Primeros Proyectos de Investigacion, Vicerrectorado de Investigacion, Innovacion y Transferencia de la Universitat Politecnica de Valencia (UPV), Valencia, Spain, under Project PAID-06-18 20190032.

Saiz-Adalid, L.; Gracia-Morán, J.; Gil Tomás, DA.; Baraza Calvo, JC.; Gil, P. (2019). Ultrafast Codes for Multiple Adjacent Error Correction and Double Error Detection. IEEE Access. 7:151131-151143. https://doi.org/10.1109/ACCESS.2019.2947315
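As a quick illustration of the SEC-DED behaviour these Ultrafast codes compete with, the sketch below implements a classic extended Hamming(8,4) code in Python (a generic textbook construction, not the paper's Ultrafast codes): the syndrome locates a single flipped bit, and an overall parity bit distinguishes single from double errors.

```python
# Minimal SEC-DED sketch: extended Hamming(8,4).
# Illustrative only -- the Ultrafast codes in the paper trade extra
# redundancy for lower latency; this classic code shows the baseline
# SEC-DED behaviour they are compared against.

def hamming84_encode(d):
    """Encode 4 data bits [d0,d1,d2,d3] into 8 bits with SEC-DED."""
    d0, d1, d2, d3 = d
    p1 = d0 ^ d1 ^ d3                     # covers positions 1,3,5,7
    p2 = d0 ^ d2 ^ d3                     # covers positions 2,3,6,7
    p4 = d1 ^ d2 ^ d3                     # covers positions 4,5,6,7
    code = [p1, p2, d0, p4, d1, d2, d3]   # Hamming positions 1..7
    p0 = 0
    for b in code:
        p0 ^= b                           # overall parity: double-error detection
    return code + [p0]

def hamming84_decode(c):
    """Return (data, status) with status in {'ok', 'corrected', 'double'}."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4       # 1-based error position, 0 = none
    overall = 0
    for b in c:
        overall ^= b                      # parity over all 8 bits
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:                    # odd number of flips: single error
        if syndrome:
            c = list(c)
            c[syndrome - 1] ^= 1          # correct the flipped bit
        status = "corrected"              # syndrome==0 means p0 itself flipped
    else:                                 # even number of flips, nonzero syndrome
        return None, "double"
    return [c[2], c[4], c[5], c[6]], status
```

A single flipped bit is corrected transparently; two flips are flagged as uncorrectable rather than miscorrected, which is the property that makes SEC-DED the standard choice for memories.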

    Fixed-latency system for high-speed serial transmission between FPGA devices with Forward Error Correction

    This paper presents the design of a compact protocol for fixed-latency, high-speed, reliable, serial transmission between simple field-programmable gate array (FPGA) devices. The design aims to delineate word boundaries, provide randomness to the electromagnetic interference (EMI) generated by the electrical transitions, allow for clock recovery, and maintain direct current (DC) balance. An orthogonal concatenated coding scheme is used for correcting transmission errors, using a modified Bose–Chaudhuri–Hocquenghem (BCH) code capable of correcting all single-bit errors and most of the double-adjacent errors. As a result, all burst errors of a length up to 31 bits, and some of the longer group errors, are corrected within a 256-bit packet. The efficiency of the proposed solution equals 46.48%, as 119 out of 256 bits are fully available to the user. The design has been implemented and tested on a Xilinx Kintex UltraScale+ KCU116 Evaluation Kit with a data rate of 28.2 Gbps. A sample latency analysis has also been performed so that the user can easily carry out calculations for different transmission speeds. The main advancement of the work is the use of a modified BCH(15, 11) code that leads to high error correction capabilities for burst errors and a user-friendly packet length.
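The burst-correction idea rests on spreading a burst across many codewords so that each one sees at most a single error, which a single-error-correcting code can then fix. The generic depth-d block interleaver below (illustrative depth and codeword length, not the paper's concatenated-BCH parameters) shows the principle:

```python
# Interleaving sketch: why a burst becomes correctable single errors.
# The paper's orthogonal concatenated BCH scheme is more elaborate; this
# generic depth-d block interleaver only illustrates the burst-spreading
# principle (DEPTH and CW_LEN are made-up illustrative values).

DEPTH = 8        # number of interleaved codewords
CW_LEN = 16      # bits per codeword (each assumed single-error-correcting)

def interleave(codewords):
    """Write row-wise (one codeword per row), read column-wise."""
    return [codewords[r][c] for c in range(CW_LEN) for r in range(DEPTH)]

def deinterleave(stream):
    """Invert interleave(): stream index i maps to row i % DEPTH, col i // DEPTH."""
    cws = [[0] * CW_LEN for _ in range(DEPTH)]
    for i, bit in enumerate(stream):
        cws[i % DEPTH][i // DEPTH] = bit
    return cws

# A burst of up to DEPTH consecutive flips touches each codeword at most once.
codewords = [[0] * CW_LEN for _ in range(DEPTH)]
stream = interleave(codewords)
for i in range(20, 20 + DEPTH):          # burst of length DEPTH
    stream[i] ^= 1
hit_counts = [sum(cw) for cw in deinterleave(stream)]
assert max(hit_counts) == 1              # one error per codeword: SEC fixes all
```

With the paper's actual parameters, 119 user bits out of a 256-bit packet give the stated 119/256 ≈ 46.48% efficiency.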

    Quantum memories based on engineered dissipation

    Storing quantum information for long times without disruptions is a major requirement for most quantum information technologies. A very appealing approach is to use self-correcting Hamiltonians, i.e. tailoring local interactions among the qubits such that, when the system is weakly coupled to a cold bath, the thermalization process takes a long time. Here we propose an alternative but more powerful approach in which the coupling to a bath is engineered, so that dissipation protects the encoded qubit against more general kinds of errors. We show that the method can be implemented locally in four-dimensional lattice geometries by means of a toric code, and propose a simple 2D set-up for proof-of-principle experiments.

    Designs for increasing reliability while reducing energy and increasing lifetime

    In the last decades, computing technology has experienced tremendous developments. For instance, transistors' feature size has shrunk by half every two years, consistently since Moore first stated his law. Consequently, the number of transistors and the core count per chip double with each generation. Similarly, petascale systems, capable of processing more than one quadrillion calculations per second, have been developed. In fact, exascale systems are predicted to be available by the year 2020. However, these developments in computer systems face a reliability wall. For instance, transistor feature sizes are getting so small that it becomes easier for high-energy particles to temporarily flip the state of a memory cell from 1 to 0 or 0 to 1. Also, even if we assume that the fault rate per transistor stays constant with scaling, the increase in the total transistor and core count per chip will significantly increase the number of faults in future desktop and exascale systems. Moreover, circuit ageing is exacerbated by increased manufacturing variability and thermal stresses; therefore, the lifetime of processor structures is becoming shorter. On the other hand, due to the limited power budget of computer systems such as mobile devices, it is attractive to scale down the voltage. However, when the voltage scales beyond the safe margin, especially to the ultra-low level, the error rate increases drastically. In addition, new memory technologies such as NAND flash offer only a limited nominal lifetime; once they exceed it, they cannot guarantee correct data storage, leading to data retention problems. Due to these issues, reliability has become a first-class design constraint for contemporary computing, in addition to power and performance.
Moreover, reliability plays an increasingly important role when computer systems process sensitive and life-critical information such as health records, financial information, power regulation, transportation, etc. In this thesis, we present several reliability designs for detecting and correcting errors that occur in processor pipelines, L1 caches and non-volatile NAND flash memories due to various causes. Our reliability solutions serve three main purposes. First, we improve the reliability of computer systems by detecting and correcting random, unpredictable errors such as bit flips or ageing errors. Second, we reduce the energy consumption of computer systems by allowing them to operate reliably at ultra-low voltage levels. Third, we increase the lifetime of new memory technologies by implementing efficient and low-cost reliability schemes.
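As a minimal illustration of the detection side of the bit-flip problem discussed above (a single even-parity bit per word, far simpler than the schemes developed in the thesis):

```python
# Even-parity sketch: the cheapest protection against the 1-to-0 / 0-to-1
# bit flips described above. One extra bit detects any single flip in a
# stored word; it cannot correct it or detect double flips. Purely an
# illustration, not one of the thesis's designs.

def add_parity(word):
    """Append one even-parity bit to a register value."""
    parity = bin(word).count("1") & 1
    return (word << 1) | parity

def check(protected):
    """True if the stored value still satisfies even parity."""
    return bin(protected).count("1") % 2 == 0

stored = add_parity(0b1011)
assert check(stored)                  # clean value passes
assert not check(stored ^ (1 << 3))   # any single flip is detected
```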

    Quantum multipartite entangled states, classical and quantum error correction

    Studying entanglement is essential for our understanding of such diverse areas as high-energy physics, condensed matter physics, and quantum optics. Moreover, entanglement allows us to surpass classical physics and technologies enabling better information processing, computation, and improved metrology. Recently, entanglement also played a prominent role in characterizing and simulating quantum many-body states and in this way deepened our understanding of quantum matter. While bipartite entanglement is well understood, multipartite entanglement is much richer and leads to stronger contradictions with classical physics. Among all possible entangled states, a special class of states has attracted attention for a wide range of tasks. These states are called k-uniform states and are pure multipartite quantum states of n parties and local dimension q with the property that all of their reductions to k parties are maximally mixed. Operationally, in a k-uniform state any subset of at most k parties is maximally entangled with the rest. The k = ⌊n/2⌋-uniform states are called absolutely maximally entangled because they are maximally entangled along any splitting of the n parties into two groups. These states find applications in several protocols and, in particular, are the building blocks of quantum error correcting codes with a holographic geometry, which has provided valuable insight into the connections between quantum information theory and conformal field theory. Their properties and the applications are however intriguing, as we know little about them: when they exist, how to construct them, how they relate to other multipartite entangled states, such as graph states, or how they connect under local operations and classical communication. With this motivation in mind, in this thesis we first study the properties of k-uniform states and then present systematic methods to construct closed-form expressions of them.
The structure of our methods proves particularly fruitful in understanding the structure of these quantum states, their graph-state representation and their classification under local operations and classical communication. We also construct several examples of absolutely maximally entangled states whose existence was open so far. Finally, we explore a new family of quantum error correcting codes that generalize and improve the link between classical error correcting codes, multipartite entangled states, and the stabilizer formalism. The results of this thesis can play a role in characterizing and studying the following three topics: multipartite entanglement, classical error correcting codes and quantum error correcting codes. Multipartite entangled states can provide a link to find different resources for quantum information processing tasks and to quantify entanglement. When constructing two sets of highly entangled multipartite states, it is important to know whether they are equivalent under local operations and classical communication. By understanding which states belong to the same class of quantum resource, one may discuss the role they play in certain quantum information tasks such as quantum key distribution, teleportation and the construction of optimal quantum error correcting codes. They can also be used to explore the connection between the Anti-de Sitter/Conformal Field Theory holographic correspondence and quantum error correction, which will then allow us to construct better quantum error correcting codes. At the same time, their role in the characterization of quantum networks will be essential to design functional networks, robust against losses and local noise.
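The defining property of k-uniform states, that every reduction to k parties is maximally mixed, can be checked numerically. The sketch below (an illustrative check, not a construction method from the thesis) verifies that the 3-qubit GHZ state is 1-uniform but not 2-uniform:

```python
# Numerical check of k-uniformity: a pure state of n qudits is k-uniform
# when every k-party reduced density matrix equals I / q**k.
# Illustration only; the thesis constructs such states analytically.
import itertools
import numpy as np

def reduced_density(psi, keep, n, q=2):
    """Trace out all parties except those in `keep` (n qudits, local dim q)."""
    psi = psi.reshape([q] * n)
    traced = [i for i in range(n) if i not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
    d = q ** len(keep)
    return rho.reshape(d, d)

def is_k_uniform(psi, n, k, q=2):
    """True if all k-party reductions are maximally mixed."""
    target = np.eye(q ** k) / q ** k
    return all(np.allclose(reduced_density(psi, s, n, q), target)
               for s in itertools.combinations(range(n), k))

# GHZ = (|000> + |111>) / sqrt(2): each single qubit is maximally mixed,
# but two-party reductions are classically correlated, not maximally mixed.
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
assert is_k_uniform(ghz, n=3, k=1)
assert not is_k_uniform(ghz, n=3, k=2)
```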

    Near-capacity fixed-rate and rateless channel code constructions

    Fixed-rate and rateless channel code constructions are designed to satisfy conflicting design tradeoffs, leading to codes that benefit from practical implementations whilst offering a good bit error ratio (BER) and block error ratio (BLER) performance. More explicitly, two novel low-density parity-check (LDPC) code constructions are proposed; the first constitutes a family of quasi-cyclic protograph LDPC codes, which has a Vandermonde-like parity-check matrix (PCM). The second constitutes a specific class of protograph LDPC codes, termed multilevel structured (MLS) LDPC codes. These codes possess a PCM construction that allows the coexistence of pseudo-randomness and a structure requiring reduced memory. More importantly, it is also demonstrated that these benefits accrue without any compromise in the attainable BER/BLER performance. We also present the novel concept of separating multiple users by means of user-specific channel codes, referred to as channel code division multiple access (CCDMA), and provide an example based on MLS LDPC codes. In particular, we circumvent the difficulty of potentially high memory requirements, while ensuring that each user's bits in the CCDMA system are equally protected. With regard to rateless channel coding, we propose a novel family of codes, which we refer to as reconfigurable rateless codes, that are capable not only of varying their code rate but also of adaptively modifying their encoding/decoding strategy according to the near-instantaneous channel conditions. We demonstrate that the proposed reconfigurable rateless codes are capable of shaping their own degree distribution according to the near-instantaneous requirements imposed by the channel, but without any explicit channel knowledge at the transmitter.
Additionally, a generalised transmit preprocessing aided closed-loop downlink multiple-input multiple-output (MIMO) system is presented, in which both the channel coding components and the linear transmit precoder exploit knowledge of the channel state information (CSI). More explicitly, we embed a rateless code in a MIMO transmit preprocessing scheme, in order to attain near-capacity performance across a wide range of channel signal-to-noise ratios (SNRs), rather than only at a specific SNR. The performance of our scheme is further enhanced with the aid of a technique referred to as pilot symbol assisted rateless (PSAR) coding, whereby a predetermined fraction of pilot bits is appropriately interspersed with the original information bits at the channel coding stage, instead of multiplexing pilots at the modulation stage as in classic pilot symbol assisted modulation (PSAM). We subsequently demonstrate that the PSAR code-aided transmit preprocessing scheme succeeds in gleaning more information from the inserted pilots than the classic PSAM technique, because the pilot bits are not only useful for sounding the channel at the receiver but also beneficial for significantly reducing the computational complexity of the rateless channel decoder.
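The quasi-cyclic structure mentioned above keeps memory low because the full PCM is determined by a small base matrix of circulant shift values. The sketch below shows a generic quasi-cyclic lifting (illustrative base matrix and lifting size, not the thesis's Vandermonde-like or MLS constructions):

```python
# Generic quasi-cyclic PCM lifting: each base-matrix entry expands to an
# L x L block -- a cyclically shifted identity for a shift s >= 0, or an
# all-zero block for the conventional "-1" entry. Only the small integer
# base matrix needs to be stored, which is the memory saving exploited
# by quasi-cyclic LDPC codes. Values below are illustrative only.
import numpy as np

def lift(base, L):
    """Expand a base matrix of shift values into a binary QC parity-check matrix."""
    I = np.eye(L, dtype=int)
    rows = []
    for base_row in base:
        blocks = [np.roll(I, s, axis=1) if s >= 0 else np.zeros((L, L), dtype=int)
                  for s in base_row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Toy 2 x 4 base matrix: three circulant blocks per block-row.
base = [[0, 1, 2, -1],
        [1, -1, 0, 3]]
H = lift(base, L=4)
assert H.shape == (8, 16)
assert all(H[i].sum() == 3 for i in range(8))   # row weight = circulants per row
```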

    Single-Frequency Network Terrestrial Broadcasting with 5GNR Numerology

    The abstract is in the attachment.

    Design, Implementation and Evaluation of a Low Redundant Error Correction Code

    [EN] The continuous rise in the integration scale of CMOS technology has caused an increase in the fault rate. In particular, computer memory is affected by Single Cell Upsets (SCUs) and Multiple Cell Upsets (MCUs). A common method to tolerate errors in this element is the use of Error Correction Codes (ECCs). The addition of an ECC introduces a series of overheads: the silicon area, power consumption and delay overheads of the encoding and decoding circuits, as well as several extra bits added to each word to allow detecting and/or correcting errors. ECCs can be designed with different parameters in mind: low redundancy, low delay, error coverage, etc. The idea of this paper is to study the overheads produced when adding an ECC to a microprocessor. New ECCs with different characteristics are continuously proposed. However, many of these proposals only present the ECC itself, without showing its behavior when used in a microprocessor. In this work, we present the design of an ECC whose main characteristic is a low number of code bits (low redundancy). Then, we study the overhead this ECC introduces. First, we present a study of the silicon area, delay and power consumption of the encoder and decoder circuits; second, we show how the addition of this ECC affects a RISC microprocessor.

© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Gracia-Morán, J.; Saiz-Adalid, L.; Baraza-Calvo, J.; Gil Tomás, DA.; Gil, P. (2021). Design, Implementation and Evaluation of a Low Redundant Error Correction Code. IEEE Latin America Transactions. 19(11):1903-1911. https://doi.org/10.1109/TLA.2021.9475624
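The lower bound that low-redundancy SEC designs approach follows from counting syndromes: r check bits must distinguish "no error" from an error in any of the k + r bit positions. A short sketch of this generic bound (standard Hamming-bound reasoning, not the paper's specific code):

```python
# Minimum check bits r for single error correction over k data bits:
# the 2**r syndromes must name "no error" or any one of the k + r
# positions, so 2**r >= k + r + 1. Low-redundancy SEC codes approach
# this bound; this is the textbook bound, not the paper's construction.

def min_sec_check_bits(k):
    """Smallest r with 2**r >= k + r + 1 (single error correction)."""
    r = 1
    while 2 ** r < k + r + 1:
        r += 1
    return r

# Classic values: 16 data bits need 5 check bits, 32 need 6, 64 need 7.
assert [min_sec_check_bits(k) for k in (16, 32, 64)] == [5, 6, 7]
```

One extra check bit on top of these values yields the corresponding SEC-DED codes commonly used in memories.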

    Combined Time, Frequency and Space Diversity in Multimedia Mobile Broadcasting Systems

    [EN] The combined use of diversity in the time, frequency and space domains constitutes a valuable tool for improving the reception of mobile broadcast services. Thanks to the improvement achieved by diversity techniques, it is possible to extend the coverage of mobile services as well as to reduce the network infrastructure. This thesis investigates the use of diversity techniques for the provision of mobile services in the European family of terrestrial broadcasting systems standardized by the DVB (Digital Video Broadcasting) project. This includes the first- and second-generation systems DVB-T (Terrestrial), DVB-H (Handheld) and DVB-T2 (Terrestrial 2nd Generation), as well as the next-generation system DVB-NGH. Nevertheless, the study carried out in the thesis is generic and can be applied to future evolutions of standards such as the Japanese ISDB-T or the American ATSC. The research performed in the context of DVB-T, DVB-H and DVB-T2 targets the simultaneous transmission of fixed and mobile services in terrestrial networks. This convergence can ease the introduction of mobile TV services thanks to the reuse of spectrum, content and infrastructure. According to the results, the incorporation of time interleaving at the physical layer for time diversity, and of single-input multiple-output (SIMO) techniques for space diversity, is essential for the performance of mobile broadcasting systems. Although upper layer FEC (UL-FEC) techniques can provide time diversity in first-generation systems such as DVB-T and DVB-H, they require the transmission of additional parity and are not useful for static reception. The link budget analysis reveals that diversity techniques alone are not sufficient to enable the provision of mobile services in DVB-T and DVB-T2 networks planned for fixed reception. However, the use of diversity in networks planned for portable reception

Gozálvez Serrano, D. (2012). Combined Time, Frequency and Space Diversity in Multimedia Mobile Broadcasting Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16273