172 research outputs found

    AN ERROR-CORRECTION ROUTINE FOR DETECTION OF SIGNIFICANCE

    Get PDF
    Recently, the number of errors that affect more than one memory cell has increased considerably. This is caused by the scaling of the memory cells and is forecast to grow further. Interleaving can mitigate such errors, relying on the observation that the cells affected by a multiple-cell upset (MCU) are physically close; interleaving, however, has a cost, as it complicates the memory design. Research on multibit ECCs has therefore focused on reducing the decoding latency, since in many cases traditional decoders are serial and require several clock cycles. The price paid for the low decoding time is that, in general, these codes are not optimal in terms of memory overhead and require more parity check bits. The main contribution of this brief is to enable fast and efficient parallel correction of single and double-adjacent errors. Existing SEC-DAEC decoders are similar to SEC decoders, but they must also check the syndrome values that correspond to double-adjacent errors, which roughly doubles the number of comparisons. The proposed SEC-DAEC decoder requires less circuit area than both a traditional SEC-DAEC decoder and an SEC decoder. The proposed parallel SEC-DAEC decoder has been implemented in a hardware description language (HDL) and mapped to a TSMC 65-nm technology library using Synopsys Design Compiler; traditional SEC and SEC-DAEC decoders have also been implemented to show the advantages of the new decoder.
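    The comparison count mentioned above is easy to make concrete. The sketch below is my own toy Python illustration, not the brief's circuit or code; the function names, the word length n = 8, the r = 5 check bits and the injected error positions are arbitrary choices. It greedily builds a parity-check matrix whose single-bit and double-adjacent error syndromes are all distinct, then decodes by matching the syndrome against n single-error patterns plus n - 1 double-adjacent patterns, i.e. roughly twice as many comparisons as a plain SEC decoder.

        # Toy SEC-DAEC syndrome decoder sketch (illustrative only).

        def build_columns(n, r):
            # Greedily pick n distinct nonzero r-bit columns for H so that every
            # single-bit and every double-adjacent error yields a different syndrome.
            cols, pair_syndromes = [], set()
            for cand in range(1, 1 << r):
                if cand in cols or cand in pair_syndromes:
                    continue                    # would alias another correctable error
                if cols:
                    pair = cols[-1] ^ cand      # syndrome of flipping cand and its left neighbour
                    if pair in cols or pair in pair_syndromes:
                        continue
                    pair_syndromes.add(pair)
                cols.append(cand)
                if len(cols) == n:
                    return cols
            raise ValueError("not enough check bits for this word length")

        def syndrome_table(cols):
            # Map each correctable syndrome to the bit positions to flip.
            table = {c: (i,) for i, c in enumerate(cols)}                   # single errors
            table.update({cols[i] ^ cols[i + 1]: (i, i + 1)                 # double-adjacent errors
                          for i in range(len(cols) - 1)})
            return table

        def decode(received, cols, table):
            syndrome = 0
            for bit, col in zip(received, cols):
                if bit:
                    syndrome ^= col
            if syndrome == 0:
                return received
            corrected = list(received)
            for pos in table[syndrome]:
                corrected[pos] ^= 1
            return corrected

        n, r = 8, 5
        cols = build_columns(n, r)
        table = syndrome_table(cols)
        print("SEC comparisons:", n, "SEC-DAEC comparisons:", len(table))   # 8 vs. 15
        word = [0] * n                   # the all-zero word is a valid codeword
        hit = list(word)
        hit[3] ^= 1                      # inject a double-adjacent error
        hit[4] ^= 1
        print("corrected:", decode(hit, cols, table) == word)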

    Decoding techniques and a modulation scheme for band-limited communications

    Get PDF

    Single parity check product codes

    Get PDF
    This thesis presents and analyzes Single Parity Check Product Codes in detail; it first examines the concepts underlying generic product codes and then focuses on Single Parity Check Product Codes and their encoding and decoding. Concatenated encoding and iterative decoding, fundamental notions for product codes, are treated in detail, as well as the performance of SPCPCs over the binary symmetric channel and the AWGN channel.
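    As a concrete illustration of the construction discussed in the thesis, the following short Python sketch (my own toy example; the array size and bit values are arbitrary) encodes a single-parity-check product code by arranging the data in a k1 x k2 array and appending an even-parity bit to every row and column, then corrects a single bit error from the one failing row check and the one failing column check.

        def spc_product_encode(bits, k1, k2):
            # Encode k1*k2 data bits into a (k1+1) x (k2+1) array: each row and each
            # column gets an even-parity bit; the corner bit checks the parities themselves.
            assert len(bits) == k1 * k2
            rows = [bits[i * k2:(i + 1) * k2] for i in range(k1)]
            rows = [r + [sum(r) % 2] for r in rows]                            # row parities
            parity_row = [sum(r[j] for r in rows) % 2 for j in range(k2 + 1)]  # column parities
            return rows + [parity_row]

        def correct_single_error(array):
            # A single flipped bit makes exactly one row check and one column check fail;
            # their intersection locates the error.
            bad_rows = [i for i, r in enumerate(array) if sum(r) % 2]
            bad_cols = [j for j in range(len(array[0])) if sum(r[j] for r in array) % 2]
            if len(bad_rows) == 1 and len(bad_cols) == 1:
                array[bad_rows[0]][bad_cols[0]] ^= 1
            return array

        codeword = spc_product_encode([1, 0, 1, 1, 0, 0], k1=2, k2=3)
        noisy = [row[:] for row in codeword]
        noisy[1][2] ^= 1                                                       # one bit error
        print(correct_single_error(noisy) == codeword)                         # True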

    Exploration and Analysis of Combinations of Hamming Codes in 32-bit Memories

    Full text link
    Reducing the threshold voltage of electronic devices dramatically increases their sensitivity to electromagnetic radiation, increasing the probability of changes to the memory cells' content. Designers mitigate such failures with techniques such as Error Correction Codes (ECCs) to maintain information integrity. Although there are several studies of ECC usage in memories for space applications, there is still no consensus on which type of ECC to choose or how to organize it in memory. This work analyzes several configurations of Hamming codes applied to 32-bit memories intended for space applications. It proposes the use of three Hamming codes, Ham(31,26), Ham(15,11), and Ham(7,4), as well as combinations of these codes. We employed 36 error patterns, ranging from one to four bit-flips, to analyze these codes. The experimental results show that the Ham(31,26) configuration, which uses five redundancy bits, obtained the highest single-error correction rate, almost 97%, with double, triple, and quadruple error correction rates of 78.7%, 63.4%, and 31.4%, respectively. In contrast, an ECC configuration comprising four Ham(7,4) codes, which uses twelve redundancy bits, corrects only 87.5% of single errors.
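    To illustrate why splitting a word across several small Hamming blocks changes the multi-bit behaviour, here is a minimal Python sketch. It is my own toy arrangement, not one of the paper's evaluated configurations; the choice of eight Ham(7,4) blocks over a 32-bit word and the injected error positions are assumptions. Each nibble is protected independently, so two bit-flips landing in different blocks are both corrected, while two flips in the same block would exceed that block's correction capability.

        import random

        def ham74_encode(d):
            # Classic Ham(7,4): positions 1..7, parity bits at positions 1, 2 and 4.
            d3, d5, d6, d7 = d
            p1 = d3 ^ d5 ^ d7
            p2 = d3 ^ d6 ^ d7
            p4 = d5 ^ d6 ^ d7
            return [p1, p2, d3, p4, d5, d6, d7]

        def ham74_correct(c):
            # The syndrome equals the 1-based position of a single flipped bit.
            syndrome = 0
            for pos, bit in enumerate(c, start=1):
                if bit:
                    syndrome ^= pos
            c = list(c)
            if syndrome:
                c[syndrome - 1] ^= 1
            return c

        def protect_word(word_bits):
            # 32-bit word split into eight nibbles, each with its own Ham(7,4) block.
            return [ham74_encode(word_bits[i:i + 4]) for i in range(0, 32, 4)]

        random.seed(0)
        data = [random.randint(0, 1) for _ in range(32)]
        blocks = protect_word(data)
        blocks[0][6] ^= 1                      # flip one bit in block 0
        blocks[5][2] ^= 1                      # flip one bit in block 5
        repaired = [ham74_correct(b) for b in blocks]
        print(repaired == protect_word(data))  # True: the double error spans two blocks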

    Resource optimization for fault-tolerant quantum computing

    Get PDF
    In this thesis we examine a variety of techniques for reducing the resources required for fault-tolerant quantum computation. First, we show how to simplify universal encoded computation by using only transversal gates and standard error correction procedures, circumventing existing no-go theorems. We then show how to simplify ancilla preparation, reducing the cost of error correction by more than a factor of four. Using this optimized ancilla preparation, we develop improved techniques for proving rigorous lower bounds on the noise threshold. Additional overhead can be incurred because quantum algorithms must be translated into sequences of gates that are actually available in the quantum computer. In particular, arbitrary single-qubit rotations must be decomposed into a discrete set of fault-tolerant gates. We find that by using a special class of non-deterministic circuits, the cost of decomposition can be reduced by as much as a factor of four over state-of-the-art techniques, which typically use deterministic circuits. Finally, we examine global optimization of fault-tolerant quantum circuits under physical connectivity constraints. We adapt techniques from VLSI in order to minimize time and space usage for computations in the surface code, and we develop a software prototype to demonstrate the potential savings. (231 pages; Ph.D. thesis, University of Waterloo.)

    802.11 Payload Iterative decoding between multiple transmission attempts

    Get PDF
    Abstract. The Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard specifies a widely used technology for wireless local area networks (WLAN). The standard specifies high-performance physical and medium access control (MAC) layers for a distributed network but lacks an effective hybrid automatic repeat request (HARQ). Currently, the standard specifies forward error correction (FEC), error detection (ED), and automatic repeat request (ARQ), but in case of decoding errors the previously transmitted information is not used when decoding the retransmitted packet. This is called Type 1 HARQ. Type 1 HARQ uses the received energy inefficiently, but its simple implementation makes it an attractive solution. Unfortunately, research applying more sophisticated HARQ schemes on top of IEEE 802.11 is limited. This Master's thesis proposes a novel HARQ technique based on packet retransmissions that can be decoded in a turbo-like manner while keeping as much compatibility as possible with vanilla 802.11. The proposed technique is simulated with both the IEEE 802.11 code and the robust, efficient and smart communication in unpredictable environments (RESCUE) code. An additional interleaver is added before the convolutional encoder, interleaving either the whole frame or only the payload, to enable effective iterative decoding. For received frames, turbo-like iterations are performed between the initially transmitted packet copy and the retransmissions. Results are compared against maximum ratio combining (MRC), the non-iterative combining method that maximizes the signal-to-noise ratio (SNR). The main design goal for this technique is to maintain compatibility with the 802.11 standard while allowing efficient HARQ. Other design goals are range extension, higher throughput, and better performance in terms of bit error rate (BER) and frame error rate (FER). The technique can be used for range extension in the low-SNR regime and may provide up to 4 dB of gain over MRC at medium SNR. At high SNR, it can reduce the penalty of a retransmission, allowing a higher average modulation and coding scheme (MCS). However, these gains come at the cost of the computational complexity of iterative decoding. The main limiting factors of the proposed technique are decoding errors in the header and scrambler seed areas, and resource-hungry processing. In the simulations, perfect synchronization and packet detection are assumed, but in reality, especially at low SNR, packet detection and synchronization would be challenging.
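    For reference, the MRC baseline that the proposed scheme is compared against can be sketched in a few lines of Python. This is a toy simulation of mine, not the thesis's simulator; BPSK over AWGN, known channel gains h1 and h2, equal noise variance and the chosen parameter values are all assumptions. Each received copy is weighted by its channel gain before a single hard decision, which maximizes the combined SNR but, unlike the proposed scheme, involves no iterative decoding.

        import numpy as np

        rng = np.random.default_rng(1)
        bits = rng.integers(0, 2, 100000)
        symbols = 1 - 2 * bits                          # BPSK mapping: 0 -> +1, 1 -> -1

        h1, h2 = 1.0, 0.6                               # gains of the two transmissions
        sigma = 0.8                                     # noise standard deviation
        r1 = h1 * symbols + sigma * rng.standard_normal(bits.size)
        r2 = h2 * symbols + sigma * rng.standard_normal(bits.size)

        combined = h1 * r1 + h2 * r2                    # weight each copy by its gain
        print("BER, first copy only:", np.mean((r1 < 0).astype(int) != bits))
        print("BER, after MRC      :", np.mean((combined < 0).astype(int) != bits))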

    Digital VLSI Architectures for Advanced Channel Decoders

    Get PDF
    Error-correcting codes are widely adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probes. New and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of such high-gain error-correcting codes pose many challenges: they usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. This work focuses on polar codes, a recent class of channel codes with the proven ability to make the decoding error probability arbitrarily small as the block length increases, provided that the code rate is less than the capacity of the channel. This property and the recursive code construction have attracted wide interest from the communications community. Hardware architectures with reduced complexity can efficiently implement a polar code decoder using either successive-cancellation or belief-propagation algorithms; the latter offers higher throughput at high signal-to-noise ratio thanks to the inherently parallel decision-making of that decoder type. A new analysis of belief-propagation scheduling algorithms for polar codes and of the interconnection structure of the decoding trellis, not covered in the literature, is also presented. This analysis enabled a hardware implementation that increases the maximum information throughput under belief-propagation decoding while also minimizing the implementation complexity.
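    The recursive structure that both successive-cancellation and belief-propagation decoders exploit is the polar transform itself. The sketch below is a generic toy example of mine, unrelated to the architectures developed in this work; the block length, frozen-bit positions and message are arbitrary placeholders rather than a designed code. It encodes by applying the n-fold Kronecker power of the 2x2 kernel [[1,0],[1,1]] over GF(2) recursively.

        def polar_transform(u):
            # x = u * F^(n) over GF(2), F = [[1, 0], [1, 1]], computed recursively:
            # first half = transform of (u_left XOR u_right), second half = transform of u_right.
            if len(u) == 1:
                return u
            half = len(u) // 2
            top = [a ^ b for a, b in zip(u[:half], u[half:])]
            return polar_transform(top) + polar_transform(u[half:])

        N = 8
        frozen = {0, 1, 2, 4}                 # placeholder "unreliable" positions, forced to 0
        message = [1, 0, 1, 1]
        u = [0] * N
        payload = iter(message)
        for i in range(N):
            if i not in frozen:
                u[i] = next(payload)
        print("codeword:", polar_transform(u))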

    Cryptanalysis of the Fuzzy Vault for Fingerprints: Vulnerabilities and Countermeasures

    Get PDF
    The fuzzy fingerprint vault is a popular approach to protect a fingerprint's minutiae as a building block of a security application. In this thesis, simulations of several attack scenarios are conducted against implementations of the fuzzy fingerprint vault from the literature. Our investigations clearly confirm that the weakest link in the fuzzy fingerprint vault is its high vulnerability to false-accept attacks. Therefore, multi-finger or even multi-biometric cryptosystems should be conceived. But there remains a risk that cannot be resolved by using more biometric information of an individual if the features are protected with a traditional fuzzy vault construction: the correlation attack remains a weakness of such constructions. It is known that quantizing minutiae to a rigid system while filling the whole space with chaff makes correlation obsolete. Based on this approach, we propose an implementation. If parameters were adopted from a traditional fuzzy fingerprint vault implementation, we would experience a significant loss in authentication performance; therefore, we perform a training step to determine reasonable parameters for our implementation. Furthermore, to make authentication practical, the decoding procedure is randomized. By running a performance evaluation on a commonly used dataset, we find that achieving resistance against the correlation attack does not have to come at the cost of authentication performance. Finally, we conclude that the fuzzy vault remains a possible construction for helping to solve the challenging task of implementing a cryptographically secure multi-biometric cryptosystem in the future.
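    For readers unfamiliar with the underlying construction, the classic Juels-Sudan fuzzy vault that this thesis analyzes can be sketched as follows. This is a deliberately small Python toy of mine, not the thesis's implementation or parameters; the prime field GF(97), the quantised minutiae values, the chaff count and the SHA-256 check are illustrative assumptions. The secret is the coefficient vector of a polynomial, genuine minutiae contribute points on that polynomial, chaff points lie off it, and unlocking brute-forces subsets of the matched points until interpolation reproduces the stored hash.

        import hashlib
        import random
        from itertools import combinations

        P = 97                                      # toy prime field GF(97)

        def poly_eval(coeffs, x):
            # Horner evaluation; coefficients are stored lowest degree first.
            y = 0
            for c in reversed(coeffs):
                y = (y * x + c) % P
            return y

        def poly_mul(a, b):
            out = [0] * (len(a) + len(b) - 1)
            for i, ai in enumerate(a):
                for j, bj in enumerate(b):
                    out[i + j] = (out[i + j] + ai * bj) % P
            return out

        def lagrange_coeffs(points):
            # Coefficients of the unique polynomial of degree < len(points) through the points.
            k = len(points)
            coeffs = [0] * k
            for i, (xi, yi) in enumerate(points):
                basis, denom = [1], 1
                for j, (xj, _) in enumerate(points):
                    if j != i:
                        basis = poly_mul(basis, [-xj % P, 1])
                        denom = denom * (xi - xj) % P
                scale = yi * pow(denom, -1, P) % P
                for d, c in enumerate(basis):
                    coeffs[d] = (coeffs[d] + scale * c) % P
            return coeffs

        def lock(minutiae, secret_coeffs, num_chaff=40):
            # Genuine points lie on the secret polynomial; chaff points must lie off it.
            vault = [(x, poly_eval(secret_coeffs, x)) for x in minutiae]
            used_x = set(minutiae)
            while len(vault) < len(minutiae) + num_chaff:
                x, y = random.randrange(P), random.randrange(P)
                if x in used_x or y == poly_eval(secret_coeffs, x):
                    continue
                used_x.add(x)
                vault.append((x, y))
            random.shuffle(vault)
            return vault, hashlib.sha256(bytes(secret_coeffs)).hexdigest()

        def unlock(vault, check, query_minutiae, k):
            # Try every k-subset of vault points whose x matches a query minutia.
            matched = [pt for pt in vault if pt[0] in set(query_minutiae)]
            for subset in combinations(matched, k):
                candidate = lagrange_coeffs(list(subset))
                if hashlib.sha256(bytes(candidate)).hexdigest() == check:
                    return candidate
            return None

        random.seed(2)
        secret = [17, 3, 58, 9]                            # degree-3 polynomial = the key
        enrolled = [5, 12, 23, 34, 47, 61, 72, 88]         # quantised minutiae (toy values)
        vault, check = lock(enrolled, secret)
        query = [12, 23, 34, 61, 72, 90, 15]               # fresh reading, partial overlap
        print("recovered:", unlock(vault, check, query, k=len(secret)))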

    The Telecommunications and Data Acquisition Report

    Get PDF
    Deep Space Network advanced systems, very large scale integration architecture for decoders, radar interface and control units, microwave time delays, microwave antenna holography, and a radio frequency interference survey are among the topics discussed