
    Decoding of low-density parity-check codes in the presence of logic gate errors

    The ever-increasing integration of semiconductor technologies, variations caused by imperfections in the manufacturing process, as well as demands for reduced supply voltages make electronic devices inherently unreliable. Aggressive voltage scaling reduces noise immunity and leads to unreliable device operation. It is a widely accepted paradigm that future generations of digital electronic devices must be equipped with logic for correcting hardware errors... Due to the huge increase in integration density, lower supply voltages, and variations in the technological process, complementary metal-oxide-semiconductor (CMOS) and emerging nanoelectronic devices are inherently unreliable. Moreover, the demands for energy efficiency require reduction of energy consumption by several orders of magnitude, which can be achieved only by aggressive supply voltage scaling. Consequently, the signal levels are much lower and closer to the noise level, which reduces the components' noise immunity and leads to unreliable behavior. It is widely accepted that future generations of circuits and systems must be designed to deal with unreliable components...
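    As a rough illustration of how unreliable logic degrades iterative decoding, the sketch below runs density evolution for Gallager-A decoding of a (3,6)-regular LDPC code over a binary symmetric channel, with each check-node XOR output flipped with probability eps to mimic a faulty gate. The fault model, code parameters, and channel values are illustrative assumptions, not the thesis's exact analysis.

        # Density evolution for Gallager-A decoding of a (dv, dc)-regular LDPC
        # code on a binary symmetric channel with crossover probability p0.
        # Each check-node XOR output is flipped with probability eps to mimic
        # faulty logic gates (an illustrative fault model only).

        def gallager_a_de(p0, dv=3, dc=6, eps=0.0, iters=200):
            p = p0
            for _ in range(iters):
                # Check-to-variable message error probability for ideal gates...
                q = (1 - (1 - 2 * p) ** (dc - 1)) / 2
                # ...then corrupted by the faulty XOR with probability eps.
                q = q * (1 - eps) + (1 - q) * eps
                # Gallager A: a variable node forwards its channel value unless
                # all dv-1 other incoming check messages disagree with it.
                p = p0 * (1 - (1 - q) ** (dv - 1)) + (1 - p0) * q ** (dv - 1)
            return p

        for eps in (0.0, 1e-3, 1e-2):
            print(f"eps = {eps}: residual error probability {gallager_a_de(0.03, eps=eps):.2e}")

    Below the decoding threshold the error probability decays towards zero when eps = 0, while any eps > 0 leaves a residual error floor, illustrating why arbitrarily small error probability cannot be guaranteed when the decoder itself is built from faulty components.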

    Architectural Techniques to Enable Reliable and Scalable Memory Systems

    High-capacity, scalable memory systems play a vital role in enabling our desktops, smartphones, and pervasive technologies like the Internet of Things (IoT). Unfortunately, memory systems are becoming increasingly prone to faults. This is because we rely on technology scaling to improve memory density, and at small feature sizes, memory cells tend to break easily. Today, memory reliability is seen as the key impediment towards using high-density devices, adopting new technologies, and even building the next exascale supercomputer. To ensure even a bare-minimum level of reliability, present-day solutions tend to have high performance, power, and area overheads. Ideally, we would like memory systems to remain robust, scalable, and implementable while keeping the overheads to a minimum. This dissertation describes how simple cross-layer architectural techniques can provide orders of magnitude higher reliability and enable seamless scalability for memory systems while incurring negligible overheads. Comment: PhD thesis, Georgia Institute of Technology (May 2017).
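    For context on the overheads mentioned above, the sketch below is a minimal software model of a (72, 64) Hamming SECDED code, the classic single-error-correct, double-error-detect layout behind ECC DIMMs, which costs 12.5% extra storage per 64-bit word. It is illustrative only and is not the dissertation's technique; real memory controllers implement this (and often stronger codes) in hardware.

        DATA_BITS, CODE_BITS, PARITY = 64, 72, 7    # slot 0 holds the overall parity

        def encode(data):
            """data: list of 64 bits -> 72-bit SECDED codeword (list of bits)."""
            code, it = [0] * CODE_BITS, iter(data)
            for pos in range(1, CODE_BITS):
                if pos & (pos - 1):                  # non-power-of-two: data position
                    code[pos] = next(it)
            for i in range(PARITY):                  # Hamming parity at powers of two
                p = 1 << i
                for pos in range(1, CODE_BITS):
                    if pos != p and pos & p:
                        code[p] ^= code[pos]
            for pos in range(1, CODE_BITS):          # overall parity enables DED
                code[0] ^= code[pos]
            return code

        def decode(code):
            """Returns (data, status), status in {'ok', 'corrected', 'double-error'}."""
            code = list(code)
            syndrome = 0
            for i in range(PARITY):
                p = 1 << i
                if sum(code[pos] for pos in range(1, CODE_BITS) if pos & p) % 2:
                    syndrome |= p
            overall = sum(code) % 2
            if syndrome and overall:                 # single error: flip it back
                code[syndrome] ^= 1
                status = "corrected"
            elif syndrome:
                status = "double-error"              # detected but uncorrectable
            else:
                status = "corrected" if overall else "ok"
            data = [code[pos] for pos in range(1, CODE_BITS) if pos & (pos - 1)]
            return data, status

        word = [1, 0] * 32
        cw = encode(word)
        cw[10] ^= 1                                  # inject a single bit flip
        print(decode(cw)[1])                         # -> corrected

    Injecting one bit flip is corrected transparently; two flips are detected but not corrected, which is exactly the bare-minimum protection level the abstract alludes to.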

    Tailoring surface codes: Improvements in quantum error correction with biased noise

    Quantum computers will require error correction to reach their full potential. We study the surface code, one of the most promising quantum error correcting codes, in the context of predominantly dephasing (Z-biased) noise, as found in many quantum architectures. We find that the surface code is highly resilient to Y-biased noise, and tailor it to Z-biased noise, whilst retaining its practical features. We demonstrate ultrahigh thresholds for the tailored surface code: ~39% with a realistic bias of η = 100, and ~50% with pure Z noise, far exceeding known thresholds for the standard surface code: ~11% with pure Z noise, and ~19% with depolarizing noise. Furthermore, we provide strong evidence that the threshold of the tailored surface code tracks the hashing bound for all biases. We reveal the hidden structure of the tailored surface code with pure Z noise that is responsible for these ultrahigh thresholds. As a consequence, we prove that its threshold with pure Z noise is 50%, and we show that its distance to Z errors, and the number of failure modes, can be tuned by modifying its boundary. For codes with appropriately modified boundaries, the distance to Z errors is O(n), compared to O(√n) for square codes, where n is the number of physical qubits. We demonstrate that these characteristics yield a significant improvement in logical error rate with pure Z and Z-biased noise. Finally, we introduce an efficient approach to decoding that exploits code symmetries with respect to a given noise model and extends readily to the fault-tolerant context, where measurements are unreliable. We use this approach to define a decoder for the tailored surface code with Z-biased noise. Although the decoder is suboptimal, we observe exceptionally high fault-tolerant thresholds of ~5% with bias η = 100 and exceeding 6% with pure Z noise. Our results open up many avenues of research and, together with recent developments in bias-preserving gates, highlight their direct relevance to experiment.
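    The hashing-bound claim can be checked numerically. The sketch below solves for the zero-rate hashing bound of a biased Pauli channel, assuming the common convention that the bias is η = pZ / (pX + pY) with pX = pY and total error rate p = pX + pY + pZ (this parametrization is an assumption here, not taken verbatim from the thesis).

        import math

        def entropy_bits(probs):
            return -sum(q * math.log2(q) for q in probs if q > 0)

        def pauli_probs(p, eta):
            if math.isinf(eta):                      # pure Z noise
                return [1 - p, 0.0, 0.0, p]
            p_z = p * eta / (eta + 1)
            p_x = p_y = p * 0.5 / (eta + 1)
            return [1 - p, p_x, p_y, p_z]

        def hashing_bound(eta, tol=1e-10):
            # Hashing bound rate: R = 1 - H(p_I, p_X, p_Y, p_Z); the zero-rate
            # threshold is the p at which the entropy reaches 1 bit.
            lo, hi = 1e-12, 0.5
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if entropy_bits(pauli_probs(mid, eta)) < 1.0:
                    lo = mid
                else:
                    hi = mid
            return (lo + hi) / 2

        for eta in (0.5, 100, math.inf):             # eta = 0.5 is depolarizing noise
            print(f"eta = {eta}: zero-rate hashing bound ~ {hashing_bound(eta):.4f}")

    Under this parametrization the bisection returns roughly 0.19 for depolarizing noise, 0.39 for η = 100, and 0.50 for pure Z noise, consistent with the figures quoted in the abstract.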

    Changing edges in graphical model algorithms

    Graphical models are used to describe the interactions in structures, such as the nodes in decoding circuits, agents in small-world networks, and neurons in our brains. These structures are often not static and can change over time, resulting in removal of edges, extra nodes, or changes in the weights of the links in the graphs. For example, wires in message-passing decoding circuits can be misconnected due to process variation in nanoscale manufacturing or circuit aging, the style of passes among soccer players can change based on the team's strategy, and the connections among neurons can be broken due to Alzheimer's disease. The effects of these changes in graphs can reveal useful information and inspire approaches to understand some challenging problems. In this work, we investigate the dynamic changes of edges in graphs and develop mathematical tools to analyze the effects of these changes by embedding the graphical models in two applications. The first half of the work is about the performance of message-passing LDPC decoders in the presence of permanently and transiently missing connections, which is equivalent to the removal of edges in the codes' graphical representations (Tanner graphs). We prove concentration and convergence theorems that validate the use of density evolution performance analysis and conclude that arbitrarily small error probability is not possible for decoders with missing connections. However, we find suitably defined decoding thresholds for communication systems with binary erasure channels under peeling decoding, as well as binary symmetric channels under Gallager A and B decoding. We see that decoding is robust to missing wires, as decoding thresholds degrade smoothly. Surprisingly, we discovered the stochastic facilitation (SF) phenomenon in Gallager B decoders, where having more missing connections helps improve the decoding thresholds under some conditions. The second half of the work is about the advantages of the semi-metric property of complex weighted networks. Nodes in graphs represent elements in systems and edges describe the level of interactions among the nodes. A semi-metric edge in a graph, which violates the triangle inequality, indicates that there is another latent relation between the pair of nodes connected by the edge. We show the equivalence between modelling a sporting event using a stochastic Markov chain and an algebraic diffusion process, and we also show that using the algebraic representation to calculate the stationary distribution of a network can preserve the graph's semi-metric property, which is lost in stochastic models. These semi-metric edges can be treated as redundant and pruned in all-pairs shortest-path problems to accelerate computations, which can be applied to more complicated problems such as PageRank. We then further demonstrate the advantages of semi-metricity in graphs by showing that the percentage of semi-metric edges in the interaction graphs of two soccer teams changes linearly with the final score. Interestingly, these redundant edges can be interpreted as a measure of a team's tactics.
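    As a small illustration of the semi-metric idea from the second half, the sketch below marks edges whose direct weight exceeds the shortest indirect path between their endpoints, i.e. edges that violate the triangle inequality and can be pruned from all-pairs shortest-path computations. The toy graph and its distance-like weights are made up for illustration; the thesis's sports-network data is not reproduced here.

        import math

        def semi_metric_edges(n, edges):
            """edges: dict {(u, v): weight} for an undirected graph on nodes 0..n-1."""
            INF = math.inf
            d = [[INF] * n for _ in range(n)]
            for i in range(n):
                d[i][i] = 0.0
            for (u, v), w in edges.items():
                d[u][v] = min(d[u][v], w)
                d[v][u] = min(d[v][u], w)
            # Floyd-Warshall all-pairs shortest paths.
            for k in range(n):
                for i in range(n):
                    for j in range(n):
                        if d[i][k] + d[k][j] < d[i][j]:
                            d[i][j] = d[i][k] + d[k][j]
            # An edge is semi-metric if some indirect route is strictly shorter.
            return [(u, v) for (u, v), w in edges.items() if d[u][v] < w]

        edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 5.0, (2, 3): 2.0}
        print(semi_metric_edges(4, edges))           # -> [(0, 2)]

    In the toy graph, the direct edge (0, 2) of weight 5 is semi-metric because the indirect route 0-1-2 has length 2, so it is redundant for shortest-path purposes.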

    Designs for increasing reliability while reducing energy and increasing lifetime

    In recent decades, computing technology has experienced tremendous developments. For instance, transistors' feature size has halved roughly every two years, consistently since Moore first stated his law. Consequently, the number of transistors and the core count per chip double with each generation. Similarly, petascale systems capable of performing more than one quadrillion (10^15) calculations per second have been developed, and exascale systems are predicted to be available by the year 2020. However, these developments in computer systems face a reliability wall. For instance, transistor feature sizes are becoming so small that it is easier for high-energy particles to temporarily flip the state of a memory cell from 1 to 0 or 0 to 1. Also, even if we assume that the fault rate per transistor stays constant with scaling, the increase in total transistor and core count per chip will significantly increase the number of faults in future desktop and exascale systems. Moreover, circuit ageing is exacerbated by increased manufacturing variability and thermal stress, so the lifetimes of processor structures are becoming shorter. On the other hand, the limited power budget of computer systems such as mobile devices makes it attractive to scale down the supply voltage; however, when the voltage is scaled beyond the safe margin, especially to ultra-low levels, the error rate increases drastically. In addition, new memory technologies such as NAND flash offer only a limited nominal lifetime, beyond which they cannot guarantee correct data storage, leading to data retention problems. Due to these issues, reliability has become a first-class design constraint for contemporary computing, alongside power and performance. Reliability plays an increasingly important role when computer systems process sensitive and life-critical information such as health records, financial data, power regulation, and transportation. In this thesis, we present several reliability designs for detecting and correcting errors that occur in processor pipelines, L1 caches, and non-volatile NAND flash memories for various reasons. Our reliability solutions serve three main purposes. First, we aim to improve the reliability of computer systems by detecting and correcting random, unpredictable errors such as bit flips or ageing errors. Second, we aim to reduce the energy consumption of computer systems by allowing them to operate reliably at ultra-low voltage levels. Third, we aim to increase the lifetime of new memory technologies by implementing efficient, low-cost reliability schemes.
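    To ground the scaling argument, the back-of-the-envelope Poisson sketch below shows how a constant per-transistor fault rate still translates into many more chip-level faults as transistor counts grow. The FIT value is a made-up placeholder, not measured data from the thesis.

        import math

        FIT_PER_TRANSISTOR = 1e-7      # hypothetical faults per 10^9 device-hours

        def chip_fault_stats(transistors, hours):
            fit_chip = FIT_PER_TRANSISTOR * transistors    # FIT adds up linearly
            rate_per_hour = fit_chip / 1e9
            expected_faults = rate_per_hour * hours
            p_at_least_one = 1 - math.exp(-expected_faults)  # Poisson model
            return expected_faults, p_at_least_one

        for n in (1e9, 1e10, 1e11):    # one to one hundred billion transistors
            exp_f, p1 = chip_fault_stats(n, hours=24 * 365)
            print(f"{n:.0e} transistors: {exp_f:.4f} expected faults per year, "
                  f"P(at least one fault) = {p1:.4f}")

    Under this model the expected number of faults per chip-year grows in direct proportion to the transistor count, even though the per-device rate is held fixed.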

    Resource optimization for fault-tolerant quantum computing

    In this thesis we examine a variety of techniques for reducing the resources required for fault-tolerant quantum computation. First, we show how to simplify universal encoded computation by using only transversal gates and standard error correction procedures, circumventing existing no-go theorems. We then show how to simplify ancilla preparation, reducing the cost of error correction by more than a factor of four. Using this optimized ancilla preparation, we develop improved techniques for proving rigorous lower bounds on the noise threshold. Additional overhead can be incurred because quantum algorithms must be translated into sequences of gates that are actually available in the quantum computer. In particular, arbitrary single-qubit rotations must be decomposed into a discrete set of fault-tolerant gates. We find that by using a special class of non-deterministic circuits, the cost of decomposition can be reduced by as much as a factor of four over state-of-the-art techniques, which typically use deterministic circuits. Finally, we examine global optimization of fault-tolerant quantum circuits under physical connectivity constraints. We adapt techniques from VLSI in order to minimize time and space usage for computations in the surface code, and we develop a software prototype to demonstrate the potential savings. Comment: 231 pages, Ph.D. thesis, University of Waterloo.
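    As a concrete instance of the decomposition problem mentioned above, the sketch below brute-forces short deterministic H/T sequences approximating a single-qubit Z rotation, scored with a global-phase-invariant distance. This is a baseline deterministic search for illustration only, not the thesis's non-deterministic circuits or its optimized constructions.

        import numpy as np
        from itertools import product

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])

        def rz(theta):
            return np.array([[np.exp(-1j * theta / 2), 0],
                             [0, np.exp(1j * theta / 2)]])

        def dist(U, V):
            # Global-phase-invariant distance between single-qubit unitaries.
            return np.sqrt(max(0.0, 1 - abs(np.trace(U.conj().T @ V)) / 2))

        def best_sequence(target, max_len=10):
            best = (float("inf"), "")
            for L in range(1, max_len + 1):
                for seq in product("HT", repeat=L):
                    U = np.eye(2)
                    for g in seq:                     # apply gates left to right
                        U = (H if g == "H" else T) @ U
                    d = dist(U, target)
                    if d < best[0]:
                        best = (d, "".join(seq))
            return best

        err, seq = best_sequence(rz(np.pi / 16), max_len=10)
        print(f"best H/T sequence: {seq}  (distance {err:.4f})")

    Increasing max_len improves the approximation at exponential search cost, which is one reason smarter decomposition methods, including the non-deterministic circuits studied in the thesis, are worthwhile.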