
    Fault Secure Encoder and Decoder for NanoMemory Applications

    Memory cells have been protected from soft errors for more than a decade; with the increase in soft-error rates in logic circuits, the encoder and decoder circuitry around the memory blocks has become susceptible to soft errors as well and must also be protected. We introduce a new approach to designing fault-secure encoder and decoder circuitry for memory designs. The key novel contribution of this paper is identifying and defining a new class of error-correcting codes whose redundancy makes the design of fault-secure detectors (FSD) particularly simple. We further quantify the importance of protecting encoder and decoder circuitry against transient errors, illustrating a scenario where the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder. We prove that Euclidean geometry low-density parity-check (EG-LDPC) codes have the fault-secure detector capability. Using some of the smaller EG-LDPC codes, we can tolerate bit or nanowire defect rates of 10% and fault rates of 10^(-18) upsets/device/cycle, achieving a FIT rate at or below one for the entire memory system and a memory density of 10^(11) bit/cm^2 with a nanowire pitch of 10 nm for memory blocks of 10 Mb or larger. Larger EG-LDPC codes can achieve even higher reliability and lower area overhead.
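
    As a hedged illustration of the detector concept, the sketch below flags any received word whose syndrome is nonzero. It uses the (7,4) Hamming parity-check matrix as a stand-in for the paper's EG-LDPC codes, and it shows only the syndrome check, not the fault-secure property of the detector circuitry itself.

    ```python
    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code, used here as a
    # stand-in for the EG-LDPC codes analyzed in the paper.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def detect(received):
        """Flag an error when the syndrome H*r (mod 2) is nonzero."""
        return bool((H @ received % 2).any())

    codeword = np.array([1, 0, 1, 1, 0, 1, 0])      # a valid codeword of H
    assert not detect(codeword)
    corrupted = codeword ^ np.eye(7, dtype=int)[2]  # flip one bit
    assert detect(corrupted)
    ```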

    Input-Output Logic based Fault-Tolerant Design Technique for SRAM-based FPGAs

    Effects of radiation on electronic circuits used in extra-terrestrial applications and other radiation-prone environments need to be mitigated. Since FPGAs offer flexibility, the effects of radiation on them need to be studied and robust methods of fault tolerance devised. In this paper, a new fault-tolerant design strategy is presented. This strategy exploits the relation between changes in the inputs and the expected change in the output: essentially, it predicts whether or not a change in the output is expected and thereby detects the error. As a result, this strategy reduces the hardware and time redundancy required by existing strategies such as Duplication with Comparison (DWC) and Triple Modular Redundancy (TMR). The design arising from this strategy has been simulated and its robustness to fault injection verified. Simulations for a 16-bit multiplier show that the new design strategy performs better than the state of the art on critical factors such as hardware redundancy, time redundancy, and power consumption.
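
    For context, here is a minimal sketch of the TMR baseline that the proposed strategy is compared against: three redundant copies of a module and a bitwise majority voter. The 16-bit multiplier operands and the injected fault mask are illustrative values, not the paper's testbench.

    ```python
    def tmr_vote(a: int, b: int, c: int) -> int:
        """Bitwise majority vote over three redundant module outputs."""
        return (a & b) | (a & c) | (b & c)

    # A transient fault flips bits in one of three copies of a 16-bit multiply.
    x, y = 0xBEEF, 0x0042
    good = x * y
    faulty = good ^ 0x0F00          # injected single-event upset
    assert tmr_vote(good, good, faulty) == good
    ```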

    Universal set of Dynamically Protected Gates for Bipartite Qubit Networks II: Soft Pulse Implementation of the [[5,1,3]] Quantum Error Correcting Code

    We model repetitive quantum error correction (QEC) with the single-error-correcting five-qubit code on a network of individually controlled qubits with always-on Ising couplings, using our previously designed universal set of quantum gates based on sequences of shaped decoupling pulses. In addition to serving as accurate quantum gates, the sequences also provide dynamical decoupling (DD) of low-frequency phase noise. The simulation involves integrating the unitary dynamics of six qubits over the duration of tens of thousands of control pulses, using classical stochastic phase noise as a source of decoherence. The combined DD/QEC protocol dramatically improves the coherence, with the QEC alone responsible for more than an order-of-magnitude reduction in infidelity.
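
    As a small grounded check on the code being simulated: the [[5,1,3]] code's stabilizer group is generated by the cyclic shifts of XZZXI. The sketch below builds the four generators explicitly and verifies that they mutually commute, which is what allows their joint measurement in each QEC round.

    ```python
    import numpy as np
    from functools import reduce

    PAULI = {'I': np.eye(2),
             'X': np.array([[0, 1], [1, 0]]),
             'Z': np.array([[1, 0], [0, -1]])}

    def operator(word):
        """Tensor product of single-qubit Paulis, e.g. 'XZZXI'."""
        return reduce(np.kron, (PAULI[c] for c in word))

    # Stabilizer generators of the [[5,1,3]] code: cyclic shifts of XZZXI.
    gens = [operator(w) for w in ('XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ')]

    for i, g in enumerate(gens):
        for h in gens[i + 1:]:
            assert np.allclose(g @ h, h @ g)  # generators commute pairwise
    ```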

    Soft Realization: a Bio-inspired Implementation Paradigm

    Researchers traditionally solve computational problems through rigorous and deterministic algorithms, known as hard computing. These precise algorithms have widely been realized using digital technology, an inherently reliable and accurate implementation platform, in either hardware or software form. This rigid form of implementation, which we refer to as Hard Realization, relies on strict algorithmic accuracy constraints dictated to digital design engineers. Hard realization accepts whatever implementation cost is necessary to preserve computational precision and determinism throughout all design and implementation steps. Despite its prior accomplishments, this conventional paradigm has encountered serious challenges with today's emerging applications and implementation technologies. Unlike traditional hard computing, emerging soft and bio-inspired algorithms do not rely on fully precise and deterministic computation. Moreover, incoming nanotechnologies face increasing reliability issues that prevent them from being efficiently exploited in the hard realization of applications. This article examines Soft Realization, a novel bio-inspired approach to the design and implementation of an important category of applications, modeled on the internal structure of the brain. The proposed paradigm mitigates major weaknesses of hard realization by (1) alleviating incompatibilities with today's soft and bio-inspired algorithms such as artificial neural networks, fuzzy systems, and human-sense signal-processing applications, and (2) resolving the destructive inconsistency with unreliable nanotechnologies. Our experimental results on a set of well-known soft applications, implemented using the proposed soft realization paradigm in both reliable and unreliable technologies, indicate that significant energy, delay, and area savings can be obtained compared to the conventional implementation.
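
    One concrete flavor of this precision-for-cost tradeoff, drawn from the imprecise-computing literature rather than prescribed by this paper, is an approximate adder that shortens the carry chain by OR-ing the low-order bits:

    ```python
    def loa_add(a: int, b: int, k: int) -> int:
        """Lower-part-OR adder: approximate the low k bits with a carry-free
        bitwise OR and add only the upper bits exactly. Illustrative of the
        precision/cost tradeoff; not a circuit from this paper."""
        mask = (1 << k) - 1
        low = (a & mask) | (b & mask)      # cheap, carry-free approximation
        high = ((a >> k) + (b >> k)) << k  # exact addition of the upper bits
        return high | low

    print(1000 + 999, loa_add(1000, 999, 4))  # exact vs. approximate sum
    ```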

    Design of Asynchronous Circuits for High Soft Error Tolerance in Deep Submicron CMOS Circuits

    As devices scale down, combinational logic becomes increasingly susceptible to soft errors. Conventional methods for tolerating soft errors in combinational logic do not provide sufficiently high soft-error tolerance at a reasonably small performance penalty. This paper investigates the feasibility of designing quasi-delay-insensitive (QDI) asynchronous circuits for high soft-error tolerance. We analyze the behavior of null convention logic (NCL) circuits in the presence of particle strikes, and propose an asynchronous pipeline for soft-error correction along with a novel technique to improve the robustness of threshold gates, the basic components of NCL, against particle strikes by using a Schmitt trigger circuit and resizing the feedback transistor. Experimental results show that the proposed threshold gates do not generate soft errors under a particle strike within a certain energy range if a proper transistor size is applied. The penalties, such as delay and power consumption, are also presented.
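
    A behavioral sketch of the hysteresis that makes NCL threshold gates state-holding (and hence sensitive to particle strikes on the feedback path): a TH23 gate asserts its output once two of three inputs are asserted and holds it until all inputs return to NULL. This models only the logic-level behavior, not the transistor-level Schmitt-trigger hardening the paper proposes.

    ```python
    class TH23:
        """NCL TH23 threshold gate: output asserts when >= 2 of 3 inputs are
        asserted and, by hysteresis, deasserts only when all inputs are 0."""
        def __init__(self):
            self.out = 0

        def step(self, a, b, c):
            n = a + b + c
            if n >= 2:
                self.out = 1      # threshold reached: assert
            elif n == 0:
                self.out = 0      # NULL wavefront: reset
            return self.out       # otherwise hold (hysteresis)

    g = TH23()
    assert g.step(1, 1, 0) == 1   # DATA wavefront
    assert g.step(1, 0, 0) == 1   # held despite an input dropping
    assert g.step(0, 0, 0) == 0   # NULL wavefront clears the gate
    ```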

    Additively manufacturable micro-mechanical logic gates.

    Early computers were almost exclusively based on mechanical devices. Although electronic computers have been dominant for the past 60 years, recent advances in three-dimensional micro-additive manufacturing provide new fabrication techniques for complex microstructures, which have rekindled research interest in mechanical computation. Here we propose a new digital mechanical computation approach based on additively manufacturable micro-mechanical logic gates. The proposed mechanical logic gates (i.e., NOT, AND, OR, NAND, and NOR gates) utilize multi-stable micro-flexures that buckle to perform Boolean computations based purely on mechanical forces and displacements, with no electronic components. A key benefit of the proposed approach is that such systems can be additively fabricated as embedded parts of microarchitected metamaterials that are capable of interacting mechanically with their surrounding environment while processing and storing digital data internally, without requiring electric power.
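
    The abstract gives no equations, so the toy model below only caricatures the idea: treat each flexure as a threshold element that snaps to its second stable state when the summed input displacement exceeds a preload, and read the gate's Boolean output from whether it buckled. The thresholds and polarities are illustrative assumptions, not the paper's measured mechanics.

    ```python
    def flexure_gate(inputs, threshold, invert=False):
        """Toy bistable-flexure model: the element buckles when the summed
        input displacement reaches the threshold; 'invert' selects which
        stable state reads as logic 1."""
        buckled = sum(inputs) >= threshold
        return int(buckled) ^ int(invert)

    AND  = lambda a, b: flexure_gate([a, b], threshold=2)
    OR   = lambda a, b: flexure_gate([a, b], threshold=1)
    NAND = lambda a, b: flexure_gate([a, b], threshold=2, invert=True)
    NOR  = lambda a, b: flexure_gate([a, b], threshold=1, invert=True)

    assert [NAND(a, b) for a in (0, 1) for b in (0, 1)] == [1, 1, 1, 0]
    assert [NOR(a, b) for a in (0, 1) for b in (0, 1)] == [1, 0, 0, 0]
    ```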

    Interconnection Networks for Scalable Quantum Computers

    We show that the problem of communication in a quantum computer reduces to constructing reliable quantum channels by distributing high-fidelity EPR pairs. We develop analytical models of the latency, bandwidth, error rate, and resource utilization of such channels, and show that hundreds of qubits must be distributed to accommodate a single data communication. Next, we show that a grid of teleportation nodes forms a good substrate on which to distribute EPR pairs. We also explore the control requirements for such a network. Finally, we propose a specific routing architecture and simulate the communication patterns of the Quantum Fourier Transform to demonstrate the impact of resource contention.
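
    To make the resource estimate concrete, here is a hedged back-of-the-envelope sketch using a textbook simplification of the BBPSSW purification recurrence: each round consumes two pairs of fidelity F to produce one of higher fidelity. The count below ignores purification failures, channel loss, and teleportation overhead, which is what pushes real requirements into the hundreds of qubits.

    ```python
    def purify(F: float) -> float:
        """Success-branch fidelity after combining two pairs of fidelity F
        (a textbook simplification of the BBPSSW recurrence)."""
        return F**2 / (F**2 + (1 - F)**2)

    F, raw_pairs = 0.85, 1
    while F < 0.99:
        F = purify(F)
        raw_pairs *= 2   # each round consumes two input pairs per output pair
    print(raw_pairs, round(F, 4))  # minimum raw EPR pairs per delivered pair
    ```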

    Design and Evaluation of Radiation-Hardened Standard Cell Flip-Flops

    Use of a standard non-rad-hard digital cell library in a rad-hard design can be a cost-effective solution for space applications. In this paper we demonstrate how a standard non-rad-hard flip-flop, one of the most vulnerable digital cells, can be converted into a rad-hard flip-flop without modifying its internal structure. We present five variants of a Triple Modular Redundancy (TMR) flip-flop: a baseline TMR flip-flop, a latch-based TMR flip-flop, a True Single-Phase Clock (TSPC) TMR flip-flop, a scannable TMR flip-flop, and a self-correcting TMR flip-flop. For all variants, multi-bit upsets have been addressed by applying special placement constraints, while Single Event Transient (SET) mitigation was achieved through customized SET filters and the selection of optimal inverter sizes for the clock and reset trees. The proposed flip-flop variants differ in performance, making it possible to choose the optimal solution for every sensitive node in the circuit according to the predefined design constraints. Several flip-flop designs have been validated on IHP's 130 nm BiCMOS process by irradiation of custom-designed shift registers. It has been shown that the proposed TMR flip-flops are robust to soft errors with a threshold Linear Energy Transfer (LET) from 32.4 MeV·cm²/mg to 62.5 MeV·cm²/mg, depending on the variant.
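
    Below is a behavioral sketch of the self-correcting variant's idea as suggested by the abstract: majority voting with the voted value written back so an upset copy is scrubbed rather than merely outvoted. The five variants differ at the circuit level, and none of those details are modeled here.

    ```python
    def majority(a: int, b: int, c: int) -> int:
        return (a & b) | (a & c) | (b & c)

    class SelfCorrectingTMRFF:
        """Three redundant latches, a majority-voted output, and write-back
        of the voted value so a single upset is corrected, not just masked."""
        def __init__(self):
            self.q = [0, 0, 0]

        def clock(self, d):
            self.q = [d, d, d]            # all three copies capture D

        def upset(self, i):
            self.q[i] ^= 1                # simulate a single-event upset

        def read(self):
            voted = majority(*self.q)
            self.q = [voted] * 3          # self-correction: scrub the copy
            return voted

    ff = SelfCorrectingTMRFF()
    ff.clock(1)
    ff.upset(0)
    assert ff.read() == 1 and ff.q == [1, 1, 1]
    ```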

    Robust control of quantum gates via sequential convex programming

    Resource tradeoffs can often be established by solving an appropriate robust optimization problem for a variety of scenarios involving constraints on optimization variables and uncertainties. Using an approach based on sequential convex programming, we demonstrate that quantum gate transformations can be made substantially robust against uncertainties while simultaneously using limited resources of control amplitude and bandwidth. Achieving such a high degree of robustness requires a quantitative model that specifies the range and character of the uncertainties. Using a model of a controlled one-qubit system for illustrative simulations, we identify robust control fields for a universal gate set and explore the tradeoff between the worst-case gate fidelity and the field fluence. Our results demonstrate that, even for this simple model, there exists a rich variety of control design possibilities. In addition, we study the effect of noise represented by a stochastic uncertainty model.
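
    A toy version of the worst-case objective for one qubit: piecewise-constant controls about x, an uncertain detuning about z sampled on a small grid, and direct numerical maximization of the worst-case fidelity of a NOT gate. This sketch only illustrates the minimax objective; the paper's method solves a sequence of convex subproblems rather than the direct optimization used here, and all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import expm
    from scipy.optimize import minimize

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    U_target = X                          # target gate: a NOT (pi x-rotation)
    dt, deltas = 0.2, (-0.1, 0.0, 0.1)    # segment length; sampled detunings

    def fidelity(u, delta):
        U = np.eye(2, dtype=complex)
        for uk in u:                      # piecewise-constant propagation
            U = expm(-1j * dt * (uk * X + delta * Z)) @ U
        return abs(np.trace(U_target.conj().T @ U)) / 2

    def worst_case(u):
        return min(fidelity(u, d) for d in deltas)

    res = minimize(lambda u: -worst_case(u), x0=np.full(8, 1.0))
    print(round(worst_case(res.x), 4))    # robust worst-case NOT fidelity
    ```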

    Exploiting Errors for Efficiency: A Survey from Circuits to Algorithms

    When a computational task tolerates a relaxation of its specification, or when an algorithm tolerates the effects of noise in its execution, hardware, programming languages, and system software can trade deviations from correct behavior for lower resource usage. We present, for the first time, a synthesis of research results on computing systems that make only as many errors as their users can tolerate, drawn from the disciplines of computer-aided design of circuits, digital system design, computer architecture, programming languages, operating systems, and information theory. Rather than over-provisioning resources at each layer to avoid errors, it can be more efficient to exploit the masking of errors at one layer, which prevents them from propagating to higher layers. We survey tradeoffs for individual layers of computing systems from the circuit level to the operating system level, and illustrate the potential benefits of end-to-end approaches using two illustrative examples. To tie the survey together, we present a consistent formalization of terminology across the layers that does not significantly deviate from the terminology traditionally used by the research communities in their layers of focus.
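
    As a tiny instance of such masking, the Monte Carlo sketch below estimates how often a bit flip on one input of an AND gate is logically masked and never reaches the output: with uniform inputs, a flip on input a is invisible whenever b is 0.

    ```python
    import random

    def masked_fraction(trials: int = 100_000) -> float:
        """Estimate logical masking in a 2-input AND gate under a single
        injected bit flip on input a."""
        masked = 0
        for _ in range(trials):
            a, b = random.randint(0, 1), random.randint(0, 1)
            if ((a ^ 1) & b) == (a & b):  # output unchanged: error masked
                masked += 1
        return masked / trials

    print(masked_fraction())  # ~0.5: the flip is masked whenever b == 0
    ```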