33 research outputs found

    Multiple bit error correcting architectures over finite fields

    This thesis proposes techniques to mitigate multiple bit errors in Galois field (GF) arithmetic circuits. GF arithmetic circuits such as multipliers constitute complex and important functional units of a crypto-processor, so making them fault tolerant improves the reliability of circuits employed in safety-critical applications, where unmitigated errors may have catastrophic consequences. First, a thorough literature review is carried out: the merits of existing schemes are analyzed to identify the room for improvement in error correction, area and power consumption. The proposed error correction schemes include bit-parallel ones using optimized BCH codes, suited to applications where power and area are not the prime concerns; this scheme is also extended to a dynamically correcting variant that reduces decoder delay. Another method based on cross-parity codes is proposed for low-power, low-area applications such as RFIDs and smart cards. Experimental evaluation shows that the proposed techniques mitigate single and multiple bit errors with wider error coverage than existing methods, at lower area and power consumption. The proposed schemes mask errors appearing at the output of the circuit irrespective of their cause. The thesis also investigates error mitigation schemes in emerging technologies (QCA, CNTFET) to compare their area, power and delay with CMOS equivalents. Although the proposed multiple error correcting techniques cannot guarantee 100% error mitigation, including them in an actual design can improve circuit reliability and increase the difficulty of attacking crypto-devices. The proposed schemes can also be extended to non-GF digital circuits.
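
    As a rough illustration of the cross-parity idea mentioned above, the sketch below arranges the 8-bit output of a GF(2^8) multiplier in a 2 x 4 grid and uses row and column parities to locate and correct a single injected bit flip. The field polynomial, the grid shape, and the use of a fault-free product as the parity reference are assumptions made for brevity; the thesis's actual schemes predict the check bits from the multiplier inputs rather than from a golden output.

        # Illustrative sketch only: a toy cross-parity check on the 8-bit output of a
        # GF(2^8) multiplier. The field polynomial and 2x4 bit arrangement are assumptions.

        def gf256_mult(a, b, poly=0x11B):
            """Bit-serial multiplication in GF(2^8) modulo the given polynomial."""
            p = 0
            for _ in range(8):
                if b & 1:
                    p ^= a
                b >>= 1
                a <<= 1
                if a & 0x100:
                    a ^= poly
            return p & 0xFF

        def cross_parity(word, rows=2, cols=4):
            """Row and column parities of an 8-bit word laid out as a rows x cols grid."""
            bits = [(word >> i) & 1 for i in range(rows * cols)]
            row_par = [sum(bits[r * cols:(r + 1) * cols]) & 1 for r in range(rows)]
            col_par = [sum(bits[c::cols]) & 1 for c in range(cols)]
            return row_par, col_par

        # Fault-free product and its reference parities (stand-in for predicted check bits)
        a, b = 0x57, 0x83
        good = gf256_mult(a, b)
        ref_rows, ref_cols = cross_parity(good)

        # Inject a single-bit error at the multiplier output, then locate and correct it
        faulty = good ^ (1 << 5)
        row_p, col_p = cross_parity(faulty)
        bad_row = [r for r in range(2) if row_p[r] != ref_rows[r]][0]
        bad_col = [c for c in range(4) if col_p[c] != ref_cols[c]][0]
        corrected = faulty ^ (1 << (bad_row * 4 + bad_col))
        assert corrected == good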

    Design, Analysis and Test of Logic Circuits under Uncertainty.

    Integrated circuits are increasingly susceptible to uncertainty caused by soft errors, inherently probabilistic devices, and manufacturing variability. As device technologies scale, these effects become detrimental to circuit reliability. In order to address this, we develop methods for analyzing, designing, and testing circuits subject to probabilistic effects. Our main contributions are: 1) a fast, soft-error rate (SER) analyzer that uses functional-simulation signatures to capture error effects, 2) novel design techniques that improve reliability using little area and performance overhead, 3) a matrix-based reliability-analysis framework that captures many types of probabilistic faults, and 4) test-generation/compaction methods aimed at probabilistic faults in logic circuits. SER analysis must account for the main error-masking mechanisms in ICs: logic, timing, and electrical masking. We relate logic masking to node testability of the circuit and utilize functional-simulation signatures, i.e., partial truth tables, to efficiently compute testability (signal probability and observability). To account for timing masking, we compute error-latching windows (ELWs) from timing analysis information. Electrical masking is incorporated into our estimates through derating factors for gate error probabilities. The SER of a circuit is computed by combining the effects of all three masking mechanisms within our SER analyzer called AnSER. Using AnSER, we develop several low-overhead techniques that increase reliability, including: 1) an SER-aware design method that uses redundancy already present within the circuit, 2) a technique that resynthesizes small logic windows to improve area and reliability, and 3) a post-placement gate-relocation technique that increases timing masking by decreasing ELWs. We develop the probabilistic transfer matrix (PTM) modeling framework to analyze effects beyond soft errors. PTMs are compressed into algebraic decision diagrams (ADDs) to improve computational efficiency. Several ADD algorithms are developed to extract reliability and error susceptibility information from PTMs representing circuits. We propose new algorithms for circuit testing under probabilistic faults, which require a reformulation of existing test techniques. For instance, a test vector may need to be repeated many times to detect a fault. Also, different vectors detect the same fault with different probabilities. We develop test generation methods that account for these differences, and integer linear programming (ILP) formulations to optimize test sets. Ph.D. dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/61584/1/smita_1.pd
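
    As a rough sketch of the signature idea (not AnSER itself), the Python fragment below packs 64 random simulation vectors into integer signatures, estimates a node's signal probability from its signature, and estimates its observability by re-simulating the fanout cone with the node's signature inverted. The example circuit and the raw gate error probability are illustrative assumptions.

        # Minimal sketch of signature-based testability estimation: a signature is the
        # bitwise result of K parallel random simulations; signal probability is the
        # fraction of 1s, and observability is estimated by flipping a node's signature
        # and re-simulating its fanout cone. Circuit and constants are assumptions.
        import random

        K = 64                                    # vectors packed into one signature
        MASK = (1 << K) - 1

        def rand_sig():
            return random.getrandbits(K)

        def ones(sig):                            # population count of a signature
            return bin(sig).count("1")

        # Tiny example circuit: out = (a AND b) OR c
        a, b, c = rand_sig(), rand_sig(), rand_sig()
        g1 = a & b
        out = g1 | c

        signal_prob_g1 = ones(g1) / K

        # Observability of g1: re-simulate its fanout cone with g1's signature inverted
        out_flipped = (g1 ^ MASK) | c
        observability_g1 = ones(out ^ out_flipped) / K   # vectors where the flip reaches out

        # First-order, logic-masking-only estimate for errors arriving at g1
        p_error_at_g1 = 1e-5                             # assumed raw gate error probability
        ser_contrib_g1 = p_error_at_g1 * observability_g1
        print(signal_prob_g1, observability_g1, ser_contrib_g1)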

    Testable Design for Positive Control Flipping Faults in Reversible Circuits

    High computational performance is a major concern in every computing system. Advances in present semiconductor fabrication processes make it possible to accommodate millions of gates per chip while also shrinking chip size. At the same time, increasingly complex circuit designs lead to high power dissipation and higher fault rates. Due to these difficulties, researchers have explored reversible logic circuits as an alternative route to low-power circuit design. Reversible logic is also widely applied in recent technology trends such as quantum computing. Verifying the correct functional behavior of these circuits is an essential requirement of circuit testing. This paper presents a testable design for k-CNOT-based circuits capable of diagnosing Positive Control Flipping Faults (PCFFs) in reversible circuits. The proposed work shows that a single test vector applied to the constructed design is sufficient to cover the PCFFs in the reversible circuit. Further, parity-bit operations are added to the testable circuit to produce a parity-test pattern that extracts the location of the faulty gate. Various reversible benchmark circuits are used in the experimental evaluation to establish the correctness of the proposed fault diagnosis technique, and a comparative analysis with existing work is also presented.
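
    The fault model can be made concrete with a small simulation, sketched below under the assumption that a PCFF turns one positive control into a negative control; the three-line circuit and the chosen fault site are illustrative and are not taken from the paper's testable-design construction.

        # Illustrative sketch only: simulate a small k-CNOT (Toffoli-family) circuit and a
        # positive-control flipping fault, modelled here as one positive control behaving
        # as a negative control. Circuit, fault site, and fault-model details are assumptions.
        from itertools import product

        # Each gate: (set of positive-control lines, target line)
        CIRCUIT = [({0, 1}, 2), ({1, 2}, 0), ({0}, 1)]

        def simulate(circuit, vec, fault=None):
            """fault = (gate_index, control_line): that positive control acts as negative."""
            state = list(vec)
            for gi, (controls, target) in enumerate(circuit):
                fire = all(
                    state[c] == (0 if fault == (gi, c) else 1)   # faulty control expects 0
                    for c in controls
                )
                if fire:
                    state[target] ^= 1
            return tuple(state)

        # Test vectors that detect the fault on gate 0, control line 1
        fault = (0, 1)
        tests = [v for v in product((0, 1), repeat=3)
                 if simulate(CIRCUIT, v) != simulate(CIRCUIT, v, fault)]
        print(tests)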

    Exploration of Majority Logic Based Designs for Arithmetic Circuits

    Since its inception, Moore's Law has been a reliable predictor of computational power. This steady increase in computational power has been due to the ability to fit increasing numbers of transistors on a single chip. A consequence of increasing the number of transistors is increased power consumption. The physical properties of CMOS technologies make this power wall unavoidable and will severely restrict future progress and applications. A potential solution to the problem of rising power demands is to investigate alternative low-power nanotechnologies for implementing logic circuits. The intrinsic properties of these emerging nanotechnologies make them inherently low power compared to current CMOS technologies. This thesis specifically highlights quantum-dot cellular automata (QCA) and nanomagnetic logic (NML) as two possible technologies. Designs in NML and QCA are explored for simple arithmetic units such as full adders and subtractors. A new multilayer 5-input majority gate design is proposed for use in NML. Designs of reversible adders are also proposed that are easily testable for unidirectional stuck-at faults.
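
    The majority-logic design style referred to above can be illustrated with the textbook three-majority-gate, two-inverter full adder; the short check below is only a behavioral sketch of that well-known construction, not a layout from the thesis.

        # Sketch of the majority-gate view of a full adder commonly used in QCA/NML work:
        # carry = M(a, b, cin) and sum = M(NOT carry, cin, M(a, b, NOT cin)).
        from itertools import product

        def maj(x, y, z):                 # 3-input majority gate
            return (x & y) | (y & z) | (x & z)

        for a, b, cin in product((0, 1), repeat=3):
            carry = maj(a, b, cin)
            s = maj(1 - carry, cin, maj(a, b, 1 - cin))
            assert carry == (a + b + cin) // 2
            assert s == (a + b + cin) % 2
        print("3-majority-gate full adder verified for all 8 input combinations")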

    Fault and Defect Tolerant Computer Architectures: Reliable Computing With Unreliable Devices

    This research addresses the design of a reliable computer from unreliable device technologies. A system architecture is developed for a fault and defect tolerant (FDT) computer. Trade-offs between different techniques are studied, and yield and hardware cost models are developed. Fault and defect tolerant designs are created for the processor and the cache memory. Simulation results for the content-addressable memory (CAM)-based cache show 90% yield with device failure probabilities of 3 × 10⁻⁶, three orders of magnitude better than non-fault-tolerant caches of the same size. The entire processor achieves 70% yield with device failure probabilities exceeding 10⁻⁶. The required hardware redundancy is approximately 15 times that of a non-fault-tolerant design. While this redundancy is larger than in current fault-tolerant designs, the architecture allows the use of devices far more failure-prone than silicon CMOS. As part of model development, an improved model is derived for NAND multiplexing; it is the first accurate model for small and medium amounts of redundancy. Previous models are extended to account for dependence between the inputs, producing more accurate results.
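
    For intuition about NAND multiplexing, the toy Monte Carlo sketch below simulates a single executive stage with bundle size n and device failure probability p; the parameters and the output-flip fault model are assumptions, and the sketch does not reproduce the dissertation's analytical model.

        # Toy Monte Carlo model of one von Neumann NAND-multiplexing executive stage.
        import random

        def nand_mux_stage(x_frac, y_frac, n, p):
            """Fraction of 1s on the output bundle given the input bundles' 1-fractions."""
            xs = [1 if random.random() < x_frac else 0 for _ in range(n)]
            ys = [1 if random.random() < y_frac else 0 for _ in range(n)]
            random.shuffle(ys)                      # random pairing of the two bundles
            out = []
            for x, y in zip(xs, ys):
                z = 1 - (x & y)                     # ideal NAND
                if random.random() < p:             # faulty device flips its output
                    z ^= 1
                out.append(z)
            return sum(out) / n

        # Both input bundles held near logic 1, so the ideal output is 0; measure how
        # large a fraction of output wires is erroneously high.
        n, p, trials = 100, 1e-3, 2000
        fracs = [nand_mux_stage(0.99, 0.99, n, p) for _ in range(trials)]
        print("mean fraction of erroneous output lines:", sum(fracs) / trials)
        print("worst case over trials:", max(fracs))

    With inputs carrying about 1% erroneous lines, the NAND stage roughly doubles that fraction at its output, which is why restorative stages and a threshold interpretation of the bundle are part of the multiplexing scheme.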

    Logic Synthesis for Established and Emerging Computing

    Logic synthesis is an enabling technology for realizing integrated computing systems, and it entails solving computationally intractable problems through a plurality of heuristic techniques. A recent push toward further formalization of synthesis problems has proven very useful both for solving some logic problems exactly, which is computationally feasible today for instances of limited size, and for creating new and more powerful heuristics based on problem decomposition. Moreover, technological advances including nanodevices, optical computing, and quantum and quantum cellular computing require new and specific synthesis flows to assess feasibility and scalability. This review highlights recent progress in logic synthesis and optimization, describing models, data structures, and algorithms, with specific emphasis on both design quality and emerging technologies. Example applications and results of novel techniques for established and emerging technologies are reported.

    THEORY, DESIGN, AND SIMULATION OF LINA: A PATH FORWARD FOR QCA-TYPE NANOELECTRONICS

    The past 50 years have seen exponential advances in digital integrated circuit technologies, which have enabled an explosion of uses and functionality. Although this rate of progress (generally referred to as "Moore's Law") cannot be sustained indefinitely, significant advances will remain possible even after current technologies reach fundamental limits. If these further advances are to be realized, however, nanoelectronic designs must be developed that provide significant improvements over the currently utilized complementary metal-oxide-semiconductor (CMOS) transistor-based integrated circuits. One promising nanoelectronics paradigm to fulfill this role is Quantum-dot Cellular Automata (QCA). QCA offers the possibility of THz switching and molecular-scale devices, and is particularly well suited to advanced logical constructs such as reversible logic and systolic arrays. These attributes make QCA an exciting prospect; however, no current fabrication technology allows reliable electronic QCA circuits that operate at room temperature, and no plausible path yet exists to fabricating QCA circuitry at the very large scale integration (VLSI) level. This has raised doubts about the viability of the paradigm and questions about its future as a suitable nanoelectronic replacement for CMOS. To resolve these issues, research was conducted into a new design that retains key attributes of QCA while also providing a means for near-term fabrication of reliable room-temperature circuits and a path forward to VLSI circuits. The result of this research, presented in this dissertation, is the Lattice-based Integrated-signal Nanocellular Automata (LINA) nanoelectronics paradigm. LINA designs are based on QCA and provide the same basic functionality as traditional QCA. LINA also retains the key attributes of THz switching, scalability to the molecular level, and the ability to use the advanced logical constructs that are central to QCA proposals. However, LINA designs also provide significant improvements over traditional QCA. For example, the continuous correction of faults enabled by LINA's integrated-signal approach improves reliability, allowing room-temperature operation with cells potentially as large as 20 nm and providing tolerance to layout, patterning, stray-charge, and stuck-at faults. In terms of fabrication, LINA's lattice-based structure allows precise relative placement through the self-assembly techniques seen in current nanoparticle research, and its wire and logic structures are large enough to be patterned with widely available photolithographic technologies. These aspects of the LINA designs, along with power, timing, and clocking results, have been verified using new and/or modified simulation tools developed specifically for this purpose. In summary, the LINA designs and results presented in this dissertation provide a path to the realization of QCA-type VLSI nanoelectronic circuitry and renew the viability of the paradigm to replace CMOS and advance computing technologies beyond the next decade.
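
    To make the redundancy-based reliability claim concrete, the generic calculation below gives the residual failure probability when a majority vote is taken over R independent redundant signal paths, each failing with probability q. This is ordinary redundancy arithmetic included only for intuition; it is not a model of LINA's actual integrated-signal mechanism or its cell-level physics, and the value of q is an assumption.

        # Generic majority-redundancy arithmetic (not LINA-specific): probability that a
        # majority of R redundant paths are fault-free when each fails independently with q.
        from math import comb

        def majority_ok(q, R):
            need = R // 2 + 1
            return sum(comb(R, k) * (1 - q) ** k * q ** (R - k) for k in range(need, R + 1))

        q = 0.01                                   # assumed per-path fault probability
        for R in (1, 3, 5, 9):
            print(R, f"{1 - majority_ok(q, R):.2e}")   # residual failure probability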

    Synthesis, testing and tolerance in reversible logic

    In recent years, reversible computing has established itself as a promising research area and emerging technology. This thesis focuses on three important areas of reversible logic, which is an area of reversible computing. First, this thesis proposes a transformation-based synthesis approach for realizing conservative reversible functions using SWAP and Fredkin gates, along with ten templates for optimizing SWAP- and Fredkin-gate-based reversible circuits. Second, this thesis proposes an approach for the design of online testable reversible circuits: a reversible circuit composed of NOT, CNOT and Toffoli gates can be made online testable by adding two sets of CNOT gates and a single parity line. Finally, this thesis proposes an approach to achieve fault tolerance in reversible circuits. A design for a 3-bit reversible majority voter circuit is presented; this voter can be used to build fault-tolerant reversible circuits.
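
    A property underlying the SWAP/Fredkin gate library is easy to check directly: the Fredkin (controlled-SWAP) gate is a bijection that preserves the number of 1s, which is what makes it a natural primitive for conservative reversible functions. The sketch below verifies this exhaustively; it is not the thesis's synthesis procedure, template set, or voter design.

        # Check that the Fredkin gate is reversible (bijective) and conservative
        # (preserves the number of 1s) over all 8 input patterns.
        from itertools import product

        def fredkin(c, x, y):
            """Controlled-SWAP: swap x and y when the control c is 1."""
            return (c, y, x) if c == 1 else (c, x, y)

        inputs = list(product((0, 1), repeat=3))
        outputs = [fredkin(*v) for v in inputs]

        assert len(set(outputs)) == len(inputs)                      # reversible
        assert all(sum(v) == sum(fredkin(*v)) for v in inputs)       # conservative
        print("Fredkin gate is reversible and conservative on all 8 input patterns")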