Evaluating the reliability of NAND multiplexing with PRISM
Probabilistic model checking is a formal verification technique for analyzing the reliability and performance of systems exhibiting stochastic behavior. In this paper, we demonstrate the applicability of this approach, and in particular the probabilistic model checking tool PRISM, to the evaluation of reliability and redundancy of defect-tolerant systems in the field of computer-aided design. We illustrate the technique with an example due to von Neumann, namely NAND multiplexing. We show how, having constructed a model of a defect-tolerant system incorporating probabilistic assumptions about its defects, it is straightforward to compute a range of reliability measures and investigate how they are affected by slight variations in the behavior of the system. This allows a designer to evaluate, for example, the tradeoff between redundancy and reliability in the design. We also highlight errors in analytically computed reliability bounds recently published for the same case study.
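To make the multiplexing construction concrete, here is a minimal Monte Carlo sketch of one von Neumann NAND multiplexing stage, not the PRISM model analyzed in the paper: two bundles of N wires are randomly paired, each pair drives a NAND unit, and each gate output is inverted with probability eps (the classic symmetric gate-fault assumption). The bundle size, fault probability, and all-1 input stimulus are illustrative choices.

```python
import random

def nand_stage(x, y, eps, rng):
    """One NAND multiplexing stage: randomly permute bundle y, NAND each
    pair of wires, and invert each output with probability eps."""
    perm = rng.sample(range(len(y)), len(y))
    out = []
    for xi, j in zip(x, perm):
        v = 1 - (xi & y[j])              # ideal NAND
        if rng.random() < eps:           # a faulty gate inverts its output
            v ^= 1
        out.append(v)
    return out

def p_correct_majority(n=60, eps=0.02, trials=5000, seed=1):
    """Probability that a majority of the output bundle is correct when
    both input bundles are all-1 (so the correct NAND output is 0)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        out = nand_stage([1] * n, [1] * n, eps, rng)
        ok += sum(out) < n / 2           # fewer than half the wires wrong
    return ok / trials

print(p_correct_majority())              # close to 1 for small eps
```

Sweeping n and eps in such a sketch mirrors, in miniature, the redundancy-versus-reliability tradeoff the paper explores with PRISM.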
A simple DNA gate motif for synthesizing large-scale circuits
The prospects of programming molecular systems to perform complex autonomous tasks have motivated research into the design of synthetic biochemical circuits. Of particular interest to us are cell-free nucleic acid systems that exploit non-covalent hybridization and strand-displacement reactions to create cascades that implement digital and analogue circuits. To date, circuits involving at most tens of gates have been demonstrated experimentally. Here, we propose a simple DNA gate architecture that appears suitable for practical synthesis of large-scale circuits involving possibly thousands of gates.
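As a toy illustration of how strand-displacement reactions compose into cascades (a generic mass-action sketch, not the gate motif proposed in the paper), consider two displacement reactions in series integrated by forward Euler; the rate constant and concentrations are assumed, typical-order values.

```python
def simulate_cascade(k=1e5, c0=1e-8, dt=0.1, t_end=3600.0):
    """Two strand-displacement reactions in series under mass-action
    kinetics, integrated by forward Euler:
        input + gate1  -> signal + waste   (rate k * [input] * [gate1])
        signal + gate2 -> output + waste   (rate k * [signal] * [gate2])
    k is a typical displacement rate constant (/M/s); c0 is an initial
    concentration (M). All values are illustrative assumptions."""
    inp, g1, sig, g2, out = c0, c0, 0.0, c0, 0.0
    t = 0.0
    while t < t_end:
        r1 = k * inp * g1
        r2 = k * sig * g2
        inp -= r1 * dt
        g1 -= r1 * dt
        sig += (r1 - r2) * dt
        g2 -= r2 * dt
        out += r2 * dt
        t += dt
    return out

print(f"output after 1 h: {simulate_cascade():.2e} M")
```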
Design, Analysis and Test of Logic Circuits under Uncertainty.
Integrated circuits are increasingly susceptible to uncertainty caused by soft
errors, inherently probabilistic devices, and manufacturing variability. As device technologies
scale, these effects become detrimental to circuit reliability. In order to address
this, we develop methods for analyzing, designing, and testing circuits subject to probabilistic
effects. Our main contributions are: 1) a fast, soft-error rate (SER) analyzer
that uses functional-simulation signatures to capture error effects, 2) novel design techniques
that improve reliability using little area and performance overhead, 3) a matrix-based
reliability-analysis framework that captures many types of probabilistic faults, and
4) test-generation/compaction methods aimed at probabilistic faults in logic circuits.
SER analysis must account for the main error-masking mechanisms in ICs: logic,
timing, and electrical masking. We relate logic masking to node testability of the circuit
and utilize functional-simulation signatures, i.e., partial truth tables, to efficiently compute
testability (signal probability and observability). To account for timing masking, we compute
error-latching windows (ELWs) from timing analysis information. Electrical masking
is incorporated into our estimates through derating factors for gate error probabilities. The
SER of a circuit is computed by combining the effects of all three masking mechanisms
within our SER analyzer called AnSER.
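The signature-based computation of testability can be sketched as follows. This is a minimal illustration of the idea only (bit-parallel functional simulation, signal probability as the fraction of 1s in a signature, observability by flipping a node and counting output changes), on a made-up two-gate circuit, not AnSER's actual implementation.

```python
import random

W = 64  # number of random input vectors packed into one integer signature

rng = random.Random(1)
a, b, c = (rng.getrandbits(W) for _ in range(3))

# Tiny example circuit: f = (a AND b) OR c, with internal node g = a AND b
g = a & b
f = g | c

def signal_probability(sig):
    """Fraction of simulated vectors on which the node evaluates to 1."""
    return bin(sig).count("1") / W

# Observability of g: flip g on every vector and count how often f changes.
mask = (1 << W) - 1
f_flipped = (g ^ mask) | c
observability_g = bin(f ^ f_flipped).count("1") / W

print("P(g=1)          =", signal_probability(g))   # ~0.25
print("observability(g) =", observability_g)        # ~0.5 (changes iff c=0)
```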
Using AnSER, we develop several low-overhead techniques that increase reliability,
including: 1) an SER-aware design method that uses redundancy already present within
the circuit, 2) a technique that resynthesizes small logic windows to improve area and
reliability, and 3) a post-placement gate-relocation technique that increases timing masking by decreasing ELWs.
We develop the probabilistic transfer matrix (PTM) modeling framework to analyze
effects beyond soft errors. PTMs are compressed into algebraic decision diagrams (ADDs)
to improve computational efficiency. Several ADD algorithms are developed to extract
reliability and error susceptibility information from PTMs representing circuits.
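A small sketch of the PTM algebra may help here: a gate's PTM maps input combinations (rows) to output distributions (columns), serial composition is matrix multiplication, and parallel composition is a Kronecker product. The ADD compression step is not shown; numpy and the 1% flip probability are assumptions for illustration.

```python
import numpy as np

def ptm_nand(p):
    """PTM of a 2-input NAND whose output flips with probability p.
    Rows: inputs 00, 01, 10, 11; columns: output 0, 1."""
    ideal = np.array([[0, 1], [0, 1], [0, 1], [1, 0]], dtype=float)
    flip = np.array([[1 - p, p], [p, 1 - p]])  # output error matrix
    return ideal @ flip                        # serial composition

# Parallel composition of two independent NAND gates: Kronecker product.
p = 0.01
two_nands = np.kron(ptm_nand(p), ptm_nand(p))
print(two_nands.shape)  # (16, 4): 4 input bits -> 2 output bits
```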
We propose new algorithms for circuit testing under probabilistic faults, which require
a reformulation of existing test techniques. For instance, a test vector may need to be
repeated many times to detect a fault. Also, different vectors detect the same fault with
different probabilities. We develop test-generation methods that account for these differences, and integer linear programming (ILP) formulations to optimize test sets.
Ph.D. dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/61584/1/smita_1.pd
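The repetition requirement mentioned in the abstract has a simple closed form behind it: n applications of a vector that detects a fault with probability p per application succeed with probability 1 - (1 - p)^n. The sketch below computes the minimum n for a target confidence; it illustrates this bound only, not the thesis's ILP-based test-set optimization.

```python
import math

def repetitions(p_detect, confidence=0.99):
    """Minimum number of applications n of a test vector so that a fault
    detected with per-application probability p_detect is caught with the
    given confidence: solve 1 - (1 - p)^n >= confidence for n."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_detect))

print(repetitions(0.1))   # ~44 applications for 99% confidence
```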
Reliable chip design from low powered unreliable components
The pace of technological improvement in the semiconductor market is driven by Moore's Law, which enables chip transistor density to double every two years. Transistors continue to decline in cost and size, but power density increases. Continued transistor scaling under the extremely low power constraints of modern Very Large Scale Integrated (VLSI) chips can potentially negate the benefits of technology shrinking because of reliability issues. As VLSI technology scales into the nanoscale regime, fundamental physical limits are approached, and higher levels of variability, performance degradation, and higher rates of manufacturing defects are experienced. Soft errors, which traditionally affected only memories, now also degrade logic circuit reliability. A solution to these limitations is to integrate reliability assessment techniques into the Integrated Circuit (IC) design flow. This thesis investigates four aspects of reliability-driven circuit design: a) reliability estimation; b) reliability optimization; c) fault-tolerant techniques; and d) delay degradation analysis.
To guide reliability-driven synthesis and optimization of combinational circuits, a highly accurate probability-based reliability estimation methodology, christened the Conditional Probabilistic Error Propagation (CPEP) algorithm, is developed to compute the impact of gate failures on the circuit output. CPEP guides the proposed rewriting-based logic optimization algorithm employing local transformations. The main idea behind this methodology is to replace parts of the circuit with functionally equivalent but more reliable counterparts chosen from a precomputed subset of Negation-Permutation-Negation (NPN) classes of 4-variable functions. Cut enumeration and Boolean matching, driven by a reliability-aware optimization algorithm, are used to identify the best possible replacement candidates. Experiments on a set of MCNC benchmark circuits and 8051 functional microcontroller units indicate that the proposed framework can achieve up to 75% reduction in output error probability. On average, about 14% SER reduction is obtained at the expense of a very low area overhead of 6.57%, which results in 13.52% higher power consumption.
The next contribution of the research is a novel methodology for designing fault-tolerant circuitry by employing error correction codes, known as the Codeword Prediction Encoder (CPE). Traditional fault-tolerant techniques analyze circuit reliability from a static point of view, neglecting dynamic errors. In the context of communication and storage, the study of novel methods for reliable data transmission over unreliable hardware is an increasing priority. The idea of CPE is adapted from the field of forward error correction for telecommunications, focusing on both encoding aspects and error correction capabilities. The proposed Augmented Encoding solution computes an augmented codeword that contains both the codeword to be transmitted on the channel and extra parity bits. A Computer Aided Development (CAD) framework known as the CPE simulator is developed, providing a unified platform that comprises a novel encoder and fault-tolerant LDPC decoders. Experiments on a set of encoders with different coding rates and different decoders indicate that the proposed framework can correct all errors under specific scenarios. On average, about a 1000-fold improvement in Soft Error Rate (SER) reduction is achieved.
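The CPEP algorithm itself is not reproduced here, but the quantity it estimates, the probability that gate-level faults corrupt the circuit output, can be illustrated with a Monte Carlo sketch over a hypothetical three-NAND circuit in which each gate output is inverted with probability eps. Circuit, fault model, and parameters are all illustrative assumptions.

```python
import random

def nand(a, b):
    return 1 - (a & b)

def output(a, b, c, flips):
    """Hypothetical three-NAND circuit; flips[i] = 1 inverts gate i."""
    g0 = nand(a, b) ^ flips[0]
    g1 = nand(b, c) ^ flips[1]
    return nand(g0, g1) ^ flips[2]

def output_error_probability(eps, trials=100_000, seed=0):
    """Estimate P(faulty output != fault-free output) under random inputs,
    with each gate failing independently with probability eps."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        a, b, c = rng.getrandbits(1), rng.getrandbits(1), rng.getrandbits(1)
        golden = output(a, b, c, (0, 0, 0))
        flips = tuple(int(rng.random() < eps) for _ in range(3))
        errors += golden != output(a, b, c, flips)
    return errors / trials

print(output_error_probability(0.01))
```

An exact propagation algorithm such as CPEP replaces this sampling with conditional probability calculations along the circuit, but the estimated quantity is the same.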
The last part of the research is an Inverse Gaussian Distribution (IGD) based delay model applicable to both combinational and sequential elements of sub-powered circuits. The Probability Density Function (PDF) based delay model accurately captures the delay behavior of all the basic gates in the library database. The IGD model employs these necessary parameters, and its delay estimation accuracy is demonstrated by evaluating multiple circuits. Experimental results indicate that the IGD-based approach closely matches HSPICE Monte Carlo simulation results, with an average error of less than 1.9% and 1.2% for the 8-bit Ripple Carry Adder (RCA) and the 8-bit De-Multiplexer (DEMUX) and Multiplexer (MUX), respectively.
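A minimal sketch of how an IGD delay model can be exercised: the inverse Gaussian density with mean mu and shape lam, plus the standard Michael-Schucany-Haas sampler, summed over a hypothetical 8-stage path. The per-gate parameters are illustrative placeholders, not values fitted in the thesis.

```python
import math
import random

def igd_pdf(t, mu, lam):
    """Inverse Gaussian density with mean mu and shape parameter lam."""
    return math.sqrt(lam / (2 * math.pi * t ** 3)) * \
        math.exp(-lam * (t - mu) ** 2 / (2 * mu ** 2 * t))

def igd_sample(mu, lam, rng):
    """Michael-Schucany-Haas method for sampling an IG(mu, lam) variate."""
    y = rng.gauss(0, 1) ** 2
    x = mu + mu * mu * y / (2 * lam) - \
        (mu / (2 * lam)) * math.sqrt(4 * mu * lam * y + mu * mu * y * y)
    return x if rng.random() <= mu / (mu + x) else mu * mu / x

# Sum per-gate IGD delays along a hypothetical 8-stage path and compare the
# empirical mean against the analytic value stages * mu.
rng = random.Random(42)
mu, lam, stages = 50e-12, 200e-12, 8   # illustrative per-gate values (s)
samples = [sum(igd_sample(mu, lam, rng) for _ in range(stages))
           for _ in range(10_000)]
print(sum(samples) / len(samples), stages * mu)
```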
ENHANCEMENT OF MARKOV RANDOM FIELD MECHANISM TO ACHIEVE FAULT-TOLERANCE IN NANOSCALE CIRCUIT DESIGN
As the MOSFET dimensions scale down towards the nanoscale level, the reliability of
circuits based on these devices decreases. Hence, designing reliable systems using
these nano-devices is becoming challenging. Therefore, a mechanism has to be
devised that can make the nanoscale systems perform reliably using unreliable circuit
components. The solution is fault-tolerant circuit design. Markov Random Field
(MRF) is an effective approach that achieves fault-tolerance in integrated circuit
design. The previous research on this technique suffers from limitations at the design,
simulation and implementation levels. As improvements, the MRF fault-tolerance
rules have been validated for a practical circuit example. The simulation framework is
extended from thermal to a combination of thermal and random telegraph signal
(RTS) noise sources to provide a more rigorous noise environment for the simulation
of circuits built on nanoscale technologies. Moreover, an architecture-level improvement has been proposed in the design of previous MRF gates. The redesigned MRF is termed Improved-MRF.
The CMOS, MRF and Improved-MRF designs were simulated under highly noisy inputs. On the basis of simulations conducted for several test circuits, Improved-MRF circuits are found to be 400 times, whereas MRF circuits are only 10 times, more noise-tolerant than their CMOS alternatives. The transistor count, on the other hand, increases by factors of 9 and 15 for MRF and Improved-MRF respectively, as compared to CMOS. Therefore, in order to capture the trade-off between reliability and the area overhead required to obtain a fault-tolerant circuit, a novel parameter called the 'Reliable Area Index' (RAI) is introduced in this research work. RAI is around 1.3 times higher for MRF and 40 times higher for Improved-MRF than for the CMOS design, which makes Improved-MRF still about 30 times more efficient than MRF in terms of maintaining a suitable trade-off between reliability and the area consumption of the circuit.
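The core of the MRF formulation can be shown in a short sketch (generic MRF logic, not the Improved-MRF design): each gate contributes a clique energy that is zero for input/output combinations consistent with its logic function and positive otherwise, so the Gibbs distribution exp(-U/kT)/Z concentrates probability on valid states as the effective temperature falls. The energy penalty of 1 and the kT values are illustrative.

```python
import itertools
import math

def nand_energy(a, b, out):
    """Clique energy: 0 for states consistent with out = NAND(a, b),
    1 (an assumed penalty) for invalid states."""
    return 0.0 if out == 1 - (a & b) else 1.0

def state_probabilities(kT):
    """Gibbs distribution over all (a, b, out) states of one NAND node."""
    states = list(itertools.product((0, 1), repeat=3))
    weights = [math.exp(-nand_energy(*s) / kT) for s in states]
    z = sum(weights)
    return {s: w / z for s, w in zip(states, weights)}

# Lower kT concentrates probability on the four valid NAND states, which is
# the sense in which the MRF formulation tolerates noise.
for kT in (1.0, 0.25, 0.05):
    p_valid = sum(p for s, p in state_probabilities(kT).items()
                  if s[2] == 1 - (s[0] & s[1]))
    print(f"kT={kT}: P(valid state) = {p_valid:.3f}")
```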
Research on failure free systems final report
Failure-free systems studies: integrated circuits, majority-voted redundancy, and self-repairing systems.
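For reference, the standard reliability expression for majority-voted (triple modular) redundancy with a perfect voter, one of the topics this report covers, is R = 3r^2 - 2r^3 for module reliability r (at least two of three copies must work). A one-function sketch, not taken from the report itself:

```python
def tmr_reliability(r):
    """Probability a majority-voted triplicated module is correct,
    assuming a perfect voter and independent module reliability r."""
    return 3 * r ** 2 - 2 * r ** 3

for r in (0.9, 0.95, 0.99):
    print(r, "->", round(tmr_reliability(r), 6))
```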