
    Variant X-Tree Clock Distribution Network and Its Performance Evaluations

    Toward fast and accurate architecture exploration in a hardware/software codesign flow

    Limitations and opportunities for wire length prediction in gigascale integration

    Wires have become a major bottleneck in current VLSI designs, and wire length prediction is therefore essential to overcome these bottlenecks. Wire length prediction is broadly classified into two types: macroscopic prediction, which is the prediction of the wire length distribution, and microscopic prediction, which is the prediction of individual wire lengths. The objective of this thesis is to develop a clear understanding of the limitations of both macroscopic and microscopic a priori (post-placement, pre-routing) wire length prediction, and thereby develop better wire length prediction models. Investigations carried out to understand the limitations of macroscopic prediction reveal that, in a given design, (i) the variability of the wire length distribution increases with length and (ii) using Rent's rule with a constant Rent's exponent p to calculate the terminal count of a given block size limits the accuracy of the results from a macroscopic model. Therefore, a new model for the parameter p is developed to more accurately reflect the terminal count of a given block size in placement, and from this a new, more accurate macroscopic model is built. In addition, a model to predict the variability is incorporated into the macroscopic model. Studies of the limitations of microscopic prediction reveal that (i) only a fraction of the wires in a given design are predictable, and these belong mostly to shorter nets with smaller degrees, and (ii) current microscopic prediction models are built on the assumption that a single metric can accurately predict the individual length of every wire in a design. In this thesis, an alternative microscopic model is developed for predicting the shorter wires, based on the hypothesis that multiple metrics influence the length of the wires. Three different metrics are developed and fitted into a heuristic classification-tree framework to provide a unified and more accurate microscopic model.
    Ph.D. Committee Chair: Dr. Jeff Davis; Committee Member: Dr. James D. Meindl; Committee Member: Dr. Paul Kohl; Committee Member: Dr. Scott Wills; Committee Member: Dr. Sung Kyu Li
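    As a rough illustration of the macroscopic argument above, the sketch below contrasts Rent's rule, T = t * B^p, evaluated with a constant exponent against a hypothetical block-size-dependent exponent p(B). The decaying form of p(B), the parameter values, and the function names are illustrative assumptions, not the model fitted in the thesis.

```python
import math

def terminals_constant_p(block_size, t=4.0, p=0.6):
    """Classic Rent's rule, T = t * B**p, with a constant exponent p.
    t (average terminals per gate) and p are illustrative values."""
    return t * block_size ** p

def terminals_variable_p(block_size, t=4.0, p0=0.75, decay=0.02):
    """Hypothetical block-size-dependent exponent p(B): here p falls
    slowly with log2(B). This functional form is an assumption made
    for demonstration, not the parameter-p model from the thesis."""
    p = max(0.0, p0 - decay * math.log2(block_size))
    return t * block_size ** p

for b in (16, 256, 4096, 65536):
    print(f"B={b:6d}  T(const p)={terminals_constant_p(b):9.1f}"
          f"  T(p(B))={terminals_variable_p(b):9.1f}")
```

    Letting the exponent track block size is what allows a macroscopic model to reflect the terminal counts of placed blocks more faithfully, which is the inaccuracy the thesis's new model for p targets.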

    Adaptive Latency Insensitive Protocols

    Latency-insensitive design copes with the excessive delays typical of global wires in current and future IC technologies. It achieves this by encapsulating synchronous logic blocks in wrappers that communicate through a latency-insensitive protocol (LIP) over pipelined interconnects. Previously proposed solutions suffer either an excessive throughput penalty or a lack of generality. This article presents an adaptive LIP that outperforms previous static implementations, as demonstrated by two relevant cases, a microprocessor and an MPEG encoder, whose components we made insensitive to the latencies of their interconnections through a newly developed wrapper. We also present an informal exposition of the theoretical basis of adaptive LIPs, as well as implementation details.
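    To make the wrapper machinery concrete, here is a minimal software sketch of a relay station, the buffering element that lets a long wire be pipelined while preserving an LIP's valid/stop handshake. The class name, the two-entry capacity, and the void-token convention follow the general latency-insensitive design literature; this is not the article's implementation, and the adaptive behaviour of the proposed protocol is not modelled.

```python
from collections import deque

class RelayStation:
    """Minimal sketch of a latency-insensitive relay station: a
    two-entry buffer on a pipelined wire that preserves the
    valid/stop (backpressure) handshake without losing tokens."""
    def __init__(self):
        self.buf = deque()

    def can_accept(self):
        # a full buffer asserts 'stop' toward the upstream sender
        return len(self.buf) < 2

    def push(self, item):
        assert self.can_accept()
        self.buf.append(item)

    def pop(self):
        # with no valid data buffered, emit a void (stall) token
        return self.buf.popleft() if self.buf else None

rs = RelayStation()
stream = ["a", "b", None, "c"]           # None = no valid data this cycle
for cycle, data in enumerate(stream):
    if data is not None and rs.can_accept():
        rs.push(data)
    consumer_ready = cycle != 1          # the consumer stalls on cycle 1
    token = rs.pop() if consumer_ready else None
    print(f"cycle {cycle}: consumer received {token!r}")
```

    The trace shows token "b" held in the relay station during the consumer's stall cycle and delivered a cycle later, which is exactly the elasticity that lets a wrapped block tolerate arbitrary interconnect latencies.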

    Statistical circuit simulations - from ‘atomistic’ compact models to statistical standard cell characterisation

    This thesis describes the development and application of statistical circuit simulation methodologies to analyse digital circuits subject to intrinsic parameter fluctuations. The specific nature of intrinsic parameter fluctuations is discussed, and we explain the crucial importance to the semiconductor industry of developing design tools that accurately account for their effects. Current work in the area is reviewed, and three important factors are made clear: any statistical circuit simulation methodology must be based on physically correct, predictive models of device variability; the statistical compact models describing device operation must be characterised for accurate transient analysis of circuits; and the analysis must be carried out on realistic circuit components. Improving on previous efforts in the field, we present a statistical circuit simulation methodology that accounts for all three of these factors. The established 3-D Glasgow atomistic simulator is employed to predict electrical characteristics for devices aimed at digital circuit applications, with gate lengths from 35 nm to 13 nm. Using these electrical characteristics, BSIM4 compact models are extracted, and their accuracy in transient analysis with SPICE is validated against well-characterised mixed-mode TCAD simulation results for 35 nm devices. Static d.c. simulations are performed to test the methodology, and a useful analytic model predicting how hard logic faults limit CMOS supply voltage scaling is derived as part of this work. Using our toolset, the effect of the statistical variability introduced by random discrete dopants on the dynamic behaviour of inverters is studied in detail. As devices are scaled, the dynamic noise margin variation of an inverter increases, while a higher output load or input slew rate improves both the noise margins and their variation. Intrinsic delay variation based on the CV/I delay metric is also compared using the ION and IEFF current definitions, and the best estimate is obtained when both ION and input transition time variations are considered. The critical delay distribution of a path is also investigated and shown to be non-Gaussian. Finally, the impact of the cell input slew rate definition on the accuracy of inverter cell timing characterisation in NLDM format is investigated.
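    As a toy illustration of the delay-metric step, the sketch below Monte-Carlo-samples threshold-voltage shifts, a crude stand-in for random-discrete-dopant variability, and propagates them through the CV/I delay metric. The square-law current model, the Gaussian Vth spread, and every numeric value are placeholders chosen for demonstration; the thesis instead derives device behaviour from 3-D 'atomistic' simulation and extracted BSIM4 models.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder parameters for a scaled inverter; none of these values
# come from the thesis -- they only make the distribution visible.
C_LOAD    = 1e-15   # F, lumped output load
VDD       = 0.9     # V, supply voltage
VTH0      = 0.30    # V, nominal threshold voltage
SIGMA_VTH = 0.04    # V, std. dev. modelling random discrete dopants
K         = 5e-4    # A/V^2, toy square-law drive-strength factor

def i_on(vth):
    """On-current under a simple square-law device model (an
    assumption; the thesis uses atomistically calibrated BSIM4)."""
    return K * (VDD - vth) ** 2

vth = VTH0 + SIGMA_VTH * rng.standard_normal(100_000)
delay = C_LOAD * VDD / i_on(vth)         # the CV/I delay metric

z = (delay - delay.mean()) / delay.std()
print(f"mean delay = {delay.mean() * 1e12:.2f} ps")
print(f"sigma/mean = {delay.std() / delay.mean():.3f}")
# positive skewness: the delay distribution is right-skewed, echoing
# the thesis's observation that path delay distributions are non-Gaussian
print(f"skewness   = {(z ** 3).mean():.2f}")
```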

    Fault and Defect Tolerant Computer Architectures: Reliable Computing With Unreliable Devices

    This research addresses the design of a reliable computer from unreliable device technologies. A system architecture is developed for a fault and defect tolerant (FDT) computer. Trade-offs between different techniques are studied, and yield and hardware cost models are developed. Fault and defect tolerant designs are created for the processor and the cache memory. Simulation results for the content-addressable memory (CAM)-based cache show 90% yield with device failure probabilities of 3×10⁻⁶, three orders of magnitude better than non-fault-tolerant caches of the same size. The entire processor achieves 70% yield with device failure probabilities exceeding 10⁻⁶. The required hardware redundancy is approximately 15 times that of a non-fault-tolerant design. While larger than current FT designs, this architecture allows the use of devices much more likely to fail than silicon CMOS. As part of the model development, an improved model is derived for NAND multiplexing. It is the first accurate model for small and medium amounts of redundancy. Previous models are extended to account for dependence between the inputs and produce more accurate results.
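    For context on the NAND multiplexing result, the sketch below Monte-Carlo-simulates one multiplexed NAND stage: two bundles of n redundant wires feed n NAND gates, and each gate inverts its output with failure probability eps. The bundle sizes, eps, and function names are illustrative, and, importantly, the two input bundles here are statistically independent, precisely the simplifying assumption that the improved model replaces by accounting for input dependence.

```python
import numpy as np

rng = np.random.default_rng(0)

def nand_stage(x_frac, n, eps, trials, rng):
    """Monte Carlo model of one NAND-multiplexing stage: two bundles
    of n wires, each wire stimulated ('1') with probability x_frac,
    and gates that invert their output with probability eps. Returns
    the stimulated fraction of the output bundle for each trial."""
    a = rng.random((trials, n)) < x_frac
    b = rng.random((trials, n)) < x_frac
    out = ~(a & b)                        # ideal NAND of paired wires
    out ^= rng.random((trials, n)) < eps  # independent gate faults
    return out.mean(axis=1)

# For small bundles the output fraction is discrete and widely spread,
# which is why large-N (Gaussian) approximations break down at the
# small and medium redundancy levels the improved model addresses.
for n in (5, 20, 500):
    f = nand_stage(x_frac=0.9, n=n, eps=0.01, trials=50_000, rng=rng)
    print(f"n={n:4d}  mean={f.mean():.4f}  std={f.std():.4f}")
```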