
    Fast high-order variation-aware IC interconnect analysis

    Interconnects constitute a dominant source of circuit delay in modern chip designs. Variations of critical dimensions in modern VLSI technologies lead to variability in interconnect performance that must be fully accounted for in timing verification. However, handling a multitude of inter-die/intra-die variations and assessing their impacts on circuit performance can dramatically complicate the timing analysis. In this thesis, three practical interconnect delay and slew analysis methods are presented to facilitate efficient evaluation of wire performance variability. The first method, described in detail in Chapter III, harnesses a collection of computationally efficient procedures and closed-form formulas, so that process variations are directly mapped into the variability of the output delay and slew. This method provides closed-form formulas for the output delay and slew at any sink node of an interconnect net, fully parameterized in the process variations. The second method is based on adjoint sensitivity analysis and a driving-point model. It constructs the driving-point model of the driver that drives the interconnect net using adjoint sensitivity analysis; this model is then propagated through the interconnect network by the first method to obtain the closed-form formulas of the output delay and slew. The third method is a generalized second-order adjoint sensitivity analysis, whose mathematical derivation is given in Chapter V. Its theoretical value is that it not only handles this particular variational interconnect delay and slew analysis, but also provides an avenue for automatic linear network analysis and optimization. The proposed methods not only provide statistical performance evaluations of the interconnect network under analysis but also produce delay and slew expressions parameterized in the underlying process variations in a quadratic parametric form. Experimental results show that superior accuracy can be achieved by the proposed methods.
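
    The abstract states neither the closed-form expressions nor the circuit model, so the following sketch is only an illustration, on a toy three-segment Elmore RC line with made-up sensitivities to two hypothetical process parameters (width deviation dW and thickness deviation dT), of what a quadratic parametric delay form looks like and how one can be fitted around the nominal corner.

        # Illustrative sketch (not the thesis method): fit a second-order
        # parametric model of an Elmore-style wire delay in two process parameters.
        import numpy as np

        def elmore_delay(dW, dT):
            """Toy 3-segment RC line; R and C perturbed linearly by dW, dT (assumed sensitivities)."""
            R = np.array([10.0, 10.0, 10.0]) * (1 - 0.8 * dW)                 # ohms
            C = np.array([5e-15, 5e-15, 5e-15]) * (1 + 0.5 * dW + 0.3 * dT)   # farads
            # Elmore delay at the far-end sink: sum over nodes of (upstream R) * (node C)
            return sum(R[:i + 1].sum() * C[i] for i in range(len(R)))

        # Fit delay ~ d0 + g.p + p'Qp around the nominal corner by finite differences.
        h = 0.01
        d0 = elmore_delay(0.0, 0.0)
        g = np.array([(elmore_delay(h, 0) - elmore_delay(-h, 0)) / (2 * h),
                      (elmore_delay(0, h) - elmore_delay(0, -h)) / (2 * h)])
        Q = np.zeros((2, 2))
        Q[0, 0] = (elmore_delay(h, 0) - 2 * d0 + elmore_delay(-h, 0)) / (2 * h ** 2)
        Q[1, 1] = (elmore_delay(0, h) - 2 * d0 + elmore_delay(0, -h)) / (2 * h ** 2)
        Q[0, 1] = Q[1, 0] = (elmore_delay(h, h) - elmore_delay(h, -h)
                             - elmore_delay(-h, h) + elmore_delay(-h, -h)) / (8 * h ** 2)

        p = np.array([0.05, -0.02])          # one sample of the process parameters
        print(d0 + g @ p + p @ Q @ p)        # quadratic parametric delay estimate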

    Efficient Monte Carlo Based Methods for Variability Aware Analysis and Optimization of Digital Circuits.

    Process variability is of increasing concern in modern nanometer-scale CMOS. The suitability of Monte Carlo based algorithms for efficient analysis and optimization of digital circuits under variability is explored in this work. Random-sampling Monte Carlo techniques incur a high computational cost, because of the large sample size required to achieve the target accuracy. This motivates the need for intelligent sample-selection techniques to reduce the number of samples. As these techniques depend on information about the system under analysis, they must be tailored to the specific application context. We propose efficient smart-sampling techniques for timing and leakage power consumption analysis of digital circuits. For timing analysis, we show that the proposed method requires 23.8X fewer samples on average to achieve accuracy comparable to a random sampling approach, for the benchmark circuits studied. It is further illustrated that the parallelism available in such techniques can be exploited using parallel machines, especially Graphics Processing Units; we show that SH-QMC implemented on multiple GPUs is twice as fast as a single STA run on a CPU for the benchmark circuits considered. Next we study the possibility of using such information from statistical analysis to optimize digital circuits under variability, for example to achieve minimum silicon area through gate sizing while meeting a timing constraint. Though several techniques to optimize circuits have been proposed in the literature, it is not clear how much of the gain in these approaches comes specifically from the use of statistical information. Therefore, an effective lower-bound computation technique is proposed to enable efficient comparison of statistical design optimization techniques. It is shown that even techniques which use only limited statistical information can achieve results within 10% of the proposed lower bound. We conclude that future optimization research should shift focus from using more statistical information to achieving more efficiency and parallelism to obtain speed-ups. Ph.D. Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78936/1/tvvin_1.pd
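
    The SH-QMC algorithm itself is not spelled out in this abstract; the sketch below only contrasts plain random sampling with a scrambled Sobol (quasi-Monte Carlo) sample on a toy four-gate path delay, to illustrate the kind of intelligent sample selection the work refers to. The path model and all numbers are hypothetical; scipy >= 1.7 is assumed for scipy.stats.qmc.

        # Sketch only: plain Monte Carlo vs. scrambled-Sobol sampling for a toy
        # statistical timing problem (not the SH-QMC flow of the thesis).
        import numpy as np
        from scipy.stats import norm, qmc

        def path_delay(x):
            """Toy 4-gate path: nominal gate delays plus per-gate variation (ps)."""
            nominal = np.array([50.0, 40.0, 60.0, 30.0])
            sens = np.array([5.0, 4.0, 6.0, 3.0])   # ps per sigma of variation
            return (nominal + sens * x).sum(axis=1)

        n, d = 1024, 4
        rng = np.random.default_rng(0)

        mc = path_delay(rng.standard_normal((n, d)))               # random sampling
        sobol = qmc.Sobol(d=d, scramble=True, seed=0).random(n)    # low-discrepancy points in [0,1)^d
        qmc_delays = path_delay(norm.ppf(sobol))                   # map to standard-normal variations

        for name, sample in (("MC ", mc), ("QMC", qmc_delays)):
            print(name, "mean %.1f ps, 99th percentile %.1f ps" % (sample.mean(), np.percentile(sample, 99)))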

    Multi-objective Digital VLSI Design Optimisation

    Modern VLSI design complexity and density have been increasing exponentially over the past 50 years, recently reaching a stage that allows heterogeneous, many-core systems and numerous functions to be integrated into a tiny silicon die. These advancements have revealed intrinsic physical limits of process technologies in advanced silicon technology nodes. Designers and EDA vendors have to handle these challenges, which may otherwise result in inferior design quality, even failures, and lower design yields under time-to-market pressure. Multiple or many design objectives and constraints emerge during the design process and often need to be dealt with simultaneously. Multi-objective evolutionary algorithms show flexible capabilities in maintaining multiple variable components and factors in uncertain environments. The VLSI design process involves a large number of available parameters, both from designs and from EDA tools. This provides many potential optimisation avenues where evolutionary algorithms can excel. This PhD work investigates the application of evolutionary techniques for digital VLSI design optimisation. Automated multi-objective optimisation frameworks, compatible with industrial design flows and foundry technologies, are proposed to improve solution performance, expand feasible design space, and handle complex physical floorplan constraints through tuning designs at gate level. Methodologies for enriching standard cell libraries with additional drive strengths are also introduced to cooperate with the multi-objective optimisation frameworks, e.g., through subsequent hill-climbing, providing a richer pool of solutions optimised for different trade-offs. The experiments of this thesis demonstrate that multi-objective evolutionary algorithms, derived from biological inspirations, can assist the digital VLSI design process, in an industrial design context, to search more efficiently for well-balanced trade-off solutions as well as optimised design-space coverage. The expanded drive granularity of standard cells can push the performance of silicon technologies by offering improved solutions regarding critical objectives. The achieved optimisation results deliver better trade-off solutions regarding power, performance and area metrics than using standard EDA tools alone. This has been shown not only for a single circuit solution but also across the entire standard-tool-produced design space.
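
    The thesis framework itself is not reproduced here; as a minimal illustration of the multi-objective idea it builds on, the sketch below extracts the non-dominated (Pareto) set from a set of hypothetical candidate designs scored on power, delay and area. A full evolutionary flow such as NSGA-II would additionally use crowding-distance selection and variation operators to evolve this set over generations.

        # Minimal sketch of Pareto (non-dominated) filtering for power/performance/area
        # trade-offs; candidate data are made up, not taken from the thesis.
        import numpy as np

        def pareto_front(points):
            """Indices of non-dominated rows (all objectives to be minimised)."""
            keep = []
            for i, p in enumerate(points):
                dominated = any(np.all(q <= p) and np.any(q < p)
                                for j, q in enumerate(points) if j != i)
                if not dominated:
                    keep.append(i)
            return keep

        rng = np.random.default_rng(1)
        # Hypothetical candidate designs: columns = (power mW, delay ns, area um^2)
        candidates = rng.uniform([1.0, 0.5, 100.0], [5.0, 2.0, 500.0], size=(200, 3))
        front = pareto_front(candidates)
        print(f"{len(front)} non-dominated trade-off designs out of {len(candidates)}")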

    Study of spin-scan imaging for outer planets missions

    The constraints imposed on the Outer Planet Missions (OPM) imager design are of critical importance. Imager system modeling analyses define important parameters and systematic means for trade-offs applied to specific Jupiter orbiter missions. Possible image sequence plans for Jupiter missions are discussed in detail, considering a series of orbits that allows repeated near encounters with three of the Jovian satellites. The data handling involved in the image processing is discussed, and it is shown that only minimal processing is required for the majority of images for a Jupiter orbiter mission.

    Analysis and Design of Resilient VLSI Circuits

    The reliable operation of Integrated Circuits (ICs) has become increasingly difficult to achieve in the deep sub-micron (DSM) era. With continuously decreasing device feature sizes, combined with lower supply voltages and higher operating frequencies, the noise immunity of VLSI circuits is decreasing alarmingly. Thus, VLSI circuits are becoming more vulnerable to noise effects such as crosstalk, power supply variations and radiation-induced soft errors. Among these noise sources, soft errors (errors caused by radiation particle strikes) have become an increasingly troublesome issue for memory arrays as well as combinational logic circuits. Also, in the DSM era, process variations are increasing at an alarming rate, making it more difficult to design reliable VLSI circuits. Hence, it is important to efficiently design robust VLSI circuits that are resilient to radiation particle strikes and process variations. This dissertation presents several analysis and design techniques with the goal of realizing VLSI circuits which are tolerant to radiation particle strikes and process variations. The dissertation consists of two parts. The first part proposes four analysis and two design approaches to address radiation particle strikes. The analysis techniques for radiation particle strikes include: an approach to analytically determine the pulse width and the pulse shape of a radiation-induced voltage glitch in combinational circuits, a technique to model the dynamic stability of SRAMs, and a 3D device-level analysis of the radiation tolerance of voltage-scaled circuits. Experimental results demonstrate that the proposed techniques for analyzing radiation particle strikes in combinational circuits and SRAMs are fast and accurate compared to SPICE. Therefore, these analysis approaches can be easily integrated in a VLSI design flow to analyze the radiation tolerance of such circuits, and harden them early in the design flow. From the 3D device-level analysis of the radiation tolerance of voltage-scaled circuits, several non-intuitive observations are made and, correspondingly, a set of guidelines is proposed that is important to consider when realizing radiation-hardened circuits. Two circuit-level hardening approaches are also presented to harden combinational circuits against a radiation particle strike. These hardening approaches significantly improve the tolerance of combinational circuits against low and very high energy radiation particle strikes, respectively, with modest area and delay overheads. The second part of this dissertation addresses process variations. A technique is developed to perform sensitizable statistical timing analysis of a circuit, and thereby improve the accuracy of timing analysis under process variations. Experimental results demonstrate that this technique is able to significantly reduce the pessimism due to two sources of inaccuracy which plague current statistical static timing analysis (SSTA) tools. Two design approaches are also proposed to improve the process variation tolerance of combinational circuits and of voltage level shifters (which are used in circuits with multiple interacting power supply domains), respectively. The variation-tolerant design approach for combinational circuits significantly improves the resilience of these circuits to random process variations, with a reduction in the worst-case delay and a low area penalty. The proposed voltage level shifter is faster, requires lower dynamic power and area, has lower leakage currents, and is more tolerant to process variations, compared with the best known previous approach. In summary, this dissertation presents several analysis and design techniques which significantly augment the existing work in the area of resilient VLSI circuit design.
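
    The dissertation's analytical pulse-width model is not given in this abstract. As a rough illustration of the underlying physical picture, the sketch below injects the commonly used double-exponential single-event current pulse into a simple linear RC node model and measures how long the resulting glitch stays above VDD/2; all component values are assumed and no transistor-level clamping is modelled.

        # Illustrative radiation-strike glitch on one node: double-exponential current
        # pulse into a capacitance restored through an effective resistance (values assumed).
        import numpy as np

        Q = 50e-15                 # collected charge (C)
        ta, tb = 150e-12, 50e-12   # collection / onset time constants (s)
        C = 10e-15                 # node capacitance (F)
        R = 5e3                    # effective restoring resistance (ohm)
        VDD = 1.0

        dt = 1e-12
        t = np.arange(0.0, 2e-9, dt)
        i_seu = Q / (ta - tb) * (np.exp(-t / ta) - np.exp(-t / tb))   # injected current

        # Forward-Euler integration of C*dv/dt = i_seu(t) - v/R (node nominally at 0 V)
        v = np.zeros_like(t)
        for k in range(1, len(t)):
            v[k] = v[k - 1] + dt / C * (i_seu[k - 1] - v[k - 1] / R)

        width = np.count_nonzero(v > VDD / 2) * dt   # time spent above the VDD/2 threshold
        print(f"glitch pulse width ~ {width * 1e12:.0f} ps, peak {v.max():.2f} V")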

    Time-domain optimization of amplifiers based on distributed genetic algorithms

    Thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the subject of Electrical and Computer Engineering. The work presented in this thesis addresses the task of circuit optimization, helping the designer face the high-performance and high-efficiency circuit demands of market and technology evolution. A novel framework is introduced, based on time-domain analysis, genetic algorithm optimization, and distributed processing. The time-domain optimization methodology is based on the step response of the amplifier. The main advantage of this new time-domain methodology is that, when a given settling error is reached within the desired settling time, it is automatically guaranteed that the amplifier has enough open-loop gain (AOL), output swing (OS), slew rate (SR), closed-loop bandwidth and closed-loop stability. This simplification of the circuit's evaluation helps the optimization process converge faster. The step response expression of the circuit is calculated by applying the inverse Laplace transform, symbolically, to the transfer function multiplied by 1/s (which represents the unity input step). Furthermore, it may be applied to transfer functions of circuits with an unlimited number of zeros/poles, without approximation, in order to preserve accuracy; thus, complex circuits with several design/optimization degrees of freedom can also be considered. The expression of the step response obtained from the proposed methodology is based on the DC bias operating point of the devices of the circuit, for which complex and accurate device models (e.g. BSIM3v3) are integrated. During the optimization process, the time-domain evaluation of the amplifier is used by the genetic algorithm in the classification of the genetic individuals. The time-domain evaluator is integrated into the developed optimization platform as an independent library, coded in the C programming language. Genetic algorithms have proven to be a good approach for optimization since they are flexible and independent of the optimization objective. Different levels of abstraction can be optimized, either at system level or at circuit level. Optimization of any new block is basically carried out by simply providing additional configuration files, e.g. the chromosome format, in text format, and the circuit library where the fitness value of each individual of the genetic algorithm is computed. Distributed processing is also employed to address the increasing processing time demanded by the complex circuit analysis and the accurate models of the circuit devices. The communication among remote processing nodes is based on the Message Passing Interface (MPI). It is demonstrated that distributed processing reduces the optimization run-time by more than one order of magnitude. Platform assessment is carried out with several examples of two-stage amplifiers, which have been optimized and successfully used, embedded, in larger systems such as data converters. A dedicated example of an inverter-based self-biased two-stage amplifier has been designed, laid out and fabricated as a stand-alone circuit and experimentally evaluated. The measured results are a direct demonstration of the effectiveness of the proposed time-domain optimization methodology. Portuguese Foundation for Science and Technology (FCT)
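
    As a small, hedged illustration of the symbolic step-response computation described above (a generic two-pole amplifier with assumed gain and pole values, not the thesis platform, which is coded in C), the sketch below multiplies a transfer function by 1/s, applies the inverse Laplace transform with sympy, and checks the settling error at a few time points.

        # Step response = inverse Laplace transform of H(s) * 1/s (unit input step).
        # Generic two-pole example with assumed values; illustrative only.
        import sympy as sp

        s, t = sp.symbols('s t', positive=True)
        A0 = 1000                      # assumed DC gain
        p1, p2 = 10**6, 10**8          # assumed pole frequencies (rad/s)
        H = A0 / ((1 + s / p1) * (1 + s / p2))

        step = sp.inverse_laplace_transform(H / s, s, t)   # symbolic step response
        final = H.subs(s, 0)                               # final value (= A0 here)

        for tk in (2e-6, 4e-6, 6e-6, 8e-6, 1e-5):          # seconds
            err = abs(float(step.subs(t, tk)) - float(final)) / float(final)
            print(f"t = {tk:.0e} s   settling error = {err:.2e}")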

    Modeling and Analysis of Large-Scale On-Chip Interconnects

    As IC technologies scale to the nanometer regime, efficient and accurate modeling and analysis of VLSI systems with billions of transistors and interconnects becomes increasingly critical and difficult. VLSI systems impacted by increasingly high-dimensional process-voltage-temperature (PVT) variations demand much more modeling and analysis effort than ever before, while the analysis of large-scale on-chip interconnects, which requires solving tens of millions of unknowns, imposes great challenges in computer-aided design. This dissertation presents new methodologies for addressing these two important challenges in large-scale on-chip interconnect modeling and analysis. In the past, standard statistical circuit modeling techniques have usually employed principal component analysis (PCA) and its variants to reduce parameter dimensionality. Although widely adopted, these techniques can be very limited, since parameter dimension reduction is achieved by merely considering the statistical distributions of the controlling parameters while neglecting the important correspondence between these parameters and the circuit performances (responses) under modeling. This dissertation presents a variety of performance-oriented parameter dimension reduction methods that can lead to more than one order of magnitude of parameter reduction for a variety of VLSI circuit modeling and analysis problems. The sheer size of present-day power/ground distribution networks makes their analysis and verification extremely runtime- and memory-inefficient, and at the same time limits the extent to which these networks can be optimized. Given that today's commodity graphics processing units (GPUs) can deliver more than 500 GFlops (billions of floating-point operations per second) of computing power and 100 GB/s of memory bandwidth, more than 10X greater than that offered by modern general-purpose quad-core microprocessors, it is very desirable to convert this GPU computing power into usable design automation tools for VLSI verification. In this dissertation, for the first time, we show how to exploit recent massively parallel single-instruction multiple-thread (SIMT) graphics processing unit (GPU) platforms to tackle power grid analysis with very promising performance. Our GPU-based network analyzer is capable of solving tens of millions of power grid nodes in just a few seconds. Additionally, with the above GPU-based simulation framework, more challenging three-dimensional full-chip thermal analysis can be solved in a much more efficient way than ever before.
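
    The dissertation's reduction methods are not detailed in this abstract; the toy sketch below only illustrates why a performance-oriented reduction can beat plain PCA: when the response is driven by a low-variance parameter, the leading PCA direction explains almost none of it, while a direction taken from the least-squares sensitivity of the response does. All data are synthetic.

        # Toy contrast between PCA and a performance-oriented 1-D parameter reduction.
        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 2000, 20
        P = rng.standard_normal((n, d)) * np.linspace(3.0, 0.1, d)   # high-variance params first
        w = np.zeros(d)
        w[-1] = 1.0                                                  # response depends only on the
        y = P @ w + 0.01 * rng.standard_normal(n)                    # last, lowest-variance parameter

        # 1) PCA keeps the direction of largest parameter variance (response is ignored).
        _, _, Vt = np.linalg.svd(P - P.mean(0), full_matrices=False)
        pca_dir = Vt[0]

        # 2) Performance-oriented: direction of the least-squares sensitivity of y to the parameters.
        beta, *_ = np.linalg.lstsq(P, y, rcond=None)
        perf_dir = beta / np.linalg.norm(beta)

        def r2(direction):
            z = P @ direction                        # one-dimensional reduced parameter
            c = np.corrcoef(z, y)[0, 1]
            return c * c

        print(f"R^2 explained: PCA {r2(pca_dir):.3f}, performance-oriented {r2(perf_dir):.3f}")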

    Synthesis of variability-tolerant circuits with adaptive clocking

    Improvements in circuit manufacturing have allowed increasingly complex designs over the years, enabled by the miniaturization that circuit components have undergone. In recent years, however, this scaling has shown decreasing benefits as we approach fundamental limits. Furthermore, the decrease in size is now producing an increase in variability: unpredictable differences and changes in the behavior of components. Historically, this has been addressed by establishing guard-band margins at the design stage. Nonetheless, as variability grows, the pessimism introduced by these margins takes an ever-increasing toll on performance and power consumption. In recent years, several approaches have been proposed to lower the impact of variability and reduce margins. One such technique is the substitution of a classical PLL clock by a Ring Oscillator Clock, designed so that its variability is highly correlated with that of the circuit. One of the contributions of this thesis is the automatic design of such circuits. In particular, we propose a novel method to design digital delay lines with variability-tracking properties. These designs are also suitable for other purposes, such as bundled-data circuits or performance monitors. The advantage of the proposed technique is its exclusive use of cells from a standard cell library, which lowers the design cost and complexity. The other focus of this thesis is state encoding for asynchronous controllers. One of the main properties of asynchronous circuits is their ability to work, implicitly, under variable conditions; in the near future, this advantage might increase the relevance of this class of circuits. One of the hardest stages of the synthesis of these circuits is state encoding. This thesis presents a SAT-based algorithm for solving the state encoding at the state level. It is shown, by means of a comprehensive benchmark suite, that results obtained with this technique improve significantly on those of similar approaches. Nonetheless, the main limitation of state-level techniques is the state explosion problem, to which the sequential modeling of concurrency is often subject. The last contribution of this thesis is a method to process asynchronous circuits so that state-based techniques can be used on large instances. The process is divided into three stages: projection, signal insertion and re-composition. In the projection step, the behavior of the controller is simplified until the signal insertion can be performed by state-based techniques. Afterwards, the re-composition generalizes the insertion of the signal into the original controller. Experimental results show that this process enables the resolution of large controllers, on the order of 10^6 states, by state-based techniques, with only a minor impact on solution quality, preserving one of the main advantages of state-based approaches.
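
    The thesis' design method for variability-tracking delay lines is not described in this abstract; the sketch below is only a hypothetical illustration of the kind of search involved: choosing a mix of standard-library cells whose chain matches both a target nominal delay and a target delay sensitivity (here to supply droop). Cell figures, targets and the brute-force search are all assumptions for the example.

        # Hypothetical delay-line composition: match a target delay and a target
        # sensitivity using only standard-library cells (all numbers assumed).
        import itertools

        # (nominal delay in ps, % delay increase per 10% VDD droop) per cell
        cells = {"BUF_X1": (22.0, 6.5), "BUF_X4": (15.0, 4.0),
                 "INV_X1": (11.0, 6.0), "DLY_X1": (45.0, 7.5)}

        target_delay, target_sens = 310.0, 6.3   # critical-path figures to imitate

        best, best_cost = None, float("inf")
        for a, b in itertools.combinations_with_replacement(cells, 2):
            for na in range(21):
                for nb in range(21 - na):
                    d = na * cells[a][0] + nb * cells[b][0]
                    if d == 0:
                        continue
                    # chain sensitivity = delay-weighted average of cell sensitivities
                    sens = (na * cells[a][0] * cells[a][1] + nb * cells[b][0] * cells[b][1]) / d
                    cost = abs(d - target_delay) / target_delay + abs(sens - target_sens) / target_sens
                    if cost < best_cost:
                        best, best_cost = (a, na, b, nb, d, sens), cost

        a, na, b, nb, d, sens = best
        print(f"{na} x {a} + {nb} x {b}: delay {d:.0f} ps, sensitivity {sens:.2f} %/10% droop")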

    Manufacturability Aware Design.

    The aim of this work is to provide solutions that optimize the tradeoffs among design, manufacturability, and cost of ownership posed by technology scaling and sub-wavelength lithography. These solutions may take the form of robust circuit designs, cost-effective resolution technologies, accurate modeling considering process variations, and design rule assessment. We first establish a framework for assessing the impact of process variation on circuit performance, product value and return on investment for alternative processes. Key features include comprehensive modeling with separate handling of die-to-die and within-die variation, accurate models of the correlations of variation, realistic and quantified projection to future process nodes, and performance sensitivity analysis with respect to improved control of individual device parameters and variation sources. We then describe a novel minimum-cost-of-correction methodology which determines the level of correction of each layout feature such that the prescribed parametric yield is attained with minimum RET (Resolution Enhancement Technology) cost. This timing-driven OPC (Optical Proximity Correction) insertion flow uses a mathematical-programming-based slack budgeting algorithm to determine the OPC level for all polysilicon gate geometries. Designs adopting this methodology show up to 20% MEBES (Manufacturing Electron Beam Exposure System) data volume reduction and 39% OPC runtime improvement. When the systematic correction residual errors become unavoidable, we analyze their impact on a state-of-the-art microprocessor's speedpath skew. A platform is created for diagnosing and improving OPC quality on gates with specific functionality, such as critical gates or matching transistors. Significant changes in full-chip timing analysis indicate the necessity of a post-OPC performance verification design flow. Finally, we quantify the performance, manufacturability and mask cost impact of globally applying several common restrictive design rules. Novel approaches such as locally adapting FDRs (flexible design rules) based on the image parameter range, and DRC Plus (preferred design rule enforcement with 2D pattern matching), are also described. Ph.D. Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/57676/2/jiey_1.pd
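
    The slack-budgeting formulation used in the dissertation is not given in this abstract; the sketch below is a deliberately tiny continuous relaxation of the same idea, posed as a linear program: choose per-gate OPC correction levels in [0, 1] that minimise total correction cost while every timing path still meets the clock period. Gate delays, path lists, costs and the linear delay-vs-correction model are all assumptions for the example.

        # Toy LP: minimum-cost OPC level assignment under path timing constraints.
        import numpy as np
        from scipy.optimize import linprog

        n_gates = 4
        d_worst = np.array([120.0, 150.0, 100.0, 130.0])   # ps, delay with no correction
        d_gain  = np.array([ 25.0,  35.0,  20.0,  30.0])   # ps recovered at full correction
        cost    = np.array([  1.0,   2.0,   1.5,   2.5])   # relative OPC data/runtime cost

        paths = [[0, 1, 2], [1, 3]]                        # gates on each timing path
        T_clk = 330.0                                      # ps

        # sum(d_worst - d_gain * x) <= T_clk  ->  -sum(d_gain * x) <= T_clk - sum(d_worst)
        A_ub = np.zeros((len(paths), n_gates))
        b_ub = np.zeros(len(paths))
        for k, p in enumerate(paths):
            A_ub[k, p] = -d_gain[p]
            b_ub[k] = T_clk - d_worst[p].sum()

        res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n_gates)
        print("correction levels:", np.round(res.x, 2), " total cost: %.2f" % res.fun)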