3,652 research outputs found

    AI/ML Algorithms and Applications in VLSI Design and Technology

    Full text link
    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and therefore time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the automated AI/ML approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
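
    The common pattern underlying many of the surveyed approaches is to learn a fast surrogate for an expensive design or manufacturing quantity from labelled data. The minimal sketch below illustrates only that pattern; the features (fan-out, wire length, gate count), the target (path delay), and the data-generating model are all invented for illustration and are not from the survey.

        # Minimal sketch of the ML-surrogate pattern: fit a cheap regression
        # model to labelled design data, then use it in place of a slow analysis.
        # All features, labels, and coefficients here are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, (500, 3))    # toy features: fan-out, wire length, gate count
        y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 500)

        # Ridge regression in closed form: w = (A^T A + lam*I)^-1 A^T y.
        A = np.hstack([X, np.ones((500, 1))])  # append a bias column
        w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(4), A.T @ y)

        # The fitted surrogate now predicts the metric without re-running the
        # expensive analysis for each candidate design.
        print(A[:5] @ w, y[:5])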

    Calculation of Generalized Polynomial-Chaos Basis Functions and Gauss Quadrature Rules in Hierarchical Uncertainty Quantification

    Get PDF
    Stochastic spectral methods are efficient techniques for uncertainty quantification. Recently they have shown excellent performance in the statistical analysis of integrated circuits. In stochastic spectral methods, one needs to determine a set of orthonormal polynomials and a proper numerical quadrature rule. The former are used as the basis functions in a generalized polynomial-chaos expansion; the latter is used to compute the integrals involved in stochastic spectral methods. Obtaining such information requires knowing the density function of the random input a priori. However, individual system components are often described by surrogate models rather than density functions. In order to apply stochastic spectral methods in hierarchical uncertainty quantification, we first propose to construct physically consistent closed-form density functions by two monotone interpolation schemes. Then, by exploiting the special forms of the obtained density functions, we determine the generalized polynomial-chaos basis functions and the Gauss quadrature rules that are required by a stochastic spectral simulator. The effectiveness of our proposed algorithm is verified by both synthetic and practical circuit examples.
    Comment: Published by IEEE Trans CAD in May 201
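
    For a one-dimensional input with a known closed-form density, the basis recurrence and the matching Gauss rule can be built numerically: run the Stieltjes procedure against the density to obtain the three-term recurrence coefficients, then eigen-decompose the Jacobi matrix (the Golub-Welsch algorithm). The sketch below shows that generic construction with a stand-in truncated-Gaussian density; it is not the paper's interpolation-based density construction or its specialised formulas.

        # Generic construction of orthogonal-polynomial recurrence coefficients
        # and the corresponding Gauss quadrature for a density rho(x).
        import numpy as np

        def recurrence_coeffs(rho, a, b, n, m=4000):
            # Stieltjes procedure evaluated on a dense uniform grid over [a, b].
            x = np.linspace(a, b, m)
            dx = x[1] - x[0]
            integ = lambda f: f.sum() * dx          # simple Riemann sum
            w = rho(x)
            w = w / integ(w)                        # normalise the density
            alpha, beta = np.zeros(n), np.zeros(n)
            p_prev, p = np.zeros(m), np.ones(m)     # pi_{-1} = 0, pi_0 = 1
            for k in range(n):
                nk = integ(p * p * w)               # ||pi_k||^2
                alpha[k] = integ(x * p * p * w) / nk
                beta[k] = nk / integ(p_prev * p_prev * w) if k else 1.0
                p_prev, p = p, (x - alpha[k]) * p - beta[k] * p_prev
            return alpha, beta

        def gauss_rule(alpha, beta):
            # Golub-Welsch: eigendecompose the symmetric tridiagonal Jacobi matrix.
            off = np.sqrt(beta[1:])
            J = np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1)
            nodes, V = np.linalg.eigh(J)
            return nodes, V[0, :] ** 2              # weights (unit-mass density)

        rho = lambda x: np.exp(-0.5 * x ** 2)       # stand-in: truncated Gaussian
        alpha, beta = recurrence_coeffs(rho, -6.0, 6.0, 6)
        nodes, weights = gauss_rule(alpha, beta)
        print(nodes, weights)                       # close to Gauss-Hermite rule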

    Technology Independent Synthesis of CMOS Operational Amplifiers

    Get PDF
    Analog circuit design does not enjoy as much automation as its digital counterpart. Analog sizing is inherently knowledge-intensive and requires accurate modeling of the different parametric effects of the devices. Besides, the set of constraints in a typical analog design problem is large, involving complex tradeoffs. For these reasons, the task of modeling an analog design problem in a form viable for automation is much more tedious than in digital design. Consequently, analog blocks are still handcrafted intuitively and often become a bottleneck in integrated circuit design, thereby increasing the time to market. In this work, we address the problem of automatically solving an analog circuit design problem. Specifically, we propose methods to automate the transistor-level sizing of OpAmps. Given the specifications and the netlist of the OpAmp, our methodology produces a design that has the accuracy of the BSIM models used for simulation and the advantage of a quick design time. The approach is based on generating an initial first-order design and then refining it. The refinement is essentially a simulated-annealing scheme that uses (i) localized simulations and (ii) a convex optimization scheme (COS). The optimal set of input variables for the localized simulations is selected using techniques from Design of Experiments (DOE). To formulate the design problem as a COS problem, we use monomial circuit models fitted from simulation data. These models accurately predict the performance of the circuit in the proximity of the initial guess, and can also be used to gain valuable insight into the behavior of the circuit and the interrelations between the different performance constraints. A software framework implementing this methodology has been coded in the SKILL language of Cadence. The methodology can be applied to design different OpAmp topologies across different technologies; in other words, the framework is both technology-independent and topology-independent. In addition, we develop a scheme to empirically model small-signal parameters such as 'gm' and 'gds' of CMOS transistors. The monomial device models are reusable for a given technology and can be used to formulate the OpAmp design problem as a COS problem. The efficacy of the framework has been demonstrated by automatically designing different OpAmp topologies across different technologies: we designed a two-stage OpAmp and a telescopic OpAmp in the TSMC025 and AMI016 technologies. Our results show a significant (10–15%) improvement in the performance of both OpAmps in both technologies. While the methodology has shown encouraging results in the sub-micrometer regime, the effectiveness of the tool has yet to be investigated in deep-submicron technologies.
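
    The monomial models above are what make the COS formulation (a geometric program) tractable: a monomial such as gm ≈ c · W^a1 · L^a2 · Id^a3 becomes linear after taking logarithms, so it can be fitted to simulation samples by ordinary least squares. The sketch below shows that fit on synthetic square-law-like data; in the actual flow the samples would come from BSIM-accurate simulations, and the variable set and exponents here are purely illustrative.

        # Fit a monomial model  y ~ c * W^a1 * L^a2 * Id^a3  by least squares
        # in log space. The sample data is synthetic, not simulator output.
        import numpy as np

        def fit_monomial(X, y):
            # X: (n, k) positive design variables; y: positive responses.
            A = np.hstack([np.ones((X.shape[0], 1)), np.log(X)])
            coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
            return np.exp(coef[0]), coef[1:]        # constant c, exponent vector

        def eval_monomial(c, exponents, X):
            return c * np.prod(X ** exponents, axis=1)

        rng = np.random.default_rng(0)
        W = rng.uniform(1.0, 50.0, 200)             # width (um)
        L = rng.uniform(0.25, 2.0, 200)             # length (um)
        Id = rng.uniform(1e-5, 1e-3, 200)           # bias current (A)
        X = np.column_stack([W, L, Id])
        # Square-law-like 'gm' with 2% lognormal noise: gm ~ sqrt(W/L * Id).
        gm = 4e-3 * np.sqrt(W / L * Id) * rng.lognormal(0.0, 0.02, 200)

        c, e = fit_monomial(X, gm)
        print(c, e)                                 # exponents near (0.5, -0.5, 0.5)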

    IC optimisation using parallel processing and response surface methodology

    Get PDF

    The application of multi-objective robust design methods in ship design

    Get PDF
    When designing large complex vessels, the evaluation of a particular design can be both complicated and time-consuming. Designers often resort to concept design models, enabling a reduction in both complexity and evaluation time. Various optimisation methods are then typically used to explore the design space, facilitating the selection of optimum or near-optimum designs. It is now possible to incorporate considerations of seakeeping, stability and costs at the earliest stage in the ship design process. However, to ensure that reliable results are obtained, the models used are generally complex and computationally expensive. Methods have been developed which avoid the necessity of an exhaustive search of the complete design space. One such method is described, concerned with the application of the theory of Design Of Experiments (DOE) to enable the design space to be efficiently explored. The objective of the DOE stage is to produce response surfaces which can then be used by an optimisation module to search the design space. It is assumed that the concept exploration tool, whilst being a simplification of the design problem, is still sufficiently detailed to enable reliable evaluations of a particular design concept. The response surface is used as a representation of the concept exploration tool and, by its nature, can be used to rapidly evaluate a design concept, hence reducing concept exploration time. While the methodology has wide applicability in ship design and production, it is illustrated here by its application to the design of a catamaran with respect to seakeeping. The paper presents results exploring the design space for the catamaran. A concept is selected which is robust with respect to the Relative Bow Motion (RBM), heave, pitch and roll at any particular waveheading. The design space is defined by six controllable design parameters: hull length, breadth-to-draught ratio, distance between demihull centres, coefficient of waterplane, longitudinal centre of flotation and longitudinal centre of buoyancy, and by one noise parameter, the waveheading. A Pareto-optimal set of solutions is obtained using RBM, heave, pitch and roll as criteria, from which the designer can select the design that most closely satisfies their requirements. Typical solutions yield average reductions of over 25% in the objective functions when compared to earlier results obtained using conventional optimisation methods.
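
    Two generic ingredients of this approach are easy to illustrate: a second-order response surface fitted by least squares to DOE evaluations, and a Pareto filter over competing criteria. The sketch below uses invented stand-ins (six abstract parameters in [0, 1], two toy responses) rather than the catamaran model or the actual seakeeping criteria.

        # Quadratic response surface over DOE points plus a Pareto filter;
        # designs and responses are synthetic placeholders.
        import numpy as np
        from itertools import combinations_with_replacement

        def quadratic_features(X):
            # Columns [1, x_i, x_i * x_j] for a second-order response surface.
            cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
            cols += [X[:, i] * X[:, j]
                     for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
            return np.column_stack(cols)

        def fit_rsm(X, y):
            beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
            return lambda Xq: quadratic_features(Xq) @ beta   # cheap surrogate

        def pareto_mask(F):
            # F: (n, m) criteria to minimise; True where no other point dominates.
            n = len(F)
            return np.array([not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                                     for j in range(n) if j != i)
                             for i in range(n)])

        rng = np.random.default_rng(1)
        designs = rng.uniform(0, 1, (64, 6))         # stand-in DOE over 6 parameters
        rbm = ((designs - 0.3) ** 2).sum(axis=1)     # toy criterion 1
        heave = ((designs - 0.7) ** 2).sum(axis=1)   # toy criterion 2

        surrogate = fit_rsm(designs, rbm)            # fast stand-in for the tool
        print(surrogate(designs[:3]), rbm[:3])       # surrogate vs direct values
        front = designs[pareto_mask(np.column_stack([rbm, heave]))]
        print(len(front), "non-dominated designs")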

    Product assurance technology for custom LSI/VLSI electronics

    Get PDF
    The technology for obtaining custom integrated circuits from CMOS-bulk silicon foundries using a universal set of layout rules is presented. The technical efforts were guided by the requirement to develop a 3 micron CMOS test chip for the Combined Release and Radiation Effects Satellite (CRRES). This chip contains both analog and digital circuits. The development employed all the elements required to obtain custom circuits from silicon foundries, including circuit design, foundry interfacing, circuit test, and circuit qualification.

    Architectural level delay and leakage power modelling of manufacturing process variation

    Get PDF
    PhD Thesis. The effect of manufacturing process variations has become a major issue in the estimation of circuit delay and power dissipation, and will gain more importance in the future as device scaling continues in order to satisfy marketplace demands for circuits with greater performance and functionality per unit area. Statistical modelling and analysis approaches have been widely used to reflect the effects of a variety of variational process parameters on system performance factors, which are described as probability density functions (PDFs). At present, most investigations into statistical models have been limited to small circuits such as a logic gate. However, the massive size of present-day electronic systems precludes the use of design techniques which consider a system to comprise these basic gates, as this level of design is very inefficient and error-prone. This thesis proposes a methodology to bring the effects of process variation from transistor level up to architectural level in terms of circuit delay and leakage power dissipation. Using a first-order canonical model and a statistical analysis approach, a statistical cell library has been built which comprises not only basic gate cell models but also more complex functional blocks such as registers, FIFOs, counters and ALUs. Furthermore, other factors to which overall system performance is sensitive, such as input signal slope, output load capacitance, different signal switching cases and transition types, are also taken into account for each cell in the library, which makes it adaptive to incremental circuit design. The proposed methodology enables an efficient analysis of process variation effects on system performance with significantly reduced computation time compared to the Monte Carlo simulation approach. As a demonstration vehicle for this technique, the delay and leakage power distributions of a 2-stage asynchronous micropipeline circuit have been simulated using this cell library. The experimental results show that the proposed method can predict the delay and leakage power distributions with less than 5% error and with computation at least 50,000 times faster than a 5,000-sample SPICE-based Monte Carlo simulation. The methodology presented here for modelling process variability plays a significant role in Design for Manufacturability (DFM) by quantifying the direct impact of process variations on system performance. The advantages of being able to undertake this analysis at a high level of abstraction, and thus early in the design cycle, are twofold. First, if the predicted effects of process variation render the circuit performance outwith specification, design modifications can be readily incorporated to rectify the situation. Second, knowing the acceptable limits of process variation for maintaining design performance within its specification, informed choices can be made regarding the implementation technology and the manufacturer selected to fabricate the design.
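
    A first-order canonical model of the kind named above expresses each cell delay as a nominal value plus linear sensitivities to shared (global) process parameters and to an independent random term. Such forms can be combined analytically along a path, which is why the library approach is so much cheaper than SPICE-level Monte Carlo. The sketch below demonstrates the arithmetic on two invented stage delays and checks it against sampling; none of the numbers come from the thesis.

        # First-order canonical delay form: d = d0 + sum_i a_i*dX_i + r*dR,
        # with dX_i shared N(0,1) process parameters and dR independent.
        import numpy as np

        class Canonical:
            def __init__(self, d0, a, r):
                self.d0, self.a, self.r = d0, np.asarray(a, float), r
            def __add__(self, other):
                # Means and shared sensitivities add; the independent
                # components add in quadrature.
                return Canonical(self.d0 + other.d0, self.a + other.a,
                                 np.hypot(self.r, other.r))
            def mean_std(self):
                return self.d0, np.sqrt((self.a ** 2).sum() + self.r ** 2)

        g1 = Canonical(100.0, [4.0, 2.0], 3.0)   # stage-1 delay (ps), toy values
        g2 = Canonical(80.0,  [3.0, 1.0], 2.0)   # stage-2 delay (ps), toy values
        mu, sigma = (g1 + g2).mean_std()         # analytic path-delay statistics

        # Monte Carlo check over the same model (5,000 samples).
        rng = np.random.default_rng(2)
        dX = rng.standard_normal((5000, 2))      # shared process parameters
        dR1, dR2 = rng.standard_normal(5000), rng.standard_normal(5000)
        samples = (g1.d0 + dX @ g1.a + g1.r * dR1) + \
                  (g2.d0 + dX @ g2.a + g2.r * dR2)
        print(mu, sigma, samples.mean(), samples.std())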