AI/ML Algorithms and Applications in VLSI Design and Technology
An evident challenge ahead for the integrated circuit (IC) industry in the
nanometer regime is the investigation and development of methods that can
reduce the design complexity ensuing from growing process variations and
curtail the turnaround time of chip manufacturing. Conventional methodologies
employed for such tasks are largely manual; thus, time-consuming and
resource-intensive. In contrast, the unique learning strategies of artificial
intelligence (AI) provide numerous exciting automated approaches for handling
complex and data-intensive tasks in very-large-scale integration (VLSI) design
and testing. Employing AI and machine learning (ML) algorithms in VLSI design
and manufacturing reduces the time and effort for understanding and processing
the data within and across different abstraction levels via automated learning
algorithms. This, in turn, improves the IC yield and reduces the manufacturing
turnaround time. This paper thoroughly reviews the AI/ML-based automated approaches
previously introduced for VLSI design and manufacturing. Moreover, we
discuss the scope of future AI/ML applications at various abstraction
levels to revolutionize the field of VLSI design, aiming for high-speed, highly
intelligent, and efficient implementations.
Robust Optimization of Nanometer SRAM Designs
Technology scaling has been the most obvious choice of designers and chip
manufacturing companies to improve the performance of analog and digital circuits.
With the ever-shrinking technology node, process variations can no longer be ignored
and play a significant role in determining the performance of nanoscaled devices. By
adopting a worst-case design methodology, circuit designers have been overly generous
with the chosen design parameters, often resulting in pessimistic designs with
significant area overheads.
Significant work has been done on estimating the impact of intra-die process
variations on circuit performance, in particular noise margin and standby leakage power,
for fixed transistor channel dimensions. However, for an optimal, high-yield SRAM cell
design, it is imperative to analyze the impact of process variations at every
design point, especially since the distribution of process variations is itself a statistically
varying parameter that is inversely correlated with the area of the MOS transistor.
Furthermore, the first-order analytical models used for the optimization of SRAM memories
lack accuracy, and the impact of voltage, and its inclusion as an input along with
the other design parameters, is often ignored.
In this thesis, the performance parameters of a nano-scaled 6-T SRAM cell are
modeled as accurate, yield-aware, empirical polynomial predictors in the presence of
intra-die process variations. The estimated empirical models are used in a constrained,
non-linear, robust optimization framework to design an SRAM cell, for a 45 nm CMOS
technology, with optimal performance according to bounds specified for the circuit
performance parameters and with the objective of minimizing on-chip area. This
statistically aware technique provides a more realistic design methodology for studying
the trade-off between the performance parameters of the SRAM.
Furthermore, a dual optimization approach is followed, in which the SRAM
power supply and wordline voltages are considered as additional input parameters, to
simultaneously tune the design parameters, ensuring a high yield and a considerable area
reduction. In addition, the cell-level optimization framework is extended to the
system-level optimization of caches, under both cell-level and system-level performance
constraints.
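The model-then-optimize flow this abstract describes can be sketched in a few lines: fit an empirical second-order polynomial predictor to sampled simulation data, then minimize an area proxy subject to a bound on the predicted performance. Everything here is illustrative — `simulated_snm`, its coefficients, the 0.2 V bound, and the grid-search solver are hypothetical stand-ins for the thesis's SPICE-derived, yield-aware models and constrained non-linear optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a SPICE run: static noise margin (SNM) of a 6-T
# cell as a smooth function of pull-up/pull-down widths (all values made up).
def simulated_snm(w_pu, w_pd):                      # widths in um, SNM in volts
    return 0.10 + 0.8 * w_pd - 0.3 * w_pu - 0.5 * w_pd**2

# 1) Fit an empirical second-order polynomial predictor from sampled data.
W = rng.uniform(0.1, 0.5, size=(200, 2))            # (w_pu, w_pd) samples
y = simulated_snm(W[:, 0], W[:, 1])
basis = lambda a, b: np.array([1.0, a, b, a * a, b * b, a * b])
X = np.array([basis(a, b) for a, b in W])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# 2) Minimize an area proxy subject to a performance bound on the *predictor*
#    (a coarse grid search stands in for the constrained non-linear solver).
grid = np.linspace(0.1, 0.5, 81)
best = None
for w_pu in grid:
    for w_pd in grid:
        if basis(w_pu, w_pd) @ coef >= 0.2:         # SNM bound: >= 0.2 V
            area = w_pu + w_pd                      # crude area proxy
            if best is None or area < best[0]:
                best = (area, w_pu, w_pd)
print(best)
```

The thesis's actual framework additionally folds process-variation statistics into the bound, so the constraint becomes a yield requirement rather than a deterministic limit as in this sketch.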
Statistical Yield Analysis and Design for Nanometer VLSI
Process variability is the pivotal factor affecting the design of high-yield integrated circuits and systems in deep sub-micron CMOS technologies. The electrical and physical properties of transistors and interconnects, the building blocks of integrated circuits, are prone to significant variations that directly affect the performance and power consumption of the fabricated devices, severely degrading the manufacturing yield. Moreover, the large number of transistors on a single chip adds even more challenges to the analysis of variation effects, a critical task in diagnosing the cause of failure and designing for yield. Reliable and efficient statistical analysis methodologies in the various design phases are key to predicting the yield before entering such an expensive fabrication process.
In this thesis, the impacts of process variations are examined at three different levels: device, circuit, and micro-architecture. The variation models are provided for each level of abstraction, and new methodologies are proposed for efficient statistical analysis and design under variation.
At the circuit level, the variability analysis of three crucial sub-blocks of today's systems-on-chip, namely digital circuits, memory cells, and analog blocks, is targeted. The accurate and efficient yield analysis of circuits is recognized as an extremely challenging task within the electronic design automation community. The large scale of digital circuits, the extremely high yield requirement for memory cells, and the time-consuming analog circuit simulation are major concerns in the development of any statistical analysis technique. In this thesis, several sampling-based methods are proposed for these three types of circuits to significantly improve the run-time of the traditional Monte Carlo (MC) method without compromising accuracy. The proposed sampling-based yield analysis methods retain the most appealing feature of the MC method, namely the ability to handle any complex circuit model, while the use and engineering of advanced variance reduction and sampling methods provides ultra-fast yield estimation for different types of VLSI circuits. Such methods include control variates, importance sampling, correlation-controlled Latin Hypercube Sampling, and Quasi Monte Carlo.
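As a minimal sketch of one of the sampling techniques named above, the snippet below compares plain Monte Carlo with (uncorrelated) Latin Hypercube Sampling for estimating the yield of a toy pass/fail circuit model. The failure model, its coefficients, and the threshold-voltage-shift statistics are invented for illustration and are not from the thesis.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
DIST = NormalDist(mu=0.0, sigma=0.02)     # illustrative Vth-shift distribution

# Toy pass/fail test: a cell fails when two threshold-voltage shifts push a
# linearized performance metric below its limit (coefficients made up).
def passes(dvt):                          # dvt: (n, 2) array of Vth shifts [V]
    metric = 0.30 - 1.5 * dvt[:, 0] - 1.0 * dvt[:, 1]
    return metric > 0.25

def mc_yield(n):
    """Plain Monte Carlo yield estimate."""
    dvt = rng.normal(0.0, 0.02, size=(n, 2))
    return passes(dvt).mean()

def lhs_yield(n):
    """Latin Hypercube Sampling: one sample per equal-probability stratum in
    each dimension, strata paired by an independent shuffle per dimension."""
    u = (np.arange(n)[:, None] + rng.random((n, 2))) / n
    u = np.clip(u, 1e-12, None)           # guard against p == 0 for inv_cdf
    for j in range(2):
        rng.shuffle(u[:, j])
    dvt = np.array([[DIST.inv_cdf(p) for p in row] for row in u])
    return passes(dvt).mean()

print(mc_yield(4000), lhs_yield(4000))    # both near the true yield (~0.92)
```

With the metric's margin stratified evenly across the input range, the LHS estimate typically fluctuates less from run to run than the plain MC estimate at the same sample count, which is the variance-reduction effect the thesis engineers for circuit yield.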
At the device level, a methodology is proposed that brings a variation-aware design perspective to MOS devices in aggressively scaled geometries. The method introduces a device-level yield measure that targets the saturation and leakage currents of an MOS transistor, and a statistical method is developed to optimize the advanced doping profiles and geometry features of a device to achieve the maximum device-level yield.
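A toy version of such a device-level yield measure might look as follows: a device passes when its drive current is high enough and its leakage current low enough, both modeled here as hypothetical functions of a single doping parameter; a sweep then picks the nominal doping that maximizes the Monte Carlo yield estimate. The current models, spec limits, and variation figures are placeholders, not TCAD-calibrated models.

```python
import numpy as np

rng = np.random.default_rng(3)

def device_yield(na_nominal, sigma_rel=0.05, n=20_000):
    """Fraction of devices meeting BOTH current specs under doping variation.
    All models and spec values below are illustrative placeholders."""
    na = rng.normal(na_nominal, sigma_rel * na_nominal, size=n)   # doping, cm^-3
    i_on = 1.0e-3 * (na / 1e18) ** -0.3           # higher doping -> lower drive
    i_off = 1.0e-9 * np.exp(-(na - 1e18) / 3e17)  # higher doping -> lower leakage
    return np.mean((i_on >= 0.95e-3) & (i_off <= 1.2e-9))

# Sweep the nominal doping and keep the value maximizing the yield measure,
# a crude stand-in for the thesis's statistical doping-profile optimization.
candidates = np.linspace(0.8e18, 1.3e18, 11)
yields = [device_yield(na) for na in candidates]
best_na = candidates[int(np.argmax(yields))]
print(best_na, max(yields))
```

The sweep illustrates the underlying trade-off: the drive-current spec caps the doping from above while the leakage spec bounds it from below, so the yield-optimal nominal sits where the variation cloud fits best inside that window.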
Finally, a statistical thermal analysis framework is proposed that accounts for process and thermal variations simultaneously at the micro-architectural level. The analyzer builds on the fact that process variations lead to uncertain leakage power sources, so that the thermal profile itself has a probabilistic nature. Therefore, through a combined process-thermal-leakage analysis, a more reliable full-chip statistical leakage power yield is calculated.
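The process-to-leakage link that such a framework exploits can be illustrated with a small Monte Carlo experiment: subthreshold leakage depends exponentially on threshold voltage, so Gaussian Vth variation makes per-device leakage lognormal, and by convexity the mean full-chip leakage exceeds its nominal value. The device counts, voltages, and leakage budget below are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# All counts and device figures below are illustrative placeholders.
N_DEV = 2_000                 # leaky devices per (toy) chip
N_CHIPS = 1_000               # Monte Carlo chip samples
VT_NOM, SIGMA = 0.30, 0.03    # nominal Vth and intra-die sigma (volts)
N_SLOPE, V_T = 1.3, 0.02585   # subthreshold slope factor, thermal voltage
I0 = 1e-7                     # leakage prefactor (amps, illustrative)

# Gaussian Vth variation -> lognormal per-device subthreshold leakage.
vth = rng.normal(VT_NOM, SIGMA, size=(N_CHIPS, N_DEV))
leak = I0 * np.exp(-vth / (N_SLOPE * V_T))
chip_leak = leak.sum(axis=1)                   # full-chip leakage per sample

nominal = N_DEV * I0 * np.exp(-VT_NOM / (N_SLOPE * V_T))
limit = 1.5 * nominal                          # leakage budget: 1.5x nominal
leakage_yield = (chip_leak <= limit).mean()

# Convexity of exp() pushes the mean leakage well above the nominal value.
print(f"mean/nominal = {chip_leak.mean() / nominal:.2f}, "
      f"leakage yield = {leakage_yield:.3f}")
```

A full statistical thermal analysis would close the loop the abstract describes: the probabilistic leakage feeds a thermal model, and the resulting temperature uncertainty feeds back into leakage before the power yield is read off.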