
    Analysis of Dynamic Logic Circuits in Deep Submicron CMOS Technologies

    Dynamic logic circuits are used to minimize the delay in the critical path of high-performance designs such as the datapath circuits in state-of-the-art microprocessors. However, as integrated circuits (ICs) scale into the very deep submicron (VDSM) regime, dynamic logic becomes susceptible to a variety of failure modes due to decreasing noise margins and increasing leakage currents. The objective of this thesis is to characterize the performance of dynamic logic circuits in VDSM technologies and to evaluate various design strategies that mitigate the effects of leakage currents and small noise margins.

    Statistical circuit simulations - from ‘atomistic’ compact models to statistical standard cell characterisation

    This thesis describes the development and application of statistical circuit simulation methodologies to analyse digital circuits subject to intrinsic parameter fluctuations. The specific nature of intrinsic parameter fluctuations is discussed, and we explain the crucial importance to the semiconductor industry of developing design tools which accurately account for their effects. Current work in the area is reviewed, and three important factors are made clear: any statistical circuit simulation methodology must be based on physically correct, predictive models of device variability; the statistical compact models describing device operation must be characterised for accurate transient analysis of circuits; and the analysis must be carried out on realistic circuit components. Improving on previous efforts in the field, we posit a statistical circuit simulation methodology which accounts for all three of these factors. The established 3-D Glasgow atomistic simulator is employed to predict electrical characteristics for devices aimed at digital circuit applications, with gate lengths from 35 nm to 13 nm. Using these electrical characteristics, BSIM4 compact models are extracted and their accuracy in transient analysis with SPICE is validated against well-characterised mixed-mode TCAD simulation results for 35 nm devices. Static d.c. simulations are performed to test the methodology, and a useful analytic model to predict hard logic fault limitations on CMOS supply voltage scaling is derived as part of this work. Using our toolset, the effect of statistical variability introduced by random discrete dopants on the dynamic behaviour of inverters is studied in detail. As devices are scaled, the dynamic noise margin variation of an inverter increases, while a higher output load or input slew rate improves both the noise margins and their variation. Intrinsic delay variation based on the CV/I delay metric is also compared using the ION and IEFF definitions, where the best estimate is obtained when both ION and input transition time variations are considered. The critical delay distribution of a path is also investigated and is shown to be non-Gaussian. Finally, the impact of the cell input slew rate definition on the accuracy of inverter cell timing characterisation in the NLDM format is investigated.
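
    To make the CV/I comparison concrete, the minimal Python sketch below evaluates the intrinsic delay metric with both the ION and the IEFF drive-current definitions. The supply voltage, load capacitance and current values are invented placeholders, and the IEFF formula follows the common (IH + IL)/2 convention, which may differ from the exact definition used in the thesis.

    # Hypothetical illustration of the CV/I intrinsic delay metric; all numbers are placeholders.
    VDD = 0.9          # supply voltage [V] (assumed)
    C_LOAD = 1e-15     # effective switched capacitance [F] (assumed)

    # Drive-current estimates for one statistical device sample [A] (placeholders)
    I_ON = 60e-6                   # I_D at V_GS = V_DS = VDD
    I_H = 45e-6                    # I_D at V_GS = VDD,   V_DS = VDD/2
    I_L = 30e-6                    # I_D at V_GS = VDD/2, V_DS = VDD
    I_EFF = 0.5 * (I_H + I_L)      # effective drive current (assumed convention)

    tau_on = C_LOAD * VDD / I_ON       # CV/I delay using I_ON
    tau_eff = C_LOAD * VDD / I_EFF     # CV/I delay using I_EFF

    print(f"CV/I delay with I_ON : {tau_on * 1e12:.2f} ps")
    print(f"CV/I delay with I_EFF: {tau_eff * 1e12:.2f} ps")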

    Development of a Universal MOSFET Gate Impedance Model

    Scaling of CMOS technology to 100 nm and below and the endless pursuit of higher operating frequencies drive the need to accurately model effects that dominate at those feature sizes and frequencies. Current modeling techniques are frequency limited and require different models for different frequency ranges in order to achieve accuracy goals. In the foundry world, high-frequency models are typically empirical in nature and significantly lag their low-frequency counterparts in terms of availability. This tends to slow the adoption of new foundry technologies for high-performance applications such as extremely high data rate serializer/deserializer transceiver cores. However, design cycle time and time to market while transitioning between technology nodes can be reduced by incorporating a reusable, industry-standard model. This work proposes such a model for device gate impedance that is simulator-friendly, compact, frequency-independent, and relatively portable across technology nodes. This semi-empirical gate impedance model is based on depletion in the poly-silicon gate electrode. The effect of device length and single-leg width on the input impedance is studied with the aid of extensive measured data obtained from devices built in 110 nm and 180 nm technologies over the 1-20 GHz frequency range. The measured data illustrate that the device input impedance has a non-linear frequency dependency. This variation in input impedance is the result of gate poly-silicon depletion, which can be modeled by an external RC network connected at the gate of the device. Excellent agreement between the simulation results and the measured data validates the model in the device active region over the 1-20 GHz frequency range. The gate impedance model is further modified by incorporating parasitic effects, extending its range to 200 MHz-20 GHz. This model performs accurately for 180 nm, 110 nm and 90 nm technologies at different bias conditions and device dimensions. The model and model parameter behavior are consistent across technology nodes, thereby enabling re-usability and portability. The accuracy of this new gate impedance model is demonstrated in various applications: to validate the model extraction techniques for different device configurations, to assess the impact of input data run-length variations on CML buffer performance, and to estimate the jitter in ring oscillators.
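
    As a rough illustration of the kind of external RC network described above, the sketch below computes the input impedance of a series gate resistance followed by a parallel R-C branch (standing in for poly-silicon depletion) and the intrinsic gate capacitance, swept over 200 MHz-20 GHz. The topology and the element values R_G, R_P, C_P and C_GG are assumptions for illustration only, not the extracted model from this work.

    # Illustrative gate input-impedance sketch; topology and values are assumed placeholders.
    import numpy as np

    R_G = 5.0        # distributed gate electrode resistance [ohm] (assumed)
    R_P = 200.0      # poly-depletion branch resistance [ohm] (assumed)
    C_P = 100e-15    # poly-depletion branch capacitance [F] (assumed)
    C_GG = 200e-15   # intrinsic gate capacitance [F] (assumed)

    freq = np.logspace(np.log10(0.2e9), np.log10(20e9), 200)   # 200 MHz to 20 GHz
    w = 2 * np.pi * freq

    z_branch = R_P / (1 + 1j * w * R_P * C_P)        # R_P in parallel with C_P
    z_in = R_G + z_branch + 1.0 / (1j * w * C_GG)    # total gate input impedance

    # The effective series resistance Re{Z_in} falls with frequency as C_P gradually
    # shorts out R_P, giving a non-linear frequency dependency of the input impedance.
    for f, z in zip(freq[::50], z_in[::50]):
        print(f"{f / 1e9:5.1f} GHz  Re(Z_in) = {z.real:6.2f} ohm")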

    Benchmarking the screen-grid field effect transistor (SGrFET) for digital applications

    Continuous scaling of CMOS technology has reached a stage where straightforward evolution of the conventional structure is no longer sufficient; therefore, novel device structures and new materials have been proposed. The Screen-Grid Field Effect Transistor (SGrFET) is introduced as a novel device structure that takes advantage of several innovative aspects of the FinFET while introducing new geometrical features to improve FET device performance. The idea is to design a FET which is as small as possible without down-scaling issues, while at the same time providing optimum device performance for both analogue and digital applications. The analogue operation of the SGrFET shows promising results, which motivates continuing the investigation of the SGrFET for digital applications. The SGrFET addresses some of the concerns of scaled CMOS, such as drain-induced barrier lowering and sub-threshold slope, by offering superior short-channel control. In this work, in order to evaluate SGrFET performance, the proposed device is compared to the classical MOSFET and comprehensively benchmarked against FinFETs. Both AC and DC simulations are presented using the Taurus and Medici simulators, which are commercially available from Synopsys. An initial investigation of the novel device with a single-gate structure is carried out. The multi-geometrical characteristic of the proposed device is used to reduce parasitic capacitance and increase the ION/IOFF ratio, improving the switching characteristics of the device in different circuit structures. Using Taurus AC simulation, a small-signal circuit is introduced for the SGrFET and evaluated using both small-signal elements extracted from Taurus and Y-parameter extraction. The SGrFET also allows for the unique behavioural characteristics of an independent-gate device. Different configurations of the double-gate device are introduced and benchmarked against the FinFET serving as a double-gate reference. Five different logic circuits (the complementary inverter and N-inverter, the NOR, NAND and XOR gates, and a controllable current mirror) are simulated with FinFET and SGrFET devices and their performance compared. Key digital figures of merit, such as power dissipation, noise margin and switching speed, are extracted for both the FinFET and the SGrFET to compare the performance of the devices under investigation. It is shown that using the multi-geometrical feature of the SGrFET together with its multi-gate operation can greatly decrease the number of devices needed for a logic function without speed degradation, and that the SGrFET is a potential candidate for mixed-circuit configurations as a multi-gate device. The initial fabrication steps of the novel device are explained, together with the in-house fabrication process using e-beam lithography. The fabricated SGrFET is characterised via electrical measurements and used in a circuit configuration.
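
    As an example of how one of the digital figures of merit mentioned above can be extracted, the sketch below estimates static noise margins from an inverter voltage transfer curve using its unity-gain points. The tanh-shaped VTC is a synthetic placeholder standing in for simulated SGrFET or FinFET data, and the supply voltage is assumed.

    # Static noise margin extraction from a synthetic inverter VTC (placeholder data).
    import numpy as np

    VDD = 1.0
    vin = np.linspace(0.0, VDD, 1001)
    vout = 0.5 * VDD * (1.0 - np.tanh(8.0 * (vin - 0.5 * VDD)))   # synthetic VTC

    gain = np.gradient(vout, vin)                        # dVout/dVin along the curve
    unity = np.where(np.diff(np.sign(gain + 1.0)))[0]    # samples where gain crosses -1

    v_il, v_ih = vin[unity[0]], vin[unity[-1]]     # unity-gain input voltages
    v_oh, v_ol = vout[unity[0]], vout[unity[-1]]   # corresponding output voltages

    nm_l = v_il - v_ol                             # low noise margin
    nm_h = v_oh - v_ih                             # high noise margin
    print(f"NM_L = {nm_l:.3f} V, NM_H = {nm_h:.3f} V")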

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and are therefore time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Gate leakage variability in nano-CMOS transistors

    Gate leakage variability in nano-scale CMOS devices is investigated through advanced modelling and simulation of planar, bulk-type MOSFETs. The motivation for the work stems from two of the most challenging issues facing the semiconductor industry, excessive leakage power and device variability, both brought about by the aggressive downscaling of device dimensions to the nanometer scale. The aim is to deliver a comprehensive tool for the assessment of gate leakage variability in realistic nano-scale CMOS transistors. We adopt a 3D drift-diffusion device simulation approach with density-gradient quantum corrections, as the most established framework for the study of device variability. The simulator is first extended to model the direct tunnelling of electrons through the gate dielectric by means of an improved WKB approximation. A study of a 25 nm square-gate n-type MOSFET demonstrates that the combined effect of discrete random dopants and oxide thickness variation leads to a standard deviation of up to 50% (10%) of the mean gate leakage current in the OFF (ON) state of the transistor. There is also a 5 to 6 times increase in the magnitude of the gate current compared to that simulated for a uniform device. A significant part of the research is dedicated to the analysis of the non-abrupt bandgap and permittivity transition at the Si/SiO2 interface. One-dimensional simulation of a MOS inversion layer with a 1 nm SiO2 insulator and a realistic band-gap transition reveals a strong impact on subband quantisation (over 50 mV reduction in the delta-valley splitting and over 20% redistribution of carriers from the delta-2 to the delta-4 valleys), as well as enhancement of capacitance (over 10%) and leakage (about 10 times), relative to simulations with an abrupt band-edge transition at the interface.
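
    For orientation, the sketch below gives a back-of-the-envelope WKB estimate of the direct-tunnelling transmission through a thin SiO2 barrier, the quantity underlying the gate leakage modelling described above. The barrier height, effective mass, oxide thickness and bias are textbook-style assumptions; the thesis itself uses an improved WKB treatment embedded in a 3-D drift-diffusion simulator rather than this one-dimensional toy.

    # Simple WKB transmission through a trapezoidal SiO2 barrier (assumed parameters).
    import numpy as np

    HBAR = 1.054571817e-34    # reduced Planck constant [J s]
    Q = 1.602176634e-19       # elementary charge [C]
    M0 = 9.1093837015e-31     # electron rest mass [kg]

    t_ox = 1.5e-9             # oxide thickness [m] (assumed)
    phi_b = 3.1 * Q           # Si/SiO2 barrier height for electrons [J] (assumed)
    m_ox = 0.5 * M0           # electron effective mass in SiO2 [kg] (assumed)
    v_ox = 1.0                # potential drop across the oxide [V] (assumed)
    energy = 0.0              # carrier energy relative to the Si conduction band [J]

    # Trapezoidal barrier phi(x) = phi_b - q*v_ox*x/t_ox; integrate the decay constant
    # kappa(x) = sqrt(2*m_ox*(phi(x) - E))/hbar over the classically forbidden region.
    x = np.linspace(0.0, t_ox, 2000)
    barrier = phi_b - Q * v_ox * x / t_ox - energy
    kappa = np.sqrt(2.0 * m_ox * np.clip(barrier, 0.0, None)) / HBAR

    transmission = np.exp(-2.0 * np.trapz(kappa, x))
    print(f"WKB transmission probability: {transmission:.3e}")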