
    Identifying worst case test vectors for FPGA exposed to total ionization dose using design for testability techniques

    Electronic devices often operate in harsh environments that contain a variety of radiation sources. Radiation may damage the proper operation of these devices in different ways. Radiation sources are found in terrestrial environments, in extra-terrestrial environments such as space, and in man-made settings such as nuclear reactors, biomedical devices, and the equipment of high-energy particle physics experiments. Depending on the operating environment of the device, the resulting radiation effect manifests in several forms, such as the total ionizing dose (TID) effect or single event effects (SEEs) such as single event upset (SEU), single event gate rupture (SEGR), and single event latch-up (SEL). The TID effect increases the delay and leakage current of CMOS circuits, which may disrupt the proper operation of the integrated circuit. To ensure proper operation of these devices under radiation, thorough testing must be performed, especially in critical applications such as space and military systems. Although the standard that describes the procedure for testing electronic devices under radiation emphasizes the use of worst case test vectors (WCTVs), they are never used in radiation testing because of the difficulty of generating these vectors for circuits under test. For decades, design for testability (DFT) has been the method of choice for test engineers testing digital circuits in industry, and it has matured into a technology that can be relied on. DFT is usually combined with automatic test pattern generation (ATPG) software to generate vectors for testing application specific integrated circuits (ASICs), especially sequential circuits, against faults such as stuck-at faults and path delay faults. Surprisingly, however, radiation testing has not yet made use of this reliable technology. In this thesis, a novel methodology is proposed that extends the use of DFT to generate WCTVs for delay failures in flash-based field programmable gate arrays (FPGAs) exposed to TID. The methodology is validated using a Microsemi ProASIC3 FPGA and a cobalt-60 facility.
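
    To make the scan-based flow concrete, the following is a minimal behavioral sketch (not code from the thesis; the class and toy circuit are invented for illustration) of how a pre-computed WCTV would be applied through a scan chain: shift the vector in serially, pulse the clock once in functional mode to exercise the path under test, then shift the response out.

    # Illustrative sketch (not from the thesis): behavioral model of a scan
    # chain used to apply an ATPG-generated worst case test vector (WCTV).
    # In scan mode the flip-flops form a shift register; in functional mode
    # one clock pulse captures the response of the combinational logic.

    class ScanChain:
        def __init__(self, length):
            self.ffs = [0] * length          # flip-flop states

        def shift_in(self, vector):
            """Scan mode: serially shift a test vector into the chain."""
            for bit in vector:
                self.ffs = [bit] + self.ffs[:-1]

        def capture(self, comb_logic):
            """Functional mode: one clock pulse captures the logic response."""
            self.ffs = comb_logic(self.ffs)

        def shift_out(self):
            """Scan mode: serially unload the captured response."""
            out, self.ffs = list(self.ffs), [0] * len(self.ffs)
            return out

    # Toy circuit under test: each next-state bit is the XOR of two neighbours.
    def toy_logic(state):
        n = len(state)
        return [state[i] ^ state[(i + 1) % n] for i in range(n)]

    chain = ScanChain(4)
    chain.shift_in([1, 0, 1, 1])             # load the WCTV
    chain.capture(toy_logic)                 # exercise the path under test
    print(chain.shift_out())                 # compare against the golden response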

    Statistical circuit simulations - from ‘atomistic’ compact models to statistical standard cell characterisation

    This thesis describes the development and application of statistical circuit simulation methodologies for analysing digital circuits subject to intrinsic parameter fluctuations. The specific nature of intrinsic parameter fluctuations is discussed, and we explain the crucial importance to the semiconductor industry of developing design tools which accurately account for their effects. Current work in the area is reviewed, and three important factors are made clear: any statistical circuit simulation methodology must be based on physically correct, predictive models of device variability; the statistical compact models describing device operation must be characterised for accurate transient analysis of circuits; and analysis must be carried out on realistic circuit components. Improving on previous efforts in the field, we present a statistical circuit simulation methodology which accounts for all three of these factors. The established 3-D Glasgow atomistic simulator is employed to predict electrical characteristics for devices aimed at digital circuit applications, with gate lengths from 35 nm to 13 nm. Using these electrical characteristics, BSIM4 compact models are extracted, and their accuracy in transient analysis with SPICE is validated against well-characterised mixed-mode TCAD simulation results for 35 nm devices. Static DC simulations are performed to test the methodology, and a useful analytic model that predicts hard logic fault limitations on CMOS supply voltage scaling is derived as part of this work. Using our toolset, the effect of statistical variability introduced by random discrete dopants on the dynamic behaviour of inverters is studied in detail. As devices are scaled, the dynamic noise margin variation of an inverter increases, while a higher output load or input slew rate improves both the noise margins and their variation. Intrinsic delay variation based on the CV/I delay metric is also compared using the ION and IEFF definitions, where the best estimate is obtained when considering ION together with input transition time variations. The critical delay distribution of a path is also investigated and is shown to be non-Gaussian. Finally, the impact of the cell input slew rate definition on the accuracy of inverter cell timing characterisation in the NLDM format is investigated.
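
    As a worked illustration of the CV/I metric discussed above (all numbers are placeholder assumptions, not results from the thesis), a short Monte Carlo sketch shows how Gaussian variation in the on-current translates into a skewed delay distribution, consistent with the non-Gaussian path delays noted in the abstract.

    # Illustrative sketch (numbers are placeholders, not the thesis data):
    # Monte Carlo estimate of intrinsic delay variation using the CV/I metric,
    # tau = C_load * V_DD / I_on, with statistical variability in I_on such
    # as that introduced by random discrete dopants.

    import random
    import statistics

    VDD = 0.9            # supply voltage in volts (assumed)
    C_LOAD = 1e-15       # load capacitance in farads (assumed)
    ION_MEAN = 1e-4      # mean on-current in amperes (assumed)
    ION_SIGMA = 1e-5     # standard deviation of I_on (assumed)

    random.seed(0)
    delays = []
    for _ in range(10000):
        ion = random.gauss(ION_MEAN, ION_SIGMA)
        if ion > 0:                      # discard non-physical samples
            delays.append(C_LOAD * VDD / ion)

    mean_ps = statistics.mean(delays) * 1e12
    sigma_ps = statistics.stdev(delays) * 1e12
    print(f"CV/I delay: mean = {mean_ps:.2f} ps, sigma = {sigma_ps:.2f} ps")
    # Because delay ~ 1/I_on, the delay distribution is skewed even for a
    # Gaussian I_on, i.e. it is not itself Gaussian.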

    Design, Modeling and Analysis of Non-classical Field Effect Transistors

    Transistor scaling per Moore's Law slows its pace when entering the nanometer regime, where short channel effects (SCEs), including threshold voltage fluctuation, increased leakage current, and mobility degradation, become pronounced in the traditional planar silicon MOSFET. In addition, as the demand for diversified functionalities rises, conventional silicon technologies cannot satisfy the requirements of all non-digital applications because of restrictions that stem from fundamental material properties. Therefore, novel device materials and structures are desirable to fuel the further evolution of semiconductor technologies. In this dissertation, I have proposed innovative device structures and addressed design considerations for these non-classical field effect transistors for digital, analog/RF, and power applications, with projected benefits. Considering device process difficulties and dramatic fabrication costs, application-oriented device design and optimization are performed through device physics analysis and a TCAD modeling methodology to develop design guidelines that exploit the transistor's improved characteristics for application-specific circuit performance enhancement. Results support the proposed device design methodologies, which will allow the development of novel transistors capable of overcoming the limitations of planar nanoscale MOSFETs. In this work, both silicon and III-V compound devices are designed, optimized, and characterized for digital and non-digital applications through calibrated 2-D and 3-D TCAD simulation. For digital functionalities, silicon and InGaAs MOSFETs have been investigated. Optimized 3-D silicon-on-insulator (SOI) and body-on-insulator (BOI) FinFETs are simulated to demonstrate their impact on the performance of a volatile-memory SRAM module, with consideration of self-heating effects. Comprehensive simulation results suggest that the degradation of current drivability due to increased device temperature is modest for both devices and the corresponding digital circuits. However, the SOI FinFET is recommended for the design of low-voltage digital modules because of its faster AC response and better management of SCEs than the BOI structure. The FinFET concept is also applied to a non-volatile memory cell at the 22 nm technology node for low-voltage operation with suppressed SCEs. In addition to silicon technology, our TCAD estimations based on upper projections show that the InGaAs FinFET, with superior mobility and improved interface conditions, achieves a tremendous drive current boost and aggressively suppressed SCEs, and is thereby a strong contender for low-power high-performance applications over its silicon counterpart. For non-digital functionalities, multi-fin FETs and GaN HEMTs have been studied. Mixed-mode simulations, along with the developed optimization guidelines, establish the realistic application potential of the underlap design of silicon multi-fin FETs for analog/RF operation. The device with the underlap design shows compromised current drivability but improved analog intrinsic gain and high-frequency performance. To investigate the potential of the novel N-polar GaN material, I have provided, for the first time, calibrated TCAD modeling of an E-mode N-polar GaN single-channel HEMT. In this work, I have also proposed a novel E-mode dual-channel hybrid MIS-HEMT showing greatly enhanced current carrying capability. The impact of GaN layer scaling has been investigated through extensive TCAD simulations, and techniques for device optimization have been demonstrated.
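
    As a hedged illustration of how the SCE metrics named above are commonly extracted from simulated transfer characteristics (the data below is synthetic, not TCAD output from the dissertation), the following sketch computes subthreshold swing and DIBL from a pair of Id-Vg curves:

    # Illustrative sketch (synthetic data, not TCAD output): two common
    # short-channel-effect metrics from Id-Vg curves, the subthreshold swing
    # SS = dVg/d(log10 Id) and DIBL = (Vt_lowVd - Vt_highVd)/(Vd_high - Vd_low).

    import math

    def subthreshold_swing(vg, id_curve):
        """SS in mV/dec from two subthreshold bias points."""
        dvg = vg[1] - vg[0]
        ddec = math.log10(id_curve[1] / id_curve[0])
        return 1e3 * dvg / ddec

    def vt_const_current(vg, id_curve, i_crit):
        """Threshold voltage via the constant-current method (linear interp.)."""
        for i in range(1, len(vg)):
            if id_curve[i] >= i_crit:
                frac = (i_crit - id_curve[i-1]) / (id_curve[i] - id_curve[i-1])
                return vg[i-1] + frac * (vg[i] - vg[i-1])
        raise ValueError("I_crit not reached")

    # Synthetic exponential subthreshold curves at low and high drain bias.
    vg = [0.05 * i for i in range(21)]                        # 0 .. 1.0 V
    id_lo = [1e-12 * 10 ** (v / 0.080) for v in vg]           # Vd = 0.05 V
    id_hi = [1e-12 * 10 ** ((v + 0.06) / 0.080) for v in vg]  # Vd = 1.0 V

    ss = subthreshold_swing(vg[2:4], id_lo[2:4])
    vt_lo = vt_const_current(vg, id_lo, 1e-7)
    vt_hi = vt_const_current(vg, id_hi, 1e-7)
    dibl = (vt_lo - vt_hi) / (1.0 - 0.05) * 1e3               # mV/V
    print(f"SS = {ss:.1f} mV/dec, DIBL = {dibl:.1f} mV/V")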

    A comprehensive comparison between design for testability techniques for total dose testing of flash-based FPGAs

    Radiation sources exist in many of the environments where electronic devices operate, and radiation usually affects correct device operation negatively. The resulting radiation effect manifests in several forms depending on the operating environment of the device, such as the total ionizing dose (TID) effect or single event effects (SEEs) such as single event upset (SEU), single event gate rupture (SEGR), and single event latch-up (SEL). CMOS circuits and floating-gate MOS circuits suffer from an increase in delay and leakage current due to the TID effect, which may disrupt the proper operation of the integrated circuit. Exhaustive testing is needed for devices operating in harsh conditions, such as space and military applications, to ensure correct operation in the worst circumstances. The use of worst case test vectors (WCTVs) for testing is strongly recommended by MIL-STD-883, method 1019, the standard describing the procedure for testing electronic devices under radiation. However, the difficulty of generating these test vectors hinders their use in radiation testing. Testing digital circuits in industry is nowadays usually done using design for testability (DFT) techniques, as they are very mature and can be relied on. DFT techniques include, but are not limited to, the ad-hoc technique, built-in self test (BIST), muxed-D scan, clocked scan, and enhanced scan. DFT is usually combined with automatic test pattern generation (ATPG) software to generate vectors for testing application specific integrated circuits (ASICs), especially sequential circuits, against faults such as stuck-at faults and path delay faults. Despite all these recommendations for DFT, radiation testing has not yet benefited from this reliable technology. Moreover, given the wide variation among DFT techniques, choosing the right one is the bottleneck in achieving the best results for TID testing. In this thesis, a comprehensive comparison between different DFT techniques for TID testing of flash-based FPGAs is made to help designers choose the most suitable DFT technique for their application. The comparison covers the muxed-D scan, clocked scan, and enhanced scan techniques and is carried out using the ISCAS-89 benchmark circuits. The points of comparison include FPGA resource utilization, difficulty of design bring-up, delay added by the DFT logic, and robust testable paths in each technique.
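
    For readers unfamiliar with the cell types being compared, the sketch below (illustrative only, not code from the thesis) models a muxed-D scan flip-flop behaviorally. The scan-enable multiplexer in front of D is precisely the "delay added by the DFT logic" counted in the comparison, and the enhanced scan variant adds a hold latch so that arbitrary vector pairs can be applied for path delay testing.

    # Illustrative sketch: behavioral models of two of the compared cell types.

    class MuxedDScanFF:
        def __init__(self):
            self.q = 0

        def clock(self, d, scan_in, scan_enable):
            # The scan-enable mux sits in front of D: functional data when 0,
            # scan data when 1. In silicon this mux costs one gate delay on
            # every functional path, the overhead compared in the thesis.
            self.q = scan_in if scan_enable else d
            return self.q

    class EnhancedScanFF(MuxedDScanFF):
        """Muxed-D cell plus a hold latch, enabling two-pattern delay tests."""
        def __init__(self):
            super().__init__()
            self.hold = 0

        def update_hold(self):
            # Freeze the first vector of a pair while the second is shifted in.
            self.hold = self.q

    ff = MuxedDScanFF()
    print(ff.clock(d=1, scan_in=0, scan_enable=0))   # functional mode -> 1
    print(ff.clock(d=1, scan_in=0, scan_enable=1))   # scan (shift) mode -> 0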

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and are thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of AI/ML applications at various abstraction levels in the future to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
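
    As one concrete example of the kind of ML application such a survey covers (entirely synthetic data; the feature names are hypothetical, not from the paper), a learned estimator can stand in for a slow deterministic analysis step, e.g. predicting post-route net delay from cheap pre-route features:

    # Illustrative sketch: regression model replacing a slow analysis step.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical pre-route features: fanout, net half-perimeter wirelength,
    # driver strength, congestion estimate.
    X = rng.uniform(0.0, 1.0, size=(n, 4))
    # Synthetic "ground truth" delay with noise standing in for sign-off data.
    y = (2.0 * X[:, 0] + 1.5 * X[:, 1] ** 2 + 0.5 * X[:, 2] * X[:, 3]
         + rng.normal(0.0, 0.05, size=n))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_tr, y_tr)
    print(f"R^2 on held-out nets: {model.score(X_te, y_te):.3f}")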

    Comparing the impact of power supply voltage on CMOS- and FinFET-based SRAMs in the presence of resistive defects

    CMOS technology scaling has reached its limit at the 22 nm technology node due to several factors, including process variations (PV), increased leakage current, random dopant fluctuation (RDF), and mainly the short-channel effect (SCE). In order to continue the miniaturization process via technology down-scaling while preserving system reliability and performance, fin field-effect transistors (FinFETs) arise as an alternative to CMOS transistors. In parallel, static random-access memories (SRAMs) occupy an increasingly large part of systems-on-chips' (SoCs) silicon area, making their reliability an important issue. SRAMs are designed to reach densities at the limit of the manufacturing process, making this component susceptible to manufacturing defects, including resistive ones. Such defects may cause dynamic faults during the circuit's lifetime, an important cause of test escapes. Thus, identifying the precise faulty behavior under different operating conditions is considered crucial to guarantee the development of more suitable test methodologies. In this context, a comparison between the behavior of a 22 nm CMOS-based and a 20 nm FinFET-based SRAM in the presence of resistive defects is carried out considering different power supply voltages. In more detail, the behavior of defective cells operating under different power supply voltages has been investigated by performing SPICE simulations. Results show that the power supply voltage plays an important role in the faulty behavior of both CMOS- and FinFET-based SRAM cells in the presence of resistive defects, and that this role proves to be more pronounced for the FinFET-based memories. Studying different operating temperatures, the results also show a considerably higher occurrence of dynamic faults in FinFET-based SRAMs when compared to the CMOS technology.
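
    A minimal sketch of the kind of simulation campaign described above (the SRAM6T subcircuit, model file name, and defect placement are assumptions, not the authors' setup): a short script emits one SPICE deck per supply voltage and defect resistance corner, each deck inserting a resistive defect in series with the cell's bitline contact, for batch simulation with an external simulator such as ngspice.

    # Sketch only: sram6t_models.sp (transistor models + SRAM6T subcircuit)
    # is assumed to exist; the defect placement is an assumption.
    DECK = """* 6T SRAM cell with resistive defect, VDD = {vdd} V
    .include sram6t_models.sp
    Vdd vdd 0 {vdd}
    Vwl wl 0 PULSE(0 {vdd} 1n 50p 50p 2n 5n)
    * Resistive defect in series with the cell's bitline contact.
    Rdef bl bl_cell {rdef}
    Xcell bl_cell blb wl vdd 0 SRAM6T
    .tran 1p 5n
    .end
    """

    def write_decks(vdd_values, rdef_ohms, prefix="sram_defect"):
        """Emit one .sp file per (VDD, Rdef) corner for batch simulation."""
        paths = []
        for vdd in vdd_values:
            for rdef in rdef_ohms:
                path = f"{prefix}_v{vdd}_r{int(rdef)}.sp"
                with open(path, "w") as f:
                    f.write(DECK.format(vdd=vdd, rdef=rdef))
                paths.append(path)
        return paths

    # Nominal and reduced supplies against a range of defect resistances; run
    # each resulting deck with an external simulator (e.g. "ngspice <file>").
    print(write_decks([0.8, 0.7, 0.6], [1e3, 1e5, 1e7]))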