
    Algorithms for Power Aware Testing of Nanometer Digital ICs

    At-speed testing of deep-submicron digital very large scale integrated (VLSI) circuits has become mandatory to catch small delay defects. Meanwhile, with the continuous shrinking of complementary metal oxide semiconductor (CMOS) transistor feature sizes, power density grows geometrically with technology scaling. In addition, the power dissipated inside a digital circuit during the testing phase (for test vectors under all fault models (Potluri, 2015)) is several times higher than the power dissipated during the normal functional phase of operation. Consequently, the currents that flow in the power grid during testing are much higher than what the power grid is designed for (the functional phase of operation). As a result, during at-speed testing the supply grid experiences unacceptable IR-drop, ultimately leading to delay failures. Since these failures are specific to testing and do not occur during the functional phase of operation of the chip, they are usually referred to as false failures, and they reduce the yield of the chip, which is undesirable. In the nanometer regime, process parameter variations have become a major problem. Because of the variation in signal delays caused by these parameter variations, it is important to perform at-speed testing even for stuck faults, to reduce test escapes (McCluskey and Tseng, 2000; Vorisek et al., 2004). In this context, the problem of excessive peak power dissipation causing false failures, addressed previously in the context of at-speed transition fault testing (Saxena et al., 2003; Devanathan et al., 2007a,b,c), also becomes prominent in the context of at-speed testing of stuck faults (Maxwell et al., 1996; McCluskey and Tseng, 2000; Vorisek et al., 2004; Prabhu and Abraham, 2012; Potluri, 2015; Potluri et al., 2015). It is well known that excessive supply IR-drop during at-speed testing can be kept under control by minimizing switching activity during testing (Saxena et al., 2003). There is a rich collection of techniques proposed in the past for reducing peak switching activity during at-speed testing of transition/delay faults in both combinational and sequential circuits. As far as at-speed testing of stuck faults is concerned, while some techniques were proposed in the past for combinational circuits (Girard et al., 1998; Dabholkar et al., 1998), no such techniques exist for sequential circuits. This thesis addresses this open problem. We propose algorithms for minimizing peak switching activity during at-speed testing of stuck faults in sequential digital circuits under the combinational state preservation scan (CSP-scan) architecture (Potluri, 2015; Potluri et al., 2015). First, we show that, under this CSP-scan architecture, when the test set is completely specified, the peak switching activity during testing can be minimized by solving the Bottleneck Traveling Salesman Problem (BTSP). This mapping of the peak test switching activity minimization problem to the BTSP is novel, and is proposed for the first time in the literature. Usually, as circuit size increases, the percentage of don’t cares in the test set increases. As a result, test vector ordering for an arbitrary filling of don’t care bits is insufficient to produce an effective reduction in switching activity during testing of large circuits. Since don’t cares dominate the test sets of larger circuits, don’t care filling plays a crucial role in reducing switching activity during testing.
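    To make the BTSP formulation concrete, the sketch below (illustrative helper names, not code from the thesis) treats each fully specified test vector as a node and the Hamming distance between two vectors as the edge weight, so that the peak input switching activity of an ordering is the heaviest edge on its Hamiltonian path; minimizing that maximum, rather than the sum, is what distinguishes the bottleneck objective from the ordinary TSP. Exhaustive search stands in for a real BTSP solver and is feasible only for toy test sets.

```python
from itertools import permutations

def hamming(u: str, v: str) -> int:
    """Number of bit positions in which two test vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def peak_activity(order) -> int:
    """Peak input switching activity of an ordering: the largest Hamming
    distance between consecutive test vectors."""
    return max(hamming(order[i], order[i + 1]) for i in range(len(order) - 1))

def btsp_order(vectors):
    """Bottleneck-optimal ordering by exhaustive search (toy sizes only);
    a real flow would use a dedicated BTSP algorithm instead."""
    return list(min(permutations(vectors), key=peak_activity))

tests = ["0110", "1110", "0000", "0111"]
best = btsp_order(tests)
print(best, "peak =", peak_activity(best))
```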
Taking this into consideration, we propose an algorithm, XStat, which performs test vector ordering while preserving the don’t care bits in the test vectors; the don’t cares are then filled intelligently to minimize input switching activity, which effectively minimizes switching activity inside the circuit (Girard et al., 1998). Through empirical validation on benchmark circuits, we show that XStat significantly minimizes peak switching activity during testing. Although XStat is a very powerful heuristic for minimizing peak input-switching-activity, it does not guarantee optimality. To address this issue, we propose an algorithm that uses dynamic programming to calculate a lower bound for a given sequence of test vectors, and subsequently uses a greedy strategy for filling don’t cares in this sequence to achieve this lower bound, thereby guaranteeing optimality. This algorithm, which we refer to as DP-fill in this thesis, provides the globally optimal solution for minimizing peak input-switching-activity and is the best known in the literature for minimizing peak input-switching-activity during testing. The proof of optimality of DP-fill is also provided in this thesis.
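    The abstract does not reproduce DP-fill's lower-bound computation, so the sketch below shows only the greedy fill idea, under the simple assumption that copying the previous vector's bit into each don't-care position never increases the Hamming distance between consecutive vectors; the names are illustrative.

```python
def adjacent_fill(vectors):
    """Greedily fill don't-care bits ('X') so consecutive test vectors
    differ as little as possible: each X inherits the previous vector's
    bit, and leftover X bits in the first vector default to '0'."""
    filled = [vectors[0].replace("X", "0")]
    for vec in vectors[1:]:
        prev = filled[-1]
        filled.append("".join(p if c == "X" else c for c, p in zip(vec, prev)))
    return filled

print(adjacent_fill(["1XX0", "X0X1", "XXXX"]))
# -> ['1000', '1001', '1001']: consecutive Hamming distances of 1 and 0
```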

    Advanced III-V semiconductor technology assessment

    Components required for extensions of currently planned space communications systems are discussed for large antennas, crosslink systems, single sideband systems, Aerostat systems, and digital signal processing. Systems using advanced modulation concepts and new concepts in communications satellites are included. The current status and trends in materials technology are examined, with emphasis on bulk growth of semi-insulating GaAs and InP, epitaxial growth, and ion implantation. Microwave solid state discrete active devices, multigigabit rate GaAs digital integrated circuits, microwave integrated circuits, and the exploratory development of GaInAs devices, heterojunction devices, and quasi-ballistic devices are considered. Competing technologies such as RF power generation, filter structures, and microwave circuit fabrication are discussed. The fundamental limits of semiconductor devices and problems in implementation are explored.

    Investigation into yield and reliability enhancement of TSV-based three-dimensional integration circuits

    Three-dimensional integrated circuits (3D ICs) have been acknowledged as a promising technology to overcome the interconnect delay bottleneck brought about by continuous CMOS scaling. Recent research shows that through-silicon vias (TSVs), which act as vertical links between layers, pose yield and reliability challenges for 3D design. This thesis presents three original contributions. The first contribution presents a grouping-based technique to improve the yield of 3D ICs under manufacturing TSV defects, where regular and redundant TSVs are partitioned into groups. In each group, signals can select good TSVs through rerouting multiplexers, avoiding defective TSVs. The grouping ratio (regular to redundant TSVs in one group) has an impact on yield and hardware overhead. Mathematical probabilistic models are presented for yield analysis under the influence of independent and clustering defect distributions (a minimal sketch of the independent case is given below). Simulation results using MATLAB show that, for a given number of TSVs and a given TSV failure rate, careful selection of the grouping ratio achieves 100% yield at minimal hardware cost (number of multiplexers and redundant TSVs) in comparison with a design that does not exploit TSV grouping ratios. The second contribution presents an efficient online fault tolerance technique based on redundant TSVs to detect TSV manufacturing defects and address thermally induced reliability issues. The proposed technique accounts for both fault detection and recovery in the presence of three TSV defects: voids, delamination between the TSV and its landing pad, and TSV short-to-substrate. Simulations using HSPICE and ModelSim are carried out to validate fault detection and recovery. Results show that regular and redundant TSVs can be divided into groups to minimise area overhead without affecting the fault tolerance capability of the technique. Synthesis results using a 130-nm design library show that 100% repair capability can be achieved with low area overhead (4% in the best case). The last contribution proposes a technique that jointly considers temperature mitigation and fault tolerance without introducing additional redundant TSVs. This is achieved by reusing the spare TSVs that are frequently deployed to improve yield and reliability in 3D ICs. The proposed technique consists of two steps: TSV determination, which achieves an optimal partition of regular and spare TSVs into groups, and TSV placement, which targets temperature mitigation while optimizing total wirelength and routing difference. Simulation results show that, using the proposed technique, 100% repair capability is achieved across all (five) benchmarks with an average temperature reduction of 75.2 °C (34.1%) (best case 99.8 °C (58.5%)), while increasing wirelength by only a small amount.
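    As a rough illustration of the independent-defect case (a minimal sketch under assumed names and a binomial failure model, not the thesis's exact formulation), a group of m regular and r redundant TSVs can be taken as repairable whenever at most r of its m + r TSVs are defective; sweeping m and r then exposes the grouping-ratio trade-off between yield and redundancy overhead.

```python
from math import comb

def group_yield(m: int, r: int, p: float) -> float:
    """Probability that a group of m regular + r redundant TSVs is
    repairable, i.e. at most r of its m + r TSVs fail, assuming
    independent per-TSV failure probability p."""
    n = m + r
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r + 1))

def chip_yield(total_regular: int, m: int, r: int, p: float) -> float:
    """Yield of a chip whose regular TSVs are split into groups of m,
    each protected by r redundant TSVs (assumes an exact partition)."""
    return group_yield(m, r, p) ** (total_regular // m)

# Example: 512 regular TSVs, an 8:1 grouping ratio, 0.5% TSV failure rate.
print(f"{chip_yield(512, 8, 1, 0.005):.4f}")
```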

    Simulation of charge-trapping in nano-scale MOSFETs in the presence of random-dopants-induced variability

    The growing variability of electrical characteristics is a major issue associated with the continuous downscaling of contemporary bulk MOSFETs. In addition, the operating conditions brought about by these same scaling trends have pushed MOSFET degradation mechanisms such as Bias Temperature Instability (BTI) to the forefront as a critical reliability threat. This thesis investigates the impact of this ageing phenomenon, in conjunction with device variability, on key MOSFET electrical parameters. A three-dimensional drift-diffusion approximation is adopted as the simulation approach in this work, with random dopant fluctuations—the dominant source of statistical variability—included in the simulations. The testbed device is a realistic 35 nm physical gate length n-channel conventional bulk MOSFET. 1000 microscopically different implementations of the transistor are simulated and subjected to charge-trapping at the oxide interface. The statistical simulations reveal relatively rare but very large threshold voltage shifts, with magnitudes over 3 times that predicted by the conventional theoretical approach. The physical origin of this effect is investigated in terms of the electrostatic influence of the random dopants and trapped charges on the channel electron concentration. Simulations with progressively increased trapped charge densities—emulating the characteristic condition of BTI degradation—result in further broadening of the threshold voltage distribution. Weak correlations, of the order of 10⁻², are found between the pre-degradation threshold voltage and post-degradation threshold voltage shift distributions. The importance of accounting for random dopant fluctuations in the simulations is emphasised in order to obtain qualitative agreement between simulation results and published experimental measurements. Finally, the information gained from these device-level physical simulations is integrated into statistical compact models, making it available to circuit designers.
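    The statistical analysis reported above can be sketched as follows, with synthetic numbers standing in for the ensemble of 1000 simulated devices (the distributions, parameters and seed are illustrative assumptions, not the thesis's data):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_devices = 1000

# Pre-degradation threshold voltages spread by random dopant fluctuations,
# and a heavy-tailed Vth shift emulating rare but large trapping responses.
vth0 = rng.normal(0.30, 0.03, n_devices)                    # V
dvth = rng.lognormal(mean=-4.5, sigma=0.8, size=n_devices)  # V

print(f"mean shift  = {dvth.mean() * 1e3:.2f} mV")
print(f"sigma shift = {dvth.std() * 1e3:.2f} mV")

# Weak correlation between pre-degradation Vth and its shift.
r = np.corrcoef(vth0, dvth)[0, 1]
print(f"correlation = {r:.3f}")
```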

    Statistical compact model strategies for nano CMOS transistors subject to atomic scale variability

    One of the major limiting factors in CMOS device, circuit and system simulation in the sub-100 nm regime is the statistical variability introduced by the discreteness of charge and the granularity of matter. This statistical variability cannot be eliminated by tuning the layout or by tightening fabrication process control. Since compact models are the key bridge between technology and design, it is necessary to transfer MOSFET statistical variability information reliably into compact models to facilitate variability-aware design practice. The aim of this project is the development of a statistical extraction methodology that captures statistical variability with an optimum set of parameters, particularly in the industry-standard compact model BSIM. This task is accomplished through a detailed sensitivity analysis of the transistor current with respect to key compact model parameters, in combination with an error analysis of the fitted Id-Vg characteristics. The key point of the developed direct statistical compact model strategy is that the impact of statistical variability can be captured in device characteristics by tuning a limited number of parameters while keeping the remaining major set equal to the default values obtained from the “uniform” MOSFET compact model extraction. At the same time, the statistical compact model extraction strategies accurately represent the distribution and correlation of the electrical MOSFET figures of merit. Statistical compact model parameters are generated using statistical parameter generation techniques such as uncorrelated parameter distributions, principal component analysis and the nonlinear power method (the principal component approach is sketched below). The accuracy of these methods is evaluated in comparison with the results obtained from ‘atomistic’ simulations. The impact of correlations in the compact model parameters has been analyzed along with the corresponding transistor figures of merit. The accuracy of circuit simulations with different statistical compact model libraries has been studied. Moreover, the impact of MOSFET width/length on the statistical trend of the optimum set of statistical compact model parameters and electrical figures of merit has been analyzed, with two methods to capture geometry dependencies in the proposed statistical models.
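    Of the generation techniques listed, principal component analysis is the easiest to sketch: decorrelate the extracted parameter samples, draw independent normal deviates along the principal components, and map them back so the generated parameter sets reproduce the extracted mean and covariance. The parameter values below are illustrative, not the project's BSIM extraction.

```python
import numpy as np

def pca_generate(samples: np.ndarray, n_new: int, rng) -> np.ndarray:
    """Generate statistical compact-model parameter sets preserving the
    mean and covariance of the extracted samples (rows = devices,
    columns = model parameters, e.g. threshold voltage and mobility)."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # principal components
    eigvals = np.clip(eigvals, 0.0, None)    # guard against tiny negatives
    z = rng.standard_normal((n_new, samples.shape[1]))
    return mean + (z * np.sqrt(eigvals)) @ eigvecs.T

rng = np.random.default_rng(0)
extracted = rng.multivariate_normal([0.3, 0.04], [[1e-4, 5e-6], [5e-6, 4e-6]], 500)
generated = pca_generate(extracted, 1000, rng)
print(np.cov(generated, rowvar=False))       # approximates the input covariance
```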

    High Quality Delay Testing Scheme for a Self-Timed Microprocessor

    RÉSUMÉ The popularity of the Internet and the ever-increasing amount of data passing through its terminals require large server infrastructures that consume enormous amounts of energy. Consequently, and since higher energy consumption translates into higher costs, demand for energy-efficient processors is rising sharply. One way to increase the energy efficiency of processors is to modulate the system's operating frequency according to the workload. Self-timed (asynchronous) processors are one solution that implements this principle of activity on demand. However, the unconventional design methods associated with them, particularly in terms of testability and automation, hold back the development of this type of system. This work develops a high-quality test method targeting delay faults in a specific self-timed processor architecture called the AnARM. The proposed method detects small-delay defects (SDDs) in the AnARM by taking advantage of its built-in configurable delay lines. Such defects are known to escape the commonly used delay fault models (transition delay faults). This work focuses mainly on SDDs that escape transition delay fault testing yet are long enough to cause errors under normal operating conditions. At the same time, the detection of very-small-delay defects is avoided, as far as possible, to limit the number of false positives. To achieve a high-quality test, this work first proposes a test metric dedicated to SDDs that is better suited to self-timed circuits, and then a delay fault testing method, based on modulating the speed of the built-in delay lines, that adapts to a preexisting test vector set. This work presents a test metric targeting SDDs, called the weighted slack percentage (WeSPer), along with a new SDD test model (called the ideal SDD test).

    ABSTRACT The popularity of the Internet and the huge amount of data transferred between devices nowadays require very powerful servers that demand a great deal of power. Since higher power consumption means greater expense for companies, there is increasing demand for power-efficient processors. One way to increase the power efficiency of processors is to adapt processing speed and chip activity to the required computational load. Self-timed, or asynchronous, processors are one of the solutions that apply this principle of activity on demand. However, their unconventional design methodology introduces several challenges in terms of testability and design automation. This work focuses on developing a high-quality delay test for a specific architecture of self-timed processors called the AnARM. The proposed delay test focuses on catching effective small-delay defects (SDDs) in the AnARM by taking advantage of built-in configurable delay lines. Those defects are known to escape one of the most commonly used delay fault models (the transition delay fault model). This work mainly focuses on effective SDDs which can escape transition delay fault testing yet are large enough to fail the circuit under normal operating conditions.
At the same time, catching very small delay defects is avoided, where possible, to avoid falsely failing functional chips. To build the high-quality delay test, this work develops an SDD test quality metric that is better suited to circuits with adaptable speeds. It then builds a delay test optimizer that adapts the built-in delay line speeds to a preexisting at-speed pattern set to create a high-quality SDD test. This work presents a novel SDD test quality metric called the weighted slack percentage (WeSPer), along with a new SDD testing model (named the ideal SDD test model). WeSPer is built to be a flexible metric capable of adapting to the availability of information about the circuit under test and the test environment. Since the AnARM can use multiple test speeds, the WeSPer computation takes special care to assess the effect of test frequency changes on test quality. In particular, care is taken to avoid overtesting the circuit: overtesting would fail circuits under test due to defects that are too small to affect their functionality in their present state. A computation framework is built to compute WeSPer and compare it with other existing metrics in the literature over large sets of process-voltage-temperature computation points. Simulations are carried out on a selected set of known benchmark circuits synthesized in the 28 nm FD-SOI technology from STMicroelectronics.
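    The abstract does not define WeSPer's formula, so the sketch below is only a guess at the general shape of a slack-percentage metric: each sensitized path is weighted by the fraction of the test period its delay consumes, so tighter test clocks score higher, while paths slower than the clock (which would overtest the circuit) are excluded. Every name and the weighting scheme here are assumptions for illustration.

```python
def weighted_slack_percentage(path_delays_ns, test_period_ns, weights=None):
    """Illustrative slack-percentage style metric: approaches 100 when
    every tested path uses the whole test period (zero slack), and drops
    as slacks grow. NOT the thesis's actual WeSPer definition."""
    if weights is None:
        weights = [1.0] * len(path_delays_ns)
    covered = sum(w * (d / test_period_ns)      # fraction of period used
                  for d, w in zip(path_delays_ns, weights)
                  if d <= test_period_ns)       # skip overtesting paths
    return 100.0 * covered / sum(weights)

paths_ns = [1.8, 2.2, 2.5]  # sensitized path delays
print(f"{weighted_slack_percentage(paths_ns, test_period_ns=2.6):.1f}")
```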

    Nano-optical studies of superconducting nanowire single-photon detectors

    Superconducting single-photon detectors based on superconducting nanowires offer broadband single-photon sensitivity, from visible to mid-infrared wavelengths. They have attracted particular attention due to their promising performance at telecommunications wavelengths. The additional benefits of superconducting nanowire single-photon detectors (SNSPDs) include low dark count rates (Hz) and low timing jitter (sub-100 ps). SNSPDs have been employed in practical photon-counting applications such as quantum key distribution (QKD), the operation of quantum waveguide circuits and quantum emitter characterisation. Major challenges in the development of SNSPDs are improving device uniformity and achieving efficient optical coupling. Nano-optical techniques such as confocal microscopy can be used to image localised areas of SNSPDs, providing a direct measurement of device uniformity. The work in this thesis describes both initial nano-optical testing at visible wavelengths in liquid helium and the construction of a fibre-based miniature confocal microscope configuration operating at telecommunications wavelengths for use in a closed-cycle refrigerator. In both cases localised areas of SNSPDs can be studied while maintaining efficient optical coupling. The miniature confocal microscope configuration has sub-nanometre position resolution over a 30 μm x 30 μm area by way of a piezoelectric X-Y scanner. A full width at half maximum (FWHM) optical resolution of 1305 nm at a wavelength of 1550 nm is achieved. SNSPDs based upon niobium nitride (NbN) nanowires fabricated on magnesium oxide (MgO) have been studied. The microscope system has allowed us to map the temporal response (timing jitter and output pulse timing delay) of constricted (non-uniform) SNSPDs. By fitting to a theoretical model, the variations in output pulse timing delay have been shown to be caused by variations in hotspot resistance across the device. This observation has provided insights into the underlying physics of SNSPDs, especially the origins of timing jitter, and provides a pathway to exploiting this effect in next-generation device designs for applications such as imaging.

    Flash Memory Devices

    Flash memory devices have represented a breakthrough in storage since their inception in the mid-1980s, and innovation is still ongoing. The peculiarity of this technology is an inherent flexibility in terms of performance and integration density, according to the architecture devised for integration. NOR Flash technology is still the workhorse of many code storage applications in the embedded world, ranging from microcontrollers for the automotive environment to IoT smart devices, and its usage is forecast to be fundamental in emerging AI edge scenarios. By contrast, when massive data storage is required, NAND Flash memories are necessary. NAND Flash can be found in USB sticks and memory cards, but most of all in Solid-State Drives (SSDs). Since SSDs are extremely demanding in terms of storage capacity, they have fueled a new wave of innovation, namely the 3D architecture. Today “3D” means that multiple layers of memory cells are manufactured within the same piece of silicon, easily reaching a terabit of capacity. So far, Flash architectures have always been based on the "floating gate," where the information is stored by injecting electrons into a piece of polysilicon surrounded by oxide. Emerging concepts are instead based on "charge trap" cells. In summary, Flash memory devices represent the largest landscape of storage devices, and we expect more advancements in the coming years. This will require significant innovation in process technology, materials, circuit design, Flash management algorithms, error correction codes and, finally, system co-design for new applications such as AI and security enforcement.