
    SiC power MOSFETs performance, robustness and technology maturity

    Relatively recently, SiC power MOSFETs have transitioned from a research exercise to an industrial reality. The potential benefits this technology offers in the electrical energy conversion domain have been amply discussed and partly demonstrated. Before their widespread use in the field, the transistors need to be thoroughly investigated and validated for robustness and long-term stability and reliability. This paper reviews the state-of-the-art characteristics of commercial SiC power MOSFETs and discusses trends and needs for further technology improvements, as well as device design and engineering advancements, to meet the increasing demands of power electronics.

    Improvement of hardware reliability with aging monitors


    Characterisation and mitigation of long-term degradation effects in programmable logic

    Reliability has always been an issue in silicon device engineering, but until now it has been managed by the carefully tuned fabrication process. In the future, the underlying physical limitations of silicon-based electronics, plus the practical challenges of manufacturing with such complexity at such a small scale, will lead to a crunch point where transistor-level reliability must be forfeited to continue achieving better productivity. Field-programmable gate arrays (FPGAs) are built on state-of-the-art silicon processes, but it has been recognised for some time that their distinctive characteristics put them in a favourable position over application-specific integrated circuits in the face of the reliability challenge. The literature shows how a regular structure, interchangeable resources and an ability to reconfigure can all be exploited to detect, locate, and overcome degradation and keep an FPGA application running. To fully exploit these characteristics, a better understanding is needed of the behavioural changes seen in the resources that make up an FPGA under ageing. Modelling is an attractive approach to this, and in this thesis the causes and effects of three important degradation mechanisms are explored. All are shown to have an adverse effect on FPGA operation, but their characteristics present novel opportunities for ageing mitigation. Any modelling exercise is built on assumptions, so an empirical method is developed for investigating ageing on hardware with an accelerated-life test. Here, experiments show that timing degradation due to negative-bias temperature instability is the dominant process in the technology considered. Building on simulated and experimental results, this work also demonstrates a variety of methods for increasing the lifetime of FPGA lookup tables. The pre-emptive measure of wear-levelling is investigated in particular detail, and it is shown by experiment how different reconfiguration algorithms can result in a significant reduction in the rate of degradation.
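
    The wear-levelling idea above lends itself to a short sketch: rotate the active design among interchangeable FPGA resources so that NBTI stress, and hence timing degradation, is spread evenly rather than concentrated. The resource pool, the linear stress model and the least-stressed-first policy below are illustrative assumptions, not the reconfiguration algorithms evaluated in the thesis.

```python
# Hedged sketch of wear-levelling across interchangeable FPGA LUTs.
from dataclasses import dataclass

@dataclass
class Lut:
    ident: int
    stress_hours: float = 0.0   # accumulated time under active stress

def wear_level(luts, design_size, epochs, hours_per_epoch):
    """Each epoch, map the design onto the design_size least-stressed
    LUTs so no single LUT accumulates disproportionate NBTI stress."""
    for _ in range(epochs):
        luts.sort(key=lambda l: l.stress_hours)
        for lut in luts[:design_size]:
            lut.stress_hours += hours_per_epoch
    return max(l.stress_hours for l in luts)

pool = [Lut(i) for i in range(8)]                 # 8 LUTs, 6 needed
worst = wear_level(pool, design_size=6, epochs=100, hours_per_epoch=10.0)
# Unrotated, 6 LUTs would each accrue 1000 h; rotation pushes the
# worst case towards the 750 h average (6000 LUT-hours over 8 LUTs).
print(f"worst-case stress: {worst:.0f} h (vs 1000 h unrotated)")
```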

    Product assurance technology for procuring reliable, radiation-hard, custom LSI/VLSI electronics

    Advanced measurement methods using microelectronic test chips are described. These chips are intended to be used in acquiring the data needed to qualify Application-Specific Integrated Circuits (ASICs) for space use. Efforts were focused on developing the technology for obtaining custom ICs from CMOS/bulk silicon foundries. A series of test chips were developed: a parametric test strip, a fault chip, a set of reliability chips, and the CRRES (Combined Release and Radiation Effects Satellite) chip, a test circuit for monitoring space radiation effects. The technical accomplishments of the effort include: (1) development of a fault chip that contains a set of test structures used to evaluate the density of various process-induced defects; (2) development of new test structures and testing techniques for measuring gate-oxide capacitance, gate-overlap capacitance, and propagation delay; (3) development of a set of reliability chips used to evaluate failure mechanisms in CMOS/bulk technology: interconnect and contact electromigration, and time-dependent dielectric breakdown; (4) development of MOSFET parameter extraction procedures for evaluating subthreshold characteristics; (5) evaluation of test chips and test strips on the second CRRES wafer run; (6) two dedicated fabrication runs for the CRRES chip flight parts; and (7) publication of two papers: one on the split-cross-bridge resistor and another on asymmetrical SRAM (static random access memory) cells for single-event upset analysis.
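
    Accomplishment (4), subthreshold parameter extraction, can be illustrated with a minimal sketch: the subthreshold swing S = dVG/d(log10 ID), fitted over the exponential region of a MOSFET transfer curve. The synthetic 80 mV/decade sweep below is an assumption for demonstration; the actual extraction procedures in the report are more elaborate.

```python
# Sketch: extract subthreshold swing from an ID-VG sweep.
import numpy as np

vg = np.linspace(0.0, 0.5, 51)               # gate voltage sweep (V)
i_d = 1e-12 * 10 ** (vg / 0.080)             # ideal 80 mV/decade device

slope, _ = np.polyfit(vg, np.log10(i_d), 1)  # decades per volt
print(f"subthreshold swing ~ {1000.0 / slope:.1f} mV/decade")
```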

    Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits

    The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features that are routinely employed in current high-density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, in addition to the prospect of single-chip systems, will mandate the use of fault-tolerance for improved reliability. A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault-tolerance, but is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered, and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. Consideration of the problem of testing the test circuitry led to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and is tested as it performs its function. Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that faulty units are discarded. This raises the question: what is the optimum size of a unit? A mathematical model linking yield and reliability is therefore developed to answer this question, and also to study the effects of parameters such as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by applying the model to a typical example. Another important result concerns the effect of periodic testing on reliability. It is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
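
    A minimal sketch of the kind of yield model with redundancy the thesis develops: a block needs n working units and carries s spares, each unit surviving fabrication with probability y. The binomial form and the separate reconfiguration-logic yield factor are illustrative assumptions, not the thesis's exact model.

```python
# Sketch: yield of a block with n required units plus s spares.
from math import comb

def block_yield(n, s, y, y_reconfig=1.0):
    """P(at least n of n+s units good) times the yield of the
    testing/reconfiguration logic itself."""
    p = sum(comb(n + s, k) * y**k * (1 - y) ** (n + s - k)
            for k in range(n, n + s + 1))
    return p * y_reconfig

# 16 required units at 95% unit yield: a couple of spares helps a lot.
print(f"no spares : {block_yield(16, 0, 0.95):.3f}")   # ~0.44
print(f"two spares: {block_yield(16, 2, 0.95):.3f}")   # ~0.94
```

    The y_reconfig factor captures the abstract's point that the testing and reconfiguration logic itself consumes area and yield, which is why its size must be kept small for redundancy to pay off.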

    Gated Hall and Field-Effect Transport Characterization of E-mode ZnO TFTs

    Amorphous and nano-crystalline metal oxide semiconductors are an important class of materials under continuing investigation for emerging technologies. Accurate measurement of electron mobility in these materials is critical for furthering overall device development. This is complicated by the fact that device measurements such as current response, transistor input and output characteristics, and mobility are affected by transport-limiting factors, such as charge trapping at the dielectric / active layer interface and restriction of electronic transport across grain boundaries. In this work, we focus on the binary metal oxide thin film transistor (ZnO TFT), a normally-off (e-mode) transistor with a positive threshold voltage (Vth) and a large (> 10 MΩ) sheet resistance at gate voltages below threshold (VG < Vth). Field-effect and Hall mobilities (extrinsic and intrinsic, respectively) were measured concurrently on the same device with our gated Hall system, at device-relevant dimensions (25 nm Al2O3 gate dielectric, 50 nm ZnO active layer) and typical transistor gate and drain bias operating conditions (VG < 10 V and VD biased in the linear region), using a 100 × 100 µm Van der Pauw structure. The large sheet resistance (RS) of the material requires electrostatic doping (by gate bias) to modulate resistance and increase the Hall test current. However, as VG interacts with VD and VS, the resulting vertical electric fields EGS and EGD must remain below dielectric breakdown (EBD). Meeting these test requirements led to a fully automated gated Hall test system capable of making measurements and comparisons of mobility across the allowable test bias spectrum (VG and VD). A design of experiments in which test wafers with in situ deposition were compared against wafers exposed to clean-room ambient air between the dielectric and active layer depositions (both by atomic layer deposition) was used to examine interface effects. Post-deposition oven annealing was used to compare differences in grain-boundary effects by increasing grain size. A simple model of two transport regimes (localized and non-localized transport) was developed to fit several contradictory trends observed in the measured data sets.
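
    The linear-region field-effect (extrinsic) mobility extraction implied above can be sketched as mu_FE = gm * L / (W * Ci * VD), with gm = dID/dVG. The device numbers below echo the abstract (25 nm Al2O3, 100 × 100 µm geometry), but the synthetic ID-VG sweep and the assumed 10 cm²/Vs mobility are for illustration only.

```python
# Sketch: field-effect mobility from a linear-region ID-VG sweep.
import numpy as np

EPS0 = 8.854e-12                    # vacuum permittivity (F/m)
L = W = 100e-6                      # 100 um x 100 um geometry
t_ox, k_ox = 25e-9, 9.0             # 25 nm Al2O3 gate dielectric
Ci = k_ox * EPS0 / t_ox             # gate capacitance per area (F/m^2)
VD = 0.1                            # drain bias in the linear region (V)

vg = np.linspace(2.0, 10.0, 81)     # above-threshold gate sweep (V)
mu_true = 10e-4                     # assumed 10 cm^2/Vs, in m^2/Vs
i_d = mu_true * Ci * (W / L) * (vg - 1.0) * VD   # ideal linear-region ID

gm = np.gradient(i_d, vg)           # transconductance dID/dVG (S)
mu_fe = gm * L / (W * Ci * VD)      # extracted mobility (m^2/Vs)
print(f"mu_FE ~ {mu_fe.mean() * 1e4:.1f} cm^2/Vs")
```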

    Degradation in FPGAs: Monitoring, Modeling and Mitigation

    This dissertation targets transistor aging degradation and the associated thermal challenges in FPGAs (aging depends exponentially on chip temperature). The main objectives are to perform experimentation, analysis and device-level model abstraction to model degradation in FPGAs; to monitor the FPGA to keep track of aging rates; and ultimately to propose an aging-aware FPGA design flow that mitigates aging.
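
    The exponential aging-temperature relation the dissertation builds on is commonly captured with an Arrhenius acceleration factor; this sketch (the 0.5 eV activation energy is an assumed, technology-dependent value) shows how a modest temperature rise multiplies the degradation rate.

```python
# Sketch: Arrhenius acceleration of aging with temperature.
from math import exp

K_B = 8.617e-5   # Boltzmann constant (eV/K)

def acceleration(t_use_c, t_stress_c, ea=0.5):
    """Aging-rate ratio at t_stress_c vs t_use_c (Celsius), for an
    assumed activation energy ea in eV."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return exp(ea / K_B * (1.0 / t_use - 1.0 / t_stress))

print(f"85 C vs 55 C: aging ~{acceleration(55, 85):.1f}x faster")
```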

    Resilience of an embedded architecture using hardware redundancy

    In the last decade, the dominance of general-purpose computing systems in the market has been overtaken by embedded systems, with billions of units manufactured every year. Embedded systems appear in contexts where continuous operation is of utmost importance and where failure can have profound consequences. Nowadays, radiation poses a serious threat to the reliable operation of safety-critical systems. Fault avoidance techniques, such as radiation hardening, have been commonly used in space applications. However, these components are expensive, lag behind commercial components in performance, and do not provide 100% fault elimination. Without fault-tolerance mechanisms, many of these faults can become errors at the application or system level, which in turn can result in catastrophic failures. In this work we study the concepts of fault tolerance and dependability and extend them, providing our own definition of resilience. We analyse the physics of radiation-induced faults, the damage mechanisms of particles and the process that leads to computing failures. We provide extensive taxonomies of (1) existing fault-tolerance techniques and (2) the effects of radiation on state-of-the-art electronics, analysing and comparing their characteristics. We propose a detailed model of faults and provide a classification of the different types of faults at various levels. We introduce a fault-tolerance algorithm and define the system states and actions necessary to implement it. We introduce novel hardware and system software techniques that provide a more efficient combination of reliability, performance and power consumption than existing techniques. We propose a new element of the system, called the syndrome, that is the core of a resilient architecture whose software and hardware can adapt to reliable and unreliable environments. We implement a software simulator and disassembler and introduce a testing framework in combination with ERA's assembler and commercial hardware simulators.
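
    As a concrete example of the hardware-redundancy family this thesis studies, here is a sketch of the classic triple modular redundancy (TMR) majority voter, which masks a single radiation-induced upset. This is the generic textbook scheme, not the thesis's syndrome-based architecture.

```python
# Sketch: bitwise 2-of-3 majority voter (TMR).
def tmr_vote(a: int, b: int, c: int) -> int:
    """Correct output as long as at most one replica has a flipped
    bit in any given position."""
    return (a & b) | (a & c) | (b & c)

golden = 0b1011_0110
upset = golden ^ 0b0000_0100        # single-event upset flips one bit
assert tmr_vote(golden, golden, upset) == golden
print(f"voted output: {tmr_vote(golden, golden, upset):#010b}")
```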

    Fault and Defect Tolerant Computer Architectures: Reliable Computing With Unreliable Devices

    This research addresses the design of a reliable computer from unreliable device technologies. A system architecture is developed for a fault- and defect-tolerant (FDT) computer. Trade-offs between different techniques are studied, and yield and hardware cost models are developed. Fault- and defect-tolerant designs are created for the processor and the cache memory. Simulation results for the content-addressable memory (CAM)-based cache show 90% yield with device failure probabilities of 3 × 10⁻⁶, three orders of magnitude better than non-fault-tolerant caches of the same size. The entire processor achieves 70% yield with device failure probabilities exceeding 10⁻⁶. The required hardware redundancy is approximately 15 times that of a non-fault-tolerant design. While larger than current FT designs, this architecture allows the use of devices much more likely to fail than silicon CMOS. As part of model development, an improved model is derived for NAND multiplexing. The model is the first accurate model for small and medium amounts of redundancy. Previous models are extended to account for dependence between the inputs and produce more accurate results.
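
    NAND multiplexing, whose analytic model this work improves, is easy to see in a Monte-Carlo sketch: each logical signal is a bundle of N wires, a stage randomly pairs wires from its two input bundles through NAND gates, and each gate inverts its output with fault probability eps. The parameters below are illustrative, not taken from the dissertation.

```python
# Sketch: one stage of von Neumann NAND multiplexing.
import random

def nand_stage(x_bundle, y_bundle, eps, rng):
    """Randomly pair the two input bundles through faulty NAND gates."""
    rng.shuffle(y_bundle)                 # random pairing permutation
    out = []
    for x, y in zip(x_bundle, y_bundle):
        z = 1 - (x & y)                   # ideal NAND
        if rng.random() < eps:
            z ^= 1                        # gate fault inverts the output
        out.append(z)
    return out

rng = random.Random(42)
N, eps = 100, 0.01
z = nand_stage([1] * N, [1] * N, eps, rng)   # NAND(1,1) should be 0
print(f"fraction of wires at 0: {sum(1 for b in z if b == 0) / N:.2f}")
```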

    Fault simulation for structural testing of analogue integrated circuits

    In this thesis the ANTICS analogue fault simulation software is described, which provides a statistical approach to fault simulation for accurate analogue IC test evaluation. The traditional figure of fault coverage is replaced by the average probability of fault detection, later refined by considering the probability of fault occurrence to generate a more realistic, weighted test metric. Two techniques to reduce fault simulation time are described, both of which show large reductions in simulation time with little loss of accuracy. The final section of the thesis presents an accurate comparison of three test techniques and an evaluation of dynamic supply current monitoring. An increase in fault detection for dynamic supply current monitoring is obtained by removing the DC component of the supply current prior to measurement.
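
    The weighted test metric described above can be sketched in a few lines: instead of plain fault coverage, each fault's probability of detection is weighted by its estimated probability of occurrence. The numbers below are invented for illustration; ANTICS derives such figures statistically from fault simulation.

```python
# Sketch: occurrence-weighted fault-detection metric.
faults = [
    # (P(detected by the test), P(fault occurs))  -- illustrative values
    (0.95, 0.40),
    (0.20, 0.05),
    (0.70, 0.55),
]

avg_detect = sum(pd for pd, _ in faults) / len(faults)
weighted = sum(pd * po for pd, po in faults) / sum(po for _, po in faults)
print(f"average P(detect)         : {avg_detect:.2f}")
print(f"occurrence-weighted metric: {weighted:.2f}")
```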