Fault simulation for structural testing of analogue integrated circuits
In this thesis the ANTICS analogue fault simulation software is described, which provides a statistical approach to fault simulation for accurate analogue IC test evaluation. The traditional figure of fault coverage is replaced by the average probability of fault detection. This is later refined by considering the probability of fault occurrence, to generate a more realistic, weighted test metric. Two techniques to reduce fault simulation time are described, both of which achieve large reductions in simulation time with little loss of accuracy. The final section of the thesis presents an accurate comparison of three test techniques and an evaluation of dynamic supply current monitoring. An increase in fault detection for dynamic supply current monitoring is obtained by removing the DC component of the supply current prior to measurement.
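The occurrence-weighted metric described above can be sketched in a few lines. This is a minimal illustration of the idea, not the ANTICS implementation; the function name and the probability values are assumptions:

```python
def weighted_detection_metric(faults):
    """Average probability of fault detection, weighted by each fault's
    probability of occurrence (replacing plain fault coverage)."""
    total_weight = sum(p_occur for p_occur, _ in faults)
    return sum(p_occur * p_detect for p_occur, p_detect in faults) / total_weight

# Hypothetical (occurrence, detection) probability pairs for three faults.
faults = [(0.5, 0.9), (0.3, 0.6), (0.2, 0.2)]
print(round(weighted_detection_metric(faults), 2))  # 0.67
```

Likely faults dominate the metric, so a test that misses only rare faults scores higher than plain coverage would suggest.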
Determination Of Oxygen Chemical Diffusion Coefficients In Single Crystal SrTiO3 By Capacitance Manometry
The oxidation kinetics of a single crystal of SrTiO3 were measured with a tensivolumetric system over the temperature range 700–975 °C at 0.03 atm oxygen pressure. The oxidation was found to be oxygen-diffusion limited, with an activation energy of 14.9 ± 1.3 kcal/mol. Combining the kinetic data with relative defect concentration data yielded an activation energy for oxygen self-diffusion of 57 ± 16 kcal/mol. The enthalpy of formation of doubly ionized oxygen vacancies was calculated to be 126 ± 13 kcal/mol. © 1975, The Electrochemical Society, Inc. All rights reserved.
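Activation energies like those quoted above enter through the standard Arrhenius form D = D0·exp(−Ea/RT). A small sketch, using the 14.9 kcal/mol chemical-diffusion value from the abstract; the prefactor is an arbitrary assumption, so only the ratio between temperatures is meaningful:

```python
import math

R_KCAL = 1.987e-3  # molar gas constant, kcal/(mol*K)

def arrhenius(d0, ea_kcal_per_mol, temp_k):
    """Thermally activated coefficient D = D0 * exp(-Ea / (R*T))."""
    return d0 * math.exp(-ea_kcal_per_mol / (R_KCAL * temp_k))

# Ratio of diffusion coefficients at the ends of the measured range
# (700 and 975 degrees C), assuming Ea = 14.9 kcal/mol.
ratio = arrhenius(1.0, 14.9, 975 + 273.15) / arrhenius(1.0, 14.9, 700 + 273.15)
print(round(ratio, 2))  # roughly 5.5x faster diffusion at the top of the range
```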
Fault modelling and accelerated simulation of integrated circuits manufacturing defects under process variation
As the silicon manufacturing process scales to and beyond the 65-nm node, process variation can no longer be ignored. The impact of process variation on integrated circuit performance and power has received significant research attention. Variation-aware test, on the other hand, is a relatively new research area that is currently receiving attention worldwide. Research has shown that test without considering process variation may lead to loss of test quality. Fault modelling and simulation serve as a backbone of manufacturing test. This thesis is concerned with developing efficient fault modelling techniques and simulation methodologies that take into account the effect of process variation on manufacturing defects, with particular emphasis on resistive bridges and resistive opens. The first contribution of this thesis addresses the long computation time required to generate logic faults of resistive bridges under process variation, by developing a fast and accurate technique to model the logic fault behaviour of resistive bridges. The new technique employs two efficient voltage calculation algorithms to calculate the logic threshold voltage of driven gates and the critical resistance of a fault site, enabling the computation of bridge logic faults without using SPICE. Simulation results show that the technique is fast (on average 53 times faster) and accurate (worst-case error of 2.64%) when compared with HSPICE. The second contribution analyses the complexity of delay fault simulation of resistive bridges, in order to reduce the computation time of delay fault simulation when considering process variation. An accelerated delay fault simulation methodology for resistive bridges is developed, employing a three-step strategy to speed up the calculation of the transient gate output voltage needed to accurately compute delay faults. Simulation results show that the methodology is on average 17.4 times faster, with 5.2% error in accuracy, when compared with HSPICE.
The final contribution presents an accelerated simulation methodology for resistive opens, addressing the long simulation time of delay fault simulation when considering process variation. The methodology uses two efficient algorithms to accelerate the computation of the transient gate output voltage and the timing-critical resistance of an open fault site. Simulation results show that the methodology is up to 52 times faster than HSPICE, with 4.2% error in accuracy.
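The critical-resistance idea at the heart of the bridge fault model above can be sketched simply: a bridge produces a logic fault only while its resistance stays below the fault site's critical resistance. A minimal illustration, not the thesis's algorithm; names and resistance values are assumptions:

```python
def bridge_causes_logic_fault(r_bridge, r_critical):
    """A bridged node is read as a wrong logic value by the driven gate
    only while the bridge resistance is below the site's critical
    resistance; above it, the defect is logically benign."""
    return r_bridge < r_critical

def faulty_sites(bridge_sites, r_bridge=500.0):
    # bridge_sites: list of (site_name, critical_resistance_ohms) pairs.
    return [name for name, r_crit in bridge_sites
            if bridge_causes_logic_fault(r_bridge, r_crit)]

sites = [("b1", 800.0), ("b2", 300.0), ("b3", 1200.0)]
print(faulty_sites(sites))  # ['b1', 'b3']
```

Under process variation the critical resistance itself becomes a distribution rather than a single number, which is what makes naive SPICE-level simulation so expensive.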
Investigation into voltage and process variation-aware manufacturing test
Increasing integration and complexity in IC design pose challenges for manufacturing test. This thesis studies how process and supply voltage variation influence defect behaviour, to determine the impact on manufacturing test cost and quality. The focus is on logic testing of static CMOS designs with respect to two important defect types in deep submicron CMOS: resistive bridges and full opens. The first part of the thesis addresses testing for resistive bridge defects in designs with multiple supply voltage settings. To enable analysis, a fault simulator is developed using a supply voltage-aware model of bridge defect behaviour. The analysis shows that, because defect behaviour depends on the supply voltage, high defect coverage requires testing at more than one supply voltage setting. A low-cost and effective test method is presented, consisting of multi-voltage test generation that achieves high defect coverage while reducing test set size without compromising coverage. Experiments on synthesised benchmarks with realistic bridge locations validate the proposed method. The second part focuses on the behaviour of full open defects under supply voltage variation. The aim is to determine the appropriate supply voltage to use when testing. Two models of full open defect behaviour are considered, with and without the influence of gate tunnelling leakage. The supply voltage-dependent behaviour of full open defects is analysed to determine whether testing at more than one supply voltage is required to detect all full open defects. Experiments on synthesised benchmarks, using an extended version of the fault simulator mentioned above, measure the quantitative impact of supply voltage variation on defect coverage. The final part studies the impact of process variation on the behaviour of bridge defects.
Detailed analysis using synthesised ISCAS benchmarks and a realistic bridge model shows that process variation leads to additional faults. If process variation is not considered in test generation, the test will fail to detect some of these faults, leading to test escapes. A novel metric quantifying the impact of process variation on test quality is employed in the development of a new test generation tool, which achieves high bridge defect coverage. The method achieves a user-specified test quality with test sets that are smaller than those generated without consideration of process variation.
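The multi-voltage argument can be made concrete with sets: if some defects are detectable only at particular supply settings, total coverage is the union of the per-setting detection sets, and any single-voltage test escapes the rest. A toy sketch with hypothetical defect names and voltages, not data from the thesis:

```python
def multi_voltage_coverage(detected_by_voltage):
    """Defects covered by testing at every supply-voltage setting:
    the union of the per-setting detection sets."""
    return set().union(*detected_by_voltage.values())

# Hypothetical per-voltage detection sets: some defects are only
# detectable at particular supply settings.
detected = {1.2: {"d1", "d2"}, 1.0: {"d2", "d3"}, 0.8: {"d3", "d4"}}
union = multi_voltage_coverage(detected)
best_single = max(detected.values(), key=len)
# A single-voltage test escapes defects detectable only at other settings.
print(sorted(union), len(union) - len(best_single))
```

Multi-voltage test generation then tries to cover this union with as few patterns (and voltage switches) as possible, which is where the test-set-size reduction comes from.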
Low Dose Analytical Electron Microscopy of Hybrid Perovskite Photovoltaic Devices
The rapid ascent of perovskite photovoltaics over the past decade has enabled this technology to stand on the cusp of commercialisation. However, a successful entry into the market will only be feasible if the power conversion efficiencies of perovskite solar modules can at least approach those of laboratory-scale cells. Achieving this feat requires spatially homogeneous deposition of device layers over a large area and high-quality interconnections between adjacent cells in a module. Since perovskite photovoltaic devices are nanostructured, materials characterisation with nanometre spatial resolution can provide valuable insights for optimising the processes involved in scalable film deposition and interconnection fabrication. This thesis presents nanoscale electron microscopy investigations of perovskite photovoltaic devices made using scalable deposition methods and of the cell interconnections within them. A characterisation workflow consisting of cross-sectional specimen preparation, data acquisition, and multivariate statistical data analysis is developed and validated. Electron-transparent specimens are prepared using focused ion beam milling, which is shown to have minimal impact on the perovskite specimen. Nanoscale compositional mapping is performed using energy-dispersive X-ray spectroscopy in a scanning transmission electron microscope, where the applied electron dose is minimised to suppress beam-induced specimen damage while still ensuring statistical significance in the data. Principal component analysis, a multivariate statistical analysis algorithm, is optimised and applied to improve the signal-to-noise ratio of the obtained datasets by an order of magnitude. This sequence allows acquisition of spatially resolved morphological and compositional data with minimal damage to the perovskite specimen, supported by complementary computational methods and other characterisation techniques.
The optimised workflow is applied to study perovskite solar modules deposited by blade coating, where electron microscopy reveals how additives in the perovskite precursor solutions contribute towards a more homogeneous device stack and, ultimately, more efficient modules. Finally, the interconnections are studied, as they are critical to good electrical performance in solar modules. Compositional characterisation shows how the laser pulses used to scribe the interconnection lines can decompose the perovskite layer next to those lines, and how the decomposition is affected by the perovskite's homogeneity. Furthermore, elemental mapping reveals diffusion of sodium from the glass substrate into the active layers through the interconnection lines, even before the devices are operated. Sodium diffusion results in passivated defect sites and stronger perovskite luminescence, but also carries an inherent risk of excessive diffusion over the device's lifetime.
Jardine Foundation; Cambridge Trust
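The PCA denoising step described above, reconstructing each spectrum from its leading principal components while discarding noise-dominated ones, can be sketched as follows. This is a generic truncated-SVD illustration on synthetic data, not the thesis's optimised pipeline; the toy spectrum and noise level are assumptions:

```python
import numpy as np

def pca_denoise(spectra, n_components):
    """Reconstruct each row of `spectra` from its leading principal
    components, discarding low-variance (noise-dominated) components."""
    mean = spectra.mean(axis=0)
    centred = spectra - mean
    # Thin SVD of the centred data; rows of vt are principal components.
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    s_trunc = np.zeros_like(s)
    s_trunc[:n_components] = s[:n_components]
    return (u * s_trunc) @ vt + mean

# Toy example: 200 noisy copies of one Gaussian "peak" spectrum.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 64)
clean = np.exp(-x**2 / 0.05)
noisy = clean + rng.normal(0, 0.5, size=(200, 64))
denoised = pca_denoise(noisy, n_components=1)
print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())  # True
```

In low-dose spectrum imaging the same idea applies per pixel spectrum, which is how an order-of-magnitude SNR gain becomes possible without increasing the electron dose.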
Reliability history of the Apollo guidance computer
The Apollo guidance computer was designed to provide the computation necessary for guidance, navigation and control of the command module and the lunar landing module of the Apollo spacecraft. The computer was designed using the technology of the early 1960s, and production was completed by 1969. During the development, production, and operational phases of the program, the computer accumulated a very interesting history, which is valuable for evaluating the technology, production methods, system integration, and the reliability of the hardware. The operational experience with the Apollo guidance systems includes 17 computers which flew missions and another 26 flight-type computers which are still in various phases of prelaunch activity, including storage, system checkout, prelaunch spacecraft checkout, etc. These computers were manufactured and maintained under very strict quality control procedures, with requirements for reporting and analyzing all indications of failure. Probably no other computer or electronic equipment of equivalent complexity has been as well documented and monitored. Since it has demonstrated a unique reliability history, it is important to evaluate the techniques and methods that have contributed to the high reliability of this computer.