
    Integrated circuit outlier identification by multiple parameter correlation

    Semiconductor manufacturers must ensure that chips conform to their specifications before they are shipped to customers. This is achieved by testing various parameters of a chip to determine whether it is defective. Separating defective chips from fault-free ones is relatively straightforward for functional or other Boolean tests that produce a go/no-go type of result. However, making this distinction is extremely challenging for parametric tests: owing to continuous distributions of parameters, any pass/fail threshold results in yield loss and/or test escapes. Continuous advances in process technology, increased process variations, and inaccurate fault models all make this even worse. The pass/fail thresholds for such tests are usually set using prior experience or by a combination of visual inspection and engineering judgment. Many chips have parameters that exceed certain thresholds but still pass Boolean tests. Owing to the imperfect nature of tests, determining whether these chips (called "outliers") are indeed defective is nontrivial. To avoid wasted investment in packaging or further testing, it is important to screen defective chips early in a test flow. Moreover, if the seemingly strange behavior of outlier chips can be explained with the help of certain process parameters or by correlating additional test data, such chips can be retained in the test flow rather than discarded before they are proved to be fatally flawed. In this research, we investigate several methods to distinguish true outliers (defective chips, or chips that lead to functional failure) from apparent outliers (seemingly defective but fault-free chips). The outlier identification methods in this research primarily rely on wafer-level spatial correlation, but also use additional test parameters. These methods are evaluated and validated using industrial test data. The potential of these methods to reduce burn-in is also discussed.
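
    To illustrate the wafer-level spatial correlation idea in this abstract, the short Python sketch below flags dies whose parametric value deviates strongly from the median of their immediate neighbors. The wafer map, the 3x3 neighborhood, and the robust threshold k are illustrative assumptions, not the thesis's actual algorithm.

    import numpy as np

    def spatial_outliers(wafer, k=4.0):
        """Flag dies whose value deviates from their spatial neighborhood.
        NaN entries mark untested or missing dies."""
        rows, cols = wafer.shape
        residual = np.full_like(wafer, np.nan)
        for r in range(rows):
            for c in range(cols):
                if np.isnan(wafer[r, c]):
                    continue
                win = wafer[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2].copy()
                win[min(r, 1), min(c, 1)] = np.nan      # drop the die itself
                neigh = win[~np.isnan(win)]
                if neigh.size == 0:
                    continue
                residual[r, c] = wafer[r, c] - np.median(neigh)
        res = residual[~np.isnan(residual)]
        # robust sigma from the median absolute deviation of the residuals
        sigma = 1.4826 * np.median(np.abs(res - np.median(res)))
        flags = np.zeros(wafer.shape, dtype=bool)
        valid = ~np.isnan(residual)
        flags[valid] = np.abs(residual[valid]) > k * sigma
        return flags                                    # True marks candidate outliers

    # Example: a smooth wafer-level gradient plus noise, with one anomalous die
    rng = np.random.default_rng(0)
    wafer = np.add.outer(np.linspace(1.0, 1.2, 20), np.linspace(0.0, 0.1, 20))
    wafer += rng.normal(0.0, 0.005, wafer.shape)
    wafer[10, 10] += 0.5
    print(np.argwhere(spatial_outliers(wafer)))         # expected: [[10 10]]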

    Constraint-driven RF test stimulus generation and built-in test

    With the explosive growth in wireless applications, the last decade has witnessed an ever-increasing test challenge for radio frequency (RF) circuits. While the design community has pushed the envelope far into the future by extending CMOS processes to high-frequency wireless devices, test methodology has not advanced at the same pace. Consequently, testing such devices has become a major bottleneck in high-volume production, further driven by the growing need for tighter quality control. RF devices undergo testing during the prototype phase and during high-volume manufacturing (HVM). The benchtop test equipment used throughout prototyping is very precise yet specialized for a subset of functionalities. HVM calls for a different test paradigm that emphasizes throughput and sufficiency, in which the projected performance parameters are measured one by one for each device by automated test equipment (ATE) and compared against defined limits called specifications. The set of tests required for each product differs greatly in terms of the equipment required and the time taken to test individual devices. Together with signal integrity, precision, and repeatability concerns, the initial cost of RF ATE is prohibitively high. As more functionality and protocols are integrated into a single RF device, the number of specifications to be tested also increases, adding to the overall cost of testing, both in terms of initial and recurring operating costs. In addition to the cost problem, RF testing poses another challenge when these components are integrated into package-level system solutions. In systems-on-packages (SOP), the test problems resulting from signal integrity, input/output (I/O) bandwidth, and limited controllability and observability have initiated a paradigm shift in high-speed analog testing, favoring alternative approaches such as built-in tests (BIT), where the test functionality is brought into the package. This scheme can make use of a low-cost external tester connected through a low-bandwidth link to perform demanding response evaluations, as well as the analog-to-digital converters and digital signal processors available in the package to facilitate testing. Although research on analog built-in test has demonstrated hardware solutions for single specifications, the paradigm shift calls for a more general approach in which a single methodology can be applied across different devices and multiple specifications can be verified through a single test hardware unit, minimizing the area overhead. Specification-based alternate test methodology provides a suitable and flexible platform for handling the challenges described above. In this thesis, a framework that integrates ATE and system constraints into test stimulus generation and test response extraction is presented for the efficient production testing of high-performance RF devices using specification-based alternate tests. The main components of the presented framework are as follows:
    Constraint-driven RF alternate test stimulus generation: An automated test stimulus generation algorithm is developed for RF devices that are evaluated by a specification-based alternate test solution. High-level models of the test signal path define constraints on the search space of the optimized test stimulus. These models are generated in enough detail that they inherently capture the limitations of the low-cost ATE and the I/O restrictions of the device under test (DUT), yet they are simple enough that the non-linear optimization problem can be solved empirically in a reasonable amount of time.
    Feature extractors for BIT: A methodology for the built-in testing of RF devices integrated into SOPs is developed using additional hardware components. These hardware components correlate the high-bandwidth test response to low-bandwidth signatures while extracting the test-critical features of the DUT. Supervised learning is used to map these extracted features, which otherwise are too complicated to decipher by plain mathematical analysis, onto the specifications under test.
    Defect-based alternate testing of RF circuits: A methodology for the efficient testing of RF devices with low-cost defect-based alternate tests is developed. The signature of the DUT is probabilistically compared with a class of defect-free device signatures to explore possible corners under acceptable levels of process parameter variation. Such a defect filter applies discrimination rules generated by a supervised classifier and eliminates the need for a library of possible catastrophic defects.
    Ph.D. Committee Chair: Chatterjee, Abhijit; Committee Members: Durgin, Greg; Keezer, David; Milor, Linda; Sitaraman, Sures
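
    As a concrete illustration of the specification-based alternate test idea, the following Python sketch maps low-bandwidth response signatures to a specification with a supervised regression model. The synthetic signatures, the gain relation, and the choice of regressor are assumptions for illustration only, not the framework developed in the thesis.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data: each row is a low-bandwidth signature extracted
    # from a DUT's response to an optimized stimulus; the target is one RF
    # specification (e.g. gain in dB). Real data would come from simulation
    # corners or measured devices.
    rng = np.random.default_rng(1)
    signatures = rng.normal(size=(500, 8))            # 8 signature features per device
    gain_db = (20 + 1.5 * signatures[:, 0]
               - 0.8 * signatures[:, 3] ** 2
               + rng.normal(0, 0.1, 500))             # nonlinear spec dependence

    X_train, X_test, y_train, y_test = train_test_split(
        signatures, gain_db, test_size=0.2, random_state=0)

    # Supervised regression maps signatures to the specification under test,
    # replacing a direct (and costly) specification measurement on the ATE.
    model = GradientBoostingRegressor().fit(X_train, y_train)
    pred = model.predict(X_test)
    print("max |prediction error| (dB):", np.abs(pred - y_test).max())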

    Artificial neural networks and their applications to intelligent fault diagnosis of power transmission lines

    Over the past thirty years, the idea of computing based on models inspired by human brains and biological neural networks has emerged. Artificial neural networks play an important role in the field of machine learning and hold the key to performing many intelligent tasks by machines. They are used in applications such as pattern recognition, data classification, stock market prediction, aerospace, weather forecasting, control systems, intelligent automation, robotics, and healthcare. Their architectures generally consist of an input layer, multiple hidden layers, and one output layer, and they can be implemented in software or hardware. Nowadays, many structures exist for artificial neural networks, each with its own particular applications. The types used in this study include feedforward neural networks, convolutional neural networks, and general regression neural networks. Increasing the number of layers, as needed for large datasets, implies increased computational expense. Therefore, besides these basic structures, advanced techniques such as transfer learning, federated learning, and reinforcement learning have been proposed to overcome the drawbacks of the original deep learning structures. Furthermore, implementing artificial neural networks in hardware gives scientists and engineers the chance to perform high-dimensional and big-data-related tasks because it removes the memory access time constraint known as the von Neumann bottleneck. Accordingly, analog and digital circuits are used for artificial neural network implementations without relying on general-purpose CPUs. In this study, the problem of fault detection, identification, and location estimation for transmission lines is studied, and various deep learning approaches are designed and implemented as solutions. This research work focuses on transmission line datasets, their faults, and the importance of detecting, identifying, and locating them, and it includes a comprehensive review of previous studies on these three tasks. The application of various artificial neural networks, such as feedforward neural networks, convolutional neural networks, and general regression neural networks, to the detection, identification, and location estimation of transmission line faults is also discussed. Advanced methods based on artificial neural networks, such as the transfer learning technique, are also considered in this thesis. These methodologies are designed and applied to transmission line datasets to enable scientists and engineers to use fewer data points and spend less time on the training step. This work also proposes a transfer learning-based technique for distinguishing faulty and non-faulty insulators in transmission line images. In addition, an effective design for an activation function of artificial neural networks is proposed: using the hyperbolic tangent as an activation function has several benefits, including inclusiveness and high accuracy.
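
    The kind of feedforward classifier described above can be illustrated with a small Python sketch. The synthetic three-phase voltage/current features, fault classes, and network size below are assumptions for illustration, not the datasets or architectures used in the thesis.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in: per-phase RMS voltages and currents at one relay
    # location, labelled with a fault type (0 = no fault, 1 = A-to-ground,
    # 2 = B-C fault). Real work would use simulated or recorded line data.
    rng = np.random.default_rng(2)

    def make_sample(label):
        v = np.array([1.0, 1.0, 1.0]) + rng.normal(0, 0.02, 3)   # p.u. voltages
        i = np.array([0.4, 0.4, 0.4]) + rng.normal(0, 0.02, 3)   # p.u. currents
        if label == 1:            # phase A to ground: VA sags, IA rises
            v[0] *= 0.3
            i[0] *= 6
        elif label == 2:          # phase B-C fault: VB, VC sag, IB, IC rise
            v[1:] *= 0.5
            i[1:] *= 4
        return np.concatenate([v, i])

    labels = rng.integers(0, 3, 1500)
    features = np.array([make_sample(l) for l in labels])

    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2,
                                              random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000,
                        random_state=0).fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))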

    Machine learning support for logic diagnosis


    Microelectronic circuits for noninvasive ear type assistive devices

    An ear-type system and its circuit realization are investigated for application as a new class of assistive devices. Auditory brainstem responses obtained from clinical hearing measurements are used to develop ear-type systems that mimic the physical and behavioral characteristics of the individual auditory system. When the effects of hearing loss and disorders can be detected in the measured responses, normal and impaired characteristics of the human auditory system can be differentiated, and a new noninvasive way of correcting these undesired effects is proposed. The ear-type auditory brainstem response system is developed using an adaptation of a nonlinear neural network architecture, and the correction system is realized using the derived inverse of the neural network. Microelectronic circuits for these systems are designed and simulated, showing the possibility of developing a hearing-aid-type device that could help hearing-impaired patients in an alternative, noninvasive way.
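
    The forward-model-plus-inverse idea can be sketched in Python as follows. The thesis derives an inverse of the trained neural network itself; here, as a labelled simplification, only a numerical inverse of an assumed forward model is shown, with synthetic stimulus/response data standing in for measured auditory brainstem responses.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Forward model: map a stimulus level to a (synthetic) auditory-response
    # feature with a saturating nonlinearity, standing in for measured
    # auditory brainstem response characteristics.
    rng = np.random.default_rng(3)
    stim = np.linspace(0.0, 1.0, 400).reshape(-1, 1)               # stimulus level
    resp = np.tanh(3.0 * stim) + rng.normal(0, 0.01, stim.shape)   # response feature

    forward = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                           random_state=0).fit(stim, resp.ravel())

    def inverse(target, grid=np.linspace(0.0, 1.0, 2001)):
        """Numerical inverse of the forward model: find the stimulus level
        whose predicted response is closest to the desired target."""
        pred = forward.predict(grid.reshape(-1, 1))
        return grid[np.argmin(np.abs(pred - target))]

    # Map a desired (normal-hearing) response back to the stimulus level
    # that the modelled auditory system would require.
    print("stimulus for target response 0.5:", inverse(0.5))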

    High-Density Solid-State Memory Devices and Technologies

    This Special Issue aims to examine high-density solid-state memory devices and technologies from various standpoints in an attempt to foster their continued success in the future. Considering that a broadening range of applications will likely offer different types of solid-state memories their chance in the spotlight, the Special Issue is not focused on a specific storage solution but rather embraces all the most relevant solid-state memory devices and technologies currently on the stage. The subjects dealt with in this Special Issue are correspondingly widespread, ranging from process and design issues and innovations to experimental and theoretical analysis of device operation, and from the performance and reliability of memory devices and arrays to the exploitation of solid-state memories in pursuit of new computing paradigms.

    Interpixel capacitive coupling

    Interpixel coupling (IPC) is a form of electronic crosstalk in which a pixel capacitively couples signal charge to its neighbors. It is a deterministic process, whereas diffusion crosstalk is stochastic. IPC smooths the normal image signal as well as the Poisson noise; as a result, the conversion gain will be underestimated by the photon transfer method. However, capacitive coupling did not receive much attention until Andrew Moore and Gert Finger recently studied its potential effect on the measurement of responsive detective efficiency of image detector arrays in both theory and observation. This thesis continues to investigate this electronic effect. The potential impact of capacitive coupling on photometric measurement is first simulated. Methods based on the inverse filter and the Wiener filter are tested to correct this coupling effect. It appears that the signal loss can be restored to reasonable accuracy by applying the pseudo-inverse filter, provided that we have full knowledge of the interpixel coupling. New methods to measure the IPC value are proposed and tested based on cosmic ray events and hot pixels, using dark frame data sampled up-the-ramp. With these two methods, the coupling effect of a hybrid HgCdTe array is studied. The coupling magnitude is measured at different detector temperatures, showing that the IPC magnitude decreases with increasing temperature. The IPC dependencies on the center pixel intensity and the background are also checked: the magnitude becomes smaller when the target pixel event gets stronger, while the trend reverses with changing background level. Finally, the possible impact on read noise, i.e. noise correlation with adjacent pixels, is examined based on dark frames from infrared HgCdTe and Si-PIN arrays. The initial results indicate that the read noise component is not affected by the capacitive coupling between pixels, in the sense that read noise is uncorrelated with neighboring pixels, in contrast to the neighbor-correlated Poisson noise due to interpixel capacitance.
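
    The pseudo-inverse filtering mentioned above can be sketched in Python as follows, assuming full knowledge of a simple 4-neighbor coupling kernel with coupling fraction alpha; the kernel value and the regularization constant are illustrative, not the quantities measured in the thesis.

    import numpy as np
    from scipy.ndimage import convolve

    # Assumed 3x3 IPC kernel: the center pixel keeps most of its charge and
    # couples a fraction alpha to each of its four nearest neighbors; the
    # kernel sums to 1 because charge is redistributed, not lost.
    alpha = 0.01
    kernel = np.array([[0.0,   alpha,          0.0],
                       [alpha, 1 - 4 * alpha,  alpha],
                       [0.0,   alpha,          0.0]])

    def correct_ipc(image, kernel, eps=1e-3):
        """Pseudo-inverse (regularized inverse) filtering in the Fourier domain;
        eps guards against division by near-zero kernel frequencies."""
        ny, nx = image.shape
        k = np.zeros((ny, nx))
        k[:3, :3] = kernel
        k = np.roll(k, (-1, -1), axis=(0, 1))         # center the kernel at (0, 0)
        K = np.fft.rfft2(k)
        I = np.fft.rfft2(image)
        return np.fft.irfft2(I * np.conj(K) / (np.abs(K) ** 2 + eps), s=image.shape)

    # A single bright pixel spread by IPC, then restored.
    img = np.zeros((64, 64))
    img[32, 32] = 1000.0
    blurred = convolve(img, kernel, mode='wrap')
    print("peak before correction:", blurred[32, 32])                      # ~960
    print("peak after correction:", correct_ipc(blurred, kernel)[32, 32])  # ~1000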

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort required to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced in the past for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Design, Characterization And Analysis Of Electrostatic Discharge (ESD) Protection Solutions In Emerging And Modern Technologies

    Electrostatic discharge (ESD) is a significant hazard to electronic components and systems. For a given processing technology, each circuit application requires a customized ESD consideration that includes the devices' operating voltage, leakage current, breakdown constraints, and footprint. As new technology nodes mature every 3-5 years, the design of effective ESD protection solutions has become more and more challenging due to the narrowed design window, elevated electric field and current density, and new failure mechanisms that are not well understood. The endeavor of this research is to develop novel, effective, and robust ESD protection solutions for both emerging technologies and modern complementary metal-oxide-semiconductor (CMOS) technologies. Si nanowire field-effect transistors are projected by the International Technology Roadmap for Semiconductors as promising next-generation CMOS devices owing to their superior DC and RF performance as well as their ease of fabrication in existing silicon processing. Aiming to propose ESD protection solutions for nanowire-based circuits, the dimension parameters, fabrication process, and layout dependency of such devices under Human Body Model (HBM) ESD stress are studied experimentally, accompanied by failure analysis revealing the ESD-induced failure mechanisms. The findings, including design methodologies, failure mechanisms, and technology comparisons, should provide practical know-how for the development of ESD protection schemes for nanowire-based integrated circuits. Organic thin-film transistors (OTFTs) are the basic elements of emerging flexible, printable, large-area, and low-cost organic electronic circuits. Although there are plentiful studies focusing on DC-stress-induced reliability degradation, the operation mechanism of OTFTs subject to ESD is not yet available in the literature and is urgently needed before organic technology can be pushed into the consumer market. In this work, the ESD operation mechanisms of OTFTs, depending on gate biasing conditions and dimension parameters, are investigated by extensive characterization and thorough evaluation. The device degradation evolution and failure mechanism under ESD are also investigated through specially designed experiments. In addition to the exploration of ESD protection solutions in emerging technologies, effort has also been placed on the design and analysis of a major ESD protection device, the diode-triggered silicon-controlled rectifier (DTSCR), in a modern CMOS technology (90 nm bulk). On the one hand, a new type of DTSCR having bi-directional conduction capability, an optimized design window, high HBM robustness, and low parasitic capacitance is developed, utilizing the combination of a bi-directional silicon-controlled rectifier and bi-directional diode strings. On the other hand, the HBM and Charged Device Model (CDM) ESD robustness of DTSCRs using four typical layout topologies is compared and analyzed in terms of trigger voltage, holding voltage, failure current density, turn-on time, and overshoot voltage. The advantages and drawbacks of each layout are summarized, and those offering the best overall performance are suggested at the end.

    Sustainable Fault-handling Of Reconfigurable Logic Using Throughput-driven Assessment

    A sustainable Evolvable Hardware (EH) system is developed for SRAM-based reconfigurable Field Programmable Gate Arrays (FPGAs) using outlier detection and group-testing-based assessment principles. The fault diagnosis methods presented herein leverage throughput-driven, relative fitness assessment to maintain resource viability autonomously. Group-testing-based techniques are developed for adaptive, input-driven fault isolation in FPGAs without the need for exhaustive testing or coding-based evaluation. The techniques keep the device operational and, when possible, generate validated outputs throughout the repair process. Adaptive fault isolation methods based on discrepancy-enabled pairwise comparisons are developed. By observing the discrepancy characteristics of multiple Concurrent Error Detection (CED) configurations, a method for robust detection of faults is developed based on pairwise parallel evaluation using Discrepancy Mirror logic. The results from the analytical FPGA model are demonstrated via a self-healing, self-organizing evolvable hardware system. Reconfigurability of the SRAM-based FPGA is leveraged to identify logic resource faults, which are successively excluded by group testing using alternate device configurations. This simplifies the system architect's role to defining functionality in a high-level Hardware Description Language (HDL) and selecting a system-level performance-versus-availability operating point. System availability, throughput, and mean time to isolate faults are monitored and maintained using an Observer-Controller model. Results are demonstrated using a Data Encryption Standard (DES) core that occupies approximately 305 FPGA slices on a Xilinx Virtex-II Pro FPGA. With a single simulated stuck-at fault, the system identifies a completely validated replacement configuration within three to five positive tests. The approach demonstrates a readily implemented yet robust organic hardware application framework featuring a high degree of autonomous self-control.
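
    The group-testing flavor of the fault isolation described above can be illustrated with a small Python sketch. The resource count, the halving strategy, and the discrepancy oracle below are illustrative assumptions; the dissertation's actual flow evaluates alternate FPGA configurations with Discrepancy Mirror logic rather than a software stand-in.

    import random

    def shows_discrepancy(subset, faulty):
        """Stand-in for running a pair of CED configurations that exercise
        only the resources in `subset` and comparing their outputs."""
        return faulty in subset

    def isolate_fault(resources, faulty):
        """Adaptively halve the suspect set, keeping whichever half still
        produces a discrepancy, until one resource remains."""
        suspects = list(resources)
        tests = 0
        while len(suspects) > 1:
            half = suspects[:len(suspects) // 2]
            tests += 1
            suspects = half if shows_discrepancy(half, faulty) else suspects[len(half):]
        return suspects[0], tests

    resources = range(64)                 # e.g. 64 candidate logic slices
    faulty = random.randrange(64)         # injected stuck-at fault location
    found, tests = isolate_fault(resources, faulty)
    print(f"faulty resource {faulty} isolated as {found} in {tests} tests")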