355 research outputs found

    Multi-Cycle Test with Partial Observation on Scan-Based BIST Structure

    Field test for reliability is usually performed with a small amount of memory resources, and it requires techniques that may differ somewhat from conventional manufacturing tests. This paper proposes a novel technique that improves fault coverage, or reduces the number of test vectors needed to achieve a given fault coverage, on a scan-based BIST structure. We evaluate a multi-cycle test method that observes the values of a subset of the flip-flops on a chip during capture mode. The experimental results show that this partial observation achieves a fault coverage improvement with smaller hardware overhead than full observation.
    2011 Asian Test Symposium (ATS), 20-23 Nov. 2011, New Delhi, India
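    The abstract does not say how the observed flip-flops are chosen, but the coverage-versus-overhead trade-off of partial observation can be illustrated with a toy greedy selection over a fault-detectability table. Everything below (the random detectability matrix, the subset size, the function names) is an assumption for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical detectability data: detect[f, ff, c] is True when fault f
# propagates to flip-flop ff at capture cycle c. A real flow would obtain
# this from fault simulation of the scan-based BIST; here it is random.
N_FAULTS, N_FFS, N_CYCLES = 500, 64, 4
detect = rng.random((N_FAULTS, N_FFS, N_CYCLES)) < 0.02

def coverage(observed_ffs):
    # A fault counts as covered if any observed flip-flop catches it in
    # any of the multi-cycle capture windows.
    return detect[:, observed_ffs, :].any(axis=(1, 2)).mean()

# Greedily pick a small observation subset (partial observation).
chosen, remaining = [], set(range(N_FFS))
for _ in range(8):                      # observe only 8 of 64 flip-flops
    best = max(remaining, key=lambda ff: coverage(chosen + [ff]))
    chosen.append(best)
    remaining.remove(best)

print(f"partial ({len(chosen)}/{N_FFS} FFs): {coverage(chosen):.1%}")
print(f"full observation: {coverage(list(range(N_FFS))):.1%}")
```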

    Delay Measurements and Self Characterisation on FPGAs

    This thesis examines new timing measurement methods for self delay characterisation of Field-Programmable Gate Array (FPGA) components and delay measurement of complex circuits on FPGAs. Two novel measurement techniques, based on analysis of a circuit's output failure rate and transition probability respectively, are proposed for accurate, precise and efficient measurement of propagation delays. The transition probability based method is especially attractive, since it requires no modifications to the circuit-under-test and consumes little hardware resource, making it an ideal method for physical delay analysis of FPGA circuits. The relentless advancement of process technology has led to smaller and denser transistors in integrated circuits. While FPGA users benefit from this in terms of increased hardware resources for more complex designs, actual FPGA productivity in terms of timing performance (operating frequency, latency and throughput) has lagged behind the potential improvements offered by the improved technology, owing to delay variability in FPGA components and the inaccuracy of the timing models used in FPGA timing analysis. The ability to measure the delay of any arbitrary circuit on an FPGA offers many opportunities for on-chip characterisation and physical timing analysis, allowing delay variability to be accurately tracked and variation-aware optimisations to be developed, reducing the productivity gap observed in today's FPGA designs. The measurement techniques are developed into complete self-measurement and characterisation platforms in this thesis, demonstrating their practical use in actual FPGA hardware for cross-chip delay characterisation and accurate delay measurement of both complex combinatorial and sequential circuits, further reinforcing their role in solving the delay variability problem in FPGAs.
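    As a concrete illustration of the failure-rate idea: sweep the sampling clock period across the expected delay and locate the period at which half of the captures fail. The sketch below is a minimal Monte Carlo model with an assumed true delay and Gaussian timing noise; it is not the thesis's measurement circuit.

```python
import numpy as np

rng = np.random.default_rng(1)

TRUE_DELAY = 2.50   # ns, the unknown quantity to be measured (assumed)
JITTER_SD  = 0.05   # ns, Gaussian timing noise on each capture (assumed)

def failure_rate(period_ns, trials=10_000):
    # Toggle the circuit-under-test input every cycle; a capture "fails"
    # when the output transition has not yet arrived at the sampling
    # flip-flop, i.e. when the noisy propagation delay exceeds the period.
    arrival = TRUE_DELAY + rng.normal(0.0, JITTER_SD, trials)
    return np.mean(arrival > period_ns)

# Sweep the clock period and locate the 50% failure point: that period
# is the estimate of the propagation delay.
periods = np.arange(2.0, 3.0, 0.01)
rates = np.array([failure_rate(p) for p in periods])
estimate = periods[np.argmin(np.abs(rates - 0.5))]
print(f"estimated delay ~ {estimate:.2f} ns (true {TRUE_DELAY} ns)")
```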

    Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits

    The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features such as those routinely employed in current high-density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, in addition to the prospect of single-chip systems, will mandate the use of fault-tolerance for improved reliability. A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault-tolerance, but is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered, and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. Consideration of the problem of testing the test circuitry led to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and is tested as it performs its function. Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that the faulty units are discarded. This raises the question: what is the optimum size of a unit? A mathematical model linking yield and reliability is therefore developed to answer this question, and also to study the effects of parameters such as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by applying the model to a typical example. Another important result concerns the effect of periodic testing on reliability: it is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
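    The abstract does not reproduce the yield-reliability model itself, but the flavour of the unit-size question can be sketched with a textbook Poisson yield model plus spares: partition the core into n units, add spares and per-unit reconfiguration overhead, and ask when redundancy pays off. All constants below are assumptions, not values from the thesis.

```python
from math import comb, exp

D = 0.5          # defect density per unit area (assumed)
A_TOTAL = 4.0    # core area that must function (assumed)
OVERHEAD = 0.05  # per-unit area fraction for test/reconfiguration logic (assumed)

def chip_yield(n_units, n_spares):
    # Poisson yield of a single unit, including its overhead share.
    unit_area = (A_TOTAL / n_units) * (1.0 + OVERHEAD)
    y = exp(-unit_area * D)
    m = n_units + n_spares
    # The chip works if at least n_units of the m fabricated units are good.
    return sum(comb(m, k) * y**k * (1 - y)**(m - k)
               for k in range(n_units, m + 1))

# Sweep the partition granularity to see where redundancy pays off.
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} units + 2 spares: yield {chip_yield(n, 2):.3f}")
```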

    Advances in Architectures and Tools for FPGAs and their Impact on the Design of Complex Systems for Particle Physics

    The continual improvement of semiconductor technology has provided rapid advancements in device frequency and density. Designers of electronics systems for high-energy physics (HEP) have benefited from these advancements, transitioning many designs from fixed-function ASICs to more flexible FPGA-based platforms. Today’s FPGA devices provide a significantly higher amount of resources than those available during the initial Large Hadron Collider design phase. To take advantage of the capabilities of future FPGAs in the next generation of HEP experiments, designers must not only anticipate further improvements in FPGA hardware, but must also adopt design tools and methodologies that can scale along with that hardware. In this paper, we outline the major trends in FPGA hardware, describe the design challenges these trends will present to developers of HEP electronics, and discuss a range of techniques that can be adopted to overcome these challenges.

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration level for integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, specifications of the converters in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling, the short-channel effects, are also looming large as technology strides into the deep-submicron regime. The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented with different requirements on each function. Practical realizations show the trend that, to a first order, converter power is directly proportional to sampling rate. However, the required power dissipation becomes nonlinear as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling rate specification. This thesis is in a sense unique in that it covers the whole spectrum of design, test, debugging and calibration of multi-step A/D converters; it incorporates the development of circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter, and to enhance testing and debugging potential so as to detect errors dynamically, to isolate and confine faults, and to recover from and compensate for the errors continuously. The power efficiency attainable at high resolution by a multi-step converter, combining parallelism and calibration and exploiting low-voltage circuit techniques, is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process. Lower power supply voltages significantly reduce noise margins and increase variations in process, device and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those generated at the end of the design process. Such defects can cause various types of malfunctioning, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs associated with testing and debugging of complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated into the proposed ATPG, DfT and BIST methodologies.
    Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation also imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process parameter variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good parameter estimates for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digitally enhanced calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows for efficient usage of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires a small number of operations per iteration and needs neither correlation function calculation nor matrix inversions. The presented foreground calibration algorithm needs no dedicated test signal and does not take away any of the conversion time; it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements on a silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
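    The adaptive filter described (few operations per iteration, no correlation computation or matrix inversion) matches the classic LMS update; whether the thesis uses exactly LMS is not stated in the abstract. Below is a minimal LMS sketch that estimates an assumed gain and offset error of a converter against a reference path; every constant is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: the fast converter output differs from the ideal value by an
# unknown gain and offset error (values assumed below); LMS estimates the
# correction using a slow reference path, with O(taps) work per sample
# and no matrix inversion.
GAIN_ERR, OFF_ERR = 0.97, 0.015
x_ideal = rng.uniform(-1, 1, 20_000)     # reference samples
x_raw = GAIN_ERR * x_ideal + OFF_ERR     # uncalibrated converter output

w = np.array([1.0, 0.0])   # [gain correction, offset correction]
mu = 0.05                  # LMS step size
for raw, ref in zip(x_raw, x_ideal):
    u = np.array([raw, 1.0])
    y = w @ u              # corrected sample
    e = ref - y            # error against the reference path
    w += mu * e * u        # LMS update: w <- w + mu * e * u

print(f"estimated correction: gain={w[0]:.4f} (ideal {1/GAIN_ERR:.4f}), "
      f"offset={w[1]:.4f} (ideal {-OFF_ERR/GAIN_ERR:.4f})")
```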

    A Study on Test Application in the Field and Low-Power Logic BIST

    Advances in semiconductor process technology have resulted in various aging issues in the field operation of Very Large Scale Integration (VLSI) circuits. For example, HCI (Hot Carrier Injection), BTI (Bias Temperature Instability) and TDDB (Time-Dependent Dielectric Breakdown) are well-known aging phenomena, and they can increase circuit delay, resulting in serious reliability problems. In order to avoid system failures caused by aging, recent designs usually set a certain timing margin on the operational frequency of the circuit. However, it is difficult to determine the proper size of this timing margin, because the aging speed in actual use depends on the operational environment and is hard to predict: a pessimistic prediction improves the reliability of the system but sacrifices performance. BIST-based field test is a promising way to guarantee the reliability of the circuit by detecting aging-induced faults during circuit operation. However, field test is subject to a strict limit on test application time, which makes it difficult to achieve high test quality, so an effective method of applying tests in the field is required. In addition to requiring short test application times, BIST-based field test must perform at-speed testing in order to detect timing-related defects. It is well known, however, that power dissipation during testing is much higher than in normal circuit operation. Because excessive power dissipation causes higher IR-drop and higher temperature, it increases delay during testing, in turn causing false at-speed test results and yield loss. While many low-power test methods have been proposed to tackle the test power issue, inadequate test power reduction and lowered fault coverage remain important problems. Moreover, low-power testing that focuses only on power reduction is insufficient: if the test power is reduced to a very low level, a timing-related defect may be missed by the test, and a defective circuit will pass as a good part. Appropriate test power control is therefore necessary, although it was not considered in existing methods. In this dissertation, we first propose a new test application method that satisfies the short-test-time constraint of BIST-based field test, and then a new low-power BIST scheme that controls the test power to a specified value in order to improve field test quality. In chapter 3, a new field test application method named "rotating test" is presented, in which a set of test patterns generated to detect aging-induced faults is partitioned into several subsets, and each subset is applied in one test session in the field. In order to maximize the test quality of rotating test, we propose test partitioning methods with two aims (see the sketch after this abstract): the first maximizes the fault coverage of each subset obtained by partitioning; the second minimizes the detection time interval of all faults in rotating test, so as to avoid system failures. Experimental results demonstrate the effectiveness of the proposed partitioning methods. In chapter 4, we propose a new low-power BIST scheme which can control the scan-in power, scan-out power and capture power while keeping test coverage at a high level. In this scheme, a new circuit called a pseudo low-pass filter (PLPF) is developed for scan-in power control, and a multi-cycle capture test technique is employed to reduce the capture power.
    In order to control the scan-out power dissipated by test responses, we propose a novel method that selects certain flip-flops in the scan chains at the logic design phase and fills them with proper values before the scan-shift operation starts, so as to reduce the switching activity associated with scan-out. Experimental results for the ISCAS-89 and ITC-99 benchmark circuits show significant reductions in scan-in power (from an original toggle rate of 50% to 7-8%) and capture power (from 20% to 6-7%). With the scan-out controlling method, the scan-out power is reduced from 17.2% to 8.4%, which could not be achieved by conventional methods. Moreover, in order to control the test power to a specified rate and accommodate various test power requirements, a scan-shift power controlling scheme is also discussed; it is capable of controlling the scan-shift toggle rate to any value between 6.7% and 50%.
    Doctoral dissertation, Kyushu Institute of Technology (degree number 情工博甲第289号, conferred 25 March 2014). Contents: 1. Introduction; 2. Preliminary; 3. BIST-based field rotating test for aging-induced fault detection; 4. Test power reduction for logic BIST; 5. Summary.
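    The dissertation's exact partitioning algorithms are not given in the abstract. The first aim of rotating test (each subset retaining high standalone coverage) can be approximated with a simple round-robin greedy partition over a pattern-fault detection table; the table, sizes and names below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: detects[p, f] is True if pattern p detects fault f. A real
# flow would take this from fault simulation of the BIST patterns.
N_PAT, N_FAULTS, N_SESSIONS = 60, 300, 4
detects = rng.random((N_PAT, N_FAULTS)) < 0.05

# Round-robin greedy partitioning: repeatedly give each session the
# pattern that adds the most not-yet-covered faults for that session,
# so every subset approaches the full set's coverage on its own.
sessions = [[] for _ in range(N_SESSIONS)]
covered = [np.zeros(N_FAULTS, bool) for _ in range(N_SESSIONS)]
unassigned = set(range(N_PAT))
s = 0
while unassigned:
    best = max(unassigned, key=lambda p: np.sum(detects[p] & ~covered[s]))
    sessions[s].append(best)
    covered[s] |= detects[best]
    unassigned.remove(best)
    s = (s + 1) % N_SESSIONS

for i, c in enumerate(covered):
    print(f"session {i}: {len(sessions[i])} patterns, coverage {c.mean():.1%}")
```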

    Low-overhead fault-tolerant logic for field-programmable gate arrays

    While allowing for the fabrication of increasingly complex and efficient circuitry, transistor shrinkage and count-per-device expansion have major downsides: chiefly increased variation, degradation and fault susceptibility. For this reason, design-time consideration of faults will have to be given to increasing numbers of electronic systems in the future to ensure that yields, reliabilities and lifetimes remain acceptably high. Many mathematical operators commonly accelerated in hardware are suited to modifications that provide datapath error detection and correction capabilities with far lower area, performance and/or power consumption overheads than those incurred through more established, general-purpose fault tolerance methods such as modular redundancy. Field-programmable gate arrays are uniquely placed to allow further area savings to be made thanks to their dynamic reconfigurability. The majority of the technical work presented within this thesis is based upon a benchmark hardware accelerator, a matrix multiplier, that underwent several evolutions in order to detect and correct faults manifesting along its datapath at runtime. In the first instance, fault detectability in excess of 99% was achieved in return for 7.87% additional area and 45.5% extra latency. In the second, the ability to correct errors caused by those faults was added at a cost of 4.20% more area, while 50.7% of this, and 46.2% of the previously incurred latency overhead, was removed through the introduction of partial reconfiguration in the third. The fourth demonstrates further reductions in both area and performance overheads, of 16.7% and 8.27% respectively, through systematic data width reduction, by allowing errors of less than ±0.5% of the maximum output value to propagate.
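    The thesis's specific operator-level modification is not described in the abstract. The classic example of such a scheme for a matrix multiplier is algorithm-based fault tolerance (ABFT) with row/column checksums, sketched below in plain NumPy rather than hardware; treat it as a stand-in for the general idea, not the thesis's circuit.

```python
import numpy as np

def abft_matmul(A, B):
    # Algorithm-based fault tolerance (Huang & Abraham style): append a
    # column-sum row to A and a row-sum column to B, then multiply. The
    # checksums of the full product allow a single error to be located.
    Af = np.vstack([A, A.sum(axis=0)])                 # extra checksum row
    Bf = np.hstack([B, B.sum(axis=1, keepdims=True)])  # extra checksum col
    return Af @ Bf

def check_and_correct(Cf):
    C = Cf[:-1, :-1]
    row_err = np.flatnonzero(C.sum(axis=1) != Cf[:-1, -1])
    col_err = np.flatnonzero(C.sum(axis=0) != Cf[-1, :-1])
    if row_err.size == 1 and col_err.size == 1:    # single-error case
        r, c = row_err[0], col_err[0]
        C[r, c] += Cf[r, -1] - C[r].sum()          # restore from row checksum
    return C

A = np.arange(9).reshape(3, 3)
B = np.arange(9, 18).reshape(3, 3)
Cf = abft_matmul(A, B)
Cf[1, 2] += 7                                      # inject a datapath error
assert np.array_equal(check_and_correct(Cf), A @ B)
print("single injected error detected and corrected")
```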

    Built-in-self-test of RF front-end circuitry

    Fuelled by the ever-increasing demand for wireless products and the advent of deep submicron CMOS, RF ICs have become fairly commonplace in the semiconductor market. This has given rise to a new breed of Systems-On-Chip (SOCs) with RF front-ends tightly integrated along with digital, analog and mixed-signal circuitry. However, the reliability of the integrated RF front-end continues to be a matter of significant concern and considerable research. A major challenge to the reliability of RF ICs is the fact that their performance is severely degraded not only by process-related faults but also by wide tolerances in on-chip passives and package parasitics. Given the absence of contact-based testing solutions for embedded RF SOCs (the very act of probing may affect the performance of the RF circuit), coupled with the presence of very few test access nodes, a Built-In Self-Test (BIST) approach may prove to be the most efficient test scheme. However, due to the associated challenges, a comprehensive and low-overhead BIST methodology for on-chip testing of RF ICs has not yet been reported in the literature. In the current work, an approach to RF self-test that has hitherto been unexplored, both in the literature and in the commercial arena, is proposed. A sensitive current monitor is used to extract variations in the supply current drawn by the circuit-under-test (CUT). These variations are then processed in the time and frequency domains to develop signatures. The acquired signatures can then be mapped to specific behavioral anomalies and their locations. The CUT is first excited by simple test inputs that can be generated on-chip. The current monitor extracts the corresponding variations in the supply current of the CUT, creating signatures that map to various performance metrics of the circuit. These signatures can then be post-processed by low-overhead on-chip circuitry and converted into an accessible form. To be successful in the RF domain, any BIST architecture must be minimally invasive and reliable, offer good fault coverage, and present low real-estate and power overheads. The current-based self-test approach successfully addresses all these concerns. The technique has been applied to RF low noise amplifiers, mixers and voltage-controlled oscillators. The circuitry and post-processing techniques have also been demonstrated in silicon (using the IBM 0.25 µm RF CMOS process). The entire self-test of the RF front-end can be accomplished in a total test time of approximately 30 µs, which is several orders of magnitude better than existing commercial test schemes.
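    As a flavour of the signature idea: compress a supply-current trace into a few time- and frequency-domain features and compare them against a golden signature within a tolerance. The waveforms, sampling rate, feature set and thresholds below are all assumptions for illustration, not the fabricated monitor.

```python
import numpy as np

rng = np.random.default_rng(4)
FS, N = 1e6, 2000                 # monitor sample rate and record length (assumed)
F_TONE = 10e3                     # test-stimulus ripple frequency, on-bin (assumed)
TONE_BIN = int(F_TONE * N / FS)   # = 20

def signature(i_dd):
    # Compress a supply-current trace into a compact signature: DC level,
    # RMS ripple, and the spectral magnitude at the stimulus frequency.
    spec = np.abs(np.fft.rfft(i_dd - i_dd.mean())) / N
    return np.array([i_dd.mean(), i_dd.std(), spec[TONE_BIN]])

def passes(sig, golden, tol=0.15):
    # Go/no-go decision: every component within +/-15% of the golden signature.
    return bool(np.all(np.abs(sig - golden) <= tol * np.abs(golden)))

t = np.arange(N) / FS
ripple = 2e-4 * np.sin(2 * np.pi * F_TONE * t)
healthy = 5.0e-3 + ripple         # nominal DC draw (toy numbers)
faulty = 6.5e-3 + ripple          # e.g. a bias fault raising the DC current

golden = signature(healthy + rng.normal(0, 1e-5, N))
print("healthy passes:", passes(signature(healthy + rng.normal(0, 1e-5, N)), golden))
print("faulty passes: ", passes(signature(faulty + rng.normal(0, 1e-5, N)), golden))
```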