82 research outputs found

    On testing VLSI chips for the big Viterbi decoder

    A general technique for testing very large scale integrated (VLSI) chips for the Big Viterbi Decoder (BVD) system is described. The test technique is divided into functional testing and fault-coverage testing. The purpose of functional testing is to verify that the design works functionally; functional test vectors are converted from the outputs of software simulations that model the BVD's behavior. Fault-coverage testing is used to detect and, in some cases, locate faulty components caused by fabrication defects, and is useful for screening out bad chips. Finally, the design for testability built into the BVD VLSI chips is described in considerable detail; both the observability and the controllability of a VLSI chip are greatly enhanced by including these testability features.
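
    A minimal sketch of the functional-testing flow described above, assuming a hypothetical golden model simulate_bvd and tester hook read_chip_outputs (neither name comes from the abstract): chip responses are compared vector by vector against the software simulation outputs.

        def functional_test(stimuli, simulate_bvd, read_chip_outputs):
            """Compare chip responses against golden software-simulation outputs."""
            failures = []
            for i, vector in enumerate(stimuli):
                expected = simulate_bvd(vector)       # golden software model
                observed = read_chip_outputs(vector)  # response captured from the chip
                if observed != expected:
                    failures.append((i, vector, expected, observed))
            return failures  # an empty list means the design passed functionally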

    LSI/VLSI design for testability analysis and general approach

    The incorporation of testability characteristics into large-scale digital designs is not only necessary for effective device testing but also pertinent to enhancing device reliability. There are at least three major DFT techniques, namely self-checking, level-sensitive scan design (LSSD), and partitioning, each of which can be incorporated into a logic design to achieve a specific set of testability and reliability requirements. A detailed analysis of the design theory, implementation, fault coverage, hardware requirements, and application limitations of each of these techniques is also presented.
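
    As a toy illustration of the scan-path idea underlying LSSD, the following Python sketch shifts a test pattern into a chain of storage elements (controllability), applies one functional capture cycle, and shifts the response back out (observability). The rotate function standing in for the combinational logic is illustrative only, not from the paper.

        def scan_shift(chain, serial_in):
            """Shift bits in at position 0 while collecting serial output from the end."""
            out = []
            for bit in serial_in:
                out.append(chain[-1])
                chain[:] = [bit] + chain[:-1]
            return out

        def scan_test(pattern, next_state, n_ffs):
            chain = [0] * n_ffs
            scan_shift(chain, pattern)              # controllability: load any state
            chain[:] = next_state(chain)            # one functional capture cycle
            return scan_shift(chain, [0] * n_ffs)   # observability: unload the response

        # Example with a 3-bit rotate standing in for the combinational logic.
        response = scan_test([1, 0, 1], lambda s: s[-1:] + s[:-1], 3)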

    Identifying worst case test vectors for FPGA exposed to total ionization dose using design for testability techniques

    Electronic devices often operate in harsh environments that contain a variety of radiation sources. Radiation may damage the proper operation of a device in several ways. Radiation sources are found in terrestrial environments, in extra-terrestrial environments such as space, and in man-made sources such as nuclear reactors, biomedical devices, and high-energy particle physics experiment equipment. Depending on the operating environment of the device, the effects of radiation manifest in several forms, such as the total ionizing dose (TID) effect, or single event effects (SEEs) such as single event upset (SEU), single event gate rupture (SEGR), and single event latch-up (SEL). The TID effect increases the delay and leakage current of CMOS circuits, which may impair the proper operation of the integrated circuit. To ensure proper operation of these devices under radiation, thorough testing must be performed, especially in critical applications such as space and military systems. Although the standard that describes the procedure for testing electronic devices under radiation emphasizes the use of worst case test vectors (WCTVs), such vectors are never used in radiation testing because of the difficulty of generating them for the circuits under test. For decades, design for testability (DFT) has been the method of choice for test engineers testing digital circuits in industry, and it has become a mature technology that can be relied on. DFT is usually used with automatic test pattern generation (ATPG) software to generate vectors for testing application specific integrated circuits (ASICs), especially sequential circuits, against faults such as stuck-at faults and path delay faults. Surprisingly, however, radiation testing has not yet made use of this reliable technology. In this thesis, a novel methodology is proposed that extends DFT to generate WCTVs for delay failures in flash-based field programmable gate arrays (FPGAs) exposed to TID. The methodology is validated using a Microsemi ProASIC3 FPGA and a cobalt-60 facility.
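
    A hedged sketch of the reasoning behind WCTVs for TID-induced delay failures: because TID adds delay, the ATPG vector pair that sensitizes the longest (least-slack) path is the one that fails first as the device degrades. The path_delay timing hook below is hypothetical, not part of any tool named in the abstract.

        def worst_case_vectors(vector_pairs, path_delay, clock_period):
            """Rank ATPG vector pairs by sensitized-path delay; least slack first."""
            ranked = sorted(vector_pairs, key=path_delay, reverse=True)
            worst = ranked[0]
            slack = clock_period - path_delay(worst)
            return worst, slack  # the smallest-slack pair fails first as TID adds delay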

    Implementation of testability in VLSI circuits


    Design for testability of a latch-based design

    Abstract. The purpose of this thesis was to decrease the area of digital logic in a power management integrated circuit (PMIC) by replacing selected flip-flops with latches. The thesis consists of a theory part, which provides background for the work, and a practical part, which presents a latch register design and a design for testability (DFT) method for achieving an acceptable level of manufacturing fault coverage for it. The total area was decreased by replacing the flip-flops of read-write and one-time-programmable registers with latches. One set of negative-level-active primary latches was shared by all the positive-level-active latch registers in the same register bank. Clock gating was used to select which latch register the write data was loaded into from the primary latches. The latches were made transparent during the shift operation of partial scan testing. The observability of the latch register clock gating logic was improved by leaving the first bit of each latch register as a flip-flop, and the controllability was improved by inserting control points. The latch register design developed in this thesis resulted in a total area decrease of 5% and a register bank area decrease of 15% compared to a flip-flop-based reference design, while maintaining the same stuck-at fault coverage as the reference design.
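
    A behavioral sketch (not the thesis RTL) of the shared-primary-latch scheme described above: one set of negative-level-active primary latches feeds every positive-level-active latch register in a bank, and clock gating selects which register loads the data.

        class LatchBank:
            """Behavioral model: shared primary latches plus gated latch registers."""

            def __init__(self, n_regs, width):
                self.primary = [0] * width   # shared negative-level-active latches
                self.regs = [[0] * width for _ in range(n_regs)]

            def write(self, data, sel):
                # Clock low: the shared primary latches are transparent and sample data.
                self.primary = list(data)
                # Clock high: gating opens only the selected positive-level register.
                self.regs[sel] = list(self.primary)

        bank = LatchBank(n_regs=4, width=8)
        bank.write([1, 0, 1, 1, 0, 0, 1, 0], sel=2)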

    Scan cell design for enhanced delay fault testability

    Problems in testing scannable sequential circuits for delay faults are addressed. Modifications that improve circuit controllability and observability for delay fault testing are implemented efficiently in a scan cell design. A gate-array layout of this scan cell is designed and evaluated.
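
    For context, a delay fault test applies a vector pair: V1 initializes the path and V2 launches a transition that must reach the capture flip-flop within one clock period. A minimal check of that timing condition, with a hypothetical propagation_delay model, might look like this:

        def delay_test(v1, v2, propagation_delay, clock_period):
            """Two-vector delay test: V1 initializes, V2 launches the transition."""
            arrival = propagation_delay(v1, v2)   # time for the launched edge to arrive
            return arrival <= clock_period        # True: transition captured in time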

    Within-Die Delay Variation Measurement and Analysis for Emerging Technologies Using an Embedded Test Structure

    Both random and systematic within-die process variations (PVs) are growing more severe with shrinking geometries and increasing die size. Escalating variation in delay and power with reduced feature size places higher demands on the accuracy of variation models, whose availability can be used to improve the yield, and hence the profitability and product quality, of fabricated integrated circuits (ICs). Sources of within-die variation include optical source limitations and layout-based systematic effects (pitch, line-width variability, and microscopic etch loading). Unfortunately, accurate models of within-die PVs are becoming more difficult to derive because of their increasing sensitivity to design context. Embedded test structures (ETSs) continue to play an important role in the development of PV models and as a mechanism for improving correlations between hardware and models. Variations in path delays are increasing with scaling and are increasingly affected by 'neighborhood' interactions. To fully characterize within-die variations, delays must be measured in the context of actual core-logic macros, which requires an embedded test structure rather than traditional scribe-line test structures such as ring oscillators (ROs). Accurate measurements of within-die variations can be used, for example, to better tune models to actual hardware (model-to-hardware correlation). In this research project, I propose an embedded test structure called REBEL (Regional dELay BEhavior) that is designed to measure path delays in a minimally invasive fashion, and whose architecture measures path delays more accurately. Design-for-manufacturability (DFM) analysis is performed on 90 nm ASIC chips and 28 nm Zynq 7000 series FPGA boards. I present ASIC results on within-die path delay variations in a five-stage-pipeline floating-point unit (FPU), fabricated in IBM's 90 nm technology, used as a test vehicle in chip experiments carried out at nine temperature/voltage (TV) corners. Experimental data are also analyzed for path delay variations in short versus long paths. FPGA results on within-die and die-to-die variations in an Advanced Encryption Standard (AES) design with a single pipeline stage are also presented. Other analyses performed on the calibrated path delays include flip-flop propagation delays for both rising and falling edges (tPHL and tPLH), uncertainty analysis, path distribution analysis, short-versus-long-path variations, and within-die variation of mid-length paths. I also analyze the impact on delay when the chips are subjected to industrial-level temperature and voltage variations. The experimental results establish that the proposed REBEL provides capabilities similar to an off-chip logic analyzer; that is, it can capture the temporal behavior of a signal over time, including any static and dynamic hazards that may occur on the tested path. The ASIC results further show that path delays are correlated with the launch-capture (LC) interval used to time them; therefore, the calibration proposed in this work must be carried out to obtain an accurate analysis of within-die variations. Results on ASIC chips show that short paths can vary by up to 35% on average, while long paths vary by up to 20%, at nominal temperature and voltage. A similar trend occurs for within-die variations of mid-length paths, where the magnitudes reduce to 20% and 5%, respectively.

    The magnitude of delay variation in both of these analyses increases as temperature and voltage are changed to increase performance. High levels of within-die delay variation are undesirable from a design perspective, but they represent a rich source of entropy for applications that make use of 'secrets', such as authentication, hardware metering, and encryption. Physical unclonable functions (PUFs) are a class of primitives that leverage within-die variations as a means of generating random bit strings for these types of applications, including hardware security and trust. The Zynq FPGA die-to-die and within-die variation study shows that, on average, there is 5% within-die variation, and the die-to-die variation can range up to 3 ns; the die-to-die variations could be explored in further detail to study their spatial dependence. Additionally, I carried out research in the area of data mining for big data, focusing on decision tree classification (DTC) to speed up the classification step in a hardware implementation. For this purpose, I devised a pipelined architecture for axis-parallel binary decision tree classification that meets requirements on execution time and minimal resource usage in terms of area. The motivation for this work is that analyzing ever larger data sets has created abundant opportunities for algorithmic, architectural, and data-mining innovations, creating great demand for faster execution of these algorithms and driving improvements in execution time and resource utilization. Decision trees (DTs) have traditionally been implemented in software. Although software implementations of DTC are highly accurate, their execution times and resource utilization must still improve to meet the computational demands of an ever-growing industry; hardware implementations of DTs, on the other hand, have not been thoroughly investigated or reported in detail. I therefore propose a hardware-accelerated pipelined architecture that acquires data in parallel, with parallel engines working independently on different partitions of the data; each engine processes the data in a pipelined fashion to use resources more efficiently and reduce the time needed to process all the data records/tuples. Experimental results show that the proposed hardware acceleration of classification algorithms increases throughput, by reducing the number of clock cycles required to process the data and generate the results, while requiring minimal resources, and hence is area-efficient. The architecture also enables the algorithms to scale to increasingly large and complex data sets. We developed the DTC algorithm in detail and successfully explored techniques for adapting it to a hardware implementation. This system is 3.5 times faster than the existing hardware implementation of classification.
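
    A small sketch of the within-die variation metric the abstract reports (the spread of calibrated path delays expressed as a percentage of the mean); the numbers below are illustrative, not REBEL measurements.

        from statistics import mean

        def within_die_variation(path_delays):
            """path_delays: dict of path id -> list of calibrated delays in ns."""
            return {path: 100.0 * (max(d) - min(d)) / mean(d)
                    for path, d in path_delays.items()}

        # Illustrative values echoing the short-vs-long trend in the abstract.
        print(within_die_variation({"short": [0.90, 1.00, 1.15],
                                    "long": [4.80, 5.00, 5.20]}))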

    Design and testing of a microprocessor compatible 128-bit correlator chip

    Ankara: Department of Electrical and Electronics Engineering and Institute of Engineering and Sciences, Bilkent University, 1989. Thesis (Master's), Bilkent University, 1989. Includes bibliographical references (leaves 65-67). Topçu, Satılmış. M.S.

    Secure Split Test for Preventing IC Piracy by Un-Trusted Foundry and Assembly

    In the era of globalization, integrated circuit design and manufacturing are spread across different continents, which has posed several hardware-intrinsic security issues: overproduction of chips without the knowledge of the designer or OEM, insertion of hardware Trojans at the design and fabrication phases, faulty chips reaching the market from test centers, and so on. In this thesis work, we address the problem of counterfeit ICs reaching the market through test centers. The problem of counterfeit ICs has several dimensions, and each counterfeiting problem has its own solutions. Overbuilding of chips at an overseas foundry can be addressed using passive or active metering; the solution for keeping faulty chips from reaching open markets from overseas test centers is the secure split test (SST). A further improvement to SST, known as the Connecticut Secure Split Test (CSST), has been proposed by other researchers. In this work, we focus on improving CSST techniques in terms of security, test time, and area. In this direction, we design all the sub-blocks required for the CSST architecture, namely the RSA, TRNG, and scrambler blocks; study benchmark circuits such as s38417; and add scan chains to the benchmarks. Further, as a security measure, we add an XOR gate at the output of each scan chain to obfuscate the signal coming out of it. We then improve the security of the design by using a PUF circuit instead of a TRNG, avoiding the use of memory circuits; the PUF not only eliminates the memory circuits but also provides a way to perform functional testing. We have carried out a Hamming distance analysis for the introduced security measures, and the results show that the security of the design is reasonably good. As future work, we can focus on developing a circuit that is secured across the whole semiconductor supply chain, with a reasonable Hamming distance and low area overhead.
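
    A minimal sketch of the kind of Hamming distance analysis mentioned above, assuming scan-out bit streams collected under different key or challenge values (the data-collection step is not modeled); an average pairwise distance near 50% of the bit width suggests good obfuscation.

        def hamming(a, b):
            """Count bit positions where two equal-length bit lists differ."""
            return sum(x != y for x, y in zip(a, b))

        def average_hamming_percent(responses):
            """responses: list of equal-length bit lists, one per key value."""
            pairs = [(a, b) for i, a in enumerate(responses) for b in responses[i + 1:]]
            total = sum(hamming(a, b) for a, b in pairs)
            return 100.0 * total / (len(pairs) * len(responses[0]))

        # Illustrative only: three 8-bit scan-out streams under different keys.
        print(average_hamming_percent([[0, 1, 1, 0, 1, 0, 0, 1],
                                       [1, 1, 0, 0, 0, 1, 0, 1],
                                       [0, 0, 1, 1, 1, 1, 0, 0]]))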