1,012 research outputs found

    A timing-driven pseudo-exhaustive testing of VLSI circuits

    Get PDF
    The objective of this paper is to reduce the delay penalty of bypass storage cell (bsc) insertion for pseudo-exhaustive testing. We first propose a tight delay lower-bound algorithm which estimates the minimum circuit delay for each node after bsc insertion. By understanding where the lower-bound algorithm loses optimality, we then propose a bsc insertion heuristic that tries to insert bscs so that the final delay is as close to the lower bound as possible. Our experiments show that the results of our heuristic are either optimal, because they match the delay lower bounds, or very close to the optimal solutions. (International conference, May 28-31, 2000, Geneva, Switzerland.)
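
    As a concrete illustration of how such a heuristic might be organised, the sketch below greedily inserts bscs at nodes of a gate-level DAG until every node's input cone is small enough for pseudo-exhaustive testing, at each step choosing the insertion that keeps the longest-path delay smallest. This is a hypothetical reconstruction, not the paper's algorithm; the graph representation (order, fanin, delay, primary) and the cone-size criterion are assumptions made only for the example.

        # Hypothetical sketch of a delay-aware bsc insertion heuristic (not the paper's).
        # `order`: nodes in topological order; `fanin[n]`: driver nodes of n (empty list
        # for primary inputs); `delay[n]`: gate delay of n; `primary`: set of primary inputs.
        from collections import defaultdict

        def longest_delay(order, fanin, delay, has_bsc):
            """Longest combinational path; a bsc restarts the path at its node."""
            arr = {}
            for n in order:
                start = 0.0 if has_bsc[n] else max((arr[p] for p in fanin[n]), default=0.0)
                arr[n] = start + delay[n]
            return max(arr.values())

        def cone(n, fanin, has_bsc, primary):
            """Return (cone inputs, internal cone nodes) of n; bscs and primary
            inputs terminate the backward traversal."""
            inputs, internal, stack = set(), set(), [n]
            while stack:
                for p in fanin[stack.pop()]:
                    if p in inputs or p in internal:
                        continue
                    if has_bsc[p] or p in primary:
                        inputs.add(p)
                    else:
                        internal.add(p)
                        stack.append(p)
            return inputs, internal

        def insert_bscs(order, fanin, delay, primary, max_cone=8):
            """Insert bscs until every cone has at most max_cone inputs, greedily
            picking the candidate that minimises the resulting critical-path delay."""
            has_bsc = defaultdict(bool)
            while True:
                wide = [n for n in order
                        if len(cone(n, fanin, has_bsc, primary)[0]) > max_cone]
                if not wide:
                    return {n for n in order if has_bsc[n]}
                cands = [v for v in cone(wide[0], fanin, has_bsc, primary)[1]
                         if not has_bsc[v]]
                if not cands:          # cone cannot shrink further (a single very wide gate)
                    return {n for n in order if has_bsc[n]}
                def trial(c):
                    has_bsc[c] = True
                    d = longest_delay(order, fanin, delay, has_bsc)
                    has_bsc[c] = False
                    return d
                has_bsc[min(cands, key=trial)] = True

    A tight delay lower bound of the kind the paper proposes could replace the exhaustive trial step above, steering each insertion toward the bound instead of re-evaluating every candidate.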

    Timing Measurement Platform for Arbitrary Black-Box Circuits Based on Transition Probability

    No full text

    A Hardware Security Solution against Scan-Based Attacks

    Get PDF
    Scan-based Design for Test (DfT) schemes have been widely used to achieve high fault coverage for integrated circuits. The scan technique provides full access to the internal nodes of the device under test, either to control them or to observe their response to input test vectors. While such comprehensive access is highly desirable for testing, it is not acceptable for secure chips, as it is open to exploitation by various attacks. In this work, new methods are presented to protect the security of critical information against scan-based attacks. In the proposed methods, access via the scan chain to the circuit containing secret information is severely limited in order to reduce the risk of a security breach. To ensure the testability of the circuit, a built-in self-test which utilizes an LFSR as the test pattern generator (TPG) is proposed. The proposed schemes can be used as a countermeasure against side-channel attacks with a low area overhead compared with existing solutions in the literature.
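
    To make the BIST pattern-generation idea concrete, the sketch below models a Fibonacci LFSR of the kind commonly used as an on-chip TPG; the width, feedback polynomial and seed are illustrative choices, not the configuration used by the authors.

        # Illustrative 16-bit Fibonacci LFSR test-pattern generator (assumed parameters).
        # The feedback polynomial x^16 + x^14 + x^13 + x^11 + 1 is maximal length, so the
        # sequence cycles through 2**16 - 1 distinct non-zero states before repeating.
        def lfsr16(seed=0xACE1):
            lfsr = seed
            while True:
                yield lfsr
                bit = (lfsr ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1  # taps 16,14,13,11
                lfsr = (lfsr >> 1) | (bit << 15)

        # During self-test, successive states would be applied to the scan-isolated
        # logic in place of externally supplied scan patterns.
        gen = lfsr16()
        patterns = [next(gen) for _ in range(4)]
        print([hex(p) for p in patterns])   # ['0xace1', '0x5670', '0xab38', '0x559c']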

    Delay Measurements and Self Characterisation on FPGAs

    No full text
    This thesis examines new timing measurement methods for self delay characterisation of Field-Programmable Gate Array (FPGA) components and delay measurement of complex circuits on FPGAs. Two novel measurement techniques, based on analysis of a circuit's output failure rate and of its transition probability, are proposed for accurate, precise and efficient measurement of propagation delays. The transition-probability-based method is especially attractive, since it requires no modification of the circuit under test and few hardware resources, making it an ideal method for physical delay analysis of FPGA circuits. The relentless advancement of process technology has led to smaller and denser transistors in integrated circuits. While FPGA users benefit from this in terms of increased hardware resources for more complex designs, the actual productivity of FPGAs in terms of timing performance (operating frequency, latency and throughput) has lagged behind the potential improvements offered by the improved technology, owing to delay variability in FPGA components and the inaccuracy of the timing models used in FPGA timing analysis. The ability to measure the delay of any arbitrary circuit on an FPGA offers many opportunities for on-chip characterisation and physical timing analysis, allowing delay variability to be tracked accurately and variation-aware optimisations to be developed, reducing the productivity gap observed in today's FPGA designs. The measurement techniques are developed into complete self-measurement and characterisation platforms in this thesis, demonstrating their practical use in actual FPGA hardware for cross-chip delay characterisation and accurate delay measurement of both complex combinational and sequential circuits, further reinforcing their value in addressing the delay variability problem in FPGAs.
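
    The general principle behind failure-rate-based delay measurement can be illustrated with a simple period sweep: shorten the capture-clock period until the output stops toggling reliably, and take that period as the path delay. The sketch below is only an illustration of that principle; the measure_tp hook, the 95% threshold and the search bounds are assumptions, and the thesis's transition-probability method differs in its details.

        # Illustrative period-sweep delay estimate (assumed interface, not the thesis's design).
        # `measure_tp(period_ns)` stands in for a hardware measurement returning the observed
        # transition probability at the capture register for a given clock period.
        def estimate_delay(measure_tp, t_min=0.1, t_max=20.0, tol=0.001):
            """Binary-search the shortest capture period at which the output still
            toggles as often as it does with ample timing slack."""
            reference = measure_tp(t_max)                 # behaviour with relaxed timing
            lo, hi = t_min, t_max
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if measure_tp(mid) >= 0.95 * reference:   # transitions still captured reliably
                    hi = mid                              # path meets timing at this period
                else:
                    lo = mid                              # capture fails; period too short
            return hi                                     # ~ propagation delay of the path

        # Toy model of the hardware hook: a path with a 3.2 ns delay whose output
        # toggles on half of the input vectors whenever timing is met.
        toy = lambda period: 0.5 if period >= 3.2 else 0.0
        print(f"estimated delay ~ {estimate_delay(toy):.2f} ns")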

    Testing of leakage current failure in ASIC devices exposed to total ionizing dose environment using design for testability techniques

    Get PDF
    Due to advancements in technology, electronic devices are increasingly relied upon to operate under harsh conditions. Radiation is one of the main causes of failure in electronic devices. Depending on the operating environment, the radiation source can be terrestrial or extra-terrestrial. In terrestrial settings, devices may be used in nuclear reactors or biomedical equipment, where the radiation is man-made; in extra-terrestrial settings, devices may be used in satellites, the International Space Station or spacecraft, where the radiation comes from sources such as the Sun. The effects of radiation also depend on the operating environment and fall into two categories: total ionizing dose (TID) effects and single event effects (SEEs). TID effects can degrade the delay and leakage current of CMOS circuits and can therefore hinder the operation of integrated circuits. Before such circuits are deployed, particularly in radiation-critical applications such as military and space systems, they must be tested under radiation to avoid failures during operation. The standard approach to testing electronic devices is to generate worst case test vectors (WCTVs) and test the circuits under radiation using these vectors. However, generating WCTVs has proven very challenging, so this approach is rarely used for TID effects. Design for testability (DFT) has been widely used in industry for digital circuit testing. DFT is usually combined with automatic test pattern generation software to generate test vectors against fault models of manufacturing defects for application-specific integrated circuits (ASICs). However, it has never been used to generate test vectors for leakage current testing of ASICs exposed to a TID radiation environment. The purpose of this thesis is to use DFT to identify WCTVs for leakage current failures in sequential circuits of ASIC devices exposed to TID. A novel methodology was devised to identify these test vectors. The methodology is validated and compared with previous, non-DFT methods, and is shown to overcome their limitations.

    Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits

    Get PDF
    The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features which are routinely employed in current high density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, in addition to the prospect of single-chip systems, will mandate the use of fault-tolerance for improved reliability. A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault-tolerance, but is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered, and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. Consideration of the problem of testing the test circuitry led to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and is tested as it performs its function. Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that the faulty units are discarded. This raises the question: what is the optimum size of a unit? A mathematical model linking yield and reliability is therefore developed to answer this question and to study the effects of parameters such as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by applying the model to a typical example. Another important result concerns the effect of periodic testing on reliability. It is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
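
    For orientation, a common textbook form of such a yield model (not necessarily the exact model developed in the thesis) treats the n required units and s spares as failing independently, with unit yield y given by a Poisson defect model over the unit area A_u and defect density D:

        y = e^{-A_u D},
        Y_block = \sum_{k=0}^{s} \binom{n+s}{k} (1 - y)^{k} \, y^{\,n+s-k}

    The block survives as long as at most s of its n+s units are defective; replacing y with a unit's time-dependent survival probability gives the corresponding reliability expression, which is the kind of yield-reliability coupling the trade-off study described in the abstract explores.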

    Identifying worst case test vectors for FPGA exposed to total ionization dose using design for testability techniques

    Get PDF
    Electronic devices often operate in harsh environments that contain a variety of radiation sources. Radiation can damage the proper operation of these devices in various ways. Radiation sources can be terrestrial or extra-terrestrial, as in space, or man-made, as in nuclear reactors, biomedical devices and high-energy particle physics experiments. Depending on the operating environment of the device, the resulting radiation effects manifest in several forms, such as total ionizing dose (TID) effects or single event effects (SEEs), including single event upset (SEU), single event gate rupture (SEGR) and single event latch-up (SEL). TID effects increase the delay and the leakage current of CMOS circuits, which may compromise the proper operation of the integrated circuit. To ensure proper operation of these devices under radiation, thorough testing must be performed, especially in critical applications such as space and military systems. Although the standard describing the procedure for testing electronic devices under radiation emphasizes the use of worst case test vectors (WCTVs), these are never used in radiation testing because of the difficulty of generating them for circuits under test. For decades, design for testability (DFT) has been the preferred choice of test engineers for testing digital circuits in industry, and it has become a mature technology that can be relied on. DFT is usually used with automatic test pattern generation (ATPG) software to generate test vectors for application-specific integrated circuits (ASICs), especially sequential circuits, against faults such as stuck-at faults and path delay faults. Surprisingly, however, radiation testing has not yet made use of this reliable technology. In this thesis, a novel methodology is proposed to extend the use of DFT to generate WCTVs for delay failures in Flash-based field programmable gate arrays (FPGAs) exposed to total ionizing dose (TID). The methodology is validated using a MicroSemi ProASIC3 FPGA and a cobalt-60 facility.

    VLSI design of high-speed adders for digital signal processing applications.

    Get PDF

    Algorithm/Architecture Co-Design for Low-Power Neuromorphic Computing

    Full text link
    The development of computing systems based on the conventional von Neumann architecture has slowed down in the past decade as complementary metal-oxide-semiconductor (CMOS) technology scaling becomes more and more difficult. To satisfy the ever-increasing demand for computing power, neuromorphic computing has emerged as an attractive alternative. This dissertation focuses on developing a learning algorithm, hardware architecture, circuit components, and design methodologies for low-power neuromorphic computing that can be employed in various energy-constrained applications. A top-down approach is adopted in this research. Starting from algorithm-architecture co-design, a hardware-friendly learning algorithm is developed for spiking neural networks (SNNs). The possibility of estimating gradients from spike timings is explored. The learning algorithm is developed for ease of hardware implementation, as well as for compatibility with many well-established learning techniques developed for classic artificial neural networks (ANNs). An SNN hardware design equipped with the proposed on-chip learning algorithm is implemented in CMOS technology. In this design, two unique features of SNNs, event-driven computation and inference with progressive precision, are leveraged to reduce the energy consumption. In addition to low-power SNN hardware, accelerators for ANNs are also presented to accelerate the adaptive dynamic programming algorithm. An efficient and flexible single-instruction-multiple-data architecture is proposed to exploit the inherent data-level parallelism in the inference and learning of ANNs. In addition, the accelerator is augmented with a virtual update technique, which improves the throughput and energy efficiency considerably. Lastly, two techniques at the architecture-circuit level are introduced to mitigate the degraded reliability of the memory system in neuromorphic hardware caused by the aggressively scaled supply voltage and integration density. The first method uses on-chip feedback to compensate for process variation, and the second improves the throughput and energy efficiency of a conventional error-correction method. PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144149/1/zhengn_1.pd
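
    The event-driven character of SNN computation mentioned above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron: the membrane state is touched only when an input spike arrives, with the leak between events applied analytically, which is what makes event-driven evaluation cheap when spikes are sparse. This sketch is purely illustrative and unrelated to the dissertation's circuits; the time constant, threshold and inputs are arbitrary.

        # Minimal event-driven LIF neuron (illustrative only, arbitrary parameters).
        import math

        def lif_events(spike_times, weights, tau=20.0, v_th=1.0):
            """spike_times: sorted arrival times (ms); weights: synaptic weight per spike.
            Returns the neuron's output spike times."""
            v, t_last, out = 0.0, 0.0, []
            for t, w in zip(spike_times, weights):
                v *= math.exp(-(t - t_last) / tau)   # leak since the previous event
                v += w                               # integrate the incoming spike
                t_last = t
                if v >= v_th:                        # threshold crossing -> output spike
                    out.append(t)
                    v = 0.0                          # reset after firing
            return out

        print(lif_events([1.0, 2.0, 3.0, 30.0], [0.4, 0.4, 0.4, 0.5]))
        # -> [3.0]: the three closely spaced spikes accumulate past threshold,
        #    while the late, isolated spike decays away without firing.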

    Design for testability of a latch-based design

    Get PDF
    Abstract. The purpose of this thesis was to decrease the area of digital logic in a power management integrated circuit (PMIC) by replacing selected flip-flops with latches. The thesis consists of a theory part, which provides background for the work, and a practical part, which presents a latch register design and a design for testability (DFT) method for achieving an acceptable level of manufacturing fault coverage for it. The total area was decreased by replacing the flip-flops of read-write and one-time programmable registers with latches. One set of negative-level-active primary latches was shared by all the positive-level-active latch registers in the same register bank. Clock gating was used to select which latch register the write data was loaded into from the primary latches. The latches were made transparent during the shift operation of partial scan testing. The observability of the latch register clock gating logic was improved by leaving the first bit of each latch register as a flip-flop, and the controllability was improved by inserting control points. The latch register design developed in this thesis resulted in a total area decrease of 5% and a register bank area decrease of 15% compared with a flip-flop-based reference design, while maintaining the same stuck-at fault coverage as the reference design.
    • 

    corecore