16 research outputs found

    Design of a 1.9 GHz low-power LFSR circuit using the Reed-Solomon algorithm for Pseudo-Random Test Pattern Generation

    A linear feedback shift register (LFSR) is frequently used in built-in self-test (BIST) designs for pseudo-random test pattern generation. Test pattern volume and test power dissipation are key concerns in large, complex designs. The objective of this paper is to propose a new LFSR circuit based on a proposed Reed-Solomon (RS) algorithm. The RS algorithm is constructed by considering the factors of the maximum-length test pattern with a minimum distance over time, and it achieves effective test pattern generation with complexity of order O(m log2 m), where m denotes the total number of message bits. We analyzed our RS LFSR mathematically using the feedback polynomial function for an area-sensitive design, and the bit-wise stages of the proposed RS LFSR were simulated using the TSMC 130 nm IC design tool on the Mentor Graphics platform. Experimental results showed that the proposed LFSR generates effective pseudo-random test patterns with low test power dissipation (25.13 µW) while operating at a frequency as high as 1.9 GHz.
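
    For orientation, a conventional Fibonacci LFSR driven by a feedback polynomial can be sketched in a few lines. The 8-bit width and the maximal-length polynomial x^8 + x^6 + x^5 + x^4 + 1 below are illustrative choices, not the paper's RS-based design.

```python
# Minimal Fibonacci LFSR sketch for pseudo-random test pattern generation.
# The 8-bit width and tap set {8, 6, 5, 4} (a maximal-length feedback
# polynomial) are illustrative; the paper's RS-based polynomial is not
# reproduced here.

def lfsr_patterns(seed: int, taps: list[int], width: int, count: int):
    """Yield `count` pseudo-random patterns from a Fibonacci LFSR."""
    mask = (1 << width) - 1
    state = seed & mask
    assert state != 0, "an all-zero seed locks the LFSR"
    for _ in range(count):
        yield state
        feedback = 0
        for t in taps:                      # XOR the tapped bits
            feedback ^= (state >> (t - 1)) & 1
        state = ((state << 1) | feedback) & mask

# First five 8-bit patterns from seed 0x01.
for p in lfsr_patterns(0x01, taps=[8, 6, 5, 4], width=8, count=5):
    print(f"{p:08b}")
```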

    Built-In Self-Test (BIST) for Multi-Threshold NULL Convention Logic (MTNCL) Circuits

    This dissertation proposes a Built-In Self-Test (BIST) hardware implementation for Multi-Threshold NULL Convention Logic (MTNCL) circuits. Two different methods are proposed: an area-optimized topology that requires minimal area overhead, and a test-performance-optimized topology that uses parallelism and internal hardware to reduce the overall test time through additional controllability points. Furthermore, an automated software flow is proposed to insert, simulate, and analyze an input MTNCL netlist to obtain a desired fault coverage, where possible, through iterative digital and fault simulations. The proposed automated flow can produce both area-optimized and test-performance-optimized BIST circuits, along with scripts for digital and fault simulation using commercial software, which may be used for further manual verification or adjustment if desired.

    Towards an embedded board-level tester: study of a configurable test processor

    The demand for electronic systems with more features, higher performance, and lower power consumption increases continuously. This is a real challenge for design and test engineers, who have to deal with electronic systems of ever-increasing complexity while keeping production and test costs low and meeting critical time-to-market deadlines. For a test engineer working at the board level, this means that manufacturing defects must be detected as soon as possible and at low cost. However, classical techniques for testing modern printed circuit boards are not sufficient, and in the worst case they cannot be used at all. This is mainly due to modern packaging technologies, high device density, and the high operating frequencies of modern printed circuit boards, which lead to very long test times, low fault coverage, and high test costs.

    This dissertation addresses these issues and proposes an FPGA-based test approach for printed circuit boards. The concept is based on a configurable test processor that is temporarily implemented in the on-board FPGA and provides the corresponding mechanisms to communicate with external test equipment and with co-processors implemented in the FPGA. This embedded test approach provides the flexibility to implement test functions either in the external test equipment or in the FPGA. In this manner, tests are executed at speed, increasing fault coverage; test times are reduced; and the test system can be adapted automatically to the properties of the FPGA and the devices located on the board.

    An essential part of the FPGA-based test approach is the development of a test processor. This dissertation discusses the required properties of the processor and shows that adaptation to the specific test scenario plays a very important role in the optimization. For this purpose, the test processor is equipped with configuration parameters at the instruction set architecture and microarchitecture levels. Additionally, an automatic generation process for the test system and for the computation of some of the processor's configuration parameters is proposed; it takes as input a model known as the device under test model (DUT-M).

    To evaluate the entire FPGA-based test approach and the viability of a processor for testing printed circuit boards, the developed test system is used to test interconnections to two different devices: a static random-access memory (SRAM) and a liquid crystal display (LCD). Experiments were conducted to determine the resource utilization of the processor and the FPGA-based test system, and to measure test time when different test functions are implemented in the external test equipment or in the FPGA. The results show that the introduced approach is suitable for testing printed circuit boards and that the test processor represents a realistic alternative for board-level testing.
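
    As an illustration of the kind of test function that can be placed either in the external equipment or in the on-board FPGA, an interconnect test against the SRAM might follow a walking-ones pattern, as in the sketch below. The bus width and the write/read callables are hypothetical stand-ins for the test processor's actual hardware access mechanisms, which the abstract does not specify.

```python
# Illustrative walking-ones interconnect test of the kind an embedded test
# processor could run against an on-board SRAM. DATA_BUS_WIDTH and the
# write/read callables are hypothetical stand-ins for the real hardware
# access mechanisms.

DATA_BUS_WIDTH = 16  # assumed bus width

def walking_ones_test(write_word, read_word, address: int) -> list[int]:
    """Return the data lines whose walking-ones pattern did not read back."""
    faulty = []
    for bit in range(DATA_BUS_WIDTH):
        pattern = 1 << bit                 # exactly one data line driven high
        write_word(address, pattern)
        if read_word(address) != pattern:  # an open or short disturbs lines
            faulty.append(bit)
    return faulty

# Demo against a fault-free in-memory model of the SRAM.
sram = {}
print(walking_ones_test(sram.__setitem__, sram.__getitem__, address=0x100))
```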

    Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits

    The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features of the kind routinely employed in current high-density dynamic random-access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, in addition to the prospect of single-chip systems, will mandate the use of fault tolerance for improved reliability.

    A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault tolerance; it is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults, and simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered, and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures. Consideration of the problem of testing the test circuitry led to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and is tested as it performs its function.

    Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that the faulty units are discarded. This raises the question: what is the optimum size of a unit? A mathematical model linking yield and reliability is therefore developed to answer this question and to study the effects of parameters such as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by applying the model to a typical example. Another important result concerns the effect of periodic testing on reliability: it is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
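
    To make the sequential nature of stuck-open faults concrete, the sketch below models a 2-input CMOS NAND gate with a stuck-open pull-up transistor and applies the classic two-pattern test. The gate and fault site are illustrative; the thesis's own test sequence generation procedures are not reproduced here.

```python
# Two-pattern principle behind stuck-open testing, shown on a 2-input CMOS
# NAND gate with the pull-up (pMOS) transistor on input A stuck open. The
# gate and fault site are illustrative examples, not the thesis's procedures.

def nand_stuck_open_pmos_a(a: int, b: int, prev_out: int) -> int:
    """NAND gate whose pull-up transistor on input A is stuck open.

    When only that transistor should drive the output high, the output node
    instead retains its previous (charged) value - the memory effect that
    makes stuck-open faults sequential in nature.
    """
    if a == 1 and b == 1:
        return 0              # pull-down network conducts
    if a == 0 and b == 1:
        return prev_out       # only the faulty pMOS would pull the node up
    return 1                  # another pull-up path conducts

# Two-pattern test: T1 initialises the output low, T2 exposes the fault.
out = nand_stuck_open_pmos_a(1, 1, prev_out=1)    # T1: output driven to 0
out = nand_stuck_open_pmos_a(0, 1, prev_out=out)  # T2: fault-free NAND gives 1
assert out == 0  # reads 0 instead of the expected 1, so the fault is detected
```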

    A built-in self-test technique for high speed analog-to-digital converters

    Fundação para a Ciência e a Tecnologia (FCT) - PhD grant (SFRH/BD/62568/2009)

    Power constrained test scheduling in system-on-chip design

    With the development of VLSI technologies, especially with the coming of deep sub-micron semiconductor process technologies, power dissipation has become a critical factor that cannot be ignored, either in normal operation or in test mode of digital systems. Test scheduling must take both test concurrency and power dissipation constraints into consideration. To satisfy high fault coverage goals with minimum test application time under given power dissipation constraints, the testing of all components in the system should be performed in parallel as much as possible. The main objective of this thesis is to address the test-scheduling problem faced by SOC designers at the system level. Through the analysis of several existing scheduling approaches, we enlarge the basis on which current approaches minimize test application time and propose an efficient, integrated technique for power-constrained test scheduling of SOCs. The proposed merging approach is based on a tree-growing technique and can be used to overlay the block-test sessions in order to further reduce test application time. A number of experiments, based on academic benchmarks and industrial designs, have been carried out to demonstrate the usefulness and efficiency of the proposed approaches.
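
    A minimal sketch of the underlying idea, assuming a simple greedy packing rather than the thesis's tree-growing merging technique: block tests are grouped into concurrent sessions so that each session's summed test power stays within the budget. The core list and power budget below are made up purely for illustration.

```python
# Greedy sketch of power-constrained test scheduling: pack block tests into
# concurrent sessions so the summed test power of each session never exceeds
# the budget. This simplification stands in for the thesis's tree-growing
# merging technique; all numbers are invented for illustration.

def schedule_sessions(tests, power_budget):
    """tests: list of (name, test_time, test_power). Returns sessions."""
    # Longest tests first, so shorter tests fill the remaining power slack.
    pending = sorted(tests, key=lambda t: -t[1])
    sessions = []
    while pending:
        session, used_power = [], 0
        for t in pending[:]:
            if used_power + t[2] <= power_budget:
                session.append(t)
                used_power += t[2]
                pending.remove(t)
        sessions.append(session)
    return sessions

cores = [("cpu", 120, 40), ("dsp", 90, 35), ("ram", 60, 25), ("io", 30, 10)]
for i, s in enumerate(schedule_sessions(cores, power_budget=70)):
    # A session's time is that of its longest block test.
    print(f"session {i}: {[n for n, _, _ in s]}, time {max(t for _, t, _ in s)}")
```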

    Cellular Automata

    Modelling and simulation are disciplines of major importance for science and engineering. There is no science without models, and simulation has nowadays become a very useful, sometimes indispensable, tool for the development of both science and engineering. The main attraction of cellular automata is that, despite their conceptual simplicity, which makes them easy to implement in computer simulation and, in principle, amenable to a detailed and complete mathematical analysis, they are able to exhibit a wide variety of amazingly complex behaviour. This feature of cellular automata has attracted the attention of researchers from a wide variety of divergent fields, within the exact sciences and engineering but also in the social sciences, and sometimes beyond. The collective complex behaviour of numerous systems, which emerges from the interaction of a multitude of simple individuals, is conveniently modelled and simulated with cellular automata for very different purposes. In this book, a number of innovative applications of cellular automata models in the fields of Quantum Computing, Materials Science, Cryptography and Coding, and Robotics and Image Processing are presented.
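
    The flavour of this jump from simple rules to complex behaviour is easy to demonstrate with an elementary one-dimensional cellular automaton; Rule 110 and the lattice size below are arbitrary illustrative choices.

```python
# Elementary cellular automaton: complex behaviour emerging from a simple
# local rule. Rule 110 and the 81-cell lattice are illustrative choices.

def step(cells: list[int], rule: int = 110) -> list[int]:
    """One synchronous update of an elementary CA with periodic boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        # The 3-cell neighbourhood encodes an index 0..7 into the rule table.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

cells = [0] * 40 + [1] + [0] * 40        # single seed cell
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```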

    SCAN CHAIN BASED HARDWARE SECURITY

    Hardware has become a popular target for attackers seeking to break into computing and communication systems. From the legendary power analysis attacks discovered 20 years ago to the recent Intel Spectre and Meltdown attacks, security vulnerabilities in hardware design have been exploited for malicious purposes. With the emerging Internet of Things (IoT) applications, where devices are extremely resource-constrained, many proven-secure but computationally expensive cryptographic protocols cannot be applied. There is thus an urgent need to understand hardware vulnerabilities and develop cost-effective mitigation methods. One established field in the semiconductor and integrated circuit (IC) industry, known as IC test, has the goal of ensuring that fabricated ICs are free of manufacturing defects and perform the required functionalities. Testing is essential to isolate faulty chips from good ones, and the concept of design for test (DFT) has been integrated into the commercial IC design and fabrication process for several decades. The scan chain, which gives test engineers access to all the flip-flops in a chip through the scan-in (SI) and scan-out (SO) ports, is the backbone of industrial testing methods and can be found in almost all modern designs. Beyond IC testing, scan chains have found applications in intellectual property (IP) protection and IC identification. However, attackers can also leverage the controllability and observability of the scan chain as a side channel to break systems such as cryptographic chips. This dissertation addresses these two important security problems by proposing (1) a practical scan chain based security primitive for IP protection and (2) a partial scan chain framework that can mitigate all the existing scan-based attacks. First, we observe that each D flip-flop has two output ports, Q and Q', designed to simplify logic and used to reduce power consumption in IC test. The availability of both Q and Q' ports provides an opportunity for IP protection. More specifically, we can generate a digital fingerprint by selecting different connection styles between adjacent scan cells during the design of the scan chain. This method has two major advantages: fingerprints are created as a post-silicon procedure, so there is little fabrication overhead; and altering the connection style requires modifying the test vectors for each fingerprinted IP, which enables a non-intrusive fingerprint verification method. This addresses the overhead and detectability problems, two of the most challenging problems in designing practical IP fingerprinting techniques over the past two decades. Combined with the recently developed reconfigurable scan networks (RSNs) that are popular for embedded and IoT devices, we design an IC identification (ID) scheme utilizing the different connection styles. We perform experiments on standard benchmarks to demonstrate that our approach has low design overhead, and we conduct security analysis to show that such fingerprints and IC IDs are robust against various attacks. In the second part of this dissertation, we consider the scan chain side-channel attack, which has been reported as one of the most severe side-channel attacks on modern secure systems. We argue that current countermeasures are restricted by the requirement of providing direct SI and SO access for testing and thus suffer from leaving this side channel open to attackers.
    Therefore, we propose a novel public-private partial scan chain based approach whose basic idea is to remove the flip-flops that store sensitive information from the scan chain. This eliminates the scan chain side channel, but it also limits IC test. The key contribution of our proposed public-private partial scan chain design is that it keeps full test coverage while securing the scan chain. This is achieved by chaining the removed flip-flops into one or more private partial scan chains and adding protections to the SI and SO ports of those chains. Unlike traditional partial scan design, which not only fails to provide full fault coverage but also incurs huge overhead in test time and test vector generation time, we propose a set of techniques to ensure that the desired test vectors can be entered into the system efficiently. These techniques include test vector reordering, test vector reuse, and test vector generation based on a novel finite state machine (FSM) structure we have invented. On the other hand, to give test engineers the ability to observe test outputs for chip diagnosis without leaking information to attackers, we propose two lightweight mechanisms, one based on a linear feedback shift register (LFSR) and the other based on a configurable physical unclonable function (PUF). Finally, we discuss a protocol for realizing in-field test with our public-private partial scan chain. We conduct experiments with industrial scan design tools to demonstrate that the required hardware in our approach has negligible area overhead, provides full test coverage with reduced test time, and does not require regenerating test vectors. In sum, this dissertation focuses on the role of the scan chain, a conventional design-for-test facility, in hardware security. We show that scan chain features can be leveraged to create practical IP protection techniques, including IP watermarking and fingerprinting as well as IC identification and authentication. We also propose a novel public-private partial scan design principle to close the scan chain side channel to attackers. Through this dissertation work, we demonstrate that it is possible to develop highly practical scan chain based techniques that benefit both the IC test and hardware security communities.
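
    A minimal sketch of the Q/Q' connection-style idea, under the simplifying assumption of a bare shift register: each scan cell feeds its successor from either the Q output or the inverted Q' output, so the same scan-in stream yields a fingerprint-dependent scan-out stream. The chain length and style vectors below are invented for illustration and do not reflect the dissertation's actual design flow.

```python
# Sketch of scan chain fingerprinting via Q/Q' connection styles, modelled
# as a bare shift register. styles[i] == 1 means cell i drives its successor
# (or the SO port, for the last cell) from the inverted Q' output. The chain
# length and style vectors are illustrative assumptions.

def scan_shift(si_stream, styles):
    """Shift a bit stream through the chain; return the scan-out stream."""
    chain = [0] * len(styles)
    so_stream = []
    for bit in si_stream:
        # Observe SO through the last cell's (possibly inverting) connection.
        so_stream.append(chain[-1] ^ styles[-1])
        # Shift: each cell loads its predecessor's (possibly inverted) output.
        for i in range(len(chain) - 1, 0, -1):
            chain[i] = chain[i - 1] ^ styles[i - 1]
        chain[0] = bit
    return so_stream

stimulus = [1, 0, 1, 1, 0, 0, 1, 0]
print(scan_shift(stimulus, styles=[0, 1, 0, 0, 1, 0, 1, 0]))  # fingerprint A
print(scan_shift(stimulus, styles=[0, 0, 0, 0, 0, 0, 0, 0]))  # plain chain
```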

    On-Chip Analog Circuit Design Using Built-In Self-Test and an Integrated Multi-Dimensional Optimization Platform

    Nowadays, the rapid development of the system-on-chip (SoC) market introduces tremendous complexity into integrated circuit (IC) design. Meanwhile, the IC fabrication process is scaling down to allow a higher density of integration but makes chips more sensitive to process-voltage-temperature (PVT) variations. A successful IC product not only imposes great pressure on IC designers, who have to handle wider variations and enforce larger design margins, but also challenges the test procedure, leading to more check points and longer test times. To relieve the designers' burden and reduce the cost of testing, it is valuable to make IC chips able to test and tune themselves to some extent. In this dissertation, a fully integrated in-situ design validation and optimization (VO) hardware for analog circuits is proposed. It implements in-situ built-in self-test (BIST) techniques for analog circuits. Based on the data collected from BIST, the error between the measured and the desired performance of the target circuit is evaluated using a cost function. A digital multi-dimensional optimization engine is implemented to adaptively adjust the analog circuit parameters, seeking the minimum value of the cost function and thereby achieving the desired performance. To verify this concept, case studies of a 2nd/4th-order active-RC band-pass filter (BPF) and a 2nd-order Gm-C BPF, as well as all BIST and optimization blocks, are implemented on-chip. Apart from the VO system, several improved BIST techniques are also proposed in this dissertation. A single-tone sinusoidal waveform generator based on a finite-impulse-response (FIR) architecture, which utilizes an optimization algorithm to enhance its spur-free dynamic range (SFDR), is proposed; it achieves an SFDR of 59 to 70 dBc from 150 to 850 MHz after the optimization procedure. A low-distortion current-steering two-tone sinusoidal signal synthesizer based on a mixing-FIR architecture is also proposed. The two-tone synthesizer extends the FIR architecture to two stages and implements an up-conversion mixer to generate the two tones, achieving better than -68 dBc IM3 below a 480 MHz LO frequency without calibration. Moreover, an on-chip RF receiver linearity BIST methodology for a continuous- and discrete-time hybrid baseband chain is proposed. The proposed receiver chain implements a charge-domain FIR filter to notch the two excitation signals while exposing the third-order intermodulation (IM3) tones. It simplifies the linearity measurement procedure: using a power detector is enough to analyze the receiver's linearity. Finally, a low-cost, fully digital built-in analog tester for linear time-invariant (LTI) analog blocks is proposed. It adopts a time-to-digital converter (TDC) to measure the delays corresponding to a ramp excitation signal and is able to estimate the pole or zero locations of a low-pass LTI system.
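
    A minimal sketch of the measure-evaluate-adjust loop, assuming a hypothetical RC model of a tunable filter and a simple coordinate-descent engine in place of the dissertation's on-chip optimizer; the target frequency, tuning-code ranges, and step model are all illustrative assumptions.

```python
# Sketch of the BIST-driven validate-and-optimize loop: a stand-in
# measurement feeds a cost function, and coordinate descent adjusts two
# discrete tuning codes to minimize it. The RC model, target frequency, and
# 5-bit code ranges are illustrative assumptions, not the dissertation's
# actual on-chip engine.
import math

TARGET_F0 = 100e6  # desired centre frequency in Hz (illustrative)

def measure_f0(r_code: int, c_code: int) -> float:
    """Stand-in for a BIST measurement of an RC filter's centre frequency."""
    r = 1e3 * (1 + 0.02 * r_code)     # hypothetical tunable resistor model
    c = 1e-12 * (1 + 0.02 * c_code)   # hypothetical tunable capacitor model
    return 1 / (2 * math.pi * r * c)

def cost(r_code: int, c_code: int) -> float:
    """Normalized error between measured and desired performance."""
    return abs(measure_f0(r_code, c_code) - TARGET_F0) / TARGET_F0

# Coordinate descent over the two 5-bit tuning codes.
r_code = c_code = 16
improved = True
while improved:
    improved = False
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        tr, tc = r_code + dr, c_code + dc
        if 0 <= tr < 32 and 0 <= tc < 32 and cost(tr, tc) < cost(r_code, c_code):
            r_code, c_code, improved = tr, tc, True

print(f"codes ({r_code}, {c_code}) -> f0 = {measure_f0(r_code, c_code)/1e6:.1f} MHz")
```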