
    MISSED: an environment for mixed-signal microsystem testing and diagnosis

    A tight link between design and test data is proposed for speeding up test-pattern generation and diagnosis during mixed-signal prototype verification. Test requirements are incorporated as early as the behavioral level and specified in increasing detail at lower hierarchical levels. A strict separation between generic routines and implementation data makes reuse of software possible. A testability-analysis tool and test and DFT libraries help the designer guarantee testability. Hierarchical backtrace procedures, in combination with an expert system and fault libraries, assist the designer during mixed-signal chip debugging.
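
As a rough illustration of how a hierarchical backtrace might prune fault candidates against a fault library, here is a minimal C sketch; the hierarchy, fault classes, and module names are invented for the example and are not taken from the MISSED environment.

```c
#include <stdio.h>

/* Hypothetical design-hierarchy node; all names are invented for
 * illustration and are not taken from the MISSED environment. */
typedef struct Module {
    const char *name;
    int fault_class;              /* entry from a per-module fault library */
    struct Module *children[4];
    int n_children;
} Module;

/* Walk from a failing observation point down the hierarchy, keeping
 * only subtrees whose fault-library class matches the symptom. */
static void backtrace(const Module *m, int symptom, int depth)
{
    if (m->fault_class != symptom)
        return;                   /* prune: fault library rules this block out */
    printf("%*s%s is a fault candidate\n", depth * 2, "", m->name);
    for (int i = 0; i < m->n_children; i++)
        backtrace(m->children[i], symptom, depth + 1);
}

int main(void)
{
    Module adc  = { "adc", 1, {0}, 0 };
    Module filt = { "anti_alias_filter", 2, {0}, 0 };
    Module afe  = { "analog_front_end", 1, { &adc, &filt }, 2 };
    backtrace(&afe, 1, 0);        /* symptom class 1: e.g. gain error */
    return 0;
}
```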

    Cost modelling and concurrent engineering for testable design

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. As integrated circuits and printed circuit boards increase in complexity, testing becomes a major cost factor in the design and production of complex devices. Testability has to be considered during the design of complex electronic systems, and automatic test systems have to be used to facilitate the test; this is now widely accepted in industry. Both design for testability and the use of automatic test systems aim at reducing the cost of production testing or, sometimes, at making it possible at all. Many design-for-testability methods and test systems are available and can be configured into a production test strategy in order to achieve high quality in the final product. The designer has to select from the various options for creating a test strategy, maximising quality while minimising the total cost of the electronic system. This thesis presents a methodology for test strategy generation based on the economics of the electronic system's life cycle. It is a concurrent engineering approach that accounts for all effects of a test strategy on the electronic system during its life cycle by evaluating the associated cost. This objective methodology is used in an original test strategy planning advisory system, which supports test strategy planning for VLSI circuits as well as for digital electronic systems. The cost models used for evaluating the economics of test strategies are described in detail, and the test strategy planning system is presented, together with a methodology for making decisions based on estimated costing data. Results of applying the cost models and the test strategy planning system to evaluate the economics of test strategies for selected industrial designs are presented.
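
To make the life-cycle costing idea concrete, here is a minimal C sketch that compares two test strategies by adding up-front DFT and test costs to the expected field cost of escaping defects; the parameters and the simple escape model are assumptions for illustration, not the thesis's actual cost models.

```c
#include <stdio.h>

/* Illustrative-only cost model: the parameters and the escape formula
 * are assumptions for this sketch, not the thesis's actual models. */
typedef struct {
    const char *name;
    double dft_cost;        /* extra design/silicon cost per unit, $ */
    double test_cost;       /* production test cost per unit, $ */
    double fault_coverage;  /* fraction of defects detected, 0..1 */
} Strategy;

/* Life-cycle cost per unit: up-front costs plus the expected cost of
 * defective parts that escape to the field. */
static double life_cycle_cost(Strategy s, double defect_rate, double field_cost)
{
    double escapes = defect_rate * (1.0 - s.fault_coverage);
    return s.dft_cost + s.test_cost + escapes * field_cost;
}

int main(void)
{
    Strategy options[] = {
        { "functional only",  0.00, 0.40, 0.80 },
        { "full scan + ATPG", 0.15, 0.25, 0.99 },
    };
    for (int i = 0; i < 2; i++)
        printf("%-17s -> $%.3f/unit\n", options[i].name,
               life_cycle_cost(options[i], 0.02, 50.0));
    return 0;
}
```

With these invented numbers the scan-based strategy wins despite its higher up-front cost, which is exactly the kind of trade-off an economics-driven test strategy planner has to evaluate.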

    Tester for chosen sub-standard of the IEEE 802.1Q

    This master's thesis analyzes the IEEE 802.1Q standard of the TSN group and designs a hardware test module. The test module is written in VHDL and can be implemented on an Intel Stratix® V GX FPGA (5SGXEA7N2F45C2) development board. The IEEE 802.1Q (TSN) standard defines deterministic, real-time communication over an Ethernet network using a global time base and a correct schedule for transmitting and receiving messages. The main functions of the standard are time synchronization, traffic scheduling, and network configuration; each is defined by several sub-standards. According to IEEE 802.1Q, these sub-standards can be combined arbitrarily, although some cannot operate independently and must rely on the functions of other sub-standards. A sub-standard can be realized in software, in hardware, or as a combination of both. On this basis, software-related sub-standards were excluded, as were sub-standards that depend on others, and IEEE 802.1Qbu was selected as a suitable candidate for a hardware test. Various testing approaches are explained, such as DFT, BIST, ATPG, and other techniques. The Protocol Aware (PA) technique was chosen for the hardware test because it speeds up testing, enables reuse, and shortens time to market. The test module consists of two objects (a generator and a monitor) that implement the IEEE 802.1Qbu sub-standard. The generator produces random or non-random stimuli and sends them to the device under test in the correctly defined protocol; the monitor receives Ethernet frames and verifies their correctness. Both objects are designed the same way at the top level and consist of four modules: an Avalon MM interface, two templates, and one port. The Avalon MM interface handles software-to-hardware communication: it receives packets from software and decodes them according to the defined protocol and sub-protocol, where the sub-protocol consists of a command and its value, and the whole object is controlled by the decoded command and value (see the sketch below). A template generates or verifies random or non-random data; two templates were implemented for express and preemptable traffic as defined by IEEE 802.1Qbu. The ports handle communication between the device under test and the templates according to the standard: the generator port selects and transmits frames according to their priority and transmission time, while the monitor port receives frames into a content-addressable memory that checks the frame priority and forwards each frame to the correct template. The results showed that this testing technique achieves high testing speed, reusability, and fast implementation.
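
The command-plus-value sub-protocol referenced above can be illustrated with a small C sketch of the dispatch step; the field layout and command codes below are invented for the example and do not reproduce the tester's actual encoding.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sub-protocol word: one command plus a value, following
 * the tester's "command + value" scheme. The field widths and command
 * codes here are invented for this sketch. */
typedef struct {
    uint8_t  cmd;
    uint32_t value;
} SubProtocolWord;

enum {                      /* hypothetical command codes */
    CMD_SET_PRIORITY = 0x01,
    CMD_SET_TX_TIME  = 0x02,
    CMD_START_GEN    = 0x03,
};

/* Control the object (generator or monitor) from a decoded word. */
static void dispatch(SubProtocolWord w)
{
    switch (w.cmd) {
    case CMD_SET_PRIORITY: printf("priority <- %u\n", (unsigned)w.value); break;
    case CMD_SET_TX_TIME:  printf("tx time  <- %u ns\n", (unsigned)w.value); break;
    case CMD_START_GEN:    printf("generator started\n"); break;
    default:               printf("unknown command 0x%02x\n", w.cmd);
    }
}

int main(void)
{
    SubProtocolWord setup[] = {
        { CMD_SET_PRIORITY, 5 }, { CMD_SET_TX_TIME, 1000 }, { CMD_START_GEN, 0 },
    };
    for (int i = 0; i < 3; i++)
        dispatch(setup[i]);
    return 0;
}
```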

    Ensuring a High Quality Digital Device through Design for Testability

    An electronic device is reliable if it is available for use most of the time throughout its life. Reliability can be degraded by mishandling and by use under abnormal operating conditions. A high-quality product cannot be achieved without proper verification and testing during the product development cycle, and if a design is difficult to test, it is very likely that most faults will not be detected before the product is shipped to the customer. This paper describes how product quality can be improved by making the hardware design testable. Various design-for-testability techniques were discussed, and a three-bit counter circuit was used to illustrate the benefits of design for testability using the scan-chain methodology.
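
The scan-chain mechanics behind the paper's counter example can be shown in a few lines of C: shift a chosen state into the three flip-flops, apply one functional (capture) clock so the counter logic acts on it, then shift the response back out. This is a behavioral sketch of the methodology, not the paper's circuit.

```c
#include <stdio.h>

/* Minimal sketch of scan-chain testing on a three-bit counter:
 * shift a chosen state in through scan-in, clock once in functional
 * mode, then shift the captured response out through scan-out. */

static unsigned next_state(unsigned s) { return (s + 1) & 7u; } /* counter logic */

int main(void)
{
    unsigned chain = 0;                 /* three scan flip-flops, bits 2..0 */
    unsigned pattern = 5;               /* state to load: 101 */

    /* Scan mode: shift the pattern in, MSB first (3 shift clocks). */
    for (int i = 2; i >= 0; i--)
        chain = ((chain << 1) | ((pattern >> i) & 1u)) & 7u;

    /* Functional mode: one capture clock applies the counter logic. */
    chain = next_state(chain);

    /* Scan mode again: shift the response out and compare. */
    printf("loaded 5, captured %u (expected 6)\n", chain);
    return 0;
}
```

The point of the technique is visible even in this toy: any internal state becomes directly controllable and observable through two pins, so a tester can set up and check states the counter would otherwise reach only after many functional clocks.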

    High level behavioural modelling of boundary scan architecture.

    This project involves the development of a software tool that automatically integrates the IEEE 1149.1/JTAG Boundary Scan Test Architecture into an ASIC (Application Specific Integrated Circuit) design. The tool requires the original design (the ASIC) to be described in the VHDL-IEEE 1076 hardware description language. The tool consists of two major elements: i) a parsing and insertion algorithm developed and implemented in 'C'; ii) a high-level model of the Boundary Scan Test Architecture implemented in 'VHDL'. The parsing and insertion algorithm identifies the design's input/output (I/O) terminals, their types, and the order in which they appear in the ASIC design. It then attaches a suitable Boundary Scan Cell to each I/O, except power and ground, and inserts the high-level models of the full Boundary Scan Architecture into the ASIC without altering the design's core structure.
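
The port-classification step of such a parsing algorithm might look like the following C sketch, which skips power and ground pins and marks every other terminal for a boundary-scan cell; the data structures and skip list are simplified assumptions, not the tool's actual implementation.

```c
#include <stdio.h>
#include <string.h>

/* Toy version of the parsing step: classify each port of a VHDL entity
 * and decide whether a boundary-scan cell is needed. The port record
 * and the skip list are simplified assumptions for this sketch. */
typedef struct { const char *name; const char *dir; } Port;

static int needs_bscan_cell(Port p)
{
    const char *skip[] = { "vdd", "vss", "gnd" };   /* power/ground pins */
    for (int i = 0; i < 3; i++)
        if (strcmp(p.name, skip[i]) == 0)
            return 0;
    return 1;
}

int main(void)
{
    Port ports[] = {
        { "clk", "in" }, { "data", "inout" }, { "q", "out" },
        { "vdd", "in" }, { "gnd",  "in" },
    };
    for (int i = 0; i < 5; i++)
        if (needs_bscan_cell(ports[i]))
            printf("attach boundary-scan cell to %-5s (%s)\n",
                   ports[i].name, ports[i].dir);
    return 0;
}
```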

    Conception et test des circuits et systèmes numériques à haute fiabilité et sécurité

    Research activities I carried out after my nomination as Chargé de Recherche deal with the definition of methodologies and tools for the design, test, and reliability of secure digital circuits and trustworthy manufacturing. More recently, we have started a new research activity on the test of 3D stacked integrated circuits based on the use of Through Silicon Vias (TSVs). Moreover, thanks to the relationships I have maintained after my post-doc in Italy, I have kept cooperating with Politecnico di Torino on topics related to the test and reliability of memories and microprocessors.

Secure and Trusted Devices

Security is a critical part of information and communication technologies and the necessary basis for obtaining confidentiality, authentication, and integrity of data. The importance of security is confirmed by the extremely high growth of the smart-card market over the last 20 years. "Le Monde Informatique" reported in the 2007 article "Computer Crime and Security Survey" that financial losses due to attacks on "secure objects" in the digital world exceed $11 billion. Since the race between the developers of these secure devices and attackers keeps accelerating, also due to the heterogeneity and sheer number of new systems, improving the resistance of such components is today's major challenge.

Among all possible security threats, the vulnerability of electronic devices that implement cryptographic functions (including smart cards and electronic passports) has become the Achilles' heel of the last decade. Indeed, even though recent crypto-algorithms have been proven resistant to cryptanalysis, certain fraudulent manipulations of the hardware implementing such algorithms can allow confidential information to be extracted. So-called side-channel attacks were the first type of attack to target the physical device. They are based on information gathered from the physical implementation of a cryptosystem: for instance, by correlating the power consumed with the data manipulated by the device, it is possible to discover the secret encryption key. Nevertheless, this point is widely addressed, and integrated circuit (IC) manufacturers have already developed different kinds of countermeasures.

More recently, new threats have menaced secure devices and the security of the manufacturing process. A first issue is the trustworthiness of the manufacturing process. On one side, secure devices must ensure very high production quality in order not to leak confidential information through a malfunction, so possible defects due to manufacturing imperfections must be detected. This requires high-quality test procedures that rely on test features increasing the controllability and observability of inner points of the circuit. Unfortunately, this is harmful from a security point of view, so access to these test features must be protected from unauthorized users. Another threat is the possibility for an untrusted manufacturer to make malicious alterations to the design (for instance, to bypass or disable the security fence of the system). Nowadays, many steps of the production cycle of a circuit are outsourced, and for economic reasons manufacturing is often carried out by foundries located in foreign countries.
The threat posed by so-called Hardware Trojan Horses, long considered theoretical, is beginning to materialize.

A second issue is the hazard of faults that can appear during the circuit's lifetime and affect its behavior, whether as soft errors or as deliberate manipulations known as fault attacks. The latter can be based on intentional modification of the circuit's environment (e.g., applying extreme temperature, exposing the IC to radiation, X-rays, ultraviolet or visible light, or tampering with the clock frequency) so that the function implemented by the device produces an erroneous result. The attacker can then discover secret information by comparing the erroneous result with the correct one. In-the-field detection of any failing behavior is therefore of prime interest for taking further action, such as discontinuing operation or triggering an alarm. In addition, today's smart cards use 90 nm technology, and according to various chip suppliers, 65 nm technology will be in effect by 2013-2014. Since the energy required to make a transistor switch decreases with these new technologies, next-generation secure systems will become even more sensitive to various classes of fault attacks.

Based on these considerations, within the group I work with, we have proposed new methods, architectures, and tools to solve the following problems:

• Test of secure devices: unfortunately, classical digital-circuit testing techniques cannot easily be used in this context. Classical testing solutions rely on design-for-testability techniques that add hardware to the circuit in order to provide full controllability and observability of internal states. Because crypto-processors and other cores in a secure system must pass high-quality test procedures to ensure that data are correctly processed, testing of crypto chips faces a dilemma: design-for-testability schemes aim at high controllability and observability of the device, while security demands minimal controllability and observability in order to hide the secret. We have therefore proposed, on one side, enhanced scan-based test techniques that exploit compaction schemes to reduce the observability of internal information while preserving a high level of testability (a sketch of the idea follows this list); on the other side, we have proposed Built-In Self-Test for such devices in order to avoid scan-chain-based test.

• Reliability of secure devices: we proposed an on-line self-test architecture for hardware implementations of the Advanced Encryption Standard (AES). The solution exploits the inherent spatial replication of a parallel architecture to implement functional redundancy at low cost.

• Fault attacks: one of the most powerful types of attack on secure devices is based on the intentional injection of faults (for instance, using a laser beam) into the system while an encryption is taking place. By comparing the outputs of the circuit with and without fault injection, it is possible to identify the secret key. To face this problem, we have analyzed how to use error detection and correction codes as a countermeasure against this type of attack, and we have proposed a new code-based architecture. Moreover, we have proposed a bulk built-in current sensor that detects the presence of undesired current in the substrate of the CMOS device.

• Fault simulation: to evaluate the effectiveness of countermeasures against fault attacks, we developed an open-source fault simulator able to handle the most classical fault models as well as user-defined electrical-level fault models, in order to accurately model the effect of laser injections on CMOS circuits.

• Side-channel attacks: these exploit physical, data-related information leaking from the device (e.g., current consumption or electromagnetic emission). One of the most intensively studied attacks is Differential Power Analysis (DPA), which relies on observing the chip's power fluctuations during data processing. I studied this type of attack in order to evaluate the influence of countermeasures against fault attacks on the power consumption of the device; indeed, a countermeasure introduced against one type of attack may insert circuitry whose power consumption is related to the secret key, thereby easing another type of attack. We have developed a flexible, integrated simulation-based environment for validating a digital circuit under this attack, and all the architectures we designed have been validated with this tool. Moreover, we developed a methodology that drastically reduces the time required to validate countermeasures against this type of attack.
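
As referenced in the first bullet above, the compaction idea can be sketched in C: only the XOR of several scan-chain outputs is observable per shift cycle, so a fault still disturbs the compacted stream while the raw internal state stays hidden. The chain contents and sizes below are arbitrary; this illustrates the principle, not the proposed architecture.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy space compactor: four parallel scan chains, but only the XOR of
 * their serial outputs is observable at the test pin. Parameters are
 * illustrative, not the proposed architecture. */
#define CHAINS 4
#define LENGTH 8

int main(void)
{
    /* Captured test responses; bit k of each byte is the value shifted
     * out of that chain at cycle k. */
    uint8_t good[CHAINS]   = { 0xA5, 0x3C, 0x0F, 0x96 };
    uint8_t faulty[CHAINS] = { 0xA5, 0x3C, 0x4F, 0x96 }; /* one flipped bit */

    for (int cycle = 0; cycle < LENGTH; cycle++) {
        int g = 0, f = 0;
        for (int c = 0; c < CHAINS; c++) {     /* XOR across the chains */
            g ^= (good[c]   >> cycle) & 1;
            f ^= (faulty[c] >> cycle) & 1;
        }
        if (g != f)
            printf("fault observed at shift cycle %d without exposing "
                   "raw chain contents\n", cycle);
    }
    return 0;
}
```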

TSV-Based 3D Stacked Integrated Circuits Test

The stacking of integrated circuits using Through Silicon Vias (TSVs) is a promising technology that keeps integration scaling beyond Moore's law by tightly integrating several dies in a 3D fashion. Nevertheless, 3D integrated circuits present many test challenges, including testing at the different levels of the 3D fabrication process: pre-, mid-, and post-bond. Pre-bond test targets the individual dies at wafer level, covering not only classical logic (digital logic, I/Os, RAM, etc.) but also unbonded TSVs. Mid-bond test targets partially assembled 3D stacks, while post-bond test targets the final circuit. The activities carried out within this topic cover two main issues:

• Pre-bond test of TSVs: the electrical model of a TSV buried within the substrate of a CMOS circuit is a capacitance connected to ground (when the substrate is grounded). The main assumption is that a defect may affect the value of that capacitance, so by measuring the variation of the capacitance it is possible to check whether the TSV was fabricated correctly. We have proposed a method to measure the capacitance based on the charge/discharge delay of the RC network containing the TSV (a numerical sketch follows this section).

• Test infrastructures for 3D stacked integrated circuits: testing a die before stacking it onto another die introduces the problem of a dynamic test infrastructure, where test data must be routed to a specific die depending on the fabrication step reached. Solutions proposed in the literature allow the test paths within the circuit to be reconfigured on the fly. We have started working on an extension of the IEEE P1687 test standard that uses automatic die detection based on pull-up resistors.
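
The pre-bond measurement above rests on the standard RC step response: the node charges to a threshold Vth after t = RC·ln(Vdd/(Vdd - Vth)), so a defect that changes C shifts the observed delay. A minimal numerical sketch in C, with R, C, and the defect model chosen arbitrarily for illustration:

```c
#include <stdio.h>
#include <math.h>

/* Charge delay of an RC node to a threshold voltage:
 * t = R * C * ln(Vdd / (Vdd - Vth)). The values below are arbitrary
 * illustrative numbers, not measured TSV parameters. */
static double charge_delay(double r_ohm, double c_farad,
                           double vdd, double vth)
{
    return r_ohm * c_farad * log(vdd / (vdd - vth));
}

int main(void)
{
    double r = 1e3;            /* driver resistance, ohms */
    double c_good = 50e-15;    /* nominal TSV capacitance, farads */
    double c_bad  = 80e-15;    /* assumed defect raises the capacitance */

    double t_good = charge_delay(r, c_good, 1.0, 0.5);
    double t_bad  = charge_delay(r, c_bad,  1.0, 0.5);

    printf("nominal delay:   %.2f ps\n", t_good * 1e12);
    printf("defective delay: %.2f ps (flagged if beyond tolerance)\n",
           t_bad * 1e12);
    return 0;
}
```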

Memory and Microprocessor Test and Reliability

Thanks to device shrinking and the miniaturization of fabrication technology, the performance of microprocessors and memories has grown by more than five orders of magnitude over the last 30 years. With this technology trend, new problems and challenges must be faced, such as reliability, transient errors, variability, and aging. In the last five years I have worked in cooperation with the Testgroup of Politecnico di Torino (Italy) on a new method to validate on-line the correctness of a microprocessor's program execution. The main idea is to monitor a small set of control signals of the processor in order to identify incorrect activation sequences; this approach can detect both permanent and transient errors in the processor's internal logic. Concerning memory test, we have proposed a new approach to automatically generate test programs starting from a functional description of the possible faults in the memory. Moreover, we proposed a new methodology, based on microprocessor error-probability profiling, that estimates fault-injection results without the need for a typical fault-injection setup. The methodology rests on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability that each of its instructions executes successfully in the presence of a soft error, and a static, very fast analysis of the control and data flow of the target software application to compute its probability of success (a sketch follows).
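
The profiling methodology can be reduced to a count-weighted average, under the simplifying assumption that a single soft error strikes one dynamic instruction chosen uniformly at random; all class probabilities and counts below are invented for illustration.

```c
#include <stdio.h>

/* Sketch of error-probability profiling: a single soft error is assumed
 * to hit one dynamic instruction, chosen uniformly at random, so the
 * program's success probability is the execution-count-weighted average
 * of per-instruction-class success probabilities. All numbers invented. */
typedef struct {
    const char *iclass;
    double p_success;   /* P(correct result | soft error hits this class) */
    long   count;       /* dynamic executions in the target program */
} Profile;

int main(void)
{
    Profile prof[] = {
        { "alu",    0.999, 120000 },
        { "load",   0.995,  40000 },
        { "store",  0.996,  25000 },
        { "branch", 0.990,  15000 },
    };
    long total = 0;
    for (int i = 0; i < 4; i++)
        total += prof[i].count;

    double p = 0.0;
    for (int i = 0; i < 4; i++)
        p += (double)prof[i].count / total * prof[i].p_success;

    printf("estimated P(correct execution | one soft error) = %.5f\n", p);
    return 0;
}
```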