11 research outputs found

    Test and Testability of Asynchronous Circuits

    Full text link
    The ever-increasing transistor shrinkage and higher clock frequencies are causing serious clock distribution, power management, and reliability issues. Asynchronous design is predicted to have a significant role in tackling these challenges because of its distributed control mechanism and on-demand, rather than continuous, switching activity. Null Convention Logic (NCL) is a robust and low-power asynchronous paradigm that introduces new challenges to test and testability algorithms because 1) the lack of deterministic timing in NCL complicates the management of test timing, 2) all NCL gates are state-holding and even simple combinational circuits show sequential behaviour, and 3) stuck-at faults on gate internal feedback (GIF) of NCL gates do not always cause an incorrect output and therefore are undetectable by automatic test pattern generation (ATPG) algorithms. Existing test methods for NCL use clocked hardware to control the timing of test. Such test hardware could introduce metastability issues into otherwise highly robust NCL devices. Also, existing test techniques for NCL handle the high-statefulness of NCL circuits by excessive incorporation of test hardware which imposes additional area, propagation delay and power consumption. This work, first, proposes a clockless self-timed ATPG that detects all faults on the gate inputs and a share of the GIF faults with no added design for test (DFT). Then, the efficacy of quiescent current (IDDQ) test for detecting GIF faults undetectable by a DFT-less ATPG is investigated. Finally, asynchronous test hardware, including test points, a scan cell, and an interleaved scan architecture, is proposed for NCL-based circuits. To the extent of our knowledge, this is the first work that develops clockless, self-timed test techniques for NCL while minimising the need for DFT, and also the first work conducted on IDDQ test of NCL. 
The proposed methods are applied to multiple NCL circuits with up to 2,633 NCL gates (10,000 CMOS Boolean gates), in 180 nm and 45 nm technologies, and show average fault coverage of 88.98% for ATPG alone, 98.52% including IDDQ test, and 99.28% when incorporating test hardware. Given that this fault coverage includes detection of GIF faults, our work has 13% higher fault coverage than previous work. Also, because our proposed clockless test hardware eliminates the need for double-latching, it reduces the average area and delay overhead of previous studies by 32% and 50%, respectively.
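The coverage figures above combine by simple fault-list arithmetic. As a rough sketch (the fault counts below are invented for illustration; the abstract does not publish its raw fault lists), detections accumulate across the three test stages:

```python
# Illustrative only: fault counts are invented, not taken from the thesis data.

def fault_coverage(detected: int, total: int) -> float:
    """Fault coverage as a percentage of the total fault list."""
    return 100.0 * detected / total

total_faults = 10_000           # assumed size of the collapsed fault list
detected_by_atpg = 8_898        # ATPG alone              -> 88.98 %
detected_with_iddq = 9_852      # ATPG + IDDQ             -> 98.52 %
detected_with_dft = 9_928       # ATPG + IDDQ + test HW   -> 99.28 %

coverage_atpg = fault_coverage(detected_by_atpg, total_faults)
coverage_full = fault_coverage(detected_with_dft, total_faults)
```

Each later stage only ever adds detected faults, so coverage is monotonically non-decreasing across the stages.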

    Design and Test of High-Reliability, High-Security Digital Circuits and Systems

    Get PDF
    The research activities I have carried out since my nomination as Chargé de Recherche deal with the definition of methodologies and tools for the design, test, and reliability of secure digital circuits and trustworthy manufacturing. More recently, we have started a new research activity on the test of 3D stacked integrated circuits based on the use of Through Silicon Vias (TSVs). Moreover, thanks to the relationships I have maintained since my post-doc in Italy, I have continued cooperating with Politecnico di Torino on topics related to the test and reliability of memories and microprocessors.

Secure and Trusted Devices

Security is a critical part of information and communication technologies and is the necessary basis for obtaining confidentiality, authentication, and integrity of data. The importance of security is confirmed by the extremely high growth of the smart-card market over the last 20 years. According to the "Computer Crime and Security Survey" article in "Le Monde Informatique" (2007), financial losses due to attacks on "secure objects" in the digital world exceed $11 billion. As the race between the developers of these secure devices and attackers accelerates, driven also by the number and heterogeneity of new systems, improving the resistance of such components has become a major challenge.

Among all possible security threats, the vulnerability of electronic devices that implement cryptographic functions (including smart cards and electronic passports) has become the Achilles' heel of the last decade. Indeed, even though recent crypto-algorithms have been proven resistant to cryptanalysis, certain fraudulent manipulations of the hardware implementing such algorithms can allow confidential information to be extracted. So-called Side-Channel Attacks were the first type of attack to target the physical device. They are based on information gathered from the physical implementation of a cryptosystem.
For instance, by correlating the power consumed with the data manipulated by the device, it is possible to discover the secret encryption key. Nevertheless, this point is widely addressed, and integrated circuit (IC) manufacturers have already developed several kinds of countermeasures.

More recently, new threats have emerged against secure devices and the security of the manufacturing process. A first issue is the trustworthiness of the manufacturing process. On one side, secure devices must ensure very high production quality in order not to leak confidential information through a malfunctioning device; possible defects due to manufacturing imperfections must therefore be detected. This requires high-quality test procedures that rely on test features which increase the controllability and observability of internal points of the circuit. Unfortunately, this is harmful from a security point of view, so access to these test features must be protected from unauthorized users. Another threat is the possibility for an untrusted manufacturer to make malicious alterations to the design (for instance, to bypass or disable the security fence of the system). Nowadays, many steps of the production cycle of a circuit are outsourced; for economic reasons, manufacturing is often carried out by foundries located in foreign countries. The threat posed by so-called Hardware Trojan Horses, long considered theoretical, is beginning to materialize.

A second issue is the hazard of faults that can appear during the circuit's lifetime and that may affect its behavior by way of soft errors or deliberate manipulations, called Fault Attacks.
Fault attacks can be based on the intentional modification of the circuit's environment (e.g., applying extreme temperature; exposing the IC to radiation, X-rays, ultraviolet or visible light; or tampering with the clock frequency) in such a way that the function implemented by the device generates an erroneous result. The attacker can discover secret information by comparing the erroneous result with the correct one. In-the-field detection of any failing behavior is therefore of prime interest for taking further action, such as discontinuing operation or triggering an alarm. In addition, today's smart cards use 90 nm technology, and according to various chip suppliers, 65 nm technology will be effective on the horizon of 2013-2014. Since the energy required to force a transistor to switch is reduced in these new technologies, next-generation secure systems will become even more sensitive to various classes of fault attacks.

Based on these considerations, within the group I work with, we have proposed new methods, architectures, and tools to solve the following problems:

• Test of secure devices: unfortunately, classical techniques for digital circuit testing cannot be easily used in this context. Classical testing solutions are based on Design-For-Testability techniques that add hardware components to the circuit to provide full controllability and observability of internal states. Because crypto-processors and other cores in a secure system must pass high-quality test procedures to ensure that data are correctly processed, testing of crypto chips faces a dilemma: design-for-testability schemes aim to provide high controllability and observability of the device, while security requires minimal controllability and observability in order to hide the secret.
We have therefore proposed, on one side, the use of enhanced scan-based test techniques that exploit compaction schemes to reduce the observability of internal information while preserving a high level of testability, and, on the other side, the use of Built-In Self-Test for such devices in order to avoid scan-chain-based test.

• Reliability of secure devices: we proposed an on-line self-test architecture for hardware implementations of the Advanced Encryption Standard (AES). The solution exploits the inherent spatial replication of a parallel architecture to implement functional redundancy at low cost.

• Fault Attacks: one of the most powerful types of attack on secure devices is based on the intentional injection of faults (for instance, using a laser beam) into the system while an encryption occurs. By comparing the outputs of the circuit with and without the injected fault, it is possible to identify the secret key. To face this problem, we analyzed how to use error detection and correction codes as a countermeasure against this type of attack, and we proposed a new code-based architecture. Moreover, we proposed a bulk built-in current sensor that detects the presence of undesired current in the substrate of the CMOS device.

• Fault simulation: to evaluate the effectiveness of countermeasures against fault attacks, we developed an open-source fault simulator able to perform fault simulation for the most classical fault models as well as user-defined electrical-level fault models, to accurately model the effect of laser injections on CMOS circuits.

• Side-Channel Attacks: these exploit physical, data-related information leaking from the device (e.g., current consumption or electromagnetic emission). One of the most intensively studied attacks is Differential Power Analysis (DPA), which relies on observing the chip's power fluctuations during data processing.
I studied this type of attack in order to evaluate the influence of countermeasures against fault attacks on the power consumption of the device. Indeed, the introduction of countermeasures for one type of attack could lead to the insertion of circuitry whose power consumption is related to the secret key, thereby making another type of attack easier. We developed a flexible, integrated, simulation-based environment that allows a digital circuit to be validated against this attack; all the architectures we designed have been validated with this tool. Moreover, we developed a methodology that drastically reduces the time required to validate countermeasures against this type of attack.

TSV-based 3D Stacked Integrated Circuits Test

The stacking of integrated circuits using TSVs (Through Silicon Vias) is a promising technology that sustains integration beyond Moore's law: TSVs make it possible to tightly integrate multiple dies in a 3D fashion. Nevertheless, 3D integrated circuits present many test challenges, including testing at the different levels of the 3D fabrication process: pre-, mid-, and post-bond. Pre-bond test targets the individual dies at wafer level, testing not only classical logic (digital logic, IOs, RAM, etc.) but also unbonded TSVs. Mid-bond test targets partially assembled 3D stacks, whereas post-bond test targets the final circuit.

The activities carried out within this topic cover two main issues:

• Pre-bond test of TSVs: the electrical model of a TSV buried within the substrate of a CMOS circuit is a capacitance connected to ground (when the substrate is connected to ground). The main assumption is that a defect may affect the value of that capacitance. By measuring the variation of the capacitance's value, it is possible to check whether the TSV is correctly fabricated.
We have proposed a method to measure the value of the capacitance based on the charge/discharge delay of the RC network containing the TSV.

• Test infrastructures for 3D stacked integrated circuits: testing a die before stacking it onto another die introduces the problem of a dynamic test infrastructure, where test data must be routed to a specific die depending on the fabrication step reached. New solutions proposed in the literature allow the test paths within the circuit to be reconfigured on the fly. We have started working on an extension of the IEEE P1687 test standard that performs automatic die detection based on pull-up resistors.

Memory and Microprocessor Test and Reliability

Thanks to device shrinking and the miniaturization of fabrication technology, the performance of microprocessors and memories has grown by more than five orders of magnitude over the last 30 years. With this technology trend, new problems and challenges must be faced, such as reliability, transient errors, variability, and aging.

Over the last five years, I have worked in cooperation with the Testgroup of Politecnico di Torino (Italy) on a new method to validate on-line the correctness of the program execution of a microprocessor. The main idea is to monitor a small set of control signals of the processor in order to identify incorrect activation sequences. This approach can detect both permanent and transient errors in the internal logic of the processor. Concerning the test of memories, we have proposed a new approach to automatically generate test programs starting from a functional description of the possible faults in the memory. Moreover, we proposed a new methodology, based on microprocessor error-probability profiling, that aims at estimating fault-injection results without the need for a typical fault-injection setup.
The proposed methodology is based on two main ideas: a one-time fault-injection analysis of the microprocessor architecture to characterize the probability of successful execution of each of its instructions in the presence of a soft error, and a static, very fast analysis of the control and data flow of the target software application to compute its probability of success.
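The two-step estimate described above can be sketched as follows. The opcodes, per-instruction success probabilities, and instruction counts below are invented for illustration, and independence of errors across instructions is assumed:

```python
# Step 1 (done once per architecture): per-instruction probability of a correct
# result when a soft error strikes during that instruction. Values are invented.
per_instruction_p = {
    "add": 0.999,
    "load": 0.990,
    "branch": 0.995,
}

# Step 2 (fast, static): combine the probabilities over the application's
# instruction mix, obtained from static control/data-flow analysis.
def program_success_probability(profile: dict) -> float:
    """Multiply per-instruction success probabilities over the executed mix,
    assuming errors affect instructions independently."""
    p = 1.0
    for opcode, count in profile.items():
        p *= per_instruction_p[opcode] ** count
    return p

p = program_success_probability({"add": 10, "load": 5, "branch": 2})
```

This avoids a full fault-injection campaign per application: only the one-time architectural characterization needs injection hardware.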

    Programmable CMOS Analog-to-Digital Converter Design and Testability

    Get PDF
    In this work, a programmable second-order oversampling CMOS delta-sigma analog-to-digital converter (ADC) designed in a 0.5µm n-well CMOS process is presented for integration in sensor nodes for wireless sensor networks. The digital cascaded integrator-comb (CIC) decimation filter is designed to operate at three different oversampling ratios of 16, 32, and 64, giving three different resolutions of 9, 12, and 14 bits, respectively, which impact the power consumption of the sensor nodes. Since most of the power consumed in the CIC decimator is due to the integrators, an alternate design is introduced that inserts coder circuits and reuses the same integrators for the different resolutions and oversampling ratios to reduce power consumption. The measured peak signal-to-noise ratio (SNR) for the designed second-order delta-sigma modulator is 75.6dB at an oversampling ratio of 64, 62.3dB at an oversampling ratio of 32, and 45.3dB at an oversampling ratio of 16. The implementation of a built-in current sensor (BICS) that takes into account the increased background current of defect-free circuits and the effects of process variation on ΔIDDQ testing of CMOS data converters is also presented. The BICS uses frequency as its output for fault detection in the CUT: a fault is detected when the output frequency deviates more than ±10% from the reference frequency. The output frequencies of the BICS for various model parameters are simulated to check the effect of process variation on the frequency deviation. A design for on-chip testability of a CMOS ADC by the linear-ramp histogram technique, using a synchronous counter as the register in the code detection unit (CDU), is also presented. A brief overview of the histogram technique, the formulae used to calculate the ADC parameters, the design implemented in the 0.5µm n-well CMOS process, and the results and effectiveness of the design are described.
The registers in this design are then replaced by 6T-SRAM cells, yielding a hardware-optimized on-chip testability scheme for the CMOS ADC by the linear-ramp histogram technique using 6T-SRAM as the register in the CDU. The on-chip linear-ramp histogram technique can be seamlessly combined with the ΔIDDQ technique for improved testability, increased fault coverage, and reliable operation.
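The linear-ramp histogram technique mentioned above works by driving the ADC with a slow ramp, counting how often each output code appears, and comparing the counts against the ideal uniform count. A minimal sketch, with invented code counts for a toy 3-bit converter (not the thesis's measured data):

```python
# Illustrative linear-ramp histogram: derive differential nonlinearity (DNL)
# from per-code hit counts. End codes are excluded because they absorb the
# ramp's over/under-range and do not reflect code width.

def dnl_from_histogram(hist: list) -> list:
    """DNL per inner code, in LSB."""
    inner = hist[1:-1]                      # drop the saturated end codes
    ideal = sum(inner) / len(inner)         # expected hits per code for a linear ramp
    return [h / ideal - 1.0 for h in inner]

hist = [120, 100, 98, 102, 100, 101, 99, 130]   # hypothetical 3-bit code counts
dnl = dnl_from_histogram(hist)
```

A code that never appears (DNL of -1 LSB) indicates a missing code; integral nonlinearity follows as the running sum of the DNL values.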

    Quiescent current testing of CMOS data converters

    Get PDF
    Power supply quiescent current (IDDQ) testing has been very effective in detecting physical defects such as opens, shorts, and bridging defects in VLSI circuits designed in CMOS processes. However, in sub-micron VLSI circuits, IDDQ is masked by the increased subthreshold (leakage) current of MOSFETs, reducing the efficiency of IDDQ testing. In this work, an attempt has been made to perform robust IDDQ testing in the presence of increased leakage current by suitably modifying some of the test methods normally used in industry. Digital CMOS integrated circuits have been tested successfully for physical defects using IDDQ and ΔIDDQ methods. However, testing of analog circuits is still a problem due to the variation in design from one specific application to another. The increased leakage current further complicates not only the design but also the testing. Mixed-signal integrated circuits such as data converters are even more difficult to test because both analog and digital functions are built on the same substrate. We have re-examined both the IDDQ and ΔIDDQ methods of testing digital CMOS VLSI circuits and added features to minimize the influence of leakage current. We have designed built-in current sensors (BICS) for on-chip testing of analog and mixed-signal integrated circuits. We have also combined quiescent current testing with oscillation and transient current test techniques to map a large number of manufacturing defects on a chip. In testing, we have used a simple method of injecting faults simulating manufacturing defects, invented in our VLSI research group. We present the design and testing of analog and mixed-signal integrated circuits with on-chip BICS: an operational amplifier, a 12-bit charge-scaling-architecture digital-to-analog converter (DAC), a 12-bit recycling-architecture analog-to-digital converter (ADC), and an operational amplifier with floating-gate inputs. The designed circuits are fabricated in 0.5 μm and 1.5 μm n-well CMOS processes and tested.
Experimentally observed results of the fabricated devices are compared with SPICE simulations using MOS level-3 and BSIM3.1 model parameters for the 1.5 μm and 0.5 μm n-well CMOS technologies, respectively. We have also explored the possibility of using noise in VLSI circuits for defect testing and present the method we have developed.
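The ΔIDDQ idea referred to above compares the quiescent-current *difference* between successive test vectors instead of the absolute value, so that uniform leakage cancels out. A minimal pass/fail sketch, with invented currents and threshold (in microamperes):

```python
# Illustrative delta-IDDQ decision: leakage shifts all readings together, but a
# defect (e.g. a bridging short activated by one vector) produces a large step
# between consecutive vectors. Values below are hypothetical.

def delta_iddq_fail(iddq_readings: list, threshold_ua: float) -> bool:
    """Flag a defect if any vector-to-vector IDDQ step exceeds the threshold."""
    deltas = [abs(b - a) for a, b in zip(iddq_readings, iddq_readings[1:])]
    return any(d > threshold_ua for d in deltas)

good = delta_iddq_fail([5.0, 5.2, 4.9, 5.1], threshold_ua=1.0)    # small steps: pass
bad = delta_iddq_fail([5.0, 5.1, 45.0, 5.2], threshold_ua=1.0)    # defect step: fail
```

The same readings evaluated against an absolute IDDQ limit could all pass on a leaky-but-good die or all fail on a low-leakage die, which is exactly the masking problem the delta form sidesteps.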

    Design-for-Test of Mixed-Signal Integrated Circuits

    Get PDF

    Prognostics and Health Management of Electronics by Utilizing Environmental and Usage Loads

    Get PDF
    Prognostics and health management (PHM) is a method that permits the reliability of a system to be evaluated in its actual application conditions. By determining the advent of failure, procedures can be developed to mitigate, manage, and maintain the system. Since electronic systems control most systems today, and their reliability is usually critical for system reliability, PHM techniques are needed for electronics. To enable prognostics, a methodology was developed to extract the load parameters required for damage assessment from irregular time-load data. As part of the methodology, an algorithm was developed that extracts cyclic ranges and means, ramp rates, dwell times, dwell loads, and the correlation between load parameters. The algorithm enables significant reduction of the time-load data without compromising the features that are essential for damage estimation. The load parameters are stored in bins with an a-priori calculated (optimal) bin width. The binned data are then used with a Gaussian kernel function for density estimation of the load parameters for use in damage assessment and prognostics. The method was shown to accurately extract the desired load parameters and to enable condensed storage of load histories, thus improving the resource efficiency of the sensor nodes. An approach was developed to assess the impact of uncertainties in measurements, model inputs, and damage models on prognostics. The approach uses sensitivity analysis to identify the dominant input variables that influence the model output, and uses the distributions of measured load parameters and input variables in a Monte Carlo simulation to provide a distribution of accumulated damage. Using regression analysis of the accumulated-damage distributions, the remaining life is then predicted with confidence intervals. The proposed method was demonstrated using an experimental setup for predicting interconnect failures on an electronic board subjected to field conditions.
A failure-precursor-based approach was developed for remaining-life prognostics by analyzing resistance data in conjunction with usage temperature loads. Using the data from the PHM experiment, a model was developed to estimate resistance based on measured temperature values. The differences between the actual and estimated resistance values in the time domain were analyzed to predict the onset and progress of interconnect degradation. Remaining life was predicted by trending several features, including mean peaks, kurtosis, and 95% cumulative values of the resistance-drift distributions.
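The precursor idea above — estimate resistance from temperature on healthy data, then watch the residual — can be sketched as follows. The linear resistance-temperature model and all sample values are assumptions for illustration, not the study's data or model:

```python
# Fit R(T) on healthy baseline data with ordinary least squares, then compute
# the residual (actual - estimated) for a later measurement. A persistently
# positive residual suggests interconnect degradation not explained by
# temperature alone.

def fit_linear(xs, ys):
    """Least-squares line y = a + b*x (returns intercept a, slope b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

temps = [20.0, 30.0, 40.0, 50.0]     # degC, healthy baseline (invented)
resist = [1.00, 1.02, 1.04, 1.06]    # ohms, healthy baseline (invented)
a, b = fit_linear(temps, resist)

# Later in-field reading: resistance higher than temperature alone explains.
residual = 1.20 - (a + b * 35.0)     # positive drift -> possible degradation onset
```

Trending features of the residual distribution over time (means of peaks, kurtosis, upper percentiles) then gives the remaining-life estimate described in the abstract.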

    Power supply current [IPS] based testing of CMOS amplifier circuit with and without floating gate input transistors

    Get PDF
    This work presents a case study that attempts to improve the fault diagnosis and testability of power-supply-current-based testing applied to a typical two-stage CMOS operational amplifier, and extends it to an operational amplifier with floating-gate input transistors. The proposed test method takes advantage of good fault coverage through a simple power supply current measurement, which needs only an ac input stimulus and no additional circuitry. Faults simulating possible manufacturing defects are introduced using fault-injection transistors. In the present work, the variation of the ac ripple in the power supply current IPS drawn through VDD under an ac input stimulus is measured to detect the injected faults in the CMOS amplifier. The effect of parametric variation is taken into consideration by setting a tolerance limit of ±5% on the fault-free IPS value: a fault is identified if IPS falls outside the band given by the tolerance limit. The presented method can also be generalized to test structures for other floating-gate MOS analog and mixed-signal integrated circuits.
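The pass/fail rule described above reduces to a simple band check on the measured supply current. A minimal sketch, with hypothetical current values (the units and reference are assumptions, not the study's measurements):

```python
# IPS-based fault detection: flag a fault when the measured supply-current
# ripple deviates more than +/-5 % from the fault-free reference value.

def ips_fault(ips_measured: float, ips_fault_free: float, tol: float = 0.05) -> bool:
    """True if the measured IPS falls outside the +/-tol band around the reference."""
    return abs(ips_measured - ips_fault_free) > tol * ips_fault_free

ok = ips_fault(102.0, 100.0)       # within the 5 % band -> no fault
faulty = ips_fault(88.0, 100.0)    # outside the band    -> fault detected
```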

    Testing a CMOS operational amplifier circuit using a combination of oscillation and IDDQ test methods

    Get PDF
    This work presents a case study that attempts to improve the fault diagnosis and testability of the oscillation testing methodology applied to a typical two-stage CMOS operational amplifier. The proposed test method takes advantage of the good fault coverage of a simple oscillation-based test technique, which needs no test-signal generation, and combines it with quiescent supply current (IDDQ) testing to provide fault confirmation. A built-in current sensor (BICS), which introduces insignificant performance degradation of the circuit under test (CUT), is used to monitor changes in the quiescent supply current of the CUT. Testability is further enhanced in the testing procedure by a simple fault-injection technique. The approach is attractive for its simplicity, robustness, and suitability for built-in self-test (BIST) implementation. It can also be generalized to the oscillation-based test structures of other CMOS analog and mixed-signal integrated circuits. Practical results and simulations confirm the functionality of the proposed test method.
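The combination of the two tests amounts to a two-step verdict: the oscillation test flags a candidate fault when the CUT's oscillation frequency deviates from nominal, and the IDDQ measurement confirms it. The thresholds and readings below are illustrative assumptions, not the study's values:

```python
# Two-step verdict: oscillation test screens, IDDQ confirms.

def oscillation_suspect(f_meas: float, f_nom: float, tol: float = 0.10) -> bool:
    """Candidate fault if oscillation frequency deviates > tol from nominal."""
    return abs(f_meas - f_nom) > tol * f_nom

def iddq_confirms(iddq_ua: float, limit_ua: float = 10.0) -> bool:
    """Confirmation if quiescent current exceeds the assumed defect limit."""
    return iddq_ua > limit_ua

def verdict(f_meas: float, f_nom: float, iddq_ua: float) -> str:
    if oscillation_suspect(f_meas, f_nom) and iddq_confirms(iddq_ua):
        return "faulty"
    return "pass"

v = verdict(f_meas=70.0, f_nom=100.0, iddq_ua=55.0)   # deviant and high IDDQ
```

Requiring both indicators reduces false rejects from parametric spread: a frequency shift alone, or an IDDQ excursion alone, does not fail the device under this rule.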

    Self-Test and Fault Tolerance with Permitted Mild Degradation in Integrated CMOS Sensor Systems

    Get PDF
    An extension of smart sensor systems, based on industrial requirements for increased performance and extended system capability, is examined in this Ph.D. thesis. The extension towards a dependable sensor system adds functions that increase operational safety in the presence of faults: a dependable sensor system contains error detection, error analysis, error removal, and error indication functions. The goal of this extension is to minimize the danger, to humans or to the environment, posed by technical systems that evaluate unrecognized faulty measurement results from a non-dependable sensor system. Due to cost constraints, complete error removal is often impossible; in the case of a detected fault, a mild or partial performance degradation may therefore result.

The error detection of defective sensor elements is the key part of this work. This focus was chosen because no conventional self-test strategies exist for sensor elements, and because sensor elements are particularly exposed to damage through their direct contact with the environment. The thesis first covers the fundamentals of sensors and sensor systems. Different error detection methods are presented and evaluated for use in integrated sensor systems. Electrical self-stimulation of the sensor element offers high flexibility, especially for integrated sensors, but it is limited by the small achievable electrical stimulation amplitudes and the low sensitivities of integrated sensor elements. These disadvantages are reduced by the method developed in this work: correlation detection of a fixed stimulation sequence.

The sensor element is electrically stimulated, directly or indirectly, with a fixed stimulation sequence. Direct stimulation excites the sensor element according to its actual measurement principle, whereas indirect stimulation exploits its cross-sensitivities. In a fault-free sensor system, the stimulation is converted by the sensor element into an electrical quantity and propagates, like the measured quantity itself, through the entire signal-processing chain to the output of the sensor system; the output of a fault-free system therefore contains the stimulation sequence. Due to the limited stimulation amplitudes, the usually low sensitivity of the sensor element to the stimulation, and the requirement that the actual measurement be disturbed only minimally, the amplitude of the stimulation sequence at the output is very small. To detect such a small signal, a matched filter optimized for the stimulation sequence, followed by a threshold detector, is used. The advantage of this error detection method is that, besides the sensor element, the complete signal-processing chain is also checked for faults, and no intervention in this chain is necessary. The functionality of the developed error detection method was demonstrated on two application examples, a pressure and a temperature sensor element. Using the example of the pressure sensor system, two methods for error removal applying mild degradation are then presented.
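The correlation detection described above can be sketched as follows. The stimulation sequence, noise level, and detection threshold are illustrative assumptions, not values from the thesis:

```python
# Matched-filter detection of a known stimulation sequence buried in the sensor
# output: correlate the output with the sequence and compare the peak against a
# threshold. A missing peak indicates a defective element or broken chain.
import random

random.seed(0)
stim = [1, -1, 1, 1, -1, -1, 1, -1]   # assumed fixed stimulation sequence

def matched_filter_peak(output: list, seq: list) -> float:
    """Maximum correlation of the output with the stimulation sequence."""
    n, m = len(output), len(seq)
    return max(sum(output[i + j] * seq[j] for j in range(m)) for i in range(n - m + 1))

# Fault-free system: a weak (gain 0.2) copy of the sequence plus noise reaches
# the output. Defective element: only noise, the sequence never gets through.
alive = [0.2 * s + random.gauss(0, 0.05) for s in stim]
dead = [random.gauss(0, 0.05) for _ in stim]

sensor_ok = matched_filter_peak(alive, stim) > 0.8   # threshold ~ half the ideal peak
```

The matched filter concentrates the sequence's energy into a single correlation peak, which is what lets such a small superimposed signal be detected without disturbing the normal measurement.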

    Design for Testability of Implantable Biomedical Systems

    Get PDF
    General architecture of implantable systems -- Principles of electrical stimulation -- Fields of application of implantable systems -- Particularities of implantable circuits -- Future trends -- Design for testability of the digital part of implantable circuits -- Design and realization of an accurate built-in current sensor for IDDQ testing and power dissipation measurement -- Design for testability of the analog part of implantable circuits -- BIST for digital-to-analog and analog-to-digital converters -- Efficient and accurate testing of analog-to-digital converters using the oscillation test method -- Design for testability of embedded integrated operational amplifiers -- Verification of the bioelectronic interfaces of implantable systems -- Monitoring electrode and lead failures in implanted microstimulators and sensors -- Integrated temperature sensors for verifying the thermal state of dedicated chips -- Built-in temperature sensors for on-line thermal monitoring of microelectronic structures -- A reliable communication protocol for programming and telemetry of implantable systems -- A reliable communication protocol for externally controlled biomedical implanted devices