
    New techniques for functional testing of microprocessor based systems

    Electronic devices may be affected by failures, for example due to physical defects. These defects may be introduced during the manufacturing process, as well as during the normal operating life of the device due to aging. Detecting all these defects is not a trivial task, especially in complex systems such as processor cores. Nevertheless, safety-critical applications do not tolerate failures; this is the reason why such devices must be tested so as to guarantee correct behavior at any time. Moreover, testing is a key parameter for assessing the quality of a manufactured product. Consolidated testing techniques are based on special Design for Testability (DfT) features added to the original design to facilitate test effectiveness. Design, integration, and usage of the available DfT for testing purposes are fully supported by commercial EDA tools; hence, approaches based on DfT are the standard solutions adopted by silicon vendors for testing their devices. Tests exploiting the available DfT, such as scan chains, manipulate the internal state of the system differently from the normal functional mode, passing through unreachable configurations. Alternative solutions that do not violate the functional mode are defined as functional tests. In microprocessor-based systems, functional testing techniques include software-based self-test (SBST), i.e., a piece of software (referred to as a test program) which is uploaded into the system's available memory and executed, with the purpose of exciting a specific part of the system and observing the effects of possible defects affecting it. SBST has been widely studied by the research community for years, but its adoption by industry is quite recent. My research activities have been mainly focused on the industrial perspective of SBST. The problem of providing an effective development flow and guidelines for integrating SBST into the available operating systems has been tackled, and results have been provided on microprocessor-based systems for the automotive domain. Remarkably, new algorithms have also been introduced with respect to state-of-the-art approaches, which can be systematically implemented to enrich SBST suites of test programs for modern microprocessor-based systems. The proposed development flow and algorithms are currently employed in real electronic control units for automotive products. Moreover, a special hardware infrastructure purposely embedded in modern devices for interconnecting the numerous on-board instruments has been a focus of my research as well. This solution is known as reconfigurable scan networks (RSNs), and its practical adoption is growing fast as new standards have been created. Test and diagnosis methodologies have been proposed targeting specific RSN features, aimed at checking whether the reconfigurability of such networks has been corrupted by defects and, in that case, at identifying the defective elements of the network. The contribution of my work in this field has also been included in the first suite of public-domain benchmark networks.
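    As a minimal illustration of the SBST concept described above (a hypothetical sketch in C, not one of the actual test programs developed in this work), a test program can exercise selected datapath units and fold every result into a signature that is compared against a value precomputed on a fault-free device:

        #include <stdint.h>

        /* Hypothetical golden signature: precomputed once on a fault-free
         * core (the value below is a placeholder, not a real reference). */
        #define GOLDEN_SIGNATURE 0xDEADBEEFu

        /* Exercise adder, logic, and shifter paths, folding each result
         * into a running signature, in the spirit of an SBST test program. */
        uint32_t sbst_alu_test(void)
        {
            uint32_t sig = 0;
            for (uint32_t i = 1; i <= 16; i++) {
                uint32_t a = i * 0x01010101u;       /* stress all byte lanes */
                uint32_t b = (a << 3) | (a >> 29);  /* rotated operand       */
                sig += a + b;                       /* adder                 */
                sig ^= a & b;                       /* logic unit            */
                sig = (sig << 1) | (sig >> 31);     /* shifter               */
            }
            return sig;
        }

        int main(void)
        {
            /* A signature mismatch flags a possible defect in the exercised logic. */
            return (sbst_alu_test() == GOLDEN_SIGNATURE) ? 0 : 1;
        }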

    Advances in Logic Locking: Past, Present, and Prospects

    Logic locking is a design concealment mechanism for protecting the IPs integrated into modern System-on-Chip (SoC) architectures from a wide range of hardware security threats in the IC manufacturing supply chain. Logic locking primarily helps the designer protect IPs against reverse engineering, IP piracy, overproduction, and unauthorized activation. For more than a decade, the research carried out on this paradigm has been immense; the applicability, feasibility, and efficacy of logic locking have been investigated, including metrics to assess efficacy, the impact of locking at different levels of abstraction, threat model definition, resiliency against physical attacks and tampering, and the application of machine learning. However, the security and strength of existing logic locking techniques have been constantly questioned by logical and physical attacks that evolve in sophistication at the same rate as logic locking countermeasures. By providing a comprehensive definition of the metrics, assumptions, and principles of logic locking, this survey paper categorizes the existing defenses and attacks so as to capture the most benefit from logic locking techniques for IP protection, illuminating the need for and giving direction to future research on this topic. It serves as a guide to quickly navigate and identify the state of the art that should be considered and investigated in further studies on logic locking, helping IP vendors, SoC designers, and researchers stay informed of the principles, fundamentals, and properties of logic locking.
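    As a simple illustration of the key-gate principle underlying many of the surveyed schemes (a sketch assuming basic XOR-based locking; the functions and key values are illustrative only), XOR gates driven by key bits are inserted on internal wires, so the design computes its original function only under the correct key:

        #include <stdio.h>

        /* Original (unlocked) combinational function. */
        static int f_orig(int a, int b, int c)
        {
            return (a & b) | c;
        }

        /* XOR-locked version: key gates k0 and k1 inserted on an internal
         * wire and on the output. With the correct key (k0 = 0, k1 = 1) the
         * XORs cancel out and the original function is restored; any wrong
         * key corrupts the output for some input patterns. */
        static int f_locked(int a, int b, int c, int k0, int k1)
        {
            int w = (a & b) ^ k0;     /* key gate on internal wire */
            return (w | c) ^ k1 ^ 1;  /* key gate (plus inverter) on output */
        }

        int main(void)
        {
            for (int v = 0; v < 8; v++) {
                int a = v & 1, b = (v >> 1) & 1, c = (v >> 2) & 1;
                printf("abc=%d%d%d orig=%d correct_key=%d wrong_key=%d\n",
                       a, b, c, f_orig(a, b, c),
                       f_locked(a, b, c, 0, 1),
                       f_locked(a, b, c, 1, 0));
            }
            return 0;
        }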

    Innovative Techniques for Testing and Diagnosing SoCs

    We rely upon the continued functioning of many electronic devices for our everyday welfare, usually embedding integrated circuits that are becoming ever cheaper and smaller with improved features. Nowadays, microelectronics can integrate a working computer with CPU, memories, and even GPUs on a single die, namely a System-on-Chip (SoC). SoCs are also employed in automotive safety-critical applications, but need to be tested thoroughly to comply with reliability standards, in particular the ISO 26262 functional safety standard for road vehicles. The goal of this Ph.D. thesis is to improve SoC reliability by proposing innovative techniques for testing and diagnosing its internal modules: CPUs, memories, peripherals, and GPUs. The proposed approaches, in the sequence in which they appear in this thesis, are described as follows:
    1. Embedded Memory Diagnosis: Memories are dense and complex circuits which are susceptible to design and manufacturing errors. Hence, it is important to understand fault occurrence in the memory array. In practice, the logical and physical array representations differ due to an optimized design which adds enhancements to the device, namely scrambling. This part proposes an accurate memory diagnosis by presenting a software tool able to analyze test results, unscramble the memory array, map failing syndromes to cell locations, elaborate cumulative analyses, and formulate a final fault model hypothesis. Several SRAM memory failing syndromes were analyzed as case studies, gathered on an industrial automotive 32-bit SoC developed by STMicroelectronics. The tool displayed defects virtually, and results were confirmed by real photos taken with a microscope.
    2. Functional Test Pattern Generation: The key to a successful test is the pattern applied to the device. Patterns can be structural or functional; the former usually benefit from embedded test modules targeting manufacturing errors and are only effective before shipping the component to the client. The latter, on the other hand, can be applied during mission with minimal impact on performance, but are penalized by high generation time. However, functional test patterns can serve different goals in functional mission mode. Part III of this thesis proposes three functional test pattern generation methods for CPU cores embedded in SoCs, targeting different test purposes:
    a. Functional Stress Patterns: suitable for maximizing functional stress during Operational-life Tests and Burn-in Screening, for an optimal device reliability characterization.
    b. Functional Power-Hungry Patterns: suitable for determining functional peak power, used to strictly limit the power of structural patterns during manufacturing tests, thus reducing premature device over-kill while delivering high test coverage.
    c. Software-Based Self-Test Patterns: combine the potential of structural patterns with functional ones, allowing periodic execution during mission. In addition, an external hardware module communicating with a devised SBST scheme was proposed; it helps increase fault coverage by 3% by testing critical Hardly Functionally Testable Faults not covered by conventional SBST patterns.
    An automatic functional test pattern generation flow exploiting an evolutionary algorithm, maximizing metrics related to stress, power, and fault coverage, was employed in the above-mentioned approaches to quickly generate the desired patterns. The approaches were evaluated on two industrial cases developed by STMicroelectronics: an 8051-based SoC and a 32-bit Power Architecture SoC. Results show that generation time was reduced by up to 75% in comparison to older methodologies while significantly increasing the desired metrics.
    3. Fault Injection in GPGPUs: Fault injection mechanisms in semiconductor devices are suitable for generating structural patterns, testing and activating mitigation techniques, and validating robust hardware and software applications. GPGPUs are known for the fast parallel computation used in high-performance computing and advanced driver assistance, where reliability is the key point. Moreover, GPGPU manufacturers do not provide design description code due to content secrecy; therefore, commercial fault injectors relying on a GPGPU model are unfeasible, making radiation tests the only available resource, but these are costly. The last part of this thesis proposes a software-implemented fault injector able to inject bit-flips into memory elements of a real GPGPU. It exploits a software debugger tool combined with the C-CUDA grammar to wisely determine fault spots and apply bit-flip operations to program variables. The goal is to validate robust parallel algorithms by studying fault propagation or activating the redundancy mechanisms they may embed. The effectiveness of the tool was evaluated on two robust applications: redundant parallel matrix multiplication and a floating-point Fast Fourier Transform.
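    As a minimal sketch of the bit-flip mechanism at the core of the fault injector described in point 3 (assuming injection into an IEEE-754 variable in host memory; the actual tool drives a debugger attached to a real GPGPU and parses C-CUDA sources to pick fault spots):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Flip a single bit of a 32-bit float, as the injector would do on
         * a program variable of the target application at a chosen moment. */
        static float flip_bit(float value, int bit)
        {
            uint32_t raw;
            memcpy(&raw, &value, sizeof raw);  /* reinterpret bits safely */
            raw ^= (uint32_t)1 << bit;
            memcpy(&value, &raw, sizeof raw);
            return value;
        }

        int main(void)
        {
            float x = 1.0f;
            /* Observe how the corrupted value depends on the flipped bit
             * (mantissa, exponent, or sign), which drives fault propagation. */
            for (int bit = 0; bit < 32; bit += 8)
                printf("flip bit %2d: %g -> %g\n", bit, x, flip_bit(x, bit));
            return 0;
        }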

    High Quality Delay Testing Scheme for a Self-Timed Microprocessor

    The popularity of the Internet and the huge amount of data transferred between devices nowadays require very powerful servers that demand lots of power. Since higher power consumption means more expenses for companies, there is an increasing demand for power-efficient processors. One of the ways to increase the power efficiency of processors is to adapt the processing speed and chip activity according to the needed computation load. Self-timed, or asynchronous, processors are one of the solutions that apply this principle of activity on demand. However, their unconventional design methodology introduces several challenges in terms of testability and design automation. This work focuses on developing a high quality delay test for a specific self-timed processor architecture called the AnARM. The proposed delay test focuses on catching effective small-delay defects (SDDs) in the AnARM by taking advantage of its built-in configurable delay lines. Such defects are known to escape one of the most commonly used delay fault models (the transition delay fault model). This work mainly focuses on effective SDDs, which can escape transition delay fault testing yet are large enough to fail the circuit under normal operating conditions. At the same time, catching very small delay defects is avoided, when possible, so as not to falsely fail functional chips. To build the high quality delay test, this work first develops an SDD test quality metric that is better suited to circuits with adaptable speeds. Then, it builds a delay test optimizer that adapts the built-in delay line speeds to a preexisting at-speed pattern set to create a high quality SDD test. This work presents a novel SDD test quality metric called the weighted slack percentage (WeSPer), along with a new SDD testing model (named the ideal SDD test model). WeSPer is built to be a flexible metric capable of adapting to the availability of information about the circuit under test and the test environment. Since the AnARM can use multiple test speeds, the WeSPer computation takes special care in assessing the effects of test frequency changes on test quality. Specifically, special care is taken to avoid overtesting the circuit; overtesting would fail circuits under test due to defects that are too small to affect their functionality in their present state. A computation framework is built to compute WeSPer and compare it with other existing metrics in the literature over a large set of process-voltage-temperature computation points. Simulations are done on a selected set of known benchmark circuits synthesized in the 28nm FD-SOI technology from STMicroelectronics.
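    The exact WeSPer formula is defined in the thesis itself; the toy computation below only illustrates the general idea of a weighted slack percentage under assumed inputs (per-path slacks under the functional clock and under the applied test), where tighter paths weigh more and a path counts as covered when the test constrains it at least as tightly as functional operation:

        #include <stdio.h>

        /* Hypothetical weighted-slack-percentage style metric (illustrative
         * only; not the WeSPer definition from the thesis). */
        static double weighted_slack_pct(const double *func_slack,
                                         const double *test_slack,
                                         int n_paths, double period)
        {
            double covered = 0.0, total = 0.0;
            for (int i = 0; i < n_paths; i++) {
                double w = 1.0 - func_slack[i] / period; /* tighter -> heavier */
                total += w;
                if (test_slack[i] <= func_slack[i])      /* test is as strict */
                    covered += w;
            }
            return 100.0 * covered / total;
        }

        int main(void)
        {
            double func[] = {0.10, 0.25, 0.40}; /* slack at functional speed (ns) */
            double test[] = {0.05, 0.30, 0.35}; /* slack under applied test (ns)  */
            printf("metric = %.1f%%\n", weighted_slack_pct(func, test, 3, 1.0));
            return 0;
        }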

    Design and test for timing uncertainty in VLSI circuits

    With technology scaling, integrated circuits (ICs) suffer from increasing process, voltage, and temperature (PVT) variations and aging effects. In most cases, these reliability threats manifest themselves as timing errors on the speed-paths (i.e., critical or near-critical paths) of the circuit. Embedding a large design guard band to prevent timing errors from occurring is not an attractive solution, since this conservative design methodology diminishes the benefit of technology scaling. This creates several challenges in building reliable systems, and the key problems include: (i) how to optimize a circuit's timing performance with a limited power budget for the explosively increasing number of potential speed-paths; (ii) how to generate high quality delay test patterns that capture an IC's accurate worst-case delay; (iii) given that a better power/performance tradeoff forces us to accept some infrequent timing errors during the circuit's usage phase, how to achieve online timing error resilience. To address the above issues, we first develop a novel technique to identify so-called false paths, which enables us to find many more false paths than conventional methods. By integrating the identified false paths into a static timing analysis tool, we are able to obtain more accurate timing information and also save the cost otherwise spent optimizing false paths. Then, motivated by the fact that existing delay automatic test pattern generation (ATPG) methods may generate test patterns that are functionally unreachable, and such patterns may incur excessive (or limited) power supply noise (PSN) on sensitized paths in test mode, leading to over-testing or under-testing of the circuits, we propose a novel pseudo-functional ATPG tool. By taking both circuit layout information and functional constraints into account, we use an ATPG-like algorithm to justify transitions that pose the maximum functional PSN effects on sensitized critical paths. Finally, we propose a novel in-situ correction technique to mask timing errors, namely InTimeFix, by introducing a redundant approximation circuit with more timing slack for speed-paths into the design.
    The synthesis of the approximation circuit relies on simple structural analysis of the original circuit, which is easily scalable to large IC designs. Yuan, Feng. Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. Includes bibliographical references.
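    To make the false-path notion concrete (a toy C sketch using exhaustive simulation on a hypothetical two-gate structure; the thesis instead uses implication learning, which scales to real designs), a path is false when no input vector can sensitize it, for example because the side conditions of its gates can never hold simultaneously:

        #include <stdio.h>

        /* Toy circuit: the path from input b to output z is false because
         * passing b through w1 requires a = 1, while w2 can only be 1 when
         * a = 0; the two sensitization conditions conflict. */
        static int z(int a, int b, int c)
        {
            int w1 = a & b;     /* needs a = 1 to propagate b   */
            int w2 = (!a) & c;  /* needs a = 0 to evaluate to 1 */
            return w1 & w2;
        }

        int main(void)
        {
            for (int a = 0; a <= 1; a++)
                for (int c = 0; c <= 1; c++)
                    if (z(a, 0, c) != z(a, 1, c)) {
                        printf("a=%d c=%d sensitizes the path from b\n", a, c);
                        return 0;
                    }
            printf("no vector sensitizes b: the path is false\n");
            return 0;
        }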

    Scan Chain Based Hardware Security

    Hardware has become a popular target for attackers seeking to hack into any computing and communication system. From the legendary power analysis attacks discovered 20 years ago to the recent Intel Spectre and Meltdown attacks, security vulnerabilities in hardware design have been exploited for malicious purposes. With the emergence of Internet of Things (IoT) applications, where devices are extremely resource-constrained, many provably secure but computationally expensive cryptographic protocols cannot be applied. Thus, there is an urgent need to understand hardware vulnerabilities and develop cost-effective mitigation methods. One established field in the semiconductor and integrated circuit (IC) industry, known as IC test, has the goal of ensuring that fabricated ICs are free of manufacturing defects and perform the required functionalities. Testing is essential to isolate faulty chips from good ones. The concept of design for test (DFT) has been integrated into the commercial IC design and fabrication process for several decades. The scan chain, which provides test engineers access to all the flip-flops in the chip through the scan-in (SI) and scan-out (SO) ports, is the backbone of industrial testing methods and can be found in almost all modern designs. In addition to IC testing, scan chains have found applications in intellectual property (IP) protection and IC identification. However, attackers can also leverage the controllability and observability of the scan chain as a side channel to break systems such as cryptographic chips. This dissertation addresses these two important security problems by proposing (1) a practical scan chain based security primitive for IP protection and (2) a partial scan chain framework that can mitigate all existing scan-based attacks. First, we observe that each D flip-flop has two output ports, Q and Q', designed to simplify the logic and used to reduce power consumption during IC test. The availability of both the Q and Q' ports provides an opportunity for IP protection. More specifically, we can generate a digital fingerprint by selecting different connection styles between adjacent scan cells during the design of the scan chain. This method has two major advantages: fingerprints are created as a post-silicon procedure, so there is little fabrication overhead; and altering the connection style requires modifying the test vectors for each fingerprinted IP, which enables a non-intrusive fingerprint verification method. This addresses the overhead and detectability problems, two of the most challenging problems in designing practical IP fingerprinting techniques over the past two decades. Combined with the recently developed reconfigurable scan networks (RSNs), which are popular for embedded and IoT devices, we design an IC identification (ID) scheme utilizing the different connection styles. We perform experiments on standard benchmarks to demonstrate that our approach has low design overhead. We also conduct security analysis to show that such fingerprints and IC IDs are robust against various attacks. In the second part of this dissertation, we consider the scan chain side channel attack, which has been reported as one of the most severe side channel attacks on modern secure systems. We argue that current countermeasures are restricted by the requirement of providing direct SI and SO access for testing and thus suffer from leaving this side channel open to attackers.
    Therefore, we propose a novel public-private partial scan chain based approach, with the basic idea of removing the flip-flops that store sensitive information from the scan chain. This eliminates the scan chain side channel, but it also limits IC test. The key contribution of our proposed public-private partial scan chain design is that it keeps full test coverage while providing security for the scan chain. This is achieved by chaining the removed flip-flops into one or more private partial scan chains and adding protections to the SI and SO ports of such chains. Unlike traditional partial scan designs, which not only fail to provide full fault coverage but also incur huge overhead in test time and test vector generation time, we propose a set of techniques to ensure that the desired test vectors can be entered into the system efficiently. These techniques include test vector reordering, test vector reusing, and test vector generation based on a novel finite state machine (FSM) structure we have invented. On the other hand, to give test engineers the ability to observe the test output to diagnose the chip while not leaking information to attackers, we propose two lightweight mechanisms, one based on a linear feedback shift register (LFSR) and the other based on a configurable physical unclonable function (PUF). Finally, we discuss a protocol for realizing in-field test using our public-private partial scan chain. We conduct experiments with industrial scan design tools to demonstrate that the hardware required by our approach has negligible area overhead, provides full test coverage with reduced test time, and does not need regenerated test vectors. In sum, this dissertation focuses on the role of the scan chain, a conventional design-for-test facility, in hardware security. We show that scan chain features can be leveraged to create practical IP protection techniques, including IP watermarking and fingerprinting as well as IC identification and authentication. We also propose a novel public-private partial scan design principle to close the scan chain side channel to attackers. Through this dissertation work, we demonstrate that it is possible to develop highly practical scan chain based techniques that benefit both the IC test and hardware security communities.
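    A small behavioral model of the Q/Q' connection-style idea (a sketch with a hypothetical 8-cell chain and fingerprint; in silicon the inversion pattern is fixed by the scan routing chosen at design time): feeding the next scan cell from Q' instead of Q inverts the bit as it shifts, so the pattern of inversions acts as a fingerprint that the test vectors must be adjusted for:

        #include <stdio.h>

        #define N 8

        /* Shift one bit into an N-cell scan chain. invert[i] = 1 models cell i
         * driving its successor (or scan-out) from Q' instead of Q. */
        static int scan_shift(int chain[N], const int invert[N], int si)
        {
            int so = invert[N - 1] ? !chain[N - 1] : chain[N - 1];
            for (int i = N - 1; i > 0; i--)
                chain[i] = invert[i - 1] ? !chain[i - 1] : chain[i - 1];
            chain[0] = si;
            return so;
        }

        int main(void)
        {
            int chain[N] = {0};
            const int fp[N] = {0, 1, 1, 0, 0, 1, 0, 1}; /* hypothetical fingerprint */

            /* Shift in an all-ones vector, then shift it back out: the bits
             * observed at scan-out reveal the inversion (connection) pattern. */
            for (int i = 0; i < N; i++)
                scan_shift(chain, fp, 1);
            for (int i = 0; i < N; i++)
                printf("%d", scan_shift(chain, fp, 0));
            printf("\n");
            return 0;
        }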

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield, and higher re-configurability. The trend of increasing integration level for integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, specifications of the converters in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment where post-fabrication component trimming or calibration is cumbersome to implement, whether for certain applications or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling (the short-channel effects) are looming large as technology strides into the deep-submicron regime. The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented, with different requirements on each function. Practical realizations show the trend that, to a first order, converter power is directly proportional to sampling rate. However, the required power dissipation becomes nonlinear as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling rate specification. This thesis is, in a sense, a unique work, as it covers the whole spectrum of design, test, debugging, and calibration of multi-step A/D converters; it incorporates the development of circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter and to enhance testing and debugging potential: to detect errors dynamically, to isolate and confine faults, and to recover and compensate for the errors continuously. The power efficiency achievable at high resolution with a multi-step converter, by combining parallelism and calibration and exploiting low-voltage circuit techniques, is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18-µm CMOS process. Lower power supply voltages significantly reduce noise margins and increase variations in process, device, and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those generated at the end of the design process. Such defects can cause various types of malfunctioning, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs associated with testing and debugging complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated into the proposed ATPG, DfT, and BIST methodologies.
    Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation also imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process parameter variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good estimates of the parameters for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin. On a positive note, the use of digital enhancing calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows for efficient usage of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires only a small number of operations per iteration and does not require correlation function calculations or matrix inversions. The presented foreground calibration algorithm does not need any dedicated test signal and does not consume a part of the conversion time; it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements from the silicon prototype fabricated in a standard single-poly, six-metal 0.09-µm CMOS process.
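    The error-estimation filter described above (few operations per iteration, no correlation functions or matrix inversions) has the hallmarks of the LMS family of adaptive algorithms; the following generic LMS sketch is offered under that assumption and is not the exact algorithm of the thesis:

        #include <stdio.h>

        #define TAPS 4

        /* One LMS iteration: y = w.x, e = d - y, w += mu * e * x.
         * Costs O(TAPS) multiply-accumulates; no matrix inversion needed. */
        static double lms_step(double w[TAPS], const double x[TAPS],
                               double d, double mu)
        {
            double y = 0.0;
            for (int i = 0; i < TAPS; i++)
                y += w[i] * x[i];
            double e = d - y;
            for (int i = 0; i < TAPS; i++)
                w[i] += mu * e * x[i];
            return e;
        }

        int main(void)
        {
            double w[TAPS] = {0};
            const double x[TAPS] = {1.0, 0.5, -0.25, 0.125};
            /* Drive the filter toward reproducing a target response d. */
            for (int k = 0; k < 200; k++)
                lms_step(w, x, 2.0, 0.1);
            printf("residual error: %g\n", lms_step(w, x, 2.0, 0.1));
            return 0;
        }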