
    Multi-Cycle at Speed Test

    In this research, we focus on developing an algorithm that generates a minimal number of patterns for path delay test of integrated circuits using multi-cycle at-speed test. We test the circuits in functional mode, where multiple functional cycles follow the test pattern scan-in operation. This approach increases the delay correlation between the scan and functional tests, because the power supply noise is more functionally realistic. We use multiple at-speed cycles to compact K-longest-paths-per-gate tests, which reduces the number of scan patterns. After a path is generated, we try to place it in the first pattern in the pattern pool. If the path does not fit due to conflicts, we attempt to place it in later functional cycles. This compaction approach retains the greedy nature of the original dynamic compaction algorithm: it stops as soon as the path fits into a pattern. If the path cannot be compacted into any functional cycle of any pattern in the pool, we generate a new pattern. In this method, each path delay test is compared against the at-speed patterns in the pool. The challenge is that a delay test placed in a given at-speed cycle must have its necessary value assignments set up in the preceding (preamble) cycles and its captured results propagated to a scan cell in the following (coda) cycles. For instance, if we consider three at-speed (capture) cycles after the scan-in operation, and we place a fault in the first capture cycle, then we must generate it with two propagation cycles; we treat these propagation cycles as coda cycles, so the algorithm attempts to select the most observable path through them. Likewise, placing the path test in the second capture cycle requires one preamble cycle and one coda cycle, and placing it in the third capture cycle requires two preamble cycles and no coda cycles.
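    The placement loop described above can be sketched as follows. This is a minimal illustration of the greedy strategy only: `fits` and `merge` are hypothetical helpers standing in for the real conflict check and the preamble/coda bookkeeping, and the three-capture-cycle configuration follows the example in the abstract.

```python
# Sketch of the greedy multi-cycle compaction described above.
# `fits` and `merge` are hypothetical placeholders for the real conflict
# check (necessary assignments, preamble setup, coda propagation).

NUM_CAPTURE_CYCLES = 3  # at-speed cycles after scan-in (assumption)

def place_path(path, pattern_pool, fits, merge):
    """Try to compact `path` into an existing pattern; else open a new one."""
    for pattern in pattern_pool:
        # Try each capture cycle: cycle k needs k preamble cycles and
        # NUM_CAPTURE_CYCLES - 1 - k coda cycles for propagation.
        for cycle in range(NUM_CAPTURE_CYCLES):
            if fits(path, pattern, cycle):
                merge(path, pattern, cycle)
                return pattern              # greedy: stop at the first fit
    new_pattern = {"paths": [(path, 0)]}    # fresh pattern, first capture cycle
    pattern_pool.append(new_pattern)
    return new_pattern
```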

    Efficient Path Delay Test Generation with Boolean Satisfiability

    This dissertation focuses on improving the accuracy and efficiency of path delay test generation using a Boolean satisfiability (SAT) solver. As part of this research, MiniSat, one of the most widely used SAT solvers, was integrated into the path delay test generator CodGen. A mixed structural-functional approach was implemented in CodGen, where the longest paths are detected using the K Longest Path Per Gate (KLPG) algorithm while path justification and dynamic compaction are handled by the SAT solver. Advanced techniques were implemented in CodGen to further speed up SAT-based path delay test generation using knowledge of the circuit structure. SAT solvers are inherently unaware of circuit structure, and significant speedup can be obtained by providing structural information to the solver. The techniques explored include Dynamic SAT Solving (DSS), Circuit Observability Don't Care (Cir-ODC), SAT-based static learning, dynamic learnt clause management, and Approximate Observability Don't Care (ACODC). Both the ISCAS 89 and ITC 99 benchmarks, as well as industrial circuits, were used to demonstrate that the performance of CodGen was significantly improved by MiniSat and the use of circuit structure.
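    As an illustration of SAT-based justification (not CodGen's actual encoding), the PySAT bindings of MiniSat can justify assignments on a toy two-gate circuit; the CNF below is a hand-built Tseitin encoding, and the assumption literals stand in for path sensitization conditions.

```python
# Illustrative only: justify value assignments on a toy circuit with
# MiniSat via PySAT (pip install python-sat). Not CodGen's encoding.
from pysat.solvers import Minisat22

# Variables: 1=a, 2=b, 3=c, 4=g1, 5=out
# g1 = a AND b ; out = g1 OR c  (Tseitin encoding)
cnf = [
    [-4, 1], [-4, 2], [4, -1, -2],   # g1  <-> a & b
    [5, -4], [5, -3], [-5, 4, 3],    # out <-> g1 | c
]

with Minisat22(bootstrap_with=cnf) as solver:
    # Justify out=1 with the side input c held at 0 (e.g. to keep the
    # path through g1 sensitized).
    if solver.solve(assumptions=[5, -3]):
        print("justifying assignment:", solver.get_model())
    else:
        print("conflict: the path condition is unjustifiable")
```

    Solving under assumptions rather than adding unit clauses lets the solver retain its learnt clauses across successive path tests, which is one reason incremental SAT fits dynamic compaction well.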

    Pseudo-functional testing: bridging the gap between manufacturing test and functional operation.

    Yuan, Feng. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 60-65). Abstract also in Chinese.
    Contents:
    Chapter 1 - Introduction: Manufacturing Test (Functional Testing vs. Structural Testing; Fault Model; Automatic Test Pattern Generation; Design for Testability); Pseudo-Functional Manufacturing Test; Thesis Motivation and Organization.
    Chapter 2 - On Systematic Illegal State Identification: Introduction; Preliminaries and Motivation; What is the Root Cause of Illegal States?; Illegal State Identification Flow; Justification Scheme Construction; Experimental Results; Conclusion.
    Chapter 3 - Compression-Aware Pseudo-Functional Testing: Introduction; Motivation; Proposed Methodology; Pattern Generation (Circuit Pre-Processing; Pseudo-Functional Random Pattern Generation with Multi-Launch Cycles; Compressible Test Pattern Generation for Pseudo-Functional Testing); Experimental Results (Experimental Setup; Results and Discussion); Conclusion.
    Chapter 4 - Conclusion and Future Work.
    Bibliography.

    Delay Measurements and Self Characterisation on FPGAs

    This thesis examines new timing measurement methods for self delay characterisation of Field-Programmable Gate Array (FPGA) components and for delay measurement of complex circuits on FPGAs. Two novel measurement techniques, based on analysis of a circuit's output failure rate and of its transition probability, are proposed for accurate, precise and efficient measurement of propagation delays. The transition probability based method is especially attractive, since it requires no modifications to the circuit under test and consumes few hardware resources, making it an ideal method for physical delay analysis of FPGA circuits. The relentless advance of process technology has led to smaller and denser transistors in integrated circuits. While FPGA users benefit from this in terms of increased hardware resources for more complex designs, actual FPGA timing performance (operating frequency, latency and throughput) has lagged behind the potential improvements of the technology, due to delay variability in FPGA components and the inaccuracy of the timing models used in FPGA timing analysis. The ability to measure the delay of an arbitrary circuit on an FPGA offers many opportunities for on-chip characterisation and physical timing analysis, allowing delay variability to be tracked accurately and variation-aware optimisations to be developed, reducing the productivity gap observed in today's FPGA designs. The measurement techniques are developed into complete self-measurement and characterisation platforms in this thesis, demonstrating their practical use in actual FPGA hardware for cross-chip delay characterisation and accurate delay measurement of both complex combinatorial and sequential circuits, further reinforcing their value in addressing the delay variability problem in FPGAs.
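    The failure-rate technique lends itself to a simple host-side sketch: sweep the clock period, record how often the circuit output disagrees with a golden reference, and take the shortest period with (near-)zero failures as the delay estimate. The routine `failure_rate` and the tolerance below are assumptions for illustration, not the thesis's on-chip measurement apparatus.

```python
# Illustrative sketch: estimate propagation delay from an output
# failure-rate sweep. Assumes an external routine `failure_rate(period_ns)`
# that clocks the circuit under test at the given period and returns the
# fraction of samples disagreeing with a golden reference.
import numpy as np

def estimate_delay(failure_rate, periods_ns):
    """Return the shortest period with (near-)zero failures.

    The propagation delay is approximated by the clock period at which
    the failure rate first drops to ~0 as the period is increased.
    """
    rates = np.array([failure_rate(p) for p in periods_ns])
    ok = np.where(rates < 1e-3)[0]          # tolerance: an assumption
    return periods_ns[ok[0]] if ok.size else None

# Example with a synthetic circuit whose true delay is 3.2 ns:
true_delay = 3.2
synthetic = lambda p: max(0.0, 1.0 - p / true_delay)  # toy failure model
grid = np.linspace(2.0, 5.0, 61)
print("estimated delay (ns):", estimate_delay(synthetic, grid))
```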

    Design and test for timing uncertainty in VLSI circuits.

    With technology scaling, integrated circuits (ICs) suffer from increasing process, voltage, and temperature (PVT) variations and aging effects. In most cases, these reliability threats manifest themselves as timing errors on speed-paths (i.e., critical or near-critical paths) of the circuit. Embedding a large design guard band to prevent timing errors from occurring is not an attractive solution, since this conservative design methodology diminishes the benefit of technology scaling. This creates several challenges in building reliable systems, and the key problems include: (i) how to optimize a circuit's timing performance with a limited power budget for the explosively increased number of potential speed-paths; (ii) how to generate high-quality delay test patterns that capture an IC's accurate worst-case delay; (iii) since a better power/performance tradeoff requires accepting some infrequent timing errors during the circuit's usage phase, how to achieve online timing error resilience. To address the above issues, we first develop a novel technique to identify so-called false paths, which enables us to find many more false paths than conventional methods. By integrating the identified false paths into a static timing analysis tool, we achieve more accurate timing information and also save the cost of optimizing false paths. Then, because existing delay automatic test pattern generation (ATPG) methods may generate test patterns that are functionally unreachable, and such patterns may incur excessive (or limited) power supply noise (PSN) on sensitized paths in test mode, leading to over-testing or under-testing of the circuits, we propose a novel pseudo-functional ATPG tool. Taking both circuit layout information and functional constraints into account, we use an ATPG-like algorithm to justify transitions that pose the maximum functional PSN effects on sensitized critical paths. Finally, we propose a novel in-situ correction technique, named InTimeFix, to mask timing errors by introducing into the design a redundant approximation circuit with more timing slack on speed-paths. The synthesis of the approximation circuit relies on simple structural analysis of the original circuit, which scales easily to large IC designs.
    Yuan, Feng. Thesis (Ph.D.)--Chinese University of Hong Kong, 2012. Includes bibliographical references (leaves 88-100). Abstract also in Chinese.
    Contents:
    Chapter 1 - Introduction: Challenges to Solve the Timing Uncertainty Problem; Contributions and Thesis Outline.
    Chapter 2 - Background: Sources of Timing Uncertainty (Process Variation; Runtime Environment Fluctuation; Aging Effect); Technical Flow to Solve the Timing Uncertainty Problem; False Path (Path Sensitization Criteria; False Path Aware Timing Analysis); Manufacturing Testing (Functional Testing vs. Structural Testing; Scan-Based DfT; Pseudo-Functional Testing); Timing Error Tolerance (Timing Error Detection; Timing Error Recovery).
    Chapter 3 - Timing-Independent False Path Identification: Introduction; Preliminaries and Motivation; False Path Examination Considering Illegal States (Path Sensitization Criterion; Path-Aware Illegal State Identification; Proposed Examination Procedure); False Path Identification (Overall Flow; Static Implication Learning; Suspicious Node Extraction; S-Frontier Propagation); Experimental Results; Conclusion and Future Work.
    Chapter 4 - PSN Aware Pseudo-Functional Delay Testing: Introduction; Preliminaries and Motivation; Proposed Methodology; Maximizing PSN Effects under Functional Constraints (Pseudo-Functional Relevant Transitions Generation); Experimental Results (Experimental Setup; Results and Discussion); Conclusion.
    Chapter 5 - In-Situ Timing Error Masking in Logic Circuits: Introduction; Prior Work and Motivation; In-Situ Timing Error Masking with Approximate Logic (Equivalent Circuit Construction with Approximate Logic; Timing Error Masking with Approximate Logic); Cost-Efficient Synthesis for InTimeFix (Overall Flow; Prime Critical Segment Extraction; Prime Critical Segment Merging); Experimental Results (Experimental Setup; Results and Discussion); Conclusion.
    Chapter 6 - Conclusion and Future Work.
    Bibliography.
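    The InTimeFix idea of masking timing errors with approximate logic can be illustrated behaviorally. In the toy below (our invention, not the thesis's synthesis flow), the approximate function agrees with the exact one precisely on the inputs that sensitize the speed-path, so substituting its slack-rich output when an error is flagged never changes the functional result.

```python
# Toy illustration of in-situ timing-error masking with approximate logic.
# The functions are invented placeholders, not the InTimeFix synthesis:
# the approximate circuit agrees with the exact one exactly on the inputs
# that exercise the speed-path, and has more slack there.

def exact_fn(a: int, b: int, c: int) -> int:
    return (a & b) ^ c        # assume the a&b path is the speed-path

def approx_fn(a: int, b: int, c: int) -> int:
    return 1 ^ c              # equals exact_fn whenever a & b == 1, but faster

def speed_path_active(a: int, b: int) -> bool:
    return a == 1 and b == 1  # sensitization condition of the long path

def masked_output(a, b, c, timing_error_flagged):
    """Select the slack-rich approximate output when the speed-path may
    have missed timing; otherwise trust the exact circuit."""
    if timing_error_flagged and speed_path_active(a, b):
        return approx_fn(a, b, c)
    return exact_fn(a, b, c)

# Sanity check: masking never changes the functional result.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            if speed_path_active(a, b):
                assert approx_fn(a, b, c) == exact_fn(a, b, c)
print(masked_output(1, 1, 0, timing_error_flagged=True))  # -> 1
```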

    Transient error mitigation by means of approximate logic circuits

    Technological advances in the manufacturing of electronic circuits have greatly improved their performance, but they have also increased the sensitivity of electronic devices to radiation-induced errors. Among these, the most common effects are SEEs, i.e., electrical perturbations provoked by the strike of a high-energy particle, which may modify the internal state of a memory element (SEU) or generate erroneous transient pulses (SET), among other effects. These events pose a threat to the reliability of electronic circuits, so fault-tolerance techniques must be applied to deal with them. The most common fault-tolerance techniques are based on full replication (DWC or TMR). These techniques cover a wide range of the failure mechanisms present in electronic circuits, but they suffer from high overheads in area and power consumption. For this reason, lighter alternatives are often sought, at the expense of slightly reduced reliability for the least critical circuit sections. In this context a new design paradigm is emerging, known as approximate computing, which improves circuit performance in exchange for slight modifications of the intended functionality. This is an interesting approach for the design of lightweight fault-tolerant solutions that has not yet been studied in depth. The main goal of this thesis is to develop new lightweight fault-tolerance techniques with partial replication, by means of approximate logic circuits. These circuits can be designed with great flexibility, so both the level of protection and the overheads can be adjusted at will depending on the needs of each application. However, finding optimal approximate circuits for a given application is still a challenge. In this thesis a method for approximate circuit generation is proposed, denoted fault approximation, which consists in assigning constant logic values to specific circuit lines. In addition, several criteria are developed to generate the most suitable approximate circuits for each application using this fault approximation mechanism. These criteria are based on the idea of approximating the least testable sections of a circuit, which reduces overheads while minimising the loss of reliability; the selection of approximations is therefore linked to testability measures. The first fault selection criterion developed in this thesis uses static testability measures. The approximations are generated from the results of a fault simulation of the target circuit and from a user-specified testability threshold. The number of approximated faults depends on the chosen threshold, which makes it possible to generate approximate circuits with different trade-offs. Although this approach was initially intended for combinational circuits, it has been extended to sequential circuits as well, by treating the flip-flops as both inputs and outputs of the combinational part of the circuit. The experimental results show that this technique achieves wide scalability, an acceptable reliability-versus-overhead trade-off, and very low computational complexity. However, the selection criterion based on static testability measures has some drawbacks. Adjusting the performance of the generated approximate circuits via the approximation threshold is not intuitive, and static testability measures do not account for the changes that occur as faults are approximated. Therefore, an alternative criterion is proposed, based on dynamic testability measures. With this criterion, the testability of each fault is computed by means of an implication-based probability analysis. The probabilities are updated with each newly approximated fault, so that on each iteration the most beneficial approximation, i.e., the fault with the lowest probability, is chosen. In addition, the computed probabilities allow the level of fault protection provided by the generated approximate circuits to be estimated, making it possible to generate circuits that adhere to a target error rate; by modifying this target, circuits with different trade-offs can be obtained. The experimental results show that this new approach adheres to the target error rate with reasonably good precision, and that the approximate circuits it generates outperform those from the static approach. The fault implications have also been reused to implement a new type of logic transformation, which consists in substituting functionally similar nodes. Once the fault selection criteria had been developed, they were applied in different scenarios. First, the proposed techniques were extended to FPGAs, taking into account the particularities of this kind of circuit. This approach has been validated by means of radiation experiments, which show that partial replication with approximate circuits can be even more robust than full replication, because a smaller area reduces the probability of SEE occurrence. The proposed techniques have also been applied to a real application circuit, the ARM Cortex-M0 microprocessor, using a set of software benchmarks to generate the required testability measures. Finally, a comparative study of the proposed approaches against approximate circuit generation by means of evolutionary techniques has been performed. Evolutionary approaches use high computational capacity to generate multiple circuits by trial and error, reducing the chance of falling into local minima. The experimental results demonstrate that the circuits generated with evolutionary approaches perform slightly better than those generated with the techniques proposed here, although at a much higher computational cost. In summary, several original fault mitigation techniques based on approximate logic circuits are proposed and demonstrated in various scenarios; their scalability and adaptability to the requirements of each application are their main virtues.
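    A minimal sketch of the fault-approximation mechanism with the static criterion might look as follows; the netlist representation, names and threshold are illustrative assumptions, not the thesis's tool. Each selected stuck-at fault is "approximated" by tying its line to the corresponding constant, which removes hard-to-test logic at the cost of rare functional deviations.

```python
# Sketch of fault approximation with a static testability criterion.
# Faults whose detection probability falls below a user threshold are
# approximated: the faulty line is tied to the constant the stuck-at
# fault would impose. Names and data are illustrative.

def select_approximations(fault_testability, threshold=0.01):
    """Return the faults to approximate: the least testable ones."""
    return [f for f, prob in fault_testability.items() if prob < threshold]

def apply_approximation(netlist, fault):
    """Tie the faulty line to its stuck-at constant (sketch)."""
    line, stuck_value = fault
    netlist[line] = ("const", stuck_value)   # replaces the line's driver
    return netlist

# Example: detection probabilities from a (hypothetical) fault simulation.
testability = {("n17", 0): 0.002, ("n17", 1): 0.4, ("n23", 1): 0.0005}
netlist = {"n17": ("and", "a", "b"), "n23": ("or", "n17", "c")}
for fault in select_approximations(testability):
    netlist = apply_approximation(netlist, fault)
print(netlist)   # n17 and n23 are now driven by constants
```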

    Approximate logic circuits: Theory and applications

    CMOS technology scaling, the process of shrinking transistor dimensions based on Moore's law, has been the thrust behind increasingly powerful integrated circuits for over half a century. As dimensions are scaled to a few tens of nanometers, process and environmental variations can significantly alter transistor characteristics, degrading reliability and reducing the performance gains of technology scaling in CMOS designs. Although the design solutions proposed in recent years to improve the reliability of CMOS designs are power-efficient, the performance penalty associated with them further reduces the performance gains of technology scaling, and hence these solutions are not well suited for high-performance designs. This thesis proposes approximate logic circuits as a new logic synthesis paradigm for reliable, high-performance computing systems. Given a specification, an approximate logic circuit is functionally equivalent to the given specification for a "significant" portion of the input space, but has smaller delay and power than a circuit implementation of the original specification. The contributions of this thesis include (i) a general theory of approximation and efficient algorithms for automated synthesis of approximations for unrestricted random logic circuits, (ii) logic design solutions based on approximate circuits to improve the reliability of designs with negligible performance penalty, and (iii) efficient decomposition algorithms based on approximate circuits to improve the performance of designs during logic synthesis. The thesis concludes with other potential applications of approximate circuits and identifies open problems in logic decomposition and approximate circuit synthesis.
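    As a toy instance of this definition (ours, not a benchmark from the thesis): dropping one AND term from a 4-input conjunction shortens the critical path by a gate level while preserving functional equivalence on 13/16 of the input space.

```python
# Toy instance of the approximate-circuit definition: `approx` drops the
# c&d term, removing one 2-input gate level from the critical path, and
# agrees with `exact` on 13 of the 16 input combinations.
from itertools import product

exact  = lambda a, b, c, d: (a & b) & (c & d)   # two gate levels
approx = lambda a, b, c, d: a & b               # one gate level

agree = sum(exact(*x) == approx(*x) for x in product((0, 1), repeat=4))
print(f"agreement: {agree}/16")                 # -> 13/16
```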

    Reliability-aware circuit design to mitigate impact of device defects and variability in emerging memristor-based applications

    In recent decades, the semiconductor industry has driven a rapid downscaling of technology, propelling the large-scale integration of CMOS-based systems. The benefits of miniaturization are numerous, notably faster switching frequency, lower supply voltage and higher device density. However, this aggressive scaling trend has not been without challenges, such as leakage currents, yield reduction and an increase in overall system power dissipation. New materials, changes in device structures and new architectures are key to sustaining the miniaturization trend. It is foreseen that 2D integration will eventually reach an insurmountable physical and economic limit, at which point new strategic directions will be required, such as the development of new device structures, 3D architectures, or heterogeneous systems that take advantage of the best of different technologies, both those already consolidated and emergent ones that provide performance and efficiency improvements in applications. In this context, the memristor arises as one of several candidates in the race to find suitable emergent devices. The memristor, a blend of the words memory and resistor, is a passive device postulated by Leon Chua in 1971. In contrast with the other fundamental passive elements, memristors have the distinctive feature of modifying their resistance according to the charge that passes through them, and remaining unaltered when charge no longer flows. Although no physical implementation was acknowledged when the concept appeared, HP Labs claimed in 2008 the manufacture of the first real memristor. This milestone triggered unexpectedly high research activity around memristors, both in the search for new materials and structures and in potential applications. Nowadays, memristors are appreciated not only in memory systems for their nonvolatile storage properties, but also in many other fields, such as digital computing, signal-processing circuits, and non-conventional applications like neuromorphic computing or chaotic circuits. In spite of their promising features, memristors have a primary downside: they show significant device variation and limited lifetime due to degradation compared with other alternatives. This Thesis explores the challenges that memristor variation and malfunction impose on potential applications. The main goal is to propose circuits and strategies that either avoid reliability problems or take advantage of them. Across a collection of scenarios in which reliability issues are present, their impact is studied by means of simulations. The thesis is contextualized and its objectives are set out in Chapter 1. In Chapter 2 the memristor is introduced at both the conceptual and experimental levels, and different compact models are presented for later use in simulations. Chapter 3 delves into the phenomena that cause the lack of reliability in memristors, and provides models that include these defects in simulations. The rest of the Thesis covers different applications. Chapter 4 addresses nonvolatile memory systems, and specifically an online test method for faulty cells. Digital computing is treated in Chapter 5, where a solution to the yield reduction in logic operations due to memristor variability is proposed. Lastly, Chapter 6 reviews applications in the analog domain, focusing on the exploitation of results observed in faulty memristor-based interconnect media for chaotic-system synchronization purposes. Finally, the Thesis concludes in Chapter 7 with perspectives on future work.
    This work develops a novel capacitor device based on nanotechnology. The device builds on the existing metal-insulator-metal (MIM) concept, but instead of a continuous insulating layer it uses dielectric nanoparticles. The nanoparticles are mainly silicon oxide (silica) and polystyrene (PS), with diameters of 255 nm and 295 nm respectively. The nanoparticles contribute a high surface-to-volume ratio and are readily available at low cost. The deposition technology developed in this work is based on the electrospray technique, a bottom-up fabrication technology that allows batch processing and achieves a good compromise between large deposition area and low deposition time. With the aim of increasing the deposition area, the electrospray setup has been adjusted to allow deposition areas from 1 cm2 to 25 cm2. The fabricated devices, so-called nanoparticle metal-insulator-metal (NP-MIM) capacitors, offer higher capacitance values than a similar conventional capacitor with a continuous insulating layer. For silica NP-MIMs, a capacitance improvement factor of up to 1000 is reached, while polystyrene NP-MIMs exhibit a capacitance gain in the range of 11. Furthermore, silica NP-MIMs show capacitive behaviour in specific frequency ranges that depend on the humidity and the thickness of the nanoparticle layer, while polystyrene NP-MIMs always maintain their capacitive behaviour. The fabricated devices have been characterized by scanning electron microscopy (SEM) measurements, complemented with focused ion beam (FIB) milling, to characterize the topography of the NP-MIMs. The devices have also been characterized by impedance spectroscopy measurements at different temperatures and humidities. The origin of the increased capacitance is associated in part with the humidity at the nanoparticle interfaces. A circuit model based on distributed elements has been developed to fit and predict the electrical behaviour of the NP-MIMs. In summary, this thesis presents the design, fabrication, characterization and modelling of a novel and promising nanoparticle metal-insulator-metal capacitor that may open the way to the development of a new MIM supercapacitor technology.
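    Returning to the memristor thesis above: the compact models of Chapter 2 are not reproduced in the abstract, but as a hedged illustration, the linear ion drift model associated with the 2008 HP device can be simulated in a few lines. The parameter values below are textbook-style assumptions, not fitted values from the thesis.

```python
# Minimal sketch of a memristor compact model (linear ion drift, in the
# style of HP's 2008 device). Parameters are illustrative defaults.
import numpy as np

R_ON, R_OFF = 100.0, 16e3     # limiting resistances (ohms), assumption
D = 10e-9                     # device thickness (m)
MU_V = 1e-14                  # dopant mobility (m^2 s^-1 V^-1)

def simulate(v_of_t, t, x0=0.1):
    """Integrate the state x (doped fraction) under a voltage drive."""
    x = x0
    current = np.zeros_like(t)
    for k in range(len(t) - 1):
        r = R_ON * x + R_OFF * (1.0 - x)        # memristance M(x)
        i = v_of_t(t[k]) / r
        current[k] = i
        dx = MU_V * R_ON / D**2 * i             # linear ion drift law
        x = min(max(x + dx * (t[k + 1] - t[k]), 0.0), 1.0)
    return current

t = np.linspace(0.0, 2e-3, 20001)
i = simulate(lambda tk: np.sin(2 * np.pi * 1e3 * tk), t)  # 1 kHz sine drive
print("peak current (A):", i.max())   # pinched-hysteresis-style response
```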
