
    A Review of Bayesian Methods in Electronic Design Automation

    The utilization of Bayesian methods has been widely acknowledged as a viable solution for tackling various challenges in electronic integrated circuit (IC) design under stochastic process variation, including circuit performance modeling, yield/failure rate estimation, and circuit optimization. As the post-Moore era brings about new technologies (such as silicon photonics and quantum circuits), many of the associated problems are similar to those encountered in electronic IC design and can be addressed with Bayesian methods. Motivated by this observation, we present a comprehensive review of Bayesian methods in electronic design automation (EDA). By doing so, we hope to equip researchers and designers with the ability to apply Bayesian methods to stochastic problems in electronic circuits and beyond. Comment: 24 pages, a draft version. We welcome comments and feedback, which can be sent to [email protected]
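    One of the recurring stochastic problems the review covers is yield/failure-rate estimation from Monte Carlo samples. The sketch below shows the simplest Bayesian treatment of that problem, a conjugate Beta-Binomial posterior over the failure probability; the performance model, variation sigma, and failure threshold are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical performance metric degraded by Gaussian process variation.
# Nominal value, sigma, and failure threshold are illustrative assumptions.
def read_margin(n_samples):
    return 0.30 + 0.05 * rng.standard_normal(n_samples)

FAIL_THRESHOLD = 0.15

# Monte Carlo pass/fail data.
n = 5000
failures = int(np.sum(read_margin(n) < FAIL_THRESHOLD))

# Conjugate Beta(alpha0, beta0) prior on the failure probability p:
# after k failures in n trials, the posterior is Beta(alpha0 + k, beta0 + n - k).
alpha0, beta0 = 1.0, 1.0                      # uniform prior
alpha_post, beta_post = alpha0 + failures, beta0 + n - failures

posterior_mean = alpha_post / (alpha_post + beta_post)
ci_low, ci_high = np.quantile(rng.beta(alpha_post, beta_post, 100_000), [0.025, 0.975])

print(f"failures: {failures}/{n}")
print(f"posterior mean failure rate: {posterior_mean:.2e}")
print(f"95% credible interval: [{ci_low:.2e}, {ci_high:.2e}]")
```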

    Efficient Aging-aware Failure Probability Estimation Using Augmented Reliability and Subset Simulation

    A circuit-aging simulation method that efficiently calculates the temporal change of rare circuit-failure probability is proposed. Conventional methods require a long computational time because the failure probability must be computed separately at each device age, whereas the proposed Monte Carlo based method requires only a single set of simulations. By applying the augmented reliability and subset simulation framework, the change of the failure probability over the lifetime of the device can be evaluated by analyzing the Monte Carlo samples. Combined with a two-step sample generation technique, the proposed method reduces the computational time to about 1/6 of that of the conventional method while maintaining sufficient estimation accuracy.
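    The core estimator named in the abstract is subset simulation, which reaches rare failure probabilities by chaining conditional levels of probability p0 instead of waiting for raw Monte Carlo hits. Below is a minimal one-dimensional sketch of that idea under illustrative assumptions (limit-state function, proposal width, p0); it is not the paper's aging-aware, augmented-reliability formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative limit-state function: failure when g(x) <= 0.
# With x ~ N(0, 1), P(failure) = P(x >= 4.5) ~ 3.4e-6, far too rare for plain Monte Carlo.
def g(x):
    return 4.5 - x

def subset_simulation(n_per_level=2000, p0=0.1, max_levels=10):
    x = rng.standard_normal(n_per_level)   # level 0: plain Monte Carlo
    gx = g(x)
    prob = 1.0                             # probability of the current conditioning event
    for _ in range(max_levels):
        # Intermediate threshold: the p0-quantile of g (smaller g = deeper into failure).
        n_seeds = int(p0 * n_per_level)
        idx = np.argsort(gx)[:n_seeds]
        threshold = gx[idx[-1]]
        if threshold <= 0.0:               # failure domain reached at this level
            return prob * np.mean(gx <= 0.0)
        prob *= p0
        # Conditional sampling on {g(x) <= threshold}: Metropolis chains from the seeds.
        xs, gs = [], []
        for xi, gi in zip(x[idx], gx[idx]):
            for _ in range(n_per_level // n_seeds):
                cand = xi + rng.standard_normal()
                # Accept w.r.t. the standard-normal prior, then enforce the level constraint.
                accept = rng.random() < min(1.0, np.exp(0.5 * (xi**2 - cand**2)))
                if accept and g(cand) <= threshold:
                    xi, gi = cand, g(cand)
                xs.append(xi)
                gs.append(gi)
        x, gx = np.array(xs), np.array(gs)
    return prob * np.mean(gx <= 0.0)

print(f"subset-simulation estimate: {subset_simulation():.2e}")
print(f"exact reference P(x >= 4.5): {3.4e-6:.1e}")
```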

    Wireless Monitoring Systems for Long-Term Reliability Assessment of Bridge Structures based on Compressed Sensing and Data-Driven Interrogation Methods.

    The state of the nation’s highway bridges has garnered significant public attention due to large inventories of aging assets and insufficient funds for repair. Current management methods are based on visual inspections that have many known limitations, including reliance on surface evidence of deterioration and subjectivity introduced by trained inspectors. To address the limitations of current inspection practice, structural health monitoring (SHM) systems can be used to provide quantitative measures of structural behavior and an objective basis for condition assessment. SHM systems are intended to be a cost-effective monitoring technology that also automates the processing of data to characterize damage and provide decision information to asset managers. Unfortunately, this realization of SHM systems does not currently exist. In order for SHM to be realized as a decision support tool for bridge owners engaged in performance- and risk-based asset management, technological hurdles must still be overcome. This thesis focuses on advancing wireless SHM systems. An innovative wireless monitoring system was designed for permanent deployment on bridges in cold northern climates, which pose an added challenge because the potential for solar harvesting is reduced and battery charging is slowed. First, efforts were made to advance energy-efficient usage strategies for wireless sensor networks (WSNs). With WSN energy consumption proportional to the amount of data transmitted, data reduction strategies are prioritized. A novel data compression paradigm termed compressed sensing is advanced for embedment in a wireless sensor microcontroller. In addition, fatigue monitoring algorithms are embedded for local data processing, leading to dramatic data reductions. In the second part of the thesis, a radical top-down design strategy (in contrast to global vibration strategies) for a monitoring system is explored to target specific damage concerns of bridge owners. Data-driven algorithmic approaches are created for statistical performance characterization of long-term bridge response. Statistical process control and reliability index monitoring are advanced as a scalable and autonomous means of transforming data into information relevant to bridge risk management. Validation of the wireless monitoring system architecture is performed using the Telegraph Road Bridge (Monroe, Michigan), a multi-girder short-span highway bridge that represents a major fraction of the U.S. national inventory. PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116749/1/ocosean_1.pd
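    The compressed-sensing component rests on the fact that a sparse signal can be recovered from far fewer random measurements than Nyquist samples, so a sensor node only has to transmit the compressed measurements. The sketch below illustrates that recovery step with orthogonal matching pursuit in numpy; the signal model, dimensions, and choice of solver are illustrative assumptions rather than the embedded implementation described in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative setup: a length-n signal that is k-sparse in the identity basis,
# observed through m << n random projections y = A @ x (the "compressed" data).
n, m, k = 256, 64, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy sparse recovery of x from y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the selected support, then update the residual.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coeffs
    return x_hat

x_hat = omp(A, y, k)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

    The usual split in such monitoring architectures is that the cheap projection y = A @ x runs on the wireless node while the recovery runs offline at the base station, which is what keeps the transmitted payload small.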

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces the manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
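    A representative pattern behind many such approaches is replacing repeated expensive simulations with a learned surrogate that predicts a performance metric from process and design parameters. The sketch below trains one such surrogate with scikit-learn on synthetic data; the parameter ranges, the delay model, and the choice of a random forest are all illustrative assumptions, not drawn from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Illustrative dataset: (threshold-voltage shift, channel length, supply voltage)
# mapped to a gate delay by a made-up nonlinear function plus measurement noise.
n = 2000
X = rng.uniform([-0.05, 20e-9, 0.7], [0.05, 30e-9, 1.0], size=(n, 3))
delay = 1e-9 * (1.0 + 8.0 * X[:, 0] + 2e7 * X[:, 1]) / (X[:, 2] - 0.3)
y = delay + rng.normal(0.0, 0.02e-9, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the surrogate once, then reuse it in place of repeated simulator runs.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
rel_err = np.mean(np.abs(model.predict(X_test) - y_test) / y_test)
print(f"mean relative prediction error on held-out points: {rel_err:.1%}")
```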

    Statistical Performance Modeling of SRAMs

    Yield analysis is a critical step in memory design, where a variety of performance constraints must be considered. Traditional circuit-level Monte Carlo simulation for yield estimation of Static Random Access Memory (SRAM) cells is quite time consuming because of their characteristically low failure rates, while statistical yield sensitivity analysis is attractive for its high efficiency. This thesis proposes a novel statistical model that performs yield sensitivity prediction for SRAM cells at the simulation level and achieves a significant runtime speedup over regular circuit simulation. Based on the Kriging method that is widely used in geostatistics, we develop a series of statistical model building and updating strategies to obtain satisfactory accuracy and efficiency in SRAM yield sensitivity analysis. In general, this model applies to yield and sensitivity evaluation with varying design parameters under the constraints of most SRAM performance metrics. Moreover, it is potentially suitable for any designated distribution of the process variation, regardless of the sampling method.
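    Kriging builds a Gaussian process surrogate of an expensive response surface from a modest number of simulations and then predicts, with uncertainty, anywhere in the design space. The sketch below shows that workflow with scikit-learn's Gaussian process regressor standing in for a Kriging model; the SNM function, parameter ranges, spec limit, and kernel are illustrative assumptions, not the model-building and updating strategies developed in the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

# Illustrative "expensive simulator": static noise margin (SNM) as a made-up smooth
# function of two parameters (pull-down/pull-up width ratio, supply voltage).
# In practice each evaluation of this function would be a SPICE run.
def snm(params):
    ratio, vdd = params[:, 0], params[:, 1]
    return 0.10 * vdd + 0.05 * np.log(ratio) - 0.02 * (ratio - 1.8) ** 2

# A small budget of "simulations", then a Kriging (Gaussian process) surrogate.
X_train = rng.uniform([1.0, 0.7], [3.0, 1.1], size=(40, 2))
y_train = snm(X_train) + rng.normal(0.0, 1e-3, len(X_train))

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.5, 0.1])
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-6, normalize_y=True)
gp.fit(X_train, y_train)

# Surrogate-based yield estimate at a fixed design point under supply-voltage variation.
vdd_samples = rng.normal(0.9, 0.03, 10_000)
X_query = np.column_stack([np.full_like(vdd_samples, 2.0), vdd_samples])
snm_pred, snm_std = gp.predict(X_query, return_std=True)
yield_est = np.mean(snm_pred > 0.12)          # illustrative SNM spec limit
print(f"surrogate-based yield estimate: {yield_est:.3f}")
print(f"mean predictive standard deviation: {snm_std.mean():.4f}")
```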

    Statistical analysis and design of subthreshold operation memories

    This thesis presents novel methods, based on a combination of well-known statistical techniques, for faster estimation of memory yield, and their application in the design of energy-efficient subthreshold memories. The emergence of size-constrained Internet-of-Things (IoT) devices and the proliferation of the wearable market have brought forward the challenge of achieving the maximum energy efficiency per operation in these battery-operated devices. Achieving this sought-after minimum energy operation is possible under sub-threshold operation of the circuit. However, reliable memory operation is currently unattainable at these ultra-low operating voltages because of the memory circuit's vanishing noise margins, which shrink further in the presence of random process variations. The statistical methods presented in this thesis make the yield optimization of sub-threshold memories computationally feasible by reducing the SPICE simulation overhead. We present novel modifications to statistical sampling techniques that reduce the SPICE simulation overhead in estimating memory failure probability. These sampling schemes provide a 40x reduction in finding the most probable failure point and a 10x reduction in estimating the failure probability with SPICE simulations, compared to existing proposals. We then provide a novel method to create surrogate models of the memory margins with better extrapolation capability than traditional regression methods. These models, based on Gaussian process regression, encode the sensitivity of the memory margins with respect to each individual threshold variation source in a one-dimensional kernel. We find that our proposed additive-kernel models have 32% smaller out-of-sample error (that is, better extrapolation capability outside the training set) than a six-dimensional universal kernel such as the Radial Basis Function (RBF). The thesis also explores topological modifications to the SRAM bitcell to achieve faster read operation at sub-threshold operating voltages. We present a ten-transistor SRAM bitcell that achieves 2x faster read operation than existing ten-transistor sub-threshold SRAM bitcells, while ensuring similar noise margins. The SRAM bitcell provides a 70% reduction in dynamic energy at the cost of a 42% increase in leakage energy per read operation. Finally, we investigate the energy efficiency of eDRAM gain cells as an alternative to SRAM bitcells in size-constrained IoT devices. We find that reducing their write-path leakage current is the only way to reduce the read energy at the Minimum Energy operation Point (MEP). Further, we study the effect of transistor up-sizing in the presence of threshold voltage variations on the mean MEP read energy by performing a statistical analysis based on an ANOVA test of a full-factorial experimental design.
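    In this context, the most probable failure point (MPFP) is the failure configuration that is most likely under the process-variation distribution, and shifting the sampling distribution toward it is what makes rare memory failures observable with few SPICE runs. Below is a minimal two-dimensional sketch of that mean-shift importance sampling, with an illustrative linear margin function and a crude MPFP search; it is not the specific sampling modifications proposed in the thesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Illustrative bitcell margin as a function of two threshold-voltage shifts, expressed
# in sigma units of the process variation; failure when the margin drops below zero.
def margin(x):
    return 5.5 - x[:, 0] - 0.8 * x[:, 1]

# Step 1: crude search for the most probable failure point (MPFP), i.e. the failure
# sample closest to the origin of the standardized variation space.
x_search = 2.0 * rng.standard_normal((20_000, 2))      # widened search distribution
fails = x_search[margin(x_search) < 0]
mpfp = fails[np.argmin(np.linalg.norm(fails, axis=1))]

# Step 2: mean-shift importance sampling centered at the MPFP estimate.
n = 20_000
x_is = mpfp + rng.standard_normal((n, 2))
log_w = (stats.norm.logpdf(x_is).sum(axis=1)
         - stats.norm.logpdf(x_is, loc=mpfp).sum(axis=1))
p_fail = np.mean(np.exp(log_w) * (margin(x_is) < 0))

print(f"MPFP estimate: {mpfp}")
print(f"importance-sampling failure probability: {p_fail:.2e}")
# Reference: exact P(x1 + 0.8*x2 > 5.5) for x ~ N(0, I).
print(f"exact: {1 - stats.norm.cdf(5.5 / np.sqrt(1 + 0.8**2)):.2e}")
```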

    Solid State Circuits Technologies

    The evolution of solid-state circuit technology has a long history within a relatively short period of time. This technology has led to the modern information society, connecting people and tools, creating a large market, and enabling many types of products and applications. Solid-state circuit technology continues to evolve through breakthroughs and improvements every year. This book is devoted to reviewing and presenting novel approaches to some of the main issues in this exciting and vigorous technology. The book is composed of 22 chapters, written by authors from 30 different institutions located in 12 different countries throughout the Americas, Asia and Europe, reflecting the wide international contribution to the book. The broad range of subjects presented offers a general overview of the main issues in modern solid-state circuit technology. Furthermore, the book offers an in-depth analysis of specific subjects for specialists. We believe the book is of great scientific and educational value for many readers. I am profoundly indebted to all of those involved in this work for their support. First and foremost, I would like to acknowledge and thank the authors, who worked hard and generously agreed to share their results and knowledge. Second, I would like to express my gratitude to the Intech team, who invited me to edit the book and gave me their full support, making for a fruitful experience as we worked together to assemble this book.