1,232 research outputs found

    Robust and Efficient Uncertainty Quantification and Validation of RFIC Isolation

    Get PDF
    Modern communication and identification products impose demanding constraints on the reliability of their components. Because of this, statistical constraints increasingly enter the optimization formulations of electronic products. Yield constraints often require efficient sampling techniques to obtain uncertainty quantification also at the tails of the distributions. These sampling techniques should outperform standard Monte Carlo techniques, since the latter are normally not efficient enough to deal with tail probabilities. One such technique, Importance Sampling, has successfully been applied to optimize Static Random Access Memories (SRAMs) while guaranteeing very small failure probabilities, even going beyond 6-sigma variations of the parameters involved. Apart from this, emerging uncertainty quantification techniques offer expansions of the solution that serve as a response surface for statistics and optimization. To efficiently derive the coefficients in these expansions, one either has to solve a large number of problems or one huge combined problem. Here parameterized Model Order Reduction (MOR) techniques can be used to reduce the workload. To also reduce the number of parameters, we identify those that affect the variance only in a minor way. These parameters can simply be set to a fixed value. The remaining parameters can be viewed as dominant. Preservation of the variation also allows one to make statements about the approximation accuracy obtained by the parameter-reduced problem. This is illustrated on an RLC circuit. Additionally, the MOR technique used should not affect the variance significantly. Finally, we consider a methodology for reliable RFIC isolation using floor-plan modeling and isolation grounding. Simulations show good agreement with measurements.
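    As a hedged illustration of the parameter-reduction idea above, the sketch below ranks uncertain parameters by the share of output variance that disappears when each one is frozen at its nominal value. The response function and its coefficients are invented stand-ins (the paper's actual RLC model is not given here), so this is a minimal variance-screening sketch, not the paper's method.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response of an RLC-like circuit to 4 uncertain parameters;
# a made-up linear-plus-noise stand-in for the paper's actual model.
def response(p):
    return p[..., 0] + 0.8 * p[..., 1] + 0.05 * p[..., 2] + 0.01 * p[..., 3]

n = 20000
base = rng.standard_normal((n, 4))
total_var = response(base).var()

# Crude first-order sensitivity: freeze one parameter at its mean and
# measure how much of the output variance disappears.
for i in range(4):
    frozen = base.copy()
    frozen[:, i] = 0.0          # fix parameter i to its nominal value
    lost = 1.0 - response(frozen).var() / total_var
    print(f"parameter {i}: share of variance ~ {lost:.3f}")

# Parameters whose share is negligible can be fixed, leaving a smaller
# "dominant" parameter set for the UQ expansion and for parameterized MOR.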

    A Review of Bayesian Methods in Electronic Design Automation

    Full text link
    The utilization of Bayesian methods has been widely acknowledged as a viable solution for tackling various challenges in electronic integrated circuit (IC) design under stochastic process variation, including circuit performance modeling, yield/failure rate estimation, and circuit optimization. As the post-Moore era brings new technologies (such as silicon photonics and quantum circuits), many of the associated issues are similar to those encountered in electronic IC design and can be addressed using Bayesian methods. Motivated by this observation, we present a comprehensive review of Bayesian methods in electronic design automation (EDA). By doing so, we hope to equip researchers and designers with the ability to apply Bayesian methods to stochastic problems in electronic circuits and beyond.
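    To make the yield-estimation use case concrete, here is a minimal Bayesian sketch: a conjugate Beta-Binomial posterior over yield from pass/fail simulation counts. The counts are made up for illustration, and the review surveys far richer models; this only shows the flavor of the approach.

import numpy as np

rng = np.random.default_rng(1)

# Suppose 500 Monte Carlo circuit simulations produced 488 passing samples
# (invented numbers for illustration).
n_sims, n_pass = 500, 488

# Beta(1, 1) prior on the yield; Beta-Binomial conjugacy gives the
# posterior in closed form: Beta(1 + passes, 1 + failures).
a, b = 1 + n_pass, 1 + (n_sims - n_pass)

posterior = rng.beta(a, b, size=100000)
mean = a / (a + b)
lo, hi = np.quantile(posterior, [0.025, 0.975])
print(f"posterior mean yield {mean:.4f}, 95% credible interval [{lo:.4f}, {hi:.4f}]")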

    Statistical analysis and design of subthreshold operation memories

    Get PDF
    This thesis presents novel methods, based on a combination of well-known statistical techniques, for faster estimation of memory yield, and their application in the design of energy-efficient subthreshold memories. The emergence of size-constrained Internet-of-Things (IoT) devices and the proliferation of the wearable market have brought forward the challenge of achieving maximum energy efficiency per operation in these battery-operated devices. Achieving this sought-after minimum-energy operation is possible under subthreshold operation of the circuit. However, reliable memory operation is currently unattainable at these ultra-low operating voltages because of the memory circuit's vanishing noise margins, which shrink further in the presence of random process variations. The statistical methods presented in this thesis make the yield optimization of subthreshold memories computationally feasible by reducing the SPICE simulation overhead. We present novel modifications to statistical sampling techniques that reduce the SPICE simulation overhead in estimating memory failure probability. These sampling schemes provide a 40x reduction in SPICE simulations when finding the most probable failure point and a 10x reduction when estimating the failure probability, compared to existing proposals. We then provide a novel method to create surrogate models of the memory margins with better extrapolation capability than traditional regression methods. These models, based on Gaussian process regression, encode the sensitivity of the memory margins with respect to each individual threshold-variation source in a one-dimensional kernel. We find that our proposed additive-kernel-based models have 32% smaller out-of-sample error (that is, better extrapolation capability outside the training set) than a six-dimensional universal kernel such as the Radial Basis Function (RBF). The thesis also explores topological modifications to the SRAM bitcell to achieve faster read operation at subthreshold operating voltages. We present a ten-transistor SRAM bitcell that achieves 2x faster read operation than existing ten-transistor subthreshold SRAM bitcells, while ensuring similar noise margins. The SRAM bitcell provides a 70% reduction in dynamic energy at the cost of a 42% increase in leakage energy per read operation. Finally, we investigate the energy efficiency of eDRAM gain-cells as an alternative to SRAM bitcells in size-constrained IoT devices. We find that reducing their write-path leakage current is the only way to reduce the read energy at the Minimum Energy operation Point (MEP). Further, we study the effect of transistor up-sizing in the presence of threshold voltage variations on the mean MEP read energy by performing statistical analysis based on the ANOVA test of a full-factorial experimental design.
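    The additive-kernel idea above can be sketched in a few lines: a Gaussian process whose kernel is a sum of one-dimensional squared-exponential kernels, one per threshold-variation source. The margin function, data sizes, and length scales below are all invented (the thesis's SPICE data is not available here); this is only a minimal sketch of the modeling choice, not the thesis's tuned models.

import numpy as np

rng = np.random.default_rng(2)

def k_add(X, Y, ls):
    """Additive kernel: a sum of 1-D squared-exponential kernels,
    one per input dimension (here, per threshold-variation source)."""
    K = np.zeros((X.shape[0], Y.shape[0]))
    for i, l in enumerate(ls):
        d = X[:, i:i+1] - Y[:, i:i+1].T
        K += np.exp(-0.5 * (d / l) ** 2)
    return K

# Synthetic stand-in for a memory margin as a function of 6 threshold
# voltage deviations.
def margin(X):
    return X @ np.array([1.0, -0.7, 0.4, -0.2, 0.1, 0.05]) + 0.1 * np.sin(X[:, 0])

Xtr = rng.standard_normal((80, 6))
ytr = margin(Xtr) + 0.01 * rng.standard_normal(80)

ls = np.ones(6)                               # per-dimension length scales (untuned)
K = k_add(Xtr, Xtr, ls) + 1e-4 * np.eye(80)   # jitter for numerical stability
alpha = np.linalg.solve(K, ytr)

# Test points farther from the origin than the training data, mimicking
# the extrapolation setting the thesis evaluates.
Xte = 2.5 * rng.standard_normal((200, 6))
pred = k_add(Xte, Xtr, ls) @ alpha            # GP posterior mean
err = np.sqrt(np.mean((pred - margin(Xte)) ** 2))
print(f"out-of-sample RMSE of the additive-kernel GP: {err:.3f}")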

    Design and Robustness Analysis on Non-volatile Storage and Logic Circuit

    Get PDF
    By combining the flexibility of MOS logic and the non-volatility of spintronic devices, Spin-MOS logic and storage circuitry offer a promising approach to implementing highly integrated, power-efficient, and nonvolatile computing and storage systems. Besides the persistent errors due to process variations, however, the functional correctness of Spin-MOS circuitry suffers from additional non-persistent errors incurred by the randomness of spintronic device operations, i.e., thermal fluctuations. This work quantitatively investigates the impact of thermal fluctuations on the operation of two typical Spin-MOS circuits: a one-transistor, one-magnetic-tunnel-junction (1T1J) spin-transfer torque random access memory (STT-RAM) cell and a nonvolatile latch design. A new nonvolatile latch design is proposed based on magnetic tunnel junction (MTJ) devices. In standby mode, the latched data can be retained in the MTJs without consuming any power. Two types of operation errors can occur, namely persistent and non-persistent errors. These are quantitatively analyzed by including models for process variations and thermal fluctuations during the read and write operations. A mixture importance sampling methodology is applied to enable yield-driven design and to extend its application beyond memories to peripheral circuits and logic blocks. Several possible design techniques to reduce the thermally induced non-persistent error rate are also discussed.
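    A minimal sketch of mixture importance sampling follows, on a toy 2-D problem rather than the paper's Spin-MOS circuits: the proposal is a mixture of the nominal density and a component shifted toward a made-up failure region, and the weights use the full mixture density in the denominator, which keeps them bounded.

import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)

def norm_pdf(x, mu):
    # Density of N(mu, I) in 2-D, evaluated row-wise.
    d = x - mu
    return np.exp(-0.5 * np.sum(d * d, axis=1)) / (2.0 * np.pi)

def fails(x):
    return x[:, 0] > 4.0       # toy failure region: first parameter beyond 4 sigma

n = 50000
mu0 = np.zeros(2)              # nominal parameter distribution
mu1 = np.array([4.0, 0.0])     # component centered near the failure region
lam = 0.5

# Draw from the two-component mixture proposal.
comp = rng.random(n) < lam
x = rng.standard_normal((n, 2)) + np.where(comp[:, None], mu1, mu0)

# Importance weights: nominal density over mixture density.
w = norm_pdf(x, mu0) / (lam * norm_pdf(x, mu1) + (1 - lam) * norm_pdf(x, mu0))
p_fail = np.mean(fails(x) * w)
exact = 0.5 * erfc(4.0 / sqrt(2.0))
print(f"estimated failure probability {p_fail:.2e} (exact {exact:.2e})")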

    Importance sampling for high speed statistical Monte-Carlo simulations

    Get PDF
    As transistor dimensions of Static Random Access Memory (SRAM) become smaller with each new technology generation, they become increasingly susceptible to statistical variations in their parameters. These statistical variations can result in failing memory. SRAM is used as a building block for the construction of large Integrated Circuits (ICs). To ensure SRAM does not degrade the yield (fraction of functional devices) of ICs, very low failure probabilities of Pfail = 10^-10 are strived for. For instance, in SRAM memory design one aims at a 0.1% yield loss for a 10 Mbit memory, which means that 1 in 10 billion cells fails (Pfail = 10^-10; this corresponds to an occurrence at -6.4σ of a normal distribution). To estimate such probabilities, traditional Monte Carlo simulations are not sufficient and more advanced techniques are required. Importance Sampling is a technique that is relatively easy to implement and provides sufficiently accurate results. Importance sampling is a well-known technique in statistics for estimating the occurrence of rare events. Rare or extreme events can be associated with dramatic costs, as in finance, or with safety concerns in the environment (dikes, power plants). Recently this technique has also received new attention in circuit design. Importance sampling tunes Monte Carlo to the region in parameter space from which the rare events are generated. In this way a speed-up of several orders of magnitude can be achieved compared to standard Monte Carlo methods. We describe the underlying mathematics. Experiments reveal the intrinsic power of the method. The efficiency of the method increases as the dimension of the parameter space increases. The method could be a valuable extension to the statistical capabilities of any circuit simulator. A Matlab implementation is included in the Appendix.
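    The paper's Appendix provides a Matlab implementation; as a hedged stand-in, here is the same basic idea in Python for the 1-D case the abstract quotes: estimate P[X > 6.4] for X ~ N(0,1), roughly the 10^-10 regime, by sampling from a mean-shifted Gaussian and reweighting by the density ratio. Plain Monte Carlo would need on the order of 10^12 samples for this tail.

import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(4)

t = 6.4                 # tail threshold (~ -6.4 sigma event)
n = 100000

# Proposal: shift the sampling distribution into the failure region.
y = rng.standard_normal(n) + t

# Weight by the density ratio p(y)/q(y) of N(0,1) over N(t,1);
# the ratio simplifies to exp(t^2/2 - t*y).
w = np.exp(0.5 * t * t - t * y)
est = np.mean((y > t) * w)

# Closed-form tail value for comparison (erfc form of the normal survival function).
exact = 0.5 * erfc(t / sqrt(2.0))
print(f"IS estimate {est:.3e} vs exact {exact:.3e}")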

    Efficient cache architectures for reliable hybrid voltage operation using EDC codes

    Get PDF
    Semiconductor technology evolution enables the design of sensor-based, battery-powered, ultra-low-cost chips (e.g., below 1 p) required for new market segments such as body, urban-life, and environment monitoring. Caches have been shown to be the largest energy and area consumers in those chips. This paper proposes a novel, hybrid-operation (high Vcc, ultra-low Vcc), single-Vcc-domain cache architecture based on replacing energy-hungry bitcells (e.g., 10T) with more energy-efficient and smaller cells (e.g., 8T) enhanced with Error Detection and Correction (EDC) features for high reliability and performance predictability. Our architecture is shown to largely outperform existing solutions in terms of energy and area.
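    The paper does not specify which EDC code it uses; as a generic illustration of single-error correction of the kind such features rely on, here is a minimal Hamming(7,4) encode/correct sketch.

# Minimal Hamming(7,4) single-error-correcting code, a generic stand-in
# for the EDC features added to the smaller bitcells.

def encode(d):                     # d: list of 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p4, d2, d3, d4]        # codeword positions 1..7

def correct(c):                    # c: 7-bit codeword, at most 1 flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4            # 1-based error position, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1                   # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]            # recover the data bits

word = [1, 0, 1, 1]
cw = encode(word)
cw[4] ^= 1                                      # inject a single bit error
assert correct(cw) == word
print("single-bit error corrected:", word)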

    HyDe: A Hybrid PCM/FeFET/SRAM Device-search for Optimizing Area and Energy-efficiencies in Analog IMC Platforms

    Full text link
    Today, there is a plethora of In-Memory Computing (IMC) devices, such as SRAMs, PCMs, and FeFETs, that emulate convolutions on crossbar arrays with high throughput. Each IMC device offers its own pros and cons for the inference of Deep Neural Networks (DNNs) on crossbars in terms of area overhead, programming energy, and non-idealities. A design-space exploration is therefore imperative to derive a hybrid-device architecture optimized for accurate DNN inference under the impact of non-idealities from multiple devices, while maintaining competitive area and energy-efficiencies. We propose a two-phase search framework (HyDe) that exploits the best of all worlds offered by multiple devices to determine an optimal hybrid-device architecture for a given DNN topology. Our hybrid models achieve up to 2.30-2.74x higher TOPS/mm^2 at 22-26% higher energy-efficiencies than baseline homogeneous models for a VGG16 DNN topology. We further propose a feasible implementation of the HyDe-derived hybrid-device architectures in the 2.5D design space using chiplets, to reduce design effort and cost in hardware fabrication involving multiple technology processes. Accepted to the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS).
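    To convey the flavor of a hybrid-device search, here is a deliberately simplified sketch: a brute-force per-layer device assignment minimizing a toy area-plus-energy cost under a crude noise budget standing in for the accuracy constraint. All numbers, layer names, and the search itself are invented for illustration; HyDe's actual two-phase framework, cost models, and non-ideality modeling are far more involved.

from itertools import product

# Toy per-layer device characteristics (area per MAC, energy per MAC, and a
# noise score standing in for non-idealities). Numbers are illustrative only.
devices = {
    "SRAM":  {"area": 3.0, "energy": 1.0, "noise": 0.01},
    "PCM":   {"area": 1.0, "energy": 2.0, "noise": 0.05},
    "FeFET": {"area": 1.5, "energy": 1.2, "noise": 0.03},
}

layers = ["conv1", "conv2", "conv3", "fc"]
noise_budget = 0.125   # crude stand-in for an accuracy constraint

best = None
# Exhaustive search over per-layer assignments (3^4 combinations here).
for assign in product(devices, repeat=len(layers)):
    noise = sum(devices[d]["noise"] for d in assign)
    if noise > noise_budget:
        continue                     # violates the accuracy proxy
    cost = sum(devices[d]["area"] + devices[d]["energy"] for d in assign)
    if best is None or cost < best[0]:
        best = (cost, assign)

print("best hybrid assignment:", dict(zip(layers, best[1])), "cost:", best[0])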