
    A Review of Bayesian Methods in Electronic Design Automation

    The utilization of Bayesian methods has been widely acknowledged as a viable solution for tackling various challenges in electronic integrated circuit (IC) design under stochastic process variation, including circuit performance modeling, yield/failure rate estimation, and circuit optimization. As the post-Moore era brings about new technologies (such as silicon photonics and quantum circuits), many of the associated issues are similar to those encountered in electronic IC design and can be addressed using Bayesian methods. Motivated by this observation, we present a comprehensive review of Bayesian methods in electronic design automation (EDA). By doing so, we hope to equip researchers and designers with the ability to apply Bayesian methods to solving stochastic problems in electronic circuits and beyond.
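
    To make the yield/failure-rate estimation task mentioned above concrete, here is a minimal sketch of one common Bayesian treatment: Monte Carlo pass/fail simulations combined with a conjugate Beta posterior over the failure rate. The `simulate_circuit` stand-in, the spec limit, and the sample budget are illustrative assumptions, not details taken from the review.

```python
# Hedged sketch: Bayesian failure-rate estimation from Monte Carlo samples.
# `simulate_circuit` and SPEC_LIMIT are placeholders; the Beta-Binomial
# posterior itself is standard conjugate analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_circuit(params):
    # Placeholder for a real circuit simulation under process variation.
    return params.sum()  # stand-in "performance" metric

SPEC_LIMIT = 2.0   # assumed pass/fail threshold
N_SAMPLES = 1000   # assumed simulation budget

# Draw process-variation samples (assumed independent standard normals).
samples = rng.standard_normal((N_SAMPLES, 4))
failures = sum(simulate_circuit(s) > SPEC_LIMIT for s in samples)

# Beta(1, 1) prior updated with the observed pass/fail counts.
posterior = stats.beta(1 + failures, 1 + N_SAMPLES - failures)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"failure rate ~ {posterior.mean():.4f}, 95% credible interval [{lo:.4f}, {hi:.4f}]")
```

    The credible interval narrows as the simulation budget grows, which is the budget-versus-confidence trade-off such methods navigate.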

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort required to understand and process data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML-based automated approaches that have been introduced for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Enhancing Variation-aware Analog Circuits Sizing

    Today's analog design and verification face significant challenges due to circuit complexity and short time-to-market windows. Moreover, variations in design parameters have an adverse impact on the correctness and performance of analog circuits. Circuit sizing consists of determining the device sizes and biasing voltages and currents such that the circuit satisfies its specifications. Traditionally, analog circuit sizing has been carried out by optimization-based methods, which will of course remain important in the future. Unfortunately, these techniques cannot guarantee exhaustive coverage of the design search space and hence cannot rule out the existence of higher-quality design solutions. The sizing problem becomes more complicated and computationally expensive under design parameter fluctuations. Indeed, existing yield analysis methods are computationally expensive and still encounter issues in problems with a high-dimensional process parameter space. In this thesis, we present new approaches for enhancing variation-aware analog circuit sizing. The circuit sizing problem is encoded using nonlinear constraints. A new algorithm using Satisfiability Modulo Theory (SMT) solving techniques exhaustively explores the analog design space and computes a continuous set of feasible sizing solutions. Next, a yield optimization stage selects the candidate design solution with the highest yield rate in the presence of process parameter variation. For this purpose, a novel method for the computation of parametric yield is proposed. The method combines the advantages of sparse regression and SMT solving techniques. The key idea is to characterize the failure regions as a collection of hyperrectangles in the parameter space. The yield estimation is based on a geometric calculation of the probabilistic volumes subtended by the located hyperrectangles. The method can provide very large speed-ups over Monte Carlo methods when high prediction accuracy is required. A new approach for improving analog yield optimization is also proposed. The optimization is performed in two steps. First, a global optimization phase samples the most promising sub-regions of the feasible design space and locates a design point near the optimal solution. Second, a local optimization phase uses the near-optimal solution as a starting point and constructs linear interpolating models of the yield to explore the basin of convergence and reach the global optimum. We illustrate the efficiency of the proposed methods on various analog circuits. The application of the yield analysis method to an integrated ring oscillator and a 6T static RAM shows that it is suitable for handling problems with tens of process parameters and can provide speedups of 5X-2000X over Monte Carlo methods. Furthermore, the application of our yield optimization methodology to a two-stage amplifier and a cascode amplifier shows that our approach can achieve higher quality in analog synthesis and unrivaled coverage of the analog design space compared to traditional optimization techniques.
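
    As a rough illustration of the hyperrectangle-based yield computation described above, the sketch below sums the probability masses of axis-aligned failure regions in a space of independent standard-normal process parameters. The disjointness of the regions and the example bounds are assumptions made for brevity, not the thesis's actual formulation.

```python
# Hedged sketch: parametric yield from failure hyperrectangles, assuming
# independent standard-normal parameters and disjoint failure regions.
from math import prod
from scipy.stats import norm

def hyperrectangle_mass(rect):
    """Probability mass of one axis-aligned failure region.

    `rect` is a list of (low, high) bounds, one pair per process parameter.
    """
    return prod(norm.cdf(hi) - norm.cdf(lo) for lo, hi in rect)

# Two hypothetical failure regions in a 3-parameter space.
failure_regions = [
    [(2.0, 4.0), (-4.0, 4.0), (-4.0, 4.0)],
    [(-4.0, 4.0), (-4.0, -2.5), (-4.0, 4.0)],
]

yield_estimate = 1.0 - sum(hyperrectangle_mass(r) for r in failure_regions)
print(f"parametric yield ~ {yield_estimate:.4%}")
```

    Because each region's mass is a closed-form product of Gaussian CDF differences, no sampling is needed once the hyperrectangles are located, which is where the speed-up over Monte Carlo comes from.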

    Low Power Circuits for Smart Flexible ECG Sensors

    Cardiovascular diseases (CVDs) are the world's leading cause of death. In-home heart condition monitoring effectively reduces the hospitalization rate of CVD patients. Flexible electrocardiogram (ECG) sensors provide an affordable, convenient, and comfortable in-home monitoring solution. The three critical building blocks of the ECG sensor, i.e., the analog frontend (AFE), the QRS detector, and the cardiac arrhythmia classifier (CAC), are studied in this research. A fully differential difference amplifier (FDDA) based AFE that employs a DC-coupled input stage increases the input impedance and improves the CMRR. A parasitic capacitor reuse technique is proposed to improve the noise/area efficiency and CMRR. An on-body DC bias scheme is introduced to deal with the input DC offset. Implemented in a 0.35μm CMOS process with an area of 0.405mm², the proposed AFE consumes 0.9μW at 1.8V and shows an excellent noise efficiency factor of 2.55 and a CMRR of 76dB. Experiments show the proposed AFE not only picks up a clean ECG signal with electrodes placed as close as 2cm under both resting and walking conditions, but also captures the distinct α-wave after an eye blink from an EEG recording. A personalized QRS detection algorithm is proposed that achieves an average positive prediction rate of 99.39% and a sensitivity of 99.21%. The user-specific template avoids the complicated models and parameters used in existing algorithms while covering most situations in practical applications. The detection is based on the correlation coefficient between the user-specific template and the ECG segment under detection. The proposed one-target clustering reduces the required loops. A continuous-in-time discrete-in-amplitude (CTDA) artificial neural network (ANN) based CAC is proposed for the smart ECG sensor. The proposed CAC achieves over 98% classification accuracy for the 4 types of beats defined by the AAMI (Association for the Advancement of Medical Instrumentation). The CTDA scheme significantly reduces the number of input samples and simplifies the sample representation to one bit. Thus, the number of arithmetic operations and the ANN structure are greatly simplified. The proposed CAC is verified on FPGA and implemented in a 0.18μm CMOS process. Simulation results show it can operate at clock frequencies from 10kHz to 50MHz. The average power for a patient with a 75bpm heart rate is 13.34μW.
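
    The correlation-based detection step lends itself to a short sketch. The following assumes a pre-extracted user-specific template and a fixed 0.9 threshold, both illustrative choices rather than the thesis's actual parameters.

```python
# Hedged sketch: template-correlation QRS detection. Slide a user-specific
# template over the ECG and flag windows whose Pearson correlation
# exceeds a threshold. Template, signal, and threshold are assumptions.
import numpy as np

def detect_qrs(ecg, template, threshold=0.9):
    """Return sample indices where the windowed correlation exceeds threshold."""
    w = len(template)
    t = (template - template.mean()) / template.std()
    hits = []
    for i in range(len(ecg) - w):
        seg = ecg[i:i + w]
        if seg.std() == 0:
            continue  # flat segment: correlation undefined, skip
        s = (seg - seg.mean()) / seg.std()
        r = float(np.dot(s, t)) / w  # Pearson correlation coefficient
        if r > threshold:
            hits.append(i)
    return hits

# Toy usage: a synthetic spike train with the spike itself as the template.
template = np.array([0.0, 0.2, 1.0, 0.2, 0.0])
ecg = np.tile(np.concatenate([np.zeros(20), template]), 5)
print(detect_qrs(ecg, template))  # [20, 45, 70, 95]
```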

    Circuits and Systems Advances in Near Threshold Computing

    Modern society is witnessing a sea change in ubiquitous computing, in which people have embraced computing systems as an indispensable part of day-to-day existence. The computation, storage, and communication abilities of smartphones, for example, have undergone monumental changes over the past decade. At the same time, the global emphasis on creating and sustaining green environments is leading to a rapid and ongoing proliferation of edge computing systems and applications. As a broad spectrum of healthcare, home, and transport applications shifts to the edge of the network, near-threshold computing (NTC) is emerging as one of the most promising low-power computing platforms. An NTC device sets its supply voltage close to its threshold voltage, dramatically reducing energy consumption. Despite showing substantial promise in terms of energy efficiency, NTC has yet to see wide-scale commercial adoption. This is because circuits and systems operating at near-threshold voltages suffer from several problems, including increased sensitivity to process variation, reliability problems, performance degradation, and security vulnerabilities, to name a few. To realize its potential, we need designs, techniques, and solutions that overcome these challenges. The readers of this book will be able to familiarize themselves with recent advances in electronic systems, focusing on near-threshold computing.
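
    The energy argument for NTC can be made concrete with the usual first-order model, in which dynamic switching energy scales with C·Vdd². The capacitance and voltages below are illustrative assumptions, not figures from the book.

```python
# Hedged sketch: first-order dynamic-energy scaling for near-threshold
# operation. All numeric values are assumed for illustration.
C_LOAD = 1e-15   # assumed switched capacitance per operation, 1 fF
V_NOMINAL = 1.1  # assumed nominal supply voltage (V)
V_NTC = 0.5      # assumed near-threshold supply voltage (V)

def switching_energy(c, vdd):
    return c * vdd ** 2  # E_dyn ~ C * Vdd^2 per switching event

e_nom = switching_energy(C_LOAD, V_NOMINAL)
e_ntc = switching_energy(C_LOAD, V_NTC)
print(f"energy reduction: {e_nom / e_ntc:.1f}x")  # ~4.8x for these values
```

    This quadratic saving is what the abstract's "dramatically reducing the energy consumption" refers to; the catch, as the text notes, is the accompanying sensitivity to variation and the performance degradation at low voltage.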

    Energy-Aware Data Movement In Non-Volatile Memory Hierarchies

    While technology scaling enables increased density for memory cells, the intrinsic high leakage power of conventional CMOS technology and the demand for reduced energy consumption inspire the use of emerging technology alternatives such as eDRAM and Non-Volatile Memory (NVM), including STT-MRAM, PCM, and RRAM. The utilization of emerging technologies in Last Level Cache (LLC) designs, which occupy a significant fraction of total die area in Chip Multi Processors (CMPs), introduces new dimensions of vulnerability, energy consumption, and performance delivery. To be specific, a part of this research focuses on the eDRAM Bit Upset Vulnerability Factor (BUVF) to assess the vulnerable portion of the eDRAM refresh cycle, where the critical charge varies depending on the write voltage, storage, and bit-line capacitance. This dissertation broadens the study of LLC vulnerability assessment by investigating the impact of Process Variations (PV) on the narrow resistive sensing margins in high-density NVM arrays, including on-chip caches and primary memory. Large-latency and power-hungry Sense Amplifiers (SAs) have been adopted to combat PV in the past. Herein, a novel approach is proposed to leverage the PV in NVM arrays using a Self-Organized Sub-bank (SOS) design. SOS engages the preferred SA alternative based on the intrinsic as-built behavior of the resistive sensing timing margin to reduce the latency and power consumption while maintaining acceptable access time. In addition, this dissertation investigates a novel technique to prioritize service to 1) Extensive Read Reused Accessed (ERRA) blocks of the LLC that are silently dropped from higher levels of cache, and 2) the portion of the working set that may exhibit a distant re-reference interval in L2. In particular, we develop a lightweight Multi-level Access History Profiler that efficiently identifies ERRA blocks by aggregating LLC block addresses tagged with identical Most Significant Bits into a single entry. Experimental results indicate that the proposed technique can reduce the L2 read miss ratio by 51.7% on average across PARSEC and SPEC2006 workloads. Finally, this dissertation broadens and applies advancements in the theory of subspace recovery to pioneer computationally-aware in-situ operand reconstruction via the novel Logic In Interconnect (LI2) scheme. LI2 is developed, validated, and refined both theoretically and experimentally to realize a radically different approach to post-Moore's Law computing: leveraging the features of low-rank matrices to reconstruct data instead of fetching it from main memory, reducing the energy/latency cost per data movement. We propose an LI2 enhancement that attains high performance delivery in the post-Moore's Law era by equipping a contemporary micro-architecture with a customized memory controller that orchestrates memory requests, fetching low-rank matrices to a customized Fine Grain Reconfigurable Accelerator (FGRA) for reconstruction while other memory requests are serviced as before. The goal of LI2 is to conquer the high latency/energy required to traverse main memory arrays on an LLC miss by using in-situ construction of the requested data when dealing with low-rank matrices. Thus, LI2 exchanges a high volume of data transfers for a novel lightweight reconstruction method under specific conditions, using a cross-layer hardware/algorithm approach.
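
    The MSB-aggregation idea behind the profiler can be sketched as follows. The 12-bit grouping shift, the reuse threshold, and the class structure are hypothetical values chosen for illustration, not the dissertation's actual configuration.

```python
# Hedged sketch of the Multi-level Access History Profiler idea: aggregate
# LLC block addresses that share the same most-significant bits into a
# single profiling entry. Field widths and thresholds are assumptions.
from collections import defaultdict

MSB_SHIFT = 12  # assumed: group block addresses by bits above bit 12

class AccessHistoryProfiler:
    def __init__(self):
        self.entries = defaultdict(int)  # MSB tag -> access count

    def record_access(self, block_addr):
        self.entries[block_addr >> MSB_SHIFT] += 1

    def erra_candidates(self, min_reuse=4):
        """Regions reused often enough to be treated as ERRA blocks."""
        return {tag for tag, count in self.entries.items() if count >= min_reuse}

profiler = AccessHistoryProfiler()
for addr in [0x4000, 0x4040, 0x4080, 0x40C0, 0x9000]:
    profiler.record_access(addr)
print(profiler.erra_candidates())  # {4}: the 0x4000-0x4FFF region, 4 accesses
```

    Collapsing many block addresses into one tagged entry is what keeps the profiler lightweight: its state grows with the number of hot regions, not the number of distinct blocks.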

    Thermal Management for Dependable On-Chip Systems

    This thesis addresses the dependability issues in on-chip systems from a thermal perspective. This includes an explanation and analysis of models that show the relationship between dependability and temperature. Additionally, multiple novel methods for on-chip thermal management are introduced, aiming to optimize thermal properties. The methods are analyzed through simulation and through infrared thermal camera measurements.

    Detection and Diagnosis of Out-of-Specification Failures in Mixed-Signal Circuits

    Verifying whether a circuit meets its intended specifications, as well as diagnosing the circuits that do not, is indispensable at every stage of integrated circuit design. Otherwise, a significant portion of fabricated circuits could fail or behave correctly only under certain conditions. Shrinking process technologies and increased integration have further complicated this task. This is especially true of mixed-signal circuits, where a slight parametric shift in an analog component can change the output significantly. We are thus rapidly approaching a proverbial wall, where migrating existing circuits to advanced technology nodes and/or designing the next generation of circuits may not be possible without suitable verification and debug strategies. Traditional approaches target accuracy rather than scalability, which precludes their use in high-dimensional systems. Relaxing the accuracy requirement mitigates the computational cost, while quantifying the level of inaccuracy retains the effectiveness of the resulting metrics. We exercise this accuracy vs. turn-around-time trade-off to deal with multiple mixed-signal problems across both the pre- and post-silicon domains. We first obtain approximate failure probability estimates along with their confidence bands using limited simulation budgets. We then generate “failure regions” that naturally explain the parametric interactions resulting in predicted failures. Together, these two pre-silicon contributions enable us to estimate and reduce the failure probability, which we demonstrate on a high-dimensional phase-locked loop test-case. We leverage this pre-silicon knowledge for test-set selection and post-silicon debug to alleviate the limited controllability and observability of the post-silicon domain. We select a set of test-points that maximizes the probability of observing failures. We then use post-silicon measurements at these test-points to identify systematic deviations from pre-silicon belief. This is demonstrated using the phase-locked loop test-case, where we boost the number of failures to observable levels and use the obtained measurements to root-cause the underlying parametric shifts. The pre-silicon contributions can also be extended to perform equivalence checking and to help diagnose detected model-mismatches. The resulting calibrated model allows us to apply our work at the system level as well. The equivalence checking and model-mismatch diagnosis are successfully demonstrated using a high-level abstraction model of the phase-locked loop test-case.
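
    The first contribution, approximate failure probabilities with confidence bands under a limited simulation budget, can be illustrated with a standard Clopper-Pearson interval. The stand-in failure check below is an assumption for demonstration, not the thesis's phase-locked loop test-case.

```python
# Hedged sketch: failure probability with a 95% Clopper-Pearson confidence
# band from a fixed simulation budget. The pass/fail model is a stand-in.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)
N_BUDGET = 500  # assumed simulation budget
fails = int((rng.standard_normal(N_BUDGET) > 2.5).sum())  # stand-in failure check

p_hat = fails / N_BUDGET
alpha = 0.05
lo = beta.ppf(alpha / 2, fails, N_BUDGET - fails + 1) if fails > 0 else 0.0
hi = beta.ppf(1 - alpha / 2, fails + 1, N_BUDGET - fails)
print(f"P(fail) ~ {p_hat:.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```

    Reporting the band alongside the point estimate is what makes the relaxed-accuracy approach usable: the level of inaccuracy is quantified rather than hidden.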