220 research outputs found

    Stochastic modeling of high-speed data links with nonlinear dynamic terminations

    This paper addresses the statistical modeling and simulation of high-speed interconnects with uncertain physical properties and nonlinear dynamic terminations. The proposed approach is based on the expansion of voltage and current variables in terms of orthogonal polynomials of random variables. It extends the available literature results on the generation of an augmented deterministic SPICE equivalent of the stochastic link to the case in which the terminations are nonlinear and dynamical, like those modeling IC buffers. A single, standard SPICE simulation of this equivalent circuit efficiently computes the expansion coefficients that provide statistical information on the interconnect response. The feasibility and strength of the approach are demonstrated on a coupled microstrip interconnect with drivers and receivers.
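The core idea, expanding a response in orthogonal polynomials of a random variable and reading statistics off the coefficients, can be sketched generically. The snippet below is a minimal polynomial-chaos example for a Gaussian resistance in a toy RC transfer function; the values and the function are illustrative placeholders, not the paper's SPICE-based flow:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Toy response: magnitude of an RC low-pass transfer function at 1 GHz,
# with the resistance R Gaussian-distributed. All values are illustrative.
def response(R, C=1e-12, f=1e9):
    w = 2.0 * np.pi * f
    return 1.0 / np.sqrt(1.0 + (w * R * C) ** 2)

R_mean, R_std = 50.0, 5.0        # ohms (assumed values)
order = 4                        # truncation order of the expansion

# Probabilists' Gauss-Hermite quadrature to project onto He_k(xi)
nodes, weights = hermegauss(order + 5)
weights = weights / np.sqrt(2.0 * np.pi)   # normalize to the N(0,1) measure
samples = response(R_mean + R_std * nodes)

coeffs = []
for k in range(order + 1):
    He_k = hermeval(nodes, [0.0] * k + [1.0])
    coeffs.append(np.sum(weights * samples * He_k) / factorial(k))  # E[He_k^2] = k!

# Mean and standard deviation follow directly from the coefficients.
mean = coeffs[0]
std = np.sqrt(sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1)))
```

A handful of quadrature evaluations replaces thousands of Monte Carlo samples, which is the efficiency argument the abstract makes for the augmented deterministic equivalent.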

    Worst-Case Analysis of Electrical and Electronic Equipment via Affine Arithmetic

    In the design and fabrication of electronic equipment, there are many unknown parameters that significantly affect product performance. Some uncertainties are due to manufacturing-process fluctuations, while others are due to the environment, such as operating temperature, supply voltage, and various ambient aging stressors. It is desirable to account for these uncertainties to ensure product performance, improve yield, and reduce design cost. Since direct electromagnetic compatibility measurements impact both cost and time-to-market, there has been a growing demand for tools that simulate electrical and electronic equipment while including the effects of system uncertainties. In this framework, the device response is no longer regarded as deterministic but as a random process. It is traditionally analyzed using Monte Carlo or other sampling-based methods. The drawback of these methods is the large number of samples required to converge, which is time-consuming for practical applications. As an alternative, inherently worst-case approaches such as interval analysis directly estimate the true bounds of the responses. However, such approaches may produce unnecessarily strict margins that are very unlikely to occur. A recent technique, affine arithmetic, advances interval-based methods by handling correlated intervals, but it still leads to over-conservatism because it cannot incorporate probability information. The objective of this thesis is to improve the accuracy of affine arithmetic and broaden its application to frequency-domain analysis. We first extend existing literature results to the efficient time-domain analysis of lumped circuits under uncertainty, and then extend basic affine arithmetic to the frequency-domain simulation of circuits. Classical circuit-analysis tools are used within a modified affine framework that accounts for complex algebra and uncertainty-interval partitioning, enabling accurate and efficient computation of the worst-case bounds of the responses of both lumped and distributed circuits. The performance of the proposed approach is investigated through extensive simulations in several case studies, and the results are compared with the Monte Carlo method in terms of both simulation time and accuracy.

    How affine arithmetic helps beat uncertainties in electrical systems

    The ever-increasing impact of uncertainties in electronic circuits and systems requires the development of robust design tools that take this inherent variability into account. Because repeated design trials are computationally inefficient, there has been a growing demand for smart simulation tools that inherently and effectively capture the effects of parameter variations on system responses. To improve product performance, improve yield, and reduce design cost, it is particularly important for the designer to be able to estimate worst-case responses. Within this framework, the article addresses the worst-case simulation of lumped and distributed electrical circuits. The application of interval-based methods, such as interval analysis, Taylor models, and affine arithmetic, is discussed and compared. The article reviews in particular the application of affine arithmetic to complex algebra and to the fundamental matrix operations required for numerical frequency-domain simulation, since a comprehensive and unambiguous discussion is missing from the available literature. Affine arithmetic turns out to be accurate and more efficient than traditional solutions based on Monte Carlo analysis. A selection of relevant examples, ranging from linear lumped circuits to distributed transmission-line structures, illustrates the technique.
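The key advantage of affine arithmetic over plain interval analysis, namely the handling of correlated intervals, can be shown in a few lines. This is a minimal real-valued sketch only; the complex-valued and matrix extensions the article reviews are not reproduced here:

```python
class Affine:
    """Affine form x0 + sum_i xi*eps_i with each eps_i in [-1, 1].
    Minimal real-valued sketch of affine arithmetic."""
    def __init__(self, center, terms=None):
        self.c = float(center)
        self.t = dict(terms or {})   # noise symbol -> partial deviation

    def __add__(self, other):
        if not isinstance(other, Affine):
            return Affine(self.c + other, self.t)
        t = dict(self.t)
        for k, v in other.t.items():
            t[k] = t.get(k, 0.0) + v
        return Affine(self.c + other.c, t)

    def scale(self, a):
        return Affine(a * self.c, {k: a * v for k, v in self.t.items()})

    def bounds(self):
        r = sum(abs(v) for v in self.t.values())
        return self.c - r, self.c + r

# x and y share the noise symbol "e1", i.e. they vary together.
# Their difference cancels the common uncertainty exactly, which
# plain interval arithmetic cannot do (it would give [5, 7]).
x = Affine(10.0, {"e1": 1.0})
y = Affine(4.0, {"e1": 1.0})
d = x + y.scale(-1.0)   # bounds collapse to the exact value 6
```

Tracking shared noise symbols through the whole computation is what keeps affine-arithmetic bounds from exploding the way naive interval bounds do.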

    Training Set Optimization in an Artificial Neural Network Constructed for High Bandwidth Interconnects Design

    In this article, a novel training-set optimization method for an artificial neural network (ANN) constructed for high-bandwidth interconnect design is proposed, based on rigorous probability analysis. In general, the accuracy of an ANN improves as the training set grows. However, generating large training sets is inevitably time-consuming and resource-demanding, and sometimes even impossible because of limited prototypes or measurement scenarios, especially when the number of channels in the required design is huge, as in graphics double data rate (GDDR) memory and high bandwidth memory (HBM). Optimizing the training-set selection process is therefore crucial to minimizing the training data needed to develop an efficient ANN. Based on a rigorous mathematical analysis of the uniformity of the training data via its probability distribution function, an optimization flow for range selection is proposed to improve accuracy and efficiency. The optimal number of training samples is further determined by studying the prediction error rates. The accuracy of the proposed method is validated by comparing the scattering parameters of arbitrarily chosen stripline- and microstrip-type GDDR interconnects obtained from EM simulations with those predicted by ANNs using the default and the proposed training-set selection methods.
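The underlying intuition, that uniformly spread training samples cover a design range better than the same number of plain random draws, is easy to quantify. The snippet below is an illustrative uniformity check on one normalized parameter, not the paper's probability-based selection flow:

```python
import numpy as np

rng = np.random.default_rng(1)
n, bins = 100, 10   # illustrative sample count and histogram resolution

def uniformity_error(x, bins):
    # Max deviation of the empirical histogram from the ideal uniform count
    counts, _ = np.histogram(x, bins=bins, range=(0.0, 1.0))
    return np.abs(counts - len(x) / bins).max()

random_x = rng.random(n)                           # plain random selection
stratified_x = (np.arange(n) + rng.random(n)) / n  # one sample per stratum

e_rand = uniformity_error(random_x, bins)          # generally > 0
e_strat = uniformity_error(stratified_x, bins)     # exactly 0 here
```

A lower uniformity error for the same budget means each training sample carries more information about an unexplored part of the range, which is the effect the proposed range-selection flow exploits.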

    Crosstalk Noise Analysis for Nano-Meter VLSI Circuits.

    Scaling of device dimensions into the nanometer process technology has led to a considerable reduction in the gate delays. However, interconnect delays have not scaled in proportion to gate delays, and global-interconnect delays account for a major portion of the total circuit delay. Also, due to process-technology scaling, the spacing between adjacent interconnect wires keeps shrinking, which leads to an increase in the amount of coupling capacitance between interconnect wires. Hence, coupling noise has become an important issue which must be modeled while performing timing verification for VLSI chips. As delay noise strongly depends on the skew between aggressor-victim input transitions, it is not possible to a priori identify the victim-input transition that results in the worst-case delay noise. This thesis presents an analytical result that would obviate the need to search for the worst-case victim-input transition and simplify the aggressor-victim alignment problem significantly. We also propose a heuristic approach to compute the worst-case aggressor alignment that maximizes the victim receiver-output arrival time with current-source driver models. We develop algorithms to compute the set of top-k aggressors in the circuit, which could be fixed to reduce the delay noise of the circuit. Process variations cause variability in the aggressor-victim alignment which leads to variability in the delay noise. This variability is modeled by deriving closed-form expressions of the mean, the standard deviation and the correlations of the delay-noise distribution. We also propose an approach to estimate the confidence bounds on the path delay-noise distribution. Finally, we show that the interconnect corners obtained without incorporating the effects of coupling noise could lead to significant errors, and propose an approach to compute the interconnect corners considering the impact of coupling noise.

    Ph.D. Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/64663/1/gravkis_1.pd
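The aggressor-victim alignment problem can be pictured as a brute-force sweep over the input skew, which is exactly the search the thesis's analytical result is meant to avoid. The delay-noise curve below is a toy Gaussian stand-in, not the thesis's current-source driver model:

```python
import numpy as np

# Toy model: delay noise on the victim as a function of the skew between
# aggressor and victim input transitions. The Gaussian shape and the
# 0.3 ns peak location are illustrative placeholders only.
skews = np.linspace(-2.0, 2.0, 401)           # candidate aggressor skews (ns)
delay_noise = np.exp(-((skews - 0.3) ** 2))   # toy noise vs. skew

worst_skew = skews[np.argmax(delay_noise)]    # brute-force worst-case alignment
```

Each point of such a sweep costs one timing simulation, so replacing it with a closed-form worst-case alignment is a substantial saving inside a full-chip noise flow.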

    Closed-form crosstalk noise metrics for physical design applications

    In this paper we present efficient closed-form formulas to estimate capacitive-coupling-induced crosstalk noise for distributed RC coupling trees. The efficiency of our approach stems from the fact that only the five basic operations are used in the expressions: addition (+), subtraction (−), multiplication (×), division (÷) and square root (√). The formulas require no exponential evaluation or numerical iteration. We have developed closed-form expressions for the peak crosstalk-noise amplitude, the time at which the peak occurs, and the width of the noise waveform. Our approximations are conservative and yet achieve acceptable accuracy. The formulas are simple enough to be used in the inner loops of performance-optimization algorithms or as cost functions to guide routers. They capture the influence of coupling direction (near-end and far-end coupling) and coupling location (near-driver and near-receiver).
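To make the flavor of such closed-form metrics concrete, here is a classic first-order peak-noise estimate for a lumped victim net. It is not one of the paper's distributed-RC formulas, just the textbook combination of the charge-sharing limit with a slow-ramp bound, and it uses only the basic operations the abstract mentions:

```python
def peak_noise_estimate(vdd, r_v, c_c, c_g, t_r):
    """First-order peak crosstalk-noise estimate on a lumped victim net
    (illustrative, not the paper's formulas).
    vdd: supply swing, r_v: victim driver resistance,
    c_c: coupling cap, c_g: victim ground cap, t_r: aggressor rise time."""
    fast = vdd * c_c / (c_c + c_g)   # charge-sharing limit for a step aggressor
    slow = vdd * r_v * c_c / t_r     # slow-ramp upper bound
    return min(fast, slow)           # neither bound can be exceeded
```

Because the estimate is a handful of multiplications and one division, it can sit in the inner loop of a router's cost function, which is precisely the use case the paper targets for its more accurate distributed expressions.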

    SIGNAL PROCESSING TECHNIQUES AND APPLICATIONS

    As technologies scale down, more transistors can be fabricated in the same area, enabling the integration of many components on the same substrate, referred to as a system-on-chip (SoC). The components on an SoC are connected by on-chip global interconnects. The International Technology Roadmap for Semiconductors (ITRS) has shown that, with scaling, gate delay decreases but global-interconnect delay increases due to crosstalk, so interconnect delay has become a bottleneck for overall system performance. Many techniques have been proposed to address crosstalk, such as shielding, buffer insertion, and crosstalk avoidance codes (CACs). CACs are promising because of their good crosstalk reduction, lower power consumption, and smaller area. In this dissertation, I present analytical delay models for on-chip interconnects with improved accuracy. These models enable more accurate control of the delays of transition patterns and lead to a more efficient CAC whose worst-case delay is 30-40% smaller than the best previously proposed CACs. As clock frequencies approach multiple gigahertz, the parasitic inductance of on-chip interconnects has become significant, and its detrimental effects, including increased delay, voltage overshoots and undershoots, and increased crosstalk noise, can no longer be ignored. We introduce new CACs that address capacitive and inductive coupling simultaneously.

    Quantum computers are more powerful than classical computers for some NP problems, but they suffer greatly from unwanted interactions with the environment. Quantum error correction codes (QECCs) are needed to protect quantum information against noise and decoherence. Given their good error-correcting performance, it is desirable to adapt existing iterative decoding algorithms for LDPC codes to obtain LDPC-based QECCs. Several QECCs based on nonbinary LDPC codes have been proposed, with much better error-correcting performance than existing quantum codes over a qubit channel. In this dissertation, I present stabilizer codes based on nonbinary QC-LDPC codes for qubit channels. The results confirm the observation that QECCs based on nonbinary LDPC codes achieve better performance than QECCs based on binary LDPC codes.

    As technologies scale down further to the nanoscale, CMOS devices suffer greatly from quantum-mechanical effects. Some emerging nano devices, such as resonant tunneling diodes (RTDs), quantum cellular automata (QCA), and single-electron transistors (SETs), do not have these issues and are promising candidates to replace traditional CMOS devices. Threshold gates, which can implement complex Boolean functions within a single gate, can be realized easily with these devices. Several applications dealing with real-valued signals have already been realized using nanotechnology-based threshold gates. Unfortunately, applications over finite fields, such as error-correcting coding and cryptography, have not been realized using nanotechnology. The main obstacle is that they require a great number of exclusive-ORs (XORs), which cannot be realized in a single threshold gate. Besides, the fan-in of a threshold gate in RTD nanotechnology needs to be bounded for both reliability and performance purposes. In this dissertation, I present a majority-class threshold architecture for XORs with bounded fan-in and compare it with a Boolean-class architecture. I show an application of the proposed XORs to finite-field multiplication; the analysis shows that the majority-class architecture outperforms the Boolean-class architecture in terms of hardware complexity and latency. I also introduce a sort-and-search algorithm, which can be used to implement any symmetric function. Since XOR is a symmetric function, it can be implemented via the sort-and-search algorithm. To leverage the power of multi-input threshold functions, I generalize the previously proposed sort-and-search algorithm from a fan-in of two to arbitrary fan-ins, and propose an architecture for multi-input XORs with bounded fan-ins.
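As a concrete illustration of a crosstalk avoidance code, the well-known forbidden-pattern-free (FPF) family keeps the bit patterns "010" and "101" out of every codeword, which limits the worst-case coupling-induced delay between adjacent wires. The enumeration below is a generic illustration, not this dissertation's specific construction:

```python
def is_fpf(word):
    """Forbidden-pattern-free check: the codeword must not contain
    '010' or '101' across any three adjacent wires."""
    return "010" not in word and "101" not in word

def fpf_codewords(n):
    """Enumerate all n-bit forbidden-pattern-free codewords."""
    return [format(i, f"0{n}b") for i in range(2 ** n)
            if is_fpf(format(i, f"0{n}b"))]

words4 = fpf_codewords(4)   # 10 of the 16 raw 4-bit words survive
```

The overhead of a CAC is the gap between the raw bus width and log2 of the surviving codeword count; better delay models, as the abstract argues, allow codes that trade less of that overhead for the same worst-case delay.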

    Machine-learning-based hybrid random-fuzzy uncertainty quantification for EMC and SI assessment

    Modeling the effects of uncertainty is of crucial importance in the signal integrity (SI) and electromagnetic compatibility (EMC) assessment of electronic products. In this article, a novel machine-learning-based approach for uncertainty quantification problems involving both random and epistemic variables is presented. The proposed methodology leverages evidence theory to represent probabilistic and epistemic uncertainties in a common framework. Bayesian optimization is then used to efficiently propagate this hybrid uncertainty to the performance of the system under study. Two application examples validate the accuracy and efficiency of the proposed method.
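The random/epistemic split can be sketched in a few lines: the random variable is sampled, while the epistemic variable is known only as an interval, so the response statistic becomes an interval too. The toy system and the grid search (standing in for the paper's Bayesian optimization and evidence-theory machinery) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def system(x_rand, x_epi):
    # Toy stand-in for an EMC/SI performance metric
    return np.sin(x_epi) + 0.1 * x_rand

x_rand = rng.standard_normal(5000)   # aleatory (random) variable: sampled
epi_lo, epi_hi = 0.5, 1.5            # epistemic variable: known only as an interval

# A grid search over the epistemic interval stands in for Bayesian
# optimization; each candidate value yields a mean over the random samples.
grid = np.linspace(epi_lo, epi_hi, 51)
means = [system(x_rand, e).mean() for e in grid]
lower, upper = min(means), max(means)   # interval-valued mean response
```

The output is not a single mean but a [lower, upper] interval: the width reflects the epistemic ignorance, which no amount of extra random sampling can shrink, and which surrogate-assisted optimization locates far more cheaply than a dense grid in higher dimensions.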

    Stability, Causality, and Passivity in Electrical Interconnect Models

    Modern packaging design requires extensive signal integrity simulations in order to assess the electrical performance of the system. Such simulations are feasible only when accurate and efficient models are available for all system parts and components that significantly influence the signals. Unfortunately, model derivation remains a challenging task, despite the extensive research that has been devoted to this topic. In fact, it is a common experience that modeling or simulation tasks sometimes fail, often without a clear understanding of the main reason. This paper presents the fundamental properties of causality, stability, and passivity that electrical interconnect models must satisfy in order to be physically consistent. All basic definitions are reviewed in the time domain, the Laplace domain, and the frequency domain, and all significant interrelations between these properties are outlined. This background material is used to interpret several common situations where either model derivation or model use in a computer-aided design environment fails dramatically. We show that the root cause of these difficulties can always be traced back to a lack of stability, causality, or passivity in the data providing the structure characterization and/or in the model itself.
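Of the three properties, passivity is the easiest to test numerically at a single frequency sample: a scattering matrix is passive there iff its largest singular value does not exceed one, so the network cannot amplify any excitation. A minimal check (the example matrices are illustrative, not from the paper):

```python
import numpy as np

def is_passive_s(S, tol=1e-9):
    """Passivity test for one frequency sample of a scattering matrix:
    the largest singular value must not exceed 1 (unit-bounded)."""
    sv = np.linalg.svd(np.asarray(S, dtype=complex), compute_uv=False)
    return sv[0] <= 1.0 + tol   # svd returns singular values in descending order

# Matched, lossy transmission-line section: passive
S_line = [[0.0, 0.9j], [0.9j, 0.0]]
# A fitted model whose |S21| overshoots 1 at some frequency: non-passive
S_bad = [[0.0, 1.05], [1.05, 0.0]]
```

A model that fails this test at even one frequency can inject energy into a time-domain simulation and drive it unstable, which is exactly the kind of "simulation fails without a clear reason" scenario the paper traces back to non-passive data or models.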