
    Application of Taylor models to the worst-case analysis of stripline interconnects

    This paper outlines a preliminary application of Taylor models to the worst-case analysis of transmission lines with bounded uncertain parameters. Taylor models are an algebraic technique that represents uncertain quantities in terms of a Taylor expansion complemented by an interval remainder encompassing approximation and truncation errors. The Taylor model formulation is propagated from input uncertainties to output responses through a suitable redefinition of the algebraic operations involved in their calculation. While the Taylor expansion defines an analytical and parametric model of the response, the remainder provides a conservative bound inside which the true value is guaranteed to lie. The approach is validated against the SPICE simulation of a coupled stripline and shows promising accuracy and efficiency.
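    The core mechanics can be illustrated with a toy sketch, assuming a first-order univariate Taylor model on the normalized interval [-1, 1]; this is an illustration of the general idea, not the authors' implementation. Addition combines coefficients and remainders directly, while multiplication pushes the unrepresentable quadratic term and all remainder cross-terms into the interval remainder.

```python
# Toy first-order Taylor model: p(x) = c0 + c1*x on x in [-1, 1], plus an
# interval remainder r. Illustrative sketch, not the authors' implementation.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

class TaylorModel:
    def __init__(self, c0, c1=0.0, r=None):
        self.c0, self.c1 = c0, c1
        self.r = r if r is not None else Interval(0.0, 0.0)

    def poly_bound(self):
        # Enclosure of the polynomial part c0 + c1*x over x in [-1, 1].
        return Interval(self.c0 - abs(self.c1), self.c0 + abs(self.c1))

    def enclosure(self):
        # Guaranteed bounds on the modeled quantity: polynomial + remainder.
        return self.poly_bound() + self.r

    def __add__(self, other):
        return TaylorModel(self.c0 + other.c0, self.c1 + other.c1,
                           self.r + other.r)

    def __mul__(self, other):
        # (p1 + r1)*(p2 + r2): keep the linear part of p1*p2; the quadratic
        # term (x^2 in [0, 1]) and the remainder cross-terms are bounded
        # conservatively with interval arithmetic.
        q = self.c1 * other.c1
        rem = Interval(min(0.0, q), max(0.0, q))
        rem = rem + self.poly_bound() * other.r
        rem = rem + other.poly_bound() * self.r
        rem = rem + self.r * other.r
        return TaylorModel(self.c0 * other.c0,
                           self.c0 * other.c1 + self.c1 * other.c0, rem)

x = TaylorModel(0.0, 1.0)                              # the uncertain parameter
f = (TaylorModel(1.0) + x) * (TaylorModel(1.0) + x)    # f(x) = (1 + x)^2
b = f.enclosure()
print(b.lo, b.hi)    # [-1, 4]: contains the true range [0, 4] of (1 + x)^2
```

    The printed enclosure is conservative, as Taylor models guarantee, though not tight at first order; higher-order models shrink the remainder.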

    Worst-Case Analysis of Electrical and Electronic Equipment via Affine Arithmetic

    In the design and fabrication process of electronic equipment, there are many unknown parameters which significantly affect the product performance. Some uncertainties are due to manufacturing process fluctuations, while others are due to the environment, such as operating temperature, voltage, and various ambient aging stressors. It is desirable to consider these uncertainties to ensure product performance, improve yield, and reduce design cost. Since direct electromagnetic compatibility measurements impact both cost and time-to-market, there has been a growing demand for tools enabling the simulation of electrical and electronic equipment with the inclusion of the effects of system uncertainties. In this framework, the assessment of device response is no longer regarded as deterministic but as a random process. It is traditionally analyzed using Monte Carlo or other sampling-based methods. The drawback of these methods is the large number of samples required to converge, which makes them time-consuming for practical applications. As an alternative, inherently worst-case approaches such as interval analysis directly provide an estimation of the true bounds of the responses. However, such approaches might provide unnecessarily strict margins, which are very unlikely to occur. A recent technique, affine arithmetic, advances interval-based methods by handling correlated intervals. However, it still leads to over-conservatism due to its inability to account for probability information. The objective of this thesis is to improve the accuracy of affine arithmetic and broaden its application to frequency-domain analysis. We first extend existing literature results to the efficient time-domain analysis of lumped circuits subject to uncertainties. We then extend basic affine arithmetic to the frequency-domain simulation of circuits. Classical tools for circuit analysis are used within a modified affine framework accounting for complex algebra and uncertainty interval partitioning for the accurate and efficient computation of the worst-case bounds of the responses of both lumped and distributed circuits. The performance of the proposed approach is investigated through extensive simulations in several case studies. The simulation results are compared with the Monte Carlo method in terms of both simulation time and accuracy.
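    The "correlated intervals" point can be sketched in a few lines, assuming the standard affine-arithmetic rules (a toy model, not the thesis code): each uncertain quantity is an affine combination of noise symbols in [-1, 1], addition is exact, and nonlinear operations absorb their linearization error into a fresh noise symbol.

```python
# Toy affine form: x = x0 + sum_i xi*eps_i with independent eps_i in [-1, 1].
# Correlated quantities share noise symbols, so x - x is exactly zero, which
# plain interval arithmetic cannot achieve. Sketch only, not the thesis code.
import itertools

_fresh = itertools.count()   # generator of new noise-symbol identifiers

class Affine:
    def __init__(self, x0, terms=None):
        self.x0 = x0
        self.terms = dict(terms or {})   # noise-symbol id -> partial deviation

    @classmethod
    def from_interval(cls, lo, hi):
        mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
        return cls(mid, {next(_fresh): rad})

    def __add__(self, other):
        t = dict(self.terms)
        for k, v in other.terms.items():
            t[k] = t.get(k, 0.0) + v
        return Affine(self.x0 + other.x0, t)

    def __sub__(self, other):
        return self + Affine(-other.x0, {k: -v for k, v in other.terms.items()})

    def __mul__(self, other):
        # Linearize the product; the nonlinear residue is bounded by
        # rad(x)*rad(y) and absorbed into a fresh noise symbol.
        t = {k: other.x0 * v for k, v in self.terms.items()}
        for k, v in other.terms.items():
            t[k] = t.get(k, 0.0) + self.x0 * v
        t[next(_fresh)] = self.rad() * other.rad()
        return Affine(self.x0 * other.x0, t)

    def rad(self):
        return sum(abs(v) for v in self.terms.values())

    def bounds(self):
        return self.x0 - self.rad(), self.x0 + self.rad()

R = Affine.from_interval(90.0, 110.0)   # hypothetical resistor, 100 ohm +/- 10%
print((R - R).bounds())                 # (0.0, 0.0): correlation preserved
print((R * R).bounds())                 # (7900.0, 12100.0): conservative R^2
```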

    How affine arithmetic helps beat uncertainties in electrical systems

    The ever-increasing impact of uncertainties in electronic circuits and systems requires the development of robust design tools capable of taking this inherent variability into account. Due to the computational inefficiency of repeated design trials, there has been a growing demand for smart simulation tools that can inherently and effectively capture the effect of parameter variations on the system responses. To improve product performance and yield and to reduce design cost, it is particularly relevant for the designer to be able to estimate worst-case responses. Within this framework, the article addresses the worst-case simulation of lumped and distributed electrical circuits. The application of interval-based methods, such as interval analysis, Taylor models and affine arithmetic, is discussed and compared. The article reviews in particular the application of affine arithmetic to complex algebra and fundamental matrix operations for numerical frequency-domain simulation, for which a comprehensive and unambiguous discussion appears to be missing in the available literature. Affine arithmetic turns out to be accurate and more efficient than traditional solutions based on Monte Carlo analysis. A selection of relevant examples, ranging from linear lumped circuits to distributed transmission-line structures, is used to illustrate this technique.
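    For the complex-algebra extension reviewed here, one common formulation, sketched below under the assumption that a complex affine quantity is simply a pair of real affine forms (reusing the hypothetical Affine class from the previous sketch), applies the usual complex product rule componentwise.

```python
# Hypothetical pairing of two real affine forms into a complex quantity,
# reusing the Affine class sketched above; the complex product follows the
# usual (ac - bd) + j(ad + bc) rule, each piece propagated in affine arithmetic.

class ComplexAffine:
    def __init__(self, re, im):
        self.re, self.im = re, im            # real and imaginary Affine forms

    def __add__(self, other):
        return ComplexAffine(self.re + other.re, self.im + other.im)

    def __mul__(self, other):
        return ComplexAffine(self.re * other.re - self.im * other.im,
                             self.re * other.im + self.im * other.re)

    def bounds(self):
        return self.re.bounds(), self.im.bounds()

# Series RC impedance Z = R - j/(w*C) at one frequency, with uncertain R:
w, C = 2.0 * 3.141592653589793 * 1.0e6, 1.0e-9
Z = ComplexAffine(Affine.from_interval(90.0, 110.0),   # uncertain real part
                  Affine(-1.0 / (w * C)))              # deterministic -1/(wC)
print(Z.bounds())   # real part in [90, 110], imaginary part fixed
```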

    Combined parametric and worst case circuit analysis via Taylor models

    This paper proposes a novel paradigm to generate a parameterized model of the response of linear circuits with the inclusion of worst-case bounds. The methodology leverages the so-called Taylor models and represents parameter-dependent responses in terms of a multivariate Taylor polynomial, in conjunction with an interval remainder accounting for the approximation error. The Taylor model representation is propagated from input parameters to circuit responses through a suitable redefinition of the basic operations, such as addition, multiplication or matrix inversion, that are involved in the circuit solution. Specifically, the remainder is propagated in a conservative way based on the theory of interval analysis. While the polynomial part provides an accurate, analytical and parametric representation of the response as a function of the selected design parameters, the complementary information on the remainder error yields a conservative, yet tight, estimation of the worst-case bounds. Specific and novel solutions are proposed to implement complex-valued matrix operations and to overcome well-known issues in state-of-the-art Taylor model theory, such as the determination of the upper and lower bounds of the multivariate polynomial part. The proposed framework is applied to the frequency-domain analysis of linear circuits. An in-depth discussion of the fundamental theory is complemented by a selection of relevant examples aimed at illustrating the technique and demonstrating its feasibility and strength.
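    Bounding the multivariate polynomial part is one of the well-known issues mentioned above. A crude but conservative baseline, shown here as a hypothetical sketch rather than the paper's actual solution, is term-wise interval evaluation over the unit box: a monomial with any odd exponent ranges over [-1, 1], while a non-constant monomial with all even exponents ranges over [0, 1].

```python
# Conservative range of a multivariate polynomial over the unit box [-1, 1]^n
# by term-wise interval evaluation. A hypothetical baseline; real Taylor-model
# implementations use tighter schemes for this step.

def poly_bounds(terms):
    """terms maps exponent tuples to coefficients, e.g.
    {(0, 0): 1.0, (1, 0): 2.0, (1, 1): -0.5, (0, 2): 1.0}
    represents 1 + 2*x0 - 0.5*x0*x1 + x1**2."""
    lo = hi = 0.0
    for exps, c in terms.items():
        if all(e == 0 for e in exps):            # constant term: exactly c
            lo += c
            hi += c
        elif all(e % 2 == 0 for e in exps):      # all-even monomial in [0, 1]
            lo += min(0.0, c)
            hi += max(0.0, c)
        else:                                    # any odd exponent: [-1, 1]
            lo -= abs(c)
            hi += abs(c)
    return lo, hi

p = {(0, 0): 1.0, (1, 0): 2.0, (1, 1): -0.5, (0, 2): 1.0}
print(poly_bounds(p))   # (-1.5, 4.5): encloses the true range on [-1, 1]^2
```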

    Stochastic Time-Domain Mapping for Comprehensive Uncertainty Assessment in Eye Diagrams

    The eye diagram is one of the most common tools used for quality assessment in high-speed links. This article proposes a method for predicting the shape of the inner eye for a link subject to uncertainties. The approach relies on machine learning regression and is tested on the very challenging example of a flexible link for smart textiles. Several sources of uncertainty are taken into account, related to both manufacturing tolerances and physical deformation. The resulting model is fast and accurate. It is also extremely versatile: rather than focusing on a specific metric derived from the eye diagram, its aim is to fully reconstruct the inner eye and enable designers to use it as they see fit. This article investigates the features and convergence of three alternative machine learning algorithms: single-output support vector machine regression, its least-squares variant, and vector-valued kernel ridge regression. The latter method is arguably the most promising, resulting in an accurate, fast and robust tool enabling a complete parametric stochastic map of the eye.
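    The vector-valued regression idea can be sketched with scikit-learn's KernelRidge, which handles multi-output targets natively: each training sample maps a vector of normalized link parameters to a sampled inner-eye contour. The data below is synthetic and hypothetical, standing in for the simulated or measured training set used in the article.

```python
# Multi-output kernel ridge regression from normalized link parameters to a
# sampled inner-eye contour. Data is synthetic and hypothetical; a real flow
# would train on SPICE or measured eye contours instead.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_params, n_contour = 200, 4, 64

# Normalized uncertain parameters (tolerances, bending radius, etc.).
X = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))

# Hypothetical smooth dependence of a 64-point eye contour on the parameters.
t = np.linspace(0.0, 2.0 * np.pi, n_contour)
Y = (0.5 + 0.1 * X[:, :1]) * np.sin(t) + 0.05 * X[:, 1:2] * np.cos(3.0 * t)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.5).fit(X_tr, Y_tr)

# One call predicts the whole contour, so the model acts as a full parametric
# map of the inner eye rather than a single scalar metric.
err = np.abs(model.predict(X_te) - Y_te).max()
print(f"max abs contour error on held-out samples: {err:.2e}")
```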

    Design and modelling of variability tolerant on-chip communication structures for future high performance system on chip designs

    Incessant technology scaling has enabled the integration of functionally complex System-on-Chip (SoC) designs with a large number of heterogeneous systems on a single chip. The processing elements on these chips are integrated through on-chip communication structures which provide the infrastructure necessary for the exchange of data and control signals, while meeting stringent physical and design constraints. The use of vast amounts of on-chip communication will be central to future designs, where variability is an inherent characteristic. For this reason, this thesis investigates the performance and variability tolerance of typical on-chip communication structures. Understanding the relationship between variability and communication is paramount for designers devising new methods and techniques for performance- and power-efficient communication circuits in the face of the challenges presented by deep sub-micron (DSM) technologies. The initial part of this work investigates the impact of device variability due to Random Dopant Fluctuations (RDF) on the timing characteristics of basic communication elements. The characterization data so obtained can be used to estimate the performance and failure probability of simple links through the methodology proposed in this work. For the Statistical Static Timing Analysis (SSTA) of larger circuits, a method for accurate estimation of the probability density functions of different circuit parameters is proposed, and its significance for pipelined circuits is highlighted. Power and area are among the most important design metrics for any integrated circuit (IC) design. This thesis emphasises the consideration of communication reliability while optimizing for power and area. A methodology is proposed for the simultaneous optimization of performance, area, power and delay variability for a repeater-inserted interconnect. Similarly, for multi-bit parallel links, bandwidth-driven optimizations have been performed. Power- and area-efficient semi-serial links, less vulnerable to delay variations than the corresponding fully parallel links, are introduced. Furthermore, due to technology scaling, coupling noise between the link lines has become an important issue. With ever-decreasing supply voltages, and the corresponding reduction in noise margins, severe challenges arise in performing timing verification in the presence of variability. For this reason, an accurate model for crosstalk noise in an interconnect, as a function of time and skew, is introduced in this work. This model can be used to identify the skew condition that gives maximum delay noise, and also for efficient design verification.
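    A statistical timing analysis of the kind described above can be given a minimal Monte Carlo flavor, assuming a first-order Elmore delay model for a repeater-inserted line with RDF-induced variation lumped into each stage's effective driver resistance. All numbers are hypothetical placeholders, not values from the thesis.

```python
# Monte Carlo estimate of the delay distribution of a repeater-inserted line,
# using a first-order Elmore model with RDF-induced variation lumped into each
# stage's effective driver resistance. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_mc, n_stages = 100_000, 8

# Hypothetical per-stage nominal values: driver resistance, wire RC, gate load.
R_drv, R_wire, C_wire, C_load = 1.0e3, 0.5e3, 50e-15, 10e-15

# Independent Gaussian perturbation of each stage's driver resistance (8% sigma).
R = R_drv * (1.0 + 0.08 * rng.standard_normal((n_mc, n_stages)))

# Elmore delay per stage, summed along the chain; 0.69 maps RC to 50% delay.
stage = R * (C_wire + C_load) + R_wire * (0.5 * C_wire + C_load)
delay = 0.69 * stage.sum(axis=1)

print(f"mean = {delay.mean() * 1e12:.1f} ps, sigma = {delay.std() * 1e12:.2f} ps")
print(f"~3-sigma worst case = {(delay.mean() + 3 * delay.std()) * 1e12:.1f} ps")
```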

    A framework to explore low-power architecture and variability-aware timing estimation of FPGAs

    Master's thesis (Master of Engineering).

    B*tree representation based thermal and variability aware floorplanning framework

    Master's thesis (Master of Engineering).

    Novel methods to quantify aleatory and epistemic uncertainty in high speed networks

    With the sustained miniaturization of integrated circuits to the sub-45 nm regime and the increasing packaging density, random process variations have been found to result in unpredictability in circuit performance. In the existing literature, this unpredictability has been modeled by creating polynomial expansions of random variables. However, the existing methods prove inefficient because, as the number of random variables in a system increases, the time and computational cost grow in a near-polynomial fashion. To mitigate this poor scalability of conventional approaches, this dissertation presents several techniques to sparsify the polynomial expansion. The sparser polynomial expansion is created by identifying the contribution of each random variable to the total response of the system. This sparsification, performed primarily using two different methods, translates to immense savings in both the time required and the memory cost of computing the expansion. One of the two methods is applied to aleatory variability problems, while the second addresses problems involving epistemic uncertainty. The accuracy of the proposed approaches is validated through multiple numerical examples.
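    The sparsification idea can be illustrated with a small hypothetical example, not the dissertation's actual algorithm: fit a Hermite polynomial-chaos expansion by least squares, rank the terms by their variance contribution (squared coefficient times the squared basis norm, by orthogonality under the Gaussian measure), and keep only the dominant ones.

```python
# Fit a total-degree-2 Hermite polynomial-chaos expansion by least squares,
# then sparsify by keeping only terms with a non-negligible variance
# contribution. Toy problem, not the dissertation's algorithm or test case.
import itertools
import math

import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(2)
n_vars, degree, n_train = 4, 2, 400

# Multi-indices of total degree <= 2 in 4 variables (constant term first).
idx = [a for a in itertools.product(range(degree + 1), repeat=n_vars)
       if sum(a) <= degree]

def basis(X):
    """Evaluate each multivariate probabilists'-Hermite basis function."""
    cols = []
    for a in idx:
        col = np.ones(len(X))
        for j, d in enumerate(a):
            c = np.zeros(d + 1)
            c[d] = 1.0                      # coefficient vector selecting He_d
            col *= hermeval(X[:, j], c)
        cols.append(col)
    return np.column_stack(cols)

X = rng.standard_normal((n_train, n_vars))
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2]      # hypothetical response
coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# Variance contribution of term a is coef_a^2 * prod_j(a_j!) by orthogonality
# of He_d under the standard Gaussian measure; the mean term is excluded.
norms = np.array([math.prod(math.factorial(d) for d in a) for a in idx])
contrib = coef**2 * norms
contrib[0] = 0.0
keep = contrib > 1e-3 * contrib.sum()
print(f"kept {int(keep.sum())} of {len(idx)} terms:",
      [idx[k] for k in np.flatnonzero(keep)])
```

    On this toy response, only the two truly active terms survive the threshold, which is the essence of the claimed time and memory savings: downstream computations touch a handful of terms instead of the full basis.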