15 research outputs found

    Multi-objective optimization of RF circuit blocks via surrogate models and NBI and SPEA2 methods

    Multi-objective optimization techniques can be broadly categorized into deterministic and evolutionary methods; the Normal Boundary Intersection (NBI) method and the Strength Pareto Evolutionary Algorithm (SPEA2) are examples of each, respectively. Both methods explore the trade-offs between conflicting performances. Surrogate models can replace expensive circuit simulations, enabling faster computation of circuit performances. As surrogate models of behavioral parameters and performance outcomes, we consider look-up tables with interpolation and neural network models.
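
    A minimal sketch of the surrogate idea (not the authors' code): a look-up table with interpolation stands in for an expensive circuit simulator, and a simple non-dominated filter exposes the trade-off between two performances. The performance functions, parameter names, and ranges below are hypothetical placeholders.

        # Surrogate-assisted exploration of a two-objective trade-off.
        # 'simulate' is a hypothetical stand-in for the expensive simulator.
        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        def simulate(w, l):
            gain = 10 * np.log10(1 + w / l)   # hypothetical performance 1 (maximize)
            power = w * l                     # hypothetical performance 2 (minimize)
            return gain, power

        # Build the look-up table once on a coarse grid of design parameters.
        ws = np.linspace(1.0, 10.0, 20)
        ls = np.linspace(0.1, 1.0, 20)
        G = np.array([[simulate(w, l)[0] for l in ls] for w in ws])
        P = np.array([[simulate(w, l)[1] for l in ls] for w in ws])
        gain_sur = RegularGridInterpolator((ws, ls), G)
        power_sur = RegularGridInterpolator((ws, ls), P)

        # Evaluate many candidates cheaply via the surrogate and keep the
        # non-dominated (Pareto) points; both objectives are minimized here.
        rng = np.random.default_rng(0)
        cand = rng.uniform([1.0, 0.1], [10.0, 1.0], size=(500, 2))
        f = np.column_stack([-gain_sur(cand), power_sur(cand)])
        pareto = [i for i in range(len(f))
                  if not any(np.all(f[j] <= f[i]) and np.any(f[j] < f[i])
                             for j in range(len(f)))]
        print(len(pareto), "non-dominated candidates")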

    Finding all convex tangrams


    Parameter estimation for a generalized Gaussian distribution

    This thesis does not contain an Introduction, Prologue, or Abstract.

    Determining the essentially different partitions of all Japanese convex tangrams

    In this report we consider the set of the 16 possible convex tangrams that can be composed with the 7 so-called “Sei Shonagon Chie no Ita” (or Japanese) tans, see [9]. The set of these Japanese tans is slightly different from the well-known set of 7 Chinese tans, with which 13 (out of those 16) convex tangrams can be formed. In [4], [5] the problem of determining all essentially different partitions of the 13 “Chinese” convex tangrams was investigated and solved. In this report we address the same problem for the “Japanese” convex tangrams. The approach to solving both problems is more or less analogous, but the “Japanese” problem is much harder than the “Chinese” one, since the number of “Japanese” solutions is much larger. In fact, only for a few “Japanese” tangram shapes can the solutions be found by a rigorous analysis supported by a large number of clarifying diagrams; the solutions for the remaining shapes have to be determined using a dedicated computer program. Both approaches are discussed here, and all essentially different solutions with the “Japanese” tans are presented. To the best of our knowledge, the presented results have not been published before.
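
    One small building block any such enumeration program needs is a convexity test for a candidate tangram outline. A minimal sketch (not the authors' program), assuming the outline's vertices are given in cyclic order:

        # Convexity test: a closed polygon is convex iff the cross products of
        # consecutive edges never change sign as we walk around the boundary.
        def is_convex(vertices):
            n = len(vertices)
            sign = 0
            for i in range(n):
                ax, ay = vertices[i]
                bx, by = vertices[(i + 1) % n]
                cx, cy = vertices[(i + 2) % n]
                cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
                if cross != 0:
                    if sign == 0:
                        sign = 1 if cross > 0 else -1
                    elif (cross > 0) != (sign > 0):
                        return False   # turn direction flipped: not convex
            return True

        # Example: the square, one of the convex tangram shapes.
        print(is_convex([(0, 0), (1, 0), (1, 1), (0, 1)]))  # True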

    Hybrid importance sampling Monte Carlo approach for yield estimation in circuit design

    The dimensions of transistors shrink with each new technology developed in the semiconductor industry. This extreme scaling introduces important statistical variations in their process parameters. A large digital integrated circuit consists of a very large number (millions or billions) of transistors, so the number of statistical parameters may become very large if mismatch variations are modeled. The parametric variations often cause circuit performance degradation, which can lead to a circuit failure that directly affects the yield of the producing company and its reputation for reliable products. Consequently, the failure probability of a circuit must be estimated accurately enough. In this paper, we consider the Importance Sampling Monte Carlo (ISMC) method as a reference probability estimator for estimating tail probabilities. We propose a Hybrid ISMC approach for dealing with circuits having a large number of input parameters, providing a fast estimation of the probability. In the hybrid approach, we replace the expensive-to-evaluate circuit model by a cheap surrogate for most of the simulations. The expensive circuit model is used only for generating the training sets (to fit the surrogates) and near the failure threshold, to reduce the bias introduced by the replacement.
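
    A minimal sketch of the hybrid idea under simplifying assumptions: one statistical parameter, a shifted-Gaussian importance distribution, a cheap polynomial surrogate for most samples, and the expensive model only in a band around the failure threshold. 'expensive_model', the threshold, the shift, and the band width are hypothetical placeholders.

        import numpy as np

        rng = np.random.default_rng(0)
        threshold = 5.0          # failure if performance > threshold (hypothetical)
        shift = 4.0              # importance-sampling mean shift toward the tail
        margin = 0.5             # band around threshold -> use the true model

        def expensive_model(x):  # placeholder for the circuit simulator
            return x + 0.1 * np.sin(5 * x)

        # Fit a cheap surrogate from a small training set of expensive runs.
        xt = np.linspace(-1, 7, 30)
        coef = np.polyfit(xt, expensive_model(xt), 3)
        surrogate = lambda x: np.polyval(coef, x)

        N = 100_000
        x = rng.normal(shift, 1.0, N)                      # shifted proposal density
        w = np.exp(-0.5 * x**2 + 0.5 * (x - shift)**2)     # ratio N(0,1)/N(shift,1)

        y = surrogate(x)                                   # cheap for most samples
        near = np.abs(y - threshold) < margin              # only these are expensive
        y[near] = expensive_model(x[near])                 # bias correction near threshold
        print(np.mean(w * (y > threshold)))                # hybrid ISMC estimate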

    Speeding up rare event simulations using Kriging models

    We consider Importance Sampling Monte Carlo (ISMC) as a reference probability estimator for estimating very small probabilities in the context of analog circuit design. We propose a surrogate-based hybrid ISMC method to accelerate the estimation of probabilities when the simulation budget is limited. The Kriging model is used as a surrogate of the simulator because it provides the uncertainties around its predictions, which are useful to obtain confidence bounds for the model predictions and, consequently, for the probability estimators.
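
    A minimal sketch of how the Kriging predictive standard deviation can decide which samples still need the real simulator, built on scikit-learn's GaussianProcessRegressor; 'simulator', the threshold, and the 3-sigma rule are hypothetical placeholders.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def simulator(x):                      # placeholder expensive model
            return np.sin(x).ravel() + x.ravel()

        rng = np.random.default_rng(1)
        X_train = rng.uniform(0, 6, (25, 1))   # small expensive training set
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(
            X_train, simulator(X_train))

        threshold = 4.0
        X = rng.normal(4.0, 1.0, (10_000, 1))  # importance samples near the tail
        mean, std = gp.predict(X, return_std=True)

        # Trust the Kriging prediction where the threshold lies far outside its
        # +/- 3 std confidence band; otherwise call the expensive simulator.
        uncertain = np.abs(mean - threshold) < 3 * std
        y = mean.copy()
        y[uncertain] = simulator(X[uncertain])
        print(np.mean(y > threshold), uncertain.mean())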

    Digital linear control theory applied to automatic stepsize control in electrical circuit simulation

    Adaptive stepsize control is used to control the local errors of the numerical solution. For optimization purposes, smoother stepsize controllers are wanted, such that the errors and stepsizes also behave smoothly. We consider approaches from digital linear control theory applied to multistep BDF methods.
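
    A minimal sketch of the kind of controller digital linear control theory suggests: a proportional-integral (PI) rule for the next stepsize, which reacts more smoothly than the classical deadbeat rule h_new = h * (tol/err)^(1/(k+1)). The gains below are common textbook choices, not values from the paper.

        # PI step-size controller: err and err_prev are the local error
        # estimates of the current and previous step, 'order' the method order.
        def pi_stepsize(h, err, err_prev, tol, order, kI=0.3, kP=0.4):
            k = order + 1
            # Integral part tracks the tolerance; proportional part damps
            # sudden changes in the error sequence, smoothing the stepsizes.
            return h * (tol / err) ** (kI / k) * (err_prev / err) ** (kP / k)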

    Robust periodic steady state analysis of autonomous oscillators based on generalized eigenvalues

    In this paper we present a new gauge technique for the Newton-Raphson method to solve the periodic steady-state (PSS) analysis of free-running oscillators in the time domain. To find the frequency, a new equation is added to the system of equations. Our equation combines a generalized eigenvector with the time derivative of the solution, and it is dynamically updated within each Newton-Raphson iteration. The method is applied to an analytic benchmark problem and to an LC oscillator. It provides better convergence properties than the popular phase-shift condition and does not need additional information about the solution. The method can also easily be implemented within the Harmonic Balance framework.
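
    The sketch below is not the paper's gauge: it only illustrates the structure of the augmented system for autonomous PSS problems (periodicity equations plus one extra scalar equation for the unknown period), using the classical phase-shift condition that the paper improves upon. The Van der Pol oscillator is a stand-in example.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import fsolve

        def f(t, x):                      # autonomous oscillator (Van der Pol)
            return [x[1], (1 - x[0] ** 2) * x[1] - x[0]]

        def residual(z):                  # unknowns z = (x1(0), x2(0), T)
            x0, T = z[:2], z[2]
            sol = solve_ivp(f, (0, T), x0, rtol=1e-10, atol=1e-12)
            periodicity = sol.y[:, -1] - x0   # x(T) - x(0) = 0  (2 equations)
            gauge = f(0, x0)[0]               # phase condition: dx1/dt(0) = 0
            return [*periodicity, gauge]

        # Start near the known limit cycle; the Van der Pol period is ~6.66.
        print(fsolve(residual, [2.0, 0.0, 6.5]))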

    Fast time-domain simulation for reliable fault detection

    Imperfections in manufacturing processes may cause unwanted connections (faults) that are added to the nominal, "golden", design of an electronic circuit. By fault simulation we simulate all situations: a huge number of new connections, each with many different values, up to the regime of large deviations for the newly added element. We also consider opens (broken connections). A strategy is developed to efficiently simulate the faulty solutions until their moment of detection. We fully exploit the hierarchical structure of the circuit. Fast fault simulation is achieved in which the golden solution and all faulty solutions are computed over the same time step.
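
    A minimal sketch of the lock-step idea under stated assumptions: the golden circuit and all still-undetected faulty variants advance over the same time step, and a variant is retired the moment its output deviates from the golden output by more than a detection tolerance. 'step', the fault values, and the tolerance are hypothetical placeholders.

        # One-step transient integrator (placeholder): a fault adds leakage.
        def step(state, dt, fault=None):
            leak = 0.0 if fault is None else fault
            return state + dt * (-(1.0 + leak) * state + 1.0)

        dt, t_end, tol = 0.01, 5.0, 0.05
        faults = {fid: 0.5 * fid for fid in range(1, 6)}  # hypothetical faults
        golden = 0.0
        faulty = {fid: 0.0 for fid in faults}
        detected = {}

        t = 0.0
        while t < t_end and faulty:
            golden = step(golden, dt)
            for fid in list(faulty):                 # same time step for all
                faulty[fid] = step(faulty[fid], dt, faults[fid])
                if abs(faulty[fid] - golden) > tol:
                    detected[fid] = t                # detected: stop simulating it
                    del faulty[fid]
            t += dt
        print(detected)                              # fault id -> detection time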

    Access time optimization of SRAM memory with statistical yield constraint

    A product may fail when design parameters are subject to large deviations. To guarantee yield, one wants to determine bounds on the parameter range such that the fail probability P_fail is small. For Static Random Access Memory (SRAM), characteristics like Static Noise Margin and Read Current, obtained from simulation output, are important in the failure criteria; they also have non-Gaussian distributions. With regular Monte Carlo (MC) sampling we can simply determine the fraction of failures when varying parameters. We are interested in sampling efficiently for a tiny fail probability P_fail = 10^-10. For a normal distribution this corresponds to parameter variations up to 6.4 times the standard deviation σ. Importance Sampling (IS) allows Monte Carlo sampling to be tuned to areas of particular interest, while correcting the counting of failure events with a correction factor. To estimate the number of samples needed we apply Large Deviations Theory, first to sharply estimate the number of samples needed for regular MC, and next for IS. With a suitably chosen distribution, IS can be orders of magnitude more efficient than regular MC in determining the fail probability P_fail. We apply this to determine the fail probabilities of the SRAM characteristics Static Noise Margin and Read Current. Next we accurately and efficiently minimize the access time of an SRAM block, consisting of SRAM cells and a (selecting) Sense Amplifier, while guaranteeing a statistical constraint on the yield target.
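
    A minimal sketch of the importance-sampling idea on a toy one-dimensional problem: estimating P(X > 6.4) for X ~ N(0,1), which is about 1e-10, the regime discussed above, by sampling from N(6.4, 1) and re-weighting with the likelihood ratio. Regular MC would need on the order of 1e12 samples to see any failures here.

        import numpy as np

        rng = np.random.default_rng(42)
        a = 6.4                        # failure boundary at 6.4 sigma
        N = 100_000
        x = rng.normal(a, 1.0, N)      # sample from the shifted proposal N(a, 1)
        # Likelihood ratio N(0,1)/N(a,1): the correction factor per sample.
        w = np.exp(-0.5 * x**2 + 0.5 * (x - a)**2)
        p_fail = np.mean(w * (x > a))
        print(p_fail)                  # close to 1 - Phi(6.4) ~ 7.8e-11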