Efficient Aging-aware Failure Probability Estimation Using Augmented Reliability and Subset Simulation
A circuit-aging simulation method that efficiently calculates the temporal change of rare circuit-failure probability is proposed. Conventional methods require long computational times because the failure probability must be calculated separately at each device age, whereas the proposed Monte Carlo based method requires only a single set of simulations. By applying the augmented reliability and subset simulation framework, the change of failure probability over the lifetime of the device can be evaluated through analysis of the Monte Carlo samples. Combined with a two-step sample generation technique, the proposed method reduces the computational time to about 1/6 of that of the conventional method while maintaining sufficient estimation accuracy.
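Subset simulation, the engine of the abstract above, estimates a rare failure probability as a product of larger conditional probabilities, one per intermediate threshold. The sketch below is a generic illustration under an assumed standard-normal variation model, not the authors' implementation; the limit-state function, level probability `p0`, and random-walk step width are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def subset_simulation(limit_state, dim, n=2000, p0=0.1, max_levels=10):
    """Estimate P(limit_state(x) <= 0) for a rare failure event by
    chaining conditional levels, each of probability about p0."""
    x = rng.standard_normal((n, dim))            # variation samples
    g = np.apply_along_axis(limit_state, 1, x)   # limit-state responses
    prob = 1.0
    for _ in range(max_levels):
        thresh = np.quantile(g, p0)              # intermediate threshold
        if thresh <= 0:                          # failure level reached
            return prob * np.mean(g <= 0)
        prob *= p0
        seeds = x[g <= thresh]                   # samples in the new level
        x_new, g_new = [], []
        # Modified Metropolis: random walks constrained to the level
        for s in seeds:
            cur, g_cur = s.copy(), limit_state(s)
            for _ in range(int(1 / p0)):
                cand = cur + 0.5 * rng.standard_normal(dim)
                # acceptance ratio for the standard-normal prior
                if rng.random() < min(1.0, np.exp(0.5 * (cur @ cur - cand @ cand))):
                    g_cand = limit_state(cand)
                    if g_cand <= thresh:         # keep only in-level moves
                        cur, g_cur = cand, g_cand
                x_new.append(cur.copy())
                g_new.append(g_cur)
        x, g = np.array(x_new), np.array(g_new)
    return prob * np.mean(g <= 0)
```

For a linear limit state g(x) = 3.5 - x0 the exact probability is 1 - Φ(3.5) ≈ 2.3e-4; the sketch should recover the right order of magnitude with a few thousand samples per level, where crude Monte Carlo would need millions.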
Worst-Case Analysis of Electrical and Electronic Equipment via Affine Arithmetic
In the design and fabrication of electronic equipment, there are many unknown parameters that significantly affect product performance. Some uncertainties are due to manufacturing process fluctuations, while others are due to the environment, such as operating temperature, voltage, and various ambient aging stressors. It is desirable to account for these uncertainties to ensure product performance, improve yield, and reduce design cost. Since direct electromagnetic compatibility measurements impact both cost and time-to-market, there has been a growing demand for tools enabling the simulation of electrical and electronic equipment with the effects of system uncertainties included.
In this framework, the device response is no longer regarded as deterministic but as a random process. It is traditionally analyzed using Monte Carlo or other sampling-based methods. The drawback of these methods is the large number of samples required for convergence, which is time-consuming in practical applications. As an alternative, inherently worst-case approaches such as interval analysis directly provide an estimate of the true bounds of the responses. However, such approaches may yield unnecessarily strict margins that are very unlikely to occur. A more recent technique, affine arithmetic, advances interval-based methods by handling correlated intervals. However, it still leads to over-conservatism because it cannot take probability information into account.
The objective of this thesis is to improve the accuracy of affine arithmetic and broaden its application to frequency-domain analysis. We first extend existing literature results to the efficient time-domain analysis of lumped circuits under uncertainty. We then extend basic affine arithmetic to the frequency-domain simulation of circuits. Classical tools for circuit analysis are used within a modified affine framework accounting for complex algebra and uncertainty-interval partitioning, enabling accurate and efficient computation of the worst-case bounds of the responses of both lumped and distributed circuits.
The performance of the proposed approach is investigated through extensive simulations in several case studies. The simulation results are compared with the Monte Carlo method in terms of both simulation time and accuracy.
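To illustrate why affine arithmetic tightens plain interval analysis, the sketch below implements real-valued affine forms with shared noise symbols. It is a generic textbook construction, not the complex-valued modified framework of the thesis, and every name in it is illustrative.

```python
import itertools

class Affine:
    """Affine form c + sum_i a_i * eps_i with eps_i in [-1, 1].
    Noise symbols shared between forms preserve first-order
    correlation that ordinary interval arithmetic throws away."""
    _symbols = itertools.count()

    def __init__(self, center, terms=None):
        self.center = float(center)
        self.terms = dict(terms or {})   # noise-symbol id -> coefficient

    @classmethod
    def from_interval(cls, lo, hi):
        # One fresh noise symbol carries the whole radius
        return cls((lo + hi) / 2, {next(cls._symbols): (hi - lo) / 2})

    def _coerce(self, v):
        return v if isinstance(v, Affine) else Affine(v)

    def __add__(self, other):
        o = self._coerce(other)
        terms = dict(self.terms)
        for k, v in o.terms.items():
            terms[k] = terms.get(k, 0.0) + v
        return Affine(self.center + o.center, terms)

    def __neg__(self):
        return Affine(-self.center, {k: -v for k, v in self.terms.items()})

    def __sub__(self, other):
        return self + (-self._coerce(other))

    def __mul__(self, other):
        o = self._coerce(other)
        terms = {k: o.center * v for k, v in self.terms.items()}
        for k, v in o.terms.items():
            terms[k] = terms.get(k, 0.0) + self.center * v
        # Nonlinear part is bounded conservatively by a fresh symbol
        residue = (sum(abs(v) for v in self.terms.values())
                   * sum(abs(v) for v in o.terms.values()))
        if residue:
            terms[next(self._symbols)] = residue
        return Affine(self.center * o.center, terms)

    def interval(self):
        rad = sum(abs(v) for v in self.terms.values())
        return (self.center - rad, self.center + rad)
```

With `x = Affine.from_interval(1, 2)`, the correlated difference `x - x` evaluates to exactly [0, 0], whereas naive interval subtraction gives [-1, 1]. Multiplication remains conservative, since its nonlinear residue is absorbed into a new noise symbol; this is the over-conservatism the thesis attacks with interval partitioning.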
Stochastic Yield Analysis of Rare Failure Events in High-Dimensional Variation Space
As the semiconductor industry keeps shrinking feature sizes to the nanometer scale, circuit reliability has become an area of growing concern due to the uncertainty introduced by process variations. For highly replicated standard cells, the failure event for each individual component must be extremely rare in order to maintain a sufficiently high yield. Existing yield analysis approaches work well in low dimensions, but are less effective either when there are a large number of circuit parameters or when the failure samples are distributed across multiple regions. In this thesis, four novel high-sigma analysis approaches are proposed. First, we propose an adaptive importance sampling (AIS) algorithm. AIS adjusts its sampling region over several iterations, whereas existing methods fix a static sampling distribution in advance. At each iteration, AIS generates samples from the current proposal distribution, then carefully assigns a weight to each sample based on its tilted occurrence probability between the failure region and the current proposal distribution. We also design two adaptive frameworks, based on resampling and on population Metropolis-Hastings (MH), to iteratively search for failure regions. Second, we develop an Adaptive Clustering and Sampling (ACS) method to estimate the failure rate of high-dimensional, multiple-failure-region circuit cases. The basic idea of the algorithm is to cluster failure samples and build a global sampling distribution at each iteration. Specifically, in the clustering step we propose a multi-cone clustering method, which partitions the parameter space and clusters the failure samples. A global sampling distribution is then constructed from a set of weighted Gaussian distributions. Next, we calculate an importance weight for each sample based on the discrepancy between the sampling distribution and the target distribution. The failure probability is updated at the end of each iteration.
This clustering and sampling procedure proceeds iteratively until all the failure regions are covered. Moreover, two meta-model based approaches are proposed for high-sigma analysis. Low-Rank Tensor Approximation (LRTA) formulates the meta-model in tensor space by representing a multi-way tensor as a finite sum of rank-one tensors. The polynomial degree of our LRTA model grows linearly with the circuit dimension, which makes it especially promising for high-dimensional circuit problems. We solve the LRTA model efficiently with a robust greedy algorithm and calibrate it iteratively with an adaptive sampling method. The meta-model based importance sampling (MIS) method uses a Gaussian process meta-model to construct a quasi-optimal importance sampling distribution, and performs Markov chain Monte Carlo (MCMC) simulation to generate new samples from the proposed distribution. By updating the global importance sampling estimator in an iterated framework, MIS achieves better efficiency and higher accuracy than traditional importance sampling methods. Experimental results show that the proposed approaches are three orders of magnitude faster than Monte Carlo, and more accurate than both academic solutions, such as importance sampling and classification-based methods, and industrial solutions, such as the mixture importance sampling used by Intel.
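The mixture-based importance sampling at the core of the ACS and MIS discussion can be sketched generically: draw from a Gaussian mixture centered on previously found failure points, then reweight each sample by the ratio of the target (process-variation) density to the mixture density. The variation model, failure indicator, and mixture centers below are hypothetical placeholders, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def mixture_is_failure_rate(indicator, centers, dim, n=4000, sigma=1.0):
    """Unbiased importance-sampling estimate of P(indicator(x)) under a
    standard-normal variation model, using an equal-weight Gaussian
    mixture proposal centered on known failure points."""
    centers = np.atleast_2d(np.asarray(centers, dtype=float))
    k = len(centers)
    # Sample from the mixture: pick a component, then perturb around it
    comp = rng.integers(k, size=n)
    x = centers[comp] + sigma * rng.standard_normal((n, dim))
    # Log-density of the standard-normal target ...
    log_p = -0.5 * np.sum(x**2, axis=1) - 0.5 * dim * np.log(2 * np.pi)
    # ... and of the mixture proposal
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    log_q = (np.log(np.mean(np.exp(-0.5 * d2 / sigma**2), axis=1))
             - 0.5 * dim * np.log(2 * np.pi * sigma**2))
    w = np.exp(log_p - log_q)                   # importance weights
    fail = np.array([bool(indicator(xi)) for xi in x])
    return float(np.mean(w * fail))
```

For a single failure region x0 > 3.5 in two dimensions (true probability 1 - Φ(3.5) ≈ 2.3e-4), a few thousand weighted samples centered near the region give a stable estimate; the adaptive methods in the thesis address the harder problem of discovering those centers in high dimensions.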
Surrogate based Optimization and Verification of Analog and Mixed Signal Circuits
Nonlinear Analog and Mixed Signal (AMS) circuits are complex and expensive to design and verify. Deeper technology scaling has made these designs susceptible to noise and process variations, a growing concern due to the degradation of circuit performance and the risk of design failures. In fact, due to process parameters, AMS circuits such as phase-locked loops may exhibit chaotic behavior that can be confused with noisy behavior. To design and verify circuits, current industrial flows rely heavily on simulation-based verification and knowledge-based optimization techniques. However, such techniques lack the mathematical rigor needed to keep up with growing design constraints, besides being computationally intractable. Given these barriers, new techniques are needed to ensure that circuits are robust and optimized despite process variations and possible chaotic behavior. In this thesis, we develop a methodology for the optimization and verification of AMS circuits that advances three frontiers in the variability-aware design flow. The first frontier is a robust circuit-sizing methodology in which a multi-level circuit optimization approach is proposed. The optimization is conducted in two phases. First, a global sizing phase, powered by regional sensitivity analysis, quickly scouts the feasible design space and reduces the optimization search. Second, a nominal sizing step based on space mapping between two AMS circuit models at different levels of abstraction is developed to break the re-design loop without performance penalties. The second frontier concerns a dynamics verification scheme for the circuit behavior (i.e., distinguishing chaotic from stochastic circuit behavior). It is based on a surrogate generation approach and a statistical proof-by-contradiction technique using a Gaussian kernel measure in the state-space domain. The last frontier focuses on quantitative verification approaches to predict parametric yield for both single and multiple circuit performance constraints. The single-performance approach is based on a combination of geometrical intertwined reachability analysis and a non-parametric statistical verification scheme. The multiple-performance approach involves process-parameter reduction, state-space based pattern matching, and multiple hypothesis testing procedures. The performance of the proposed methodology is demonstrated on several benchmark analog and mixed-signal circuits. The optimization approach greatly improves computational efficiency while locating comparable or better design points than other approaches. Moreover, our verification methods achieve many orders of magnitude of speedup compared to existing techniques.
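To give a flavor of non-parametric yield verification, the sketch below estimates parametric yield by sampling and attaches a normal-approximation confidence bound that makes no assumption about the shape of the performance distribution. The pass/fail specification and variation model are invented for illustration and are unrelated to the benchmark circuits or the reachability machinery of the thesis.

```python
import math
import random

random.seed(3)

def yield_with_bound(passes_spec, draw_sample, n=2000, z=2.576):
    """Sample-based yield estimate with a ~99% normal-approximation
    confidence interval on the binomial proportion."""
    passed = sum(1 for _ in range(n) if passes_spec(draw_sample()))
    p = passed / n
    half = z * math.sqrt(p * (1 - p) / n)       # binomial half-width
    return p, max(0.0, p - half), min(1.0, p + half)
```

With a hypothetical gain spec that about 84% of samples meet, the interval half-width at n = 2000 is roughly two percentage points; shrinking such intervals for very rare failures is exactly what motivates the rare-event methods discussed elsewhere on this page.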
Metamodel-assisted design optimization of piezoelectric flex transducer for maximal bio-kinetic energy conversion
Energy Harvesting Devices (EHDs) have been widely used to generate electrical power from the bio-kinetic energy of human body movement. A novel Piezoelectric Flex Transducer (PFT) based on the Cymbal device was proposed by Daniels et al. (2013) for the purpose of energy harvesting. To further improve the efficiency of the device, an optimal design of the PFT for maximum output power, subject to stress and displacement constraints, is carried out in this paper. The optimization uses Sequential Quadratic Programming (SQP) on metamodels generated with Genetic Programming from a 140-point optimal Latin hypercube design of experiments. Finally, the optimal design is validated by finite element simulations. The simulations show that the electrical power generated by this optimal PFT harvesting device can reach 6.5 mW when a safety design factor of 2.0 is applied.
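The metamodel-assisted loop in the abstract (space-filling DOE → metamodel fit → optimization on the cheap surrogate → validation) can be sketched generically as below. Here a least-squares quadratic stands in for the Genetic Programming metamodel, a dense grid search stands in for SQP, and the "simulator" is an invented analytic response, so all numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_simulation(x):
    """Stand-in for a costly finite-element run (hypothetical response
    with its optimum at x = (0.3, 0.7))."""
    return -(x[0] - 0.3) ** 2 - 2.0 * (x[1] - 0.7) ** 2 + 1.0

n, d = 40, 2
# 1. Space-filling design of experiments: a simple Latin hypercube
doe = np.stack([(rng.permutation(n) + rng.random(n)) / n for _ in range(d)],
               axis=1)
y = np.array([expensive_simulation(x) for x in doe])

# 2. Fit a quadratic metamodel to the DOE responses by least squares
def features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2],
                    axis=-1)

coef, *_ = np.linalg.lstsq(features(doe), y, rcond=None)

# 3. Optimize the cheap metamodel instead of the simulator
grid = np.stack(np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201)),
                axis=-1).reshape(-1, d)
best = grid[np.argmax(features(grid) @ coef)]

# 4. Validate the surrogate optimum with one more "simulation"
validated = expensive_simulation(best)
```

After 40 simulator calls, `best` should land essentially on (0.3, 0.7), since the quadratic basis captures this toy response exactly; with a real finite-element model the loop would iterate, adding infill points near the current optimum before re-fitting.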
34th Midwest Symposium on Circuits and Systems-Final Program
Organized by the Naval Postgraduate School Monterey California. Cosponsored by the IEEE Circuits and Systems Society.
Symposium Organizing Committee: General Chairman: Sherif Michael; Technical Program: Roberto Cristi; Publications: Michael Soderstrand; Special Sessions: Charles W. Therrien; Publicity: Jeffrey Burl; Finance: Ralph Hippenstiel; Local Arrangements: Barbara Cristi.